<title>Server Oriented System Tuning Info</title></head>
<body><h1>System Tuning Info for Linux Servers</h1>
This page is about optimizing and tuning
Linux based systems for server oriented tasks. Most
of the info presented here I've used myself, and
have found it to be beneficial. I've tried to avoid
the well-trodden ground (hdparm, turning off hostname
lookups in apache, etc) as that info is easy to find
elsewhere.<p>
Some cases where you might want to apply some of
these changes include benchmarking, high traffic web
sites, or any load spike (say, a web-transferred virus
is pegging your servers with bogus requests).</p><p>
<a href="#disk">Disk Tuning</a><br>
<a href="#fs">File system Tuning</a><br>
<a href="#scsi">SCSI Tuning</a><br>
<a href="#elevators">Disk I/O Elevators</a><br>
<a href="#network">Network Interface Tuning</a><br>
<a href="#tcp">TCP Tuning</a><br>
<a href="#fds">File limits</a><br>
<a href="#procs">Process limits</a><br>
<a href="#threads">Threads</a><br>
<a href="#nfs">NFS</a><br>
<a href="#apache">Apache and other web servers</a><br>
<a href="#samba">Samba</a><br>
<a href="#ldap">Openldap tuning</a><br>
<a href="#shm">Sys V shm</a><br>
<a href="#pty">Ptys and ttys</a><br>
<a href="#benchmarks">Benchmarks</a><br>
<a href="#monitoring">System Monitoring</a><br>
<a href="#utils">Utilities</a><br>
<a href="#links">System Tuning Links</a><br>
<a href="#music">Music</a><br>
<a href="#thanks">Thanks</a><br>
<a href="#todo">TODO</a><br>
<a href="#changes">Changes</a><br>
<a name="disk"><b><h4>File and Disk Tuning</h4></b></a>
Benchmark performance is often heavily based on
disk I/O performance. So getting as much disk I/O speed
as possible is critical.<p>
Depending on the array, the disks used, and the controller,
you may want to try software raid. It is tough to beat software
raid performance on a modern cpu with a fast disk controller.</p><p>
The easiest way to configure software raid is to do it
during the install. If you use the gui installer, there
are options in the disk partition screen to create
an "md", or multiple-device, linux talk for a software raid
partition. You will need to make partitions on each of the
drives of type "linux raid", and then after creating
all these partitions, create a new partition, say "/test",
and select md as its type. Then you can select all
the partitions that should be part of it, as well as the
raid type. For pure performance, RAID 0 is the way to go.</p><p>
Note that by default, I believe you are limited to 12 drives
in a MD device, so you may be limited to that. If the drives
are fast enough, that should be sufficient to get >100 MB/s
fairly consistently.</p><p>
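If you would rather configure the array by hand with the raidtools,
a minimal /etc/raidtab for a two drive RAID 0 array looks something
like this (the device names and chunk size are illustrative, not
recommendations):
</p><pre># example two-disk RAID 0 array
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              64
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
</pre><p>
followed by a `mkraid /dev/md0` to create the array, and a
`mke2fs /dev/md0` to put a filesystem on it.</p><p>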
One thing to keep in mind is that the position of a partition
on a hard drive does have performance implications. Partitions
that get stored at the very outer edge of a drive tend to
be significantly faster than those on the inside. A good
benchmarking trick is to use RAID across several drives,
but only use a very small partition on the outside of each disk.
This gives both consistent performance, and the best performance.
On most modern drives, or at least drives using ZCAV (Zoned Constant Angular Velocity),
this tends to be the sectors with the lowest addresses, i.e., the first
partitions. For a way to see the differences illustrated, see the
<a href="http://www.coker.com.au/bonnie++/zcav/">ZCAV</a> page.<p>
This is just a summary of software RAID configuration. More
detailed info can be found elsewhere, including the
<a href="http://www.linuxdoc.org/HOWTO/Software-RAID-HOWTO.html">Software-RAID-HOWTO</a>,
and the docs and man pages from the <b>raidtools</b> package.</p><p>
</p><h4><a name="fs"><b>File System Tuning</b></a></h4><p>
Some of the default kernel parameters for system performance
are geared more towards workstation performance than
file server/large disk io type of operations. The most
important of these is the "bdflush" value in
/proc/sys/vm/bdflush.<p>
These values are documented in detail in
/usr/src/linux/Documentation/sysctl/vm.txt.</p><p>
A good set of values for this type of server is:
</p><pre>echo 100 5000 640 2560 150 30000 5000 1884 2 > /proc/sys/vm/bdflush
</pre><p>
(you change these values by just echo'ing the new values to the file.
This takes effect immediately. However, it needs to be reinitialized
at each kernel boot. The simplest way to do this is to put this
command into the end of /etc/rc.d/rc.local)</p><p>
Also, for pure file server applications like web and samba servers, you
probably want to disable the "atime" option on the filesystem. This
disables updating the "atime" value for the file, which records
the last time a file was accessed. Since this info isn't very
useful in this situation, and causes extra disk hits, it's typically
disabled. To do this, just edit /etc/fstab and add "noatime" as
a mount option for the filesystem. For example:
</p><pre>/dev/rd/c0d0p3          /test          ext2    noatime        1 2
</pre><p>
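The option can also be applied to an already mounted filesystem
without a reboot by remounting it (using the example mount point
from above):
</p><pre>mount -o remount,noatime /test
</pre><p>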
With these file system options, a good raid setup, and the bdflush
values, filesystem performance should be sufficient.<p>
The <a href="#elevators">disk i/o elevators</a> are another kernel tunable that can be
tweaked for improved disk i/o in some cases. </p><p>
<a name="scsi"><b><h4>SCSI Tuning</h4></b></a><p>
SCSI tuning is highly dependent on the particular
scsi cards and drives in question. The most effective variable
when it comes to SCSI card performance is tagged command queueing.<p>
For the Adaptec aic7xxx series cards (2940's, 7890's, *160's, etc)
this can be enabled with a module option like:
</p><pre>	aic7xxx=tag_info:{{0,0,0,0,}}
</pre><p>
This enables the default tagged command queueing on the first
device, on the first 4 scsi ids. A line like:
</p><pre>	options aic7xxx aic7xxx=tag_info:{{24,24,24,24,24,24}}
</pre><p>
in /etc/modules.conf will set the TCQ depth to 24 on the
first six scsi ids.<p>
You probably want to check the driver documentation for your
particular scsi modules for more info.</p><p>
<h4><a name="elevators"><b>Disk I/O Elevators</b></a></h4><p>
On systems that are consistently doing a large amount of disk I/O,
tuning the disk I/O elevators may be useful. This is a 2.4 kernel
feature that allows some control over latency vs throughput by
changing the way disk io elevators operate. <p>
This works by changing how long the I/O scheduler will let
a request sit in the queue before it has to be handled. Since
the I/O scheduler can collapse some requests together, having
a lot of items in the queue means more can be coalesced, which
can increase throughput. </p><p>
Changing the max latency on items in the queue allows you to
trade disk i/o latency for throughput, and vice versa. </p><p>
The tool "/sbin/elvtune" (part of util-linux) allows you
to change these max latency values. Lower values mean
less latency, but also less throughput. The values can
be set for the read and write queues separately.</p><p>
To determine what the current settings are, just issue:
</p><pre>	/sbin/elvtune /dev/hda1
</pre><p>
substituting the appropriate device of course. Default
values are 8192 for read, and 16384 for writes.<p>
To set new values of 2000 for read and 4000 for writes, issue:
</p><pre>	/sbin/elvtune -r 2000 -w 4000 /dev/hda1
</pre><p>
Note that these values are for example purposes
only, and are not recommended tuning values. That
depends on the situation.<p>
The units of these values are basically
"sectors of writes before reads are allowed".
The kernel attempts to do all reads, then all writes, etc,
in an attempt to prevent disk io mode switching, which
can be slow. So this allows you to alter how long
it waits before switching.</p><p>
One way to get an idea of the effectiveness of these
changes is to monitor the output of `iostat -d -x DEVICE`.
The "avgrq-sz" and "avgqu-sz" values (average size
of request and average queue length, see the man page for
iostat) should be affected by these elevator changes.
Lowering the latency should cause the "avgrq-sz" to
go down, for example.</p><p>
See the <b>elvtune</b> man page for more info. Some
info from when this feature was introduced is also
at <a href="http://lwn.net/2000/1123/kernel.php3">Lwn.net</a>.</p><p>
This info contributed by Arjan van de Ven.</p><p>
<h4><a name="network"><b>Network Interface Tuning</b></a></h4><p>
Most benchmarks benefit heavily from making sure
the NIC's in use are well supported, with a well written driver.
Examples include eepro100, tulip's, newish 3com cards, and acenic
and syskonnect gigabit cards.<p>
Making sure the cards are running in full duplex mode
is also very often critical to benchmark performance. Depending
on the networking hardware used, some of the cards may not autosense
properly and may not run full duplex by default.</p><p>
Many cards include module options that can be used to force
the cards into full duplex mode. Some examples for common cards include:
</p><pre>alias eth0 eepro100
options eepro100 full_duplex=1
alias eth1 tulip
options tulip full_duplex=1
</pre><p>
Though full duplex gives the best overall performance,
I've seen some circumstances where setting the cards to half duplex
will actually increase throughput, particularly in cases where the
data flow is heavily one sided.<p>
If you think you're in a situation where that may help, I
would suggest trying it and benchmarking it.</p><p>
<h4><a name="tcp"><b>TCP tuning</b></a></h4><p>
For servers that are serving up huge numbers of
concurrent sessions, there are some tcp options that should
probably be enabled. With a large # of clients doing their best
to kill the server, it's probably not uncommon for the
server to have 20000 or more open sockets.<p>
In order to optimize TCP performance for this
situation, I would suggest tuning the following parameters:
</p><pre>echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
</pre><p>
Allows more local ports to be available.
Generally not an issue, but in a benchmarking scenario you
often need more ports available. A common example is clients
running `ab` or `http_load` or similar software.<p>
In the case of firewalls, or other servers doing NAT
or masquerading, you may not be able to use the full port
range this way, because of the need for high ports for use
in masquerading.</p><p>
Increasing the amount of memory associated with socket
buffers can often improve performance. Things like NFS
in particular, or apache setups with large buffers configured,
can benefit from this. </p>
<pre>echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/rmem_default
</pre><p>
This will increase the amount of memory available for
socket input queues. The "wmem_*" values do the same
for output queues.<p>
<b>Note:</b> With 2.4.x kernels, these values are
supposed to "autotune" fairly well, and some people
suggest just instead changing the values in:
</p><pre>/proc/sys/net/ipv4/tcp_rmem
/proc/sys/net/ipv4/tcp_wmem
</pre><p>
There are three values here, "min default max".
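For example, to raise just the max (the first two values here are
the usual 2.4 defaults, the max is only an illustration, not a
recommendation):
</p><pre>echo "4096 87380 4194304" > /proc/sys/net/ipv4/tcp_rmem
</pre><p>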
The following settings reduce the amount of work the TCP stack has to do, so they are often helpful in this situation:
</p><pre>echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
</pre><p>
<a name="fds"><b><h4>File Limits and the like</h4></b></a><p>
Open tcp sockets, and things like apache, are
prone to opening a large number of file descriptors. The
default number of available FD's is 4096, but this may
need to be upped for this scenario.<p>
The theoretical limit is roughly a million file descriptors,
though I've never been able to get close to that many open.</p><p>
I'd suggest doubling the default, and trying the test. If
you still run out of file descriptors, double it again.<p>
For example:
</p><pre>echo 128000 > /proc/sys/fs/inode-max
echo 64000 > /proc/sys/fs/file-max
</pre><p>
and as root:
</p><pre>	ulimit -n 64000
</pre><p>
Note: On 2.4 kernels, the "inode-max" entry is no longer needed.<p>
You probably want to add these to /etc/rc.d/rc.local so
they get set on each boot.</p><p>
There are more than a few ways to make these
changes "sticky". In <a href="http://www.redhat.com/">Red Hat Linux</a>,
you can use /etc/sysctl.conf and /etc/security/limits.conf to
set and save these values.</p><p>
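As a sketch of what that looks like (using the same values as the
example above):
</p><pre># /etc/sysctl.conf
fs.file-max = 64000

# /etc/security/limits.conf
*       soft    nofile  64000
*       hard    nofile  64000
</pre><p>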
If you get errors of the variety "Unable to open file descriptor"
you definitely need to up these values.<p>
You can examine the contents of /proc/sys/fs/file-nr to
determine the number of allocated file handles, the number of
file handles currently being used, and the max number of
file handles.</p><p>
<h4><a name="procs"><b>Process Limits</b></a></h4><p>
For heavily used web servers, or machines that spawn off lots
and lots of processes, you probably want to up the limit of processes
allowed per user.<p>
Also, the 2.2 kernel itself has a max process limit. The default
value for this is 2560, but a kernel recompile can take this
as high as 4000. This is a limitation in the 2.2 kernel, and has been
removed from 2.3/2.4.</p><p>
If you're running into the limit of how many tasks the kernel can handle by default,
you may have to rebuild the kernel after editing:
</p><pre>	/usr/src/linux/include/linux/tasks.h
</pre><p>
and change:
</p><pre>#define NR_TASKS	2560	/* On x86 Max 4092, or 4090 w/APM configured. */
</pre><p>
to:
</p><pre>#define NR_TASKS	4000	/* On x86 Max 4092, or 4090 w/APM configured. */
</pre><p>
and:
</p><pre>#define MAX_TASKS_PER_USER (NR_TASKS/2)
</pre><p>
to:
</p><pre>#define MAX_TASKS_PER_USER (NR_TASKS)
</pre><p>
Then recompile the kernel.<p>
Also run:
</p><pre>ulimit -u 4000
</pre><p>
<b>Note:</b> This process limit is gone
in the 2.4 kernel series.</p><p>
<h4><a name="threads"><b>Threads</b></a></h4><p>
Limitations on threads are tightly tied
to both file descriptor limits, and process limits. </p><p>
Under Linux, threads are counted as processes, so
any limits to the number of processes also applies
to threads. In a heavily threaded app like a
threaded TCP engine, or a java server, you can
quickly run out of threads.</p><p>
For starters, you want to get an idea how many threads
you can open. The `thread-limit` util mentioned in
the <a href="#utils">Tuning Utilities</a> section
is probably as good as any. </p><p>
The first step to increasing the possible number of threads
is to make sure you have boosted any process limits as
mentioned before. </p><p>
There are a few things that can limit the number of threads,
including process limits, memory limits, mutex/semaphore/shm/ipc
limits, and compiled in thread limits.
For most cases, the process limit is the first one to run into,
then the compiled in thread limits, then the memory limits. </p><p>
To increase the limits, you have to recompile glibc. Oh fun!
And the patch is essentially two lines! Woohoo!</p><p>
</p><pre>--- ./linuxthreads/sysdeps/unix/sysv/linux/bits/local_lim.h.akl	Mon Sep  4
+++ ./linuxthreads/sysdeps/unix/sysv/linux/bits/local_lim.h	Mon Sep  4
 /* The number of threads per process.  */
 #define _POSIX_THREAD_THREADS_MAX	64
 /* This is the value this implementation supports.  */
-#define PTHREAD_THREADS_MAX	1024
+#define PTHREAD_THREADS_MAX	8192

 /* Maximum amount by which a process can decrease its asynchronous I/O
--- ./linuxthreads/internals.h.akl	Mon Sep  4 19:36:58 2000
+++ ./linuxthreads/internals.h	Mon Sep  4 19:37:23 2000
    THREAD_SELF implementation is used, this must be a power of two and
    a multiple of PAGE_SIZE.  */
-#define STACK_SIZE  (2 * 1024 * 1024)
+#define STACK_SIZE  (64 * PAGE_SIZE)

 /* The initial size of the thread stack.  Must be a multiple of PAGE_SIZE. */
</pre><p>
Now just patch glibc, rebuild, and install it. ;-> If you have
a package based system, I seriously suggest making a new package and using
it instead.<p>
Two references on how to do this are <a href="http://www.jlinux.org/server.html">Jlinux.org</a> and <a href="http://www.volano.com/linuxnotes.html">Volano</a>. Both describe how to increase the number of threads so Java apps can use them.</p><p>
<a name="nfs"><b><h4>NFS</h4></b></a><p>
A good resource on NFS tuning on linux is the <a href="http://nfs.sourceforge.net/nfs-howto/performance.html">linux NFS
HOW-TO</a>. Most of this info is gleaned from there.</p><p>
But the basic tuning steps include:</p><p>
Try using NFSv3 if you are currently using NFSv2. There can be very significant
performance increases with this change. </p><p>
Increasing the read write block size. This is done with the <b>rsize</b> and <b>wsize</b>
mount options. They need to be the mount options used by the NFS clients. Values of 4096 and
8192 reportedly increase performance a lot. But see the notes in the HOWTO about experimenting
and measuring the performance implications. The limits on these are 8192 for NFSv2 and
32768 for NFSv3.</p><p>
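For example, a client side fstab entry using the bigger block sizes
might look like this (the server name and mount points are
placeholders):
</p><pre>fileserver:/exports/data  /mnt/data  nfs  rsize=8192,wsize=8192  0 0
</pre><p>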
Another approach is to increase the number of nfsd threads running. This is normally
controlled by the nfsd init script. On Red Hat Linux machines, the value "RPCNFSDCOUNT"
in the nfs init script controls this value. The best way to determine if you need
this is to experiment. The HOWTO mentions a way to determine thread usage, but
that doesn't seem supported in all kernels.</p><p>
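On a Red Hat box that is usually a one line change in the init
script; 16 here is just a starting point to experiment from:
</p><pre>RPCNFSDCOUNT=16
</pre><p>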
Another good tool for getting some handle on NFS server performance is
`nfsstat`. This util reads the info in /proc/net/rpc/nfs[d] and displays
it in a somewhat readable format. Some info intended for tuning Solaris,
but useful for its description of the
<a href="http://www.princeton.edu/%7Eunix/Solaris/troubleshoot/nfsstat.html">
nfsstat format</a>.</p><p>
See also the <a href="#tcp">tcp tuning info</a>.</p><p>
<a name="apache"><b><h4>Apache config</h4></b></a><p>
Make sure you start a ton of initial daemons
if you want good benchmark scores.<p>
Something like:
</p><pre>#######
# example benchmarking values; the spare/start numbers here are
# illustrative, tune to taste
MinSpareServers 20
MaxSpareServers 80
StartServers 64
# this can be higher if apache is recompiled
MaxClients 256
MaxRequestsPerChild 10000
</pre><p>
<b>Note:</b> Starting a massive amount of httpd
processes is really a benchmark hack. In most real world
cases, setting a high number for max servers, and a sane
spare server setting will be more than adequate. It's just
the instant on load that benchmarks typically generate that
the StartServers helps with. <p>
The MaxRequestsPerChild should be bumped up if you
are sure that your httpd processes do not leak memory. Setting
this value to 0 will cause the processes to never reach a limit.</p><p>
One of the best resources on tuning these values, especially
for app servers, is the <a href="http://perl.apache.org/guide/performance.html">mod_perl performance
tuning</a> documentation. </p><p>
<b>Bumping the number of available httpd processes</b><p>
Apache sets a maximum number of possible processes at compile
time. It is set to 256 by default, but in this kind of scenario,
can often be exceeded.<p>
To change this, you will need to change the hardcoded limit
in the apache source code, and recompile it. An example of the change:
</p><pre>--- apache_1.3.6/src/include/httpd.h.prezab	Fri Aug  6 20:11:14 1999
+++ apache_1.3.6/src/include/httpd.h	Fri Aug  6 20:12:50 1999
 #ifndef HARD_SERVER_LIMIT
-#define HARD_SERVER_LIMIT 256
+#define HARD_SERVER_LIMIT 4000
</pre><p>
To make usage of this many apaches however, you will also
need to boost the number of processes supported, at least for
2.2 kernels. See the <a href="#procs">section on kernel process limits</a>
for info on increasing this.</p><p>
The biggest scalability problem with apache, 1.3.x versions at
least, is its model of using one process per connection. In cases
where there are large amounts of concurrent connections, this can require
a large amount of resources. These resources can include RAM, scheduler
slots, ability to grab locks, database connections, and file descriptors.<p>
In cases where each connection takes a long time to complete, this
is only compounded. Connections can be slow to complete because of
large amounts of cpu or i/o usage in dynamic apps, large files
being transferred, or just talking to clients on slow links.</p><p>
There are several strategies to mitigate this. The basic idea
being to free up heavyweight apache processes from having to
handle slow to complete connections. </p><p>
<b>Static Content Servers</b><p>
If the servers are serving lots of static files (images, videos,
pdf's, etc), a common approach is to serve these files off
a dedicated server. This could be a very light apache setup,
or in many cases, something like thttpd, boa, khttpd, or TUX.
In some cases it is possible to run the static server on the
same server, addressed via a different hostname. <p>
For purely static content, some of the other smaller more
lightweight web servers can offer very good performance.
They aren't nearly as powerful or as flexible as apache,
but for very specific performance crucial tasks, they
can be a big win.<p>
Boa: <a href="http://www.boa.org/">http://www.boa.org/</a><br>
thttpd: <a href="http://www.acme.com/software/thttpd/">http://www.acme.com/software/thttpd/</a><br>
mathopd: <a href="http://mathop.diva.nl/">http://mathop.diva.nl</a><br>
</p><p>
If you need even more ExtremeWebServerPerformance, you
probably want to take a look at TUX, written by <a href="http://people.redhat.com/mingo">Ingo Molnar</a>. This is
the current world record holder for <a href="http://www.spec.org/osg/web99/results/res2000q3/web99-20000710-00057.html">SpecWeb99</a>.
It probably deserves to be called the world's fastest web server.</p><p>
For servers that are serving dynamic content, or ssl content,
a better approach is to employ a reverse-proxy. Typically,
this would be done with either apache's mod_proxy, or Squid. There
can be several advantages from this type of configuration, including
content caching, load balancing, and the prospect of moving slow
connections to lighter weight servers. <p>
The easiest approach is probably to use mod_proxy and the
"ProxyPass" directive to pass content to another server. mod_proxy
supports a degree of caching that can offer a significant
performance boost. But another advantage is that since the proxy
server and the web server are likely to have a very fast interconnect,
the web server can quickly serve up large content, freeing up
an apache process, while the proxy slowly feeds out the content
to clients. This can be further enhanced by increasing the amount
of socket buffer memory that's available to the kernel. See the
<a href="http://people.redhat.com/alikins/system_tuning.html#tcp">section
on tcp tuning</a> for info on this.</p><p>
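A minimal sketch of such a setup in the proxy's httpd.conf (the
backend hostname and paths are placeholders, and the cache directives
assume you want mod_proxy's disk cache enabled):
</p><pre>ProxyPass        /app/ http://backend.example.com:8080/
ProxyPassReverse /app/ http://backend.example.com:8080/
# enable mod_proxy's disk cache
CacheRoot "/var/cache/httpd"
CacheSize 102400
</pre><p>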
<ul><li><a href="http://perl.apache.org/tuning/">Info on using mod_proxy
in conjunction with mod_perl</a><br>
</li><li><a href="http://www.webtechniques.com/archives/1998/05/engelschall/">
webtechniques article on using mod_proxy</a><br>
</li><li><a href="http://httpd.apache.org/docs/mod/mod_proxy.html">mod_proxy home
page</a><br>
</li><li><a href="http://www.squid-cache.org/">Squid</a><br>
</li><li><a href="http://www.zope.org/Members/rbeer/caching">Using mod_proxy with Zope</a><br>
</li></ul><p>
One of the most frustrating things for a user of
a website is to get "connection refused" error messages.
With apache, the common cause of this is for the number of
concurrent connections to exceed the number of
httpd processes that are available to handle connections.<p>
The apache ListenBacklog parameter lets you
specify what backlog parameter is set to listen(). By
default on linux, this can be as high as 128.</p><p>
Increasing this allows a limited number of
httpd's to handle a burst of attempted connections.
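For example (the value is illustrative):
</p><pre>ListenBacklog 1024
</pre><p>
Note that the kernel caps the effective backlog at the value of
/proc/sys/net/core/somaxconn (128 by default), so that may need
raising as well.</p><p>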
There are some experimental patches from SGI
that accelerate apache. More info at:
<a href="http://oss.sgi.com/projects/apache/">http://oss.sgi.com/projects/apache/</a><p>
I haven't really had a chance to test the SGI patches yet,
but I've been told they are pretty effective.</p><p>
<h4><a name="samba"><b>Samba Tuning</b></a></h4><p>
Depending on the type of tests, there are a number of tweaks you
can do to samba to improve its performance over
the default. The default is best for general purpose file sharing,
but for extreme uses, there are a couple of tweaks.<p>
The first one is to rebuild it with mmap support. In cases where
you are serving up a large amount of small files, this
seems to be particularly useful. You
just need to add a "--with-mmap" to the configure line.
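So a from-source build looks something like this (plus whatever
other configure options you normally use):
</p><pre>./configure --with-mmap
make
make install
</pre><p>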
You also want to make sure the following options are
enabled in the /etc/smb.conf file:
</p><pre>	read raw = no
	read prediction = true
	level2 oplocks = true
</pre><p>
One of the better resources for tuning samba is the "Using
Samba" book from O'Reilly. The <a href="http://k12linux.mesd.k12.or.us/using_samba/appb_02.html">chapter
on performance tuning</a> is available online.</p><p>
<h4><a name="ldap"><b>Openldap tuning</b></a></h4><p>
The most important
tuning aspect for OpenLDAP is deciding what attributes
you want to build indexes on.<p>
I use the values:
</p><pre>cachesize 10000
</pre><p>
If you add the indexing parameters to /etc/openldap/slapd.conf
before entering the info into the database, they will all get indexed
and performance will increase.
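The exact attribute list depends on your data and the queries your
apps make; index directives in slapd.conf look like this (the
attributes here are just common examples):
</p><pre>index cn,uid,uidnumber,gidnumber,memberuid  eq
index mail,surname,givenname               eq,subinitial
</pre><p>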
<a name="shm"><b><h4>SysV shm</h4></b></a>
Some applications, databases in particular, sometimes
need large amounts of SHM segments and semaphores. The
default limit for the number of shm segments is 128 for 2.2.<p>
This limit is set in a couple of places in the kernel,
and requires a modification of the kernel source and a recompile
to increase them.</p><p>
A sample diff to bump them up:
</p><pre>--- linux/include/linux/sem.h.save	Wed Apr 12 20:28:37 2000
+++ linux/include/linux/sem.h	Wed Apr 12 20:29:03 2000
-#define SEMMNI  128             /* ?  max # of semaphore identifiers */
+#define SEMMNI  512             /* ?  max # of semaphore identifiers */
 #define SEMMSL  250             /* <= 512 max num of semaphores per id */
 #define SEMMNS  (SEMMNI*SEMMSL) /* ? max # of semaphores in system */
 #define SEMOPM  32              /* ~ 100 max num of ops per semop call */
--- linux/include/asm-i386/shmparam.h.save	Wed Apr 12 20:18:34 2000
+++ linux/include/asm-i386/shmparam.h	Wed Apr 12 20:28:11 2000
  * Keep _SHM_ID_BITS as low as possible since SHMMNI depends on it and
  * there is a static array of size SHMMNI.
  */
-#define _SHM_ID_BITS	7
+#define _SHM_ID_BITS	10
 #define SHM_ID_MASK	((1<<_SHM_ID_BITS)-1)
 #define SHM_IDX_SHIFT	(_SHM_ID_BITS)
</pre><p>
Theoretically, the _SHM_ID_BITS can go as high as 11. The rule
is that _SHM_ID_BITS + _SHM_IDX_BITS must be <= 24 on x86.<p>
In addition to the number of shared memory segments, you can
control the maximum amount of memory allocated to shm at run
time via the /proc interface. /proc/sys/kernel/shmmax indicates
the current limit. Echo a new value to it to increase it:
</p><pre>	echo "67108864" > /proc/sys/kernel/shmmax
</pre><p>
to double the default value.<p>
A good resource on this is <a href="http://ps-ax.com/shared-mem.html">
Tuning The Linux Kernel's Memory</a>. </p><p>
The best way to see what the current values are, is to
check the files in /proc/sys/kernel/.
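One quick way to dump them all at once:
</p><pre>grep . /proc/sys/kernel/shm*
</pre><p>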
<a name="pty"><b><h4>Ptys and ttys</h4></b></a>
The number of ptys and ttys on a box can sometimes be
a limiting factor for things like login servers and
database servers. <p>
On Red Hat Linux 7.x, the default limit on ptys
is set to 2048 for i686 and athlon kernels. Standard
i386 and similar kernels default to 256 ptys.</p><p>
The config directive CONFIG_UNIX98_PTY_COUNT defaults
to 256, but can be set as high as 2048. For 2048
ptys to be supported, the value of UNIX98_PTY_MAJOR_COUNT
needs to be set to 8 in include/linux/major.h</p><p>
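In the kernel .config, that amounts to (assuming you want the full
2048):
</p><pre>CONFIG_UNIX98_PTYS=y
CONFIG_UNIX98_PTY_COUNT=2048
</pre><p>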
With the current device number scheme and allocations,
the maximum number of ptys is 2048. </p><p>
<h4><a name="benchmarks"><b>Benchmarks</b></a></h4>
Lies, damn lies, and statistics.<p>
But aside from that, a good set of benchmarking utilities
are often very helpful in doing system tuning work. It
is impossible to duplicate "real world" situations,
but that isn't really the goal of a good benchmark. A
good benchmark typically tries to measure the performance
of one particular thing very accurately. If you understand
what the benchmarks are doing, they can be very useful tools.</p><p>
Some of the common and useful benchmarks include:</p><p>
<a href="http://www.textuality.com/bonnie/">Bonnie</a>
has been around forever, and the numbers it produces
are meaningful to many people. If nothing else, it's a good
tool for producing info to share with others.
This is a pretty common utility for testing drive performance.
Its only drawback is it sometimes requires the use of huge
datasets on large memory machines to get useful results, but
I suppose that goes with the territory.<p>
Check <a href="http://people.redhat.com/dledford/benchmark.html">Doug
Ledford's list of benchmarks</a> for more info on Bonnie.
There is also a somewhat newer version of Bonnie called
<a href="http://www.coker.com.au/bonnie++/">Bonnie++</a> that
fixes a few bugs, and includes a couple of extra tests.</p><p>
My personal favorite disk io benchmarking utility is `dbench`. It
is designed to simulate the disk io load of a system when running
the NetBench benchmark suite. It seems to do an excellent job
at making all the drive lights blink like mad. Always a good sign.<p>
Dbench is available at <a href="ftp://ftp.samba.org/pub/tridge/dbench/">the Samba ftp site</a>.</p><p>
`http_load` is a nice simple http benchmarking app that does integrity
checking, parallel requests, and simple statistics. It generates
load based off a test file of urls to hit, so it is flexible.<p>
http_load is available from <a href="http://www.acme.com/software/http_load/">ACME Labs</a>.</p><p>
`dkftpbench` is a (the?) ftp benchmarking utility. Designed to simulate real
world ftp usage (large number of clients, throttles connections
to modem speeds, etc). Handy. Also includes the useful
dklimits utility.<p>
dkftpbench is available from <a href="http://www.kegel.com/dkftpbench/">Dan Kegel's page</a>.</p><p>
`tiobench` is a multithreaded disk io benchmarking utility. Seems to do
a good job at pounding on the disks. Comes with some
useful scripts for generating reports and graphs. <p>
The <a href="http://sourceforge.net/projects/tiobench">tiobench
project page</a> has more info.</p><p>
<p> dt does a lot: disk io, process creation, async io, etc. </p><p>
dt is available at <a href="http://www.bit-net.com/%7Ermiller/dt.html">the dt page</a>.</p><p>
A tcp/udp benchmarking app. Useful for getting an idea
of max network bandwidth of a device. Tends to be more
accurate than trying to guesstimate with ftp or other
protocols.<p>
Netperf is a benchmark that can be used to measure the performance of
many different types of networking. It provides tests for both unidirectional
throughput, and end-to-end latency. The environments currently measurable
by netperf include: TCP and UDP via BSD Sockets, DLPI, Unix Domain Sockets,
and more.<p>
Info: <a href="http://www.netperf.org/netperf/NetperfPage.html">http://www.netperf.org/netperf/NetperfPage.html</a>
<br>Download: <a href="ftp://ftp.sgi.com/sgi/src/netperf/">ftp://ftp.sgi.com/sgi/src/netperf/</a><p>
Info provided by Bill Hilf.</p><p>
httperf is a popular web server benchmark tool for measuring web
server performance. It provides a flexible facility for generating various
workloads and for measuring server performance. The focus of httperf is not
on implementing one particular benchmark but on providing a robust,
high-performance tool that facilitates the construction of both micro- and
macro-level benchmarks. The three distinguishing characteristics of httperf
are its robustness, which includes the ability to generate and sustain
server overload, support for the HTTP/1.1 protocol, and its extensibility
to new workload generators and performance measurements.<p>
Info: <a href="http://www.hpl.hp.com/personal/David_Mosberger/httperf.html">http://www.hpl.hp.com/personal/David_Mosberger/httperf.html</a>
<br>Download: <a href="ftp://ftp.hpl.hp.com/pub/httperf/">ftp://ftp.hpl.hp.com/pub/httperf/</a><p>
Info provided by Bill Hilf.</p><p>
Autobench is a simple Perl script for automating the process of
benchmarking a web server (or for conducting a comparative test of two
different web servers). The script is a wrapper around httperf. Autobench
runs httperf a number of times against each host, increasing the number of
requested connections per second on each iteration, and extracts the
significant data from the httperf output, delivering a CSV or TSV format
file which can be imported directly into a spreadsheet for
analysis.<p>
Info: <a href="http://www.xenoclast.org/autobench/">http://www.xenoclast.org/autobench/</a>
<br>Download: <a href="http://www.xenoclast.org/autobench/downloads/">http://www.xenoclast.org/autobench/downloads</a><p>
Info provided by Bill Hilf.</p><p>
General benchmark sites:<p>
<a href="http://people.redhat.com/dledford/benchmark.html">Doug
Ledford's page</a><p>
<a href="http://devlinux.com/projects/reiserfs/">ReiserFS benchmark
page</a><p>
<h4><a name="monitoring"><b>System Monitoring</b></a></h4>
Standard, and not so standard, system monitoring tools
that can be useful when trying to tune a system.<p>
<b>vmstat</b><p>
This util is part of the procps package, and
can provide lots of useful info when diagnosing
performance problems.<p>
Here's a sample vmstat output on a lightly used desktop:
</p><pre>   procs                  memory     swap         io     system        cpu
 r  b  w  swpd   free   buff  cache si  so   bi   bo   in    cs  us  sy  id
 1  0  0  5416   2200   1856  34612  0   1    2    1  140   194   2   1  97
</pre><p>
And here's some sample output on a heavily used server:
</p><pre>   procs                  memory     swap         io     system        cpu
 r  b  w  swpd   free   buff  cache si  so   bi   bo   in    cs  us  sy  id
16  0  0  2360 264400  96672   9400  0   0    0    1   53    24   3   1  96
24  0  0  2360 257284  96672   9400  0   0    0    6 3063 17713  64  36   0
15  0  0  2360 250024  96672   9400  0   0    0    3 3039 16811  66  34   0
</pre><p>
The interesting number here is the first one, the number of
processes on the run queue. This value shows how many processes are ready
to be executed, but cannot run at the moment because other processes need to
finish. For lightly loaded systems, this is almost never above 1-3, and
numbers consistently higher than 10 indicate the machine is getting pounded.<p>
Other interesting values include the "system" numbers for in and cs. The
in value is the number of interrupts per second a system is getting. A
system doing a lot of network or disk I/O will have high values here, as
interrupts are generated every time something is read or written to the disk
or network.</p><p>
The cs value is the number of context switches per second. A context switch
is when the kernel has to take the executable code for one program out
of memory, and switch in another. It's actually _way_ more complicated than
that, but that's the basic idea. Lots of context switches are bad, since it
takes a fairly large number of cycles to perform a context switch,
so if you are doing lots of them, you are spending all your time changing
jobs and not actually doing any work. I think we can all agree that's
a bad thing.</p><p>
Since this document is primarily concerned with network
servers, the `netstat` command can often be very useful. It can
show status of all incoming and outgoing sockets, which can
give very handy info about the status of a network server.<p>
One of the more useful options is:
</p><pre>	netstat -pa
</pre><p>
The `-p` option tells it to try to determine what program has the
socket open, which is often very useful info. For example, someone nmap's
their system and wants to know what is using port 666. Running
netstat -pa will show you it's "satand" running on that tcp port.<p>
One of the most twisted, but useful invocations is:
</p><pre>netstat -a -n|grep -E "^(tcp)"| cut -c 68-|sort|uniq -c|sort -n
</pre><p>
This will show you a sorted list of how many sockets
are in each connection state. For example:</p><p>
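Output looks something like this (the counts here are made up for
illustration):
</p><pre>      9  LISTEN
     21  ESTABLISHED
</pre><p>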
Okay, so everyone knows about ps. But I'll just highlight
one of my favorite options:
</p><pre>ps -eo pid,%cpu,vsz,args,wchan
</pre><p>
Shows every process, their pid, % of cpu, memory size, name,
and the kernel function ("wchan") they are currently sleeping in. Nifty.</p><p>
<h4><a name="utils"><b>Utilities</b></a></h4>
Some simple utilities that come in handy when doing
performance tuning.<p>
<b>dklimits</b><p>
A simple util to check the actual number of file descriptors
available, ephemeral ports available, and poll()-able sockets.
Handy. Be warned that it can take a while to run if there
are a large number of fd's available, as it will try to
open that many files, and then unlink them.<p>
It is part of the <a href="http://www.kegel.com/dkftpbench/">dkftpbench</a> package.</p><p>
<b>fd-limit</b><p>
A tiny util for determining the number of file descriptors
available to a process.<p>
<a href="http://people.redhat.com/alikins/tuning_utils/fd-limit.c">fd-limit.c</a></p><p>
<b>thread-limit</b><p>
A util for determining the number of pthreads a system can
use. This and fd-count are both from the system tuning
page for <a href="http://www.volano.com/linuxnotes.html">Volano
chat</a>, a multithreaded java based chat server.<p>
<a href="http://people.redhat.com/alikins/tuning_utils/thread-limit.c">thread-limit.c</a></p><p>
<h4><a name="links"><b>System Tuning Links</b></a></h4>
<a href="http://www.kegel.com/">http://www.kegel.com</a><br>
Check out the "c10k problem" page in particular, but the entire
site has _lots_ of useful tuning info.<p>
<a href="http://linuxperf.nl.linux.org/">http://linuxperf.nl.linux.org/</a><br>
Site organized by Rik van Riel and a few other folks. Probably
the best linux specific system tuning page.<p>
<a href="http://www.citi.umich.edu/projects/citi-netscape/">http://www.citi.umich.edu/projects/citi-netscape/</a><br>
Linux Scalability Project at UMich.<p>
<a href="http://nfs.sourceforge.net/nfs-howto/performance.html">NFS Performance Tuning</a><br>
Info on tuning linux kernel NFS in particular, and linux network and disk io in general.<p>
<a href="http://home.att.net/%7Ejageorge/performance.html">http://home.att.net/~jageorge/performance.html</a><br>
Linux Performance Checklist. Some useful content.<p>
<a href="http://www.linux.com/tuneup/">http://www.linux.com/enhance/tuneup/</a><br>
Miscellaneous performance tuning tips at linux.com.<p>
<a href="http://www.psc.edu/networking/perf_tune.html#Linux">http://www.psc.edu/networking/perf_tune.html#Linux</a><br>
Summary of tcp tuning info.<p>
</p><h4><a name="music"><b>Music</b></a></h4><p>
Careful analysis and benchmarking has shown
that servers will respond positively to being played
the appropriate music. For the common case, this
can be about anything, but for high performance
servers, a more careful choice needs to be made.<p>
The industry standard for pumping
up a server has always been "Crazy Train", by Ozzy
Osbourne. While this has been proven over and over
to offer increased performance, in some circumstances
I recommend alternatives.</p><p>
A classic case is the co-located server.
Nothing like packing up your pride and joy and
shipping it to strange far off locations like
Sunnyvale and Herndon, VA. It's enough to make
a server homesick, so I like to suggest choosing
a piece of music that will remind them of home and
tide them over till the bigger servers stop picking
on them. For servers from North Carolina, I like
to play the entirety of "feet in mud again" by
<a href="http://www.slendermusic.com/articles/record/geezer.phtml">Geezer Lake</a>. Nothing like some good old NC
style avant-metal-alterna-prog. </p><p>
Commentary, controversy, chatter, chit-chat.
Chat and irc servers have their own unique set of
problems. I find the polyrhythmic and incessant
restatement of purpose of <a href="http://www.elephant-talk.com/releases/discipli.htm#lyrics1">Elephant
Talk</a> by King Crimson a good way to bend those servers back into shape.<p>
btw, Xach says "Crazy Train" has the best guitar solo ever.</p><p>
</p><h4><a name="thanks"><b>Thanks</b></a></h4><p>
Folks that have sent me new info, corrected info, or just
sat still long enough for me to ask them lots of questions:</p>
<ul><li> Arjan van de Ven
</li><li> Xach Beane
</li><li> Michael K. Johnson
</li><li> James Manning
</li></ul>
<h4><a name="todo"><b>TODO</b></a></h4>
<ul><li> add info about mod_proxy, caching, listenBacklog, etc
</li><li> add info for oracle tuning
</li><li> any other useful server specific tuning info I stumble across
</li><li> add info about kernel mem limits, PAE, bigmem, LFS
and other kernel related stuff likely to be useful
</li></ul>
<h4><a name="changes"><b>Changes</b></a></h4>
<dl><dt>Nov 19 2001
</dt><dd>s/conf.modules/modules.conf
info on httperf/autobench/netperf from Bill Hilf.
</dd><dt>Oct 16 2001
</dt><dd>Added links to the excellent mod_perl tuning guide, and
the online chapter for tuning samba. Added some info about
the use of MaxRequestsPerChild, mod_proxy, and listenBacklog
to the apache section.
</dd></dl><p>
<a href="mailto:alikins@redhat.com">alikins@redhat.com</a>
</p></body></html>