NETWORK:
see also http://fasterdata.es.net/TCP-tuning/linux.html
- Adjust the TCP wait times (tcp_fin_timeout, tcp_keepalive_time) and the TCP queue and buffer sizes
- run cat /etc/sysctl.conf to view the current kernel parameters:
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the default maximum size of a message queue, in bytes
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the total amount of shared memory allowed, in pages
kernel.shmall = 4294967296
#
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_fin_timeout = 20
net.ipv4.ip_local_port_range = 1024 65000
kernel.msgmni = 512
fs.file-max = 2048000
kernel.sem = 250 32000 100 128
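# NOTE: the next line sets kernel.shmmax a second time; sysctl -p applies the file top to bottom, so this smaller value overrides the 68719476736 above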
kernel.shmmax = 536870912
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
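- after editing /etc/sysctl.conf, apply the values without a reboot:
sysctl -p
Beware that several settings above (tcp_window_scaling = 0, tcp_sack = 0, tcp_timestamps = 0) are the opposite of what the fasterdata.es.net page linked above recommends for high-bandwidth paths, and tcp_tw_recycle = 1 is known to misbehave with clients behind NAT. A sketch in the fasterdata style, with illustrative rather than prescriptive buffer sizes:
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216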
- netstat -s --tcp displays plenty of statistics on the state of networking (a note on the most telling counter follows the dump):
IcmpMsg:
InType0: 520
InType3: 17210
InType8: 4282432
InType11: 2285
OutType0: 4282415
OutType3: 14683
OutType8: 520
Tcp:
13494422 active connections openings
16851165 passive connection openings
3174513 failed connection attempts
2989658 connection resets received
97 connections established
2032708482 segments received
1797592474 segments send out
2034074 segments retransmited
0 bad segments received.
19698882 resets sent
TcpExt:
1599568 invalid SYN cookies received
33244 resets received for embryonic SYN_RECV sockets
717 packets pruned from receive queue because of socket buffer overrun
34 ICMP packets dropped because they were out-of-window
8074204 TCP sockets finished time wait in fast timer
1470 time wait sockets recycled by time stamp
31993765 delayed acks sent
81156 delayed acks further delayed because of locked socket
Quick ack mode was activated 194244 times
1465773 times the listen queue of a socket overflowed
1465773 SYNs to LISTEN sockets ignored
617478244 packets directly queued to recvmsg prequeue.
1866789567 packets directly received from backlog
2618179228 packets directly received from prequeue
764515521 packets header predicted
53336874 packets header predicted and directly queued to user
214677769 acknowledgments not containing data received
1325675260 predicted acknowledgments
131990 times recovered from packet loss due to fast retransmit
Detected reordering 15 times using reno fast retransmit
3 congestion windows fully recovered
2 congestion windows partially recovered using Hoe heuristic
3452 congestion windows recovered after partial ack
0 TCP data loss events
9153 timeouts after reno fast retransmit
13533 timeouts in loss state
54353 fast retransmits
1086481 retransmits in slow start
665969 other TCP timeouts
TCPRenoRecoveryFail: 103601
40848 times receiver scheduled too late for direct processing
43658 packets collapsed in receive queue due to low socket buffer
3483274 connections reset due to unexpected data
719921 connections reset due to early user close
17585 connections aborted due to timeout
IpExt:
InMcastPkts: 22414134
OutMcastPkts: 13165007
InBcastPkts: 10954581
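In the sample above the alarming counter is "1465773 times the listen queue of a socket overflowed": the accept queue is too small for the connection rate. A minimal sketch of the usual remedy, assuming the application also raises its own listen() backlog (values are illustrative):
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_max_syn_backlog=4096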
- monitor the OS for swapping with vmstat; nonzero si (swap-in) and so (swap-out) columns mean the box is actively paging (see the sampling note after the output):
vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 2638660 97288 36468 922356 7 4 36 23 0 1 5 0 93 1 0
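A single vmstat snapshot reports averages since boot; to watch live behavior, sample at an interval (here every 5 seconds) and look for sustained nonzero si/so:
vmstat 5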
- in JRockit Mission Control JRA recordings, watch out for heavy allocation of weblogic.utils.io.Chunk objects; they can be harbingers of the need to adjust the weblogic.utils.io.chunkpoolsize parameter (a sketch follows)
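A minimal sketch of bumping the chunk pool size in the server start script; the value 4096 is purely illustrative, not a recommendation:
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.utils.io.chunkpoolsize=4096"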
- place one file store per disk if you have non-RAID disks, so the stores do not contend for the same spindle
- if you notice latency in the JDBC connection pool, use the Pinned-To-Thread property to dedicate a connection to each execute thread; it trades one connection per thread for lower pool contention (a sketch follows)
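A minimal sketch of enabling it in the data source module descriptor (surrounding elements abbreviated):
<jdbc-data-source>
  ...
  <jdbc-connection-pool-params>
    <pinned-to-thread>true</pinned-to-thread>
  </jdbc-connection-pool-params>
</jdbc-data-source>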
Monday, October 5, 2009