Tuning FreeBSD?

I set up an HAProxy load balancer on FreeBSD, and from time to time this message appears in the logs:
Proxy *** reached system memory limit at 67 sockets. Please check system tunables.


Shortly after a series of such messages, the server loses connectivity with the outside world until it is rebooted. Here is the output of netstat -m:
2004/2991/4995 mbufs in use (current/cache/total)
2001/1803/3804/32768 mbuf clusters in use (current/cache/total/max)
2001/1583 mbuf+clusters out of packet secondary zone in use (current/cache)
0/135/135/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
4503K/4893K/9396K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines


I found how to increase the number of network mbufs, sockets and clusters via sysctl, but for the memory allocated to the network (the "bytes allocated to network" line above) I could not find the right variables. Any hints on where to dig? Any ideas what exactly the problem is?
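As an aside, the "denied" counters in netstat -m output are the clearest sign of real mbuf exhaustion; a small portable filter can flag any non-zero denial lines. This is an editor's sketch run against a hypothetical saved capture (not output from this server):

```shell
# Hypothetical sample of saved `netstat -m` output; on the live box you
# would pipe `netstat -m` straight into the awk filter below.
cat > /tmp/netstat-m.sample <<'EOF'
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
3/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
EOF
# Print only "denied" lines whose counters are not all zero.
awk '/denied/ && $1 != "0/0/0" && $1 != "0" { print "DENIALS:", $0 }' /tmp/netstat-m.sample
```

In the output quoted above, all denial counters are zero, which suggests the mbuf pools themselves were not yet exhausted when the snapshot was taken.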
October 3rd 19 at 02:22
2 answers
October 3rd 19 at 02:24
Solution

kern.ipc.nmbclusters=400000
kern.ipc.maxsockbuf=83886080
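If helpful, these values can be applied at runtime and persisted across reboots; a sketch using the values above (note that on older FreeBSD releases kern.ipc.nmbclusters may only be settable at boot via /boot/loader.conf):

# Apply at runtime (requires root):
sysctl kern.ipc.nmbclusters=400000
sysctl kern.ipc.maxsockbuf=83886080

# Persist by adding the same lines to /etc/sysctl.conf:
kern.ipc.nmbclusters=400000
kern.ipc.maxsockbuf=83886080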


Here is what I have:
21502/10887/32389 mbufs in use (current/cache/total)
20464/7831/28295/400000 mbuf clusters in use (current/cache/total/max)
20464/7823 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/253036 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/74973 9k jumbo clusters in use (current/cache/total/max)
0/0/0/42172 16k jumbo clusters in use (current/cache/total/max)
92607K/36767K/129374K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

so how does this change things? I had already raised the first of the values above, and just now increased the second one as well. - niko.Rueck commented on October 3rd 19 at 02:27
and have you rebooted the server? - Kayleigh.Park commented on October 3rd 19 at 02:30
Yes, the numbers are lower now:
3311K/4144K/7455K bytes allocated to network (current/cache/total)
- niko.Rueck commented on October 3rd 19 at 02:33
Show:
uname -a
egrep -v '#|^$' /etc/sysctl.conf
- Kayleigh.Park commented on October 3rd 19 at 02:36
# uname -a
HF1 FreeBSD 8.3-RELEASE FreeBSD 8.3-RELEASE #0: Mon Apr 9 21:23:18 UTC 2012 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
# egrep -v '#|^$' /etc/sysctl.conf
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
kern.ipc.somaxconn=4096
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=65536
net.inet.tcp.msl=15000
net.inet.icmp.icmplim=50
net.inet.icmp.drop_redirect=1
net.inet.icmp.log_redirect=1
net.inet.tcp.drop_synfin=1
kern.ipc.nmbclusters=400000
net.inet.tcp.maxtcptw=40960
kern.ipc.maxsockets=204800
net.inet.ip.portrange.first=1000
net.inet.ip.portrange.last=65535
net.inet.ip.portrange.randomized=0
kern.ipc.maxsockbuf=83886080

it seems I've figured it out: the memory allocated to the network grows as needed - niko.Rueck commented on October 3rd 19 at 02:39
Raise these values as well, and more memory will be reserved for the network.

kern.ipc.shmmax=67108864
kern.ipc.shmall=67108864
net.inet.tcp.rfc3465=0
net.graph.maxdgram=8388608
net.graph.recvspace=8388608
net.route.netisr_maxqlen=4096
kern.ipc.maxsockbuf=83886080
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=65536
- Kayleigh.Park commented on October 3rd 19 at 02:42
Well, thank you. I applied almost all of them. My kernel is built without NETGRAPH_SOCKET (I googled that it is required for the net.graph.* parameters to be available). - niko.Rueck commented on October 3rd 19 at 02:45
One more dirty hack:

- edit the /usr/src/sys/sys/select.h and /usr/include/sys/select.h files and
change the FD_SETSIZE value there from 1024U to 16384U:

#ifndef FD_SETSIZE
#define FD_SETSIZE 16384U
#endif


Do this before rebuilding world (make buildworld).
Then recompile the applications that need more than 1024 open sockets. - Kayleigh.Park commented on October 3rd 19 at 02:48
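The same header edit can be scripted; here is an editor's sed sketch demonstrated on a scratch copy, so nothing is touched until the result looks right (the real targets are /usr/src/sys/sys/select.h and /usr/include/sys/select.h, followed by rebuilding world):

```shell
# Scratch copy mimicking the relevant part of sys/select.h (hypothetical):
cat > /tmp/select.h.sample <<'EOF'
#ifndef FD_SETSIZE
#define FD_SETSIZE 1024U
#endif
EOF
# Bump FD_SETSIZE from 1024U to 16384U; prints the edited file to stdout.
sed 's/#define[[:space:]]*FD_SETSIZE[[:space:]]*1024U/#define FD_SETSIZE 16384U/' /tmp/select.h.sample
```

Since the define is guarded by #ifndef, the new value only takes effect for code compiled after the headers are changed, which is why the applications have to be recompiled.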
October 3rd 19 at 02:26
Are you using ipf? If so, its limits need to be raised as well.
For example:
net.inet.ipf.fr_statemax
net.inet.ipf.fr_statesize
no, I don't use it - niko.Rueck commented on October 3rd 19 at 02:29

Find more questions by tags: Highload, FreeBSD, HAProxy