I'm currently building a new iteration of my DIY router - the new system has a pair of 10 gig ports. I'm running Ubuntu 23.04 on an R68S U1.
Initially, during iperf2 speed tests against a system I know can handle 10 gig line speeds, I was only seeing about 5 gig. One of Asus's guides for their own equipment suggested testing with an 800 KByte window:
geek@router-t1:~$ iperf -s -w 800k
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 416 KByte (WARNING: requested 781 KByte)
------------------------------------------------------------
[  1] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 52191
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-60.0461 sec  35.0 GBytes  5.01 Gbits/sec
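For reference, the client side was the matching iperf2 invocation; I don't have the exact command in my notes, but it was along these lines:

geek@client:~$ iperf -c 10.0.0.1 -w 800k -t 60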
Interestingly, the warning shows the granted TCP window (416 KByte) was smaller than the 781 KByte requested, which is precisely what Asus warned about.
This never happens with Windows clients, only Linux ones... which is curious, but probably a separate issue.
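For what it's worth, 416 KByte looks like exactly double the stock limit. Checking on my box (I'm assuming this 212992 value is Ubuntu's usual default rather than something specific to the R68S):

geek@router-t1:~$ sysctl net.core.rmem_max
net.core.rmem_max = 212992

My understanding is that the kernel caps an explicit SO_RCVBUF request at net.core.rmem_max and then doubles it to account for bookkeeping overhead: 2 x 212992 bytes is 416 KByte on the nose, and after raising the limit (below), 2 x 800000 bytes matches the 1.53 MByte reported in the second run.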
Adding the following sysctl settings - as suggested here -
net.core.wmem_max = 4194304
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 87380 4194304
roughly doubled the benchmark numbers:
geek@router-t1:~$ iperf -s -w 800k -B 10.0.0.1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.53 MByte (WARNING: requested 781 KByte)
------------------------------------------------------------
[  1] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 57480
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-60.0443 sec  69.2 GBytes  9.90 Gbits/sec
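In case it matters: I put the settings in /etc/sysctl.conf and reloaded them without a reboot. My assumption is that sysctl -p is sufficient for these keys, and that seemed to hold:

geek@router-t1:~$ sudo sysctl -p
net.core.wmem_max = 4194304
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 87380 4194304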
I understand that net.core.rmem_max / wmem_max cap the receive and send buffers a socket may request, while the tcp_rmem / tcp_wmem triples set the minimum, default, and maximum buffer sizes for newly created TCP sockets - IBM has a pretty good explanation of what they do here.
How would I work out the appropriate size for a given system, and why does this have such a dramatic effect?
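My rough guess - and please correct me if this is the wrong mental model - is that the buffer has to cover the bandwidth-delay product of the path, e.g. for a 10 Gbit/s link with an assumed 1 ms round-trip time:

geek@router-t1:~$ # 10 Gbit/s x 0.001 s RTT, divided by 8 bits per byte
geek@router-t1:~$ echo '10 * 10^9 / 1000 / 8' | bc
1250000

That would put the required window around 1.25 MBytes, which is suspiciously close to the 1.53 MByte window from the second run, but I'd like to know how to size this properly rather than by trial and error.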