I occasionally get asked to test out FreeBSD/MIPS patches for people, as they don't have the physical hardware on hand. I can understand that - the hardware is cheap and plentiful, but not everyone wants to keep a spare access point around just to test out MIPS changes on.
However, QEMU does a pretty good job of emulating MIPS if you're just testing out non-hardware patches. There are even instructions on the FreeBSD wiki for how to do this! So I decided to teach my wifi build system about the various QEMU MIPS emulator targets so it can spit out a kernel and mfsroot to use for QEMU.
Thus:
https://github.com/freebsd/freebsd-wifi-build/wiki/MipsQemuEmulatorImages
It turns out that it wasn't all that hard. The main trick was to use qemu-devel, not qemu. There are bugs in the non-development QEMU branch that mean it works great for Linux but not FreeBSD.
The kernel configurations in FreeBSD had bitrotted a little bit (they were missing the random device, for example) but besides that the build, install and QEMU startup just worked. I now have FreeBSD/MIPS of each variety (32 bit, 64 bit, Little-Endian, Big-Endian) running under QEMU and building FreeBSD-HEAD as a basic test.
Next is figuring out how to build gdb to target each of the above and have it speak to the QEMU GDB stub. That should make it very easy to do MIPS platform debugging.
I also hear rumours about this stuff working somewhat for ARM and PPC, so I'll see how hard it is to run QEMU for those platforms and whether FreeBSD will just boot and run on each.
Thursday, November 20, 2014
Tuesday, October 28, 2014
More RSS UDP tests - this time on a Dell R720
I've recently had the chance to run my RSS UDP test suite up on a pair of Dell R720s. They came with on-board 10G Intel NICs (ixgbe(4) in FreeBSD) so I figured I'd run my test suite up on it.
Thank you to the Enterprise Storage Division at Dell for providing hardware for me to develop on!
The config is like in the previous blog post, but now I have two 8-core Sandy Bridge Xeon CPUs to play with. To simplify things (and to not have to try and solve NUMA-related issues) I'm running this on the first socket. The Intel NIC is attached to the first CPU socket.
So:
- CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (2000.04-MHz K8-class CPU) x 2
- RAM: 64GiB
- HTT disabled
/boot/loader.conf:
# ... until ncpus is tunable, make it use 8 buckets.
net.inet.rss.bits=3
net.isr.maxthreads=8
net.isr.bindthreads=1
This time I want to test with 8 streams, so after some trial and error I found the right IPv4 addresses to use:
- Server: 10.11.2.1/24
- Client: 10.11.2.3/24, 10.11.2.2/24, 10.11.2.32/24, 10.11.2.33/24, 10.11.2.64/24, 10.11.2.65/24, 10.11.2.17/24, 10.11.2.18/24
The test was like before - the server ran one rss-udp-srv program that spawns one thread per RSS bucket. The client side runs rss-clt programs to generate traffic - but now there's eight of them instead of four.
The results are what I expected: the contention is in the same place (UDP receive) and it's per-core - it doesn't contend between CPU cores.
Each CPU is transmitting and receiving 215,000 510-byte UDP frames a second. It scales linearly - 1 CPU is 215,000 TX/RX frames a second. 8 CPUs is 215,000 TX/RX frames a second * 8. There's no degradation as the CPU core count increases.
That's 1.72 million packets per second. At 510-byte frames it's about 7 gigabits/sec in and out.
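(Working it through: 215,000 * 8 = 1.72 million frames a second each way, and 1.72 million * 510 bytes * 8 bits works out to right around 7.0 gigabits/sec.)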
The other 8 cores are idle. Ideally we'd be able to run an application in those cores - so hopefully I can get my network / rss library up and running enough to prototype an RSS-aware memcached and see if it'll handle this particular workload.
It's a far cry from what I think we can likely achieve - but please keep in mind that I know I could do more awesome looking results with netmap, PF_RING or Intel's DPDK software. What I'm trying to do is push the existing kernel networking subsystem to its limits so the issues can be exposed and fixed.
So, where's the CPU going?
In the UDP server program (pid 1620), it looks thus:
# pmcstat -P CPU_CLK_UNHALTED_CORE -T -w 1 -p 1620
PMC: [CPU_CLK_UNHALTED_CORE] Samples: 34298 (100.0%) , 155 unresolved
%SAMP IMAGE FUNCTION CALLERS
8.0 kernel fget_unlocked kern_sendit:4.2 kern_recvit:3.9
7.0 kernel copyout soreceive_dgram:5.6 amd64_syscall:0.9
3.6 kernel __mtx_unlock_flags ixgbe_mq_start
3.5 kernel copyin m_uiotombuf:1.8 amd64_syscall:1.2
3.4 kernel memcpy ip_output:2.9 ether_output:0.6
3.4 kernel toeplitz_hash rss_hash_ip4_2tuple
3.3 kernel bcopy rss_hash_ip4_2tuple:1.4 rss_proto_software_hash_v4:0.9
3.0 kernel _mtx_lock_spin_cooki pmclog_reserve
2.7 kernel udp_send sosend_dgram
2.5 kernel ip_output udp_send
In the NIC receive / transmit thread(s) (pid 12), it looks thus:
# pmcstat -P CPU_CLK_UNHALTED_CORE -T -w 1 -p 12
PMC: [CPU_CLK_UNHALTED_CORE] Samples: 79319 (100.0%) , 0 unresolved
%SAMP IMAGE FUNCTION CALLERS
10.3 kernel ixgbe_rxeof ixgbe_msix_que
9.3 kernel __mtx_unlock_flags ixgbe_rxeof:4.8 netisr_dispatch_src:2.1 in_pcblookup_mbuf:1.3
8.3 kernel __mtx_lock_flags ixgbe_rxeof:2.8 netisr_dispatch_src:2.4 udp_append:1.2 in_pcblookup_mbuf:1.1 knote:0.6
3.8 kernel bcmp netisr_dispatch_src
3.6 kernel uma_zalloc_arg sbappendaddr_locked_internal:2.0 m_getjcl:1.6
3.4 kernel ip_input netisr_dispatch_src
3.4 kernel lock_profile_release __mtx_unlock_flags
3.4 kernel in_pcblookup_mbuf udp_input
3.0 kernel ether_nh_input netisr_dispatch_src
2.4 kernel udp_input ip_input
2.4 kernel mb_free_ext m_freem
2.2 kernel lock_profile_obtain_ __mtx_lock_flags
2.1 kernel ixgbe_refresh_mbufs ixgbe_rxeof
It looks like there are some obvious optimisations to poke at (what the heck is fget_unlocked() doing up there?) and yes, copyout/copyin are really terrible but currently unavoidable. The toeplitz hash and bcopy aren't very nice but they're occurring in the transmit path because at the moment there's no nice way to efficiently set both the outbound RSS hash and RSS bucket ID and send to a non-connected socket destination (ie, specify the destination IP:port as part of the send.) There's also some lock contention that needs to be addressed.
The output of the netisr queue statistics looks good:
root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss # netstat -Q
Configuration:
Setting Current Limit
Thread count 8 8
Default queue limit 256 10240
Dispatch policy direct n/a
Threads bound to CPUs enabled n/a
Protocols:
Name Proto QLimit Policy Dispatch Flags
ip 1 256 cpu hybrid C--
igmp 2 256 source default ---
rtsock 3 256 source default ---
arp 4 256 source default ---
ether 5 256 cpu direct C--
ip6 6 256 flow default ---
ip_direct 9 256 cpu hybrid C--
Workstreams:
WSID CPU Name Len WMark Disp'd HDisp'd QDrops Queued Handled
0 0 ip 0 25 0 839349259 0 49 839349308
0 0 igmp 0 0 0 0 0 0 0
0 0 rtsock 0 2 0 0 0 92 92
0 0 arp 0 0 118 0 0 0 118
0 0 ether 0 0 839349600 0 0 0 839349600
0 0 ip6 0 0 0 0 0 0 0
0 0 ip_direct 0 0 0 0 0 0 0
1 1 ip 0 20 0 829928186 0 286 829928472
1 1 igmp 0 0 0 0 0 0 0
1 1 rtsock 0 0 0 0 0 0 0
1 1 arp 0 0 0 0 0 0 0
1 1 ether 0 0 829928672 0 0 0 829928672
1 1 ip6 0 0 0 0 0 0 0
1 1 ip_direct 0 0 0 0 0 0 0
2 2 ip 0 0 0 835558437 0 0 835558437
2 2 igmp 0 0 0 0 0 0 0
2 2 rtsock 0 0 0 0 0 0 0
2 2 arp 0 0 0 0 0 0 0
2 2 ether 0 0 835558610 0 0 0 835558610
2 2 ip6 0 0 0 0 0 0 0
2 2 ip_direct 0 0 0 0 0 0 0
3 3 ip 0 1 0 850271162 0 23 850271185
3 3 igmp 0 0 0 0 0 0 0
3 3 rtsock 0 0 0 0 0 0 0
3 3 arp 0 0 0 0 0 0 0
3 3 ether 0 0 850271163 0 0 0 850271163
3 3 ip6 0 0 0 0 0 0 0
3 3 ip_direct 0 0 0 0 0 0 0
4 4 ip 0 23 0 817439448 0 345 817439793
4 4 igmp 0 0 0 0 0 0 0
4 4 rtsock 0 0 0 0 0 0 0
4 4 arp 0 0 0 0 0 0 0
4 4 ether 0 0 817439625 0 0 0 817439625
4 4 ip6 0 0 0 0 0 0 0
4 4 ip_direct 0 0 0 0 0 0 0
5 5 ip 0 19 0 817862508 0 332 817862840
5 5 igmp 0 0 0 0 0 0 0
5 5 rtsock 0 0 0 0 0 0 0
5 5 arp 0 0 0 0 0 0 0
5 5 ether 0 0 817862675 0 0 0 817862675
5 5 ip6 0 0 0 0 0 0 0
5 5 ip_direct 0 0 0 0 0 0 0
6 6 ip 0 19 0 817281399 0 457 817281856
6 6 igmp 0 0 0 0 0 0 0
6 6 rtsock 0 0 0 0 0 0 0
6 6 arp 0 0 0 0 0 0 0
6 6 ether 0 0 817281665 0 0 0 817281665
6 6 ip6 0 0 0 0 0 0 0
6 6 ip_direct 0 0 0 0 0 0 0
7 7 ip 0 0 0 813562616 0 0 813562616
7 7 igmp 0 0 0 0 0 0 0
7 7 rtsock 0 0 0 0 0 0 0
7 7 arp 0 0 0 0 0 0 0
7 7 ether 0 0 813562620 0 0 0 813562620
7 7 ip6 0 0 0 0 0 0 0
7 7 ip_direct 0 0 0 0 0 0 0
root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss #
It looks like everything is being dispatched correctly; nothing is being queued and/or dropped.
But yes, we're running out of socket buffers because each core is 100% pinned:
root@abaddon:/home/adrian/git/github/erikarn/freebsd-rss # netstat -sp udp
udp:
6773040390 datagrams received
0 with incomplete header
0 with bad data length field
0 with bad checksum
0 with no checksum
17450880 dropped due to no socket
136 broadcast/multicast datagrams undelivered
1634117674 dropped due to full socket buffers
0 not for hashed pcb
5121471700 delivered
5121471044 datagrams output
0 times multicast source filter matched
There's definitely room for improvement.
Friday, September 19, 2014
UDP RSS update: ixgbe(4) turned out to have issues..
I started digging deeper into the RSS performance on my home test platform. Four cores and one (desktop) socket isn't all that much, but it's a good starting point for this.
It turns out that there was some lock contention inside netisr. Which made no sense, as RSS should be keeping all the flows local to each CPU.
After a bunch of digging, I discovered that the NIC was occasionally receiving packets into the wrong ring. Have a look at this:
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff80047713d00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742e100; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c2e00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff800474c5000; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100034:
m=0xfffff8004742ec00; flowid=0x21f7db62; rxr->me=3
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff8004727a700; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f11600; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80047279b00; flowid=0x335a5c03; rxr->me=2
Sep 12 08:04:32 adrian-hackbox kernel: ix0: ixgbe_rxeof: 100032:
m=0xfffff80006f0b700; flowid=0x335a5c03; rxr->me=2
The RX flowid was correct - I hashed the packets in software too and verified the software hash equaled the hardware hash. But they were turning up on the wrong receive queue. "rxr->me" is the queue id; the hardware should be picking the ring from the last 7 bits of the hash - so a flowid ending in 0x3 should land on ring 3, and one ending in 0x2 on ring 2.
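To make the mismatch concrete, here's a small user-space sketch that computes which ring a given flowid should land on. It's an illustration only: the 128-entry redirection table, the default "entry i maps to queue i % nqueues" fill and the four RX queues are my assumptions, not values pulled out of the driver.

#include <stdint.h>
#include <stdio.h>

#define RETA_SIZE 128   /* assumed redirection table size */

/* expected RX ring, assuming RETA entry i is programmed as (i % nqueues) */
static int
expected_ring(uint32_t flowid, int nqueues)
{
        return ((int)(flowid & (RETA_SIZE - 1)) % nqueues);
}

int
main(void)
{
        /* the two flowids from the log above, assuming four RX queues */
        printf("0x21f7db62 -> ring %d\n", expected_ring(0x21f7db62, 4));
        printf("0x335a5c03 -> ring %d\n", expected_ring(0x335a5c03, 4));
        return (0);
}

Under those assumptions the first flowid should land on ring 2 and the second on ring 3 - the opposite of what the log shows, which is exactly the wrong-ring behaviour described above.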
It also only happened when I was sending traffic to more than one receive ring. Everything was okay if I just transmitted to a single receive ring.
Luckily for me, some developers from Verisign saw some odd behaviour in their TCP stress testing and had dug in a bit further. They were seeing corrupted frames on the receive side that looked a lot like internal NIC configuration state. They figured out that the ixgbe(4) driver wasn't initialising the flow director and receive units correctly - the FreeBSD driver was not correctly setting up the amount of memory each was allocated on the NIC and they were overlapping. They also found a handful of incorrectly handled errors and double-freed mbufs.
So, with that all fixed, their TCP problem went away and my UDP tests started properly behaving themselves. Now all the flows are ending up on the right CPUs.
The flow director code was also dynamically programming flows into the NIC to try and rebalance traffic. Trouble is, I think it's a bit buggy and it's likely not working well with large receive offload (LRO).
What's it mean for normal people? Well, it's fixed in FreeBSD-HEAD now. I'm hoping I or someone else will backport it to FreeBSD-10 soon. It fixes my UDP tests - now I hit around 1.3 million packets per second transmit and receive on my test rig; the server now has around 10-15% CPU free. It also fixed issues that Verisign were seeing with their high transaction rate TCP tests. I'm hoping that it fixes the odd corner cases that people have seen with Intel 10 gigabit hardware on FreeBSD and makes LRO generally more useful and stable.
Next up - some code refactoring, then finishing off IPv6 RSS!
Wednesday, September 10, 2014
Receive side scaling: testing UDP throughput
I think it's about time I shared some more details about the RSS stuff going into FreeBSD and how I'm testing it.
For now I'm focusing on IPv4 + UDP on the Intel 10GE NICs. The TCP side of things is done (and the IPv6 side of things works too!) but enough of the performance walls show up in the IPv4 UDP case that it's worth sticking to it for now.
I'm testing on a pair of 4-core boxes at home. They're not special - and they're very specifically not trying to be server-class hardware. I'd like to see where these bottlenecks are even at low core count.
The test setup in question:
Testing software:
- http://github.com/erikarn/freebsd-rss
- It requires libevent2 - an updated copy; previous versions of libevent2 didn't handle FreeBSD-specific errors gracefully and would error out of the IO loop early.
Server:
- CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz (3292.59-MHz K8-class CPU)
- There's no SMT/HTT, but I've disabled it in the BIOS just to be sure
- 4GB RAM
- FreeBSD-HEAD, amd64
- NIC: 82599EB 10-Gigabit SFI/SFP+ Network Connection
- ix0: 10.11.2.1/24
# for now redirect processing just makes the lock overhead suck even more.
# disable it.
net.inet.ip.redirect=0
net.inet.icmp.drop_redirect=1
/boot/loader.conf:
hw.ix.num_queues=8
# experiment with deferred dispatch for RSS
net.isr.numthreads=4
net.isr.maxthreads=4
net.isr.bindthreads=1
kernel config:
include GENERIC
ident HACKBOX
device netmap
options RSS
options PCBGROUP
# in-system lock profiling
options LOCK_PROFILING
# Flowtable - the rtentry locking is a bit .. slow.
options FLOWTABLE
# This debugging code has too much overhead to do accurate
# testing with.
nooptions INVARIANTS
nooptions INVARIANT_SUPPORT
nooptions WITNESS
nooptions WITNESS_SKIPSPIN
The server runs the "rss-udp-srv" process, which behaves like a multi-threaded UDP echo server on port 8080.
Client
The client box is slightly more powerful to compensate for (currently) not using completely affinity-aware RSS UDP transmit code.
- CPU: Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz (3192.68-MHz K8-class CPU)
- SMT/HTT: Disabled in BIOS
- 8GB RAM
- FreeBSD-HEAD amd64
- Same kernel config, loader and sysctl config as the server
- ix0: configured as 10.11.2.2/24, 10.11.2.3/32, 10.11.2.4/32, 10.11.2.32/32, 10.11.2.33/32
Running things
The server-side simply runs the listen server, configured to respond to each frame:
$ rss-udp-srv 1 10.11.2.1
The client-side runs four copies of udp-clt, each from a different IP address. These are run in parallel (I do it in different screens, so I can quickly see what's going on):
$ ./udp-clt -l 10.11.2.3 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.4 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.32 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
$ ./udp-clt -l 10.11.2.33 -r 10.11.2.1 -p 8080 -n 10000000000 -s 510
The IP addresses are chosen so that the 2-tuple Toeplitz hash, using the default Microsoft key, hashes to different RSS buckets that live on individual CPUs.
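For the curious, this can be checked from user space. Below is a rough sketch of the software side of that calculation: a standard Toeplitz hash over the source and destination addresses in network byte order. It's an approximation of what the NIC does, under my assumptions - you'd feed it the same 40-byte RSS key the kernel/NIC is using (not reproduced here), and the function names are just illustrative, not the kernel's own routines.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <netinet/in.h>

/* Standard Toeplitz hash of 'data' using 'key' (eg the 40-byte RSS key). */
static uint32_t
toeplitz_hash(const uint8_t *key, size_t keylen,
    const uint8_t *data, size_t datalen)
{
        uint32_t hash = 0, window;
        size_t bit, nbits = datalen * 8;

        /* start with the first 32 bits of the key as the XOR window */
        window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
            ((uint32_t)key[2] << 8) | key[3];

        for (bit = 0; bit < nbits; bit++) {
                if (data[bit / 8] & (0x80 >> (bit % 8)))
                        hash ^= window;
                /* slide the 32-bit window one bit further along the key */
                window <<= 1;
                if (bit + 32 < keylen * 8 &&
                    (key[(bit + 32) / 8] & (0x80 >> ((bit + 32) % 8))))
                        window |= 1;
        }
        return (hash);
}

/* 2-tuple IPv4 hash input: source address, then destination address. */
static uint32_t
rss_hash_ip4_2tuple_sw(const uint8_t *key, size_t keylen,
    struct in_addr src, struct in_addr dst)
{
        uint8_t data[8];

        memcpy(data, &src.s_addr, 4);
        memcpy(data + 4, &dst.s_addr, 4);
        return (toeplitz_hash(key, keylen, data, sizeof(data)));
}

Masking the hash down to the bucket count (eg hash & 3 for four buckets) is the hash-to-bucket mapping I'm assuming here; picking client addresses is then just a matter of trying candidates until each one lands in a different bucket.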
Results: Round one
When the server is responding to each frame, the following occurs. The numbers are "number of frames generated by the client (netstat)", "number of frames received by the server (netstat)", "number of frames seen by udp-rss-srv", "number of responses transmitted from udp-rss-srv", "number of frames seen by the server (netstat)":
- 1 udp-clt process: 710,000; 710,000; 296,000; 283,000; 281,000
- 2 udp-clt processes: 1,300,000; 1,300,000; 592,000; 592,000; 575,000
- 3 udp-clt processes: 1,800,000; 1,800,000; 636,000; 636,000; 600,000
- 4 udp-clt processes: 2,100,000; 2,100,000; 255,000; 255,000; 255,000
There are a couple of parts to this.
Firstly - I had left turbo boost on. What this translated to:
- One core active: ~ 30% increase in clock speed
- Two cores active: ~ 30% increase in clock speed
- Three cores active: ~ 25% increase in clock speed
- Four cores active: ~ 15% increase in clock speed.
Secondly and more importantly - I had left flow control enabled. This made a world of difference.
The revised results are mostly linear - with more active RSS buckets (and thus CPUs) things seem to get slightly more efficient:
- 1 udp-clt process: 710,000; 710,000; 266,000; 266,000; 266,000
- 2 udp-clt processes: 1,300,000; 1,300,000; 512,000; 512,000; 512,000
- 3 udp-clt processes: 1,800,000; 1,800,000; 810,000; 810,000; 810,000
- 4 udp-clt processes: 2,100,000; 2,100,000; 1,120,000; 1,120,000; 1,120,000
Finally, let's repeat the process, but only receiving instead of also echoing the packet back to the client:
$ rss-udp-srv 0 10.11.2.1
- 1 udp-clt process: 710,000; 710,000; 204,000
- 2 udp-clt processes: 1,300,000; 1,300,000; 378,000
- 3 udp-clt processes: 1,800,000; 1,800,000; 645,000
- 4 udp-clt processes: 2,100,000; 2,100,000; 900,000
The receive-only workload is actually worse off versus the transmit + receive workload!
What's going on here?
Well, a little digging shows that in both instances - even with a single udp-clt thread running which means only one CPU on the server side is actually active! - there's active lock contention.
Here's an example dtrace output for measuring lock contention with only one active process, where one CPU is involved (and the other three are idle):
Receive only, 5 seconds:
root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); }'
dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C
kernel`udp_append+0x11c
kernel`udp_input+0x8cc
kernel`ip_input+0x116
kernel`netisr_dispatch_src+0x1cb
kernel`ether_demux+0x123
kernel`ether_nh_input+0x34d
kernel`netisr_dispatch_src+0x61
kernel`ether_input+0x26
kernel`ixgbe_rxeof+0x2f7
kernel`ixgbe_msix_que+0xb6
kernel`intr_event_execute_handlers+0x83
kernel`ithread_loop+0x96
kernel`fork_exit+0x71
kernel`0xffffffff80cd19de
46729281
Transmit + receive, 5 seconds:
dtrace: description 'lockstat:::adaptive-block ' matched 1 probe
^C
kernel`knote+0x7e
kernel`sowakeup+0x65
kernel`udp_append+0x14a
kernel`udp_input+0x8cc
kernel`ip_input+0x116
kernel`netisr_dispatch_src+0x1cb
kernel`ether_demux+0x123
kernel`ether_nh_input+0x34d
kernel`netisr_dispatch_src+0x61
kernel`ether_input+0x26
kernel`ixgbe_rxeof+0x2f7
kernel`ixgbe_msix_que+0xb6
kernel`intr_event_execute_handlers+0x83
kernel`ithread_loop+0x96
kernel`fork_exit+0x71
kernel`0xffffffff80cd19de
3793
kernel`udp_append+0x11c
kernel`udp_input+0x8cc
kernel`ip_input+0x116
kernel`netisr_dispatch_src+0x1cb
kernel`ether_demux+0x123
kernel`ether_nh_input+0x34d
kernel`netisr_dispatch_src+0x61
kernel`ether_input+0x26
kernel`ixgbe_rxeof+0x2f7
kernel`ixgbe_msix_que+0xb6
kernel`intr_event_execute_handlers+0x83
kernel`ithread_loop+0x96
kernel`fork_exit+0x71
kernel`0xffffffff80cd19de
3823793
kernel`ixgbe_msix_que+0xd3
kernel`intr_event_execute_handlers+0x83
kernel`ithread_loop+0x96
kernel`fork_exit+0x71
kernel`0xffffffff80cd19de
9918140
Somehow it seems there's less lock contention / blocking going on when both transmit and receive is running!
So then I dug into it using the lock profiling suite. This is for 5 seconds with receive-only traffic on a single RSS bucket / CPU (all other CPUs are idle):
# sysctl debug.lock.prof.enable=1 ; sleep 5 ; sysctl debug.lock.prof.enable=0
root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.enable=1 ; sleep 5 ; sysctl debug.lock.prof.enable=0
debug.lock.prof.enable: 1 -> 1
debug.lock.prof.enable: 1 -> 0
root@adrian-hackbox:/home/adrian/git/github/erikarn/freebsd-rss # sysctl debug.lock.prof.stats | head -2 ; sysctl debug.lock.prof.stats | sort -nk4 | tail -10
debug.lock.prof.stats:
max wait_max total wait_total count avg wait_avg cnt_hold cnt_lock name
1496 0 10900 0 28 389 0 0 0 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2755 (sx:USB config SX lock)
debug.lock.prof.stats:
0 0 31 1 67 0 0 0 4 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:888 (spin mutex:sched lock 2)
0 0 2715 1 49740 0 0 0 7 /usr/home/adrian/work/freebsd/head/src/sys/dev/random/random_harvestq.c:294 (spin mutex:entropy harvest mutex)
1 0 51 1 131 0 0 0 2 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:1179 (spin mutex:sched lock 1)
0 0 69 2 170 0 0 0 8 /usr/home/adrian/work/freebsd/head/src/sys/kern/sched_ule.c:886 (spin mutex:sched lock 2)
0 0 40389 2 287649 0 0 0 8 /usr/home/adrian/work/freebsd/head/src/sys/kern/kern_intr.c:1359 (spin mutex:sched lock 2)
0 2 2 4 12 0 0 0 2 /usr/home/adrian/work/freebsd/head/src/sys/dev/usb/usb_device.c:2762 (sleep mutex:Giant)
15 20 6556 520 2254 2 0 0 105 /usr/home/adrian/work/freebsd/head/src/sys/dev/acpica/Osd/OsdSynch.c:535 (spin mutex:ACPI lock (0xfffff80002b10f00))
4 5 195967 65888 3445501 0 0 0 28975 /usr/home/adrian/work/freebsd/head/src/sys/netinet/udp_usrreq.c:369 (sleep mutex:so_rcv)
Notice the lock contention for the so_rcv (socket receive buffer) handling? What's going on here is pretty amusing - it turns out that because there's so much receive traffic going on, the userland process receiving the data is being preempted by the NIC receive thread very often - and when this happens, there's a good chance it's going to be within the small window that the receive socket buffer lock is held. Once this happens, the NIC receive thread processes frames until it gets to one that requires it to grab the same sock buffer lock that is already held by userland - and it fails - so the NIC thread sleeps until the userland thread finishes consuming a packet. Then the CPU flips back to the NIC thread and continues processing a packet.
When the userland code is also transmitting frames it's increasing the amount of time in between socket receives and decreasing the probability of hitting the lock contention condition above.
Note there's no contention between CPUs here - this is entirely contention within a single CPU.
So for now I'm happy that the UDP IPv4 path is scaling well enough with RSS on a single core. The main performance problem here is the socket receive buffer locking (and, yes, copyin() / copyout().)
Next!
Friday, August 1, 2014
Receive Side Scaling: figuring out how to handle IP fragments
The TL;DR of this is - IP fragments are annoying.
If everything was awesome and there were never IP fragments, all TCP and UDP frames would always have the TCP/UDP header stamped on them, and the NIC could hash the TCP/UDP header in hardware to calculate the destination queue to receive traffic on.
However, everything isn't awesome and there will be cases where IP frames are fragmented. When this happens, the first fragment of the datagram has the IPv4 header and the TCP/UDP header - but the subsequent fragments only have the IPv4 header. That means there's not enough information in the rest of the fragments to hash them to the same hash value and thus hardware queue as the first fragment - only the first has the full IPv4+TCP/UDP information.
The Intel and Chelsio NICs hash fragmented packets using only the IPv4 details. So, if it's a fragmented TCP or UDP frame, it will hash the first fragment the same as the others - it'll ignore the TCP/UDP details and only hash on the IPv4 details. This means that all the fragments in a given IP datagram will hash to the same value and thus the same queue.
But if there are a mix of fragmented and non-fragmented packets in a given flow - for example, small versus larger UDP frames - then some may be hashed via the IPv4+TCP or IPv4+UDP details and some will just be hashed via the IPv4 details. This means that packets in the same flow will end up being received in different receive queues and thus highly likely be processed out of order.
The Linux Intel driver code flipped off IPv4+UDP hashing a while ago - they hash UDP frames by their IPv4 details only and then do whatever other load balancing in the kernel they choose. I found this and updated the FreeBSD drivers to do the same. This should result in fewer out-of-order UDP frames for UDP-heavy workloads. I'm not sure about the Chelsio driver yet - when I convert it to the RSS framework it'll disable IPv4+UDP hashing if that isn't enabled at boot time. This is a good stop-gap, but it's not the whole story.
TCP is where it gets annoying. People don't want to flip off IPv4+TCP hashing as they're convinced that the TCP MSS negotiation and path-MTU discovery stuff will prevent there from being any IP fragmented TCP frames. But, well, that's not really viable in the real world. There are too many misconfigured networks out there and IP fragmentation does occur. So this is also a problem for TCP. This means that the IPv4 fragmented TCP frames in those sessions will come into another receive queue and CPU and this will show up as out of order data.
So, what's this all have to do with receive side scaling?
With RSS, there's a well defined hash for packets and a configuration for what the operating system and NICs are supposed to be doing. It's entirely possible that we'll configure IPv4+TCP to be hashed and also entirely possible we'll see IP fragments showing up on other CPUs. So in order to have the TCP stack run on the right CPU, the IP fragments need to be assembled on whichever CPU they're received upon and then re-injected into the correct destination queue to run on the correct CPU.
Fortunately the FreeBSD netisr scheme makes this easy.
So what I'm doing in my branch (and what will soon show up in -HEAD) is thus (there's a rough sketch of the flow just after this list):
- UDP is still hashed as IPv4-only frames for now. I'll change that later to hash on IPv4+UDP and have things reinjected on the correct destination RSS bucket / netisr queue / CPU.
- I create one netisr thread, pinned to a CPU, for each RSS CPU that's defined.
- Ideally I'd create one netisr thread for each RSS bucket and pin that, but that'll come later.
- IP fragments will be hashed to whatever the IPv4 hash calculates, so fragment reassembly will occur on some CPU;
- .. and it's the same CPU for all frames in a fragmented datagram.
- Then when the fragment is reassembled, a software hash is calculated for the newly reassembled frame.
- If RSS is configured to hash for IPv4 only, then it'll see that the hash on the reassembled datagram matches the configured hash for that packet type and reuse it.
- So, if it's UDP right now, it'll see that UDP is only hashing on IPv4 details and reuse it.
- .. but if IPv4+UDP hashing is configured, it'll software hash the packet and assign the new flow type and RSS hash.
- Then, it'll reinject the frame into netisr to be requeued and reprocessed.
- .. this uses the nh_m2cpuid function to calculate the destination CPU for the given RSS hash.
- If it's handled on the same destination CPU then it'll be handled.
- If it's handled on a different destination CPU then it'll be queued to that netisr and dispatched appropriately.
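Here's that rough sketch - a toy, runnable user-space model of just the final decision. The hash value, the bucket count and the identity bucket-to-CPU mapping are all stand-in assumptions; the real code uses the kernel's software hashing and nh_m2cpuid(), not these helpers.

#include <stdint.h>
#include <stdio.h>

#define RSS_BITS 2      /* assume 4 RSS buckets, like a 4-core test box */

/* toy hash -> CPU mapping; assumes bucket N lives on CPU N */
static int
hash_to_cpu(uint32_t hash)
{
        return ((int)(hash & ((1 << RSS_BITS) - 1)));
}

/* the post-reassembly decision: process locally, or requeue to the right CPU */
static void
reassembled_datagram_input(uint32_t software_hash, int current_cpu)
{
        int target_cpu = hash_to_cpu(software_hash);

        if (target_cpu == current_cpu)
                printf("CPU %d: already the right CPU, keep processing\n",
                    current_cpu);
        else
                printf("CPU %d: requeue to the netisr thread on CPU %d\n",
                    current_cpu, target_cpu);
}

int
main(void)
{
        /* fragments arrived and were reassembled on CPU 1, but the flow's
         * recalculated hash says it belongs to CPU 3 - so it gets requeued */
        reassembled_datagram_input(0x335a5c03, 1);
        /* and the case where reassembly already happened on the right CPU */
        reassembled_datagram_input(0x335a5c03, 3);
        return (0);
}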
This works. It's not great, and I'd rather the IP fragment reassembly code was much more efficient, but it's correct. I'm going for correctness here to begin with.
Now, before you ask - yes, IPv6 has fragments and yes, I have to do the same thing for IPv6 flows. Most of the code is written.
Finally - the same thing applies to things like IPv4 tunnels, IPv6-in-IPv4 tunnels, IPSEC tunnels and the like. The NIC hashes the packets on the IPv4 header details but once the packet is de-encapsulated, it needs to be reinjected back into the correct CPU for further processing.
Monday, July 21, 2014
Application awareness of receive side scaling (RSS) on FreeBSD
Part of testing this receive side scaling work is designing a set of APIs that allow for some kind of affinity awareness. It's not easy - the general case is difficult and highly varying. But something has to be tested! So, where should it begin?
The main tricky part of this is the difference between incoming, outgoing and listening sockets.
For incoming traffic, the NIC has already calculated the RSS hash value and there's already a map between RSS hash and destination CPU. Well, destination queue to be much more precise; then there's a CPU for that queue.
For outgoing traffic, the thread(s) in question can be scheduled on any CPU core and as you have more cores, it's increasingly unlikely to be the right one. In FreeBSD, the default is to direct dispatch transmit related socket and protocol work in the thread that started it, save a handful of places like TCP timers. Once the driver if_transmit() method is called to transmit a frame it can check the mbuf to see what the flowid is and map that to a destination transmit queue. Before RSS, that's typically done to keep packets vaguely in some semblance of in-order behaviour - ie, for a given traffic flow between two endpoints (say, IP, or TCP, or UDP) the packets should be transmitted in-order. It wasn't really done for CPU affinity reasons.
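As a tiny, runnable illustration of that queue selection (the modulo mapping, the queue count and the "fall back to the current CPU" behaviour are all assumptions made for the example, not what any particular driver does):

#include <stdint.h>
#include <stdio.h>

/* toy stand-in for the flowid -> transmit queue choice in if_transmit() */
static int
pick_tx_queue(int have_flowid, uint32_t flowid, int current_cpu, int ntxq)
{
        if (have_flowid)
                return ((int)(flowid % (uint32_t)ntxq)); /* keep the flow on one queue */
        return (current_cpu % ntxq);                     /* otherwise stay CPU-local */
}

int
main(void)
{
        /* every packet in a flow with hash 0x335a5c03 picks the same queue */
        printf("hashed flow  -> queue %d\n", pick_tx_queue(1, 0x335a5c03, 5, 8));
        /* an unhashed packet just follows whatever CPU it was sent from */
        printf("unhashed pkt -> queue %d\n", pick_tx_queue(0, 0, 5, 8));
        return (0);
}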
Before RSS, there was no real consistency with how drivers hashed traffic upon receive, nor any rules on how it should select an outbound transmit queue for a given buffer. Most multi-queue drivers got it "mostly right". They definitely didn't try to make any CPU affinity choices - it was all done to preserve the in-order behaviour of traffic flows.
For an incoming socket, all the information about the destination CPU can be calculated from the RSS hash provided during frame reception. So, for TCP, the RSS hash for the received ACK during the three way handshake goes into the inpcb entry. For UDP it's not so simple (and the inpcb doesn't get a hash entry for UDP - I'll explain why below.)
For an outgoing socket, all the information about the eventual destination CPU isn't necessarily available. If the application knows the source/destination IP and source/destination port then it (or the kernel) can calculate the RSS hash that the hardware would calculate upon frame reception and use that to populate the inpcb. However this isn't typically known - frequently the source IP and port won't be explicitly defined and it'll be up to the kernel to choose them for the application. So, during socket creation, the destination CPU can't be known.
So to make it simple (and to make it simple for me to ensure the driver and protocol stack parts are working right) my focus has been on incoming sockets and incoming packets, rather than trying to handle outgoing sockets. I can handle outbound sockets easily enough - I just need to do a software hash calculation once all of the required information is available (ie, the source IP and port is selected) and populate the inpcb with that particular value. But I decided to not have to try and debug that at the same time as I debugged the driver side and the protocol stack side, so it's a "later" task.
For TCP, traffic for a given connection will use the same source/destination IP and source/destination port values. So for a given socket, it'll always hash to the same value. However, for UDP, it's quite possible to get UDP traffic from a variety of different source IP/ports and respond from a variety of different source/IP ports. This means that the RSS hash value that we can store in the inpcb isn't at all guaranteed to be the same for all subsequent socket writes.
Ok, so given all of that above information, how exactly is this supposed to work?
Well, the slightly more interesting and pressing problem is how to break out incoming requests/packets to multiple receive threads. In traditional UNIX socket setups, there are a couple of common design patterns for farming off incoming requests to multiple worker threads:
- There's one thread that just does accept() (for TCP) or recv() (for UDP) and it then farms off new connections to userland worker threads; or
- There are multiple userland worker threads which all wait on a single socket for accept() or recv() - and hope that the OS will only wake up one thread to hand work to.
I decided this wasn't really acceptable for the RSS work. I needed a way to redirect traffic to a thread that's also pinned to the same CPU as the receive RSS bucket. I decided the cheapest way would be to allow multiple PCB entries for the same socket details (eg, multiple TCP sockets listening on *:80). Since the PCBGROUPS code in this instance has one PCB hash per RSS bucket, all I had to do was to teach the stack that wildcard listen PCB entries (eg, *:80) could also exist in each PCB hash bucket and to use those in preference to the global PCB hash.
The idea behind this decision is pretty simple - Robert Watson already did all this great work in setting up and debugging PCBGROUPS and then made the RSS work leverage that. All I'd have to do is to have one userland thread in each RSS bucket and have the listen socket for that thread be in the RSS bucket. Then any incoming packet would first check the PCBGROUP that matched the RSS bucket indicated by the RSS hash from the hardware - and it'd find the "right" PCB entry in the "right" PCBGROUP PCB hash table for the "right" RSS bucket.
That's what I did for both TCP and UDP.
So the programming model is thus (there's a rough code sketch just after this list):
- First, query the RSS sysctl (net.inet.rss) for the RSS configuration - this gives the number of RSS buckets and the RSS bucket -> CPU mapping.
- Then create one worker thread per RSS bucket..
- .. and pin each thread to the indicated CPU.
- Next, each worker thread creates one listen socket..
- .. sets the IP_BINDANY or IP6_BINDANY option to indicate that there'll be multiple RSS entries bound to the given listen details (eg, binding to *:80);
- .. then IP_RSS_LISTEN_BUCKET to set which RSS bucket the incoming socket should live in;
- Then for UDP - call bind()
- Or for TCP - call bind(), then call listen()
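Here's the rough code sketch mentioned above, putting the list together for the UDP case. It's a sketch under some assumptions rather than a drop-in server: I'm assuming bucket N maps to CPU N (the real mapping should come from the RSS sysctls), that IP_RSS_LISTEN_BUCKET is visible via <netinet/in.h> and taken at the IPPROTO_IP level, and all error checking is omitted.

#include <sys/param.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/cpuset.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <pthread.h>
#include <pthread_np.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static void *
worker(void *arg)
{
        int bucket = (int)(intptr_t)arg, s, one = 1;
        cpuset_t cs;
        struct sockaddr_in sin;

        /* pin this worker to the CPU backing the RSS bucket (assumed 1:1) */
        CPU_ZERO(&cs);
        CPU_SET(bucket, &cs);
        pthread_setaffinity_np(pthread_self(), sizeof(cs), &cs);

        s = socket(PF_INET, SOCK_DGRAM, 0);
        /* allow multiple sockets bound to the same *:8080 details .. */
        setsockopt(s, IPPROTO_IP, IP_BINDANY, &one, sizeof(one));
        /* .. and tell the stack which RSS bucket this listener lives in */
        setsockopt(s, IPPROTO_IP, IP_RSS_LISTEN_BUCKET, &bucket, sizeof(bucket));

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_len = sizeof(sin);
        sin.sin_port = htons(8080);
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&sin, sizeof(sin));

        /* .. recvfrom()/sendto() loop for this bucket goes here .. */
        return (NULL);
}

int
main(void)
{
        int bits, i, nbuckets;
        size_t len = sizeof(bits);
        pthread_t t;

        /* net.inet.rss.bits -> number of buckets (eg 2 bits -> 4 buckets) */
        sysctlbyname("net.inet.rss.bits", &bits, &len, NULL, 0);
        nbuckets = 1 << bits;

        for (i = 0; i < nbuckets; i++)
                pthread_create(&t, NULL, worker, (void *)(intptr_t)i);
        pause();
        return (0);
}

For TCP the only differences in this sketch would be a SOCK_STREAM socket and a listen() call after the bind().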
Each worker thread will then receive TCP connections / UDP frames that are local to that CPU. Writing data out the TCP socket will also stay local to that CPU. Writing UDP frames out doesn't - and I'm about to cover that.
Yes, it's annoying because now you're not just able to choose an IO model that's convenient for your application / coding style. Oops.
Ok, so what's up with UDP?
The problem with UDP is that outbound responses may be to an arbitrary destination setup and thus may actually be considered "local" to another CPU. Most common services don't do this - they'll send the UDP response to the same remote IP and port that it was sent from.
My plan for UDP (and TCP in some instances, see below!) is four-fold:
- When receiving UDP frames, optionally mark them with RSS hash and flowid information.
- When transmitting UDP frames, allow userspace to inform the kernel about a pre-calculated RSS hash / flow information.
- For the fully-connected setup (ie, where a single socket is connect() ed to a given UDP remote IP:port and frame exchange only occurs between the fixed IP and port details) - cache the RSS flow information in the inpcb;
- .. and for all other situations (if it's not connected, if there's no hint from userland, if it's going to a destination that isn't in the inpcb) - just do a software hash calculation on the outgoing details.
I mostly have the first two UDP options implemented (ie, where userland caches the information to re-use when transmitting the response) and I'll commit them to FreeBSD soon. The other two options are the "correct" way to do the default methods but it'll take some time to get right.
Ok, so does it work?
I don't have graphs. Mostly because I'm slack. I'll do up some before I present this - likely at BSDCan 2015.
My testing has been done with Intel 1G and 10G NICs on desktop Ivy Bridge 4-core hardware. So yes, server class hardware will behave better.
For incoming TCP workloads (eg a webserver) then yes, there's no lock contention between CPUs in the NIC driver or network stack any longer. The main lock contention between CPUs is the VM and allocator paths. If you're doing disk IO then that'll also show up.
For incoming UDP workloads, I've seen it scale linearly on 10G NICs (ixgbe(4)) from one to four cores. This is with non-fragmented, 510-byte datagrams.
Ie, 1 core reception (ie, all flows to one core) was ~ 250,000 pps into userland with just straight UDP reception and no flow/hash information via recvmsg(); 135,000 pps into userland with UDP reception and flow/hash information via recvmsg().
4 core reception was ~ 1.1 million pps into userland, roughly ~ 255,000 pps per core. There's no contention between CPU cores at all.
Unfortunately what I was sending was markedly different. The driver quite happily received 1.1 million frames on one queue and up to 2.1 million when all four queues were busy. So there's definitely room for improvement.
Now, there is lock contention - it's just not between CPU cores. Now that I'm getting past the between-core contention, we see the within-core contention.
For TCP HTTP request reception and bulk response transmission, most of the contention I'm currently seeing is between the driver transmit paths. So, the following occurs:
- TCP stack writes some data out;
- NIC if_transmit() method is called;
- It tries to grab the queue lock and succeeds;
But then, whilst the transmit lock is held (because the driver is taking frames from the buf_ring to push into the NIC TX DMA queue):
- The NIC queue interrupt fires, scheduling the software interrupt thread;
- This pre-empts the existing running transmit thread;
- The NIC code tries to grab the transmit lock to handle completed transmissions;
- .. and it fails, because the code it preempted holds the transmit lock already.
So there's some context switching and thrashing going on there which needs to be addressed.
Ok, what about UDP? It turns out there's some lock contention with the socket receive buffer.
The soreceive_dgram() routine grabs the socket receive buffer (SOCKBUF_LOCK()) to see if there's anything to return. If not, and if it can sleep, it'll call sbwait(), which releases the lock and msleep()s waiting for the protocol stack to indicate that something has been received. However, since we're receiving packets at such a very high rate, it seems that the receive protocol path contends with the socket buffer lock that is held by the userland code trying to receive a datagram. The NIC receive thread pre-empts the user thread, tries to grab the lock and fails - and then goes to sleep until the userland code finishes with the lock. soreceive_dgram() doesn't hold the lock for very long - but I do see upwards of a million context switches a second.
To wrap up - I'm pleased with how things are going. I've found and fixed some issues with the igb(4) and ixgbe(4) drivers that were partly my fault and the traffic is now quite happily and correctly being processed in parallel. There are issues with scaling within a core that are now being exposed and I'm glad to say I'm going to ignore them for now and focus on wrapping up what I've started.
There's a bunch more to talk about and I'm going to do it in follow-up posts.
- what I'm going to do about UDP transmit in more detail;
- what about creating outbound connections and how applications can be structured to handle this;
- handling IP fragments and rehashing packets to be mostly in-order - and what happens when we can't guarantee ordering with the hardware hashing UDP frames to a 4-tuple;
- CPU hash rebalancing - what if a specific bucket gets too much CPU load for some reason;
- randomly creating a toeplitz RSS hash key at bootup and how that should be verified;
- multi-socket CPU and IO domain awareness;
- .. and whatever else I'm going to stumble across whilst I'm slowly fleshing this stuff out.
I hope to get the UDP transmit side of things completed in the next couple of weeks so I can teach memcached about TCP and UDP RSS. After that, who knows!
Saturday, June 7, 2014
Hacking on Receive Side Scaling (RSS) on FreeBSD
RSS is a Microsoft invention that tries to keep a given TCP or UDP flow (and I think IP, but I haven't yet tried that) on a given CPU core. The idea is to try and keep both flow-local data and flow-local locking on a single CPU core, increasing the chances that data is hot in the CPU core cache and reducing the chance of lock overhead.
You can find the RSS overview and programming details here:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff567236(v=vs.85).aspx
RSS and supporting technology has been making its way into FreeBSD for quite some time but it's not in any real shape that application developers can take advantage of.
Firstly, there's "PCBGROUPS", which looks to group PCB (protocol control block) data for a connection local to a CPU. Instead of there being one global PCB table for the system (well, VIMAGE for FreeBSD - each virtual image instance has its own PCB table) with one lock protecting it, there's now multiple PCB tables, one per "thing". Here, the thing is whatever the kernel developer thinks is worth grouping them by.
http://www.ece.rice.edu/~willmann/pubs/paranet_usenix.pdf
Now, until the RSS work went in, this code was in FreeBSD but sat unused. A kernel developer could provide the hooks needed to map TCP (and maybe UDP later) flows to a "thing" and have that map to a PCB group table - but it required some glue to stamp incoming connections and outgoing packets with an identifier (which we call a "flowid" in FreeBSD) that can map to said "thing". Then whenever a PCB lookup was needed, it would first try the lookup in the table selected by the mapping between the "flowid" and "thing" - if that succeeded, it wouldn't have to use the global PCB table to do the lookup.
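As a toy model of that two-stage lookup (the structures, table sizes and the "one group per flowid modulo" mapping here are made up purely for illustration; the real code deals in inpcb hash tables and their locks, not flat arrays):

#include <stdint.h>
#include <stddef.h>

#define NGROUPS  4      /* one PCB group per "thing" (toy value) */
#define GROUPSZ  8
#define GLOBALSZ 64

struct toy_pcb {
        uint16_t lport;         /* the lookup key, simplified to a local port */
        int      valid;
};

static struct toy_pcb pcb_group[NGROUPS][GROUPSZ];  /* per-"thing" tables */
static struct toy_pcb pcb_global[GLOBALSZ];         /* the single global table */

static struct toy_pcb *
table_lookup(struct toy_pcb *tbl, size_t n, uint16_t lport)
{
        for (size_t i = 0; i < n; i++)
                if (tbl[i].valid && tbl[i].lport == lport)
                        return (&tbl[i]);
        return (NULL);
}

/* try the group table picked by the flowid first, fall back to the global one */
static struct toy_pcb *
pcb_lookup(uint32_t flowid, uint16_t lport)
{
        struct toy_pcb *p;

        p = table_lookup(pcb_group[flowid % NGROUPS], GROUPSZ, lport);
        if (p != NULL)
                return (p);     /* hit: no need to touch the global table and its lock */
        return (table_lookup(pcb_global, GLOBALSZ, lport));
}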
This is only good for established connections - creating and destroying a connection still requires manipulating that global PCB table and the single PCB table lock. I'm going to ignore fixing that for now, as that is a bigger issue.
Then Robert Watson added the RSS work done under contract to Juniper Networks, Inc. RSS provides one kind of mapping between the flowid from the NIC and which CPU to run work on. So that part worked great - but there wasn't any way for the application user to take advantage of it. Additionally, there's no driver awareness of it yet - I'll discuss this shortly.
So I grabbed a bunch of this work whilst at Netflix and tried to make sense of it. It turns out that if you can keep the work local to a CPU, a lot of the lock contention in the networking stack melts away. Here's what's going on:
- The receive thread(s) in the NIC driver processing packets are typically doing direct dispatch to the network stack - so they're running the receive side of the TCP stack;
- .. and the receive side of the network stack includes ACKs, which triggers the transmit side of the network stack;
- There's typically some deferred thread(s) in the NIC driver transmitting packets to each NIC queue;
- There are also application threads trying to queue data to the TCP socket; they too dig into the socket and TCP stack state, which involves grabbing locks;
- And there are also timers firing to update state, which again involves grabbing locks.
Without RSS and without lining everything up on CPU cores, all of the above can run on different cores. Whenever any of them try to run at the same time, lock contention can occur and that particular task stalls. If the lock contention blocks the transmit or receive NIC threads, then it's not just that one connection that's affected - processing for the whole NIC suffers.
There's still lock contention in the network stack - especially if you're doing a lot of new, short connections. The good folk at Verisign are working on that particular corner of the problem so I'm happy to defer to them.
So, I ended up doing a bunch of little pieces to get this lined up right:
- The per-CPU timer callwheels can now be optionally pinned to their CPU cores, so timer events running on CPU X actually do run on CPU X (yes, that was amusing to find..);
- There's support in the TCP stack for per-CPU timers, but it's not enabled by default;
- ... and it also didn't query RSS, netisr or anything to figure out how to map a flowid to a given CPU to run a timer on;
- Then, to make matters worse, incoming TCP sessions didn't have a flowid assigned to the PCB until after the first data packet was read - which meant that the initial timer work would all assume CPU 0, and any queries on that particular PCB would return flowid=0, so it wouldn't be found in the right PCBGROUP.
So those are fixed in FreeBSD-HEAD. The per-CPU TCP timer and pinned-CPU timers aren't enabled by default - I'll only flip that on when I'm confident that the RSS stuff is working.
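To make "RSS-aware timers" concrete, the idea is roughly the kernel-side fragment below (not a standalone program, and not the actual TCP timer code). callout_reset_on() is the stock way to run a callout on a particular CPU; rss_hash2cpuid() is my recollection of the RSS KPI name, so treat it as an assumption:

static void
start_flow_timer(struct callout *c, int ticks, void (*fn)(void *),
    void *arg, uint32_t flowid, uint32_t flowtype)
{
	u_int cpu;

	/* Map the flow's RSS hash (and hash type) to a CPU id ... */
	cpu = rss_hash2cpuid(flowid, flowtype);

	/* ... and make the timer fire on that CPU rather than wherever
	 * it happened to be armed. */
	callout_reset_on(c, ticks, fn, arg, cpu);
}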
So that lets all the RSS stuff work correctly. But there wasn't a nice way to query the per-connection flowid or RSS information, so I extended netstat with an 'R' flag - it returns the flowid and the flowid type. I'll add RSS information once I have a nice way to extract it in bulk. It's still a good diagnostic tool for making sure the IPv4/IPv6 hashing is working correctly.
Then I had to teach a driver about RSS so I could actually test it all out. I have some igb(4) hardware at home, so I did the minimal work required to teach it about the RSS key and assigning things to the correct CPUs. It's still incomplete but it's good enough to get off the ground. I'll go into more details about the driver requirements in a follow-up blogpost.
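For the curious, the driver side mostly boils down to two things at initialisation time: program the kernel's RSS key into the NIC instead of a per-driver random one, and fill the NIC's indirection table so the hardware queues line up with the buckets (and thus CPUs) the kernel expects. A rough sketch - hw_write_rss_key_word() and hw_write_reta_entry() are hypothetical stand-ins for whatever the particular NIC needs, and the rss_*() calls are the kernel RSS KPI names as I recall them:

static void
nic_setup_rss_sketch(struct nic_softc *sc)
{
	uint8_t key[40];	/* standard 40-byte Toeplitz key */
	uint32_t word;
	int i, queue;

	/* Use the kernel's key so the hardware hash and any software
	 * hash of the same packet agree. */
	rss_getkey(key);
	for (i = 0; i < 40 / 4; i++) {
		memcpy(&word, &key[i * 4], sizeof(word));
		hw_write_rss_key_word(sc, i, word);
	}

	/* Spread the indirection table entries across our RX queues the
	 * same way the kernel spreads buckets across CPUs. */
	for (i = 0; i < HW_RETA_ENTRIES; i++) {
		queue = rss_get_indirection_to_bucket(i) % sc->num_rx_queues;
		hw_write_reta_entry(sc, i, queue);
	}
}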
Finally, how are application developers supposed to use it? I'll cover that particular bit in another follow-up blog post as there's quite a lot to cover there.
Sunday, April 27, 2014
Why oh why must people think backdoors == NSA? Silly people.
I know that there's some collective paranoia here in the United States and in the security community in general about government and corporate spying leading to backdoors in equipment. And yes, it's likely very true in a lot of situations. The last two years of public information leaks testify to said collective paranoia.
There have been a few writeups lately about backdoors in wireless equipment. Here's something from The Hacker News - http://thehackernews.com/2014/04/router-manufacturers-secretly-added-tcp.html . A helping hand? To the NSA? Perhaps. Does the NSA know about this stuff? Of course. I'd also not be surprised if they were actively using it.
But there aren't any other, less paranoid explanations out there in the articles. So let me put one out there, based on an 18-month stint at a hardware company that makes wireless chips. These manufacturers have a whole bunch of development testing, regression testing, certification testing and factory testing going on. Instead of building separate images to test versus ship (which may actually be against the regulatory certification rules!), they just leave a bunch of these remote-execution backdoors in the product.
I think it's highly likely it's just very sloppy security, sloppy code design, sloppy quality control and sloppy development.
Just to be clear - the default Atheros software (as far as I'm aware) didn't have these hooks in it. All the AP firmware interfaces I played with at Atheros required authentication, or manually starting things, before you could use them. No one these days ships the default Atheros development firmware on their product. This looks like extra code that vendors have layered on top of things.
These companies should do a better job at their product development. But given the cutthroat pricing, cheap development and ridiculous product lifecycles, are you really surprised that corners get cut?
(That's why I run FreeBSD-HEAD on my kit here at home.)
Monday, March 31, 2014
Meraki Sparky boards, and constant resetting
There's a Mesh internet project at Sudo Room and they've been doing some great work getting a platform up and running. However, like a lot of volunteer projects, they're working with whatever time and equipment they've been donated.
A few months ago they were donated a few hundred Meraki Sparky boards. They're an Atheros AR2317 SoC based device with an integrated 2GHz 802.11bg radio, 10/100 ethernet and.. well, a hardware watchdog that resets the board after five minutes.
Now, annoyingly, this reset occurs inside RedBoot too - which prevents the units from being (fully) flashed before they reboot. And once a unit was flashed with OpenWRT, it still rebooted every five minutes.
So, I started down the path of trying to debug this.
What did I know?
Firstly, the AR2317 watchdog doesn't have a way of resetting the board itself - all it can do is post an interrupt. The AR7161 and later SoCs do have a way to do a full hardware reset when the watchdog expires.
Secondly, RedBoot has a few tricksy ways to manipulate the hardware:
- 'x' can examine memory and registers. Since we need to reach them via KSEG1 (unmapped, uncached), the reset registers at 0x11000xxx become 0xb1000xxx. And since it's hardware access, we should do the accesses as 32-bit words (DWORDs), not bytes.
- 'mfill' can be used to write to registers.
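The address arithmetic in that first point is just the MIPS KSEG1 alias - OR the physical address with 0xa0000000 to get an uncached, unmapped view of the register:

/* MIPS KSEG1: uncached, unmapped alias of low physical memory. */
#define MIPS_KSEG1_BASE		0xa0000000UL
#define PHYS_TO_KSEG1(pa)	((pa) | MIPS_KSEG1_BASE)
/* e.g. PHYS_TO_KSEG1(0x11000098) == 0xb1000098, poked at below. */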
Thirdly, there's an Atheros-specific command - bdshow - which is surprisingly informative:
RedBoot> bdshow
name: Meraki Outdoor 1.0
magic: 35333131
cksum: 2a1b
rev: 10
major: 1
minor: 0
pciid: 0013
wlan0: yes 00:18:0a:50:7b:ae
wlan1: no 00:00:00:00:00:00
enet0: yes 00:18:0a:50:7b:ae
enet1: no 00:00:00:00:00:00
uart0: yes
sysled: no, gpio 0
factory: no, gpio 0
serclk: internal
cpufreq: calculated 184000000 Hz
sysfreq: calculated 92000000 Hz
memcap: disabled
watchdg: disabled (WARNING: for debugging only!)
serialNo: Q2AJYS5XMYZ8
Watchdog Gpio pin: 6
secret number: e2f019a200ee517e30ded15cdbd27ba72f9e30c8
.. hm. Watchdog GPIO pin 6? What's that?
Next, I tried manually manipulating the watchdog registers but nothing actually happened.
Then I wondered - what about manipulating the GPIO registers? Maybe there's a hardware reset circuit hooked up to GPIO 6 that needs to be toggled to keep the board from resetting.
Board: ap61
RAM: 0x80000000-0x82000000, [0x8003ddd0-0x80fe1000] available
FLASH: 0xa8000000 - 0xa87e0000, 128 blocks of 0x00010000 bytes each.
== Executing boot script in 2.000 seconds - enter ^C to abort
^C
RedBoot> # set direction of gpio6 to out
RedBoot> mfill -b 0xb1000098 -l 4 -p 0x00000043
RedBoot> x -b 0xb1000098
B1000098: 00 00 00 43 00 00 00 00 00 00 00 00 00 00 00 03 |...C............|
B10000A8: FF EF F7 B9 7D DF 5F FF 00 00 00 00 00 00 00 00 |....}._.........|
RedBoot> # pat gpio6 - set it high, then low.
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000042
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000002
.. then I manually did this every minute or so.
RedBoot>
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000042
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000002
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000042
RedBoot> mfill -b 0xb1000090 -l 4 -p 0x00000002
.. so, the solution here seems to be to "set gpio6 to be output", then "pat it every 60 seconds."
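For anyone wanting to automate that once OpenWRT is actually running, here's a heavily hedged userland sketch that does the same pokes via /dev/mem - the register offsets and bit values are lifted straight from the RedBoot session above and have only been "verified" on this one board, and you'd want to check that nothing else in OpenWRT already claims that GPIO:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_PHYS_BASE	0x11000000UL
#define GPIO_DO		0x90	/* data output register (0xb1000090 via KSEG1) */
#define GPIO_CR		0x98	/* direction/control register (0xb1000098) */

int
main(void)
{
	int fd = open("/dev/mem", O_RDWR | O_SYNC);
	if (fd < 0) {
		perror("open /dev/mem");
		return (1);
	}
	volatile uint32_t *gpio = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, GPIO_PHYS_BASE);
	if (gpio == MAP_FAILED) {
		perror("mmap");
		return (1);
	}

	gpio[GPIO_CR / 4] = 0x43;		/* gpio6 as output, as in the RedBoot session */
	for (;;) {
		gpio[GPIO_DO / 4] = 0x42;	/* drive gpio6 high ... */
		gpio[GPIO_DO / 4] = 0x02;	/* ... then low again */
		sleep(60);			/* pat it once a minute */
	}
}

Cross-compile that for big-endian MIPS, start it early in boot, and the external reset circuit should stay quiet.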
I hope this helps people bring OpenWRT up on this board finally. There seem to be a few of them out there!