Saturday, October 31, 2015

Fixing up the QCA9558 performance on FreeBSD, or "why attention to detail matters when you're a kernel developer."

When I started with this Atheros MIPS 11n stuff a few years ago, my first test board was a Routerstation Pro with a pair of AR9160 NICs. I could get ~ 150mbit/sec bridging performance out of it, and I thought I was doing pretty good.

Fast forward to now, and I've been bringing FreeBSD up on each of the subsequent boards. But the performance never improved. Now, I never bothered to look into it because I was always too busy with my day job, but finally someone trolled me correctly on the FreeBSD embedded IRC channel and I took a look.

It turns out that .. there was a lot of room for improvement.

First up - I'm glad George Neville-Neil brought up PMC (performance counters) on the MIPS24k platform. It made it easier for me to bring it up on the MIPS74k platform and it was absolutely instrumental in figuring out performance issues here. No, there's no real ability to get DTrace up on these boards - some have 32MB of RAM. Heck, the packet filter (bpf) consumes most of a megabyte of RAM when you first start it up.

My initial tests are on an AP135 reference design board from Qualcomm Atheros. It's a QCA9558 SoC with an AR8327 switch on board. Both on-chip ethernet ports (arge0, arge1) are available. I set it up as a straight bridge between arge0 and arge1 and then I used iperf between two laptops to measure performance.

The first test - 130mbit bridging performance. That's terrible for this platform.

So I fired up hwpmc, and I found the first problem - packets were being copied in the receive and transmit path. Since I'm more familiar with the transmit path, I decided to look into that.

The AR7161 MAC requires transmit and receive buffers to be DWORD (32 bit, or 4 byte) aligned. In addition, all transmit buffers save the last one in a frame are required to be a multiple of a DWORD in length. Plenty of frames don't meet this requirement and end up being copied.

The AR7240 and later MACs relaxed this - transmit/receive buffers can now be byte-aligned - so that particular copy workaround can be removed. The driver still has to copy when a multi-descriptor transmit has intermediate buffers that aren't a DWORD multiple in length (eg if you just prepend a fresh ethernet header), but that doesn't happen in the bridging path in the normal case.

Fixing that got bridging performance from 130mbit to 180mbit. That's not a huge difference, but it's something.

Next up is the receive path.  This was more .. complicated. The receive code copies the whole buffer back two bytes in order to ensure that the IP payload presented to the FreeBSD network stack is aligned. This is a problem in FreeBSD's network stack - it assumes the hardware handles unaligned accesses fine. So if your receive engine is DWORD aligned, the 14 byte ethernet header will result in the start of the IP payload being non-DWORD aligned, and .. the stack blows up. Now, I have vague plans to start fixing that as a general rule, but I did the next worst hack - I grabbed a buffer, set its RX start point to two bytes in, so the ethernet header is unaligned but the IP header is. Now, the ethernet stack in FreeBSD handles unaligned stuff correctly, so that works.
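To give a rough idea of the shape of that hack, here's a minimal sketch. The function name and the softc field are made up for illustration, and the real arge receive path has its own descriptor and busdma handling; the interesting bit is just the 2 byte (ETHER_ALIGN) offset.

/*
 * Sketch: allocate an RX mbuf whose data pointer starts 2 bytes
 * (ETHER_ALIGN) into the cluster.  The ethernet header ends up
 * unaligned - which the ethernet code copes with - but the IP header
 * that follows the 14 byte ethernet header lands on a DWORD boundary.
 */
static int
arge_rx_newbuf_sketch(struct arge_softc *sc, int idx)
{
    struct mbuf *m;

    m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
    if (m == NULL)
        return (ENOBUFS);
    m->m_len = m->m_pkthdr.len = MCLBYTES;

    /* Eat two bytes at the front of the cluster. */
    m_adj(m, ETHER_ALIGN);

    /* Hand m to the RX descriptor at idx (busdma load elided). */
    sc->sc_rx_mbuf[idx] = m;
    return (0);
}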

Except it wasn't faster. It turns out that the MIPS busdma code was doing very inefficient things with mbuf handling if everything wasn't completely aligned. Ian Lepore (who does ARM work) recently fixed this for armv6, so he ported it to MIPS and I added it.

The result? bridging performance leaped from 180mbit to 420mbit. Quite nice, but not where Linux was.

I left it for a few days, and someone on the freebsd-mips mailing list pointed out big stability issues with his tests. I started looking at the Linux OpenWRT driver and the MIPS24K/MIPS74K memory coherency operations. I found a couple of interesting things:

  • The busdma sync code never did a "SYNC" operation if things weren't being copied or invalidated; and
  • I was using cache-writethrough instead of cache-writeback for the cached memory attribute for MIPS74K.
The former is a problem with driver memory / driver access sync - you need to ensure that the changes you've made are actually in memory before you tell the hardware to look at it. So I fixed that in the busdma routines.
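In driver terms the rule looks something like the sketch below. The descriptor tag/map field names are placeholders, and the register names are from memory of the arge driver rather than quoted from it.

/*
 * Sketch: push the descriptor writes out to memory (PREWRITE does the
 * cache write-back / SYNC on MIPS) before telling the MAC to go look
 * at them.
 */
/* ... fill in the TX descriptors behind sc->sc_txdesc_map ... */
bus_dmamap_sync(sc->sc_txdesc_tag, sc->sc_txdesc_map,
    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);

/* Only now is it safe to kick the transmit DMA engine. */
ARGE_WRITE(sc, AR71XX_DMA_TX_CONTROL, DMA_TX_CONTROL_EN);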

The latter makes everything slow. It means each write is going through the cache and into memory - the cache hardware doesn't get to batch writes to memory. I changed that, and found more instability in some parts of the arge ethernet driver - the MDIO bus accesses started misbehaving. After looking at the Linux code and the sync operations, I reimplemented the MDIO code correctly and I added explicit read/write barriers as needed. The MDIO code does lots of same-register accesses in loops to look for things, and the hardware may subtly reorder things. I committed this, flipped on the correct cache attribute to support cache-writeback, and things got .. faster. Much faster in fact.
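The MDIO side of that boils down to making the same-register polling loops explicit about ordering. Here's a sketch of the idea - the register defines, the accessor macro and the timeout value are illustrative approximations, not copies of the committed code.

/*
 * Sketch: poll the MDIO busy bit with explicit ordering.  With
 * write-back caching the CPU and write buffers are free to reorder or
 * coalesce repeated same-register accesses unless you fence them.
 */
static int
arge_mdio_wait_sketch(struct arge_softc *sc)
{
    int i;

    /* Make sure the MDIO command written earlier has hit the device. */
    wmb();
    for (i = 0; i < 1000; i++) {
        if ((ARGE_MDIO_READ(sc, AR71XX_MAC_MII_INDICATOR) &
            MAC_MII_INDICATOR_BUSY) == 0)
            return (0);
        /* Don't let the next poll be satisfied by a stale load. */
        rmb();
        DELAY(5);
    }
    return (ETIMEDOUT);
}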

So, that worked - and I hit the hardware instability issue. But, I hit it at a higher traffic rate. The final thing was fixed by looking at the OpenWRT driver (ag71xx_main.c) and going "Aha!" - the transmit side was buggy.

Specifically - the transmit side is a linked list of descriptors, but it's formed into a ring. The TX DMA engine stops when it hits a descriptor that isn't marked "ready" (ie, has ARGE_DESC_EMPTY set.) Now, we didn't see this before when we were copying transmit buffers for a packet into a single correct transmit buffer, but now that I am doing multi-descriptor transmit more frequently, this bug was hit. The bug is that because the TX descriptors are in a big ring, it's possible the hardware will transmit everything and hit the end of the ring before we've completely setup the descriptors for the next packet. If this happens, and it hits ARGE_DESC_EMPTY, then it stops. But if we have say a 3 descriptor packet, and we set the descriptors up in order, the hardware may hit that first descriptor out of three before we've finished setting things up, and start transmitting. It hits a descriptor we've not setup yet, thinks we're done, and transmits what it's seen. Then when I finish the setup and hit "transmit" on the hardware, it stalls, and everything sticks.

The fix was to initialise the first descriptor as EMPTY, then when we're done setting them up, flip that first descriptor to non-empty.
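Sketched out, the ordering looks like this. The descriptor fields, flag names and ring bookkeeping here are approximations of the arge layout, simplified to show the "arm the head last" trick.

/*
 * Sketch: build a multi-descriptor packet without letting the hardware
 * chase into a half-constructed chain.  The head descriptor stays
 * EMPTY until every other descriptor in the packet is filled in.
 */
static void
arge_encap_sketch(struct arge_softc *sc, bus_dma_segment_t *segs, int nsegs)
{
    struct arge_desc *desc;
    int first, i;

    first = sc->sc_tx_prod;
    for (i = 0; i < nsegs; i++) {
        desc = &sc->sc_txdesc[(first + i) % ARGE_TX_RING_COUNT];
        desc->packet_addr = segs[i].ds_addr;
        desc->packet_ctrl = segs[i].ds_len;
        if (i != nsegs - 1)
            desc->packet_ctrl |= ARGE_DESC_MORE;
        /* Everything except the head is marked ready immediately. */
        if (i == 0)
            desc->packet_ctrl |= ARGE_DESC_EMPTY;
    }

    /* Sync the descriptors out to memory, then arm the head last. */
    bus_dmamap_sync(sc->sc_txdesc_tag, sc->sc_txdesc_map,
        BUS_DMASYNC_PREWRITE);
    sc->sc_txdesc[first].packet_ctrl &= ~ARGE_DESC_EMPTY;
    bus_dmamap_sync(sc->sc_txdesc_tag, sc->sc_txdesc_map,
        BUS_DMASYNC_PREWRITE);
}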

And voila! The bug is fixed and things perform now at a much faster rate - 720mbit. Yup, it bridges at 720mbit and it routes at around 320mbit. I'd like to get routing up from 320mbit to somewhere near bridge performance, but that'll have to wait a while.

130mbit -> 720mbit. Yup, I'm happy with that.

Tuesday, October 13, 2015

Fixing up the RTL8188SU/RTL8192SU 802.11bgn driver (rsu)

I recently figured out most of the missing pieces for 11n support and stability with the rsu driver in FreeBSD. This handles support for the RTL8188SU and RTL8192SU chipsets. I'll cover what I found and fixed in this post.

First off - the driver was in reasonably poor shape. It sometimes panicked when the NIC was removed, it didn't support 802.11n at all and it wouldn't associate reliably. I was pestered enough by one of the original users behind getting the driver ported (Idwer! Hi!) and decided I'd give fixing it up a go.

Importantly - it's a mostly real fullmac device. "fullmac" here means that the firmware on the device does almost everything interesting - it handles association, it can do encryption/decryption for you if you want, it'll handle retransmission and transmit rate control. There are some important things it doesn't do - I'll cover those shortly.

Here's a fun bit of trivia - this firmware outputs text debugging via a magic firmware notification, and it's on by default. This made all of the debugging much, much easier as I didn't have to guess so much about what was going on in the firmware. All firmware developers - please do this. Please!

I first looked at the association issue. The device does full scan offload - you send it a firmware command to start scanning and it'll return scan results as they come in. Plenty of firmware devices do this. Then you send it an association message, then a join_bss message. For those looking at the source - rsu_site_survey() starts the scan, and rsu_join_bss() attempts an association. Now, I noticed that it was sending a join message before the site survey finished. I also noticed that I never really received any management frames, and when I used a sniffer to see what was going on, I saw double-associations sometimes occurring.

I then checked OpenBSD. Their driver just stubbed out the management frame transmit routine. This wasn't done in FreeBSD, so I added it. It turns out the firmware here does all management frame transmit and receive, so I simply don't have to do any of it myself. This tidied things up a bit but it didn't fix association.

Next up - the whole way scan results were pushed into net80211 was wrong. Sometimes scan results ended up on the wrong channel. The driver was doing dirty things to the current channel state directly and then faking a beacon to the net80211 stack. I replaced that with some code I wrote for the 7260 wifi driver - the stack now accepts a channel (and other things!) as part of the receive frame, so you can do proper off-channel frame reception. This tidied up the scan results so they were now consistent.

Then I thought about an evil hack - how about delaying the call to rsu_join_bss() until after the site survey finished? That worked - associations were now very reliable.


Now the device associated reliably and worked okay. There were some missing bits for the firmware setup path for doing things like power saving, saying how many transmit/receive streams are available, etc, but those were easy to add. Next up - 802.11n.

On the receive side, 802.11n requires you to do A-MPDU reordering as the transmitter is free to retransmit failed frames out of order. But the net80211 stack only handled the case where it saw the management frames and it itself drove the A-MPDU negotiation. Here, the firmware drives the negotiation and just tells us what's just happened. So, I had to extend net80211 to be told what the A-MPDU parameters are. It turned out that yes, the firmware sends a notification about A-MPDU going up, but it doesn't tell you how big the block-ack window is. Sigh. So, I needed to add that.

But the access point still wasn't negotiating it. Here was the next fun bit - rsu_join_bss() lets the stack assemble optional IEs to send to the access point and, the more interesting part, the firmware looks at said IEs to figure out what its own configuration should be. I added the HTINFO IE and voila! It started negotiating 802.11n.

(Oh, and I had to add M_AMPDU to each RX'ed frame from an 802.11n node before I called net80211, or the receive code would never do A-MPDU reorder processing.)

The final hack - I stubbed out the A-MPDU TX negotiation so we would never attempt to do it. So yes, there's no TX aggregation support, but that's fine for now.

Then Idwer told me it wasn't working for him. After much digging with the Linux driver authors (Thanks Christian and Larry!) we found that the OpenBSD driver tried to program the chip directly for 40MHz mode, and that's wrong - it turned out I had simply missed one of the 802.11n IEs. The firmware looks at that to see what the channel setup should be. Two lines of diff later and I was associating in 40MHz wide modes.

Finally - stability. It turns out that the USB drivers do inconsistent things when it comes to the detach path. They're supposed to stop transmit/receive, then flush buffers which flushes the net80211 node references, and then tear down the net80211 interface. Some, eg if_rsu, were doing it the other way. I fixed if_rsu and if_urtwn - they're now both stable.

Thursday, October 1, 2015

As requested: progress of AR9170

Hi!

The progress of the AR9170 FreeBSD-ification can be found here:

https://github.com/erikarn/otus

Yes, I did actually keep the history of the driver bring-up here.

Wednesday, September 30, 2015

Porting a wifi driver from openbsd - AR9170

I told myself a long, long time ago that I really don't want to be working on USB wireless. It's not that I dislike USB or wireless; it's just that the hoops required to get it all working in a stable way are a lot to keep in your head. But, I decided recently that it's about time I learnt how it worked and I was very sad that we still didn't have any working USB wifi devices that also operated with 802.11n.

So, I picked a NIC and dove in.

I picked if_rsu(4) - it's the RTL8188SU / RTL8192SU series hardware from Realtek. It turned out I chose reasonably well.

First off - it's a "fullmac" device - meaning that outside of a handful of things, the device firmware offloads a lot of the 802.11 complications. The driver does hardware initialisation and the wireless stack speaks WPA/WPA2/etc for negotiating encryption, but the hardware handles scanning, authentication, 802.11n aggregation negotiation and most management frame work.

Secondly - it's ported from OpenBSD. The OpenBSD folk do a good job of getting drivers up and running, but there tend to be some sharp edges and the 802.11n bits just don't work.

So, besides currently doing encryption in software, the rsu(4) driver behaves rather well. I'll write a separate article about that. This article is about the AR9170, or otus(4) driver in FreeBSD/OpenBSD parlance.

Now, the AR9170 is a ZyDAS device with an Atheros 802.11n PHY and radio. It's quite a hybrid beast. It's also buggy - there are issues with QoS frames and 802.11n aggregation that make it impossible to behave well. So, for now I'm treating it like a 11abg device and I'll worry about 802.11n when someone gives me patches to make it work.

The OpenBSD driver is based on the initial otus driver that Atheros provided to the Linux developers circa 2009. The firmware blob is closed and very old - the ar9170fw project is still out there on the internet (and I have a mirror at https://github.com/erikarn/ar9170-fw) but I can't get it to build on a recent FreeBSD install so a firmware update will take time. But, it does seem to work.

There are a few pieces to think about when porting a USB driver. The biggest piece is that it's not memory mapped IO or IO port based - everything is a message. There are USB device control commands you can send which will sleep until they're done, but the majority of stuff is done using bulk transmit and receive endpoints and that's all conveniently asynchronous. But it complicates things in the driver world.

Memory mapped and IO port drivers treat device IO as this magical "I do it, then the next instruction executes when it's done" mostly serialised paradigm. It's a lie, of course - the Intel x86 CPUs will pretend things are occurring in a specific order, but a lot of platforms require you to mark memory as uncached or use memory / cache flush operations to ensure things go out to the device in any particularly controlled manner. But USB doesn't - outside of USB control transfers, USB devices tend to look like remote network devices and this includes register accesses. Now, the RTL8188SU driver (rsu(4)) implements the firmware upload and register accesses using control transfers, so it's all pretty easy to get the driver initialisation and attaching working before you care about the asynchronous parts. But the AR9170 driver implements register accesses as firmware commands - and so I have to get a lot more of the stack up and working first.

So, here's what I did.

First up - I commented out almost all of the device driver, and focused on getting the probe, attach and detach methods working. That wasn't too hard. But yes, almost all the code was commented out.

Next up was firmware loading. This was done using control transfers, so I didn't have to worry about implementing the bulk transmit and receive endpoint handling. I had to convert the firmware load path to the FreeBSD firmware API rather than the OpenBSD API, but that was mostly trivial.

Then I realised I wasn't doing any driver locking - so yes, I ensured I did the bare minimum of driver locking required to stop the kernel panicing. OpenBSD doesn't use locks, they use old style BSD spl() levels.

Next up was command transmit and receive. Now, I needed to setup the USB endpoints - which FreeBSD makes really easy to do using a structure to define what endpoints are what. It was pretty clean. The complicated bit is the bulk callback - it handles transfer statuses and transfer initiation. This is the bit that took me a little time to wrap my head around.

The USB stuff handles things in-sequence. Everything going to an endpoint here gets handled in the sequence you queue it. It also will process the bulk callback in a single worker thread taskqueue, rather than the driver author having to worry about creating their own worker threads. So, this is what you end up doing:

  • The bulk callback has three states: USB_ST_TRANSFERRED, USB_ST_SETUP, and everything else (error.)
  • USB_ST_TRANSFERRED says "I've finished a transfer".
  • USB_ST_SETUP says "I've been asked to initiate a transfer."
  • Any driver thread starts a transmit by calling usbd_transfer_start() on the usb_xfer struct, which will kick off a call into the bulk callback with USB_ST_SETUP.
  • So, the driver has to maintain its own queues of "pending", "active" and "waiting" transactions. "pending" is the queue to put outbound transmit messages on. "active" is the queue you put messages that you've submitted when USB_ST_SETUP is called. When USB_ST_TRANSFERRED or an error is called, you pop off the top entry from "active" and you finish with it, then you fall through to USB_ST_SETUP to start a new transfer.
It's a little complicated because you have to maintain your own submission queues in/out of the USB stack, but in practice it's just a linked list.
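For the curious, here's roughly what that callback shape looks like against FreeBSD's USB API. The otus_data structure, the pending/active queue names and the otus_txeof() completion helper are stand-ins for whatever the driver actually calls them; this is a sketch of the pattern, not the driver itself.

/*
 * Sketch of a bulk TX callback: complete the transfer at the head of
 * "active" (if any), then start the next "pending" one.  The USB stack
 * calls this from its own taskqueue, so no extra worker threads needed.
 */
static void
otus_bulk_tx_callback_sketch(struct usb_xfer *xfer, usb_error_t error)
{
    struct otus_softc *sc = usbd_xfer_softc(xfer);
    struct otus_data *data;

    switch (USB_GET_STATE(xfer)) {
    case USB_ST_TRANSFERRED:
        /* The transfer at the head of "active" is done. */
        data = STAILQ_FIRST(&sc->sc_tx_active);
        if (data != NULL) {
            STAILQ_REMOVE_HEAD(&sc->sc_tx_active, next);
            otus_txeof(sc, data);   /* free mbuf/node, recycle buffer */
        }
        /* FALLTHROUGH - see if there's more to send. */
    case USB_ST_SETUP:
        data = STAILQ_FIRST(&sc->sc_tx_pending);
        if (data == NULL)
            break;
        STAILQ_REMOVE_HEAD(&sc->sc_tx_pending, next);
        STAILQ_INSERT_TAIL(&sc->sc_tx_active, data, next);
        usbd_xfer_set_frame_data(xfer, 0, data->buf, data->buflen);
        usbd_transfer_submit(xfer);
        break;
    default:
        /* Error: clean up the active entry and don't resubmit. */
        data = STAILQ_FIRST(&sc->sc_tx_active);
        if (data != NULL) {
            STAILQ_REMOVE_HEAD(&sc->sc_tx_active, next);
            otus_txeof(sc, data);
        }
        break;
    }
}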

So, I stole the framework from rsu(4) for buffer management, transmit submission and completion. It submitted things fine. I also registered buffers for receive, and .. nothing happened. I would send a PING message to the firmware to see if it was awake, and I'd get nothing from the receive pipe.

Then I remembered an interesting bug from when I tried this in 2012 - the AR9170 firmware required the IRQ endpoint to be setup, even though no interrupt messages were ever posted. So, I set the endpoint up, started reception on it.. and now I started to see receive messages. My PING messages were being PONG'ed.

But here's the first complication - although everything is asynchronous here, a lot of places want to send a command and wait for a response. For the PING command it's waiting for a matching PONG response. For setting frequency, starting calibration, etc, you get back interesting status from the firmware. But for things like register read commands, you have to wait until you get the register value back before you can continue. We need to be able to put the caller to sleep until the response comes back, or some timeout occurs.

So, cmd_otus() submits a transfer buffer and then will msleep() on it for up to a second, waiting for a response. When a command is transmitted, a couple of things can occur:
  • Once the transfer succeeds, if the command needs no response then we just send a wakeup to notify the sender that we've sent it, and we free the buffer.
  • If the transfer succeeds but the command needs a response, then we put it on the "waiting" queue.
Then in the receive path we pull out firmware notifications and, if they're responses, we copy the response into the caller's buffer, call wakeup() to wake up the caller, and free the buffer.
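As a condensed sketch (locking, error handling and the command buffer fields are simplified or invented here), the two halves pair up like this:

/*
 * Submit side (cmd_otus()): queue the command, kick the bulk command
 * endpoint, then sleep on the command buffer if a response is expected.
 */
    STAILQ_INSERT_TAIL(&sc->sc_cmd_pending, cmd, next);
    usbd_transfer_start(sc->sc_xfer[OTUS_BULK_CMD]);
    if (cmd->odata != NULL)
        error = msleep(cmd, &sc->sc_mtx, 0, "otuscmd", hz);

/*
 * Receive side: when a matching response notification arrives, copy the
 * payload into the caller's buffer and wake the sleeper.
 */
    memcpy(cmd->odata, payload, len);
    wakeup(cmd);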

OpenBSD cheats - it only has one single outstanding command buffer for all threads to use.

The tricky, unimplemented bit here is error handling - if I yank out a NIC during active commands then the driver will sleep for a second, wakeup with an error and pass an error back. But, the rest of the driver doesn't know anything was sleeping, so state gets freed from underneath it. I need to go and add what OpenBSD does - refcount when the driver is entered from say, the transmit and ioctl paths, and then upon detach just wait for pending things to finish before freeing.

Ok, so that got command transmit/receive and sleep/wake notification working okay. Next up is packet reception and basic initialization. That was mostly the same - the same hardware bits are needed, the same 802.11 packet format is needed for the stack. The main differences here were in the OpenBSD versus FreeBSD net80211 interface layout - FreeBSD has vaps (virtual access points, etc) but OpenBSD does not. It's still pre-vap work, so there's only one interface. This required a little bit of splitting to put the vap bits in vap routines, and driver bits in the driver. The notable exceptions are vap_create, vap_destroy and newstate.

Next up was realising OpenBSD is also still driving 802.11 state from the driver, not from the net80211 stack. FreeBSD drives the state changes and tells the driver what to do. That required me undoing some manual state transitions (eg otus_init() setting the state to SCAN or RUN depending upon the interface mode) and just letting net80211 do it.

So, net80211 created a vap, called otus_init(), then brought up the interface, set the initial vap state to SCAN via a call to newstate and started changing channels. This worked fine. I had some locking concerns - check the driver to see what I did. It was pretty straightforward.

And then yes - because the receive path was pretty simple and I got straight 802.11 frames back, yes, I started seeing beacons in a tcpdump session. This was great.

Then I ripped up a bunch of callback code that isn't needed. A few years ago FreeBSD's USB drivers maintained their own taskqueue to defer things like crypto key setting, state changes and such. Now net80211 has a per-device taskqueue that it runs these things on, and a lot of the driver calls are done as deferred tasks. OpenBSD doesn't have this so the drivers create their own deferred task and async callback framework to schedule these. It's duplicated work and I removed all of that from the driver.

Next up is transmit. This is trickier for a few reasons.

First, FreeBSD doesn't use if_start() and network-stack-provided queues anymore. I have to maintain my own queue and free net80211 node references as appropriate. It took a while to craft up a correctly behaving transmit side when I fixed rsu(4), so I just stole it for the AR9170. I'll describe that in a subsequent article about rsu.
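The rough shape of that, using the in-tree mbufq API, looks like the sketch below. The lock macros, softc fields and the otus_start() call are placeholders; the important bits are the driver-owned queue and only releasing the node reference once the frame is truly done with.

/*
 * Sketch: net80211 calls our transmit method with a fully formed frame;
 * we queue it ourselves and kick our own TX path.
 */
static int
otus_transmit_sketch(struct ieee80211com *ic, struct mbuf *m)
{
    struct otus_softc *sc = ic->ic_softc;
    int error;

    OTUS_LOCK(sc);
    error = mbufq_enqueue(&sc->sc_snd, m);
    OTUS_UNLOCK(sc);
    if (error != 0)
        return (error);

    otus_start(sc);     /* drain sc_snd into USB transfers */
    return (0);
}

/* On completion (or failure) of a frame: */
    ni = (struct ieee80211_node *)m->m_pkthdr.rcvif;
    ieee80211_tx_complete(ni, m, status);   /* frees m, drops node ref */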

FreeBSD's net80211 stack handles 802.11 encapsulation itself; we're not handed ethernet frames unless we ask for them. So, I don't call ieee80211_encap(). Yes, I do call for software encryption as required, and that was done.

The biggest sticking point is the rate control. FreeBSD's net80211 stack has a reasonable implementation of transmit rate control modules and it's per vap and per associated node. I don't have to do anything too manual for it. OpenBSD did a bunch of manual work to do the AMRR setup/teardown/updating, so I had to rip it out and call the ratectl init/destroy methods in the vap create/destroy methods.

Next up was what ni->ni_txrate represented. In OpenBSD it seems like an index into the rate control table. In FreeBSD it's the 802.11 rate to use! So, I ripped out a bunch of rate table stuff in the driver and replaced it with a couple of mapping functions to go 802.11 rate to AR9170 hardware rate. That worked like a charm, and transmit works fine.
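A sketch of one of those mapping functions is below. ni_txrate is the 802.11 rate in 500kbit/s units; the hardware rate codes shown are illustrative rather than quoted from the AR9170 documentation.

/* Map an 802.11 rate (in 0.5 Mbit/s units) to a hardware rate code. */
static uint8_t
otus_rate_to_hw_rate_sketch(uint8_t rate)
{
    switch (rate) {
    case 2:   return (0x0);     /* 1M   CCK */
    case 4:   return (0x1);     /* 2M   CCK */
    case 11:  return (0x2);     /* 5.5M CCK */
    case 22:  return (0x3);     /* 11M  CCK */
    case 12:  return (0xb);     /* 6M   OFDM */
    case 108: return (0xc);     /* 54M  OFDM */
    /* ... remaining OFDM rates elided ... */
    default:  return (0xb);     /* fall back to 6M OFDM */
    }
}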

The last annoying thing with transmit is how the firmware tells us about failed frames. We don't get a completion message upon each frame - the later firmware does this, but the original blob doesn't. We only get told upon retries and errors. So, I hacked up something where the transmit path counts outbound packets, the RX command path counts retries/errors, and each time I transmit a packet I update net80211 with the transmit/retry/error counts. This works pretty well.

Finally - teardown. The correct order for teardown is:
  • Shut down the MAC - eg, disable TX/RX DMA, etc
  • Disable the USB transfers, wait until they're done
  • Free the transmit/receive buffers and any net80211 node references they may have; and
  • then call ieee80211_ifdetach() to ensure the vaps and the top level interface are destroyed.
The initial port called ieee80211_ifdetach() too early and the subsequent node references would refer to now-freed nodes and vaps, causing lots of hilarity.

And that's that. I haven't made 802.11n work; I haven't fixed up the radiotap support so received 802.11 packets in tcpdump actually provide the right rate/channel/etc. Those are all details that I'll do when I feel like it. But, the driver is stable, there aren't any lock ordering issues that I've seen so far, and it actually behaves remarkably well.

Tuesday, July 14, 2015

FreeBSD now has NUMA? Why'd it take so long?

I just committed "NUMA" to FreeBSD. Well, no, I didn't. I did almost no actual NUMA-y work in FreeBSD. I just exposed the existing NUMA stuff in FreeBSD out and re-enabled it.

FreeBSD-9 introduced basic NUMA awareness in the physical allocator (sys/vm/vm_phys.c.) It implemented first-touch page allocation, and then fell back to searching through the domains, round-robin style. It wasn't perfect, but for some workloads it was apparently okay. It had some shortcomings though - it wasn't configurable, UMA and other subsystems didn't know about NUMA domains, and the scheduler really didn't know about NUMA domains. So I'm sure there are plenty of workloads which it didn't work for.

That was all ripped out before FreeBSD-10. FreeBSD-10 NUMA just implements round-robin physical page allocation. It still tracks the per-domain physical memory regions, but it doesn't do any kind of NUMA aware allocation. From what I can gather, it was removed until something 'better' would land.

However, nothing (yet) has landed. So I decided I'd take a look into it. I found that for a lot of simple workloads (ie, where you're doing lots of anonymous memory allocation - eg, you're doing math crunching) the FreeBSD-9 model works fine. It's also a perfectly good starting point for experimenting.

So all my NUMA work in -HEAD does is provide an API to exactly the above. It doesn't teach the kernel APIs about domain aware allocations - there's currently no way to ask for memory from a specific domain when calling UMA, or contigmalloc, etc. The scheduler doesn't know about NUMA, so threads/processes will migrate off-socket very quickly unless you explicitly limit things. Devices don't yet do NUMA local work - the ACPI code is in there to enumerate which NUMA domain they're in, but it's not used anywhere just yet.

Then what is it good for?

If you're doing math workloads where you read in data into memory, do a bunch of work, and spit it out - it works fine. If you're running bhyve instances, you can run them using numactl and have them pinned to a local NUMA domain. Those coarse-grained things work fine. You can also change the system default back to round-robin and use first-touch or fixed-domain for specific processes. It's useful for exactly the same subset of tasks as it was in FreeBSD-9, but now it's at least configurable.

So what's next?

Well, my main aim is to get the minimum done so kernel side work is NUMA aware. This includes UMA, contigmalloc, malloc, mbuf allocation and such. It'd be nice to tag VM objects with a domain allocation policy, but that's currently out of scope. I'd also like to plumb in domain configuration into devices and allow devices to allocate memory for different driver threads with different policies.

But the first thing that showed up is that KVA allocation and superpages get in the way of making malloc/contigmalloc NUMA-aware. Allocating memory in FreeBSD first allocates KVA space, then back-fills it with physical pages. As far as malloc/contigmalloc is concerned, KVA is KVA and it finds the first available space quickly. The superpage reservation bits (sys/vm/vm_reserv.[ch]) notice when contiguous allocations fall within the same superpage and satisfy them from that single underlying superpage. None of this has any idea about NUMA domains. So, if you allocate a 4KiB page via malloc() from domain 0 and then try to allocate a 4KiB page from domain 1, it will likely mess it up:

  • First page gets allocated - first KVA, then the underlying 2mb superpage is allocated and a 4k page is returned - from physical memory domain 0;
  • Second page gets allocated - first KVA, and if it's adjacent to or within the same 2mb superpage as the above allocation, it'll "fake" the page allocation via refcounting and it'll really be that same underlying superpage - so it's still physical memory from domain 0, not the domain 1 that was asked for.
I have to teach both vm_reserv and the KVA allocator about NUMA domains, enough so domain specific allocations don't use KVA that's adjacent. It was suggested that I create a second layer of KVA allocators that allocate KVA from the main resource allocator in superpage chunks (here it's 2mb) and then I do domain-specific allocations from them. It'll change how things get fragmented a bit, but it does mean that I won't fall afoul of things.

So, I'll do the above as an experiment and I'll push the VM policy evaluation up a little into malloc/contigmalloc. I'll see how that experiment goes and I'll post diffs for testing/evaluation.

Saturday, July 11, 2015

The importance of mentoring, or "how I got involved in FreeBSD"..

Here's how I was introduced into this UNIX world, or "wait, WHO was your WHAT?"

So, here's 11ish or so year old Adrian. It's the early 90s. I was hiding in my bedroom, trying to make another crystal set out of random parts and scraping away the paint at my windowsill. In walks my Aunty, who introduces her new boyfriend.

"Hi, I'm Julian." he said. That wasn't all that interesting.

"Oh, are you making a crystal set?" .. ok, so that was interesting.

And, that was that. Suddenly, someone role-model-y shows up in my life out of the blue. There I was, an 11 year old who felt very mostly alone most of the time, and someone shows up who I can look up to and think I can relate to. So, I'm a sponge for everything he shows me. Whenever he comes over, he has some new story to tell, some new thing to show me. He would show me better ways of building transistor switch circuits when I was in the "make large arcs with car alternator" phase of my early teens. And, when I saved up and bought a PC, he started to show me programming.

Now, I was already programming. My parents had saved up and bought me an Amstrad CPC464. We had a second-hand Commodore 64 for a short while, but that eventually somehow stopped working and I didn't have a clue how to fix it. But I was programming Locomotive BASIC and dabbling in Z80 assembly when I was 12, and had "upgraded" to Turbo Pascal 6 when I hit high school. (Yes, school taught Turbo Pascal at Grade 10 level, and I decided to learn it a bit earlier. That's .. wow, that dates me.) I hadn't really stumbled into C yet. I had heard about it, but I didn't have anything that could write it.


Julian explained task switching to me one day during a walk along the beach. He explained that computers can just appear to be doing multiple things at once - but the CPU only does one thing at a time, and you can just switch things really quickly to give the appearance that it's multitasking. With that bright spark planted in my head, I went home and started dreaming up ways to make my Z80 based CPC do something like this.

My mother dragged me to McDonalds to apply for a job the moment I was legally able to (14 years, 9 months) and I saw a computer at a second hand shop - it was a $500 IBM PC/AT, with EGA monitor, two floppy disks and a printer. We put down a down-payment and I paid it off myself with my minimum wage money. Once I had that home I quickly erm, "acquired" a copy of Turbo Pascal for home and was off drawing funny little fractals.

So yes - it's Julian's fault I discovered FreeBSD. Yes, this is Julian Elischer. One day he showed me his computer, running something called BSD. He was trying to explain Bourne shell scripting and the installer. I nodded, very confused, and eventually went back to the VGA programming book he lent me. He also showed me fractint running in X on his monochrome 486 DX2-50 laptop. I had no idea what was going on behind the scenes, only that the fractals were much more interesting than the ones I was drawing. So I took the VGA book home and started learning how to use the higher resolutions available. One thing stuck in my mind: so much bit-plane work. Ugh. One other thing stuck in my mind - reading from VGA memory is one of the slowest things you can do. Don't do it. Ever. (Do you hear that console driver authors? Don't do it. It's bad.)

One day he explained pointers to me. I had erm, "acquired" a copy of Turbo C 2.0 from a friend after failing to make much traction with the less friendly versions (Tiny C, for example.) I had coded up a few things, but I didn't really "get" it. So he sat me down with a pen and paper, and drew diagrams to explain what was going on. I remember that lightbulb going off in the back of my mind, as I dimly connected the whole idea of types and sizes together - and that was it. I was off and doing bad things to C code.

I eventually saved up enough for an updated 286 motherboard, then an updated graphics card (full VGA!), then a sound blaster card, and finally a 486-DX33 motherboard. He introduced me to his friend Peter (who had, and I believe still has, a rather extensive electronics collection) and handed me a FreeBSD-1.1 CDROM. I took it home, put it in, and .. it didn't do anything. My 486 had a soundblaster pro + CD-ROM, and .. well, FreeBSD-1.1 didn't speak to that hardware. So, I eventually put Slackware Linux 3.0 on the thing, and became a Linux nerd for a bit.

I did eventually try FreeBSD-1.1 on it - after putting a lot of FreeBSD bits on a lot of floppies - but I couldn't figure out what to do when it booted. This is going to sound silly - but the lack of colorls turned me off. I know, it seems silly now, but that's honestly why I went back to Slackware.

I eventually went back to FreeBSD in the 2.x era once I had an IDE CDROM and I was working part time at an ISP after (high) school finished. Yes, I figured out how to get colorls to work, I got in trouble disagreeing with a Michael (O, not M) at iiNet about Squid on Linux versus FreeBSD, and well.. stuff. Here was this 17yo kid disagreeing with things and acting like he knew everything. I'm sure it was endearing.

Fast-forward a couple years, and I had been hacking on FreeBSD here and there. I got in a little erm, "trouble" before I finished high school, which phk reminded me of - when they granted me a commit bit. I forget when this was, but I wouldn't have been much older than 20.

So - this is why mentoring kids is important. It may seem like a waste of time; it may seem like they don't understand, but we were all there once. We wanted someone to relate to, someone to look up to, and something interesting to do. Julian was that person for me, and I owe both him and my mother (of course) pretty much everything about my existence in this silly little computer industry.

(This is also why you don't skimp on hardware support for popular, if cheaper platforms and "shiny" looking features if you want people to adopt your stuff -  but that's a different rant.)

Ok, that's done. I'm going back to hacking on VGA/VESA boot loader support for FreeBSD-HEAD. That's long overdue, and I want my pretty splash screen.

Sunday, June 28, 2015

RTL-SDR on FreeBSD, or "hey, cool, I live near an airport, I wonder if ADSB works.."

I bought one of those cheap RTL-SDR units a few months ago. There's no real kernel code required for it - all of the rtl-sdr code just uses the generic USB userland API which is shared between many operating systems.

So, getting it going was pretty easy:

# pkg install rtl-sdr

Then, using it to test ADSB is pretty easy:

# rtl_adsb -V -S 

.. this is verbose and listens to short packets.

Where I live (near San Jose Airport!) I receive a lot of ADSB transmissions. It's quite interesting.

Ok, so next - what about something more GUI like? Someone's already done it - https://github.com/antirez/dump1090 . There's already a package for it:

# pkg install dump1090
# dump1090 --net --aggressive

Then, point a webserver at http://localhost:8080/ and watch!

Sunday, May 17, 2015

freebsd-wifi-build, or "wait, you can run freebsd on atheros MIPS access points? where do I get that?"

I've been running FreeBSD at home as my primary internet/wifi access for a few years now. It's cheap, it's easy to do, and I've tried very hard to wrap up the whole process into a mostly-simple build system that spits out a useful image to use.

It's pretty simple in concept - I take FreeBSD-HEAD, build it with some cut-down options, create a custom filesystem image with some custom boot scripts and a custom configuration file, and provide an image that you can TFTP (using a serial console and ethernet cable) or upload directly to the AP if it supports it.

The supported hardware list is here:

https://github.com/freebsd/freebsd-wifi-build/wiki/Supported-Boards

Now, it's not a huge list like OpenWRT, but that's mostly because I don't have an infinite supply of Atheros MIPS based routers. I think I'll get some of the TP-Link Archer series stuff next.

Building it is pretty simple:

https://github.com/freebsd/freebsd-wifi-build/wiki

You checkout the build repo, check out FreeBSD-HEAD, install a couple of packages, and run the build for your board. Once it's done, the images for your board appear in ../tftpboot/. There's a wiki page for each of the supported boards with a walkthrough with how to get FreeBSD going on it.

It comes up on 192.168.1.20/24 with 'user' and 'root' users, with no password. So, the first thing you should do after installation is telnet in, configure /etc/cfg/rc.conf with your actual LAN IPs, set the user/root passwords, and then 'cfg_save' to save things. Then, reboot and voila!

The configuration file format looks like FreeBSD but it isn't. I'm keeping it somewhat hierarchical-looking in naming but flat in implementation so I can migrate it to something like a sqlite or luci backend in the future.

https://github.com/freebsd/freebsd-wifi-build/wiki/Config-Overview

It's good enough for me to be able to set up an AP to be a bridge with a management IP address and configure the ethernet switch. Others have added ipfw support to do NAT and firewalling - I'm going to add configuration rules for NAT, IPFW and routing soon so it's all integrated.

It's FreeBSD, all the way through:

$ uname -a
FreeBSD tl-wdr3600 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r282406M: Wed May  6 22:27:16 PDT 2015     adrian@lucy-11i386:/usr/home/adrian/work/freebsd/head-embedded/obj/mips/mips.mips/usr/home/adrian/work/freebsd/head-embedded/src/sys/TL-WDR4300  mips
$ ifconfig wlan0 list sta
ADDR               AID CHAN RATE RSSI IDLE  TXSEQ  RXSEQ CAPS FLAG   
18:ee:69:15:f4:12    2    1  26M 37.0   45   2703  51888 EPS  AQEHTRM RSN HTCAP WME
04:e5:36:0d:1b:0d    1    1  19M 23.0   15   1524  47072 EPS  AQEPHTR RSN HTCAP WME
cc:3a:61:0e:33:a0    3    1  19M 32.0   30   2585  43072 EPS  AQEPHTR RSN HTCAP WME
40:0e:85:1a:f1:69    4    1  19M 25.0   30   1138  54800 EPS  AQEPHTR RSN HTCAP WME
00:0f:13:97:14:54    5    1  54M 30.0   45   1808  57424 EPS  AE      RSN
00:22:fa:c2:d1:20    6    1  26M 24.5    0    574  57776 EPS  AQEHTRS RSN HTCAP WME

So if you'd like a FreeBSD based device to act as your home gateway, this is where you can start. It's not pfsense, but it's designed to run on things much smaller than pfsense supports and it's a good introduction into the world of FreeBSD embedded.

Friday, April 10, 2015

Intel DDIO, LLC cache, buffer alignment, prefetching, shared locks and packet rates.

I've been digging into the low level behaviour of high throughput packet classification and pushing for my job. The initial suggestion from everyone was "use netmap!" Which was cool, but it only seems to do fast packet work if you're only ever really flipping packets between receive and transmit rings. Once you start actually looking into the payload, you start having to take memory misses and things can slow down quite a bit. An L3 miss (ie, RAM access) on Sandybridge is ~50ns. (There are also costs involved in walking the TLB, but I won't cover that here.)

For background: http://7-cpu.com/cpu/SandyBridge.html .

But! Intel has this magical thing called DDIO. In theory (and there's a lot of theory here), DMA is done via a small (~10%) fraction of LLC (L3) cache, which is shared between all cores. If the data is already in cache when the CPU accesses it, it will be quick. Also, if you then wish to DMA out data from something in cache, it doesn't have to get flushed to memory first - it's just DMAed straight out of cache.

However! When I was doing packet bridge testing (using netmap + bridge, 64 byte payloads), I noticed that I was chewing up a significant amount of memory bandwidth. It wasn't quite at the rate of 10G worth of bridged data, but DDIO should be doing almost all of that work for me at 64 byte payloads.

So, to reproduce: run netmap bridge (eg 'bridge -i netmap:ix0 -i netmap:ix1') and run pkt-gen between two nodes.

This is the output of 'pcm-memory.x 1' from the intel-pcm toolkit (which is available as a binary package on FreeBSD.)

---------------------------------------||---------------------------------------
--                   System Read Throughput(MB/s):    300.68                  --
--                  System Write Throughput(MB/s):    970.81                  --
--                 System Memory Throughput(MB/s):   1271.48                  --
---------------------------------------||---------------------------------------

The first theory - the bridging isn't occurring fast enough to service what's in LLC before it gets flushed out by other packets. So, assume:

  1. It's 1/10th of the LLC - which is 1/10th of an 8 core * 2.5MB per core setup, is ~ 2MB.
  2. 64 byte payloads are being cached.
  3. Perfect (!) LLC use.
That's 32,768 packets at a time. Now, netmap is doing ~ 1000 packets a batch and it's keeping up line rate bridging on one core (~14 million packets per second), so it's not likely that.

Ok, so what if it's not perfect LLC usage?

Then I thought back to cache line aliasing and other issues that I've previously written about. What if the buffers are perfectly aligned (say, 2048 byte aligned) - the cache line aliasing effects should also manifest themselves as low LLC utilisation.

Luckily netmap has a twiddle - 'dev.netmap.buf_size' / 'dev.netmap.priv_buf_size'. They're both .. 2048. So yes, the default buffer sizes are aligned, and there's likely some very poor LLC utilisation going on.

So, I tried 1920 - that's 2048 - (2 * 64) - ie, two cache lines less than 2048.


---------------------------------------||---------------------------------------
--                   System Read Throughput(MB/s):    104.92                  --
--                  System Write Throughput(MB/s):    382.32                  --
--                 System Memory Throughput(MB/s):    487.24                  --
---------------------------------------||---------------------------------------

It's now using significantly less memory bandwidth to do the same thing. I'm guessing this is because I'm now using the LLC much more efficiently.

Ok, so that's nice - but what about when it comes time to actually look at the packet contents to make decisions?

I've modified a copy of bridge to do a few things, mostly inspired by netmap-ipfw:
  • It does batch receive from netmap;
  • but it then looks at the ethernet header to decap that;
  • then it gets the IPv4 src/dst addresses;
  • .. and looks them up in a (very large) traditional hash table.
I also have a modified copy of pkt-gen that will use completely random source and destination IPv4 addresses and ports, so as to elicit some very terrible behaviour.

With an empty hash set, but still dereferencing the ethernet header and IPv4 source/destination, handling a packet at a time, no batching, no prefetching and only using one core/thread to run:

buf_size=2048:
  • Bridges about 6.5 million pps;
  • .. maxes out the CPU core;
  • Memory access: 1000MB/sec read; 423MB/sec write (~1400MB/sec in total).
buf_size=1920:
  • Bridges around 10 million pps;
  • 98% of a CPU core;
  • Memory access: 125MB/sec read, 32MB/sec write, ~ 153MB/sec in total.
So, it's a significant drop in memory throughput and a massive increase in pps for a single core.

Ok, so most of the CPU time is now spent looking at the ethernet header in the demux routine and in the hash table lookup. It's a blank hash table, so it's just the memory access needed to see if the bucket has anything in it. I'm guessing it's because the CPU is loading in the ethernet and IP header into a cache line, so it's not already there from DDIO.

I next added in prefetching the ethernet header. I don't have the code to do that, so I can't report numbers at the moment. But what I did there was I looped over everything in the netmap RX ring, dereferenced the ethernet header, and then did per-packet processing. This was interesting, but I wanted to try batching out next. So, after some significant refactoring, I arranged the code to look like this:
  1. Pull in up to 1024 entries from the netmap receive ring;
  2. Loop through, up to 16 at a time, and place them in a batch
  3. For each packet in a batch do:
    1. For each packet in the batch: optional prefetch on the ethernet header
    2. For each packet in the batch: decapsulate ethernet/IP header;
    3. For each packet in the batch: optional prefetch on the hash table bucket head;
    4. For each packet in the batch: do hash table lookup, decide whether to forward/block
    5. For each packet in the batch: forward (ie, ignore the forward/block for now.)
I had things be optional so I could turn on/off prefetching and control the batch size.
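Here's a sketch of what that loop structure looks like in C. The pkt/flow types, the decap and lookup helpers and the hash_bucket() accessor are all placeholders; the point is the pass-per-step structure and using the compiler's prefetch builtin.

#define BATCH_SIZE  4

/* The types and helper functions below are placeholders. */
static void
process_ring_sketch(struct pkt *pkt, int npkts, struct hashtable *tbl)
{
    struct flow flow[BATCH_SIZE];
    int i, j, n;

    for (i = 0; i < npkts; i += n) {
        n = (npkts - i < BATCH_SIZE) ? npkts - i : BATCH_SIZE;

        /* Pass 1: prefetch the ethernet/IP headers. */
        for (j = 0; j < n; j++)
            __builtin_prefetch(pkt[i + j].buf, 0);

        /* Pass 2: decapsulate, then prefetch each hash bucket head. */
        for (j = 0; j < n; j++) {
            decap_ether_ip(&pkt[i + j], &flow[j]);
            __builtin_prefetch(hash_bucket(tbl, &flow[j]), 0);
        }

        /* Pass 3: the actual lookups and forward/drop decisions. */
        for (j = 0; j < n; j++)
            pkt[i + j].verdict = hash_lookup(tbl, &flow[j]);
    }
}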

So, with an empty hash table, no prefetching and only changing the batch size, at buf_size=1920:
  • Batch size of 1: 10 million pps;
  • Batch size of 2: 11.1 million pps;
  • Batch size of 4: 11.7 million pps.
Hm, that's cute. What about with prefetching of ethernet header? At buf_size=1920:
  • Batch size of 1: 10 million pps;
  • Batch size of 2: 10.8 million pps;
  • Batch size of 4: 11.5 million pps.
Ok, so that's not that useful. Prefetching on the bucket header here isn't worthwhile, because the buckets are all empty (and thus NULL pointers.)

But, I want to also be doing hash table lookups. I loaded in a reasonably large hash table set (~ 6 million entries), and I absolutely accept that a traditional hash table is not exactly memory or cache footprint happy. I was specifically after what the performance was like for a traditional hash table. Said hash table has 524,288 buckets, and each points to an array of IPv4 addresses to search. So yes, not very optimal by any measure, but it's the kind of thing you'd expect to find in an existing project.

With no prefetching, and a 6 million entry hash table:

At 2048 byte buffers:
  • Batch size of 1: 3.7 million pps;
  • Batch size of 2: 4.5 million pps;
  • Batch size of 4: 4.8 million pps.
At 1920 byte buffers:
  • Batch size of 1: 5 million pps;
  • Batch size of 2: 5.6 million pps;
  • Batch size of 4: 5.6 million pps.
That's a very inefficient hash table - each bucket is going to have around 11 IPv4 entries in it, and that's checking almost a cache line worth of IPv4 addresses in it. Not very nice. But, it's within a cache line worth of data, so in theory it's not too terrible.

What about with prefetching? All at 1920 byte buffers:
  • Batch size of 4, ethernet prefetching: 5.5 million pps
  • Batch size of 4, hash bucket prefetching: 7.7 million pps
  • Batch size of 4, ethernet + hash bucket prefetching: 7.5 million pps
So in this instance, there's no real benefit from doing prefetching on both.

For one last test, let's bump the bucket count from 524,288 to 2,097,152. These again are all at buf_size=1920:
  • Batch size of 1, no prefetching: 6.1 million pps;
  • Batch size of 2, no prefetching: 7.1 million pps;
  • Batch size of 4, no prefetching: 7.1 million pps;
  • Batch size of 4, hash bucket prefetching: 8.9 million pps.
Now, I didn't quite predict this. I figured that since I was reading in the full cache line anyway, having up to 11 entries in it to linearly check would be cheap. It turns out that no, that's not exactly true.

The difference between the naive way (no prefetching, no batching) to 4-packet batching, hash bucket prefetching is not trivial - it's ~ 50% faster. Going all the way to a larger hash bucket was ~75% faster. Now, this hash implementation is not exactly cache footprint friendly - it's bigger than the LLC, so with random flows and thus no real useful cache behaviour it's going to degrade to quite a few memory accesses.

This has been quite a fun trip down the optimisation peephole. I'm going to spend a bunch of time writing down the hardware performance counters involved in analysing this stuff and I'll look to write a follow-up post with details about that.

One final thing: threads and locking. I wanted to clearly demonstrate the cost of shared read locks on a setup like this. There's been lots of discussion about the right kind of locking and concurrency strategies, so I figured I'd just do a simple test in this setup and explain how terrible it can get.

So, no read-locks between threads on the hash table, batch size of 4, hash bucket prefetching, buf_size=1920:
  • 1 thread: 8.9 million pps;
  • 4 threads: 12 million pps.
But with a read lock on the hash table lookups:
  • 1 thread: 7 million pps;
  • 4 threads: 4.7 million pps.
I'm guessing that as I add more threads, the performance will drop.

Even taking a rwlock as a reader lock in pthreads is expensive - it's purely just an atomic increment/decrement in FreeBSD, but it's still not free. I'm getting the lock once for two hash table lookups - ie, the source and destination IP hash table lookups are done under one lock. I'm sure if I took the lock for the whole batch hash table lookup it'd work out a little better on a small number of CPU cores, but I think this demonstrates my point - read locks aren't going to cut it when you have a frequently accessed thing to protect.
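For illustration, taking the reader lock once per batch instead of once per lookup looks something like this (same placeholder names as the earlier sketch). It amortises the atomic, but every core is still bouncing the same lock cache line around, so it only delays the problem.

/* Amortise the rwlock over a whole batch of lookups, not per packet. */
pthread_rwlock_rdlock(&tbl->lock);
for (j = 0; j < n; j++) {
    found  = hash_lookup(tbl, &src_flow[j]);
    found &= hash_lookup(tbl, &dst_flow[j]);
    pkt[j].verdict = found;
}
pthread_rwlock_unlock(&tbl->lock);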

The best bit about this post? The prefetching, terrible (large) hash table performance and general cache abuse is not new. Doing batching on superscalar Intel CPUs is not new. Documenting DDIO effectiveness using non-power-of-two-aligned buffer sizes is new, but it's just a rehash of the existing cache aliasing effect. But, I now have a little test bed to experiment with these things without having to try and involve the rest of a kernel.

Yes, I'll publish code soon.

Saturday, March 28, 2015

Using the arswitch ethernet switch on FreeBSD

I sat down a few weeks ago to make the AR8327 ethernet switch work and in doing so I wanted to add per-port and 802.1q VLAN support. It turned out that I .. didn't know as much as I thought I did about the etherswitch support. So, after a whole bunch of trial-and-error, I wrapped my head around things. This post is mostly a braindump so that if I do forget, I have something written down about it - at least until I turn it into a FreeBSD manpage.

There's three modes:
  • default - all ports are in the same VLAN;
  • per-port - each port can be in a VLAN 'group';
  • dot1q - each port can be in multiple VLAN groups, with 802.1q tagging going on.
The per-port VLAN group is for switches that don't have an arbitrary VLAN table - you just assign each port an ID from some low set of values (say, 16), and then the VLAN tag can either be added or not added. I think the RTL8366 switch is like this, but I'd have to check.

The dot1q VLAN is for switches that support multiple VLANs, each can have an arbitrary VLAN ID (0..4095) with optional other VLAN options (like tag-in-tag support.)

The etherswitch configuration side has a few options and they're supported by different hardware:
  • Each port has a port VLAN ID - this is the "native port" for dot1q support. I don't think it has any particular meaning in the per-port VLAN code in arswitch but I could be terribly wrong. I thought it did when I initially did the port, but the documentation is .. lacking.
  • Then there's a set of per-port flags - eg q-in-q, 802.1q tagging, etc.
  • Then there's the vlangroup - each vlangroup has a vlan ID, and then a set of port members. Each port member can be tagged or untagged.
This is where things get odd.

Firstly - the AR934x SoC switch support doesn't include VLANs. I need to add that. I'm not sure which side of the wall this falls.

The switches previous to the AR8327 support per-port and VLAN configuration, but they don't support per-port-per-VLAN tagging. Ie, you can configure 802.1q VLANs, and you can enable tagging on the port - but it tags all packets that aren't the port 'VLAN ID'.

The per-port VLAN ID seems ignored by the arswitch code - it's only used by the dot1q support.

So I think (and it hasn't yet been tested) that on the earlier switches, I can use per-port VLANs with tagging by:
  • Configuring per port vlans - "etherswitch config vlan_mode port"
  • Adding vlangroups as appropriate with membership - tag/untag doesn't matter
  • Set the CPU port up to have tagging - "etherswitch port0 addtag"
When configuring dot1q VLANs, the mode is "config vlan_mode dot1q" and the 802.1q VLAN IDs are used, but the above still holds - the port is tagged or untagged.

But on the AR8327, the VLAN map hardware actually supports enabling/disabling tagging on a per-port-per-VLAN basis. Ie, when the VLAN table is programmed with the port membership, it takes a list of both the ports and whether the ports are tagged/untagged/open/filtered. So, I don't think per-port VLAN tagging works - only dot1q tagging. Maybe I can make it work, but I haven't really sat down for long enough with the documentation to see what combinations are required.
  • Configure the hardware - "etherswitch config vlan_mode dot1q"
  • Add vlangroups as appropriate, set pvid as appropriate
  • For each vlangroup membership, the port can be tagged or untagged - eg to tag the cpu port 0, you'd use '0t' as the port member. That says "port0 is a member, and it's tagged."
I still have a whole lot more to add - the ingress/egress filters aren't configurable, the per-port vlan stuff needs to be made much more sensible and consistent - and the AR934x SoC switch needs to support VLANs. Oh, and much more documentation. But, hey, I can get the thing spitting out VLAN tags, so when it's time to setup my home network with some VLANs, i'll be sure to document what I did and share it with everyone.

Thursday, March 19, 2015

Cache Line Aliasing #2, or "What happens when you page align everything"

After a little more digging into the Intel performance side of things, I discovered one of the big reasons for the performance drop on this particular workload: how Intel CPUs do memory reordering.

The TL;DR is this - there's some hardware inside the Intel CPUs that tracks memory ordering and cache contents - but they don't use all the address bits.

The relevant chapter in the intel optimisation guide is 3.6.8 - Capacity Limits and Aliasing in Caches. The specific thing I was hitting was in 3.6.8.2 - Store Forwarding Aliasing.

Assembly/Compiler Coding Rule 56. (H impact, M generality) Avoid having a store followed by a non-dependent load with addresses that differ by a multiple of 4 KBytes. Also, lay out data or order computation to avoid having cache lines that have linear addresses that are a multiple of 64 KBytes apart in the same working set. Avoid having more than 4 cache lines that are some multiple of 2 KBytes apart in the same first-level cache working set, and avoid having more than 8 cache lines that are some multiple of 4 KBytes apart in the same first-level cache working set.

So, given this, what can be done? In this workload, a bunch of large matrices were allocated via jemalloc, which page aligns large allocations. In the default invocation of the benchmark (where the allocation padding size is 0), the memory access patterns showed a very large number of counter events on "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS" - which is the number of 64k address aliases on the Sandy Bridge Xeon processors I've been testing on. (The same occurs on Westmere, Ivy Bridge and Haswell.) As I vary the padding size, the address aliasing value drops, the memory access counters increase, and the general performance increases.

On the test boxes I have, running "pmcstat -w 120 -C -p LD_BLOCKS_PARTIAL.ADDRESS_ALIAS ./himenobmtxpa M" gives the following (the columns are the padding size in bytes, the ADDRESS_ALIAS event count, and the benchmark score in MFLOPS):

0 217799413 830.995025
64 18138386 1624.296713
96 8876469 1662.486298
128 19281984 1645.370750
192 18247069 1643.119908
256 18511952 1661.426341
320 19636951 1674.154119
352 19716236 1686.694053
384 19684863 1681.110499
448 18189029 1683.163673
512 19380987 1691.937818

So there's still plenty of aliasing going on at the different padding offsets, but there's a very marked drop between a padding of 0 and, well, anything else.
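
To see the effect in isolation, here's a minimal sketch of the kind of access pattern that triggers the counter. This is my own illustration, not the benchmark code - the buffer size, the iteration counts and the PAD value are all arbitrary:

  #include <stdio.h>
  #include <stdlib.h>

  #define NBYTES  (4 * 1024 * 1024)
  #define PAD     0       /* try 0, then 64, and compare the alias counter */

  int
  main(void)
  {
          char *a, *b;
          volatile float *dst;
          float *src, sum = 0.0;
          size_t i, n;
          int j;

          /* page-aligned allocations, like jemalloc does for large objects */
          if (posix_memalign((void **)&a, 4096, NBYTES + 4096) != 0 ||
              posix_memalign((void **)&b, 4096, NBYTES + 4096) != 0)
                  return (1);

          dst = (volatile float *)a;
          src = (float *)(b + PAD);   /* PAD=0: each dst[i]/src[i] pair is a 4KB multiple apart */
          n = NBYTES / sizeof(float);

          for (j = 0; j < 100; j++) {
                  for (i = 0; i < n; i++) {
                          dst[i] = (float)i;  /* store ... */
                          sum += src[i];      /* ... then a non-dependent load that
                                               * matches in the low 12 address bits */
                  }
          }
          printf("%f\n", sum);
          return (0);
  }

With PAD set to 0, running this under pmcstat with the ADDRESS_ALIAS event should show a huge event count; bumping PAD to 64 should make the count (and the run time) drop noticeably.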

It turns out that someone's gone and done a bunch more digging into the effects of various CPU magic under the hood. The last paper in the list (Analysing Contextual Bias..) looks at aliasing and cache effects and the impact of memory layout. There's some cute (and sobering!) analysis of the performance changes due to something as simple as the length of your login name in the UNIX environment. It's worth reading.

The summary? Maybe page-aligning all of your memory allocations isn't the way to go.

For further reading:

Friday, March 6, 2015

Cache line aliasing effects, or "why is freebsd slower than linux?"

There were some threads on the FreeBSD/DragonflyBSD mailing lists a few years ago (2012?) which talked about some math benchmarks being much slower on FreeBSD/DragonflyBSD versus Linux.

When the same benchmark is run on FreeBSD/DragonflyBSD using the Linux layer (i.e., a Linux binary, compiled for Linux, but run on BSD) it gives the same or better behaviour.

Some digging was done, and it turned out it was due to memory allocation patterns and memory layout. The jemalloc library allocates large chunks at page-aligned boundaries, whereas the allocator in glibc under Linux does not.

I've put the code online in the hope that others can test and verify this:

https://github.com/erikarn/himenobmtxpa

The branch 'local/freebsd' has my local change to allow the allocator offset to be specified. The offset compounds on each allocation - so with an 'n' byte offset, the first allocation is 0 bytes offset from the page boundary, the next is 'n' bytes offset from the page boundary, the next is '2n' bytes offset, etc.
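
Roughly, the idea is something like this sketch (not the actual change in the branch; a real version would also need to remember the original pointer so the memory can be freed later):

  #include <stdlib.h>

  static size_t cur_offset;           /* compounds across allocations */
  static size_t offset_step = 127;    /* 'n'; 0 reproduces the page-aligned case */

  static void *
  offset_alloc(size_t size)
  {
          char *p;

          /* emulate jemalloc's page alignment of large allocations */
          if (posix_memalign((void **)&p, 4096, size + 4096) != 0)
                  return (NULL);

          p += cur_offset;            /* push this allocation off the page boundary */
          cur_offset = (cur_offset + offset_step) % 4096;
          return (p);
  }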

You can experiment with different values and get completely different behavioural results. The difference is non-trivial: there's a 100% speedup from using a 127-byte offset for each allocation versus a 0-byte offset.

I'd like to investigate cache line aliasing effects further. There was work done a few years ago to offset mbuf headers in the FreeBSD kernel so they weren't all page-aligned or 256/512/1024 byte aligned - and apparently this gave a significant performance improvement. But it wasn't folded into FreeBSD. What I'd like to do is come up with some better strategies / profiling guides for identifying when this is actually happening so the underlying objects being accessed can be adjusted.

So - if anyone out there has any tips, hints or suggestions on how to do this, please let me know. I'd like to document and automate this testing.

Sunday, February 22, 2015

FreeBSD on the POWER8: it's alive!

A post to freebsd-ppc from a couple of months ago asked if we had support for POWER8 and offered to provide remote access to anyone interested in working on it. I was sufficiently intrigued that I approached the FreeBSD powerpc hackers to ask about it, and was informed that it'd be nice, but we didn't have hardware.

After a bit of wrangling of hardware logistics and with the FreeBSD Foundation purchasing a box, a Tyan POWER8 evaluation server appeared. Nathan Whitehorn started poking at it and managed to get a basic "hello world" going, but stalled on issues with the Linux KVM virtualisation environment.

Fast forward a few weeks - he figured out the KVM issues (their lack of support for some mandated hypervisor APIs, among other bugs) - and FreeBSD now boots inside the hypervisor environment and seems stable enough to do development on.

He then found the existing powerpc pmap (physical memory management) code wasn't very SMP friendly - it works fine on one- and two-CPU powerpc machines, but this POWER8 evaluation board is a 4-core, 32-thread CPU. So a few days of development went by and he rewrote most of the pmap code to use much finer-grained locking and scale much, much better than the existing code. (He also found the PS3 hypervisor layer isn't thread-safe.)

What's been done thus far?

  • FreeBSD boots inside the hypervisor environment;
  • Virtualised console, networking and storage all work;
  • (in progress) new, scalable pmap implementation;
  • Initial support for the Vector-Scalar Extension (VSX) that's found on POWER7 and POWER8.
So, I'm impressed. Nathan's done a fantastic job bringing the whole thing up. There's some further work on the new powerpc technology that needs doing (things like the new vector processing units, performance counter support and such) and I'm sure Justin and Nathan will poke powerpc DTrace support into further good shape. I'm going to see if we can fit a Chelsio 40G NIC into one of these and work with their developers to fix any endian/busdma issues that crop up, and then do some network stack scaling testing with it. There's also the missing hardware/hypervisor support to run FreeBSD on bare metal, which would be a fantastic achievement.

Now I kind of want some larger POWER8 hardware.

Sunday, February 15, 2015

TDMA (somewhat) working on AR9380 chips

(Wow, I have a lot of posts to write to catch up on things.)

I've just brought up FreeBSD's TDMA support on the AR9380 chipset. Specifically, the AR9331, since I have a Carambola 2 on me today.

It was pretty simple to bring up - I was missing the beacon configuration HAL call that the TDMA code expected. It's only used by the TDMA code - the STA and AP modes rely on the normal HAL beacon methods that date back to the Atheros HAL.

The only problem - it seems something is up with ANI (noise immunity) and sensitivity on at least the AR9331. It doesn't seem to behave well on slightly loaded channels and thus the beacons don't always go out when they're supposed to.

But, if you've been wanting to play with TDMA on the later Atheros chips, now you can!

Sunday, January 11, 2015

On profiling HTTP, or "god damnit people, why are all the open source tools slow?"

Something that's been a challenge at work (and at other things in the past) has been "how do I generate enough traffic to test this thing?"

If you're running some public facing boxes then sure, you can do A/B testing. But what if you're not able to test it in the real world? What if you need to do testing before you ship, and the traffic levels have to be stupid high?

So, what do you do?

I've done this a few times. When doing squid and other reverse proxy development, I would run tools like apachebench, httperf, even web polygraph - but these things scaled poorly. They couldn't handle tens of thousands of concurrent connections or scale to both slow and fast clients - their use of poll() and select() just wouldn't work out well.

Something I did at Netflix was to start building TCP testing tools that could handle more than 65,000 concurrent sockets. My aim is much higher, but one has to start somewhere. There, I was testing out the network stack rather than specifically doing HTTP testing. Here at my current job, I'm much more interested in real HTTP and all the processing that goes with it.

I looked at what's out there, and it's not very pretty. I need to be able to do 10G of traffic, looking upwards towards 20G and 40G of HTTP in the future. After a little more digging into what was out there - and finding that httperf had actually reverted my changes to use libevent and gone back to poll/select! - I decided it was about time I just started writing something minimal to stress test things and build upon it as the need arose. I want something that eventually ends up like web polygraph - multiple client/server sets with different URL choices from a pool, a variety of client IP addresses, and control over things like request rate and pacing.

So, I grabbed libevent and libevhtp from Mark Ellzey and threw them together. It turned out OK-ish - libevent/libevhtp still does a bunch of memcpy()'ing inside the buffer management routines that makes 40G on one box infeasible at the moment, but it's good enough to get a few gigabits of client traffic out of one core. There were some hiccups, which I'll cover below, but it's good enough to build upon.
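
For reference, the server side of something like this is only a handful of lines of libevhtp. This is a sketch from memory of the API at the time, so treat the details (function names, fields) as approximate:

  #include <evhtp.h>

  /* respond to every request with a tiny fixed body; controllable
   * response sizes are one of the "to do" items below */
  static void
  gen_cb(evhtp_request_t *req, void *arg)
  {
          evbuffer_add(req->buffer_out, "hello\n", 6);
          evhtp_send_reply(req, EVHTP_RES_OK);
  }

  int
  main(void)
  {
          evbase_t *evbase = event_base_new();
          evhtp_t *htp = evhtp_new(evbase, NULL);

          evhtp_set_gencb(htp, gen_cb, NULL);
          evhtp_bind_socket(htp, "0.0.0.0", 8080, 1024);
          event_base_loop(evbase, 0);
          return (0);
  }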

What did I learn?
  • Well, it turns out the client code in libevhtp was a bit immature. Mark and I talked a bit about it on IRC and then I found there was an outstanding pull request that found and fixed a bunch of these. So, my code has turned into another thing - a libevhtp client and server test suite.
  • The libevhtp threading model is fine for a couple of CPUs, but it's the standardish *NIX model of "one thread does accept, farms work off to other threads." So it's not going to scale well at high request rates to multiple CPUs. That's cool; that's what the FreeBSD-HEAD RSS work is for.
  • There's memcpy()'ing in the libevhtp body handling code. It's not a big deal at 1G, but at 10G it's definitely noticeable. I've spoken to Mark about it.
But, it's a good starting point. Once the rest of the bugs get shaken out, it'll be a good high throughput HTTP traffic tester.

What would I do next, after the bugs?

  • The server will eventually grow the ability to generate responses of a controllable size. That way the client can control how big a response to send and thus can create a mix of requests/replies.
  • .. and HTTP request body testing would be nice.
  • The client side needs to grow the ability to create client pools, like web polygraph, where certain subsets of clients get certain behaviours (like a pool of IPs to use, separate pool of URLs to fetch from, the time between each HTTP request, etc.)
The other trick is how to simulate lots (and I do mean lots) of IP addresses. I don't want to create separate loopback connections for each - that would be crazy. Instead, it'd be good to use the transparent interception support in FreeBSD IPFW, which allows both connections from and connections to arbitrary IP addresses. A little trickery with IP routing so we don't need more than one ARP entry per server, and voila!

Oh, and the code?

https://github.com/erikarn/libevhtp-http/