Thursday, October 4, 2012

Power save, CABQ, multicast frames, EAPOL frames and sequence numbers (or why does my MacBook Pro keep disassociating?)

I do lots of traffic tests when I commit changes to the FreeBSD Atheros HAL or driver. And I hadn't noticed any problems until very recently (when I was doing filtered frames work.) I noticed that my MacBook Pro would keep disassociating after a while. I had no idea why - it would happen with or without any iperf traffic. Very, very odd.

So I went digging into it a bit further (and it took quite a few evenings to narrow down the cause.) Here's the story.

Firstly - hostapd kept kicking off my station. Ok, so I had to figure out why. It turns out that the group rekey would occasionally fail. When it's time to do a group rekey, hostapd will send a unicast EAPOL frame to each associated station with the new key and each station must send back another EAPOL frame, acknowledging the fact. This wasn't happening, so hostapd would just disconnect my laptop.

Ok, so then I went digging to see why. After adding lots of debugging code I found that the EAPOL frames were actually making it to my MacBook Pro _AND_ it was ACKing them at the 802.11 layer. Ok, so the frame made it out there. But why the hell was it being dropped?

Now that I knew it was making it to the end node, I could eliminate a bunch of possibilities. What was left:

  • Sequence number is out of order;
  • CCMP IV replay counter is out of order;
  • Invalid/garbled EAPOL frame contents.
I quickly ruled out the EAPOL frame contents. The sequence number and CCMP IV were allocated correctly and in order (and never out of sequence from each other.) Good. So what was going on?

Then I realised - ok, all the traffic is in TID 16 (the non-QoS TID.) That means it isn't a QoS frame but it still has a sequence number; so it is allocated one from TID 16. There's only one CCMP IV number for a transmitter (the receiver tracks a per-TID CCMP IV replay counter, but the transmitter only has one global counter.) So that immediately rings alarm bells - what if the CCMP IV sequence number isn't being allocated in a correctly locked fashion?
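
To make the hazard concrete, here's a toy, userland-style sketch (all names are mine, not the actual ath(4)/net80211 code) of the invariant the transmit path needs to maintain:

#include <pthread.h>
#include <stdint.h>

/*
 * Hypothetical sketch. The per-TID 802.11 sequence number and the
 * single per-transmitter CCMP PN must be allocated atomically with
 * respect to each other. If two senders race between the two
 * allocations, frames end up queued with seqno order and PN order
 * disagreeing - and the receiver's replay detection will drop one.
 */
struct tx_counters {
    pthread_mutex_t lock;           /* covers BOTH counters */
    uint16_t        tid_seqno[17];  /* TIDs 0..15, plus TID 16 (non-QoS) */
    uint64_t        ccmp_pn;        /* one global 48-bit PN */
};

static void
assign_seqno_and_pn(struct tx_counters *tc, int tid,
    uint16_t *seqno, uint64_t *pn)
{
    pthread_mutex_lock(&tc->lock);
    *seqno = tc->tid_seqno[tid];
    tc->tid_seqno[tid] = (tc->tid_seqno[tid] + 1) % 4096;
    *pn = ++tc->ccmp_pn;
    pthread_mutex_unlock(&tc->lock);
}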

Ok. So I should really fix that bug. Actually, let me go and file a bug right now.

There. Bug filed. PR 172338.

Now, why didn't this occur back in Perth? Why is it occurring here? Why doesn't it occur under high throughput iperf (150Mbps+), but it does when the iperf tests are capped at 100Mbps Ethernet speeds? Why doesn't it drop my FreeBSD STAs?

Right. So what else is in TID 16? Guess what I found? All the multicast and broadcast traffic - like ARPs - is in TID 16.

Then I discovered what was really going on. The pieces fell into place.

  • My Mac does go in and out of power save - especially when it does a background scan.
  • When the Mac is doing 150Mbps+ of test traffic, it doesn't do background scans.
  • When it's doing 100Mbps of traffic, the stack sneaks in a background scan here and there.
  • Whenever it goes into background scan, it sends a "power save" notification to the AP ..
  • .. and the AP puts all multicast traffic into the CABQ instead of sending it to the destination hardware queue.
  • Now, when this occurred, the EAPOL frames would go into the software queue for TID 16 and the ARP/multicast/etc traffic would go into the CABQ ..
  • .. but the CABQ has higher priority, so it'll be transmitted just after the beacon frame goes out, before the EAPOL frames in the software queue.
Now, given the above set of conditions, the ARP/multicast traffic (of which there's more in my new place, thanks to a DSL modem that constantly scans the local DHCP range for rogue/disconnected devices) would be assigned sequence numbers AFTER the EAPOL frames that were still sitting in the TID 16 software queue. The Mac would receive those CABQ frames with their later sequence numbers, THEN my EAPOL frame - which would be rejected for being out of sequence.

The solution? Complicated.

The temporary solution? TID 16 traffic is now in a higher priority hardware queue, so it goes out first. Yes, I should really just mark EAPOL frames that way. I'll go through and tidy this up soon. I just needed to fix this problem before others started reporting the instability.
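
In code terms, the interim workaround boils down to a queue-selection tweak. A sketch with made-up names (the real driver plumbing is rather more involved):

/*
 * Made-up names; illustrative only. Steer non-QoS (TID 16) traffic -
 * which includes EAPOL frames - to a high priority WME queue, so it
 * goes out ahead of the CABQ burst that follows each beacon.
 */
#define TID_NONQOS  16

enum hw_queue { HWQ_BK, HWQ_BE, HWQ_VI, HWQ_VO, HWQ_CAB };

static enum hw_queue
select_hw_queue(int tid, int is_mcast, int ps_buffered)
{
    if (is_mcast && ps_buffered)
        return (HWQ_CAB);   /* multicast held for the post-beacon burst */
    if (tid == TID_NONQOS)
        return (HWQ_VO);    /* interim fix: TID 16 wins arbitration */
    return (HWQ_BE);        /* everything else maps via WME as usual */
}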

The real solution is complicated. It's complicated because in power save mode, there's both unicast and multicast traffic going into the same TID(s) but different hardware queues. Given this, it's quite possible that the traffic in the CABQ will burst out before the unicast packets with the same TID make it out via another hardware queue.

I'm still thinking of the best way to fix this.

Lessons learnt from fiddling with the rate control code..

(Note before I begin: a lot of these ideas about rate control are stuff I came up with before I began working at my current employer.)

Once I had implemented filtered frames and did a little digging, I found that the rate control code was doing some relatively silly things. Lots of rates were failing quite quickly and the rate control was bouncing all over the place.

The first bug I found was that I was checking the TX descriptor completion status before I had copied it over - and so I was randomly marking frames as failed when they hadn't failed. Oops.

Next, don't call the rate control code on filtered frames. They've been filtered, not transmitted. My code wasn't doing that - I'm just pointing it out to anyone else who is implementing this.

Then I looked at what was going on with rate control. I noticed that whenever the higher transmission rates failed, it took a long time for the rate control code to try and sample them again. I went and did some digging - and found it was due to a coding decision I had made about 18 months ago. I treated higher rate failures with a low EWMA success rate as successive failures. The ath_rate_sample code treats "successive failures" as "don't try to probe this for ten seconds." Now, there's a few things you need to know about 802.11n:

  • The higher rates fail, often;
  • The channel state changes, often;
  • Don't be afraid to occasionally try those higher rates; it may actually work out better for you even under higher error rates.
So given that, I modified the rate control code a bit:

  • Only randomly sample a few rates lower than the current one; don't try sampling all 6, 14 or 22 rates below the high MCS rates;
  • Don't treat low EWMA as "successive failures"; just let the rate control code oscillate a bit;
  • Drop the EWMA decay time a bit to let the oscillation swing a little more.
Now the rate control code behaves much better and recovers much quicker during unstable channel conditions (eg - adrian walking around a house whilst doing iperf tests.)
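
For the curious, the change amounts to something like the following (a toy distillation with my own names - the real ath_rate_sample state is richer):

#include <stdint.h>

/* Toy per-rate statistics; illustrative names only. */
struct rate_stats {
    uint32_t ewma_pct10;            /* success rate, tenths of a percent */
    uint32_t successive_failures;   /* gates the ten second probe backoff */
};

#define EWMA_LEVEL  75  /* weight given to history, in percent */

static void
update_rate_stats(struct rate_stats *rs, int frames_ok, int frames_sent)
{
    uint32_t cur = (frames_sent > 0) ?
        (1000 * (uint32_t)frames_ok / frames_sent) : 0;

    /* Exponentially weighted moving average of the success rate. */
    rs->ewma_pct10 =
        (rs->ewma_pct10 * EWMA_LEVEL + cur * (100 - EWMA_LEVEL)) / 100;

    /*
     * Only a completion where *nothing* went out counts as a
     * successive failure. A merely-low EWMA no longer does, so the
     * rate stays eligible for sampling and can recover quickly when
     * the channel improves.
     */
    if (frames_ok == 0 && frames_sent > 0)
        rs->successive_failures++;
    else if (frames_ok > 0)
        rs->successive_failures = 0;
}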

Given this, what could I do better? I decided to start reading up on the current state of play in 802.11n-aware rate control, and rapidly came to the conclusion that - wow, we likely could do it better. The Linux minstrel_ht algorithm is also based on John Bicket's SampleRate code, but instead of using an EWMA and minimising packet transmission time, it uses the EWMA to calculate a theoretical throughput and maximises that. So, that sounds good, right?
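
The difference in objective function is easy to state. A sketch with my own names - SampleRate-style cost versus minstrel_ht-style benefit, both driven by the same EWMA:

/*
 * Illustrative only. 'ewma' is the EWMA success probability,
 * 'tx_time_us' the estimated airtime to send one packet at this rate,
 * 'rate_kbps' the nominal PHY bitrate.
 */
struct rate_metric {
    double ewma;        /* EWMA success probability, 0.0 .. 1.0 */
    double tx_time_us;  /* estimated airtime per packet, microseconds */
    double rate_kbps;   /* nominal bitrate */
};

/* SampleRate-style: pick the rate minimising expected transmission time. */
static double
expected_tx_time(const struct rate_metric *m)
{
    /* Failed attempts still burn airtime, so scale by 1/P(success). */
    return (m->ewma > 0.0 ? m->tx_time_us / m->ewma : 1e9);
}

/* minstrel_ht-style: pick the rate maximising expected throughput. */
static double
expected_throughput(const struct rate_metric *m)
{
    return (m->rate_kbps * m->ewma);
}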

Except that the research shows 802.11n channels can vary very frequently, especially at the higher MCS rates. The higher MCS rates can become better and worse within a window of a second or two. So, do you want to try and squeeze the last bit of throughput out of that, or not?

Secondly, using "throughput" as a metric is fine if your air time is .. well, cheap. But what if you have many, many clients on an AP? Maximising throughput based on what the error rate predicts your data throughput will be doesn't take airtime into account. In fact, if you choose a higher MCS rate with a higher error rate but higher throughput, you may actually be wasting more air on those retransmissions. Great for a single station, but perhaps not so great when you have lots.

So what's the solution? The open source rate control stuff doesn't take the idea of "air utilisation" into account. There's enough data available to create an air time model, but no-one is using it yet. Patches are gratefully accepted. :-)
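
As a strawman only (my numbers, my names): such a model might start by charging each rate for the fixed per-attempt overhead - preamble, IFS, backoff, block-ack - that retransmissions keep paying, since that cost doesn't shrink as the data rate goes up.

#define OVERHEAD_US 100.0   /* assumed fixed cost per attempt, microseconds */

static double
air_per_delivered_frame(double rate_mbps, double ewma, double frame_bits)
{
    /* bits / (Mbit/s) conveniently yields microseconds. */
    double per_attempt_us = OVERHEAD_US + frame_bits / rate_mbps;

    /* Expected attempts per delivered frame is 1 / P(success). */
    return (per_attempt_us / ewma);
}

/*
 * Example, 1500 byte frame (12000 bits):
 *   65 Mbit/s at 80% success -> ~356us of air per delivered frame
 *   52 Mbit/s at 95% success -> ~348us of air per delivered frame
 * The slower rate is the better citizen here, even though a raw
 * throughput metric (65 * 0.80 > 52 * 0.95) would pick the faster one.
 */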

Finally, the current packet scheduler is pretty simple and stupid (and does break in a lot of scenarios, sigh.) It's just a FIFO, servicing nodes/TIDs with queued traffic in FIFO order. But that's not very fair - both from a "who is next" standpoint and a "what's the most efficient use of the air" view. In addition, the decision about which node/TID to schedule next is made totally separately from the rate control decision. Rate control occurs rather late in the packet transmission process (ie, once we've committed to queuing it to the hardware.) Wouldn't it be better to have the packet scheduler and rate control code joined at the hip, where the scheduler doesn't aggressively schedule traffic to a slow/lossy end node?

Lots of things to think about..

Wednesday, October 3, 2012

Filtered frames support, or how not to spam the air with useless transmission attempts

I haven't updated this blog in a while - but that doesn't mean I haven't been a busy little beaver in the world of FreeBSD Atheros support.

A small but important corner of correct and efficient software retransmission handling is what Atheros hardware calls "filtered frames" - or, in terms of what actually happens, "how to make sure the hardware doesn't just spam the air with transmission attempts to a dead remote node."

So here's the run-down.

You feed the Atheros hardware a set of frames to transmit. There's a linked list (or FIFO for the more recent hardware) of TX descriptors which represent the fragments of each frame you're transmitting. The hardware will attempt each one in turn and then return a TX completion status explaining what happened. For frames without ACKs the TX status is "did I get a chance to squeeze this out into the air" - there's no ACK, so there's no way to know if it made it out there. But for frames with ACKs, there's a response from the remote end which the transmitter uses, and it'll attempt retransmission in hardware before either succeeding or giving up. Either way, it tells you what happened.

The reality is a little more complicated - there's multiple TX queues with varying TX priorities (implementing the 802.11 "WME" QoS mechanism.) There's all kinds of stuff going on behind the scenes to figure out which queue wins arbitration, gets access to the air to transmit, etc, etc. For this particular discussion it doesn't matter.

Now, say you then queue 10 frames to a remote node. The hardware will walk the TX queue (or queues) and transmit those frames one at a time. Some may fail, some may not. If you don't care about software retransmission then you're fine.

But say you want to handle software retransmission. Say that retransmission is for legacy, non-aggregation / non-blockack sessions. You transmit one frame at a time. Each traffic identifier (TID) has its own sequence number space, as well as the "Non-QoS" traffic identifier (ie, non-QoS traffic that does have a sequence number.) By design, each frame is transmitted in order, with incrementing sequence numbers. The receiver throws away frames that are out of sequence. That way packets are delivered in order. They may not be reliably received, but the sequence number ordering is enforced.
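
In toy form, that receiver-side ordering rule looks like this (my sketch, not net80211's actual code):

#include <stdint.h>

#define SEQ_SPACE   4096    /* 802.11 sequence numbers are 12 bits */

/*
 * Toy per-TID receive check: accept a frame only if it's strictly
 * newer than the last accepted sequence number, modulo the sequence
 * space. Everything else is treated as old/duplicate and dropped.
 */
static int
rx_seq_ok(uint16_t *last_seq, uint16_t seqno)
{
    uint16_t delta = (seqno - *last_seq) & (SEQ_SPACE - 1);

    if (delta == 0 || delta > SEQ_SPACE / 2)
        return (0);     /* duplicate or older (wrapped): drop */
    *last_seq = seqno;
    return (1);         /* in sequence: accept and advance */
}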

So, you now want to handle software retransmission of some frames. You get some frames that are transmitted and some frames that weren't. Can you retransmit the failed ones? Well, the answer is "sure", but there are implications. Specifically, those frames may now be out of sequence, so when you retransmit them the receiver will simply drop them. You could choose to reassign them new sequence numbers so the receiver doesn't reject them out of hand, but now the receiver is seeing out-of-order frames. It doesn't see out of sequence frames, but the underlying payloads are all jumbled. This makes various protocols (like TCP) very unhappy. If you're running some older PPTP VPN sessions, the end point may simply drop your now out-of-order frames. So it's very important that you actually maintain the order of frames to a station.

Given that, how can you actually handle software retransmission? The key lies in this idea of "filtered frames." The Atheros MAC has what it calls a "keycache", where it stuffs encryption keys for each destination node (and multicast keys, and WEP keys..) So upon reception of a frame, it'll look in the keycache for that particular station MAC, then attempt to decrypt the data with that key. Transmission is similar - there's keycache entries for each destination station and the TX descriptor has a TX Key ID.

Now, I don't know if the filtered frame bit is stored in the keycache entry or elsewhere (I should check, honest) but I'm pretty sure it is.

So the MAC has a bit for each station in the keycache (I think!) and when a TX fails, it sets that bit. That bit controls whether any further TX attempts to that station will actually occur. So when the first frame TX to that station fails, any subsequent attempts are automatically failed by the MAC, with the TX status "TX destination filtered." Thus, anything already in the hardware TX queue(s) for that destination get automatically filtered for you. The software can then grab those frames in the order you tried them (which is in sequence number order, right?) and since _all_ frames are filtered, you don't have to worry about some intermediary frames being transmitted. You just queue them in a software queue, wait until the node is alive again, and then set the "clear destination mask (CLRDMASK)" bit in the next TX attempt.
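
Sketched as pseudo-driver code (invented names, much simplified from what the ath(4) code actually does), the completion side of that looks roughly like this:

#include <sys/queue.h>

enum tx_status { TX_OK, TX_FAIL, TX_FILTERED };

struct tx_frame {
    STAILQ_ENTRY(tx_frame) link;
    int clrdmask;               /* set CLRDMASK in this TX descriptor */
};

struct station {
    STAILQ_HEAD(, tx_frame) filtq;  /* filtered frames, in TX order */
    int filtered;                   /* destination currently filtered */
};

static void
tx_complete(struct station *sta, struct tx_frame *f, enum tx_status st)
{
    if (st == TX_FILTERED) {
        /*
         * The MAC auto-failed this frame because an earlier one to
         * this station failed. Completions come back in queue order,
         * so appending preserves sequence number order for the retry.
         */
        STAILQ_INSERT_TAIL(&sta->filtq, f, link);
        sta->filtered = 1;
        return;
    }
    /* TX_OK / TX_FAIL take the normal completion path. */
}

static void
station_awake(struct station *sta)
{
    struct tx_frame *f = STAILQ_FIRST(&sta->filtq);

    /*
     * The first retransmission sets CLRDMASK so the MAC clears its
     * filter bit and starts attempting this destination again.
     */
    if (f != NULL)
        f->clrdmask = 1;
    sta->filtered = 0;
    /* ... requeue sta->filtq to the driver TX path, head first ... */
}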

This gives you three main benefits:
  • All the frames are filtered at the point the first fails, so you get all the subsequent attempted frames back, in the order you queued them. This makes life much easier when retransmitting them;
  • The MAC now doesn't waste lots of time trying to transmit to destinations that aren't available. So if you have a bunch of UDP traffic being pushed to a dead or far away node, the airtime won't be spent trying to transmit all those frames. The first failure will filter the rest, freeing up the TX queue (and air) to transmit frames to other destinations;
  • When stations go into power save mode, you may have frames already in the hardware queue for said station. You can't cancel them (easily/cleanly/fast), so instead they'll just fail to transmit (as the receiver is asleep.) Now you just get them filtered; you store them away until the station wakes up and then you retransmit them. A little extra latency (which is ok for some things and not others!) but much, much lower packet loss.
This is all nice and easy. However, there are a few gotchas here.

Firstly, it filters all frames to that destination - for all TIDs, QoS or otherwise. That's not a huge deal; if, however, you're me and you have per-TID queues, you need to requeue the frames into the correct queues to retry. No biggie.

Secondly, if a station is just far away or under interference, you'll end up filtering a lot of traffic to it. So a lot of frames will cycle through the filtered frames handling code. Right now in FreeBSD I'm treating them the same as normal software retransmissions and dropping them after 10 attempts. I have a feeling I need to fix that logic a bit as under heavy dropping conditions, the traffic is being prematurely filtered and prematurely dropped (especially when the node is going off-channel to do things like background scans.) So the retransmission and frame expiry is tricky. You can't just keep trying them forever as you'll just end up wasting TX air time and CPU time transmitting lots of frames that will just end up being immediately filtered. Yes, tricky.

Thirdly, there may be traffic that needs to go out to that node that can't be filtered. If that's the case, you may actually end up with some subsequent frames being transmitted. You then can't just naively requeue all of your failed/filtered frames or you'll transmit some frames out of sequence. You then _have_ to drop any filtered frame whose sequence number falls _before_ a successfully transmitted frame (or that was queued _before_ it.)

Luckily, for 802.11n aggregation sessions the third point isn't a big deal - you're already allowed to transmit out of sequence (just within the block-ack window, or BAW), so you can retransmit filtered frames in whatever order you like. Life is good there.

So what have I done in FreeBSD?

I have filtered frames support working for 802.11n aggregation sessions. I haven't yet implemented software retransmission (and all of the hairy logic to handle point #3 above) for non-aggregate traffic, so I can't do filtered frames there. But I'm going to have to do it in order to nicely support AP power save operation.

Friday, July 20, 2012

Reading rate control information from userland..

I've begun working on exporting the ath(4) driver rate control information. The "current" way to extract this information has been to issue a sysctl to the rate control module, which dumps debugging state to the kernel console. This is quite suboptimal. Instead, I'm now packaging that up via a driver ioctl() call. It's temporary (hah, famous last words in the software industry :) and it's driver specific for now. The thirty second summary:

  • There's a new ioctl to ath(4) to query the rate control module for a single associated MAC address (or the BSS MAC when running as a STA);
  • Since the rate control is currently done at the driver level rather than at the VAP level, the call is to the driver rather than via the VAP (wlanX) interface;
  • There's no easy way to get "all" station details whilst maintaining correct locking.

The last point deserves a little more explanation. I've introduced (well, I'm now _using_) a per-node lock when doing rate control updates. I acquire this lock when copying the rate control data out, so the snapshot is consistent.

So to fetch the state for a node, the following occurs:

  • Call the net80211 layer to find an ieee80211_node for the given mac address - that involves locking the node table and getting a reference for the node (if found); 
  • Then locking the ath_node associated with it;
  • Copy the data out;
  • Unlock the ath_node;
  • Decrement the ieee80211_node reference counter (which requires the node table lock.)

Now, the node table lock is only held whilst fetching the node reference. It isn't held whilst doing the actual rate control manipulation. Compare that to what I'd do if I wanted to walk the node table. The net80211 API for doing this holds the node table lock whilst walking the node list. This means that I'll end up holding the node table lock whilst acquiring the ath_node lock. Now, that's fine - however, if I then decide somewhere else to try and do any ieee80211 operation whilst holding the ath_node lock, I may find myself with a lock ordering problem.
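
Sketched in code, the single-node fetch looks something like this (the names approximate the driver; the stats structure and per-node lock macros are stand-ins, not necessarily the committed API):

/*
 * Sketch; stand-in names. The node table lock is only taken inside
 * ieee80211_find_node() while grabbing the reference, and the copy-out
 * happens under the per-node lock alone - so no lock ordering cycle
 * against the node table lock can form.
 */
static int
ath_fetch_ratectl(struct ieee80211com *ic, const uint8_t *mac,
    struct ath_ratectl_stats *out)      /* hypothetical stats struct */
{
    struct ieee80211_node *ni;
    struct ath_node *an;

    ni = ieee80211_find_node(&ic->ic_sta, mac);  /* takes a node ref */
    if (ni == NULL)
        return (ENOENT);

    an = ATH_NODE(ni);
    ATH_NODE_LOCK(an);                           /* per-node lock */
    memcpy(out, &an->an_ratectl_stats, sizeof(*out)); /* snapshot */
    ATH_NODE_UNLOCK(an);

    ieee80211_free_node(ni);                     /* drops the ref */
    return (0);
}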

So for now the API will just support doing a single lookup for a given MAC, rather than trying to pull all of the rate control table entries down at once.

Here's an example output from the command:

adrian@marilyn:~/work/freebsd/ath/head/src/tools/tools/ath/athratestats]> ./athratestats -i ath1 -m 06:16:16:03:40:d0
static_rix (-1) ratemask 0xf
[ 250] cur rate 5  Mb since switch: packets 1 ticks 43028655
[ 250] last sample (11  Mb) cur sample (0 ) packets sent 10708
[ 250] packets since sample 16 sample tt 6275
[1600] cur rate 11  Mb since switch: packets 15 ticks 43025720
[1600] last sample (5  Mb) cur sample (0 ) packets sent 2423
[1600] packets since sample 7 sample tt 12713
[ 2  Mb: 250]        9:9        (100%) (EWMA 100.0%) T       11 F    0 avg  2803 last 42176930
[ 5  Mb: 250]     3139:3139     (100%) (EWMA 100.0%) T     3273 F    0 avg  1433 last 43028656
[ 5  Mb:1600]       29:29       (100%) (EWMA 100.0%) T       39 F    0 avg  5303 last 42192044
[11  Mb: 250]     7560:7560     (100%) (EWMA 100.0%) T     7838 F    0 avg  1857 last 43026094
[11  Mb:1600]     2394:2394     (100%) (EWMA 100.0%) T     2581 F    0 avg  2919 last 43026411

Friday, June 15, 2012

Don't let anyone tell you that FreeBSD doesn't "do" 802.11n:

This is from my FreeBSD-HEAD 802.11n access point, currently doing ~ 130MBit/s TCP:

# athstats -i ath0
41838297     data frames received
31028383     data frames transmit
78260        short on-chip tx retries
3672         long on-chip tx retries
197          tx failed 'cuz too many retries
MCS13        current transmit rate
8834         tx failed 'cuz destination filtered
477          tx frames with no ack marked
239517       rx failed 'cuz of bad CRC
10           rx failed 'cuz of PHY err
    10           OFDM restart
42043        beacons transmitted
143          periodic calibrations
-0/+0        TDMA slot adjust (usecs, smoothed)
45           rssi of last ack
51           avg recv rssi
-96          rx noise floor
812          tx frames through raw api
41664029     A-MPDU sub-frames received
42075948     Half-GI frames received
42075981     40MHz frames received
13191        CRC errors for non-last A-MPDU subframes
129          CRC errors for last subframe in an A-MPDU
2645042      Frames transmitted with HT Protection
351457       Number of frames retransmitted in software
23299        Number of frames exceeding software retry
30674735     A-MPDU sub-frame TX attempt success
374408       A-MPDU sub-frame TX attempt failures
8676         A-MPDU TX frame failures
443          listen time
6435         cumulative OFDM phy error count
161          ANI forced listen time to zero
3672         missing ACK's
78260        RTS without CTS
1469003      successful RTS
239605       bad FCS
2            average rssi (beacons only)
Antenna profile:
[0] tx  1466665 rx        1
[1] tx        0 rx 41838296

Monday, June 11, 2012

A tale of two sequence numbers, or "when QoS seqno and CCMP PN don't match up"..

Many moons ago (say, 3 or 4 weeks - so hm, most-of-a-moon-ago actually) I found a rather curious failure condition in the ath(4) TX aggregation path. The colourful history is documented in FreeBSD kern/166190. In short - there are situations where sequence numbers were allocated in a different order to how frames were being added to the block-ack window tracking, and if you got unlucky, you'd cause the stack to think a frame was (far) outside the BAW.

The 30 second explanation:

Imagine you allocated four frames - sequence numbers 1, 2, 3 and 4. They have to be added to the block-ack window in precisely that order. Ie:

  1. Starting condition: Window is at 0:63 (64 frame window, starting at 0, so ending at 63)
  2. Add 1: Window is now at 0:63, starting at 1
  3. Add 2: Window is now at 0:63, starting at 2
  4. Add 3: Window is now at 0:63, starting at 3.
The reason the window pointer isn't moving along is that although you've sent the frames (or you're about to), you can't advance it until the other end has ACKed them (via a block-ack or a normal ACK.) For more information, google how 802.11n aggregation works.

The important bit here is that the window is still 0:63 and the starting point is now '3'. This continues all the way to trying to queue frame 64, where it will be outside of the current BAW and not be allowed to be transmitted. It'll sit in the software queue and wait until frame '0' has been ACKed and the BAW has been advanced to be 1:64 - at which point frame 64 will fall inside the window and will be transmitted.

So yes, the sender is tracking two things - the BAW and what the starting point is that they've added to the BAW.
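
Since all of this arithmetic is modulo 4096, a tiny helper makes the "is this inside the BAW?" test concrete (my sketch, not the driver's exact code):

#include <stdint.h>

#define SEQ_SPACE   4096    /* 12-bit 802.11 sequence space */
#define BAW_SIZE    64      /* block-ack window size from the example */

static int
baw_contains(uint16_t baw_start, uint16_t seqno)
{
    /* Distance from the window start, with wraparound. */
    uint16_t off = (seqno - baw_start) & (SEQ_SPACE - 1);

    /*
     * With baw_start = 0: seqnos 0..63 are inside; seqno 64 is not,
     * and waits in the software queue until the BAW slides. A seqno
     * "behind" the start wraps to a huge offset (up to 4095), so it
     * is also outside - exactly the stall seen in kern/166190.
     */
    return (off < BAW_SIZE);
}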

Now, imagine instead of (1, 2, 3, 4) on the software queue, I somehow get preempted (or race between two sending threads, when using SMP) between 'allocated seqno' and 'queue to software queue'. In the existing code, a lock was held when:
  • Allocating a sequence number, then it was dropped; then
  • Adding it to the software queue.
Now because there was a period where no lock was being held, it's quite possible that what ends up on the software queue is (2, 1, 3, 4.) So:
  1. Starting condition: Window is at 0:63
  2. Add 2: Window is now 0:63, starting at 2.
  3. Add 1: Window is 0:63, starting at 2; 1 is outside of the BAW (it's treated as a 'wraparound', so imagine it's 4095 seqno's away) so TX stalls.
This was the cause of the TX stalls that I was seeing originally in kern/166190. I "fixed" it by only allocating sequence numbers when the frame was about to be transmitted for the first time, and then adding it to the BAW right there. Since both sequence number allocation and adding to the BAW happened inside the same lock, everything was sweet.

Except, I totally forgot about CCMP PN. So under high enough UDP TX loads (say, > 200MBit), I'd hit the same race, but between 802.11 sequence numbers and CCMP PN sequence numbers.

CCMP PN is assigned during 802.11 encapsulation time, in the driver. In the ath(4) case, it's done during transmit and before being queued to the software queue. And it was being done outside of any locking.  So it's very possible that frames would end up on the software queue with 802.11 and CCMP PN sequence numbers out of lock-step.

What would happen?

Simply - after the 802.11n reordering occurred on the receive side, the CCMP PN replay detector would notice sequence numbers out of order, and start tossing said out of order frames. Lots of packet loss ensued.

So, I sat down and started trying to address it. The simplest thing - wrap the whole encapsulation path between ieee80211_crypto_encap(), 802.11 sequence number assignment and software/hardware queueing behind the TID (well, hardware TXQ) lock. It took some time; I had to revert two earlier commits which introduced the delayed sequence number allocation.
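
The shape of that change, with invented names standing in for the real functions:

/*
 * Invented names; shape only. Sequence number assignment, crypto
 * encapsulation (which allocates the CCMP PN) and the software queue
 * append all happen under the one TX lock, so seqno order, PN order
 * and queue order can no longer drift apart under concurrency.
 */
static int
tx_encap_and_queue(struct tx_tid *tid, struct mbuf *m)
{
    int error;

    TX_LOCK(tid);
    assign_seqno(tid, m);               /* 802.11 seqno, per TID */
    error = crypto_encap(tid->ni, m);   /* assigns the CCMP PN */
    if (error == 0)
        swq_append(tid, m);             /* queue stays in seqno order */
    TX_UNLOCK(tid);

    return (error);
}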

This didn't fix things. So I was back to square one.

I started looking at all the places where the frames were being queued to the software queue and .. well, let's just say I spent Sunday swearing _at myself_ for all the weird and wonderful stupid mistakes I had made when writing/porting this code over.

The short version follows (the long version is "read the sys/dev/ath/ commit logs and the PR history"):
  1. When I was queueing frames to the software queue, I'd check how deep the hardware queue was. If the hardware queue was shallow/empty, I'd direct dispatch up to two frames to the hardware to get things 'busy'. That will (hopefully) let further frames come along in the meantime and be aggregated. However, I was queueing the new frame to the hardware rather than queueing the new frame to the tail of the queue, and queueing the head frame of the queue to the hardware. That led to some out of order behaviour.
  2. ath_tx_xmit_aggr() would check if the sequence number was within the block-ack window and if it wasn't, it'd queue the frame to the tail of the queue. This meant that any new frames that came along would be queued to the end of the queue, even if they had been dequeued from the head of the queue. This led to frames on the software queue being out of order. (There's a sketch of the corrected logic below.)
    1. Frames on the software queue don't have to be in-sequence (as retries are prepended to the beginning of the list, and new frames are appended to the end) however they have to be in-order. If they end up being out of order, the BAW logic fails.
So, now that I allocate sequence numbers at packet queue time, I have to be triply sure that what ends up on the software queue is correctly in order, or the BAW logic will cause traffic stalls and potentially duplicate sequence number issues. Yes, this means that the old behaviour, whilst it now works right with all the right locking, requires me to correctly handle putting frames on the software queue. (Or, as I like to say, "keeping the bastard (me) honest.")
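
Putting those two fixes together, the corrected queue-then-dispatch logic looks something like this (invented names again, reusing the BAW test sketched earlier):

static void
tid_queue_and_kick(struct tx_tid *tid, struct tx_frame *f)
{
    /* New frames ALWAYS go on the tail of the software queue. */
    swq_append(tid, f);

    /* Direct-dispatch from the HEAD while the hardware queue is shallow. */
    while (hwq_depth(tid) < 2) {
        struct tx_frame *head = swq_head(tid);

        if (head == NULL ||
            !baw_contains(tid->baw_start, head->seqno))
            break;          /* empty, or head is outside the BAW: wait */
        swq_remove_head(tid);
        hw_dispatch(tid, head);     /* oldest eligible frame goes first */
    }
}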

TL;DR - 802.11n aggregation works again. Now, to fix those pesky "queue full and I want to send a BAR frame so I can unblock the full queue and transmit" problems. At least that one is more tractable and easier to solve. Or is it.

Wednesday, June 6, 2012

FreeBSD, Netflix, CDN

The big news this week is the Netflix Openconnect platform, which was just announced. It uses off-the-shelf hardware - and FreeBSD + nginx.

The question is how you could spin it.

You could say "Netflix chose FreeBSD because they can keep their changes proprietary." Sure, they could. But they're not building appliances to sell - they own the infrastructure and servers. It's unclear whether they'd have to contribute back any Linux changes if they ran Linux on their Openconnect platform. They're making a conscious, public decision to distribute their changes back to FreeBSD - even though they don't have to.

You could say "Netflix chose FreeBSD because the people inside the company knew FreeBSD." Sure, they may have. The same thing could be said about why start-ups and tech companies choose Linux. A lot of the time it's because they're chasing enterprise support from Red Hat. But technology startups using Ubuntu or Debian tend not to be paying support fees - they hire smart people who know the technology. So, yes - "using what they know."

According to the Netflix Openconnect website:

"This was selected for its balance of stability and features, a strong development community and staff expertise. We will contribute changes we make as part of our project to the community through the FreeBSD committers on our team."

Let's pull this apart a little.

  1. "Balance of stability and features." FreeBSD has long been derided for how slowly it moves in some areas. The FreeBSD developers tend to be a conservative bunch, trying to find the balance between new feature development and maintaining both stability and backwards compatibility.
  2. "Strong community." FreeBSD has a strong technical development community and Netflix finds this very important. They're also willing to join and participate in the community like many other companies do.
  3. "Staff expertise." So yes, their staff are familiar with FreeBSD. They're also familiar with Linux. They chose a platform which they have the expertise to develop, use and improve. They didn't just choose an unfamiliar platform because of marketing brochures or sales promises. I don't see any negatives here. I'm sure that Google engineers chose Linux to begin with because they were familiar with Linux.
  4. "Contribute changes we make as part of our project to the community." Netflix has committed to pushing improvements and fixes back to the upstream project. They contributed some bug fixes in the 10GE Intel driver and IPv6 stack this week. This is collaborative open source working the way it should.
Why would Netflix push back changes and improvements into a project when they're not required to? That's something you should likely ask them. But the same good practice arguments hold for both Linux AND BSD projects:
  • The project is a constantly moving target. If you don't push your changes back upstream, you risk carrying around increasingly large changes as your project and the upstream BSD project diverge. This will just make things more difficult in the long run.
  • By pushing your changes upstream, you make it easier to move with the project - including adopting improvements and new features. If you keep large changes to yourself, you will likely find it increasingly difficult to update your software to the newer upstream versions. And that upstream project is likely adding bug fixes, improvements and new features - which at some point you may wish to leverage. By pushing your changes upstream, you make it a lot easier to move to future versions of the upstream project, allowing you to leverage all those fixes and improvements without too much engineering time.
  • By participating, you encourage others to adopt your technology. By pushing your changes and improvements upstream, you decrease the amount of software you have to maintain yourself (and keep patching as the upstream project moves along.) But you also start to foster technology adoption. The FreeBSD jail project started out of the desire by a hosting company to support virtualisation. Since then, the Jail infrastructure has been adopted by many other companies and individuals.
  • When others use your technology, they also find and fix bugs in your technology; they may even improve it. The FreeBSD jail support has been extended to include IPv6 support, shared memory support and integrates into the VIMAGE (virtualised networking) stack (which, by the way, came from Ironport/Cisco.) As a company, you may find that the community will do quite a lot of the work that you would normally have to hire engineers to do yourself. This saves time and saves money.
  • When companies contribute upstream, it encourages other companies to also contribute upstream. A common issue is "reinventing the wheel", where companies end up having to reinvent the same technology privately because no-one has contributed it upstream. They solve the same problems, they implement the same new features .. and they all spend engineering time and resources to do so.
  • And when companies contribute upstream, it encourages (private) developers to contribute. Open source developers love to see their code out there in the wild, in places they never quite thought of. It's encouraging to see companies build products with their code and contribute back bug fixes and improvements. It fosters a sense of community and participation, of "give and take", rather than just "take". This is exactly the kind of thing that keeps developers coming back to contribute more - and it attracts new developers. Honestly, who wouldn't want to say that some popular device is running code that they wrote in their spare time?
So, you could rant and rave about the conspiracy side of Linux versus FreeBSD. You could rant on about GPL versus BSD. Or, you could see the more useful side of things. You could see a large company who didn't have to participate at all, agreeing to contribute back their improvements to an open source operating system. You could see that by doing so, the entire open source ecosystem benefits - not just FreeBSD. There's nothing stopping Linux or other BSD projects from keeping an eye on the improvements made by Netflix and incorporating those improvements into their own project. And it's another case of a company participating and engaging the open source community - and having that community engage them right back.

Good show, Netflix. Good show.