Friday, November 30, 2012
Be careful of adding debugging, as microseconds count..
Two words: Debug Code.
Well, to be more specific - I added some debugging code that by default didn't do anything. But it was still there; it checked a debug flag and, finding it disabled, logged nothing. But even that check takes time to execute. Since that debugging code sat _between_ the routines doing the math on the RX timestamp and programming the nexttbtt register, the extra delay meant it calculated a slightly larger TSF offset.
Once I moved the debug code out from where it was and grouped all that register access and math together, the slot timing swings dropped by a few microseconds and everything went back to being smooth.
Tsk. I should've known better.
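To make the fix concrete, here's a minimal sketch of the shape of the final code - tsf_to_nexttbtt() and ath_hal_settbtt() are hypothetical stand-ins, not the actual ath(4) routines:

static void
ath_tdma_update_timers(struct ath_softc *sc, uint32_t rstamp)
{
	struct ath_hal *ah = sc->sc_ah;
	uint64_t tsf;
	uint32_t nexttbtt;

	/* Keep the TSF read, offset math and timer write adjacent. */
	tsf = ath_hal_gettsf64(ah);
	nexttbtt = tsf_to_nexttbtt(tsf, rstamp);	/* hypothetical math helper */
	ath_hal_settbtt(ah, nexttbtt);			/* hypothetical timer write */

	/* The debug check now sits after the timing-critical section. */
	if (sc->sc_debug & ATH_DEBUG_TDMA)
		device_printf(sc->sc_dev, "tsf=%ju nexttbtt=%u\n",
		    (uintmax_t) tsf, nexttbtt);
}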
At least now the TDMA code is working well on the 802.11n chips. Yes, it's still only 802.11abg rates, but it works. I've also found the PCU MISC_MODE bit that enforces packets aren't transmitted outside of the burst window, and that is working quite fine with TDMA.
So, I think I can say "mission accomplished." I'll tidy up a few more things and make sure TX only occurs from one data queue (as mentioned in my previous post, they all burst independently at the moment..) and then patiently wait for someone to implement 802.11n adhoc negotiation so that 802.11n MCS rates and aggregation magically begin to work. Once that's done, 802.11n TDMA will become a reality.
Tuesday, November 27, 2012
Getting TDMA working on 802.11n chipsets
But, if you tried bringing up TDMA on the Atheros 802.11n chips, it plain just didn't work. Lots of people gnashed teeth about it. I was knee deep in TX aggregation work at the time so I just pushed TDMA to the back of my mind.
How it works is pretty cute in itself. To setup a TX "slot", the beacon timer is used to gate the TX queues to be able to start transmitting. Then a "channel ready time" burst length is configured, which is the period of time the TX queue can transmit. Once that timer expires, no new TX is allowed to begin. Sam then slides the slave TX window along based on when it sees a beacon from the master, as everything is synchronised against that.
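As a rough sketch of how that burst window maps onto the hardware, assuming my memory of the HAL's per-queue "ready time" interface is right (treat the exact fields as illustrative):

static void
tdma_set_burst(struct ath_hal *ah, int qnum, uint32_t burst_usec)
{
	HAL_TXQ_INFO qi;

	/* Fetch the queue config, bound its burst, and push it back. */
	ath_hal_gettxqueueprops(ah, qnum, &qi);
	qi.tqi_readyTime = burst_usec;	/* how long the queue may transmit */
	ath_hal_settxqueueprops(ah, qnum, &qi);
	ath_hal_resettxqueue(ah, qnum);	/* commit the settings to hardware */
}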
Luckily, someone did some initial investigation and discovered that a couple of things were very very wrong.
Firstly, when fetching the next target beacon transmission time ("TBTT"), the AR5212 era NICs returned it in TU, but the AR5416 and later returned it in TSF.
Secondly, the TSF from each RX frame on the AR5212 is only 15 bits; on the AR5416 and later it's 32 bits. The wrong logic was used when extending the AR5416's 32-bit RX frame timestamp to 64 bits, and it was causing the TSF to jump all over the place.
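In code, the two fixes look roughly like this - a sketch in the style of the driver's helpers, with names and exact masks illustrative:

/* AR5212-era TBTT is in TU; one TU is 1024us, so shift to TSF units. */
static uint64_t
tbtt_tu_to_tsf(uint32_t tbtt_tu)
{
	return ((uint64_t) tbtt_tu << 10);
}

/*
 * Extend a 32-bit AR5416 RX timestamp to 64 bits against the full
 * TSF, handling the low 32 bits wrapping between frame receipt and
 * the TSF read.
 */
static uint64_t
extend_tsf32(uint32_t rstamp, uint64_t tsf)
{
	if ((tsf & 0xffffffffULL) < rstamp)
		tsf -= 0x100000000ULL;	/* borrow: TSF wrapped since RX */
	return ((tsf & ~0xffffffffULL) | rstamp);
}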
So with that in place, he managed to stop the NICs from spewing stuck beacons everywhere (a classic "whoa, who set up the timers wrong!" symptom) and got two 11n NICs configured in a TDMA setup. But he reported the traffic was very unstable, so he had to stop there.
Fast-forward about 12 months. I've finished the TX aggregation and BAR handling; I've debugged a bunch of AP power save handling and I'm about to reimplement some things to allow me to finish off AP power save handling (legacy/ps-poll and uapsd) in a sane, correct fashion. I decide, "hey, TDMA shouldn't be that hard to fix. Hopefully there are no chip bugs, right?" So, I plug in a pair of AR5413 (pre-11n NICs) and get it up and running. Easy. Then I plug in an AR5416 as the slave node, and .. it worked. Ok, so why was he reporting such bad results?
Firstly, Sam exposed a bunch of useful TDMA stats from "athstats". Specifically, if you start tinkering with TDMA, do this:
$ athstats -i ath0 -o tdma 1
input output bexmit tdmau tdmadj crcerr phyerr TOR rssi noise rate
619817 877907 25152 25152 -4/+6 142 143 1 74 -96 24M
492 712 20 20 -0/+7 0 0 0 74 -96 24M
496 720 20 20 -2/+6 0 0 0 74 -96 24M
500 723 21 21 -6/+4 0 0 0 75 -96 24M
Then, here's what remains to be fixed:
- The "tx time" calculation needs to be aware of the 11n rate configuration, so it can calculate the guard time correctly. Right now it uses the non-11n aware rate -> duration HAL function;
- The TX path has to be rejiggled a bit to ensure _all_ traffic gets stuffed into one TX queue (well, besides beacons.) Management and higher priority traffic has to do this too. If not, then multiple TX queues can burst and they'll burst separately, blowing out the TX slot timing;
- Someone needs to get 11n adhoc working, so that 11n rates are negotiated during adhoc peer establishment. Then aggregation can just magically work at that point (the TDMA code reuses a lot of adhoc mode vap behaviour code);
- 802.11e / 802.11n delayed block-ACK support needs to be implemented;
- Then when doing TDMA, we can just burst out an aggregate or two inside the given slot time, then wait for a delayed block ACK to come back from the remote peer in the next slot time! Yes, I'd like to try and reuse the standard stuff for doing delayed block-ack rather than implementing something specific for 802.11n aggregation + TDMA.
- .. and yes, it'd be nice for this to support >2 slave terminals, but that's a bigger project.
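For the first item, the shape of the fix is to switch on the rate type when computing the slot guard time. A sketch - ath_hal_computetxtime() is the legacy HAL call mentioned above, while the HT branch and its exact signature are assumptions:

static uint32_t
tdma_tx_time(struct ath_hal *ah, const HAL_RATE_TABLE *rt,
    uint32_t pktlen, int rix, int is_ht40, int is_sgi)
{
	/* MCS rates need an HT-aware duration calculation. */
	if (rt->info[rix].phy == IEEE80211_T_HT)
		return (ath_computedur_ht(pktlen, rt->info[rix].rateCode,
		    1 /* streams: assumption */, is_ht40, is_sgi));
	/* Legacy rates: the existing rate -> duration HAL function. */
	return (ath_hal_computetxtime(ah, rt, pktlen, rix, AH_TRUE));
}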
Tuesday, November 20, 2012
Making the AR5210 NIC work in the office..
The AR5210. It's Atheros's first 11a-only NIC. It does up to 54MBit OFDM 802.11a; it doesn't do QoS/WME (as it only has one data queue); it "may" go up to 72MBit if I hack on some magic extensions. And in open mode, it works great.
But it didn't work in the office or at home, both of which have 802.11n APs with WPA2 authentication and AES-CCMP encryption.
Now, the AR5210 only does open and WEP encryption. It doesn't do TKIP or AES-CCMP. So the encryption has to happen in software. The NIC was associating fine, but when wpa_supplicant went to program in the AES-CCMP encryption keys, the HAL simply refused.
What I discovered was this.
The driver keycache code was trying to allocate keycache slots for the AR5210, which only supports the 4 WEP key slots. This is a big no-no. So once I mapped them all to slot 0, I made a little progress.
The net80211 layer was trying to program in an AES-CCMP key, which the driver was dutifully passing to the HAL. The AR5210 HAL doesn't support anything but WEP or open, so the encryption key type was "clear". Now, "clear" means "for this MAC address, don't try decrypting anything." But the AR5210 HAL code rejected it - as I said, it doesn't do that.
Ok, so I ignored that entirely. I mapped all of the software encrypted key entries to slot 0 and just didn't program the hardware. So now the HAL didn't reject things. But it wasn't working. The received frames were being corrupted somehow and failed the CCMP MIC integrity check. I took a look at the frames being received (which should've been "clear") versus what was going on in the air - luckily, this laptop has an AR9280 inside, so I could put it into monitor mode and sniff things. The packets just didn't add up. I was confused.
Then after discussing this with my flatmate, I idly wondered if the hardware was decrypting the traffic anyway. And, well, it was. Encrypted frames have the WEP bit set in the 802.11 header - whether they're WEP, TKIP or AES-CCMP. The AR5210 didn't know it wasn't WEP, so it tried decoding the frames itself. And corrupted them in the process.
So after finding a PCU control register (hi AR_DIAG_SW) that lets me disable encryption/decryption, I was able to pass through the encrypted traffic fine and everything just plain worked. It's odd seeing an 11a, non-QoS station on my 11n AP, but that just goes to show that backwards interoperability is still useful.
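For the curious, the disable looks something like this - AR_DIAG_SW is the real register, but these bit names are placeholders rather than the HAL's actual definitions:

#define	AR_DIAG_SW_DIS_ENC	0x00000008	/* placeholder: no hw encrypt */
#define	AR_DIAG_SW_DIS_DEC	0x00000010	/* placeholder: no hw decrypt */

/* Tell the PCU to pass frames through without touching crypto. */
OS_REG_WRITE(ah, AR_DIAG_SW,
    OS_REG_READ(ah, AR_DIAG_SW) | AR_DIAG_SW_DIS_ENC | AR_DIAG_SW_DIS_DEC);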
And yes, I did take the AR5210 into the office and I did sit in a meeting with it and use it to work from. It let me onto the corporate wireless just fine, thankyou.
So now the FreeBSD AR5210 support doesn't do any hardware encryption. You can turn it on again if you'd like. Why? Because I don't want the headache of someone coming to me and asking why a dual-VAP AP with WEP and CCMP is failing. The hardware can only do _either_ WEP/open with hardware encryption, _or_ it can do everything without hardware encryption. So I decided to just disable it for now.
There's also a problem with how encryption is specified to net80211. It's done at startup time, when the driver attaches. Anything that isn't specified as being done in hardware is done in software. There is currently no clean way to dynamically change that configuration. So, if I have WEP encryption in hardware but CCMP/TKIP in software, I have to dynamically flip on/off the hardware encryption _AND_ I have to enforce that WEP and CCMP doesn't get configured at the same time.
The cleaner solution would be to:
- Create a new driver attribute, which indicates the hardware can do WEP and CCMP at the same time - make sure it's off for the AR5210;
- Add a HAL call to enable/disable hardware encryption;
- If a user wants to do WEP or open - enable hardware encryption;
- If a user wants to do CCMP/TKIP/etc - disable hardware encryption;
- Complain if the user wants to create a VAP with CCMP/TKIP and WEP.
Thursday, October 4, 2012
Power save, CABQ, multicast frames, EAPOL frames and sequence numbers (or why does my Macbook Pro keep disassociating?)
So I went digging into it a bit further (and it took quite a few evenings to narrow down the cause.) Here's the story.
Firstly - hostapd kept kicking off my station. Ok, so I had to figure out why. It turns out that the group rekey would occasionally fail. When it's time to do a group rekey, hostapd will send a unicast EAPOL frame to each associated station with the new key and each station must send back another EAPOL frame, acknowledging the fact. This wasn't happening so hostapd would just disconnect my laptop.
Ok, so then I went digging to see why. After adding lots of debugging code I found that the EAPOL frames were actually making it to my Macbook Pro _AND_ it was ACKing them at the 802.11 layer. Ok, so the frame made it out there. But why the hell was it being dropped?
Now that I knew it was making it to the end node, I could eliminate a bunch of possibilities. What was left:
- Sequence number is out of order;
- CCMP IV replay counter is out of order;
- Invalid/garbled EAPOL frame contents.
Then I realised - ok, all the traffic is in TID 16 (the non-QoS TID.) That means it isn't a QoS frame but it still has a sequence number; so it is allocated one from TID 16. There's only one CCMP IV number for a transmitter (the receiver tracks a per-TID CCMP IV replay counter, but the transmitter only has one global counter.) So that immediately rings alarm bells - what if the CCMP IV sequence number isn't being allocated in a correctly locked fashion?
- My mac does go in and out of powersave - especially when it does a background scan.
- When the mac is doing 150Mbps+ of test traffic, it doesn't do background scans.
- When it's doing 100Mbps of traffic, the stack sneaks in a background scan here and there.
- Whenever it goes into background scan, it sends a "power save" to the AP..
- .. and the AP puts all multicast traffic into the CABQ instead of sending it to the destination hardware queue.
- Now, when this occurred, the EAPOL frames would go into the software queue for TID 16 and the ARP/multicast/etc traffic would go into the CABQ;
- .. but the CABQ has higher priority, so it'll be transmitted just after the beacon frame goes out, before the EAPOL frames in the software queue.
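This matters because of how CCMP replay detection works at the receiver: given the single-transmitter-counter behaviour described above, the packet number (PN) must strictly increase. A generic sketch of the receive-side check - illustrative, not net80211's exact code:

static int
ccmp_replay_check(uint64_t *rsc, uint64_t pn)
{
	if (pn <= *rsc)
		return (0);	/* PN went backwards: drop as a replay */
	*rsc = pn;		/* accept; advance the replay counter */
	return (1);
}

So the EAPOL frame - PN allocated before the CABQ traffic's, but transmitted after it - arrives "in the past", gets dropped, and the group rekey never completes.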
Lessons learnt from fiddling with the rate control code..
Once I had implemented filtered frames and did a little digging, I found that the rate control code was doing some relatively silly things. Lots of rates were failing quite quickly and the rate control was bouncing all over the place.
The first bug I found was that I was checking the TX descriptor completion status before I had copied it over - and so I was randomly marking TX frames as failed when they hadn't failed. Oops.
Next, don't call the rate control code on filtered frames. They've been filtered, not transmitted. My code wasn't doing that - I'm just pointing it out to anyone else who is implementing this.
Then I looked at what was going on with rate control. I noticed that whenever the higher transmission rates failed, it took a long time for the rate control code to try sampling them again. I went and did some digging - and found it was due to a coding decision I had made about 18 months ago: I treated higher-rate failures with a low EWMA success rate as successive failures. The ath_rate_sample code treats "successive failures" as "don't try to probe this rate for ten seconds." Now, there are a few things you need to know about 802.11n:
- The higher rates fail, often;
- The channel state changes, often;
- Don't be afraid to occasionally try those higher rates; it may actually work out better for you even under higher error rates.
- Only randomly sample a few rates lower than the current one; don't try sampling all 6, 14 or 22 rates below the high MCS rates;
- Don't treat low EWMA as "successive failures"; just let the rate control code oscillate a bit;
- Drop the EWMA decay time a bit to let the oscillation swing a little more.
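A sketch of what that EWMA bookkeeping looks like, with an illustrative decay constant (ath_rate_sample's real constants and structures differ):

#define	EWMA_DECAY_PCT	25	/* weight for the newest sample: assumption */

struct rate_stats {
	int ewma_pct;		/* smoothed success rate, 0..100 */
};

static void
update_ewma(struct rate_stats *rs, int ok, int sent)
{
	int cur = (sent > 0) ? (ok * 100 / sent) : 0;

	/*
	 * Blend in the new sample. A shorter decay lets the estimate
	 * oscillate with the channel, rather than pinning a briefly
	 * unlucky rate as "failed" for ten seconds.
	 */
	rs->ewma_pct = (rs->ewma_pct * (100 - EWMA_DECAY_PCT) +
	    cur * EWMA_DECAY_PCT) / 100;
}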
Wednesday, October 3, 2012
Filtered frames support, or how not to spam the air with useless transmission attempts
- All the frames are filtered at the point the first fails, so you get all the subsequent attempted frames back, in the order you queued them. This makes life much easier when retransmitting them;
- The MAC now doesn't waste lots of time trying to transmit to destinations that aren't available. So if you have a bunch of UDP traffic being pushed to a dead or far away node, the airtime won't be spent trying to transmit all those frames. The first failure will filter the rest, freeing up the TX queue (and air) to transmit frames to other destinations;
- When stations go into power save mode, you may have frames already in the hardware queue for said station. You can't cancel them (easily/cleanly/fast), so instead they'll just fail to transmit (as the receiver is asleep.) Now you just get them filtered; you store them away until the station wakes up and then you retransmit them. A little extra latency (which is ok for some things and not others!) but much, much lower packet loss.
Secondly, if a station is just far away or under interference, you'll end up filtering a lot of traffic to it. So a lot of frames will cycle through the filtered frames handling code. Right now in FreeBSD I'm treating them the same as normal software retransmissions and dropping them after 10 attempts. I have a feeling I need to fix that logic a bit as under heavy dropping conditions, the traffic is being prematurely filtered and prematurely dropped (especially when the node is going off-channel to do things like background scans.) So the retransmission and frame expiry is tricky. You can't just keep trying them forever as you'll just end up wasting TX air time and CPU time transmitting lots of frames that will just end up being immediately filtered. Yes, tricky.
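For reference, the completion path for these looks roughly like the following. HAL_TXERR_FILT is the HAL's real status bit; the retry bound and helper names are illustrative:

static void
ath_tx_comp_filtered(struct ath_softc *sc, struct ath_tid *tid,
    struct ath_buf *bf, const struct ath_tx_status *ts)
{
	if ((ts->ts_status & HAL_TXERR_FILT) == 0)
		return;			/* not filtered; normal completion */

	/*
	 * Filtered, not transmitted: hold the frame for retransmission,
	 * but bound the retries so a dead/far-away node doesn't pin
	 * buffers (and airtime) forever.
	 */
	if (++bf->bf_state.bfs_retries > SWMAX_RETRIES)
		ath_tx_drop(sc, tid, bf);		/* illustrative */
	else
		ath_tx_tid_filt_queue(sc, tid, bf);	/* illustrative */
}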
Friday, July 20, 2012
Reading rate control information from userland..
- There's a new ioctl to ath(4) to query the rate control module for a single associated MAC address (or the BSS MAC when running as a STA);
- Since the rate control is currently done at the driver level rather than at the VAP level, the call is to the driver rather than via the VAP (wlanX) interface;
- There's no easy way to get "all" station details whilst maintaining correct locking.
The last point deserves a little more explanation. I've introduced (well, I'm now _using_) a per-node lock when doing rate control updates. I acquire this lock when copying the rate control data out, so the snapshot is consistent.
So to fetch the state for a node, the following occurs:
- Call the net80211 layer to find an ieee80211_node for the given mac address - that involves locking the node table and getting a reference for the node (if found);
- Then locking the ath_node associated with it;
- Copy the data out;
- Unlock the ath_node;
- Decrement the ieee80211_node reference counter (which requires the node table lock.)
Now, the node table lock is only held whilst fetching the node reference. It isn't held whilst doing the actual rate control manipulation. Compare that to what I'd do if I wanted to walk the node table. The net80211 API for doing this holds the node table lock whilst walking the node list. This means that I'll end up holding the node table lock whilst acquiring the ath_node lock. Now, that's fine - however, if I then decide somewhere else to try and do any ieee80211 operation whilst holding the ath_node lock, I may find myself with a lock ordering problem.
So for now the API will just support doing a single lookup for a given MAC, rather than trying to pull all of the rate control table entries down at once.
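In code, that single-MAC fetch looks something like this - ieee80211_find_node() and ieee80211_free_node() are net80211's; the per-node lock macros and stats field are in the driver's style but illustrative:

static int
ath_rate_fetch_stats(struct ath_softc *sc, struct ieee80211com *ic,
    const uint8_t *mac, struct ath_rc_stats *out)
{
	struct ieee80211_node *ni;
	struct ath_node *an;

	/* Takes the node table lock and grabs a node reference. */
	ni = ieee80211_find_node(&ic->ic_sta, mac);
	if (ni == NULL)
		return (ENOENT);

	an = ATH_NODE(ni);
	ATH_NODE_LOCK(an);			/* per-node rate control lock */
	memcpy(out, &an->an_rc_stats, sizeof(*out));	/* consistent snapshot */
	ATH_NODE_UNLOCK(an);

	ieee80211_free_node(ni);	/* drop the ref (node table lock again) */
	return (0);
}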
Here's an example output from the command:
adrian@marilyn:~/work/freebsd/ath/head/src/tools/tools/ath/athratestats]> ./athratestats -i ath1 -m 06:16:16:03:40:d0
static_rix (-1) ratemask 0xf
[ 250] cur rate 5 Mb since switch: packets 1 ticks 43028655
[ 250] last sample (11 Mb) cur sample (0 ) packets sent 10708
[ 250] packets since sample 16 sample tt 6275
[1600] cur rate 11 Mb since switch: packets 15 ticks 43025720
[1600] last sample (5 Mb) cur sample (0 ) packets sent 2423
[1600] packets since sample 7 sample tt 12713
[ 2 Mb: 250] 9:9 (100%) (EWMA 100.0%) T 11 F 0 avg 2803 last 42176930
[ 5 Mb: 250] 3139:3139 (100%) (EWMA 100.0%) T 3273 F 0 avg 1433 last 43028656
[ 5 Mb:1600] 29:29 (100%) (EWMA 100.0%) T 39 F 0 avg 5303 last 42192044
[11 Mb: 250] 7560:7560 (100%) (EWMA 100.0%) T 7838 F 0 avg 1857 last 43026094
[11 Mb:1600] 2394:2394 (100%) (EWMA 100.0%) T 2581 F 0 avg 2919 last 43026411
Friday, June 15, 2012
Don't let anyone tell you that FreeBSD doesn't "do" 802.11n:
This is from my FreeBSD-HEAD 802.11n access point, currently doing ~ 130MBit/s TCP:
# athstats -i ath0
41838297 data frames received
31028383 data frames transmit
78260 short on-chip tx retries
3672 long on-chip tx retries
197 tx failed 'cuz too many retries
MCS13 current transmit rate
8834 tx failed 'cuz destination filtered
477 tx frames with no ack marked
239517 rx failed 'cuz of bad CRC
10 rx failed 'cuz of PHY err
10 OFDM restart
42043 beacons transmitted
143 periodic calibrations
-0/+0 TDMA slot adjust (usecs, smoothed)
45 rssi of last ack
51 avg recv rssi
-96 rx noise floor
812 tx frames through raw api
41664029 A-MPDU sub-frames received
42075948 Half-GI frames received
42075981 40MHz frames received
13191 CRC errors for non-last A-MPDU subframes
129 CRC errors for last subframe in an A-MPDU
2645042 Frames transmitted with HT Protection
351457 Number of frames retransmitted in software
23299 Number of frames exceeding software retry
30674735 A-MPDU sub-frame TX attempt success
374408 A-MPDU sub-frame TX attempt failures
8676 A-MPDU TX frame failures
443 listen time
6435 cumulative OFDM phy error count
161 ANI forced listen time to zero
3672 missing ACK's
78260 RTS without CTS
1469003 successful RTS
239605 bad FCS
2 average rssi (beacons only)
Antenna profile:
[0] tx 1466665 rx 1
[1] tx 0 rx 41838296
Monday, June 11, 2012
A tale of two sequence numbers, or "when QoS seqno and CCMP PN don't match up"..
The 30 second explanation:
Imagine you allocated four frames - sequence numbers 1, 2, 3 and 4. They have to be added to the block-ack window in precisely that order. Ie:
- Starting condition: Window is at 0:63 (64 frame window, starting at 0, so ending at 63)
- Add 1: Window is now at 0:63, starting at 1
- Add 2: Window is now at 0:63, starting at 2
- Add 3: Window is now at 0:63, starting at 3.
Things go wrong if those two steps get separated - for example:
- Allocating a sequence number, then having the frame delayed or dropped; then
- Adding it to the software queue behind newer frames.
- Starting condition: Window is at 0:63
- Add 2: Window is now 0:63, starting at 2.
- Add 1: Window is 0:63, starting at 2; 1 is outside of the BAW (it's treated as a 'wraparound', so imagine it's 4095 seqno's away) so TX stalls.
- When I was queueing frames to the software queue, I'd check how deep the hardware queue was. If the hardware queue was shallow/empty, I'd direct dispatch up to two frames to the hardware to get things 'busy'. That will (hopefully) let further frames come along in the meantime and be aggregated. However, I was direct-dispatching the _new_ frame to the hardware, rather than queueing the new frame to the tail of the software queue and dispatching the frame at the head of the queue to the hardware. That led to some out-of-order behaviour.
- ath_tx_xmit_aggr() would check if the sequence number was within the block-ack window and, if it wasn't, it'd queue the frame to the tail of the queue. So a frame that had just been dequeued from the head of the queue could be requeued at the tail, behind newer frames. This led to frames on the software queue being out of order.
- Frames on the software queue don't have to be in-sequence (as retries are prepended to the beginning of the list, and new frames are appended to the end) however they have to be in-order. If they end up being out of order, the BAW logic fails.
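The resulting dispatch logic is easy to sketch: new frames always go on the tail (via something like ATH_TID_INSERT_TAIL()), and transmission always pulls from the head, stalling rather than reordering when the head frame sits outside the BAW. Macro and function names below follow the driver's style but are illustrative:

static void
ath_tx_tid_sched_inorder(struct ath_softc *sc, struct ath_node *an,
    struct ath_txq *txq, struct ath_tid *tid, struct ieee80211_tx_ampdu *tap)
{
	struct ath_buf *bf;

	while ((bf = ATH_TID_FIRST(tid)) != NULL) {
		/* Head outside the BAW? Stall until the window slides. */
		if (!BAW_WITHIN(tap->txa_start, tap->txa_wnd,
		    SEQNO(bf->bf_state.bfs_seqno)))
			break;
		ATH_TID_REMOVE(tid, bf, bf_list);
		ath_tx_xmit_aggr(sc, an, txq, bf);	/* strictly head-first */
	}
}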
Wednesday, June 6, 2012
FreeBSD, Netflix, CDN
The question is how you could spin it.
You could say "Netflix chose FreeBSD because they can keep their changes proprietary." Sure, they could. But they're not making appliances that they're selling - they're owning the infrastructure and servers. It's unclear whether they'd have to contribute back any Linux changes if they ran Linux on their open connect platform. They're making a conscious, public decision to distribute their changes back to FreeBSD - even though they don't have to.
You could say "Netflix chose FreeBSD because the people inside the company knew FreeBSD." Sure, they may have. The same thing could be said about why start-ups and tech companies choose Linux. A lot of the time its because they're chasing enterprise support from Redhat. But technology startups using Ubuntu or Debian tend not to be paying support fees - they hire smart people who know the technology. So, yes - "using what they know."
According to the Netflix Openconnect website:
"This was selected for its balance of stability and features, a strong development community and staff expertise. We will contribute changes we make as part of our project to the community through the FreeBSD committers on our team."
Let's pull this apart a little.
- "Balance of stability and features." FreeBSD has long been derided for how slowly it moves in some areas. The FreeBSD developers tend to be a conservative bunch, trying to find the balance between new feature development and maintaining both stability and backwards compatibility.
- "Strong community." FreeBSD has a strong technical development community and Netflix finds this very important. They're also willing to join and participate in the community like many other companies do.
- "Staff expertise." So yes, their staff are familiar with FreeBSD. They're also familiar with Linux. They chose a platform which they have the expertise to develop, use and improve. They didn't just choose an unfamiliar platform because of marketing brochures or sales promises. I don't see any negatives here. I'm sure that Google engineers chose Linux to begin with because they were familiar with Linux.
- "Contribute changes we make as part of our project to the community." Netflix has committed to push improvements and fixes back to the upstream project They contributed some bug fixes in the 10GE Intel driver and IPv6 stack this week. This is collaborative open source working the way it should.
- The project is a constantly moving target. If you don't push your changes back upstream, you risk carrying around increasingly larger changes as your project and your BSD upstream project diverge. This will just make things more difficult in the long run.
- By pushing your changes upstream, you make it easier to move with the project - including adopting improvements and new features. If you keep large changes to yourself, you will likely find it increasingly difficult to update your software to the newer upstream versions. And that upstream project is likely adding bug fixes, improvements and new features - which at some point you may wish to leverage. By pushing your changes upstream, you make it a lot easier to move to future versions of the upstream project, allowing you to leverage all those fixes and improvements without too much engineering time.
- By participating, you encourage others to adopt your technology. By pushing your changes and improvements upstream, you decrease the amount of software you have to maintain yourself (and keep patching as the upstream project moves along.) But you also start to foster technology adoption. The FreeBSD jail project started out of the desire by a hosting company to support virtualisation. Since then, the Jail infrastructure has been adopted by many other companies and individuals.
- When others use your technology, they also find and fix bugs in your technology; they may even improve it. The FreeBSD jail support has been extended to include IPv6 support, shared memory support and integrates into the VIMAGE (virtualised networking) stack (which, by the way, came from Ironport/Cisco.) As a company, you may find that the community will do quite a lot of the work that you would normally have to hire engineers to do yourself. This saves time and saves money.
- When companies contribute upstream, it encourages other companies to also contribute upstream. A common issue is "reinventing the wheel", where companies end up having to reinvent the same technology privately because no-one has contributed it upstream. They solve the same problems, they implement the same new features .. and they all spend engineering time and resources to do so.
- And when companies contribute upstream, it encourages (private) developers to contribute. Open source developers love to see their code out there in the wild, in places they never quite thought of. It's encouraging to see companies build products with their code and contribute back bug fixes and improvements. It fosters a sense of community and participation, of "give and take", rather than just "take". This is exactly the kind of thing that keeps developers coming back to contribute more - and it attracts new developers. Honestly, who wouldn't want to say that some popular device is running code that they wrote in their spare time?
Wednesday, May 23, 2012
Fixing BAR handling and handle corner cases of things..
This exposed a very annoying problem - what if the driver runs out of ath_buf entries to schedule TX frames? Or, what if the network stack runs out of mbufs? If we need to allocate an ath_buf/mbuf to send a BAR frame, but they're all allocated and unavailable, the driver/wireless stack will come to a grinding halt. Typically these allocated ath_buf's are allocated in the software queue, waiting for the BAR TX (or power-save wakeup) to send a frame.
So, I haven't fixed this. It's on my (very) short term to-do list. But it did expose some issues in how the net80211 BAR send code (ieee80211_send_bar()) works. In short - it didn't handle resource allocation failures at all. It worked fine if the driver send method (ic->ic_raw_xmit()) succeeded and just failed to TX the frame. But if it couldn't allocate an mbuf, or if the driver send method failed.. things just stopped. And when the BAR TX just stopped, the ath(4) software TX queue would just keep buffering frames, right until all the TX ath_buf entries were consumed.
This is obviously .. sub-optimal.
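One way to handle it is to treat allocation failure like any other transmit failure and re-arm a retry timer, so the BAR eventually goes out once resources return. A sketch - ieee80211_getmgtframe() and ic_raw_xmit() are real net80211 interfaces, but the timer field and retry callback here are assumptions:

static int
bar_transmit(struct ieee80211_node *ni, struct ieee80211_tx_ampdu *tap)
{
	struct ieee80211com *ic = ni->ni_ic;
	struct mbuf *m;
	uint8_t *frm;

	m = ieee80211_getmgtframe(&frm, ic->ic_headroom,
	    sizeof(struct ieee80211_frame_bar));
	if (m == NULL) {
		/* No mbuf: retry later instead of silently stalling. */
		callout_reset(&tap->txa_timer, hz / 10,
		    bar_retry_timeout, tap);	/* hypothetical callback */
		return (ENOMEM);
	}
	/* ... build the BAR header at 'frm' ... */
	if (ic->ic_raw_xmit(ni, m, NULL) != 0) {
		/* Driver couldn't take it: same deal - retry, don't stop. */
		callout_reset(&tap->txa_timer, hz / 10,
		    bar_retry_timeout, tap);
		return (EIO);
	}
	return (0);
}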
But this raises an interesting point - how much of your kernel and/or userland application handles resource shortages correctly? I've seen plenty of userland software just not check whether malloc() returned NULL and I've seen some that specifically terminate (non-gracefully) if malloc()/calloc() fails - Squid does this. But what about your network stack? How's it handle mbuf shortages? What about the driver stack? What about net80211 (ew)? What if the kernel malloc() API has to sleep because there's no free memory available?
I don't (currently) have an answer - it's a difficult, cross-discipline problem. What I -can- do though (at least in my corner of the FreeBSD world - net80211 and ath(4)) is to start testing some of these corner cases, where I force some resource shortages and ensure that the wireless stack and driver(s) recover somewhat gracefully. 802.11n is very unforgiving if you start dropping frames involved in an active aggregation session. So it's best I try and address these sooner rather than later.
FreeBSD/arm -
http://www.embeddedartists.com/products/kits/lpc3250_kit.php
This is excellent news. There's also been some recent work to improve pandaboard support (TI-OMAP) focusing on pmap and SMP fixes.
I'm so very glad that the FreeBSD community is pushing this ARM project along. Yes, the armv6 branch is not very well named (it supports more recent ARM platforms, not just armv6 improvements) and I'm hoping this will all make it into FreeBSD-HEAD soon.
Sunday, April 8, 2012
And the winner of the most committing committer to src/sys over the last 12 months is ..
(Source: http://people.freebsd.org/~peter/commits.html)
.. but I wouldn't call myself the most important committer. Or the most active. What I'd call myself is the "most active fixing a sorely needed corner of the codebase."
What I _could_ have done is simply do all my work in a branch and then merge it back into -HEAD when I was done. And, for about 6 months, this is what I did. The "if_ath_tx" branch is where I did most of the initial TX aggregation work.
But as time goes on, your branch diverges more and more from the master branch (-HEAD in FreeBSD) and you are faced with some uncomfortable decisions.
If you stay on the same branch point and never merge in anything from your master branch, you _may_ have a stable snapshot of code, but who knows how stable (or relevant) your work will be when you merge it back into master.
You have no idea if your work will break anything in master and you have no idea if changes in master have broken your work.
As time goes on, the delta between your branch point and the master branch increases, making it even more difficult to do that final merge back. It also has the side effect of making it increasingly likely that problems will occur with the merge (your code breaking master, master breaking your code, etc.)
So as uncomfortable as it was - and as much as I wanted things to stay stable - I did press through with relatively frequent merging. This means:
- I would pick specific development targets to work towards, at which point I'd stop developing and go into a code review/tidyup/testing phase;
- I'd do frequent merges from master back into my branch during active development - I wouldn't leave this until I was ready to merge my work back into master;
- Once I reached my development target and had done sufficient testing - including integrating changes from master back into my branch - I'd kick off a semi-formal review (read: email freebsd-wireless@) and call for testers/review;
- Only _then_ would I merge what was suitable back into master.
I wouldn't merge everything from my branch into master. In my instance, there were some debugging extensions that were easy to maintain (read: lots of device_printf() calls) but weren't suitable for FreeBSD-HEAD. But I merged the majority of my work each time.
But that doesn't always work. I managed to merge a bunch of ath(4), ath_hal(4) and net80211 fixes back into -HEAD as appropriate. But the TX aggregation code was .. well, rather large. So I attempted to break up my commit into as many small, self-contained functional changes as possible. Yes, there was a big "here's software TX queue and aggregation" as a big commit at the end but I managed to peel off more than 30% of that in the lead-up commits.
Why bother doing that?
Two words - version bisection. Once I started having users report issues, they would report something like "FreeBSD-HEAD revision X worked, revision Y didn't." (If I were lucky, of course.) Or, they'd note that a certain snapshot from a certain day worked, but the next day had a regression. If I had committed everything as one enormous commit after having spent 6 + months on the branch, I'd be in for a whole lot of annoying line-by-line debugging of issues. Instead, I was able to narrow down most of the regressions by trying all the different commits.
Now that 802.11n ath(4) TX aggregation and general 802.11n support is in the tree, I only use branches for larger scale changes that take a couple of weeks. For example, when fixing up the reset path to not drop any TX/RX frames. I do most of the bugfixing in FreeBSD-HEAD. I could do it in a branch and then do monthly merges, but I then have the same problems I've listed above.
In summary: don't underestimate how helpful it is to break down your commits into little, piecemeal, self-contained functional changes. It has the side effect of making you look really good in the committer statistics.
Thursday, April 5, 2012
The initial introduction into "it's the NIC, stupid!"
In my case, an IBM/Lenovo Thinkpad T60 has been modified (not by me) to take an Atheros AR9280 NIC. Unfortunately, the NIC was proving to be very unstable when doing 802.11n throughput. The investigations did show I was doing something slightly incorrect with TX descriptors (and I've since fixed that) but the stability issues remained.
The Atheros NICs can expose some host interface error conditions via the AR_INTR_SYNC_CAUSE register. These include PCI(e) transaction timeouts, illegal chip access (eg whilst the MAC is asleep), parity errors, and other rather nice things. FreeBSD's HAL and Linux's ath9k do have the register definitions for what the bits mean - but unfortunately we don't keep statistics.
In my particular case:
- I'd see AR_INTR_SYNC_LOCAL_TIMEOUT occur. This is because a PCI(e) transaction didn't complete in time. I can tune these timeouts via a local register but that's not the point - I was seeing these errors when receiving only beacons from the access point. That's a bit silly.
- I'd also see AR_INTR_SYNC_RADM_CPL_DLLP_ABORT, which is an indication that the PCIe layer isn't behaving well.
I swapped it out with another AR9280-based NIC and suddenly all the instabilities went away. No TX hangs, no missed TX interrupts. Everything looked great.
So as an open source developer, I want to try and put some tools into the hands of the community to be able to debug what's going on - or, if that's not possible, at least get some indication that things are going wrong. Right now the only thing people see is "I see TX timeouts, it must be the driver/chip fault." There's too much going on to be able to conclude that.
My game plan is this:
- Implement statistics keeping for each of the SYNC interrupts and expose those via a diagnostic interface (a sketch follows this list). Ben Greear has done something similar for Linux ath9k after a private email discussion. He's also seeing MAC sleep accesses, so it's quite likely we'll start finding/squishing these.
- Take the offending laptop/NIC to the office and attach it to a very expensive and fancy looking PCIe analyser. I'm hoping we'll find something really silly occurring - like lots of sleep state transitions, or a high number of parity errors.
- Try documenting this a lot better so users are able to understand what's going on when their NIC is misbehaving.
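The statistics-keeping part is simple enough to sketch (the stats field is a placeholder for whatever the diagnostic interface ends up exporting):

static void
ath_intr_sync_stats(struct ath_softc *sc, uint32_t sync_cause)
{
	int i;

	/* Count each AR_INTR_SYNC_CAUSE bit as it fires. */
	for (i = 0; i < 32; i++)
		if (sync_cause & (1U << i))
			sc->sc_sync_intr[i]++;	/* placeholder field */
}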
Sunday, March 18, 2012
Concurrency in the TX path and when it all falls down..
Monday, February 13, 2012
.. and the price of packaging up software? Billions.
http://blog.james.rcpt.to/
To quote:
"In my analysis the projected cost of producing Debian Wheezy in February 2012 is US$19,070,177,727 (AU$17.7B, EUR€14.4B, GBP£12.11B), making each package’s upstream source code wrth an average of US$1,112,547.56 (AU$837K) to produce. Impressively, this is all free (of cost)."
Now this has apparently caused a bit of a stir among Linux and IT news sites. It's a large number, right? It's all free, right?
However - Debian for the most part is a package repository. Sure, a lot of effort goes into building and maintaining that - and I think _that_ should be assigned a cost - but I think counting all of those package upstreams as part of Debian is hiding the true nature of software development.
According to Ohloh, the cost of producing FreeBSD, at $72,000 a year per programmer, would be $243,777,135 - and that's _before_ all of the packages in the FreeBSD ports repository. FreeBSD has over 23,000 packages too, just like Debian - if those were also counted, they may well push that figure up from the hundreds of millions into the billions.
Does it mean Debian Wheezy is the equivalent of $20B of effort? Maybe. But then a comparatively tiny amount of extra effort gets you FreeBSD. Or, with different effort - Redhat. Or Gentoo. Or Mandrake. Or NetBSD. Or OpenBSD. Or MacPorts/Fink, which package much of the same software for MacOS X.
What I guess I'm trying to say is this. You get cool stuff from programmers for free. Including that in your project "cost" just seems silly.