Saturday, December 4, 2010

802.11n update - WPA works

I traced down WPA not working to broken CCMP support in FreeBSD-HEAD.

This has been broken for some time. The breakage was introduced in r204364, which enabled the multicast key search. I'm not sure why it's broken - I'll have to go through the ath9k/ath5k code; maybe madwifi even has some more up-to-date code in this area.

In any case, that's one less thing to worry about for now. I'm pretty sure that multi-VAP mode has more issues than this so I'll put that on hold for a while.

Next - making the 11n TX path a run-time check rather than a compile-time check; then test it with legacy chipsets to ensure things haven't broken.

Tuesday, November 30, 2010

Saturday night @ ucc: 11n hackery

I'm going to work on the FreeBSD 802.11n support a little more this upcoming Friday night and Saturday at UCC.

My short-short term TODO list:

  1. Figure out why crypto isn't working in 11n mode - I've likely just not fully implemented something there;
  2. Hopefully I'll have the AR9220 based SR-71 cards by then - so if I do, see whether my HAL works on the AR9280 and AR9220 based NICs;
  3. Test the legacy HAL (AR5212 support at least) to make sure I haven't broken that;
  4. Make the TX path run-time configurable rather than a compile #ifdef - ie, if the card says it does 11n, use the 11n-aware TX routines;
  5. Keep breaking apart sys/dev/ath/if_ath.c into smaller parts to make this whole thing more manageable moving forward.
That's all I think I need to finish up before I begin preparation for committing this work to FreeBSD-HEAD.

Saturday, November 20, 2010

A productive night at the computer club ..

I spent most of Saturday at the UWA Computer Club, participating in their "Hardware Hacking Night". Whilst others were hacking on power supplies, coke machines and the like, I sat down to try and figure out how 11n TX works.

Which I did, around 11:30pm last night. I still have a long way to go. The AR9160 NIC seems quite happy pumping out packets all the way up to MCS15. I couldn't figure out how to make things work correctly using the 11n RTS/CTS protection - if I configure that in the TX registers, no packets are ever sent. That's the next thing on my plate to figure out.

Once I figure that out, I'll worry about the 802.11n "niceties" (short-GI and HT-40 mode, to name two) and then see about how to ensure that I'm interoperating (enough!) with legacy non-11n devices. There's also hostap and adhoc modes to test.

Thursday, November 18, 2010

Now that exams are over ..

My university exams are over for another semester. Remind me why I put myself through this? :-)

So now that they're over, my open source work plate now is rapidly filling up again:

FreeBSD 802.11n:
  • 802.11n RX works!
  • But now I need to grovel through ath9k and figure out how 802.11n TX works.
  • I'm blissfully ignoring handling sending or receiving A-MPDU and A-MSDU frames for now. That can come (much) later once I have stable 802.11n TX/RX in station and hostap mode (which is enough of a challenge for now, thanks.)
Lusca:
  • IPv6 client support! I need to finish merging in more stuff to test.
  • Thread manager - this is an important one.
  • Find my RPM build patches and include them in the subversion tree.
Who needs sleep?

Tuesday, October 12, 2010

FreeBSD 802.11n: aiee

I've been toying with getting 802.11n working on these atheros AR9160's.

I've got somewhat working 802.11n on the receive side using my public HAL, up to MCS15. But it (currently) isn't implementing A-MPDU (frame aggregation), which is done in software, so it isn't much faster than legacy mode.

But it -does- work - in 20MHz 11ng mode:

root@OpenWrt:~# iw dev wlan0 station dump
Station 00:15:6d:84:05:52 (on wlan0)
inactive time: 0 ms
rx bytes: 41144714
rx packets: 475307
tx bytes: 1059470517
tx packets: 692698
signal: -50 dBm
tx bitrate: 130.0 MBit/s MCS 15

Thursday, September 16, 2010

bsdbox - take #1

One of the nice things about building embedded Linux "stuff" is busybox. The other is "uClibc", which isn't really the focus of this post.

I've fleshed out a basic busybox-style thing based on the crunched binary framework which generates sysinstall and the rescue binaries. Thankfully, most of the hard work (read: the hacks needed) is done for you - both in the rescue Makefile and the general FreeBSD build framework.

So, I give you "bsdbox" - a not-so-small static binary busybox style solution to help build a standalone image.

# ls -l /sbin/bsdbox
-rwxr-xr-x 1 0 0 2958668 Sep 16 17:44 /sbin/bsdbox

# /sbin/bsdbox

usage: bsdbox <command>, where <command> is one of:

ls cat dd df cp hostname kill mkdir sleep sh -sh dmesg sysctl init reboot
mount mdmfs mount_mfs mdconfig newfs ifconfig route ping true false hexdump
tail netstat chown chgrp arp hostapd hostapd_cli bsdbox

And with that, I have this TP-Link device happily running FreeBSD as a WPA-enabled wireless access point.

Combined with the GEOM uzip module, the entire filesystem (which is the above binary and a few shell scripts) is just slightly above 1 megabyte in size. (geom_lzma will make that even smaller.)

The bsdbox stuff is in my GIT repository in "bsdbox". It's built as part of the base system and installed in /bsdbox/. The binary can then be cherry picked and built into an image with whatever symlinks are needed. There's more to do - more binaries for a start; but also making it a bit easier to configure bits than a single Makefile - but it's useful right now.

Monday, September 13, 2010

Antenna Diversity: Ubiquiti SR-2

The Ubiquiti SR-2 is a high-power 11bg card based on the Atheros AR5213 MAC. It has three antenna fittings - one UFL, one MMCX, and a third set of solder pads for a non-existent connector. The first two are treated as separate antennas and are controlled by an external antenna switch. The third looks like it's an alternative to the UFL connector.

I discovered that these cards occasionally calibrate their noise floor (NF) at some impossibly low level - under -100 dBm. The reason? Antenna diversity is occasionally selecting the MMCX connector as the RX antenna - and this never has an antenna attached. Tsk.

Antenna diversity also often chooses the MMCX antenna for TX. Grr.

Sunday, September 12, 2010

FreeBSD/MIPS: AR9132 and AR9100 wireless support

I've now made the wireless MAC/PHY work on my AR9132 wireless access point - the TP-Link TL-WR1043ND.

The wireless bits are below:

ath0: at mem 0x180c0000-0x180effff irq 0 on nexus0
ath0: [ITHREAD]
eepromdata pointer: 0xbfff1000
eeprom: attaching
eeprom: allocating
eeprom: copying the data!
eeprom: endian check
eeprom: swapsies: 0
ath0: AR9100 mac 20.1 RF2133 phy 10.2

It doesn't currently like client mode - it's something to do with the channel change path; aborting some pending queued frames isn't working. I'll have to go digging at some point soon.

But it works in hostap mode:

# ifconfig wlan0 list sta
ADDR AID CHAN RATE RSSI IDLE TXSEQ RXSEQ CAPS FLAG
1c:4b:d6:97:ac:26 1 6 1M 32.5 15 2 32160 ES AQE WME

# ifconfig wlan0 list channels

Channel 1 : 2412 MHz 11g ht Channel 7 : 2442 MHz 11g ht
Channel 2 : 2417 MHz 11g ht Channel 8 : 2447 MHz 11g ht
Channel 3 : 2422 MHz 11g ht Channel 9 : 2452 MHz 11g ht
Channel 4 : 2427 MHz 11g ht Channel 10 : 2457 MHz 11g ht
Channel 5 : 2432 MHz 11g ht Channel 11 : 2462 MHz 11g ht
Channel 6 : 2437 MHz 11g ht

# ifconfig wlan0 list sta

ADDR AID CHAN RATE RSSI IDLE TXSEQ RXSEQ CAPS FLAG
1c:4b:d6:97:ac:26 1 6 54M 33.5 0 2682 45008 ES AQE WME

# ifconfig wlan0 scan

SSID/MESH ID BSSID CHAN RATE S:N INT CAPS
WLAN Lucy-Gaby 94:44:52:4b:f5:52 6 54M -65:-96 100 EP RSN HTCAP WPA WME
CACHEBOY_DA... 00:25:86:d8:7c:da 6 54M -65:-96 100 EP WPA RSN
davinayarker 00:1c:df:e4:d6:5f 6 54M -83:-96 100 EP MESHCONF MESHCONF WPS HTCAP WPA RSN WME
surbi 00:22:b0:9a:d0:ab 6 54M -88:-96 100 EPS WPA ATH WPS

I haven't yet committed the code anywhere - it's a bit of a mess. In particular:
  • There's some weird stuff surrounding the EEPROM logic determining when to swap what. The existing code assumes that a Big-Endian system requires swapping the byte order - but the EEPROM contents stored in flash are stored in big endian order already! Tsk.
  • There's also a bit of a mess in getting the EEPROM contents out from memory-mapped SPI flash into a private buffer. The linux AR9100 commit(s) mention that accessing it direct can be unreliable.
  • The AR9100 support is sprinkled throughout the AR5416/AR9160 support...
  • .. and I'm sure there's bits that I've missed in porting it from ath9k ..
  • .. and I know there's a few bits here and there which are also missing. I'll check that out later.
I'll look to tidy this code up and get it pushed into something public in the next week or so.

Update

The diff against my GIT repository is available at http://www.creative.net.au/diffs/git_ath_diff_1.diff . The patch includes where the GIT repository can be found.

Tuesday, September 7, 2010

LUSCA and IPv6

The IPv6 development branch (/playpen/LUSCA_HEAD_ipv6 in subversion) now handles IPv6 HTTP clients. It is still limited to IPv4 servers and other protocols (snmp, icmp, htcp, ftp) are all still IPv4.

But it's a (long overdue) start.

I'll look at next fleshing out IPv6-aware SNMP, ICP and HTCP as they're reasonably easy - I just need to change the socket handling to be IPv4/IPv6 agnostic.

I'll begin merging some of the simple framework changes back to mainline Lusca once I've committed that memory leak fix that I've been discussing.

Saturday, August 28, 2010

FreeBSD/MIPS: AR9132 porting

I decided to avoid work a little by porting FreeBSD/MIPS to the AR9132 in the TP-Link TL-WR1043ND 802.11n wireless AP I have here. There's existing OpenWRT code for it so I figured the port wouldn't be that difficult.

It turns out I was right. The AR91xx support took a couple of days to massage into a form which was good enough to commit to FreeBSD-HEAD. I also included some basic AR713x SoC support - not enough to be completely useful, but enough to get a head-start on porting.

I broke out various CPU specific operations into a set of function pointers which a CPU detection function sets up early during the MIPS init code. The main differences between the SoCs are:

* Different base frequencies for setting up RAM, peripheral bus (eg UART), etc
* Different register locations for a few things
* The AR713x has a PCIe bus; the AR71xx has a PCI bus; the AR91xx doesn't have a PCI bus
* The AR9132 has different PLL values for the gigabit ethernet MACs
* Each SoC has a different USB peripheral setup path
* Each SoC has different GPIO layouts
* There's slightly different on-board peripheral reset registers

I've tested the code on the AR71xx and AR9132 and things work fine. I now need to port the AR9100 wireless MAC support from Linux ath9k to complete things. The AR9100 looks a lot like the AR5416/AR9160 (802.11n, 3 radio chains) but it has a few annoying differences:

* It isn't on a PCI bus, so it has to be manually attached;
* A few registers differ in locations (one of which returns 0xdeadbeef if the AR5416 register location is read!)
* The EEPROM is not in the same location(s) as the AR5416 - it needs to be copied and then shoe-horned into the AR5416 eeprom access code.

Also, there's an RTL8366RB gigabit switch device on-board. There's also code in Linux/OpenWRT to support this. I'll look at porting this once the AR9100 support is done.

Friday, August 27, 2010

Satellite instead of NBN? Hello latency..

Today's NBN politik comes thanks to the IT Wire:

http://www.itwire.com/it-policy-news/government-tech-policy/41422-newsat-woos-independents-attacks-nbn

In particular:

"Mr Ballantine today argued that for most of Australia’s geography broadband delivered over satellite would be faster and cheaper. What was right for the city he argued was not necessarily good for the outback."

Maybe for download throughput, sure. But latency plays a huge part in satellite (and 3G style) solutions and it will be noticeable.

Satellite IP doesn't scale. Sorry guys.

Wednesday, July 28, 2010

Authority-by-being-employed-by-Google, rather than by-reproducible-data

Many things rub me up the wrong way. This is one of them.

http://developers.slashdot.org/comments.pl?sid=1734346&cid=33051506

In summary - the post asserts some interesting facts (which I believe, having done high volume HTTP stuff myself) but when asked about benchmarks to back up his assertions, he replies:

"Unfortunately, nothing I can publish without permission. I can say that I'm in charge of maintaining the software that terminates all HTTP traffic for Google. Draw your own conclusions."

I really do dislike "I work in this area in Google" as a measure of authority. My conclusion is that the developer in question, as clever as he should be, should likely read some history books on "authority by being high up in the priesthood" and where that takes people.

Grr.

Monday, July 12, 2010

More IPv6 hackery..

I've been spending a little time fleshing out some more of the IPv6 preparation work in Lusca. Since they're rather intrusive patches, I'm doing the work in a separate branch (/playpen/LUSCA_HEAD_ipv6) and will merge back bits and pieces as needed.

I've migrated the client db, external URL rewriters, access logging and the client-facing connection management code over to be IPv6 aware. I still have the request_t state, ACL lookups (which is luckily done - but sitting in a branch!), further DNS work and the protocol-facing stuff (HTTP, FTP.)

There isn't much more work involved in getting LUSCA_HEAD ready enough to serve IPv6-facing clients. That'll let me push out Cacheboy content over IPv6.

But for now, it's back to hacking on commercial, customer code.

Sunday, July 11, 2010

FreeBSD/MIPS on AR71xx / Routerstation Pro: NIC alignment/size bug fixed!

I found and squished a small bug in the gigabit NIC driver in FreeBSD/MIPS for the AR71xx chipset. It wasn't all that complicated - TX buffers weren't being checked thoroughly enough for alignment/size constraints before being handed to the DMA engine.

It did however fix a few niggling issues. My tunneling stuff was fixed. IPv6 frames are now correctly handled. And ng_nat doesn't cause a panic in the NIC driver. So there's at least three people who are happy. :)

Tuesday, June 22, 2010

Cacheboy: Firefox Release 3.6.4

Cacheboy is currently pushing a good 800-1200Mbit/s of Firefox 3.6.4 updates. It has about 6% of the total Mozilla mirror weight, so I predict that there's currently around 16 gigabits of total Mozilla updates going out.

Lusca is holding up fine - a single process happily pushed ~400Mbit/s on some hosts during the initial peak. I'd obviously like to be able to push a lot more than that but I'm still doing baby steps in the Lusca performance department.

Wednesday, May 19, 2010

HTTP parser/management changes

I've finally committed changes to Lusca which migrate the HTTP Header management routines away from individual malloc/free calls per HTTP Header entry to (hopefully!) one malloc/free for every N. (N being tunable, of course.)

A lot of incremental preparation work was required in the HTTP parser to tidy things up enough to (hopefully!) minimise any impact on stability and functionality.

There really isn't all that much noticeable improvement unless you're working on small, embedded platforms with slow CPUs. It's more of a precursor to further optimisation and reorganisation work. The cumulative effect will be worthwhile.

I've released a tarball with the work - r14674 - which also includes a handful of ATF unit tests in the test-suite/atf/ subdirectory.

I'll likely next work on reorganising some more of the FTP and HTTP protocol specific code which can be migrated out of src/ . I'd like to then spend some more time writing unit tests before (hopefully!) finishing off the IPv6 related DNS changes in preparation for more IPv6 related tomfoolery.

Thursday, May 13, 2010

Unit testing using ATF

I've been toying around with the NetBSD unit testing framework called "ATF". Unlike other well-known FOSS unit testing software, ATF seems to have relatively comprehensive support for writing unit tests in C - which is quite important for a piece of C software.

In fact, it seems quite a few popular bits of FOSS software for unit testing, embedded/API documentation, etc. support C rather poorly. I wonder why.

I've not linked the ATF stubs into the main build tree just yet. The ATF code can be found in the source tree under /test-suite/atf/ . It works enough for me to use "make check" on my iBook (yes, that's a pre-intel, PPC G4 iBook) to check that the basic changes I'm making to the HTTP parser code don't introduce new bugs.

I would love some help writing more unit tests though!

Tuesday, May 4, 2010

The best news is sometimes no news..

Things have been quiet on the Lusca front lately. I poked some of the users who were reporting earlier issues and asked whether said issues were fixed. All of them responded "yes".

So sometimes no news is good news. But you have to sometimes make sure.

Wednesday, April 21, 2010

Modifying the HTTP header parser in Lusca

I've been slowly working towards a variety of medium-term goals in Lusca. I've resisted committing various bits of partially finished work partly because they're works in progress but partially because I'm not happy with how the code fits together.

One of these areas is the HTTP header parser and management routines. The main issues I have with the parser and management code are listed below.
  • Each header entry is represented by separate strings in memory;
  • Each header entry has a small, separately allocated object (HttpHeaderEntry), one per header
  • Parsing the header entries uses various stdio routines to iterate over characters; these may be implemented more slowly (to handle unicode/wide/UTF/locale support) than what's needed here (7-bit ASCII);
  • There are some sanity checks in the header parser - specifically, the duplicate Content-Length check - which are likely better done once the headers have been parsed.
I've been working on the first two items in separate branches. One converts the HttpHeaderEntry items into a single allocated array, which is grown if needed. Another takes the current String API and turns it into fully reference-counted strings. Both of these work fine for me. But shoe-horning it into the current HTTP parser code - which expects individually allocated/created HttpHeaderEntry items which it can destroy on a whim before they're considered a part of the Http Header set - is overly hackish and prone to introduce bugs.

It's taking me quite a bit of time to slowly change the HTTP parser code to be ready for the new management code. Well, it's taken me about 6 months to slowly modify it in a way that doesn't require rewriting everything and potentially changing expected behaviour and/or introduce subtle bugs.

The upshot? Things take time, but the code hopefully will be tidier, cleaner and easier to understand. Oh, and won't include bugs.

Saturday, April 3, 2010

State of the Cygwin/Windows port!

A rather nice chap has been ploughing through the source and making it work under Windows/Cygwin. I've been committing bits and pieces of his work into LUSCA_HEAD as time permits.

You can find the main port details in Issue 94.

Thanks!

Friday, April 2, 2010

Hunting down method_t bugs..

It all started with Issue 99. There was a random crash in the logging code. It looked like a bug in the method handling changes which made it into Squid-2.HEAD a year or two ago. I've been patching issues in the method handling - specifically with NULL and uninitialised method pointers appearing in places - but this time the method_t pointed to junk data.

A bit of digging found that the pointer value did point to a valid method_t structure instance - but something free'd it. Hm. A little further digging found what was going on:
  1. A METHOD_OTHER appeared (an RTSP method) which resulted in a new method_t being malloc'ed;
  2. The pointer was copied to the request_t structure;
  3. The request was processed;
  4. The initial method_t pointer was freed, but the request_t method pointer still pointed to it;
  5. The logging code then logged the stuff said request_t method pointer pointed to - but it was already free'd. Sometimes it'd be junk, sometimes it'd be the original contents.
The original method code (and the "known" methods) all throw around pointers - and copies of pointers - to statically allocated structures which never go away. Unfortunately this logic wasn't changed when the dynamic "other" methods appeared.

So I've been quite busy tidying up the method handling code in preparation for the change in how they're handled. LUSCA_HEAD now has some code which logs potential memory leaks when handling the dynamic methods. I'm going to see if I can come up with a way (or two) to log potential risky situations when items are dereferenced after being free'd. But hopefully I can fix the issue without introducing any further bugs.

Sunday, March 28, 2010

Today's fun bug: invalid swap metadata

One of the Lusca users has issues with swap file contents being invalid. It's linked to the length of the object URL - and fixing this is unfortunately more difficult than first thought.

In essence - the size of the URL, the size of the metadata and the size of the buffers being used for reading data don't match "right". The TLV encoding code doesn't put an upper limit on the size of the metadata. The TLV decoding code doesn't enforce a maximum buffer size - it tries reading the metadata until it finds the end of said metadata. All of this unfortunately results in stupid behaviour when overly long URLs are stored.

The current maximum URL length is MAX_URL - 4096 bytes. Squid-3 may have redefined this to be longer. The reason I haven't done the same is that the URL is included in the swap metadata - and this is only read in SM_PAGE_SIZE chunks - again, 4096 bytes. So if the URL is, say, 4000-ish bytes long, the total length of the encoded metadata is > 4096 bytes. This is happily written out to the swapfile.

Reading the data in, however, is another story. An SM_PAGE_SIZE sized buffer is created and a read is issued. The incomplete metadata is read in. There's unfortunately no check to ensure the passed-in buffer actually contains all of the metadata - so the code happily tramples through potentially uninitialised memory. The end result is at the very least an invalid object which is eventually deleted; it could be worse. I haven't yet investigated.

In any case - I'm now going to have to somehow enforce some sensible behaviour. I'd much prefer to make the code handle arbitrarily long (ie, configurably long) URLs and read in the metadata as needed - but that's a bigger issue which will take some further refactoring and redesigning to solve.

This is all another example of how code works "by magic", rather than "by intent". :)

Thursday, March 25, 2010

Lusca update: important bug fixes, portability work

I've finally found and fixed two annoying bugs in Lusca.

The first is that occasionally the rebuild process failed to properly rebuild the cache contents and the proxy would start with an empty cache. This ended up being due to the undocumented IPC code in Lusca, inherited from Squid, which would "eat" a few bytes at the beginning of the child process' lifetime to establish whether the child was active and ready. It would try to read a hello string ("hi there!\n\0") but it would read up to 32 bytes instead of the exact string length. If the child process started sending data straight away (as my store rebuild helpers do!) the initial part of the conversation could be eaten.

The real fix is to modify the helper IPC handshake so it's two way and fixed length rather than the hacky way it's done now. But the temporary workaround seems to work fine - just read the 11 bytes for the hello string, rather than up to 32 bytes.

The second was a weird situation involving the swap.state files in the UFS store dirs growing to enormous sizes, filling the cache_dir up. This was eventually traced back to the proxy being reconfigured once a minute on some deployments (in this case - pfsense!). The problem was quite simple in the end - a reconfigure would force the swap state logs to be closed and re-opened, but this code didn't know whether the swap state logs were pointing at the live files or the rebuilding ones. During a rebuild, the rebuild process reads from swap.state while new entries are appended to swap.state.new; a reconfigure would close swap.state.new and start appending to swap.state instead. The rebuild process was then reading from the very file being appended to - and thus never quite finished.

Those have been fixed for the *NIX ports and now the rebuild process seems to be moving forward quite swimmingly!

I've also had a user pop up recently who is submitting portability fixes for Cygwin (and under Windows 7, no less!). I've been committing fixes to the way header files are included so the code mostly compiles under both *NIX and Cygwin. Admittedly, most of those portability problems were my fault - I didn't really bother making my new code "autoconf/automake safe" - but, thankfully, revisiting this choice isn't difficult. Thank you for the continuous assistance with the Cygwin changes!

Finally, there's still 46 currently active issues in the issue tracker. Some are just placeholders for me to revisit certain bits of code (eg Issue #85 - figuring out how PURGE works!) but I'm still aiming to address, repair and close the majority of those issues - and release a few more stable snapshots! - before I begin including more development stuff into the main tree.

Not that the snapshots at the moment are unstable - far from it!

Saturday, March 20, 2010

Lusca and HTTP parsing..

I've broken out most of the client-side request parsing code into a separate source file. There's now only a single entry and exit point (and one sideways jump for a comm timeout handler for now) from the client-side main code into the request handling code.

The main aim here is to eventually isolate and rework the whole process with which a HTTP request is parsed and constructed in memory. The process itself isn't all that CPU intensive these days compared to Squid-2.x and Squid-3.x but it is quite ugly. I won't go into the gory details - you can check it out for yourself if you like. Just cast your eyes over src/client_side_request_parser.c in LUSCA_HEAD.

I'm going to leave the request parsing code as it is for now. It's ugly but it works and its inefficiencies are dwarfed by the misuses of memory bandwidth/CPU cycles elsewhere.

I think I've left the codebase slightly easier to understand than before. I think I'm now at the point where I can document a large part of the client-side request and reply handling pipeline. The caching, vary and ETag processing is still very messy and too tightly integrated into the client-side code for my liking but as it also works fine for now I'll be leaving it well alone. There be dragons and all of that.

Monday, March 15, 2010

Lusca Logging, revisited

So the obvious thing to do is to not run the logging path at all, and to avoid evaluating the access control list(s) if there's no access control set defined for logging.

There are a couple of problems with this!

Firstly, the current behaviour is to not pass request/reply information through some of the statistics counters and the clientdb modules if the request doesn't match the logging ACL. If there's no logging ACL then all requests are kicked to the statistics/clientdb counters.

Secondly, there's also the ACLs used in the access_log directives which allow the administrator to filter which requests/replies are logged to which access log file(s). The current implementation will pass all requests which aren't denied by the top-level logging ACL through to each of the access_log entries, checking those for suitability.

The question is - can I mostly maintain the behaviour for these use cases? The main two are:
  1. where logging is completely disabled (via "access_log none") but maintain client counters and the clientdb;
  2. where logging is enabled to one access_log entry, maintaining client counters and the clientdb.
Hm, stuff to do, stuff to do..

Sunday, March 14, 2010

Improving the access logging code

The Squid/Lusca access logging logic is .. well, to be honest, it's quite dirty and inefficient. Yes, like most of Squid. But this is a somewhat bite-sized piece of the puzzle which I can tidy up in one corner without too many effects elsewhere.. or is it?

For high request rate, small object loads on slower FSB hardware, one of the largest CPU users is actually the client-side access log path. There's two culprits here:
  1. memcpy() when copying the "hier" struct from the request struct to the access log entry struct (client_side.c:297, revision 14457); and
  2. The ACL list setup in preparation for writing filtered access log entries to multiple files (client_side.c:314; revision 14457).
The memcpy() is of a 588 byte HierarchyLogEntry struct - it's this large because of two "SQUIDHOSTNAMELEN" (256 byte) long strings embedded in the struct itself. Annoying, but somewhat fixable with a little code refactoring and use of reference counted strings.

The ACL list setup is a bit more problematic. It sets up an ACL checklist using the http_access checklist before then checking log_access / acl_access. It then further may use this ACL checklist when evaluating various access_log lines to allow filtering certain access log entries to certain files.

Now, the latter bites because the really slow paths (setting up and destroying the ACL access stuff) are taken even if the default logging configuration is used - one log file - and there's currently no easy way around that. Furthermore, if you disable logging entirely (access_log none) then the initial setup of the access log entry information is done, the ACL checklist is created, and then it's all tossed away without logging. A bit stupid, eh?

I'll cover what I'm doing in the codebase in a subsequent post (read; when I'm not running off to class.)

Saturday, March 13, 2010

Currently reading: "The Science of Programming" by David Gries

I've been acquiring older computer science textbooks at various second hand book sale type events over the last few years and occasionally I'll pick up what seems to be like a gem which shouldn't have fallen by the wayside in the name of progress.

Today's gem is "The Science of Programming" by David Gries. It's a book first published in the late 1970's which I'm guessing is one of the earlier attempts at a not-quite-academic publication trying to formalise some concepts of program design, provability and deductive reasoning. It does this for a variety of algorithms in a framework which - and this is a useful point - does not involve in any way an object oriented language, functional language, or anything which hides what's mostly going on under the hood from the programmer. Not that I think those are bad things, but I do think that being taught about how things are done closest to how things are run is a good idea. Starting at OO seems to produce programmers who, well, don't seem to have much of a clue about reality and rely on their tools a lot more than they should.

I'm currently re-reading it with a notebook at hand. Luckily for me, the first few chapters involve propositional/predicate logic and deduction stuff which overlaps nicely with my Semantics course in Linguistics. So it's "almost" related to my university degree. Sort of.

Tuesday, March 9, 2010

Some more Lusca Profiling..

More Lusca profiling of a simple test workload:

samples   %       image name      symbol name
234266    6.7052  libc-2.3.6.so   _int_malloc
138031    3.9507  libc-2.3.6.so   vfprintf
111831    3.2008  libc-2.3.6.so   calloc
104393    2.9879  libc-2.3.6.so   malloc_consolidate
 98984    2.8331  libc-2.3.6.so   memcpy
 91783    2.6270  libc-2.3.6.so   _int_free
 72578    2.0773  libc-2.3.6.so   memset
 69068    1.9769  libc-2.3.6.so   free
 50473    1.4446  squid           clientTryParseRequest
 50064    1.4329  squid           memPoolAlloc
 45211    1.2940  libc-2.3.6.so   re_search_internal
 40469    1.1583  squid           httpRequestFree
 39227    1.1228  squid           statHistBin
 38916    1.1139  squid           comm_select
 36974    1.0583  libc-2.3.6.so   _IO_default_xsputn
 36260    1.0378  squid           memPoolFree

.. CPU is still taken up in the standard areas: memory allocation, stdio, and memcpy.

memcpy() I can't tackle right now. malloc() and friends I can tackle, but it may cause strange bugs and I'm not willing to commit anything which upsets stability too much right now. But vfprintf() is fun.

51285 22.7881 libc-2.3.6.so vsprintf
168714 74.9667 libc-2.3.6.so vsnprintf
156434 100.000 libc-2.3.6.so vfprintf

.. so most of the vfprintf() CPU comes in via vsnprintf().

2549 1.0775 squid httpHeaderPutStrf
78253 33.0790 squid memBufVPrintf
153322 64.8121 libc-2.3.6.so snprintf
13601 100.000 libc-2.3.6.so vsnprintf

.. memBufVPrintf():

33231 36.8652 httpHeaderPutStrf
55577 61.6549 packerPrintf
7225 100.000 memBufVPrintf

.. snprintf():

5021 2.8271 squid clientAccessCheckDone
31897 17.9597 squid clientSendHeaders
53339 30.0327 squid xitoa
85548 48.1681 squid urlCanonical
7870 100.000 libc-2.3.6.so snprintf

.. so it's likely that eliminating the scattered calls through the printf() code to do "stuff" - like assembling the URL and request/reply strings - will shave about 4% off this workload. But the biggest issue right now is the stupidly large amount of CPU being used in the memory allocation routines.

But the annoying one is memcpy():

10557 7.3665 squid httpAccept
20779 14.4992 squid stringDup
37010 25.8250 squid httpHeaderEntryPackInto
47882 33.4113 squid connStateFree
143177 100.000 libc-2.3.6.so memcpy

.. there's no memcpy() call in connStateFree. Which means I need to go hunting to figure out what's going on. Grr.

Monday, March 8, 2010

Why are some Squid/Lusca ACL types slower than others? And which ones?

This post should likely be part of the documentation!

One thing which hasn't really been documented is the relative speed of each of the Squid/Lusca ACL types. This is important to know if you're administering a large Squid/Lusca install - it's entirely possible that the performance of your site will be massively impacted by the wrong ACL setup.

Firstly - the types themselves:
  1. Splay trees are likely the fastest - src, dst, myip, dstdomain, srcdomain;
  2. The wordlist checks are linear, but they move hits back to the top of the list to speed up lookups of the most popular items - portname, method, snmp community, urlgroup, hiercode;
  3. The regular expression checks are also linear and also reshuffle the list based on the most popular items - url regex, path regex, source/destination domain regex, request/reply mime type.
Now the exceptions! Some types require DNS lookups before they can be evaluated - eg "dst" needs a forward lookup of the destination hostname, and "srcdom_regex"/"dstdom_regex" may need a reverse lookup to turn an IP back into a domain name.

A lot of places will simply use URL regular expression ACLs ("url_regex") to filter/forward requests. Unfortunately these scale poorly under high load and are almost always the reason a busy proxy server is pegged at full CPU.

I'll write up an article explaining how to work around these behaviours if enough people ask me nicely. :)

Thursday, February 25, 2010

Open Source Economy?

I'm reposting this from a Buzz (eww) that I responded to; I'd appreciate feedback and comments.

The article:

Lobby Group Says Open Source Threatens Capitalism


My response:

That article is .. well, a silly summary. It's comparing apples to oranges in a way.

Open/Free Source Software creates a very low entry barrier to a variety of interesting possibilities. It means companies can leverage this to create their own solutions without necessarily having to spend a large amount of money on in-house development or expensively licenced solutions. In a way, Open/Free Source is forcing a large part of the commercial market to compete better.

But the question is how you make that sustainable. Do I think the current way Open/Free Source is used in companies is sustainable? Sometimes yes, sometimes no - I've worked on open source projects in both camps.

Everything looks fine right now with a large wad of Open/Free projects because the popular ones have a lot of inertia behind them. But I wonder if there are longer-term flow-on effects in the economy. Plenty of companies which use and abuse open/free software don't contribute very much back to the projects. The cost savings they're passing on sound great in theory, but in practice the "community" is mostly covering those costs without their investment.

So the interesting question is: at what point (if any) does open/free software use tip the software economy to where developing new solutions is no longer cost-effective enough to compete with the established open source base - and what would that mean for future software development?

Saturday, February 20, 2010

Lusca: more reorganisation, new features


More Lusca updates!

I've further split up the client-side processing. Refresh, IMS handling, ETag handling, location rewriting and various range request support code is now in separate modules. The code isn't all that much easier to follow unless you have a rough idea of what's going on, but it's getting there. One of the long-standing issues in the Squid/Lusca codebase is how much of the caching logic is done as part of the client-side handling; I'd like to continue slowly unwinding this so I can start sliding in processing hooks in useful places. I have eventual evil plans here; I'll talk more when they're coming closer to fruition.

I've also been whittling away a little at the lack of code documentation for the client-side request and reply framework. I'll commit some more comments to client_side.c and the associated routines as I explore things some more.

I've got a very overdue contract to sort out a TOS bit tagging feature. I'm putting in the basic framework to make this happen. The first step is a new ACL type - dstfwdip - which matches on the destination IP the request is being forwarded to. There's a bunch of other side-changes which need to occur before I can slot in the TOS marking map logic (and I still haven't figured out where -that- will be!) so stay tuned. The aim is to provide a simple way to tag client (and maybe also server) requests with TOS bits based on some property of the request - and this includes the destination IP/network the request is being forwarded to.

Sunday, February 14, 2010

ibook hackery, or "why apple probably did the magnetic connector stuff."

I didn't have the foresight to take pictures of the incident beforehand so I'll have to make do with a verbose description of the problem.

Symptom: the iBook just doesn't seem to handle power very well. "Jiggling" the connector around helps. But if it doesn't get jiggled right - and further jiggling adds to the problem - the laptop won't charge fully or .. well, stay on.

It also can't be good for the power electronics to be fed bursts of a couple of amps every few seconds when I'm not paying attention.

So, after much anger, and Yet Another CD Not Ejecting Properly From The DVD Drive, I decided to strip the iBook down (again!) to take a look.

It takes quite a bit of disassembly to be able to reach the CD drive to replace it. It doesn't take all that much to get to the power input daughterboard. Removal of the daughterboard showed a couple of interesting issues. Firstly, there were some cracked, dry joints around the power connector. Secondly, those dry joints were blackened with what I'm guessing was a whole lot of DC electrical arcing.

A little bit of hot soldering iron action later and the damage was reversed. The power feed seems much more stable now. Thank god! I can stave off replacing the G4 iBook for another day.

This leads me to the subject of this post. Yes, the magnetic DC power connector Apple introduced into their laptop range is very nifty, but it also makes sense. Even just the normal insert/remove cycle of these power connectors could cause joints to crack and this sort of damage to occur. There's a flipside - I've seen at least one magnetic connector blackened with DC arcing after something unknown to me had occurred. So there's still apparently a slight chance of DC arcing happening - but my pet cat won't be destroying anything by dashing across the room and taking the laptop with it.

Saturday, February 13, 2010

Lusca: more reorganisation

I've been doing some further Lusca re-organisation in preparation for the next round of evil changes.

The client-side code has been split up a little bit. The breakup has almost exclusively been shifting functions into separate source files to better delineate function - the only refactoring was to extract some common client-side connection creation from the HTTP and HTTPS connection paths.

My next little bit of fudgery will be to extract out some more of the storage related code into a top-level library so some further unit testing and code reuse becomes easier.

There's still far, far too much code in the client-side request and reply path. I think I've mentioned it earlier - there's very little code involved in the server->client data pump itself; the majority of the performance issues stem from the initial request and reply processing.

Monday, February 1, 2010

FreeBSD-current on the RouterStation Pro

I've been working on shoehorning FreeBSD-current onto the RouterStation Pro. It's a rather cheap but nice Atheros MIPS board by Ubiquiti. It was tricky, but doable. I now have a cut down kernel and memory root filesystem stored on the on-board flash - enough to run it as a basic access point.

FreeBSD-current has support for the chipset - it's the AR71XX kernel config file. I've added a few more devices (USB, disk, MSDOS, redboot) to the default kernel. The default cross-build system works just fine.

I've been using TFTP loaded kernel+mdroot images to test out standalone functionality, along with TFTP + NFS root images to run a mostly-full development environment.

The board uses RedBoot as a bootloader. There's a tool from Ubiquiti which generates firmware images that the bootloader understands and which overwrite the non-system area of the on-board flash. This makes dumping an image onto the system trivial. The bootloader can also boot (uncompressed) kernel+mdroot images via TFTP, as well as booting a compressed version written to flash - so I can easily test the standalone images without constant re-flashing.

RedBoot makes it easy to TFTP upload a replacement flash image written out by the Ubiquiti tool. It takes care of erasing, repartitioning and copying into the onboard flash without overwriting or damaging the RedBoot code, flash partition and configuration areas.

There was a bug with the PCI probe code until a couple of days ago where PCI cards (ie, stuff like the mini-PCI radio cards!) wouldn't be enumerated unless bootverbose was specified. Gonzo traced it down to a missing DELAY() - Linux was waiting a lot longer between probes than FreeBSD was. PCI devices now probe and attach correctly.

The geom_redboot module doesn't attach to the on-board flash (device "spi") - it only probes "cfi" flash devices. A quick patch to src/sys/geom/geom_redboot.c to also probe flash/spi has fixed this. I've asked Gonzo/Warner to investigate this and possibly commit a fix. With this in place, FreeBSD can mount a compressed, read-only filesystem from the "rootfs" flash slice rather than needing to pack a kernel+mdroot into the kernel flash slice.

So far, so good. I've written out a basic image that can be coaxed into being an open (no authentication) access point.

Interesting links:

Instructions on re-flashing the unit (but it doesn't mention that the reset button must be held down during power on!): http://www.usualcoding.eu/post/2010/01/15/Flash-OpenWRT-on-the-Ubiquiti-RouterStation-Pro

My FreeBSD router-station pro wiki: http://wiki.freebsd.org/AdrianChadd/UbiquityRouterstationPro

My work-in-progress tarball of "stuff" for building firmware images for the RouterStation Pro: http://people.freebsd.org/~adrian/rspro/

Monday, January 25, 2010

PS3 hacked! Or, "I'd pay extra for a totally open PS3."

This slashdot article covers this news article which describes a hacker's success at breaking through the PS3 hypervisor protection layers to get full hardware access. A few of the slashdot posts describe why people may want full PS3 hardware access, and why Sony may be keeping the full platform access away from the user.

To be perfectly honest, I'd pay extra for a PS3 (and XBox 360 too!) which was unlocked for software that I've written myself, and it wouldn't be used to run pirated games. I'm sure others feel the same way. I'd love to see more hardware manufacturers do this.

Sigh, I can but only dream. :)

Saturday, January 16, 2010

Hardware hackery: Adding IO to an Amstrad CPC


One of the benefits to hacking on the old Commodore 64 is that the user port exposes quite a few bidirectional IO pins which can be easily programmed from BASIC. The Amstrad CPC doesn't really have these in the same fashion. Well, ok, it does - if you count the joystick pins as "input" and the printer port pins as "output". But having a fully programmable IO controller is helpful.

So I breadboarded a basic IO board a couple months ago using an 8255 PIO and some TTL logic to handle address decoding. This worked fine on the Amstrad CPC464 but it didn't work on the CPC6128. A bit of digging into the method used for address decoding showed what I did wrong.


The Amstrad CPC base peripheral IO list can be found here. The peripheral IO is decoded by the high 8 bits of the address bus - each peripheral attaches an address line (and /IORQ) to the relevant chip /CS line. The expansion peripheral line is A10, and the "selection" decoding lines are A9 and A8. So, A10 needs to be low, and A8+A9 let you either address multiple peripherals or the registers inside the peripheral itself.

This was easy - a bit of logic to say /CS = low IFF /IORQ is low and A10 is low.

This is fine on an unexpanded CPC464, but the CPC6128 floppy disk controller decodes A10 low, A8 low AND A7 low. A7 is the "FDD peripheral expansion" line. So my 8255 PPI board was being accessed at the same time as the FDD controller. This caused all kinds of random lights to blink and the system to get highly confused.

The logic now is /CS = low IFF /IORQ is low, A10 is low and A7 is high.

Strictly speaking, I should ensure that A7-A4 are 1110 - ie, A4 is low, the rest are high.

According to the CPC464 manual, when A10 is low, A4 low is "user", A5 low is "RS232" (for the Amstrad/PACE RS232 adaptor), A6 low is "future", and A7 low is "disk".

Thursday, January 14, 2010

The Origin Of Nerd Phrases, or Robert Heinlein

I've read a few Heinlein novels and have been struck by how much of them turns up as quoted material - especially in email signatures! By far the highest density of quoted material is in the first few chapters of "Time Enough For Love."

Also, the more Heinlein I read, the more cynical I seem to be leaning.