Monday, June 29, 2009

Current Downtime/issues

There's a current issue with content not being served correctly. It stemmed from a ZFS-related panic on one of the backend servers (note to self: update to the very latest FreeBSD 7-STABLE code; those panics are all fixed there!), which then came back up with lighttpd running but without the ZFS mounts. Lighttpd then started returning 404s.

I'm now watching the backend(s) throw random connection failures, and the Lusca caches are then caching an error rather than the object.

I've fixed the troublesome backend so it won't start up in that failed mode again, and I've set the negative caching on the Lusca cache nodes to 30 seconds instead of the default 5 minutes. Hopefully traffic levels will now pick back up to where they're supposed to be.
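
For reference, the knob in question is the standard Squid/Lusca negative_ttl directive; the change on the cache nodes amounts to something like this (just a sketch of the relevant line, not my full config):

# cache negative responses (404s, connect failures and the like) for
# 30 seconds rather than the default 5 minutes
negative_ttl 30 seconds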

EDIT: The problem is again related to the Firefox range requests and Squid/Lusca's inability to cache range request fragments.

The backend failure(s) removed the objects from the cache. The problem now is that the objects aren't re-entering the cache, because the incoming requests are all range requests.

I'm going to wind down the Firefox content serving for now until I get some time to hack up Lusca "enough" to cache the range request objects. I may just do something dodgy with the URL rewriter to force a full object request to occur in the background. Hm, actually..

Saturday, June 27, 2009

New mirror node - italy

I've just turned on a new mirror node in Italy thanks to New Media Labs. They've provided some transit services and (I believe) 100mbit access to the local internet exchange.

Thanks guys!

Friday, June 26, 2009

Lusca in Production, #2

Here are some 24-hour statistics from a Lusca-HEAD forward proxy install:

  • Request rate
  • File descriptor count (so clients, servers and disk FDs)
  • Byte and request hit ratio
  • Traffic to clients (blue) and from the internet (red)
  • CPU utilisation (1.0 is "100% of one core")

Lusca-HEAD pushing >100mbit in production..


Here's the Lusca-HEAD install under FreeBSD 7.2 + the TPROXY patch. This is a basic configuration with minimal customisation. There's ~600GB of data in the cache, out of around 5TB of total disk storage.


You can see when I turned on half of the users, then all of the users. I think there's now around 10,000 active users sitting behind this single Lusca server.

Tuesday, June 23, 2009

Lusca-head in production with tproxy!

I've deployed Lusca-HEAD (the very latest revision) in production for a client who is using a patched FreeBSD-7.2-STABLE to implement full transparency.

I'm using the patches and ipfw config available at http://tproxy.no-ip.org/ .
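
For the curious, the interception half of the ipfw setup looks roughly like the following. This is only a sketch under assumed details (clients on 10.0.0.0/8 arriving on em0, Lusca listening on port 3129); the complete ruleset, including the rules that steer the spoofed reply traffic back through the proxy, is what the patches and instructions at the site above provide.

# hand outbound port-80 traffic from the client side to Lusca
ipfw add 100 fwd 127.0.0.1,3129 tcp from 10.0.0.0/8 to any 80 in via em0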

The latest Lusca fixes some of the method_t-related crashes stemming from work done last year in Squid-2.HEAD. It seems quite stable now. The bugs only get tickled by invalid requests, so they show up in production but not in local testing. Hm, I need to "extend" my local testing to include generating a wide variety of errors.

Getting back on track: I've also helped another Lusca user deploy full transparency using the TPROXY4 support in the latest Linux kernel (under Debian unstable, I believe). He helped me iron out some of the bugs which I just hadn't seen in my local testing. The important ones (method_t in particular) have been fixed; he's been filing Lusca issues in the Google Code tracker so I don't forget them. Ah, if only all users were as helpful. :)

Anyway, it's nice to see Lusca in production. My customer should be turning it on for their entire satellite link (somewhere between 50 and 100mbit, I think) in the next couple of days. I believe the other user has enabled it for 5000-odd users. I'll be asking them both for some statistics to publish once the caches have filled and been tuned.

Stay tuned for example configurations and tutorials covering how this all works. :)

Wednesday, June 17, 2009

And the GeoIP summary..

And the geoip summary:


From Sun Jun 7 00:00:00 2009 to Sun Jun 14 00:00:00 2009



Country     MBytes        Requests
us          5163783.09    6533162
de          1514664.22    2307222
ca          1152095.00     917777
fr           948433.27    1451105
uk           945640.71    1136455
it           818161.03     770164
br           542497.79    1426306
se           482932.15     229559
es           445444.34     647321
pl           397755.30    1021083
nl           373185.13     306023
ru           368124.64     749924
tr           293627.27     484965
mx           276775.12     463252
be           249088.62     213460
ch           201782.33     209530
ro           190059.45     274216
fi           172399.75     204630
ar           170421.77     374071
no           169351.46     155258

Tuesday, June 16, 2009

A quick snapshot of Cacheboy destinations..

The following is a snapshot of the per-destination AS traffic information I'm keeping.


If you're peering with any of these ASes and are willing to sponsor a Cacheboy node or two, please let me know. How well I can scale things at this point is rapidly becoming limited by where I can push traffic from, rather than by anything intrinsic to the software.


From Sun Jun 7 00:00:00 2009 to Sun Jun 14 00:00:00 2009



ASN        MBytes       Requests   % of overall   AS name
AS3320     602465.01    1021975    3.26           DTAG Deutsche Telekom AG
AS7132     583164.05     778259    3.16           SBIS-AS - AT&T Internet Services
AS19262    459322.30     603127    2.49           VZGNI-TRANSIT - Verizon Internet Services Inc.
AS3215     330962.95     553299    1.79           AS3215 France Telecom - Orange
AS3269     317534.06     333114    1.72           ASN-IBSNAZ TELECOM ITALIA
AS9121     259768.32     434932    1.41           TTNET TTnet Autonomous System
AS22773    244573.65     283427    1.32           ASN-CXA-ALL-CCI-22773-RDC - Cox Communications Inc.
AS12322    224708.25     343686    1.22           PROXAD AS for Proxad/Free ISP
AS3352     206093.84     305183    1.12           TELEFONICADATA-ESPANA Internet Access Network of TDE
AS812      204120.74     166633    1.10           ROGERS-CABLE - Rogers Cable Communications Inc.
AS8151     198918.22     328632    1.08           Uninet S.A. de C.V.
AS6327     197906.53     152861    1.07           SHAW - Shaw Communications Inc.
AS3209     191429.18     303787    1.04           ARCOR-AS Arcor IP-Network
AS20115    182407.09     225151    0.99           CHARTER-NET-HKY-NC - Charter Communications
AS2119     181719.20     117656    0.98           TELENOR-NEXTEL T.net
AS577      181167.02     152383    0.98           BACOM - Bell Canada
AS12874    172973.42     108429    0.94           FASTWEB Fastweb Autonomous System
AS6389     165445.73     236133    0.90           BELLSOUTH-NET-BLK - BellSouth.net Inc.
AS6128     165183.07     210300    0.89           CABLE-NET-1 - Cablevision Systems Corp.
AS2856     164332.96     219267    0.89           BT-UK-AS BTnet UK Regional network

Query content served: 5234195.61 MBytes; 6878234 requests (i.e., what is displayed in the table above).


Total content served: 18473721.25 MBytes; 26272660 requests (i.e., the total amount of content served over the time period).

Saturday, June 13, 2009

Seeking a few more US / Canada hosts

G'day everyone!

I'm now actively looking for some more Cacheboy CDN nodes in the United States and Canada. I've got around 3gbit of available bandwidth in Europe and 1gbit in Japan, but only 300mbit in North America.

I'd really, really appreciate a couple of well-connected North American nodes so I can properly test the platform and software that I'm building. The majority of traffic is still North American in destination; I'm having to serve a fraction of it from Sweden and the United Kingdom at the moment. Erk.

Please drop me a line if you're interested. The node requirements are at http://www.cacheboy.net/node_requirements.html . Thank you!

Friday, June 12, 2009

Another day, another firefox release done..

The June Firefox 3.0.11 release rush is all but over and Cacheboy worked without much of a problem.

The changes I've made to the Lusca load-shedding code (i.e., being able to disable it :) work well for this workload. Migrating the backend to lighttpd (and fixing up the ETag generation to be properly consistent between 32-bit and 64-bit platforms) fixed the initial issues I was seeing.

The network pushed out around 850mbit at peak. Not a lot (heck, I can do that on one CPU of a mid-range server without a problem!) but it was a good enough test to show that things are working.

I need to teach Lusca a couple of new tricks, namely:


  • It needs to be taught to download at the fastest client speed, not the slowest; and

  • Some better range request caching needs to be added.



The former isn't too difficult - that's a weekend five-line patch. The latter is more difficult. I don't really want to shoehorn range request caching into the current storage layer. It would look a lot like how Vary and ETag are currently handled (i.e., with "magical" store entries acting as indexes to the real backend objects). I'd rather put in a dirtier hack that's easy to undo now and use the opportunity to tidy up the whole storage layer properly later. But the "tidying up" rant is not for this blog entry; it's for the Lusca development blog.

The hack will most likely be a little logic to start downloading the full object when the first range request for an uncached object comes in, so subsequent range requests for that object are "glued" to the in-progress request. It means those subsequent requests will "stall" until enough of the object has transferred to start satisfying their range. The alternative is to pass each range request through to a backend until the full object has transferred; that would improve initial performance, but at some point the backend could be overloaded with too many range requests for highly popular objects, which starts affecting how fast the full objects are transferred.
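
To make the "glue" idea concrete, here's a toy model in plain C. It has nothing to do with Lusca's actual data structures - all the names are mine - it just shows one full-object fetch in flight, with range requests stalling until the download has covered the bytes they asked for.

/*
 * Toy model only: one in-flight full-object fetch, with range requests
 * "glued" to it and satisfied as the data arrives.
 */
#include <stdio.h>

struct range_req {
    long start, end;            /* inclusive byte range the client asked for */
    int done;
};

struct inflight {
    long object_size;           /* full object length from the backend */
    long bytes_in;              /* how much of the full fetch has arrived */
    struct range_req reqs[8];   /* clients glued to this fetch */
    int nreqs;
};

/* A new range request attaches to the existing full-object fetch instead
 * of going to the backend on its own. */
static void glue_range(struct inflight *f, long start, long end)
{
    f->reqs[f->nreqs++] = (struct range_req){ start, end, 0 };
}

/* Called as each chunk of the full object arrives: wake any glued range
 * request whose last byte is now available. */
static void data_arrived(struct inflight *f, long chunk)
{
    f->bytes_in += chunk;
    for (int i = 0; i < f->nreqs; i++) {
        struct range_req *r = &f->reqs[i];
        if (!r->done && r->end < f->bytes_in) {
            printf("range %ld-%ld satisfied after %ld bytes\n",
                   r->start, r->end, f->bytes_in);
            r->done = 1;
        }
    }
}

int main(void)
{
    struct inflight f = { .object_size = 2178196 };

    glue_range(&f, 0, 65535);           /* the range request that arrived first */
    glue_range(&f, 1048576, 1114111);   /* a later fragment; stalls until covered */

    /* Simulate the full-object download arriving in 256KB chunks. */
    while (f.bytes_in < f.object_size)
        data_arrived(&f, 262144);

    return 0;
}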

As a side note, I should probably do up some math on a whiteboard here and see if I can model some of the potential behaviour(s). It would certainly be a good excuse to brush up on my higher maths. Hm..!

Thursday, June 11, 2009

Migrating to Lighttpd on the backend, and why aren't my files being cached..

I migrated away from Apache 1.3 to lighttpd 1.4.19 to handle the load better. Apache 1.3 handles lots of concurrent disk I/O on large files fine, but it bites with lots of concurrent network connections.
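
For reference, the relevant bits of a lighttpd 1.4 config on FreeBSD look something like the following (the values and path here are illustrative, not my actual config):

# use kqueue rather than select/poll on FreeBSD
server.event-handler   = "freebsd-kqueue"
# push the big static files out with sendfile()
server.network-backend = "freebsd-sendfile"
# allow plenty of concurrent client connections
server.max-fds         = 8192
# illustrative document root
server.document-root   = "/data/mirror"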

In theory, once all of the caching stuff is fixed, the backends will spend most of their time revalidating objects.

But for some weird reason I'm seeing TCP_REFRESH_MISS (revalidations coming back as full 200 responses rather than 304s) on my Lusca edge nodes, and generally poor performance during this release. I look at the logs and find this:



[Host: mozilla.cdn.cacheboy.net\r\n
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10\r\n
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
If-Modified-Since: Wed, 03 Jun 2009 15:09:39 GMT\r\n
If-None-Match: "1721454571"\r\n
Cache-Control: max-stale=0\r\n
Connection: Keep-Alive\r\n
Pragma: no-cache\r\n
X-BlueCoat-Via: 24C3C50D45B23509\r\n]

[HTTP/1.0 200 OK\r\n
Content-Type: application/octet-stream\r\n
Accept-Ranges: bytes\r\n
ETag: "1687308715"\r\n
Last-Modified: Wed, 03 Jun 2009 15:09:39 GMT\r\n
Content-Length: 2178196\r\n
Date: Fri, 12 Jun 2009 04:25:40 GMT\r\n
Server: lighttpd/1.4.19\r\n
X-Cache: MISS from mirror1.jp.cacheboy.net\r\n
Via: 1.0 mirror1.jp.cacheboy.net:80 (Lusca/LUSCA_HEAD)\r\n
Connection: keep-alive\r\n\r]


Notice the different ETags - the If-None-Match in the request versus the ETag in the response? Hm! I wonder what's going on. On a hunch I checked the ETags from both backends: master1 gives "1721454571" for that object; master2 gives "1687308715". The file has the same size and timestamp on both. I wonder what's different?

Time to go digging into the depths of the lighttpd code.

EDIT: the ETag generation is configurable. By default it uses the mtime, inode and file size. Disabling the inode, and then both the inode and mtime, didn't help. I then found that earlier lighttpd versions generate ETags differently on 32-bit and 64-bit platforms. I'll build a local lighttpd package and see if I can replicate the behaviour on my 32-bit and 64-bit systems. Grr.
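
For reference, the knobs in question are lighttpd's ETag options; what I tried amounts to this (which, as noted, didn't help here):

# drop the inode, then the mtime too, from the ETag input,
# leaving only the file size
etag.use-inode = "disable"
etag.use-mtime = "disable"
etag.use-size  = "enable"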

Meanwhile, Cacheboy isn't really serving any of the mozilla updates. :(

EDIT: so it turns out the bug is in the ETag generation code. They create an unsigned 32-bit integer hash value from the ETag contents, then shovel it into a signed long for the ETag header. Unfortunately, on FreeBSD/i386 "long" is a signed 32-bit type, and thus things go awry from time to time. Grrrrrr.
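
A minimal standalone illustration of that failure mode (my own sketch, not lighttpd's actual code):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    /* pretend this is the unsigned 32-bit hash of the ETag input
     * (mtime + inode + size); any value with the top bit set shows
     * the problem */
    uint32_t h = 0xdeadbeefU;           /* 3735928559 */

    long as_long = (long)h;             /* what the buggy path effectively does */

    printf("uint32_t value      : %" PRIu32 "\n", h);
    printf("through signed long : %ld\n", as_long);

    /* On amd64 (64-bit long) both lines print 3735928559.  On i386
     * (32-bit long) the second line prints -559038737, so two otherwise
     * identical backends can emit different ETag headers for the same
     * file and the proxy's If-None-Match revalidation never matches. */
    return 0;
}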

EDIT: fixed in a newly-built local lighttpd package; both backend servers are now doing the right thing. I'm going back to serving content.

Tuesday, June 2, 2009

New mirrors - mirror2.uk and mirror3.uk

I've just had two new sponsors show up with a pair of UK mirrors.

mirror2.uk is thanks to UK Broadband, who have graciously given me access to a few hundred megabits of traffic and space on an ESX server.

mirror3.uk (due to be turned up today!) is thanks to a private donor named Alex who has given me a server in his colocation space and up to a gigabit of traffic.

Shiny! Thanks to you both.