Friday, February 27, 2009

Cacheboy CDN is online!

There have been a few changes!

* The "Cacheboy proxy" development has become Lusca; thats spun off into a little separate project of its own.
* The "Cacheboy" project is now focusing on providing an open source platform for content delivery. I've organised some donated hardware (some donated by me), some donated bandwidth (again, some donated by me) and a couple of test projects to serve content for.

More details to come!

(As a side note, I've got too many blogs; I think it's time to rationalise them down to one or two and use labels to correctly identify which is which.)

Monday, February 23, 2009

Lusca and BGP, take 2.

I've ironed out the crash kinks (the remaining "kinks" are in the BGP FSM implementation), so I'm left with:

1235459412.856 17063 118.92.109.x TCP_REFRESH_HIT/206 33405 GET http://videolan.cdn.cacheboy.net/vlc/0.9.8a/win32/vlc-0.9.8a-win32.exe - NONE/- application/x-msdownload AS7657
1235459417.194 1113 202.150.98.x TCP_HIT/200 45637 GET http://videolan.cdn.cacheboy.net/vlc/0.9.8a/win32/vlc-0.9.8a-win32.exe - NONE/- application/x-msdownload AS17746

Notice how the Squid logs have AS numbers in them? :)

Lusca and BGP

I've been fleshing out some very, very basic BGP support in a lusca-head branch. For now I'm only using the BGP information for logging, but I'll eventually use it as part of the request and reply processing.

It *cough* mostly works. I need to figure out why there's occasional radix tree corruption (which probably means running it under valgrind to find where the radix code goes off the map), un-dirty some of the BGP code (i.e., implement a real FSM, with proper separation of the protocol handling, FSM, network and RIB code) and add in the AS path/community/attribute handling, all before I commit it to LUSCA_HEAD.

It is kind of cool, though, having a live BGP feed in your application. :) All 280,000-odd routes of it. :)
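For the curious, the per-request AS tagging boils down to a longest-prefix match of the client address against the RIB. Below is a minimal, self-contained sketch of that idea; Lusca proper uses a radix tree rather than this simple binary trie, and the names here (rib_node, rib_insert, rib_lookup_as) are hypothetical, not the actual code.

/*
 * A minimal sketch of the longest-prefix lookup that request tagging
 * needs: map an IPv4 address to the origin AS of the most specific
 * covering prefix. Lusca uses a radix tree; this binary trie is a
 * simplified stand-in, and all names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <arpa/inet.h>

typedef struct rib_node {
    struct rib_node *child[2];
    int asn;                /* 0 = no route terminates at this node */
} rib_node;

static rib_node *rib_new(void) {
    return calloc(1, sizeof(rib_node));
}

/* Insert a prefix (address in network byte order) with its origin AS. */
static void rib_insert(rib_node *root, uint32_t net, int plen, int asn) {
    rib_node *n = root;
    uint32_t a = ntohl(net);
    for (int i = 0; i < plen; i++) {
        int bit = (a >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = rib_new();
        n = n->child[bit];
    }
    n->asn = asn;
}

/* Walk as deep as the trie allows, remembering the last AS seen:
 * the longest matching prefix wins. */
static int rib_lookup_as(rib_node *root, uint32_t addr) {
    rib_node *n = root;
    uint32_t a = ntohl(addr);
    int best = 0;
    for (int i = 0; i < 32 && n; i++) {
        if (n->asn)
            best = n->asn;
        n = n->child[(a >> (31 - i)) & 1];
    }
    if (n && n->asn)
        best = n->asn;
    return best;
}

int main(void) {
    rib_node *root = rib_new();
    uint32_t net, addr;

    /* Pretend the BGP feed handed us 202.150.96.0/19 via AS17746. */
    inet_pton(AF_INET, "202.150.96.0", &net);
    rib_insert(root, net, 19, 17746);

    inet_pton(AF_INET, "202.150.98.1", &addr);
    printf("AS%d\n", rib_lookup_as(root, addr));    /* AS17746 */
    return 0;
}

Fed by the live BGP session instead of a hard-coded prefix, that lookup is what tags each log line with an AS number.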

Sunday, February 1, 2009

Lusca development, and changes to string handling

I've just renamed Cacheboy to "Lusca". I've had a few potential users comment that "Cacheboy" isn't, uhm, "management compatible", so the project has been renamed to try and bring some of those users on board. I'm also hoping to make Lusca less Adrian-focused and involve more of the community. We'll see how that goes.

In terms of development, I've shifted the code to http://code.google.com/p/lusca-cache/ and I'm continuing my work in /branches/LUSCA_HEAD.

I've been working on src/http.c (the server-side HTTP code) in preparation for introducing reference counted buffer/string handling. I removed one copy (the socket read buffer was previously copied into a second memory buffer to assemble the HTTP reply for parsing) and have just migrated that part of the codebase over to my reference counted buffer type (buf_t, found in libmem/buf.[ch]). It's entirely possible that I've horribly broken the server-side code, so I'm reluctant to do much else until I've finished restructuring and testing the server-side HTTP code.
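To make that concrete, here's a simplified, self-contained sketch of what a reference counted buffer looks like. The real buf_t in libmem/buf.[ch] differs in detail, and the names here (buf_create, buf_ref, buf_deref) are illustrative assumptions rather than the actual API.

/*
 * A simplified sketch of a reference counted buffer; names and
 * fields are assumptions, not the real libmem/buf.[ch] API.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char *data;
    size_t len;     /* bytes in use */
    size_t size;    /* bytes allocated */
    int nref;       /* live references */
} buf_t;

static buf_t *buf_create(size_t size) {
    buf_t *b = calloc(1, sizeof(buf_t));
    b->data = malloc(size);
    b->size = size;
    b->nref = 1;
    return b;
}

/* Taking another reference is a counter bump, not a data copy. */
static buf_t *buf_ref(buf_t *b) {
    b->nref++;
    return b;
}

/* The last reference out frees the storage. */
static void buf_deref(buf_t *b) {
    if (--b->nref == 0) {
        free(b->data);
        free(b);
    }
}

int main(void) {
    buf_t *read_buf = buf_create(4096);
    read_buf->len = snprintf(read_buf->data, read_buf->size,
        "HTTP/1.0 200 OK\r\n");

    /* Hand the "reply parser" the socket read buffer: no memcpy(). */
    buf_t *reply = buf_ref(read_buf);
    printf("%.*s", (int)reply->len, reply->data);

    buf_deref(reply);
    buf_deref(read_buf);
    return 0;
}

The win is in buf_ref(): handing the reply parsing code the socket read buffer becomes a counter increment instead of a copy.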

I've also been tidying up a few more places where the current String API is used "incorrectly", at least incorrectly for reference counted strings/buffers. I have ~61 code chunks left to rewrite, mostly in the logging code. I've done it twice already in other branches, so it won't be terribly difficult. It's just boring. :)
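To show the sort of "incorrect" use I mean, here's a toy (non-Squid) example: writing through a string's underlying storage is merely ugly while every string owns a private copy, but once buffers are shared it silently mutates every other holder.

/*
 * A toy demonstration of why poking at a string's underlying storage
 * is "incorrect" once buffers are shared: a write through one string
 * is visible through every other holder. Not the Squid String API.
 */
#include <stdio.h>

typedef struct {
    char *buf;
    int len;
} String;

int main(void) {
    char storage[] = "GET /index.html";
    String a = { storage, 15 };
    String b = a;           /* with refcounting, b shares a's buffer */

    a.buf[3] = '\0';        /* "tidy up" a in place... */
    printf("%s\n", b.buf);  /* ...and b now reads "GET" as well */
    return 0;
}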

Oh, and I've also just removed the "caching" bits of the MemPools code. MemPools in LUSCA_HEAD is now just a thin wrapper around malloc/calloc/free, mainly to preserve the "block allocator" style API and keep some statistics. At the end of the day, Squid uses memory very, very poorly, and the caching code in MemPools exists purely to work around that. I'm going to fix the memory use itself (mostly revolving around String buffers, HTTP headers and the TLV code, amazing that!) so the number of calls through the allocator is much, much reduced. I'm guessing that once I've finished, the number of calls through the system allocator will be about 2 or 3% of what they are now. That should drop the CPU use quite a bit.
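The slimmed-down MemPools now looks roughly like the following. This is a sketch under my own naming assumptions, not the LUSCA_HEAD source.

/*
 * Roughly the shape of the slimmed-down MemPools: a thin pass-through
 * to the system allocator that keeps the block-allocator style API
 * and some counters, with no caching of freed objects.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *label;      /* e.g. "request_t" */
    size_t obj_size;
    long alloc_calls;       /* statistics survive the rewrite */
    long free_calls;
    long inuse;
} MemPool;

static MemPool *memPoolCreate(const char *label, size_t obj_size) {
    MemPool *p = calloc(1, sizeof(MemPool));
    p->label = label;
    p->obj_size = obj_size;
    return p;
}

/* Every allocation now goes straight to calloc(): no free-list reuse. */
static void *memPoolAlloc(MemPool *p) {
    p->alloc_calls++;
    p->inuse++;
    return calloc(1, p->obj_size);
}

static void memPoolFree(MemPool *p, void *obj) {
    p->free_calls++;
    p->inuse--;
    free(obj);
}

int main(void) {
    MemPool *pool = memPoolCreate("request_t", 4096);
    void *req = memPoolAlloc(pool);
    memPoolFree(pool, req);
    printf("%s: %ld allocs, %ld in use\n",
        pool->label, pool->alloc_calls, pool->inuse);
    return 0;
}

With no caching, every memPoolAlloc() hits the system allocator, which is exactly why reducing the number of allocator calls has to come first.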

Ah, now to find testers...