Configuring squid for high-volume UGC caching on Linux
Squid is an extremely amazing piece of caching proxy software. I know some of you people like Varnish, but you seem to be accepting extremely irritating constraints on practical maintenance in return for 1.5% of extra performance at best. Personally, if you are so frustrated with Squid that you have to kowtow to Norwegian aspies, you have done something wrong in your process of configuring Squid. On anything 10gigE or less I consistently see the link saturate long before Squid runs out of CPU.

Part of the reason people bounce back and forth between various caching solutions is that there is no easy documentation of how to do what most of the dissatisfied people actually want to do with Squid: not give all of your money to Akamai. So if you have a shitton of UGC and have acquired some cheap unlimited dedi to offload the majority of your UGC bandwidth to, this is for you. Even an idiot can do this, and you will be able to afford your bandwidth bills even as your website climbs into the Alexa top 1000 range. If you control one of the websites I love, please make sure they keep running in the event of my temporary or extended absence.


INSTALLATION

FYI: we chose squid 2.6 for this particular setup. There are compelling performance reasons why we are not using squid3 for this particular task, but you are an idiot so it's not like you need to know them. Your Linux kernel -must- be at least 2.6.16 or else you're missing out on drop_caches and shit is going to really fucking suck for you. Don't even bother trying to get this working with anything earlier -- go take your ancient server and install the latest version of Ubuntu or whatever.

For a dedicated squid cache the first thing you do is turn the fucking swap off. If you do not do this the disks will spin into fucking oblivion:
root@cache:~# swapoff -a
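If you'd rather not have to remember to do that by hand after every reboot (the startup section below assumes you are doing it by hand), comment out or delete the swap entry in /etc/fstab too. The device name here is just a placeholder for whatever yours actually is:
# /dev/sda2   none   swap   sw   0   0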
Before building squid, increase the maximum file descriptors. This value can be found like so:
root@cache:/usr/local/src/squid-2.6.STABLE# cat /proc/sys/fs/file-max
762718

root@cache:/usr/local/src/squid-2.6.STABLE# grep "#define __FD_SETSIZE" /usr/include/linux/*.h
/usr/include/linux/posix_types.h:#define __FD_SETSIZE 1024
The first number is nice and big by default on this system, but may not be on yours!

The second number is how many file descriptors compiled applications are gonna get to use. That value is sadface low. Let's turn that frown upside down!
Execute “free -m”. See that first number, in the “total” column? It is 8002 on this box. Divide that by four to get 2000.5, and then multiply that by 256 to get 512128. Now you know how to figure out what to set __FD_SETSIZE to! Hooray!
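If you'd rather not do the arithmetic by hand, this one-liner spits out the same number (it assumes the usual free -m output layout, so sanity-check it on your box):
root@cache:~# free -m | awk '/^Mem:/ {print int($2 / 4 * 256)}'
512128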

Set shell ulimits:
ulimit -HSn 512128
Configuring squid:
root@cache:/usr/local/src/squid-2.6.STABLE# ./configure --prefix=/usr/local/squid
Edit /usr/local/src/squid-2.6.STABLE23/include/autoconf.h and search for FD_SETSIZE and SQUID_MAXFD; both values should be our 512128 instead of the default 1024.
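When you're done the relevant defines should look roughly like this (the surrounding context differs between releases, so grep for them rather than trusting my example):
#define FD_SETSIZE 512128
#define SQUID_MAXFD 512128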

then make clean; make; make install and you’re good to drop in our squid.conf:

cache.dongs.com is the box with squid on it. images.dongs.com should point to cache.dongs.com in the public A record, but /etc/hosts on cache.dongs.com should point images.dongs.com to the parent server.
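Concretely, /etc/hosts on cache.dongs.com gets a line pointing images.dongs.com at the parent's IP (the same one used in the cache_peer line below); swap in your real origin IP:
8.8.8.135    images.dongs.com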
http_port 80 accel defaultsite=images.dongs.com
cache_peer 8.8.8.135 parent 80 0 no-query originserver round-robin name=wiki
visible_hostname cache.dongs.com
acl dongImages dstdomain images.dongs.com
acl wp port 80
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl dongsite src 8.8.8.128/27 
acl purge method PURGE
cache_peer_access wiki allow dongImages
cache_peer_access wiki deny all
http_access allow purge localhost
http_access allow purge dongsite
http_access deny purge
http_access allow dongImages wp all

cache_dir ufs /usr/local/squid/var/cache 60000 32 512

cache_mem 5690 MB
maximum_object_size 5500 KB
maximum_object_size_in_memory 2047 KB
This was geared for cache.dongs.com, which at the time of this writing had 8GB of RAM. This configuration is a WIP and may be outdated by the time you read this. The essential things to know that may not be adequately described (or described at all) in the appropriate documentation are:
  • ACLs need to exist so that squid can receive purges from your website, and your website's software needs to be configured to send purges to squid (a quick way to test purges by hand is shown after this list). For MediaWiki see: http://www.mediawiki.org/wiki/Manual:Configuration_settings#Squid
  • Squid will use way more memory than cache_mem. It seems to be consistent in a given environment but unpredictable across different environments. This machine has 8GB of ram and Squid uses all of it when cache_mem is set at 5690MB. Tweak this value a couple times to ensure stable operation.
  • cache_mem needs to leave a little breathing room. Define it as less than 85% of the machine’s total memory.
  • maximum_object_size_in_memory should be set to a (2^n)-1 KB value, hence the 2047 KB above.
  • maximum_object_size should be larger than the largest possible common file being served from the parent site.
  • Then you need to either set the FS noatime in the /etc/fstab options OR use chattr to disable atime recording on the parent directory of the cache_dir, like so: root@cache:/usr/local/squid# chattr -R +A var/
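To test purging by hand from localhost or any box inside the dongsite range (the path here is obviously a placeholder):
root@cache:~# curl -X PURGE http://images.dongs.com/some/image.jpg
Squid answers 200 if it had the object and dropped it, 404 if it never had it in the first place.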


STARTUP/POSTBOOT

MAKE SURE SWAP IS TURNED OFF
SET SHELL ULIMITS:
ulimit -HSn 512128
Then start up squid.
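The whole post-boot sequence looks something like this (paths assume the --prefix from the configure step above; the -z run is only needed the very first time, to build the cache_dir structure):
root@cache:~# swapoff -a
root@cache:~# ulimit -HSn 512128
root@cache:~# /usr/local/squid/sbin/squid -z
root@cache:~# /usr/local/squid/sbin/squid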

So, when the squid cache is started up, Linux page cache will start being fucking stupid. The memory used by Linux’s kernel cache will grow at a rate at least twice as fast as the memory used by squid. This is because the kernel is caching this shit as it first gets written to memory, and then caching it again as it gets written to disk. Theoretically Linux is supposed to give this memory back when applications need more of it but it totally won't. This is bad because obviously squid knows how to use that memory better than linux does. Here is an example of what the fuck I am talking about:
root@cache:/usr/local/squid/etc# free -m
             total       used       free     shared    buffers     cached
Mem:          8002       7274        728          0         46       1337
-/+ buffers/cache:       5890       2112
Swap:            0          0          0
Linux is using 1337MB of cached memory here. So as squid spins up, you will need to execute the following command many times to clear Linux's page cache:
root@cache:/usr/local/squid/etc# sync; echo 3 > /proc/sys/vm/drop_caches
After you do this, Linux will let go of all its hoarded memory:
root@cache:/usr/local/squid/etc# free -m
             total       used       free     shared    buffers     cached
Mem:          8002       5894       2107          0          4         38
-/+ buffers/cache:       5852       2150
Swap:            0          0          0
Squid spinning up can take a little time, so you might want to put that sync; echo 3 > /proc/sys/vm/drop_caches in a while/sleep loop for a bit until you're through peak hours. Once squid has all its memory allocated Linux will no longer try to steal it, so you are smooth sailing.
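Something like this works (the 60 second interval is arbitrary; kill the loop once squid has grown to its full size):
root@cache:~# while true; do sync; echo 3 > /proc/sys/vm/drop_caches; sleep 60; done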

There is probably a way to discourage Linux from doing this shit with the page cache via sysctl, but I don't know it off the top of my head. As it stands, figuring that out is your fucking job, if watching the server for a few minutes after a reboot every few months bothers you that much.
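If you do go digging, the knobs I'd start poking at are these (pure speculation on my part, not something verified on this box):
root@cache:~# sysctl -w vm.dirty_background_ratio=2
root@cache:~# sysctl -w vm.dirty_ratio=5
root@cache:~# sysctl -w vm.vfs_cache_pressure=200
Lower dirty ratios make the kernel flush freshly written cache objects out to disk sooner instead of hoarding them, and a higher vfs_cache_pressure makes it give up its own dentry/inode caches more readily.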