Memcached saved my users a minute per query
It currently takes about 7 seconds to pull this list of postcodes out of my Postgres database. Unfortunately I need to run the query several times as part of a goal-seeking function, so I need to greatly improve this lookup speed.
I'm already using the Postgres earthdistance module and have properly indexed my table, so I realized I needed to look for a caching solution rather than more query tuning.
Memcached limits the size of each value you can store. The default maximum item size is 1 MB, and I'm reluctant to raise it because that adds to the deployment burden. My result sets were sometimes up to 4 MB; searching on a 20 mile radius in London yields a lot of postcodes!
My idea was to split the large piece of data into several smaller pieces and to store, as the value for the key we're actually caching, an index referencing those pieces.
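In other words, the layout might look something like this (a hypothetical key scheme; the real keys depend on your application):

```php
// Hypothetical key layout: the main key holds only a small index of chunk
// keys, and each chunk key holds one slice of the (compressed) result set.
//
// 'postcodes:51.5074,-0.1278:20'   => ['postcodes:51.5074,-0.1278:20:0',
//                                      'postcodes:51.5074,-0.1278:20:1']
// 'postcodes:51.5074,-0.1278:20:0' => '<first slice of compressed data>'
// 'postcodes:51.5074,-0.1278:20:1' => '<second slice of compressed data>'
```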
I decided to use PHP's gzcompress() function to shrink each element, because the time spent compressing the data is still going to be drastically less than the time spent running the query, and I want to do my best to avoid cache evictions.
I'm currently using Laravel, so the code snippets below use the facades Laravel makes available. I think the code is readable enough to adapt to other PHP environments, and the approach could also be ported to other languages.
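Here's a minimal sketch of what the write path might look like, assuming Laravel's Cache facade backed by Memcached, a recent PHP/Laravel where Cache::put() takes a TTL (older versions take minutes rather than seconds), and hypothetical helper names and a 900 KB chunk size chosen to stay safely under the 1 MB item limit:

```php
use Illuminate\Support\Facades\Cache;

/**
 * Compress a result set, split it into chunks that each fit under
 * memcached's default 1 MB item limit, and store an index of the chunk
 * keys under the main key. Names and chunk size are illustrative.
 */
function cachePutChunked(string $key, $value, int $ttl, int $chunkSize = 900000): void
{
    // Compressing is far cheaper than re-running the 7-second query,
    // and smaller items are less likely to be evicted.
    $compressed = gzcompress(serialize($value));

    $indexKeys = [];
    foreach (str_split($compressed, $chunkSize) as $i => $chunk) {
        $chunkKey = "{$key}:chunk:{$i}";
        Cache::put($chunkKey, $chunk, $ttl);
        $indexKeys[] = $chunkKey;
    }

    // The main key stores only the small index of chunk keys.
    Cache::put($key, $indexKeys, $ttl);
}
```

The read path mirrors it; if the index or any chunk has been evicted, it reports a full miss so the caller can fall back to the database query:

```php
use Illuminate\Support\Facades\Cache;

function cacheGetChunked(string $key)
{
    $indexKeys = Cache::get($key);
    if (!is_array($indexKeys)) {
        return null; // no index: full cache miss
    }

    $compressed = '';
    foreach ($indexKeys as $chunkKey) {
        $chunk = Cache::get($chunkKey);
        if ($chunk === null) {
            return null; // a chunk was evicted, so the set is unusable
        }
        $compressed .= $chunk;
    }

    return unserialize(gzuncompress($compressed));
}
```

Treating a single missing chunk as a total miss keeps things simple: a partial result set is useless here, and the fallback is just the original query followed by a fresh cachePutChunked().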