
Yesterday a contingent from OpenDNS attended the data-focused talks at the @Scale conference hosted by Facebook. The presentations focused on issues that affect systems processing over 100,000 requests per second. While technologies like HDFS provide a platform for solving most information storage and analysis problems, one pattern that emerged was the need for better tools to manage caching.
History
OpenDNS technology has its roots in distributed caching infrastructure – our DNS resolvers provide a read-only cache of the results of recursive lookups against authoritative DNS servers, and that DNS information is cached at 24 data centers worldwide. Because of the way DNS is designed, every cached record comes with a built-in expiry time in the form of a TTL. If a data center ever fails, anycast provides automatic failover to the others. Consistency between sites is not required, but it can be forced manually with our cache check utility.
What interested our team most at @Scale were the applications of caching to dynamic data. Issues like consistency and failure detection become critical when reads and writes happen from the same client machines.
Database Caches
Historically, caches were used to speed up systems by keeping some client requests from ever reaching the database. At scale, however, the database cannot sustain the full request volume on its own, which makes the cache a critical piece of infrastructure rather than an optimization: the failure or rebuilding of a cache can cause significant downtime.
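To make the cache's role concrete, here is a minimal cache-aside read in Go. It is only a sketch: it assumes the bradfitz/gomemcache client, a hypothetical users table, and a hypothetical user:<id> key scheme. The important property is that only a cache miss ever reaches the database.

    package cacheaside

    import (
        "database/sql"
        "fmt"

        "github.com/bradfitz/gomemcache/memcache"
    )

    // getUserName reads through the cache: on a hit the database is never
    // queried; on a miss the row is fetched and the cache is repopulated.
    func getUserName(mc *memcache.Client, db *sql.DB, id int) (string, error) {
        key := fmt.Sprintf("user:%d", id) // hypothetical key scheme

        // Cache hit: serve straight from memcached.
        if item, err := mc.Get(key); err == nil {
            return string(item.Value), nil
        }

        // Cache miss: fall through to the database.
        var name string
        if err := db.QueryRow("SELECT name FROM users WHERE id = ?", id).Scan(&name); err != nil {
            return "", err
        }

        // Repopulate the cache so later reads skip the database entirely.
        mc.Set(&memcache.Item{Key: key, Value: []byte(name), Expiration: 300})
        return name, nil
    }

The trouble starts once the system depends on the hit path: take the cache away and every one of those reads lands on the database at the same time.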
When running multiple memcached servers with direct client access, the failure of one cache server can cause cascading errors and downtime: client connections are dropped, caches need to be rebuilt, and back-end databases can be overwhelmed. One remedy is counter-intuitive: run fewer caching servers, because every additional server increases the likelihood that one of them fails.
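For reference, "direct client access" means each application process connects to every cache server itself and lets the client library hash keys across them, along these lines (the server addresses below are placeholders):

    package directaccess

    import "github.com/bradfitz/gomemcache/memcache"

    // newDirectClient opens connections from this process to every cache
    // server; the client library hashes each key to exactly one of them.
    func newDirectClient() *memcache.Client {
        // With N app processes and M cache servers, roughly N x M connections
        // stay open across the fleet, and every process must notice and handle
        // the failure of any single server on its own.
        return memcache.New(
            "cache1.example.com:11211",
            "cache2.example.com:11211",
            "cache3.example.com:11211",
        )
    }

When one of those servers disappears, the keys that hashed to it turn into misses, and that slice of traffic falls on the database while the replacement cache warms up.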
To run pools of caching servers reliably, companies have been building their own solutions for managing this critical caching infrastructure.
Box – Tron
Tamar Bercovici from Box presented on storing structured data at scale and the problem of cache failure. Box's solution, Tron, puts a proxy in front of their memcached servers and adds features such as automatic failure detection and consistency checks. Tron is client-dependent, however, so it is unlikely to be open-sourced in the near future.
Twitter – TwemProxy
While Twitter did not present on caching at the conference, their TwemProxy project came up in multiple presentations. TwemProxy is an open-source project from Twitter for better memcached and Redis stability at scale. It works by proxying connections to the cache servers, consolidating them so that each machine holds far fewer open connections. Since it was open-sourced in early 2012, it has been adopted by companies including Pinterest, Snapchat, and Tumblr.
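Because TwemProxy speaks the memcached protocol itself, the application sees it as a single cache server. A minimal sketch, assuming a twemproxy instance listening locally on 127.0.0.1:22121 (the address comes from your twemproxy configuration, not from the client library):

    package viaproxy

    import "github.com/bradfitz/gomemcache/memcache"

    // newProxiedClient talks only to the local twemproxy listener, which fans
    // requests out to the real cache servers. Each application machine now
    // keeps a handful of connections to the proxy instead of one connection
    // per process per cache server.
    func newProxiedClient() *memcache.Client {
        return memcache.New("127.0.0.1:22121") // assumed twemproxy listen address
    }

Reads and writes go through the same memcached client calls as before; only the endpoint changes.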
Facebook – McRouter
During their presentation, Facebook announced the open-sourcing of their memcache tool, McRouter. McRouter takes the basic ideas of TwemProxy and builds powerful routing features on top of them for managing production pools, including failover between cache pools, multi-cluster consistency, and cold cache warm-up. Of the memcache tools discussed at the conference, McRouter was one of the most advanced because it handles a wide variety of failure and scaling scenarios. The system is used in production at Facebook and is being adopted in other high-traffic environments such as Reddit.
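To give a feel for the failover-between-pools feature, here is a conceptual sketch in Go. It is not McRouter's code or configuration (McRouter expresses policies like this declaratively in its routing config); it only illustrates the idea of trying a primary pool and falling back to a standby pool on failure.

    package failover

    import "github.com/bradfitz/gomemcache/memcache"

    // failoverGet asks the primary cache pool first and only falls back to the
    // standby pool when the primary is failing, not when the key is simply
    // absent.
    func failoverGet(primary, standby *memcache.Client, key string) ([]byte, error) {
        item, err := primary.Get(key)
        if err == nil {
            return item.Value, nil
        }
        if err == memcache.ErrCacheMiss {
            // A genuine miss is not a failure; let the caller go to the database.
            return nil, err
        }

        // The primary pool is unreachable or erroring: serve from the standby.
        item, err = standby.Get(key)
        if err != nil {
            return nil, err
        }
        return item.Value, nil
    }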
YouTube – Vitess / Vttablet
After Facebook presented a comprehensive tool for managing pools of caches, Sugu Sougoumarane from Google challenged the idea of dedicated caching pools altogether. The point of a cache is to reduce database load and latency, so rather than keep caches and databases separate, Sugu embarked on a project to build caching into the database system itself. Vitess is an open-source system written in Go that is about to enter full production at YouTube. One of its most fascinating components is a MySQL wrapper called Vttablet.
Vttablet improves MySQL performance by modifying SQL, pooling connections, and consolidating queries. It supplements the MySQL buffer cache with RowCache, a memcached-backed store of row data. Because Vttablet also manages this cache, cached rows carry no expiry times: Vttablet invalidates entries when it parses SQL statements that change cached values, which eliminates consistency issues.
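The invalidation idea can be sketched like this (a conceptual illustration, not Vttablet's actual implementation): because every statement flows through the wrapper, a write can delete exactly the cached rows it touches, so entries never need a TTL. The users table and user:<id> key scheme below are hypothetical.

    package rowcache

    import (
        "database/sql"
        "fmt"

        "github.com/bradfitz/gomemcache/memcache"
    )

    // updateUserName routes the write through the same layer that owns the
    // cache, so the now-stale row can be invalidated immediately.
    func updateUserName(mc *memcache.Client, db *sql.DB, id int, name string) error {
        if _, err := db.Exec("UPDATE users SET name = ? WHERE id = ?", name, id); err != nil {
            return err
        }

        // Invalidate only the affected row; the next read repopulates it.
        // A cache miss here just means the row was never cached.
        if err := mc.Delete(fmt.Sprintf("user:%d", id)); err != nil && err != memcache.ErrCacheMiss {
            return err
        }
        return nil
    }

In Vttablet the hard part is the parsing step that works out which rows a given statement touches; the sketch above hard-codes that knowledge for a single-row update.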
Vitess challenges the paradigm of treating caching as a separate layer by directly addressing the issues of database scalability and by modifying the handling of SQL queries.
Conclusion
When a cache grows from a single server to a pool, management and consistency become real concerns, and as request volumes rise, a cache failure can take down an entire system. Because of this, managing cache pools is becoming a critical component of data systems at scale.