The Twitter stack – Chris Aniszczyk

Twitter averages 6,000 tweets per second, with peaks up to a record of 140,000 tps.
Originally: Ruby on Rails and MySQL. Caching was added, but it was a band-aid. The monolithic design made engineering difficult, throwing extra machines at the problem didn't scale, and performance optimizations made the code less readable.
Vision: a 10x performance improvement and isolation of failures.
The Ruby VM was consuming a lot of resources. Two new features built on the JVM were doing great, so Twitter migrated there.
Decompose the monolith.

Each team took a different approach to concurrency and failure handling, so this was moved into a library: Finagle, which abstracts away the distributed-systems plumbing.
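Finagle itself is a Scala/JVM library built around asynchronous services and composable filters; as a rough illustration of that idea (not Finagle's actual API), here is a Python sketch where a "service" is an async function and cross-cutting concerns like timeouts and retries wrap any service uniformly:

```python
# Illustrative sketch only: a Finagle-like service/filter composition in Python.
# Names (echo_service, timeout_filter, retry_filter) are hypothetical.
import asyncio

async def echo_service(request: str) -> str:
    # A "service": request in, response out, asynchronously.
    return f"echo: {request}"

def timeout_filter(service, seconds: float):
    # A "filter": wraps a service, adding a per-call timeout.
    async def wrapped(request):
        return await asyncio.wait_for(service(request), timeout=seconds)
    return wrapped

def retry_filter(service, attempts: int):
    # Another filter: retries the wrapped service on failure.
    async def wrapped(request):
        last_error = None
        for _ in range(attempts):
            try:
                return await service(request)
            except Exception as e:
                last_error = e
        raise last_error
    return wrapped

# Filters compose: retries around a per-attempt timeout around the service.
client = retry_filter(timeout_filter(echo_service, 0.5), attempts=3)
result = asyncio.run(client("hello"))
print(result)  # echo: hello
```

The point of this structure is that every team gets the same timeout, retry, and failure semantics by stacking filters, instead of reimplementing them per service.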
Zipkin: tracing for Finagle.
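Zipkin follows the Dapper model: each request carries a trace id, and every RPC hop records a span with its own id, its parent span's id, and timing, so a request can be reassembled into a call tree. A toy sketch of that data model (not Zipkin's actual API; all names here are illustrative):

```python
# Illustrative sketch of Dapper/Zipkin-style span propagation.
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    name: str
    trace_id: str                 # shared by every hop of one request
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_id: Optional[str] = None  # links the span into the call tree
    start: float = 0.0
    duration: float = 0.0

collected: list = []  # stand-in for reporting spans to a collector

def traced_call(name, trace_id, parent_id, work):
    span = Span(name=name, trace_id=trace_id, parent_id=parent_id)
    span.start = time.time()
    result = work(span)           # pass the span down so children can link to it
    span.duration = time.time() - span.start
    collected.append(span)
    return result

trace_id = uuid.uuid4().hex[:16]
traced_call("api.timeline", trace_id, None,
            lambda s: traced_call("tweet.fetch", trace_id, s.span_id,
                                  lambda _: "tweets"))
```

After the request finishes, `collected` holds both spans with the same trace id, and the child's `parent_id` points at the root span, which is exactly the linkage a tracing UI uses to draw the call tree and its timings.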
Partitioning of the data center: Twitter learned from Google how to do this more dynamically, using cgroups to isolate resources, and Mesos to schedule across machines.
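Mesos schedules with two levels: the cluster manager offers each node's free resources to a framework, which accepts enough offers to place its tasks; cgroups then enforce the accepted limits on each node. A heavily simplified sketch of that offer/accept loop (hypothetical node and task names, not the Mesos API):

```python
# Illustrative sketch of Mesos-style resource offers; numbers are made up.
nodes = {"node-a": 4.0, "node-b": 2.0}      # free CPUs offered per node
tasks = [("crawler", 3.0), ("indexer", 2.0)]  # (task, CPUs required)

placement = {}
for name, need in tasks:
    # The "framework" scans the offers and accepts the first one that fits.
    for node, free in nodes.items():
        if free >= need:
            nodes[node] = free - need  # resources are deducted from the offer
            placement[name] = node
            break

print(placement)  # {'crawler': 'node-a', 'indexer': 'node-b'}
```

The win over static partitioning is that any job can run on any machine with spare capacity, so the data center behaves like one pool of resources rather than per-team silos.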

Twitter embraced open source from day one: the quality is good and you can learn from your peers.
Incremental change always wins.
View the data center as the computer, not a single server.

