
Kafka Summit San Francisco 2017

Streaming platforms at massive scale.

Aug 28, 2017 | San Francisco

How Blizzard Used Kafka to Save Our Pipeline (and Azeroth)


When Blizzard started sending gameplay data to Hadoop in 2013, we went through several iterations before settling on Flume agents in many data centers around the world reading from RabbitMQ and writing to central Flume agents in our Los Angeles data center. While this worked at first, by 2015 we were hitting problems scaling to the number of events required. This is how we used Kafka to save our pipeline.
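
As a rough illustration of the shift the talk describes (not taken from the session itself), here is a minimal sketch of a Kafka producer publishing a gameplay event from a data center. The broker address, topic name, key, and JSON payload are hypothetical.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class GameplayEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker address for a regional Kafka cluster.
            props.put("bootstrap.servers", "kafka-dc1:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // Wait for all in-sync replicas so events survive a broker failure.
            props.put("acks", "all");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Keying by player ID keeps one player's events ordered within a partition.
                producer.send(new ProducerRecord<>("gameplay-events", "player-42",
                        "{\"event\":\"quest_completed\",\"zone\":\"Azeroth\"}"));
            }
        }
    }

In this sketch, producers in each data center write directly to Kafka, replacing the RabbitMQ-to-Flume hops described above.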

