Kafka Summit San Francisco

Streaming platforms at massive scale.

Aug 28, 2017 | San Francisco

How Blizzard Used Kafka to Save Our Pipeline (and Azeroth)

When Blizzard started sending gameplay data to Hadoop in 2013, we went through several iterations before settling on Flume agents in data centers around the world reading from RabbitMQ and writing to central Flume agents in our Los Angeles data center. While this worked at first, by 2015 the pipeline could no longer scale to the volume of events we needed to move. This is how we used Kafka to save our pipeline.
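
The abstract doesn't spell out how Kafka slotted into the pipeline, but a minimal sketch of the kind of hop it enables is an edge service producing gameplay events directly to Kafka rather than queueing them through RabbitMQ. The broker hostnames, topic name, key, and event payload below are illustrative assumptions, not Blizzard's actual configuration:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GameplayEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Brokers in the regional data center; hostnames are hypothetical.
        props.put("bootstrap.servers", "kafka-west-1:9092,kafka-west-2:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all trades a little latency for durability, a reasonable
        // choice for a pipeline that must not lose gameplay events.
        props.put("acks", "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by player ID keeps one player's events ordered
            // within a single partition.
            ProducerRecord<String, String> record = new ProducerRecord<>(
                "gameplay-events", // topic name is an illustrative assumption
                "player-42",
                "{\"event\":\"quest_complete\",\"zone\":\"Azeroth\"}");
            producer.send(record);
        }
    }
}

Because Kafka partitions are replicated and consumed at the reader's own pace, a design like this lets regional producers keep writing even when the central Los Angeles consumers fall behind, which is exactly the kind of decoupling a brokered queue-and-relay topology struggles to provide at scale.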