Do you have a great Apache Kafka story to share?
Speaking at Kafka Summit is a great way to connect with hundreds of your peers, become more involved in the Kafka community, and gain a public platform to share your story of the future of streaming platforms.
What We Want:
We’re seeking technical sessions that offer actionable expertise and deepen the practical knowledge of our attendees—without the sales or marketing pitch. Discuss best practices or lessons learned, present a business case or success story, and provide details that help attendees get under the hood of your Kafka implementation.
With only 30 sessions for the London event, the competition is going to be fierce. Wow the program committee with your creativity by submitting relevant, engaging and entertaining proposals. We’re looking for fresh ideas, unique perspectives, and thought-provoking discussions that will enhance the experience for our attendees.
Sessions are 45 minutes in length, with 35 minutes of content and 10 minutes for Q&A.
You must submit a separate entry for each topic, with a maximum of two entries per person.
Talks in the Streams Track are about real-time messaging, analytics, and stream processing. Whether you are using KSQL for intrusion detection, deploying Kafka Streams-powered microservices using Kubernetes, or have great tips on how to tune the integration of Kafka and your favorite stream processing framework, this track is for you.
Talks in the Pipelines Track are about real-time data integration and ETL pipelines, using Kafka to connect disparate systems at low latency. If you are a data engineer who figured out how to get data from a mainframe to Kafka, join it with events from a few microservices, and write the results to Elastic where your analytics team uses it to improve customer service, then your story belongs here.
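Pipelines like the one sketched above are commonly wired together with Kafka Connect. As a purely illustrative example (the connector name, topic, and URL below are hypothetical, not part of this call), a sink writing enriched events to Elasticsearch might be configured like this:

```properties
# Illustrative Kafka Connect sink configuration — names and URLs are hypothetical
name=orders-to-elastic
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=enriched-orders
connection.url=http://elastic.internal:9200
type.name=_doc
key.ignore=true
```

A talk in this track would go beyond such a skeleton: how you handled schemas, retries, back-pressure, and latency end to end.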
Talks in the Internals Track are deep dives into Kafka internals, performance tuning, and detailed treatments of system architectures. If you just spent 36 hours chasing a strange performance degradation and have the flame-graphs to show for it, have the world’s largest Kafka cluster and can tell us how we can have one too, figured out how we are all monitoring Kafka wrong, or are using esoteric operating system features to improve Kafka performance, we want to hear your story in this track.
What You’ll Get:
Call for Papers opens: October 25, 2017
Call for Papers closes: December 1, 2017
Notifications sent: December 20, 2017
Presentations due for initial review: March 19, 2018
Presentations due for final approval: April 9, 2018