
Kafka Summit SF 2019

September 30 - October 1, 2019 | San Francisco

Building an Enterprise Eventing Framework

Session Level: Intermediate

Centene is fundamentally modernizing its legacy monolithic systems to support distributed, real-time, event-driven healthcare information processing. A key part of our architecture is a universal eventing framework that accommodates the transformation into an event-driven architecture (EDA). The application exposes representational state transfer (REST) and remote procedure call (gRPC) interfaces that allow development teams to publish and consume events with a simple Noun-Verb-Object (NVO) syntax. Embedded within the framework are structured schema evolution with Confluent Schema Registry and Avro, configurable (self-service) event routing with KTables, dynamic event aggregation with Kafka Streams, distributed event tracing with Jaeger, and event querying against a MongoDB event store hydrated by Kafka Connect. We also developed techniques for long-term event storage within Kafka, specifically around the automated deletion of expired events and the re-hydration of missing events.

In Centene's first business use case, events related to the claim processing of provider reconsiderations were used to give providers real-time updates on the status of their claim appeals. To satisfy the business requirement, multiple monolithic systems independently leveraged the event framework to stream status updates for instant display on the Centene Provider Portal. This provided a capability that was brand new to Centene: the ability to interact and engage with our providers in real time through event streams.

In this presentation, we will walk you through the architecture of the eventing framework and show how business requirements within our claims adjudication domain were solved with the Kafka Streams DSL and the Confluent Platform. More importantly, we will discuss how Centene plans to leverage this framework, built on top of Kafka Streams, to shift our culture from batch processing to real-time stream processing.
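To make the Noun-Verb-Object idea concrete, the following is a minimal, hypothetical sketch of what an NVO event envelope might look like when a team publishes through such a framework's REST interface. The field names, event values, and envelope shape are illustrative assumptions for this abstract, not Centene's actual API.

```python
import json

def build_nvo_event(noun: str, verb: str, obj: str, payload: dict) -> dict:
    """Assemble a hypothetical NVO-keyed event envelope for publication.

    noun    -- the entity the event concerns, e.g. "Claim"
    verb    -- what happened to it, e.g. "Updated"
    obj     -- the affected aspect or sub-entity, e.g. "ReconsiderationStatus"
    payload -- the domain-specific event body
    """
    return {
        "noun": noun,
        "verb": verb,
        "object": obj,
        "payload": payload,
    }

# Example: a claim-appeal status change, as in the provider-portal use case.
event = build_nvo_event(
    "Claim",
    "Updated",
    "ReconsiderationStatus",
    {"claimId": "C-1001", "status": "APPROVED"},
)

# The envelope would then be serialized (in practice, to Avro against the
# Schema Registry; JSON is used here only to keep the sketch self-contained).
serialized = json.dumps(event, sort_keys=True)
print(serialized)
```

In a framework like the one described, the noun/verb/object triple would typically drive topic selection and self-service routing, while the payload carries the claim-specific details consumed downstream.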
