
Picture this: You’re watching a high-stakes football match. A goal is scored in the 89th minute. Within a split second, your sportsbook app updates - odds shift, markets lock or open, and everything feels seamless.
But behind that smooth front-end experience - pure complexity.
Delasport's system has to handle thousands of these updates at once - goals, fouls, injuries, substitutions, overtime - all in real time, across hundreds of matches. The scale is staggering, but it’s also where the fun begins (at least for those of us who enjoy a good concurrency challenge).
Here’s how we make it work using Java, multithreading, Kafka, and a healthy respect for immutability, as told by Delasport's Java Team Lead, Georgi Stoyanov.
In a live betting environment, speed isn’t just important - it’s non-negotiable.
Every single change in the game can affect the odds on offer and which markets are open or locked at that moment.
Multiply that across hundreds or even thousands of simultaneous matches. This isn’t just a stream of updates - it’s a flood. And the system must react in near real time, without dropping a beat.
We’ve built our system around a simple principle: divide and conquer.
Kafka: The Event Traffic Controller
Kafka acts as our real-time data backbone. Each match has its own stream of updates, funneled into a Kafka topic and partitioned - often by match ID.
Why partitions? Because they allow us to scale horizontally and process data in parallel. Kafka also guarantees message order within a partition, which is critical when timing matters - like determining whether a goal came before a red card.
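To make the partitioning idea concrete, here is a minimal producer sketch. The match-events topic name and the JSON string payload are illustrative assumptions, not our exact production setup - the point is simply that keying each record by match ID lets Kafka's default partitioner keep every event for a match on the same partition, and therefore in order.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MatchEventProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String matchId = "5678";                              // the partition key
            String payload = "{\"type\":\"GOAL\",\"minute\":89}"; // illustrative event

            // Keying by match ID means every event for this match lands on the
            // same partition, so Kafka preserves the order of events per match.
            producer.send(new ProducerRecord<>("match-events", matchId, payload));
        }
    }
}
```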
Java Threads: The Processing Engine
Each Kafka partition is handled by a dedicated Java thread (or thread pool, when necessary). That thread is responsible for deserializing incoming events, updating the internal state of its match, recalculating odds, and pushing the refreshed market view downstream.
The outcome is massively parallel processing - ordered, efficient, and fast. Each match is handled independently, which allows us to scale our processing across CPU cores and servers with minimal contention.
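A simplified sketch of that consumer side, with offset management and error handling deliberately left out: each partition's batch is handed to its own single-threaded executor, so events within one match stay ordered while different matches run in parallel. The string serialization and the odds-engine group ID are assumptions for the example.

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PartitionedEventProcessor {

    // One single-threaded executor per partition: ordering is preserved within a
    // partition, while different partitions (different matches) run in parallel.
    private final Map<TopicPartition, ExecutorService> workers = new ConcurrentHashMap<>();

    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "odds-engine");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("match-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (TopicPartition partition : records.partitions()) {
                    List<ConsumerRecord<String, String>> batch = records.records(partition);
                    // Hand the whole ordered batch to that partition's dedicated thread.
                    workers.computeIfAbsent(partition, p -> Executors.newSingleThreadExecutor())
                           .submit(() -> batch.forEach(r -> process(r.key(), r.value())));
                }
            }
        }
    }

    private void process(String matchId, String rawEvent) {
        // Deserialize, update match state, recalculate odds, publish the new market view.
    }
}
```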
Immutability: Our Best Friend in a Concurrent World
Concurrency is tricky. Bugs caused by shared mutable state are some of the most subtle and frustrating in software engineering.
That’s why we’ve built our system around immutable data structures. Each event message, once deserialized, is wrapped into an immutable object. Any transformation or update creates a new object.
The benefits: no race conditions caused by shared mutable state, thread safety without heavy locking, and data that is far easier to reason about and debug.
This design choice has dramatically improved the reliability of our processing pipeline - and simplified how our developers work with data.
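A minimal sketch of what this looks like with Java records - the event fields and goal types here are illustrative, not our real schema. The key point is that applying an event never changes the existing state; it produces a brand-new object, so any thread still reading the old snapshot is unaffected.

```java
import java.util.ArrayList;
import java.util.List;

// Each deserialized update becomes an immutable value: no setters, no shared mutation.
record MatchEvent(String matchId, String type, int minute) { }

record MatchState(String matchId, int homeGoals, int awayGoals, List<MatchEvent> events) {

    // Applying an event never mutates this state; it returns a new MatchState,
    // so threads still reading the old snapshot are never affected.
    MatchState apply(MatchEvent event) {
        int home = homeGoals + ("HOME_GOAL".equals(event.type()) ? 1 : 0);
        int away = awayGoals + ("AWAY_GOAL".equals(event.type()) ? 1 : 0);
        List<MatchEvent> updated = new ArrayList<>(events);
        updated.add(event);
        return new MatchState(matchId, home, away, List.copyOf(updated));
    }
}
```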
Let’s walk through what happens when a goal is scored in Match 5678:
1. Kafka ingests the event into the topic match-5678-events.
2. The message is routed to the appropriate partition.
3. A Java thread assigned to that partition picks up the message.
4. It processes the event: updates internal state, recalculates odds, and generates a new view of the market.
5. The updated data is pushed to the front end, usually within milliseconds.
All of this happens with minimal locking, no shared mutable state, and no interference from events in other matches.
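Tying steps 3 to 5 together, here is a hypothetical handler built on the records from the immutability sketch above. The pricing logic and the publish step are placeholders - the real model and transport are far more involved - but the shape is the same: apply, recalculate, push.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Uses the MatchEvent and MatchState records from the immutability sketch above.
public class GoalEventHandler {

    // One entry per match; only that match's partition thread writes to its entry,
    // so matches never contend with each other.
    private final Map<String, MatchState> states = new ConcurrentHashMap<>();

    public void onEvent(MatchEvent event) {
        // Steps 3-4: the partition thread applies the event to an immutable snapshot
        // and recalculates the market from the new state.
        MatchState previous = states.getOrDefault(
                event.matchId(), new MatchState(event.matchId(), 0, 0, List.of()));
        MatchState updated = previous.apply(event);
        states.put(event.matchId(), updated);

        double homeWinOdds = recalculateHomeWinOdds(updated);

        // Step 5: push the refreshed market view to the front end (transport omitted).
        publish(event.matchId(), homeWinOdds);
    }

    private double recalculateHomeWinOdds(MatchState state) {
        // Placeholder pricing logic; the real model is far more involved.
        return state.homeGoals() > state.awayGoals() ? 1.20 : 2.50;
    }

    private void publish(String matchId, double odds) {
        System.out.printf("match %s -> home win %.2f%n", matchId, odds);
    }
}
```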
After building and refining this system in production, a few lessons stand out: partition by something that keeps work independent (for us, the match ID), lean on immutability to eliminate whole classes of concurrency bugs, and keep contention low so processing can scale across cores and servers.
Building a scalable, real-time sportsbook engine in Java has been one of the most rewarding engineering challenges we've tackled. It forced us to think deeply about concurrency, architecture, and fault tolerance - while still delivering a seamless experience to users who expect everything to just work.
If you're working on anything in the real-time or high-throughput space - trading systems, analytics platforms, even multiplayer games - many of the same principles apply.
And if you're deep in the trenches of Java concurrency and event-driven architecture, I'd love to hear how you’re approaching it.
Sound interesting? Delasport will be at the upcoming jPrime 2025, where you can learn more about how our Java teams operate and what opportunities we have for you!

