The Principles of Low Latency Microservices

Peter Lawrey presents an interesting talk on “Low Latency Microservices” at the Melbourne JVM. In this talk he explores how microservices and trading systems overlap and what they can learn from each other: in particular, how microservices can be made easy to test and performant, and how trading systems can achieve a shorter time to market and become easier to maintain. Peter Lawrey likes to inspire developers to improve the craftsmanship of their solutions, engineer their systems for simplicity and performance, and enjoy their work more by being creative and innovative.

The presentation begins with the idea that meeting the requirements of high performance combined with very low latency requires careful attention to all aspects of coding and architecture. Chronicle’s software is highly optimised Java, employing techniques that minimise operating system overhead and Java garbage collection (the objective is zero GC). These low latency microservices in Java are single-threaded, eliminating the need for thread management, locks, signals, polling of thread state, and so on. This produces a deterministic and reproducible result (critical for replicating and debugging problems), while also increasing throughput and decreasing latency.
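To make the single-threaded, allocation-averse style concrete, here is a minimal sketch in plain Java (not the Chronicle API; the class and event names are illustrative). One thread drains an input queue strictly in order, reusing a single mutable event object so the steady-state loop allocates nothing, which is the spirit of the "zero GC" objective described above.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SingleThreadedService {
    // Mutable event, allocated once and reused for every message (zero-GC style).
    static final class PriceEvent {
        long instrumentId;
        double price;
    }

    // Deterministic, lock-free processing: events are applied strictly in the
    // order they arrive, so replaying the same input gives the same output.
    static double sumPrices(Queue<double[]> inbound) {
        PriceEvent reusable = new PriceEvent();
        double total = 0;
        double[] raw;
        while ((raw = inbound.poll()) != null) {
            reusable.instrumentId = (long) raw[0];
            reusable.price = raw[1];
            total += reusable.price;   // handler logic inlined for brevity
        }
        return total;
    }

    public static void main(String[] args) {
        Queue<double[]> inbound = new ArrayDeque<>();
        inbound.add(new double[]{1, 100.25});
        inbound.add(new double[]{2, 100.75});
        System.out.println(sumPrices(inbound)); // 201.0
    }
}
```

Because there is only one thread, there are no locks or signals to reason about, and a given sequence of input events always produces the same result.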

He discusses how a low latency microservice operates: it receives events from a source of data that contains the complete history of all events and operates in an append-only manner. The microservice reacts to the events, processes the input data, and generates output data. In many cases the function is stateless: all required information is derived from the input events. In some cases, however, efficiency is gained by maintaining state between transactions. High performance and low latency are the objectives that determine whether a particular Chronicle service maintains state or is stateless. If state is maintained, it can be persisted to reduce start-up time the next time the service is launched.
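The event-sourcing idea can be sketched in a few lines of plain Java (this is a simplified model, not Chronicle Queue): the source of truth is an append-only log of events, and a service derives its state by replaying that history from the start.

```java
import java.util.ArrayList;
import java.util.List;

public class AppendOnlyReplay {
    // The log is only ever appended to; existing entries are never mutated.
    static void append(List<Long> log, long qty) {
        log.add(qty);
    }

    // Replaying the full history from the beginning always yields the same
    // state, which is what makes restarts and debugging reproducible.
    static long replayPosition(List<Long> log) {
        long position = 0;
        for (long qty : log) {
            position += qty;
        }
        return position;
    }

    public static void main(String[] args) {
        List<Long> log = new ArrayList<>();
        append(log, 100);
        append(log, -30);
        append(log, 10);
        System.out.println(replayPosition(log)); // 80
    }
}
```

A stateless service recomputes everything it needs from the log; a stateful one keeps a running result between events and only falls back to replay when it has to.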

In the presentation Peter also explains the Lambda architecture, in which the microservice is modelled as a function over this event stream: the same append-only history of events drives the function, and its output is determined entirely by the input events, plus any state the service chooses to keep between transactions.

There are also examples of how to produce a stateful Lambda, along with some code excerpts.
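As a rough illustration of what a stateful Lambda looks like (a hypothetical sketch, not code from the talk), the handler below keeps running state between events, and that state can be snapshotted so a restarted service resumes from the snapshot instead of replaying the whole history, which is how persisting state reduces start-up time.

```java
import java.util.HashMap;
import java.util.Map;

public class StatefulLambda {
    private final Map<String, Long> positions = new HashMap<>();

    // The event handler updates running state rather than recomputing from scratch.
    public void onTrade(String symbol, long qty) {
        positions.merge(symbol, qty, Long::sum);
    }

    // Snapshot of current state; in a real service this would be persisted
    // so the next start-up is faster than a full replay.
    public Map<String, Long> snapshot() {
        return new HashMap<>(positions);
    }

    // Restore from a persisted snapshot, then continue from newer events only.
    public static StatefulLambda fromSnapshot(Map<String, Long> saved) {
        StatefulLambda s = new StatefulLambda();
        s.positions.putAll(saved);
        return s;
    }

    public static void main(String[] args) {
        StatefulLambda service = new StatefulLambda();
        service.onTrade("AAPL", 100);
        service.onTrade("AAPL", -40);
        Map<String, Long> saved = service.snapshot();

        StatefulLambda restarted = StatefulLambda.fromSnapshot(saved);
        restarted.onTrade("AAPL", 10);
        System.out.println(restarted.snapshot().get("AAPL")); // 70
    }
}
```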

The presentation then gives an in-depth overview of the interprocess transport used between the low latency microservices, which in this case is Chronicle Queue. Peter explains how the Chronicle Services framework has been optimised to work with Chronicle Queue, and shows some benchmarks that illustrate this.
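The transport semantics can be modelled in-process with plain Java (this is a simplified sketch, not the real Chronicle Queue API): one appender writes to an append-only queue, and each tailer keeps its own read index, so independent consumers can read the same messages at their own pace.

```java
import java.util.ArrayList;
import java.util.List;

public class QueueSketch {
    private final List<String> messages = new ArrayList<>();

    // Append-only: messages are never removed or overwritten.
    void write(String msg) {
        messages.add(msg);
    }

    // Each tailer is just a cursor into the shared log.
    final class Tailer {
        private int index;

        String read() {
            return index < messages.size() ? messages.get(index++) : null;
        }
    }

    Tailer createTailer() {
        return new Tailer();
    }

    public static void main(String[] args) {
        QueueSketch queue = new QueueSketch();
        queue.write("quote:100.5");
        queue.write("quote:101.0");

        Tailer a = queue.createTailer();
        Tailer b = queue.createTailer();
        System.out.println(a.read()); // quote:100.5
        System.out.println(a.read()); // quote:101.0
        System.out.println(b.read()); // quote:100.5 (independent cursor)
    }
}
```

The real Chronicle Queue persists the log off-heap and memory-maps it so separate processes can share it, but the appender/tailer cursor model above is the essential shape.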

Finally, there is a Q&A covering several topics, including best practice around testing and benchmarking of Chronicle Queue and the microservices framework.
