Chronicle Queue is a persisted, low-latency messaging framework for high-performance and critical applications. Designed as a "record everything store", Chronicle Queue provides sub-microsecond real-time latency, supporting even the most demanding applications. It can be used in any application where the recording, distribution, analysis, or processing of information is a concern; these characteristics make it ideal when large amounts of data need to be processed at speed.
Large amounts of data are fundamental to any AI system, from traditionally formatted data (e.g. market data) to large unstructured datasets. One of the challenges is ensuring that this data is processed in a timely manner to prove useful in a real-world environment, and Chronicle Queue helps ensure that it is moved at speed.
At Chronicle we have been investing research time into improving the throughput of Chronicle Queue to meet a client's requirements for processing both large external data sources and their own internally created data.
Benchmarking was based on three different packet sizes, from 60 bytes to 500 bytes, and was run on an Ubuntu machine with an Intel i7-10710U CPU @ 1.10 GHz.
Writing 140,500,273 messages took 5.025 seconds, at a rate of 27,959,000 per second
Reading 140,500,273 messages took 6.619 seconds, at a rate of 21,226,000 per second
Writing 44,039,331 messages took 5.017 seconds, at a rate of 8,778,000 per second
Reading 44,039,331 messages took 2.846 seconds, at a rate of 15,472,000 per second
Writing 23,068,441 messages took 5.076 seconds, at a rate of 4,544,000 per second
Reading 23,068,441 messages took 2.728 seconds, at a rate of 8,456,000 per second
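The quoted rates follow directly from dividing the message count by the elapsed time. As a quick sanity check of the figures above (a minimal sketch, not part of the benchmark harness itself), the rates for the smallest packets can be reproduced like this:

```java
// Sanity-check the reported throughput: rate = messages / seconds.
// The counts and timings below are taken from the benchmark output above.
public class ThroughputCheck {
    static long rate(long messages, double seconds) {
        return (long) (messages / seconds);
    }

    public static void main(String[] args) {
        long writeRate = rate(140_500_273L, 5.025); // just under 28 million msgs/s
        long readRate  = rate(140_500_273L, 6.619); // just over 21 million msgs/s
        System.out.println("write: " + writeRate + " msgs/s");
        System.out.println("read:  " + readRate + " msgs/s");
    }
}
```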
As can be seen, write rates range from just over 4.5 million to nearly 28 million messages per second, ensuring that, regardless of the size and shape of the dataset, it will be processed quickly and efficiently.
We are now looking to build on this research to enhance the throughput capabilities of Chronicle Queue: we are developing a Concurrent Chronicle Queue, which should increase these figures fivefold, even for much larger message sizes.
Neural networks are a fundamental part of artificial intelligence, and we have been working to create a neural network using Chronicle Services. This demonstration shows how to build a neural network with Chronicle Services, where each service represents a layer in the network. Prior to putting together the Java representation, the network was trained using the TensorFlow Playground (A Neural Network Playground, tensorflow.org). Full details about the demonstration and its output can be found here.
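To give a flavour of the layer-per-service structure, here is a minimal, self-contained sketch in plain Java. This is not the actual Chronicle Services code: the layer sizes, weights, and tanh activation are illustrative assumptions. Each layer is modelled as a component that transforms its input and hands the result downstream, just as each service would forward messages to the next layer in the pipeline:

```java
import java.util.function.Function;

// Illustrative sketch: each "layer" transforms its input and passes the
// result on, mirroring how each Chronicle Service would forward messages
// to the service representing the next layer.
public class LayeredNetwork {
    // One dense layer: out[i] = tanh(sum_j weights[i][j] * in[j] + bias[i])
    static Function<double[], double[]> layer(double[][] weights, double[] bias) {
        return in -> {
            double[] out = new double[weights.length];
            for (int i = 0; i < weights.length; i++) {
                double sum = bias[i];
                for (int j = 0; j < in.length; j++) {
                    sum += weights[i][j] * in[j];
                }
                out[i] = Math.tanh(sum);
            }
            return out;
        };
    }

    public static void main(String[] args) {
        // Two-layer network with made-up weights; in the Chronicle Services
        // demonstration each layer would instead be a separate service.
        Function<double[], double[]> hidden =
                layer(new double[][]{{0.5, -0.4}, {0.3, 0.8}}, new double[]{0.1, -0.2});
        Function<double[], double[]> output =
                layer(new double[][]{{0.7, -0.6}}, new double[]{0.0});

        double[] result = hidden.andThen(output).apply(new double[]{1.0, 0.5});
        System.out.println("network output: " + result[0]);
    }
}
```

Chaining the layers with `Function.andThen` keeps each layer independent of the others, which is the same decoupling that makes a service-per-layer deployment natural.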
Chronicle Queue Python
Viewed alongside Java and C++, Python provides a compelling mix of simplicity and power and has become the development language of choice for a wide range of tasks where ease of implementation, flexibility and short development times are more important than outright speed. In particular, Python is a natural choice for more business-oriented users with notable target applications including:
- Algorithmic model development
- Data analysis, recording, and replay
- Back-Office feeds and processing
- Rapid prototyping
To further broaden the applicability of Chronicle Queue, we are developing a Python API. This builds directly on the underlying C++ Chronicle Queue library, thereby benefiting from highly optimised core functionality.
By building directly on the C++ Chronicle Queue library, Python Chronicle Queue will in principle have full access to all Queue features. The initial aim is to expose the subset of features of most immediate practical use for the above example use cases, with a longer-term goal of further broadening the feature set in response to demand and where prudent, subject to the limitations of Python. All exposed features will be fully interoperable with the Java and C++ versions: a queue written using the Python, Java, or C++ API can be seamlessly processed using any combination of Python, Java, or C++. Initial Python implementations will be for x86 Linux, with support for other platforms proposed for a later date.
To find out more about any of these initiatives, please contact us at firstname.lastname@example.org.