A persisted, low-latency messaging framework that delivers high throughput at microsecond latencies for performance-critical applications.
Chronicle Queue aims to achieve latencies of under 40 microseconds end-to-end across multiple services 99.99% of the time.
High-Resolution Timings Across Machines
Chronicle Queue supports calibrated timings across machines, so outliers can be detected accurately.
Persistence to Disk
Chronicle Queue makes full use of the available disk space, and so is not constrained by your machine's main memory. Because all saved data is stored in memory-mapped files, the on-heap overhead is insignificant, even for 100 TB of data.
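The mechanism described above, writing through memory-mapped files so the payload never lives on the Java heap, can be sketched with the plain JDK. This is an illustrative sketch only (the class and method names are made up for this example, and Chronicle Queue's real implementation is far more sophisticated):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch (JDK only): persisting data through a memory-mapped
// file keeps the payload off the Java heap, in the OS page cache, so heap
// size and GC pressure stay flat regardless of how much data is written.
public class MappedPersistenceSketch {

    /** Writes a value through a file mapping and reads it back. */
    public static long writeAndReadBack(Path file, long value) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map 4 KiB of the file directly into the process address space.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            map.putLong(0, value);   // write lands in the page cache, not the heap
            return map.getLong(0);   // read back through the same mapping
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mapped-sketch", ".dat");
        System.out.println(writeAndReadBack(file, 42L)); // prints 42
        Files.deleteIfExists(file);
    }
}
```

Because the bytes live in the page cache rather than on the heap, the operating system persists them to disk and evicts cold pages as needed, which is what lets a queue grow far beyond RAM without growing the heap.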
Replays All Input and Output
Chronicle Queue manages storage by cycle. A listener can be added to receive a notification whenever a new cycle file is created, and another when a file is no longer retained. At that point, the file can be moved, compressed, or deleted, as the application requires.
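The cycle-and-listener pattern above can be sketched in a few lines of JDK-only code. The names below (`CycleRollerSketch`, `acquire`, the `cycle-N.cq4` file names) are invented for this illustration and are not Chronicle Queue's actual listener API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Illustrative sketch: each "cycle" (e.g. one day) gets its own file;
// when the roller moves to a new cycle it notifies the listener that
// the previous cycle's file has been released, at which point the
// application may move, compress, or delete it.
public class CycleRollerSketch {
    private final BiConsumer<Integer, String> onReleased; // (cycle, file name)
    private final List<String> files = new ArrayList<>();
    private int currentCycle = -1;

    public CycleRollerSketch(BiConsumer<Integer, String> onReleased) {
        this.onReleased = onReleased;
    }

    /** Advances to the given cycle, releasing the previous cycle's file. */
    public String acquire(int cycle) {
        if (cycle != currentCycle) {
            if (currentCycle >= 0)
                onReleased.accept(currentCycle, fileFor(currentCycle));
            currentCycle = cycle;
            files.add(fileFor(cycle));
        }
        return fileFor(cycle);
    }

    private static String fileFor(int cycle) {
        // Chronicle Queue uses dated file names; this naming is illustrative.
        return "cycle-" + cycle + ".cq4";
    }

    public static void main(String[] args) {
        CycleRollerSketch roller = new CycleRollerSketch(
                (cycle, file) -> System.out.println("released cycle " + cycle + ": " + file));
        roller.acquire(1);
        roller.acquire(2); // prints: released cycle 1: cycle-1.cq4
    }
}
```

Keeping retention policy in a callback rather than in the queue itself is what lets each application choose its own archive-or-delete behaviour.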
Chronicle Queue Benchmarks
Chronicle Queue can be thought of as a low-latency, broker-less, durable/persisted JVM topic. It is a distributed, unbounded, persisted queue that:
supports asynchronous RMI and pub/sub interfaces with microsecond latencies
passes messages between JVMs in under 1 µs*
provides stable, soft real-time latencies at throughputs in the millions of messages/s for a single thread writing to one queue
* In optimised examples
Write-to-Read Latency Between 2 Threads
Delivers Value for Companies Worldwide
Articles about Chronicle Queue
The Big Question: How is Chronicle Queue being used for Big Data solutions in Java, and how does it work under the covers? What is Chronicle Queue? Chronicle Queue is a persisted journal of messages which supports concurrent writers and readers, even across multiple JVMs on the same machine. Every reader sees every message, and…
Compared to a year ago, we have significantly improved the throughput at which we can achieve the 99%ile (worst one in 100). What tools and tricks did we use to achieve that? What are we testing? Chronicle Queue appends messages to a file while another thread or process can be reading it. This gives you…
If you use a standard JVM such as the Oracle JVM or OpenJDK, you may find that as the heap grows, performance drops as GC pause times escalate. This tends to become a problem at around 32 GB of heap, though the point at which it happens often depends on the application…