What is Chronicle FIX?
Built on our Chronicle libraries, and following the Chronicle philosophy, Chronicle FIX employs, among other techniques: zero-copy design, which eliminates unnecessary garbage collection; runtime code generation, which reduces code size for efficient CPU cache usage; and smart field ordering for optimal parsing. Each of these increases speed, and together they allow Chronicle FIX to achieve excellent performance.
Chronicle FIX Features
- Low latency, recyclable, configurable FIX implementation for inbound FIX message parsing, and outbound FIX message transmission
- Full FIX compatibility with FIX versions 4.0, 4.1, 4.2, 4.3, 4.4, and 5.0
- Pre-allocates the memory for FixMessage objects, so that objects are reused and dynamic memory allocation is avoided
- FIX parser, around 1 microsecond, 99.9% of the time for medium sized messages
- FIX message generator, around 1 microsecond, 99.9% of the time for medium sized messages
- Optional persistence of messages and latency checkpoints in, and out
- Optimise latency for FIX in-order messages with fallback for out-of-order messages
- Scalable to tens of connections per server, with minimal latency impact
- Session handling; login, logout, and heartbeat
- Integration with existing data structures in a zero-copy manner
- High-performance test client to drive the system to its limits
- Tools for minimising garbage collection and enabling zero-copy
- Optimise networking to work efficiently with Solarflare network cards
- MiFID II support
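The pre-allocation and reuse of message objects mentioned above can be illustrated with a minimal object-pool sketch. Everything here is hypothetical: the `FixMessage` and `FixMessagePool` classes below are illustrations of the general technique, not Chronicle FIX's actual API.

```java
import java.util.ArrayDeque;

// Hypothetical message holder; Chronicle FIX's real FixMessage differs.
final class FixMessage {
    final StringBuilder symbol = new StringBuilder();
    double price;
    long quantity;

    void clear() {              // reset state so the instance can be reused
        symbol.setLength(0);
        price = 0.0;
        quantity = 0L;
    }
}

// Pre-allocates all messages up front; steady-state use allocates nothing.
final class FixMessagePool {
    private final ArrayDeque<FixMessage> free = new ArrayDeque<>();

    FixMessagePool(int size) {
        for (int i = 0; i < size; i++)
            free.push(new FixMessage());     // allocate before trading starts
    }

    FixMessage acquire() {
        FixMessage m = free.poll();
        return m != null ? m : new FixMessage(); // grow only if exhausted
    }

    void release(FixMessage m) {
        m.clear();                           // wipe, then return to the pool
        free.push(m);
    }
}
```

Because released instances are cleared and handed straight back out, a steady message flow touches only pre-allocated objects and produces no garbage for the collector.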
How does it work?
The parser has been optimised to accept fields in an expected order. Our default implementation orders the fields as they appear in the fixprotocol.org documentation. The fields can still arrive in any order that FIX allows, but parsing may then be a little slower. Generation works in reverse: you call an interface that writes the FIX message as you add the fields. Ideally, you add the fields in the expected order, which speeds up parsing on the receiving side.
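The order-sensitive parsing described above can be sketched as follows. This is a simplified illustration, not Chronicle FIX's parser: the `OrderedFixParser` class, the use of `|` in place of the SOH delimiter (0x01), and the `Map` result are all assumptions for readability; a real low-latency engine would avoid these allocations entirely.

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class OrderedFixParser {
    private final int[] expected;   // expected tag order for this message type

    OrderedFixParser(int... expected) { this.expected = expected; }

    /** Parse "tag=value|tag=value" pairs; '|' stands in for SOH (0x01). */
    Map<Integer, String> parse(String msg) {
        Map<Integer, String> fields = new LinkedHashMap<>();
        int next = 0;               // position in the expected tag order
        for (String pair : msg.split("\\|")) {
            int eq = pair.indexOf('=');
            int tag = Integer.parseInt(pair.substring(0, eq));
            String value = pair.substring(eq + 1);
            if (next < expected.length && tag == expected[next]) {
                next++;             // fast path: tag arrived exactly where expected
            }                       // else: out of order; still accepted, just slower
            fields.put(tag, value);
        }
        return fields;
    }
}
```

When every tag matches the expected position, the parser never has to search for where a field belongs; out-of-order fields are still handled correctly via the fallback path.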
By recycling buffers, and using object pools and mutable StringBuilders, we can send and receive messages end-to-end without significant garbage. Our target is less than one byte of garbage per message, on average. We design our systems to produce less than 1 GB of garbage per hour, which means a 24 GB Eden space takes 24 hours to fill. You can save around 0.2 microseconds if you skip checking, or computing, the byte sum (the FIX CheckSum, tag 10).
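Recycling a mutable buffer across messages can be sketched like this. The `RecyclingFixWriter` class and its methods are hypothetical illustrations of the technique, not Chronicle FIX's API, and `|` again stands in for the SOH delimiter.

```java
final class RecyclingFixWriter {
    private final StringBuilder buf = new StringBuilder(256); // reused for every message

    StringBuilder begin(String msgType) {
        buf.setLength(0);                  // recycle: no allocation per message
        return field(35, msgType);
    }

    StringBuilder field(int tag, CharSequence value) {
        buf.append(tag).append('=').append(value).append('|'); // '|' stands for SOH
        return buf;
    }

    int checksum() {
        int sum = 0;
        for (int i = 0; i < buf.length(); i++)
            sum += buf.charAt(i);
        return sum & 0xFF;                 // FIX CheckSum (tag 10): byte sum mod 256
    }
}
```

Since `begin` simply resets the length of the one StringBuilder, building a million messages still allocates only a single buffer; the `checksum` loop shows the byte sum whose cost the paragraph above quantifies at roughly 0.2 microseconds.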
Super Low Latency
Chronicle Queue (a super-optimised, low-latency core library from the Chronicle Enterprise suite) is included, and it logs every FIX message. With replay, you can easily feed all your historical FIX messages from a Chronicle queue back into Chronicle FIX for deterministic replay. In other words, you can drive your test systems with actual historical data: no more made-up or generated data that cannot reproduce production issues. You can use the same technique to test experimental strategies.
Chronicle FIX can be used stand-alone. However, it really makes sense to use it as your entry point to the rest of the Chronicle products. You will benefit from the ultra-low latency that we can provide if you use Chronicle as the backbone throughout your systems.
How to access Chronicle FIX
You can access Chronicle FIX via our enterprise GitHub repository, where you can read, fork, and create pull requests on our code. It works exactly like open source, and you can be as involved as you like.
Chronicle FIX Support to Meet Your Specific Needs
We'll provide support as you need it, helping with any issues that arise, whether with Chronicle FIX itself or with your project as a whole. We understand that a FIX engine is usually only a small part of a system, and we can use our experience to help with latency issues in other parts of your project should you need it.
See the benefits of Chronicle's products in action
At Chronicle, we believe less is more. Learn how Chronicle can support your business, increase efficiency, and streamline your systems and workflows by speaking with one of our experts.