COVID-19 – High Performance in a Time of Volatility

By Peter Lawrey

“Unprecedented” is the much-overused adjective in almost every blog, article, or feature published on the global COVID-19 pandemic these days. And yet, these are truly unprecedented times. We’ve seen our society go into lockdown, transitioned rapidly to social distancing and working from home (often in the company of small children), and watched every certainty about our lives, our jobs, and the economy evaporate.

The rollercoaster of uncertainty that each of us is experiencing as an individual has been mirrored in the behaviour of our financial markets. As COVID-19 went truly global at the end of February, volumes spiked and trading activity soared as fear and uncertainty over the spread of the virus peaked. Monthly trading volumes increased by 30-60% across all asset classes, and over that single month trading activity by volume on lit venues doubled. On some global exchanges, circuit breakers were triggered on a near-daily basis. It is a testament to the measures around market resilience and systemic risk, implemented by regulators after the previous global financial crisis, that price formation and trading continued with relatively little interruption.

It is at times like this, with volatility soaring, pricing uncertain and liquidity scarce, that firms rely most heavily on the robust performance of their pricing engines and trading systems. Volatility drives higher levels of trading activity, which means correspondingly more market data must be captured and processed by internal pricing engines. Latency is critical end to end: from decision-making and order generation, through to accessing liquidity on a venue and executing a trade, firms must keep latency to a minimum, or the markets will move against them before they have their desired position.

There’s no time to scale up hardware or to tweak performance, especially with developers and production support teams offsite, and with tighter controls around change deployment in place. For the firms that have come out ahead during this time of intense market turmoil, the availability of sufficient headroom in their existing systems and architectures is a key differentiator. “Headroom” means that systems are not operating at capacity during day-to-day trading activity in normal market conditions. Rather, they are utilising a relatively low fraction of their full capability, and therefore have the in-built resilience to cope with peaking demand during unusually volatile and exceptional market circumstances.

At Chronicle, our design principles ensure that headroom is always built into our clients’ infrastructure, so that they are fully able to withstand market turbulence without additional scaling. Our software is designed to operate at between 1% and 10% utilisation in normal market conditions, and to deliver consistent latencies at higher levels of utilisation, giving our clients up to 99% headroom in times of volatility and high volumes: more than enough spare capacity to maintain performance through even the most extreme spikes in activity.
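To make the headroom arithmetic concrete, here is a minimal illustrative sketch in Java. The capacity and load figures are hypothetical, chosen only to show how utilisation and headroom relate; they are not drawn from any client system.

// Illustrative figures only: not client benchmarks.
public class HeadroomExample {
    public static void main(String[] args) {
        double sustainedCapacity = 10_000_000; // messages/sec the system can sustain
        double normalLoad = 100_000;           // messages/sec in normal market conditions

        double utilisation = normalLoad / sustainedCapacity; // 0.01, i.e. 1%
        double headroom = 1.0 - utilisation;                 // 0.99, i.e. 99%

        System.out.printf("Utilisation: %.0f%%, headroom: %.0f%%%n",
                utilisation * 100, headroom * 100);

        // Even a tenfold spike in volumes would leave this system at
        // only 10% of its sustained capacity.
    }
}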

We’ve carried out intensive testing in a Lenovo lab, recording the read/write performance of Chronicle Queue, our most popular and widely-used product, across a range of thread counts. The benchmark measures read/write throughput of 60-byte messages: a single thread handled around 18 million writes per second, while 36 threads on the same server handled 120 million reads and over 100 million writes per second, representing a burst of 128GB of data in a single second.
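The lab harness itself is not reproduced here, but for readers unfamiliar with Chronicle Queue, the sketch below shows the shape of the write/read path that such a benchmark exercises, using the open-source Chronicle Queue API (ChronicleQueue, ExcerptAppender, ExcerptTailer). The queue path, message count and single-threaded loop are illustrative assumptions, not the benchmark configuration.

import net.openhft.chronicle.bytes.Bytes;
import net.openhft.chronicle.bytes.BytesStore;
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;

public class QueueSketch {
    public static void main(String[] args) {
        // Illustrative path and message count; not the Lenovo lab set-up.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-demo").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            ExcerptTailer tailer = queue.createTailer();

            // A fixed 60-byte payload, matching the message size used in the benchmark.
            BytesStore<?, byte[]> payload = BytesStore.wrap(new byte[60]);

            // Write path: append messages to the memory-mapped queue.
            for (int i = 0; i < 1_000_000; i++)
                appender.writeBytes(payload);

            // Read path: replay the messages from the queue.
            Bytes<?> buffer = Bytes.allocateElasticDirect();
            long read = 0;
            while (tailer.readBytes(buffer)) {
                read++;
                buffer.clear();
            }
            System.out.println("Messages read: " + read);
        }
    }
}

In the published figures the same write and read paths are driven from many threads in parallel; the single-threaded loop above is only intended to show where the appender and tailer sit in an application.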

All very well, you might say, but how does this translate into pricing and trading performance? We’ve implemented trading systems for tier-1 global investment banks, built on Chronicle Queue, that demonstrate the power of this capability in practice. For systems built to deliver low-latency pricing, auto-hedging, order management and market connectivity, we have demonstrated consistent end-to-end latency of around 35 microseconds at the 99th percentile. Using our products in automated trading systems, clients have seen message throughput for single-threaded processes increase by up to ten times, to over 100,000 messages per second, allowing them to use a far wider range of full-book strategies. Our products have also typically delivered a tenfold reduction in latency compared with the low-latency systems they replaced, ensuring that our clients’ orders hit the market in an efficient and timely manner.
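For readers unfamiliar with percentile-based latency figures, the fragment below shows one simple way such a number can be derived: record a large number of individual timings and report the value below which 99% of them fall. It is a self-contained illustration, not our production measurement harness, and doWork() is a hypothetical placeholder for the end-to-end path being timed.

import java.util.Arrays;

public class PercentileSketch {
    public static void main(String[] args) {
        int samples = 100_000;
        long[] latenciesNs = new long[samples];

        // Time each round trip; in a real system these would be
        // order-creation to execution-report timestamps.
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            doWork();
            latenciesNs[i] = System.nanoTime() - start;
        }

        Arrays.sort(latenciesNs);
        long p50 = latenciesNs[(int) (samples * 0.50)];
        long p99 = latenciesNs[(int) (samples * 0.99)];
        System.out.printf("50%%: %,d ns, 99%%: %,d ns%n", p50, p99);
    }

    private static void doWork() {
        // Hypothetical placeholder for the pricing/order path being measured.
    }
}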

Chronicle Software products have been designed to support the configuration, control and performance of distributed applications and messaging, specifically for the highly demanding requirements of global pricing and trading systems. We also have a suite of Chronicle Libraries, enabling easier development, monitoring and tuning of market data processing systems, where performance and scalability are critical to our clients’ success. And our Chronicle Solutions are built for enterprise customers – including tier-1 investment banks and exchanges – with specific business applications and use cases in mind. We also bring the skills to train, support and co-develop solutions with our clients’ teams, accelerating development and delivery timelines. And finally, we’ve designed our business to operate remotely, so our teams remain fully operational under all circumstances, and wherever in the world they may be!

If you’d like to talk to us about how Chronicle and our products can help your firm realise its true potential, contact Andrew.Twigg@Chronicle.Software

Register for our webinar series: our CEO, Peter Lawrey, will be sharing his experience of benchmarking low-latency microservices on 14th May 2020. https://tinyurl.com/y9ex5b6j

Join our user group, where you can register for Chronicle content and updates, plus view our previous webinar on “Writing Low Latency Microservices”. https://www.linkedin.com/groups/12138236/
