Frequently Asked Questions

The following questions and answers are a distillation of the webinar given by Peter Lawrey on 15th April 2020.

What are the implications of Chronicle staying at Java version 8?

The main push for us is probably going to be commercial rather than technical. We have stated that we will continue to support Java 8 for as long as Oracle does for their supported clients. It could be for quite some time, which is fairly consistent with what our clients are doing; a lot of clients are still on Java 8. Our main concern is about being able to continue to support existing clients.

Obviously, there are a lot of nice features in newer versions of Java. However, most of them are not particularly compelling for us. Part of the reason that they are not so compelling is that, even though there are newer garbage collectors which have some nice features, in particular to deal with scaling to larger heap sizes, for the most part, our software does not use the heap that much.

Instead, we support large amounts of data storage in virtual memory, which can be larger than physical memory. We find that most clients have heap sizes well below the 32 gigabyte limit of practicality for either the parallel collector or CMS. Another thing is, we tend to focus on very low garbage rates. So ideally, our servers run at less than one GC a day, either minor or major. At that point, your choice of collector becomes less important. In fact, we tend to favour the parallel collector, because CMS tends to start collections in the background when it thinks it is a good idea to do so; we would rather push collection off as long as possible, and do it only in a very controlled manner. Because of the way we use Java, we do not benefit from some of the improvements to the GC, because we avoid using it where possible. To some degree, we are driven by our clients, many of whom are not looking to move to a newer version of Java in the short term. Obviously, once they do (we already support Java 11), we will support Java's next long-term release, and we may consider an interim release before that.

The decision will be focused more on the commercial aspects than the technical ones. The technical benefits may encourage our clients to move forward with versions so they can make use of them. But to some degree, the decision is not ours to make; it is about what will appeal to our user base.

Amount of Memory: Typically, systems in banks generally have massive amounts of memory. Am I correct in thinking that, when using Chronicle, it is almost the reverse of that: you should preferably give the box as much memory as it needs, but give each process just what it needs?

That is correct. Because Chronicle tries to work in virtual memory as much as possible, giving the machine more memory actually improves performance. For example, if you are on a 64 gigabyte box, and you have a choice between a 32 gigabyte heap and a four gigabyte heap, you might find that the four gigabyte heap actually performs better because you have released more memory to be used off-heap. It may be slightly counterintuitive that a smaller heap will perform better, but it is because you are allocating memory to where it is needed. Obviously, not every application that uses our software will take this approach. At some point you need to integrate with things like web servers and other environments which were not necessarily designed with the same principles we use. So it is not necessarily “one size fits all” in that regard. But certainly, for the core trading systems, we tend to encourage heaps which are smaller, but still large enough not to cause a problem, releasing more memory to the rest of the system.

That being said, servers are getting staggeringly large amounts of memory these days for not an awful lot of money. So you may find that you do not have to trade off as much as you might think, because quarter-terabyte and half-terabyte machines are not as uncommon as they used to be.
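As a hypothetical launch line illustrating this sizing approach on a 64 gigabyte box (the flag values and jar name are assumptions, not a recommendation):

    # keep the heap small and leave the rest of the machine to the OS page
    # cache and memory-mapped Chronicle files
    java -Xms4g -Xmx4g -XX:MaxDirectMemorySize=8g -jar trading-system.jar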

Use of Queues: We are doing a lot of benchmark tests using a message in and a message out of a queue. In your own designs, between the FIX-in and FIX-out gateways, do you try to limit how often you hop in and out of the queue? Do you limit the amount that you write to the queue?

To give a little context for people who do not know the Chronicle model quite as well: what we encourage is microservices where there are one or more queues as input, producing another queue as output. This means that the core of your system runs very efficiently, entirely through memory, persisted and reproducible. This is ideal, but at some point you need to talk to the outside world, and that might be via the web or via a FIX connection. We then tend to encourage that those gateways be very thin layers with minimal logic in them, so they can be kept running independently of restarting the business logic, for example. Because they do not have much logic in them, they can be very robust and able to keep their connections up and running all the time; this is the main principle.

Our clients have observed that sometimes, when you break out lots of microservices, you might find that two or more microservices are actually much more tightly coupled than is desirable, and you do not want every message between them to be passed via a queue. You may have designed it that way to make it all nicely decoupled, but then find that the interaction between two microservices is just a bit too intense, and that adding a queue is actually adding latency. In that scenario, it is not too hard to rewire it, possibly by changing the microservices themselves, or without even doing that, by just writing a component that creates those two microservices and wires them together individually. That removes the need for a queue between them; essentially, those microservices are interacting in the same thread.

This has happened from time to time. Certainly on the last big project we did, it happened a couple of times, whereas in previous projects it had never happened. The nice thing is that if you design the microservices up front, and it turns out that for whatever reason (performance, or just the interaction between them) it does not make sense to separate them, it is not a major redesign. It does not touch any other component in the system, any other service, and in fact you may not even need to recode the microservices or any of the tests for them. Instead, you can just write a very simple component that creates both microservices in the same context, as in the sketch below.
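As a minimal sketch of such a combining component (the interfaces and service names here are hypothetical, not Chronicle APIs), two services that would normally communicate via a queue are wired together directly and so interact in the same thread:

    // Hypothetical message interfaces for two microservices.
    interface OrderListener { void onOrder(String order); }
    interface PriceListener { void onPrice(String price); }

    // A service that consumes orders and produces prices.
    class Pricer implements OrderListener {
        private final PriceListener out;
        Pricer(PriceListener out) { this.out = out; }
        public void onOrder(String order) { out.onPrice(order + " priced"); }
    }

    // A downstream service that consumes prices.
    class RiskChecker implements PriceListener {
        public void onPrice(String price) { System.out.println("risk checked: " + price); }
    }

    public class CombinedComponent {
        public static void main(String[] args) {
            // Wire the Pricer's output directly into the RiskChecker instead of
            // writing to a queue that the RiskChecker tails; both services now
            // run in the caller's thread, with no queue hop between them.
            OrderListener entry = new Pricer(new RiskChecker());
            entry.onOrder("EURUSD buy 1M");
        }
    }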

In some ways, this is something you should be in a position to investigate once you see the system running, and then decide how to deploy these microservices and whether it makes sense to put them in separate threads or separate processes.

One of the common things that we do is this: while we talk about all these microservices that can run in their own processes, that is not always the easiest setup to debug or test, because starting them all up and shutting them all down again can be quite tedious if you are doing it a lot. So we often create an “all” component, sketched below, that just starts up a thread for each service, still communicating via queues. It runs up every service of interest simultaneously in a single process, which allows you to start them all up, shut them all down, and debug them, in a way that causes all the other threads to stop when you are at a breakpoint, making it easier to work with. You get the benefits of working with a monolith for testing while still getting the full deployability of decoupled microservices.
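A minimal sketch of such an “all” component, assuming each service is already written as a Runnable (the stand-in services here are hypothetical):

    public class AllServices {
        public static void main(String[] args) throws InterruptedException {
            // Each service would normally run in its own process; here we start
            // one thread per service inside a single JVM, with the services
            // still communicating via their usual queues.
            Runnable[] services = {
                () -> System.out.println("fix gateway running"),
                () -> System.out.println("pricer running"),
                () -> System.out.println("risk checker running"),
            };
            Thread[] threads = new Thread[services.length];
            for (int i = 0; i < services.length; i++) {
                threads[i] = new Thread(services[i], "service-" + i);
                threads[i].start();
            }
            for (Thread t : threads)
                t.join(); // in a real test harness you would signal shutdown instead
        }
    }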

Question about Chronicle Map and best usage: Chronicle Map seems a little limited, in that you need to know how large a map is going to be up front. This is obviously very good for replaying things, but do you have any best practices on how to go about sizing maps, and what kind of use cases it is best suited to?

One of the things that Chronicle Map expects you to do is to provide the size of an entry (possibly the key and the value individually) and the number of entries that you are likely to have. Chronicle Map version 2 was a bit pedantic about that, and you really had to stick to those numbers. However, Chronicle Map version 3 does have resizing: the entries themselves can be bigger than what you said, and you can have a lot more entries than you said as well.

The downside is that if you grow beyond what you first indicated, it may not perform as well; it essentially optimises for the numbers that you specified, but they do not have to be exactly as specified. One of our clients uses the same figures for the initial size and the size of the entries regardless of what they use the map for.

As a rule of thumb, if you are willing to waste a little bit of space (in the range of kilobytes, which these days people really should not be too worried about), I would make the entry size and number of entries larger than they need to be. If they are later exceeded, it does not matter; you may get a slight performance hit at the time the growth occurs. Obviously, if you get these numbers right from the start it will not need to resize. The entire map is not resized at once; it is done gradually in a sort of tree structure. So the hit is not that big, but it is measurable and it will show up in a benchmark test, for example.

Generally speaking, if you make the entries a little bit bigger than you think you probably need, make the number of entries a bit bigger than you probably need, then you are likely to be fine.

Chronicle starts allocating space in chunks of a quarter of the specified size. If, for example, you pick an entry size of 128 bytes, Chronicle allocates in multiples of 32 bytes. Therefore, even if your entries are always about half the size you specified, you are probably not wasting much space. If they are a tenth of the size you specified, then you are probably wasting a bit of space, but it is still going to work; you will just end up wasting a little memory, depending on the number of entries of course.
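A minimal sketch of sizing a map with some headroom, assuming Chronicle Map 3's builder API (the figures and file name are illustrative):

    import net.openhft.chronicle.map.ChronicleMap;

    import java.io.File;
    import java.io.IOException;

    public class SizedMapExample {
        public static void main(String[] args) throws IOException {
            try (ChronicleMap<String, String> map = ChronicleMap
                    .of(String.class, String.class)
                    .name("orders")
                    .averageKeySize(32)      // a bit bigger than we expect
                    .averageValueSize(256)   // likewise, leaving room to grow
                    .entries(1_000_000)      // more entries than we expect
                    .createPersistedTo(new File("orders.dat"))) {
                map.put("order-1", "EURUSD buy 1M");
                System.out.println(map.get("order-1"));
            }
        }
    }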

Accessing Maps: When it comes to accessing the map in the application, does it perform comparably to, say, a ConcurrentHashMap or the standard Java collections, or is it best used as a kind of persistent repository for when you restart the system?

Compared to, say, a ConcurrentHashMap, there is a significant cost difference. The main difference is the cost of serialisation and deserialisation; the general cost compared to a ConcurrentHashMap is higher by a factor of a few times. However, when it comes to things like GCs, it has almost no impact, because almost the entire map is off-heap; what little lands on the heap goes into tenured space and just stays there.

So there is a trade-off to be had: you serialise the objects every time, but you do not need to worry so much about GCs. One way to mitigate the serialisation cost is to use “value” objects. The value objects use flyweights, which means that if you are only going to access one field in an object, then that is the only thing that effectively gets deserialised. It is not as easy to work with, but it can be more efficient depending on your access pattern.

There are a number of things to consider; for example, using the value objects as opposed to using Marshallable objects.
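A minimal sketch of the flyweight idea, assuming the Chronicle Values library generates the implementation (the MarketData interface here is hypothetical):

    import net.openhft.chronicle.values.Values;

    public class ValueExample {
        // Hypothetical value interface; Chronicle Values generates a flyweight
        // implementation backed by raw bytes, so reading one field does not
        // deserialise the whole object.
        public interface MarketData {
            long getTimestamp();  void setTimestamp(long timestamp);
            double getBid();      void setBid(double bid);
            double getAsk();      void setAsk(double ask);
        }

        public static void main(String[] args) {
            MarketData md = Values.newHeapInstance(MarketData.class);
            md.setTimestamp(System.currentTimeMillis());
            md.setBid(1.0923);
            md.setAsk(1.0925);
            // only the bid field is read; with a native (off-heap) instance
            // this would touch just those bytes
            System.out.println("bid: " + md.getBid());
        }
    }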

OS configuration tuning: In terms of the OS configuration, are there any obvious things that need to be tuned to make this run better; for example, open files?

I would say that if you are hitting a problem with open files, that is a red flag, as you should not really have so many open files. For example, if you have 100 queues you should have in the order of a couple of hundred open files.

Each map uses one file per process that opens it, so regardless of its size it really should not be putting pressure on your open files limit.

Where we sometimes see problems with open files, it is usually something to do with tuning sockets, and that can cause a problem indirectly, because there still need to be free file handles when you use Chronicle Queue and Chronicle Map. If your socket connectivity is configured appropriately, it should not be a problem.

The main things that we tend to tune are isolating CPUs and turning off the Spectre/Meltdown patches where appropriate.

The Spectre/Meltdown patches can add a significant (15 to 30%) overhead to your service depending on what you are doing, particularly if you make a lot of system calls, because system calls have to switch contexts, and the patches force a much more significant flushing of the caches to prevent code from seeing things it should not just because they are still in the cache.

Isolate CPUs and use the isolated CPUs for latency-sensitive tasks, for example when trying to reduce the 99th percentile in particular. Also make sure that if you are using a TCP offload card, you are running on the CPU socket that the card is actually bound to; if you are using it from a different socket, you can get unnecessary noise in your latencies.
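A minimal sketch of pinning a latency-sensitive thread, assuming OpenHFT's Java-Thread-Affinity library is on the classpath and that some CPUs have already been isolated at the OS level:

    import net.openhft.affinity.AffinityLock;

    public class PinnedEventLoop {
        public static void main(String[] args) {
            // Acquire a reserved CPU (typically one isolated from the OS
            // scheduler) and run the hot loop there.
            AffinityLock lock = AffinityLock.acquireLock();
            try {
                System.out.println("event loop pinned to cpu " + lock.cpuId());
                // ... run the latency-sensitive work here ...
            } finally {
                lock.release();
            }
        }
    }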

Mac OS has some poor latency properties; for example, loopback takes 30 microseconds round trip while on Windows and Linux it takes less than 10 microseconds. I would not encourage even Windows for low latency production, but for development I use Windows.

Chronicle or third-party tooling: Do you have any good tooling, or know of any tooling, for diving into a queue and seeing its contents? Although you can do it from the command line, and we have built some tooling ourselves, are there any Chronicle products or third-party tools available?

We are investigating third-party tooling. In an upcoming webinar that we are looking to do with a company called 3Forge, there is tooling for visualisation of latency data coming out of Chronicle queues. It is something we have only discussed this week, so it is a little way off, but we will start to look at third-party visualisation tools for the Chronicle FIX message data in a Chronicle queue.
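In the meantime, a minimal sketch of inspecting a queue's contents programmatically (the queue directory is an assumption, and messages are assumed to have been written as text):

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptTailer;

    public class QueuePeek {
        public static void main(String[] args) {
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("my-queue-dir").build()) {
                ExcerptTailer tailer = queue.createTailer();
                // read each excerpt as text until the end of the queue
                String msg;
                while ((msg = tailer.readText()) != null)
                    System.out.println(msg);
            }
        }
    }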

Other Chronicle tools: Aside from the double-latency benchmark and the micro benchmark, are there any other internal tools that you are using?

There is a micro-jitter sampler, which is one of my favourites. We use it to determine what sort of noise there is on the operating system before you even run anything on it.
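The idea can be sketched in a few lines (this is an illustrative reimplementation, not the Chronicle tool itself): spin reading the clock and report any gap between consecutive reads, which on an otherwise idle machine reveals OS-induced jitter.

    public class MicroJitterSampler {
        public static void main(String[] args) {
            final long thresholdNs = 10_000;                       // report pauses over 10 microseconds
            final long end = System.nanoTime() + 10_000_000_000L;  // sample for 10 seconds
            long last = System.nanoTime();
            while (last < end) {
                long now = System.nanoTime();
                if (now - last > thresholdNs)
                    System.out.printf("jitter of %,d ns%n", now - last);
                last = now;
            }
        }
    }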

Top three latency issues: What are the top three things that you see when your clients have latency issues with their queues?

Virtual or real machine

The client is using a virtual machine when they really should be using a bare-metal machine if they want reproducible benchmarks. We have one client where the company had mandated that even their trading system should be on virtual machines; the deployed trading system's latency was 12 times higher than it should have been on a bare-metal machine.

File system and disk subsystem

The next one is how your disk subsystem is working and which file system you have used. We have found that EXT4 can offer better performance than XFS, but not always. The kind of disk subsystem being used also matters; a SAN, for example, may not perform as well. We do have some commercial options that will help with this analysis. If you are seeing disk-based latencies, one of the ways to test it is to compare against tmpfs: if you run the same benchmark using tmpfs first and you get strikingly better results than on disk, then either there is something wrong with your disk subsystem or there is something you can tune to improve it. This is not set in stone: tmpfs typically gets better latencies (not surprisingly, since it is not writing to disk), but it seems to allocate pages much more lazily than real disk subsystems, so you can see worse outliers with tmpfs than you do with real disks. Real spinning disks can have a better outlier profile than tmpfs.
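A minimal sketch of that comparison, timing appends to a queue on a disk-backed path versus one under /dev/shm (the paths and message count are assumptions):

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;

    public class DiskVsTmpfs {
        public static void main(String[] args) {
            for (String path : new String[]{"/data/bench-queue", "/dev/shm/bench-queue"}) {
                try (ChronicleQueue queue = ChronicleQueue.singleBuilder(path).build()) {
                    ExcerptAppender appender = queue.acquireAppender();
                    int count = 100_000;
                    long start = System.nanoTime();
                    for (int i = 0; i < count; i++)
                        appender.writeText("hello world " + i); // one small message per excerpt
                    System.out.printf("%s: ~%,d ns/write%n", path, (System.nanoTime() - start) / count);
                }
            }
        }
    }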

Use of queues

Be sensible about how you use the queues. One of the things someone evaluating Chronicle was trying to do (I am not sure what the logic was) was to make writes asynchronous. Their idea was to write each message to the queue in a new thread, so they created a new thread for every message; the thread would write the message to the queue and then die. Obviously, not only is firing off all these threads expensive, but Chronicle Queue performs better if it is accessed from a single thread. All round, that is worse than the simple and usual way of doing it.

Use the queues sensibly and avoid creating lots of objects when you are writing to them. We recommend the use of Flight Recorder, and have customers send us recordings to analyse the performance. One of my tips for using Flight Recorder is that, if you are looking at a system that is not actually producing a lot of garbage, Flight Recorder only records when an object triggers a new thread-local allocation buffer (TLAB), and that might be every two megabytes, so you can get very sparse and noisy data which is not very useful. The simplest thing to do in this case is to tune down the TLAB size, which obviously has a small impact on performance, but gives you much better accounting. I would suggest making your TLAB about 64K, and in extreme cases I would suggest you just turn it off. When you turn off the TLAB you actually get accounting of every object, which obviously has an overhead, but it gives you all the detail that you could possibly want in terms of object allocation.
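As a hypothetical launch line under that advice (the HotSpot flags are standard, but the values and jar name are illustrative):

    # shrink the TLAB to 64K for finer-grained allocation profiling
    java -XX:TLABSize=64k -XX:-ResizeTLAB -jar trading-app.jar

    # or, in extreme cases, turn the TLAB off entirely
    java -XX:-UseTLAB -jar trading-app.jar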

Multiple queue issues: We use eight queues, including a core critical-path queue that we constantly write back to. Is there any kind of negative impact of doing this?

If you are writing back to a queue that you are reading from, there are probably other writers, in which case there can be some contention. Chronicle will do a buffered write instead of just waiting for the queue to be available: it serialises the object into a buffer concurrently, and as soon as it has finished writing it just copies the bytes in. This makes the cost of serialising the object concurrent, even though the locking of the queue and the copying is not.

Chronicle does have steps for mitigating the overhead costs of concurrent access, particularly if you are serialising expensive objects.
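A sketch of enabling that behaviour, assuming the double-buffering option available in recent Chronicle Queue builders (the option name and queue path are assumptions; check the builder for your version):

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

    public class ContendedWriter {
        public static void main(String[] args) {
            // doubleBuffer(true) asks the queue to serialise into a side buffer
            // while the write lock is contended, then copy the bytes in once
            // the queue becomes free.
            try (ChronicleQueue queue = SingleChronicleQueueBuilder
                    .binary("critical-path-queue")
                    .doubleBuffer(true)
                    .build()) {
                queue.acquireAppender().writeText("response message");
            }
        }
    }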
