Cloudflare’s analytics products help customers answer questions about their traffic by analyzing the mind-boggling, ever-increasing number of events (HTTP requests, Workers requests, Spectrum events) logged by Cloudflare products every day. The most useful answer to one of these questions depends on the scale and perspective from which it is asked, and we’ve found a way to exploit this fact to improve the quality and responsiveness of our analytics.
Consider the following questions and answers:
What is the length of the coastline of Great Britain? 12.4K km
What is the total world population? 7.8B
How many stars are in the Milky Way? 250B
What is the total volume of the Antarctic ice shelf? 25.4M km³
What is the worldwide production of lentils? 6.3M tonnes
How many HTTP requests hit my site in the last week? 22.6M
Useful answers do not benefit from being overly exact. For large quantities, knowing the correct order of magnitude and a few significant digits gives the most useful answer. At Cloudflare, the difference in traffic between different sites or when a single site is under attack can cross nine orders of magnitude and, in general, all our traffic follows a Pareto distribution, meaning that what’s appropriate for one site or one moment in time might not work for another.
Because of this distribution, a query that scans a few hundred records for one customer will need to scan billions for another. A report that needs to load a handful of rows under normal operation might need to load millions when a site is under attack.
To get a sense of the relative difference of each of these numbers, remember “Powers of Ten”, an amazing visualization that Ray and Charles Eames produced in 1977. Notice that the scale of an image determines what resolution is practical for recording and displaying it.
Using ABR to determine resolution
This basic fact informed our design and implementation of ABR for Cloudflare analytics. ABR stands for “Adaptive Bit Rate”. We borrowed the term from video streaming, as used by Cloudflare’s own Stream Delivery, where the server selects the best resolution for a video stream to match your client and network connection.
In our case, every analytics query that supports ABR will be calculated at a resolution matching the query. For example, if you’re interested to know from which country the most firewall events were generated in the past week, the system might opt to use a lower resolution version of the firewall data than if you had opted to look at the last hour. The lower resolution version will provide the same answer but take less time and fewer resources. By using multiple, different resolutions of the same data, our analytics can provide consistent response times and a better user experience.
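To make the idea concrete, here is a minimal sketch of resolution selection. The table names, sampling ratios, and row budget are all illustrative, not Cloudflare’s actual schema: given an estimate of how many events fall in the queried time range, pick the most detailed sample table whose scan stays under a fixed budget.

```python
# Illustrative ABR table selection (names and numbers are made up).
# Sample tables keyed by sampling ratio: 1.0 keeps 100% of events,
# 0.1 keeps 10%, and so on.
SAMPLE_TABLES = {
    1.0: "requests_sample_100",
    0.1: "requests_sample_10",
    0.01: "requests_sample_1",
}

ROW_BUDGET = 10_000_000  # max rows we are willing to scan per query

def pick_table(estimated_events: int) -> tuple[float, str]:
    """Return (ratio, table) for the highest resolution within budget."""
    for ratio in sorted(SAMPLE_TABLES, reverse=True):  # try full data first
        if estimated_events * ratio <= ROW_BUDGET:
            return ratio, SAMPLE_TABLES[ratio]
    lowest = min(SAMPLE_TABLES)
    return lowest, SAMPLE_TABLES[lowest]
```

Under this scheme, a query over a quiet week runs against the full table, while the same query for a site under attack transparently drops to a sampled table, with counts scaled back up by the inverse of the ratio.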
You might be aware that we use a columnar store called ClickHouse to store and process our analytics data. When using ABR with ClickHouse, we write the same data at multiple resolutions into separate tables. Usually, we cover seven orders of magnitude – from 100% to 0.0001% of the original events. We wind up using an additional 12% of disk storage but enable very fast ad hoc queries on the reduced resolution tables.
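The storage cost of the extra tables follows from a geometric series: if each reduced-resolution table keeps one tenth of the rows of the one above it, the six extra resolutions together add just over 11% in rows, consistent with the roughly 12% figure once per-table overhead is included.

```python
# Back-of-the-envelope check on the storage overhead of the extra tables.
# Each reduced-resolution table keeps one tenth of the rows of the one above it.
ratios = [10 ** -k for k in range(1, 7)]   # 10%, 1%, 0.1%, ..., 0.0001%
overhead = sum(ratios)
print(f"{overhead:.2%}")  # 11.11% extra rows on top of the full-resolution table
```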
Aggregations and Rollups
The ABR technique facilitates aggregations by making compact estimates of every dimension. Another way to achieve the same ends is with a system that computes “rollups”. Rollups save space by computing either complete or partial aggregations of the data as it arrives.
For example, suppose we wanted to count a total number of lentils. (Lentils are legumes and among the oldest and most widely cultivated crops. They are a staple food in many parts of the world.) We could just count each lentil as it passed through the processing system. Of course, because there are a lot of lentils, that system is distributed – meaning that there are hundreds of separate machines. Therefore we’ll actually have hundreds of separate counters.
Also, we’ll want to include more information than just the count, so we’ll also include the weight of each lentil and maybe 10 or 20 other attributes. And of course, we don’t want just a total for each attribute, but we’ll want to be able to break it down by color, origin, distributor and many other things, and also we’ll want to break these down by slices of time.
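A rollup of this kind amounts to a map from the aggregation dimensions to partial aggregates. The dimensions and field names below are illustrative:

```python
from collections import defaultdict

# Illustrative per-minute rollup: pre-aggregating a count and a total weight,
# broken down by a couple of dimensions (color, origin).
rollup = defaultdict(lambda: [0, 0.0])  # (minute, color, origin) -> [count, weight_sum]

def ingest(minute: int, color: str, origin: str, weight_g: float) -> None:
    agg = rollup[(minute, color, origin)]
    agg[0] += 1
    agg[1] += weight_g

ingest(0, "green", "CA", 0.05)
ingest(0, "green", "CA", 0.06)
ingest(0, "red", "TR", 0.04)
# rollup[(0, "green", "CA")] now holds a count of 2 and a weight of ~0.11 g
```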
In the end, we’ll have tens of thousands or possibly millions of aggregations to be tabulated and saved every minute. These aggregations are expensive to compute, especially when using aggregations more complicated than simple counters and sums. They also destroy some information. For example, once we’ve processed all the lentils through the rollups, we can’t say for sure that we’ve counted them all, and most importantly, whichever attributes we neglected to aggregate are unavailable.
The number we’re counting, 6.3M tonnes, has only two significant digits, a precision that can easily be achieved by counting a sample. Most of the rollup computations performed on each individual lentil (on the order of 10¹³ lentils to account for 6.3M tonnes) are wasted.
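Sampling gets the same two significant digits for a tiny fraction of the work. A minimal sketch, with an arbitrary sample rate and total:

```python
import random

# Estimate a total by counting only a p-fraction of items and scaling by 1/p.
def sampled_count(n_items: int, p: float = 0.01, seed: int = 42) -> int:
    rng = random.Random(seed)
    kept = sum(1 for _ in range(n_items) if rng.random() < p)
    return round(kept / p)

true_total = 1_000_000
estimate = sampled_count(true_total)
# With p = 1%, the relative error is about 1/sqrt(n * p), i.e. ~1% here,
# which is more than enough for a two-significant-digit answer.
```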
Other forms of aggregations
So far, we’ve discussed ABR and its application to aggregations, but we’ve only given examples involving “counts” and “sums”. There are other, more complex forms of aggregations we use quite heavily. Two examples are “topK” and “count-distinct”.
A “topK” aggregation attempts to show the K most frequent items in a set. For example, the most frequent IP address, or country. To compute topK, just count the frequency of each item in the set and return the K items with the highest frequencies. Under ABR, we compute topK based on the set found in the matching resolution sample. Using a sample makes this computation a lot faster and less complex, but there are problems.
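The sampled approach can be sketched in a few lines. The stream and sample rate here are made up; a production implementation would use a dedicated sketch data structure rather than materializing the sample:

```python
import random
from collections import Counter

# topK computed on a uniform sample instead of the full set.
def top_k(items, k: int, sample_rate: float = 0.01, seed: int = 0) -> list:
    rng = random.Random(seed)
    sample = [x for x in items if rng.random() < sample_rate]
    return [item for item, _ in Counter(sample).most_common(k)]

# A skewed stream of countries, as typical web traffic would be:
stream = ["US"] * 50_000 + ["DE"] * 20_000 + ["FR"] * 10_000 + ["XX"] * 100
top2 = top_k(stream, 2)
# The heavy hitters dominate the sample just as they dominate the full set,
# while the rare "XX" seldom even appears in it.
```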
The estimate of topK derived from a sample is biased and depends on the distribution of the underlying data: it can overestimate the significance of elements relative to their frequency in the full set. In practice, this effect is only noticeable when the cardinality of the set is very high, and you won’t see it on a Cloudflare dashboard. If your site has a lot of traffic and you’re looking at the top K URLs or browser types, there will be no visible difference between resolutions. Also keep in mind that as long as we’re estimating the proportion of an element in the set and the set is large, the results will be quite accurate.
The other fascinating aggregation we support is known as “count-distinct”, or number of uniques. In this case we want to know the number of unique values in a set. For example, how many unique cache keys have been used. A uniform random sample of the events cannot be used to estimate this number: heavily repeated values are almost certain to appear in the sample while rare ones are not, and no simple scaling corrects for that. However, we do have a solution.
We can generate another, alternate sample based on the value in question. For example, instead of taking a random sample of all requests, we take a random sample of IP addresses. This is sometimes called distinct reservoir sampling, and it allows us to estimate the true number of distinct IPs based on the cardinality of the sampled set. Again, there are techniques available to improve these estimates, and we’ll be implementing some of those.
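Here is a minimal sketch of the idea, using a hash of the value as the sampling decision so that every occurrence of a value is either always sampled or never sampled. The names and rates are illustrative:

```python
import hashlib

# Distinct (hash-based) sampling: a value is kept iff its hash falls in the
# lowest p-fraction of the hash space, so repeated occurrences of the same
# value always make the same sampling decision.
def hash_frac(value: str) -> float:
    digest = hashlib.sha256(value.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def estimate_distinct(values, p: float = 0.01) -> int:
    sampled = {v for v in values if hash_frac(v) < p}
    return round(len(sampled) / p)

# 100,000 distinct IPs, each seen three times:
ips = [f"10.{i >> 16}.{(i >> 8) & 255}.{i & 255}" for i in range(100_000)] * 3
estimate = estimate_distinct(ips)
# estimate lands close to 100,000; the error is roughly
# 1/sqrt(distinct_count * p), about 3% here.
```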
ABR improves resilience and scalability
Using ABR saves us resources. Even better, it allows us to query all the attributes in the original data, not just those included in rollups. And because the original events are preserved, we can compare results across the different sample intervals stored in separate tables to verify that the system is working correctly.
However, the greatest benefits of employing ABR are the ones that aren’t directly visible. Even under ideal conditions, a large distributed system such as Cloudflare’s data pipeline is subject to high tail latency. This occurs when any single part of the system takes longer than usual for any number of a long list of reasons. In these cases, the ABR system will adapt to provide the best results available at that moment in time.
For example, compare this chart showing Cache Performance for a site under attack with the same chart generated a moment later while we simulate a failure of some of the servers in our cluster. In the days before ABR, your Cloudflare dashboard would fail to load in this scenario. Now, with ABR analytics, you won’t see significant degradation.
Stretching the analogy to ABR in video streaming, we want you to be able to enjoy your analytics dashboard without being bothered by issues related to faulty servers, or network latency, or long running queries. With ABR you can get appropriate answers to your questions reliably and within a predictable amount of time.
In the coming months, we’re going to be releasing a variety of new dashboards and analytics products based on this simple but profound technology. Watch your Cloudflare dashboard for increasingly useful and interactive analytics.