Avoid Multiple Cache Refreshes: The Double Check Approach
In previous articles, we've stressed the importance of caching to enhance the performance of our applications. This time, we're discussing a small yet potent tip to further amplify the benefits derived from caching.
A standard caching routine often looks like this:
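A minimal sketch of that routine, using an in-memory ConcurrentDictionary to stand in for whatever cache you actually use; the class name, GetOrCreate, and the generate delegate are illustrative, not a specific library API:

```csharp
using System;
using System.Collections.Concurrent;

// The "default" cache-aside pattern: check the cache, and on a miss
// generate the value and store it for future requests.
public class NaiveCache
{
    // A simple in-memory dictionary standing in for your real cache.
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public string GetOrCreate(string key, Func<string> generate)
    {
        // 1. Try the cache first.
        if (_cache.TryGetValue(key, out var value))
            return value;

        // 2. Cache miss: produce the value (potentially expensive)...
        value = generate();

        // 3. ...and store it for future requests (the SetValue step).
        _cache[key] = value;
        return value;
    }
}
```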
This code is 'functional' and can be regarded as the 'default' approach to caching. Here, we're fetching a value from the cache, and if it's missing, we generate it and store it for future requests. However, a problem arises when we deal with a high-traffic application, such as a .NET Core web application or API, which must handle many concurrent requests.
Suppose multiple requests reach this code simultaneously, each finding the cache empty. Each one will then generate the same value independently, so you end up with multiple refreshes of the same value and several redundant calls to SetValue.
To prevent this, we can employ a mutual-exclusion (mutex) lock to restrict multiple threads from accessing the same code block. Here's how it's done:
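A sketch of the same routine with a lock and a second cache check added; as before, the class and method names are illustrative, and a ConcurrentDictionary stands in for the cache:

```csharp
using System;
using System.Collections.Concurrent;

// Cache-aside with double-check locking: only one thread generates the
// value; the others wait on the lock and then find it in the cache.
public class DoubleCheckedCache
{
    private readonly ConcurrentDictionary<string, string> _cache = new();
    private readonly object _mutex = new();

    public string GetOrCreate(string key, Func<string> generate)
    {
        // First check: the fast path, no locking on a cache hit.
        if (_cache.TryGetValue(key, out var value))
            return value;

        lock (_mutex)
        {
            // Second check: a thread that was waiting here may find the
            // value already produced by the thread that held the lock.
            if (_cache.TryGetValue(key, out value))
                return value;

            value = generate();
            _cache[key] = value;
            return value;
        }
    }
}
```

Note that this sketch uses a single mutex for the whole cache, which serializes misses on *different* keys too; in a larger application a per-key lock is a common refinement, at the cost of managing one lock object per key.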
While the example above uses the keyword lock for simplicity, you can also directly use the Monitor class or SemaphoreSlim. The latter can be particularly useful when dealing with async code.
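Since you cannot `await` inside a `lock` statement, an async version of the same double-check pattern can be sketched with SemaphoreSlim; again, the names here are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// The same double-check pattern for async code: SemaphoreSlim(1, 1)
// acts as the mutex, and WaitAsync can be awaited, unlike `lock`.
public class AsyncDoubleCheckedCache
{
    private readonly ConcurrentDictionary<string, string> _cache = new();
    private readonly SemaphoreSlim _mutex = new(1, 1);

    public async Task<string> GetOrCreateAsync(string key, Func<Task<string>> generate)
    {
        // First check: no locking on a cache hit.
        if (_cache.TryGetValue(key, out var value))
            return value;

        await _mutex.WaitAsync();
        try
        {
            // Second check after acquiring the semaphore.
            if (_cache.TryGetValue(key, out value))
                return value;

            value = await generate();
            _cache[key] = value;
            return value;
        }
        finally
        {
            // Always release, even if the generator throws.
            _mutex.Release();
        }
    }
}
```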
In the revised code block, the first thread that detects a cache miss enters the mutual-exclusion block. While it is inside, no other thread can enter. The second cache check, placed immediately after acquiring the lock, ensures that the threads that were waiting for their turn re-verify the cache before deciding to produce the value; by then, the first thread has usually populated it already.
This technique presents an extra benefit: it shortens recovery time when the application reboots or crashes under high traffic. A common problem without the double check is that if your application is restarted during a high-traffic scenario, every single request/thread will attempt to refresh the cache values simultaneously, which often leads to another crash. The double check provides an added layer of protection against this. In my experience, this strategy can significantly improve bandwidth utilization, which in turn boosts performance and reduces costs.
This minor adjustment, therefore, not only prevents unnecessary cache refreshes but also improves the efficiency, resilience, and cost-effectiveness of your application. Remember: sometimes little tweaks can lead to significant enhancements!