Handling simultaneous duplicate expensive read-only HTTP requests



We have a read-only REST endpoint that performs a somewhat “expensive” but transient request. Without the client needing to poll, we need a mechanism for the server to avoid unnecessary processing whenever “similar” but not-yet-cached requests come in at the same time. What existing technologies & designs can robustly handle this use case?

Example

[GET] /route?start=<latlng>&finish=<latlng>&api_key=<uuid>


Let’s say the service above routes users between latitude/longitude locations based on current conditions. We want to reduce server overhead during request spikes, because the service internally pulls dynamic traffic data from external APIs ($$ per call) & then calculates a custom route (with some tolerable, but not ideal, latency).

So, in addition to relying on short time-to-live server-side caching to return recently processed requests, and on appropriate cache-control headers to (hopefully) reduce client re-requests, it seems we still need some custom caching logic to handle:

  • de-duplicating “similar” request query parameters, say by excluding api_key & geo-hashing locations (a key-normalization sketch follows this list);
  • suspending responses to HTTP requests while similar ones are in progress.
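
A minimal sketch of what such a normalized cache key could look like, in Python. The function name, the 3-decimal rounding (roughly a 100 m grid, standing in for a real geohash), and the parameter shapes are all assumptions to adapt:

    from urllib.parse import urlencode

    def route_cache_key(start: tuple[float, float], finish: tuple[float, float],
                        precision: int = 3) -> str:
        """Build a cache key that treats "similar" route requests as identical.

        api_key is deliberately excluded, and coordinates are snapped to a
        coarse grid (a simple stand-in for real geohashing) so that nearby
        start/finish points map to the same key.
        """
        def snap(latlng: tuple[float, float]) -> str:
            lat, lng = latlng
            return f"{round(lat, precision)},{round(lng, precision)}"

        return "route:" + urlencode({"start": snap(start), "finish": snap(finish)})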



You can move the caching/polling logic into the server, for example by writing some middleware for this. A key insight is that a cache entry can be not just present or absent, but also pending.

An example algorithm might be (a code sketch follows the list):

  • determine a cache key for the request
  • look up the cache entry
    • if absent: continue
    • if pending: someone else is preparing the response, so wait until ready
    • if ready: respond with the cached content
  • atomically update the cache entry from “absent” to “pending”
  • calculate the response
  • update the cache entry from “pending” to “ready”, letting the waiting requests serve their responses
  • serve the response
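
Here is a minimal in-process sketch of that algorithm in Python, using a lock plus an Event as the “pending” marker. The function name is hypothetical, and error handling, timeouts, and cache expiry are deliberately omitted:

    import threading
    from typing import Any, Callable

    _cache: dict[str, Any] = {}      # key -> ("pending", Event) or ("ready", response)
    _cache_lock = threading.Lock()

    def get_or_compute(key: str, compute: Callable[[], Any]) -> Any:
        """Single-flight lookup: the first caller computes, the rest wait."""
        with _cache_lock:
            entry = _cache.get(key)
            if entry is None:
                # Absent -> atomically mark as pending; we become the producer.
                done = threading.Event()
                _cache[key] = ("pending", done)
            elif entry[0] == "pending":
                done = entry[1]
            else:                                # "ready"
                return entry[1]

        if entry is not None:
            # Someone else is preparing the response: wait until it is ready.
            done.wait()
            with _cache_lock:
                return _cache[key][1]

        # We are the producer: calculate, publish, then wake the waiters.
        response = compute()
        with _cache_lock:
            _cache[key] = ("ready", response)
        done.set()
        return response

Waiters here block on the Event; a polling variant would sleep and re-check the cache entry instead.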

There are lots of variations of this. For example, the waiting requests might poll the shared data, or they might be parked and then unblocked or notified once the result is ready. The “pending” information might be integrated into your normal server-side caching infrastructure (e.g. Redis), or it might use a separate data source. The “pending” state can sometimes be represented by acquiring an exclusive lock on a database entry. A key distinction is whether the progress information is kept in-process or externally.
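For the external variant, a sketch along the same lines using Redis, where SET ... NX serves as the atomic absent-to-pending transition. The sentinel value, the TTLs, and the choice to poll (rather than use a blocking primitive) are assumptions:

    import json
    import time

    import redis  # redis-py

    r = redis.Redis(decode_responses=True)
    PENDING = "__pending__"   # sentinel marking an in-progress computation

    def get_or_compute_shared(key, compute, pending_ttl=30, result_ttl=60,
                              poll_interval=0.2):
        """Cross-process single-flight using Redis as the shared 'pending' store."""
        # Try to claim the key: SET NX succeeds only if the key is absent.
        if r.set(key, PENDING, nx=True, ex=pending_ttl):
            result = compute()
            r.set(key, json.dumps(result), ex=result_ttl)
            return result

        # Someone else holds the key: poll until it is no longer pending.
        while True:
            value = r.get(key)
            if value is None:
                # The pending marker expired (the producer likely failed),
                # so take over and compute it ourselves.
                return get_or_compute_shared(key, compute, pending_ttl,
                                             result_ttl, poll_interval)
            if value != PENDING:
                return json.loads(value)
            time.sleep(poll_interval)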

Aside from caching entire responses, you might also consider which internal calculations can be reasonably cached. This might speed up your calculations, to the point that caching entire responses might become less appropriate. The internal caches might have a “pending” status as well, so that they can avoid concurrent computations of the same value.
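As one illustration, the paid traffic-data lookups could be cached per grid cell for a short window, independently of whole-response caching. This sketch assumes the cachetools package and a hypothetical helper name; the cache size and TTL are placeholders:

    from cachetools import TTLCache, cached

    @cached(cache=TTLCache(maxsize=4096, ttl=30))
    def traffic_conditions(grid_cell: str) -> dict:
        """Hypothetical helper: fetch traffic data for one geohashed grid cell."""
        ...  # call the paid external traffic API here

Note that a plain TTL cache like this does not stop two concurrent calls from computing the same value; the “pending” handling sketched above would still apply to these internal lookups.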

If you expect lots of approximately-concurrent requests for the same resource, you could consider waiting for a short duration in order for more shareable requests to build up. This trades latency for throughput. Such a waiting duration is quite similar to a TTL on a cache entry, but with the crucial difference that you don’t have to actually store the response data in the cache – the response is just copied to all currently waiting requests.
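A minimal asyncio sketch of that coalescing idea, assuming compute is a zero-argument async callable and treating the 50 ms window as a purely illustrative value:

    import asyncio

    _inflight: dict[str, asyncio.Future] = {}

    async def coalesced(key: str, compute, batch_window: float = 0.05):
        """Copy one computation's result to all approximately-concurrent callers.

        The first request for a key waits a short window so that later
        identical requests can attach to the same in-flight future; nothing
        is ever stored in a cache.
        """
        future = _inflight.get(key)
        if future is not None:
            return await future                  # piggyback on the in-flight work

        future = asyncio.get_running_loop().create_future()
        _inflight[key] = future
        try:
            await asyncio.sleep(batch_window)    # let more identical requests arrive
            result = await compute()
            future.set_result(result)
            return result
        except Exception as exc:
            future.set_exception(exc)
            raise
        finally:
            del _inflight[key]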


Amon’s answer is correct, and this answer is really just a specific variation of the approach described in the first part of his answer:

When a request comes in, look in the cache based on the appropriate key(s). If there’s nothing there, create a new resource URI which will return the request result. Start processing the result. Put the URI in the cache and associate it with the request key. Reply with a response which references that location. You could use redirects for this if that’s desirable.

When the client attempts to GET the response, if it has not yet completed you have a couple of choices. You could either respond with a ‘try again later’ (or ‘pending’) response, or you could block on the server until the response is ready. If your response should be created in a reasonably short period, you might be able to get away with the latter, but an immediate response is safer.
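
A sketch of that flow with Flask, where the initial request answers 202 Accepted with a Location header pointing at a result resource, and the result endpoint returns either “pending” or the finished route. The paths, the in-memory job store, and the route_cache_key_from helper are assumptions; a real deployment would want a shared store and proper locking:

    import threading
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    jobs: dict[str, dict] = {}        # job_id -> {"status": "pending"} or {"status": "ready", "result": ...}
    key_to_job: dict[str, str] = {}   # normalized request key -> job_id

    @app.get("/route")
    def start_route():
        key = route_cache_key_from(request.args)   # hypothetical normalizer (excludes api_key, geo-hashes)
        job_id = key_to_job.get(key)
        if job_id is None:
            # First request for this key: create a result resource and start work.
            job_id = uuid.uuid4().hex
            key_to_job[key] = job_id
            jobs[job_id] = {"status": "pending"}
            threading.Thread(target=compute_route, args=(job_id, request.args.copy()),
                             daemon=True).start()
        # Every "similar" request is pointed at the same result URI.
        return jsonify({"status": "pending"}), 202, {"Location": f"/route/result/{job_id}"}

    @app.get("/route/result/<job_id>")
    def route_result(job_id: str):
        job = jobs.get(job_id)
        if job is None:
            return jsonify({"error": "unknown job"}), 404
        if job["status"] == "pending":
            # The "try again later" variant; a blocking variant would wait here instead.
            return jsonify({"status": "pending"}), 202, {"Retry-After": "2"}
        return jsonify(job["result"]), 200

    def compute_route(job_id: str, args) -> None:
        """Hypothetical worker: pull traffic data and calculate the custom route."""
        result = {"route": "..."}     # the expensive calculation goes here
        jobs[job_id] = {"status": "ready", "result": result}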

If you can come up with some reasonable hashing scheme for mapping ‘similar’ requests to the same URI (which seems doable here), you can avoid race conditions where two or more requests come in so close together that you still end up processing the same request more than once (or have to come up with locking schemes to prevent it). That might be overkill here, but it’s fairly straightforward if you can work out a scheme. It could also lead to building that logic right into the client, but I think doing it server-side has some advantages, especially if you have to worry about different versions of the client.

The main challenge here, which you’ll want to consider regardless of the approach, is what happens if the processing fails to complete. Then you could have all the clients waiting on something that never finishes. Consider how long the work should take, what a reasonable timeout is, and how (or whether) to retry if the background processing fails.
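One concrete guard is to bound how long anything blocks on a “pending” entry: in an external store the pending marker can carry a TTL (as in the Redis sketch above), and in-process waiters can use a timed wait. A tiny sketch, with the 10-second budget and the choice to raise being assumptions:

    import threading

    def wait_with_timeout(done: threading.Event, key: str, timeout: float = 10.0) -> None:
        """Bound how long a waiter blocks on a pending entry.

        If the producer crashes or overruns, waiters stop blocking and can
        retry, recompute, or surface an error instead of hanging forever.
        """
        if not done.wait(timeout=timeout):
            # Optionally clear the stale "pending" entry here so that the next
            # request becomes the new producer instead of waiting again.
            raise TimeoutError(f"route computation for {key!r} did not complete in time")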
