Why do Databases Require Client-Side Retries
11 Jan 2019

I’ve written in the past that client-side retries are actually more ubiquitous than most practitioners seem to think they are. But more fundamentally than that post discusses, why, morally, do retries need to exist at all? The entire stated premise of database isolation, like, the thing you learn on day one, is that it’s meant to provide the illusion that every transaction in the database is a completely independent actor that doesn’t need to concern itself with the machinations of any other transaction. Which is why I think many users are confused when some databases tell them that they need to be prepared to re-issue their query if something goes wrong. Wait, wasn’t the idea that you would handle that for me? Are you saying you messed up and need another shot at it? Isn’t it your job to, like, not do that? Or at least be able to sort it out yourself if you do?
There are two distinct questions here that we need to disentangle:
1. Why can’t you solve this problem state for me, and
2. why can’t you prevent me from getting into this problem state in the first place?
The hurdle causing (1) is really that your database doesn’t control the rest of the universe. This is more about maintaining atomicity (reminder: the A in ACID) than it is about “having another go at it”. When the database decides the transaction can no longer be committed for some reason, it needs to make sure that everything that happened as a result of that transaction is undone. That includes any decisions made using the data the database returned, such as
- whether to advance the program counter in the driving application,
- whether to send an email to a user, or
- whether to launch the missiles.
In a traditional conversational SQL model, there’s no way for the database to guarantee that these things will be rolled back without informing the user that they need to do it themselves (in most cases this means requiring the application to not take irreversible actions until the transaction has successfully committed).
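To make that contract concrete, here’s a minimal sketch of the retry loop a client is expected to write, in Python against a generic DB-API-style driver. `RetryableError` is a stand-in for whatever your particular database raises on a retryable abort, and the `accounts` table and email stub are invented for the example:

```python
# Sketch of the client-side retry contract. RetryableError stands in for
# whatever the driver raises on a retryable abort (e.g. a serialization
# failure); `conn` is any DB-API-style connection.

class RetryableError(Exception):
    """Stand-in for the driver's 'transaction aborted, safe to retry' error."""

def transfer(conn, src, dst, amount, max_attempts=5):
    for _ in range(max_attempts):
        cur = conn.cursor()
        try:
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                        (amount, src))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                        (amount, dst))
            conn.commit()
            # Only now is it safe to take irreversible actions: the database
            # can undo its own writes on abort, but not our side effects.
            print("committed; ok to send the email / launch the missiles")
            return
        except RetryableError:
            # The database aborted us mid-flight. Its half of the work is
            # rolled back automatically; our half is re-running this loop.
            # Because we deferred all side effects until after commit,
            # re-issuing the whole transaction is safe.
            conn.rollback()
    raise RuntimeError("gave up after %d attempts" % max_attempts)
```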
It actually is possible for the database to lift this restriction somewhat if it presents a more limited model to the user, in which data is not returned to the user at all until the transaction has committed. This comes with its own problems, and there are a lot of points in this design space to be explored. Spanner provides some interesting guarantees around what they call “snapshot queries” (which are slightly stale, read-only queries):
> Spanner automatically compensates for failures, resharding, and binary rollouts, affecting request latencies in a minimal way. The client library transparently resumes execution of snapshot queries if transient errors or other system events occur, even when partial results have been consumed by user’s code. In a read-modify-write transaction, a failure often results in the loss of locks (held in-memory only) and thus requires the transaction to be aborted and retried. The material in this section applies mostly to queries with snapshot concurrency, including queries with strong concurrency, which make up the majority of queries handled by Spanner.
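You can see the shape of this in the Cloud Spanner client libraries. Here’s a minimal sketch with the Python client (the instance and database names are placeholders): a read-only snapshot holds no locks and buffers no uncommitted state, which is what lets the library resume the result stream transparently after a transient failure.

```python
# Sketch of a Spanner snapshot query via the Python client. Because the
# read is against a consistent snapshot (no locks, no uncommitted state),
# the library can transparently resume the stream after transient errors.
# "my-instance" and "my-database" are placeholders.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT x FROM a"):
        print(row)
```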
It’s also possible to avoid these problems by moving to a more stored-procedure-centric world — if you can provably move all the intermediate state in your transaction into the database, then the database is capable of rolling back everything itself (provided any external systems it interacts with have rollback mechanisms too).
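As a sketch of what that looks like, reusing the DB-API-style connection from the earlier example: the procedure syntax below is PostgreSQL-flavored, and the interesting hypothetical is a database that retries the `CALL` internally on conflict, which it could safely do because no intermediate results ever escaped to the client.

```python
# With the whole transaction shipped as one unit, nothing the transaction
# reads reaches the client before commit, so a database *could* retry it
# server-side on conflict. The procedure is PostgreSQL-flavored SQL; the
# automatic server-side retry is the hypothetical part.
cur.execute("""
    CREATE PROCEDURE transfer(src INT, dst INT, amount INT)
    LANGUAGE SQL AS $$
        UPDATE accounts SET balance = balance - amount WHERE id = src;
        UPDATE accounts SET balance = balance + amount WHERE id = dst;
    $$
""")

# One round trip, no conversational back-and-forth to unwind.
cur.execute("CALL transfer(1, 2, 100)")
conn.commit()
```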
But why can’t we avoid getting into an abort situation in the first place? Isn’t my database supposed to have a sophisticated transaction scheduler? Like, yes, I want my database to intelligently give me concurrency and make my transactions run soon, but on the other hand, if one of those transactions ends up aborting, then I’m worse off than I would have been if they had just executed serially. One tempting explanation is that it winds up being fastest for the database to just throw all the transactions at the wall and hope that they contend infrequently enough that the performance lost to aborts is made up for by the low overhead of this approach. That might actually be true, but there’s a more fundamental reason why we can’t avoid all aborts this way in many query languages.

Think about what kind of information you would need in order to intelligently schedule the execution of your transactions. Probably, at the very least, which regions are read and written to, and, more ambitiously, in what order. Thus, any query language that allows this would have to permit static analysis to determine its read and write set. This presents an immediate problem with SQL, which is actually pretty expressive, as it turns out:
```sql
SELECT x FROM a WHERE y = (SELECT q FROM b LIMIT 1)
```
There’s simply no way to precisely determine what the read set of this query is statically — to be pessimistic, it could read at most one row from `b` and could read as much as the entirety of `a`. This is in spite of the fact that it probably won’t (if `y` is `a`’s primary key, it might only be as much as a single row of `a`, but it’s not knowable at plan-time what that row is). There have been attempts to design around this problem.
Calvin requires queries to state their read and write set ahead of time. This can limit the expressiveness of said queries, and forces you to do a dance to do any kind of dependent lookup:

> The idea is for dependent transactions to be preceded by a cheap non-transactional “reconnaissance query” that performs all the necessary reads to discover the transaction’s full read/write set. The actual transaction is then sent to be added to the global sequencer and executed, using the reconnaissance query’s results for its read/write set.
> Because it is possible for the records read by the reconnaissance query (and therefore the actual transaction’s read/write set) to have changed between the execution of the reconnaissance query and the execution of the actual transaction, the read results must be rechecked, and the process may have to be (deterministically) restarted if the “reconnoitered” read/write set is no longer valid.

The idea being that the initial reconnaissance query requires no scheduling at all, and the follow-up “real” query now has enough information to enable the database to make intelligent decisions about scheduling.
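To see the shape of that dance, here is an illustrative sketch in Python. Everything in it (`recon_read`, `submit_transaction`, `ReadSetChanged`, and the shape of the read set) is an invented stand-in for Calvin’s machinery, not its actual interface:

```python
# Sketch of Calvin's "reconnaissance query" dance, with invented names
# standing in for the real machinery.

class ReadSetChanged(Exception):
    """Raised when the reconnoitered read/write set turned out to be stale."""

def dependent_update(db):
    while True:
        # Step 1: a cheap, non-transactional recon query discovers which
        # rows the real transaction will touch. It needs no scheduling.
        q = db.recon_read("SELECT q FROM b LIMIT 1")

        # Step 2: the real transaction is submitted to the global sequencer
        # with its read/write set declared up front, so the database can
        # schedule it intelligently.
        try:
            db.submit_transaction(
                "SELECT x FROM a WHERE y = %s", (q,),
                read_set=[("b", "row 1"), ("a", "y = %s" % q)])
            return
        except ReadSetChanged:
            # The rows moved between recon and execution, invalidating the
            # declared read set; (deterministically) start over.
            continue
```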