How ACID is MongoDB?

Relational databases usually guarantee ACID properties related to how reliably transactions (both reads and writes) are processed. MySQL and PostgreSQL are examples of databases that provide these properties as a selling point.

The NoSQL movement trades off ACID compliance for other properties, such as 100% availability, and MongoDB is the leader in the field. I'm not saying that it is bad for Mongo not to provide these guarantees, since they are not the most important constraint in its use cases; just that when transitioning from MySQL and similar products in a production system, you must be aware of what you are sacrificing.

Actually, some variations of these properties map to object-oriented programming better than their classic counterparts; for example, document-based transactions are more in line with the Domain-Driven Design Aggregate pattern than the arbitrarily wide transactions of MySQL.

The Original

Here is a summary of how the ACID properties are interpreted by a relational DBMS:

- Atomicity: a transaction either completes in its entirety or has no effect at all.
- Consistency: every transaction brings the database from one valid state to another.
- Isolation: concurrent transactions do not see each other's intermediate states.
- Durability: once a transaction is committed, it survives crashes and power losses.

Atomicity

MongoDB provides only document-wide transactions: writes are never partially applied to an inserted or updated document. The operation is atomic in the sense that it either fails or succeeds for the document in its entirety.
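For instance, here is a minimal sketch with the Python driver (pymongo); the database, collection and field names are invented for the example, and the syntax is the current driver's rather than the 2.x-era API this post discusses.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
profiles = client.myapp.profiles  # hypothetical database and collection

# Both modifications target the same document, so they are applied
# atomically: no reader can observe the city updated but not the counter.
profiles.update_one(
    {"_id": "user-42"},
    {"$set": {"address.city": "Milan"}, "$inc": {"updates_count": 1}},
)
```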

Thus at least Mongo isn't as low-level as using a bunch of files, since the equivalent would be a set of files, each with its own lock.

There is no possibility of atomic changes that span multiple documents or collections: either you model the state changes of your application as additional documents, or you can't use Mongo where these database transactions are required. A classic example is to model the operations of a bank account with movement documents, instead of with a single account document: the insertion of a movement either succeeds or fails.
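A sketch of that modeling, again with pymongo and invented names: the whole movement is one document, so its insertion is atomic, and the balance is derived from the movements instead of being kept in a second document that would need a separate write.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

movements = MongoClient("mongodb://localhost:27017").bank.movements

# One movement = one document: the insert either succeeds or fails as a whole.
movements.insert_one({
    "account_id": "account-42",
    "amount": -100,
    "reason": "withdrawal",
    "created_at": datetime.now(timezone.utc),
})

# The balance is computed from the movements, not stored in another document.
balance = sum(m["amount"] for m in movements.find({"account_id": "account-42"}))
```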

If you would have to implement two-phase commit by yourself, just stick to a relational database for that persistence component of your application (I'm not saying anything about the rest of the data).

Consistency

Even in replica set configurations, all writes are targeted at the primary Mongo server; single-server consistency is easy to guarantee.

The secondary nodes may be out of date with respect to the primary, as eventual consistency only guarantees that, after a long enough period with no writes, they will catch up with the primary. However, by default the secondary servers cannot answer reads, so you distribute your read traffic, and pay the penalty of inconsistency, only if you want to and configure them to do so. Consistency and availability are incompatible due to the CAP theorem: you have to choose.
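To make the trade-off concrete, here is how a client could opt in to secondary reads. This is a sketch with the current pymongo read-preference API, which differs from the mechanism of the post's era, but the trade-off is the same; hosts and names are invented.

```python
from pymongo import MongoClient, ReadPreference

# Replica set connection; host names and set name are invented.
client = MongoClient(
    "mongodb://db1.example.com,db2.example.com,db3.example.com",
    replicaSet="rs0",
)

# Default: reads hit the primary and see your own writes.
events = client.analytics.events

# Opt-in: spread reads across secondaries, accepting possibly stale data.
stale_ok_events = client.analytics.get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
```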

Isolation

Until a few months ago, MongoDB had a server-wide write lock! I guess you can say it's a perfect isolation mechanism. Read locks, instead, can be taken by multiple connections at the same time, as long as no one is writing.

Since the 2.2 version, Mongo uses database-specific write locks, and many operations yield their locks upon encountering slow events such as page faults. In the future Mongo will move at least to collection-specific locks.

However, keep in mind that the Mongo model is similar to the transaction auto-commit mode of relational databases: you can't really talk about isolation, since every operation is immediately visible to any other connection.
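A sketch of what that auto-commit-like behaviour means in practice (pymongo, invented names): there is no transaction to commit or roll back, so a second connection immediately sees the write of the first.

```python
from pymongo import MongoClient

# Two separate connections, standing in for two application processes.
writer = MongoClient("mongodb://localhost:27017").shop.orders
reader = MongoClient("mongodb://localhost:27017").shop.orders

writer.insert_one({"_id": "order-1", "status": "pending"})

# No BEGIN/COMMIT in between: the other connection already sees the document.
print(reader.find_one({"_id": "order-1"}))
```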

Durability

Durability of writes is the biggest issue with Mongo. After the 2.0 version, the situation for a single server is:

- data files are committed to disk every 60 seconds, by default;
- the journal is committed every 100 milliseconds, by default.

These parameters are configurable with the syncdelay and journalCommitInterval configuration options. Committing a file means issuing the OS sync command over it; the journal is, as for all databases, synced very frequently so that in the event of a crash or a forced shutdown the database can be rebuilt from it.

What MySQL does is commit the journal after every write operation (actually after every committed transaction). The Mongo developers say they don't do this because in many scenarios the OS doesn't write the file to disk even after syncing (hardware buffering), and because the time spent waiting would impact availability. Only a battery-backed disk controller could guarantee these writes aren't lost upon a failure, but that isn't a common configuration. Turning off hardware buffering would be "very slow".

So if the server crashes, writes accepted after the last commit of the journal will be lost: a rare, but possible, case.
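If that window is unacceptable, the driver lets you wait for the journal commit before a write is acknowledged. This is a sketch with the current pymongo syntax, using the j write-concern option; names are invented and the modern API differs from the 2.x-era one discussed in this post.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")

# Wait for the journal to be committed to disk before acknowledging the
# write: slower, but the write survives a crash right after the insert.
audit_log = client.myapp.get_collection(
    "audit_log", write_concern=WriteConcern(j=True)
)
audit_log.insert_one({"event": "login", "user": "alice"})
```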

Across multiple servers, it is possible for the primary to die before transmitting updates to any secondary, since replication is asynchronous by default. These updates can be merged back if the failed primary is recoverable.

However, with the write concern options you can require a write to be replicated to at least N secondaries before considering it finished (clustered durability). Write concerns can even be customized for single insertions. It's a bit strange to skip writing to disk only to wait for network calls to finish, anyway.
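Here is a sketch of per-write clustered durability with the current pymongo syntax; in the drivers of the post's era the same options were typically passed per call, but the idea is identical. The replica set members, names and numbers are invented.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient(
    "mongodb://db1.example.com,db2.example.com,db3.example.com",
    replicaSet="rs0",
)
payments = client.shop.payments

# Require acknowledgment from 3 members (primary plus two secondaries),
# with a 5 second cap, before this insert is considered finished.
replicated_payments = payments.with_options(
    write_concern=WriteConcern(w=3, wtimeout=5000)
)
replicated_payments.insert_one({"order_id": "order-1", "amount": 100})

# Other writes on the same collection keep the cheaper default concern.
payments.insert_one({"order_id": "order-2", "amount": 20})
```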

Thus, once upon a time, Mongo only supported clustered durability, by replicating everything to secondary servers. Under the assumption of unrelated failures, this protects you because two servers will never fail in the same time span (meaning one fails and the other fails before the first is repaired). But if your data center loses power, only journaling, introduced in Mongo 1.8, will be able to save you.

The take-away from this discussion is that Mongo does not provide durability by default (outdated post), but lets you tune the configuration of a replica set in order to achieve it, if you are willing to sacrifice enough performance.
