How do database read/write operations behave in a distributed, Dynamo-style cluster?

Meet Michael Rhodes

Mike Rhodes and I worked together back in Cloudant’s startup days. Now at IBM, Mike is officially Engineer & Architect, Mobile First Data & Cloudant IBM Analytics, Cloud Data Services. I’ve long appreciated his work on the engineering team, and he’s also an excellent writer.

I’d like to point to an article Mike recently posted to his personal blog, dx13.co.uk: “CouchDB 2.0’s read and write behaviour in a cluster.” In it, he explains how database reads and writes work in a clustered Apache CouchDB™ environment, which will become generally available in the project’s upcoming 2.0 release.

Clustered CouchDB Operations

CouchDB 2.0 will soon deliver on its promise of running across a distributed, highly available cluster of servers. Much of the code comes from years of operating IBM Cloudant at scale. Cloudant donated it to the Apache community, and the community has done lots of work to integrate it into the upcoming 2.0 release.

Mind Your Rs and Ws

Reasoning about database reads and writes on a single server is more straightforward than reasoning about them in a cluster. When several copies of a database are spread across multiple server nodes, a request is answered once enough of those copies respond, and the number of copies that must agree before the cluster replies is known as the “quorum.”
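To make that concrete, here is a minimal sketch, assuming a local three-node CouchDB 2.0 cluster at http://localhost:5984 with the commonly cited defaults of n=3 replicas and a read quorum of r=2. CouchDB lets you override the read quorum per request with the r query parameter; the database name mydb, the document id, and the credentials below are placeholders:

```python
import requests

# Assumptions: a CouchDB 2.0 cluster reachable at this URL, a database
# named "mydb", and default replica settings (n=3, r=2, w=2).
BASE = "http://admin:password@localhost:5984"

# Read a document with the default read quorum (typically r=2 of n=3):
# the coordinating node waits for two replicas to respond before answering.
doc = requests.get(f"{BASE}/mydb/my-doc-id").json()

# Override the read quorum for this one request: r=1 returns the first
# copy found (faster, but may miss a more recent write), while r=3 waits
# for every replica (slower, but only succeeds quickly when all nodes
# are reachable).
fast_read = requests.get(f"{BASE}/mydb/my-doc-id", params={"r": 1}).json()
careful_read = requests.get(f"{BASE}/mydb/my-doc-id", params={"r": 3}).json()
```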

Mike’s article gives engineers a mental model for understanding how CouchDB behaves when network partitions occur within the cluster. A lot can happen between the load balancers in a cluster and the database nodes that sit behind them. The good news is that CouchDB always favors availability, even in situations where other clustered databases might refuse requests because of conflicts or an unmet quorum.
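For writes, that availability-first behaviour shows up in the HTTP status code rather than as an error. A sketch, again assuming the same hypothetical cluster and defaults (n=3, w=2): when a write quorum of replicas acknowledges the write the cluster answers 201, and when fewer than w replicas (but at least one) respond, for example during a partition, it answers 202, meaning the write was accepted and will be propagated to the remaining replicas once the cluster heals.

```python
import requests

BASE = "http://admin:password@localhost:5984"  # hypothetical cluster endpoint

# Create a document. With defaults of n=3 and w=2, the coordinating node
# waits for two replicas to confirm the write before responding.
resp = requests.put(f"{BASE}/mydb/another-doc-id", json={"state": "new"})

if resp.status_code == 201:
    # A full write quorum (w replicas) acknowledged the write.
    print("write met quorum:", resp.json())
elif resp.status_code == 202:
    # Fewer than w replicas responded (e.g. during a network partition),
    # but at least one accepted the write. CouchDB stays available and
    # replicates the change to the remaining nodes when they return.
    print("write accepted without full quorum:", resp.json())
else:
    resp.raise_for_status()
```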

Read on in Mike’s article to learn the basic mechanics of quorum in CouchDB 2.0.

© “Apache”, “CouchDB”, “Apache CouchDB”, and the CouchDB logo are trademarks or registered trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
