Just make it scale: An Aurora DSQL Story
At re:Invent we announced Aurora DSQL, and since then I’ve had many conversations with builders about what this means for database engineering. What’s particularly interesting isn’t just the technology itself, but the journey that got us here. I’ve been wanting to dive deeper into this story, to share not just the what, but the how and why behind DSQL’s development. Then, a few weeks ago, at our internal developer conference — DevCon — I watched a talk from two of our senior principal engineers (PEs) on building DSQL (a project that started 100% in JVM and finished 100% in Rust). After the presentation, I asked Niko Matsakis and Marc Bowes if they’d be willing to work with me to turn their insights into a deeper exploration of DSQL’s development. They not only agreed, but offered to help explain some of the more technically complex parts of the story.
In the blog that follows, Niko and Marc provide deep technical insights on Rust and how we’ve used it to build DSQL. It’s an interesting story about the pursuit of engineering efficiency and why it’s so important to question past decisions – even ones that have served you well.
A brief history of Aurora DSQL
Since the early days of AWS, the needs of our customers have grown more varied — and in many cases, more urgent. What started with a push to make traditional relational databases easier to manage with the launch of Amazon RDS in 2009 quickly expanded into a portfolio of purpose-built options: DynamoDB for internet-scale NoSQL workloads, Redshift for fast analytical queries over massive datasets, and Aurora for those looking to escape the cost and complexity of legacy commercial engines without sacrificing performance. These weren’t just incremental steps—they were answers to real constraints our customers were hitting in production. And time after time, what unlocked the right solution wasn’t a flash of genius, but listening closely and building iteratively, often with the customer in the loop.
Of course, speed and scale aren’t the only forces at play. In-memory caching with ElastiCache emerged from developers needing to squeeze more from their relational databases. Neptune came later, as graph-based workloads and relationship-heavy applications pushed the limits of traditional database approaches. What’s remarkable looking back isn’t just how the portfolio grew, but how it grew in tandem with new computing patterns—serverless, edge, real-time analytics. Behind each launch was a team willing to experiment, challenge prior assumptions, and work in close collaboration with product teams across Amazon. That’s the part that’s harder to see from the outside: innovation almost never happens overnight. It almost always comes from taking incremental steps forward, building on successes and learning from (but not fearing) failures.
While each database service we’ve launched has solved critical problems for our customers, we kept encountering a persistent challenge: how do you build a relational database that requires no infrastructure management and scales automatically with load? One that combines the familiarity and power of SQL with genuine serverless scalability, seamless multi-region deployment, and zero operational overhead? Our previous attempts had each moved us closer to this goal. Aurora brought cloud-optimized storage and simplified operations; Aurora Serverless automated vertical scaling. But we knew we needed to go further. This wasn’t just about adding features or improving performance: it was about fundamentally rethinking what a cloud database could be.
Which brings us to Aurora DSQL.