Monday brought the first big shock of Snowflake Summit 25: the acquisition of Crunchy Data for a reported $250 million. Crunchy Data is a longtime South Carolina firm that has been working with and building upon the open source Postgres database for well over a decade, so what prompted the $70-billion data giant to buy it now?
When the cool kids in IT were going gaga for Hadoop and NoSQL databases back in the early 2010s, Paul Laurence, the president and co-founder of Crunchy Data, had a different idea. While those other technologies had their backers and their successes, Laurence was focused on another emerging trend, one that also involved enthusiasm for the potential of open source, but without all of the rule-breaking that Hadoop and NoSQL entailed.
“They were betting on a new open-source data management toolbox, where they had these NoSQL databases. That was part of where they were going,” Laurence told BigDATAwire in a July 2024 interview. “But they also saw this as an opportunity to shift toward open source on the relational and SQL side. And they had evaluated the landscape and chosen Postgres as sort of their big bet.”
Laurence also decided to make a bet on Postgres in 2012, when he co-founded Crunchy Data in Charleston, South Carolina, with his father, Bob. Over the years, the company launched a range of Postgres-based offerings, including a supported distribution of the database called Crunchy Postgres, a version of Postgres that runs on Kubernetes containers, a cloud-hosted version called Crunchy Bridge, and finally an analytics version called Crunchy Bridge for Analytics.
By the time 2024 rolled around, Crunchy Data had built a solid business with 100 employees and 500 customers. All of its products are 100% compatible with Postgres, a guarantee that is critical for the company’s customers as well as the fuel that drives the continued growth of Postgres. The bet on a boring piece of relational tech created by Michael Stonebraker 35 years ago had paid off, which is a small miracle in the hype-driven echo chamber of IT.
“Postgres was largely dismissed when we first started. It was really all about Hadoop and MongoDB,” Laurence told us. “It was very contrarian at the time…People would say, ‘You guys are crazy. What are you guys doing starting a SQL database company? SQL is dead. That’s not a real thing anymore.’”
Snowflake Finds a Postgres
You wouldn’t mistake Snowflake for a big fan of Hadoop or NoSQL, either. In fact, the company’s success is based in no small part on the wave of Hadoop failures in the late 2010s.
“I can’t find a happy Hadoop customer. It’s kind of as simple as that,” Bob Muglia, the former CEO of Snowflake, told this publication back in March 2017. “It’s very clear to me, technologically, that it’s not the technology base the world will be built on going forward.”
Muglia, of course, was right: Hadoop would go on to implode in early June 2019, thereby paving the way for the rise of cloud-based data warehouses and lakehouses, including Snowflake. Just over a year later, amid the COVID-19 pandemic, Snowflake famously went public in the largest-ever software IPO.
Since then, Snowflake has been in a neck-and-neck race with Databricks to develop the industry’s preeminent cloud data platform. Databricks, of course, is the company behind Apache Spark, which also contributed to the decline of the Hadoop stack. While Spark is still there, it’s just one part of what Databricks does these days.
Like its rival, Snowflake has also been moving away from where it started. While Snowflake has adopted newer analytics architectures for its RDBMS under the covers, including an MPP-style column store and the separation of compute and storage, Snowflake’s big bet has been on traditional SQL and relational technologies.
Snowflake’s big awakening under CEO Sridhar Ramaswamy has been the importance of openness and a shift away from proprietary tech. That’s what drove it last year to embrace Apache Iceberg, the open source table format that solved a bunch of the consistency issues that arose with Hadoop’s buffet-style data access patterns.
The support for Iceberg tables as an alternative to Snowflake’s traditional tables also opened the door for Snowflake customers to bring an array of other query engines to bear on Snowflake data, including open source Trino, Presto, Dremio, Spark, and Apache Flink, among others.
Now, with the acquisition of Crunchy Data, Snowflake has an entire open source database on offer. Snowflake Postgres, as the new offering is called, brings customers all the open goodness of Postgres and its vast ecosystem.
So what changed? Why is Snowflake doing this now?
What Changed at Snowflake
What changed is that agentic AI emerged on the scene, and Snowflake sensed that customers wouldn’t be happy using its proprietary relational Snowflake database to handle this emerging workload.
While Snowflake’s database is powerful and capable, the company is not about to open it up and allow customers to modify the database to handle these emerging agentic AI workloads in the same way that they could with an open source database, the way they could with Postgres.
Snowflake EVP Christian Kleinerman addressed the reasons behind the acquisition of Crunchy Data during a question-and-answer session with the press on Monday at Snowflake Summit 25.
“I think first and foremost, it confirms our commitment to interoperability and then customer choice,” Kleinerman said. “I think open source is a little bit less important. But what matters with Postgres is not whether it’s open source or not. What matters is there’s a very large community of people that value Postgres, built for Postgres, build on top of Postgres.
“That’s what drove us,” he continued. “That was the requirement from many of the customers and partners that we heard, which is: we like a lot of the technology you build, but it would make it way easier for us to bring computation, to bring applications, to even bring agents onto the data if you have something that’s Postgres compatible.”
When it comes to interoperability and standards, Postgres is king. Snowflake’s move shows that customers are willing to put up with a little less performance if they make it back in interoperability and standards.
Snowflake looked at the potential to build a Postgres-compatible layer on top of its existing database, but in the end, it decided to buy Crunchy Data instead.
“Customers and partners said, please give me Postgres compatibility,” Kleinerman said. “We asked one of our absolute top engineers, world class, top 10, top 20 in the world in database design, can we layer the Postgres semantics and Postgres API on top of our engine? And the answer was a very, very long document on why it won’t work.
“And if customers want compatibility with Snowflake, Snowflake is great,” he continued. “What we did with Unistore and hybrid tables is great. But if they want compatibility with Postgres, it’s not just the syntax, the semantics, and all of that. It has to be Postgres. The only way to achieve compatibility is with Postgres.”
By the way, Databricks did the same math and decided to buy Neon, a developer of a serverless Postgres database, for $1 billion.
Apparently, despite all the progress in big data, nothing can beat SQL and relational tech.
Related Items:
Databricks Nabs Neon to Solve AI Database Bottleneck
Snowflake Widens Analytics and AI Reach at Summit 25
Crunchy Data Goes All-In With Postgres