> Databricks will ruin Neon;
I certainly hope not. The focus on DX, the friendly free tier, and the community support are what made it special. If that vanishes behind Databricks’ enterprise guardrails, the goodwill will vanish with it.
No, it doesn't matter when you're paying $1B. Why would it? Tech companies don't care about profits. It's easy to become profitable - tech margins are obnoxiously high. They're bought and valued for their ability to scale and rapidly absorb market share.
I guess this is the beginning of the end of a great service; I'm not holding my breath. From the WSJ article, it sounds like they’ll just become some AI agent backend service for Replit, and from the previous conversation on HN, it sounds like Databricks ruins and shutters its acquisitions. Congrats on the big payout for the employees, though.
Most likely a holding state for a bit before Databricks ruins it or shuts it down. I started looking around for alternatives when the news broke last week or so.
Supabase is one that I'll consider, Xata [0] is another one that is interesting. Thankfully I just need "postgres", I don't need branching/PII-clearing/etc. That's all nice to have but I don't need it for my app.
I really would prefer a managed DB for multiple reasons but I might need to look at just self-hosting. I might have spent less time futzing with my DB if I had done that from the start instead of going Aurora Serverless v1 -> Planetscale -> Neon.
I believe that, it's a cool concept. But I was too nervous to build on top of that feature, I wanted to maintain my ability to leave Neon easily. After Planetscale (and using their version of schema branching) I didn't want to get pinched again when I went to switch (PS vs Neon branching was/is very different).
I think one of the coolest features of neon is being able to quickly spin up new DBs via the API (single tenant DBs) and while that is cool, my client list is small so manually creating the DBs is not a problem (B2B).
Crazy how big the data ecosystem has grown. Congrats to the Neon team on a good outcome, but good luck integrating into DBX culture and surviving.
I'm seeing a lot of DBX hate in this thread overall. I think it's warranted. At Tower[0], we're trying to provide a decent open solution. It starts with owning your own data, and Iceberg helps you break free.
I’m incredibly disappointed by this news. I really enjoyed Neon, but I seriously doubt I’m going to like Databricks’ stewardship of it. And that’s if they even still care about catering to people like me and don’t jack up the prices on us.
I guess it’s time to go back to the well of managed/serverless Postgres options…
Data warehousing is quickly becoming a commodity through open source. I know a company that had 2PB+ of data in Cloudera. But instead of moving to the cloud (and Databricks), they cut costs 5x by building their own analytics platform with Iceberg, Trino and Superset. The k8s operators are enterprise quality now. On-premises S3 is good, too. You can have great hardware (servers with 128 CPUs and 1 TB of RAM) and networking. It's not just Trino: StarRocks and ClickHouse have enterprise-grade k8s Helm charts/operators. That 60bn valuation is an albatross around Databricks' neck - their pricing will have to justify it, and their core business is commoditizing.
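For a feel of what such a stack looks like from the application side, here is a minimal, hypothetical sketch of querying an Iceberg table through Trino with the trino Python client; the hostname, catalog, schema, and table names are made-up placeholders, not details from the setup described above.

    # Hypothetical sketch: querying an on-prem Iceberg lakehouse through Trino.
    # Hostname, catalog, schema, and table names are placeholders.
    from trino.dbapi import connect

    conn = connect(
        host="trino.internal.example.com",  # on-prem Trino coordinator (assumed)
        port=8080,
        user="analyst",
        catalog="iceberg",  # Iceberg connector backed by on-prem S3-compatible storage
        schema="analytics",
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT event_date, count(*) AS events "
        "FROM page_views "
        "GROUP BY event_date "
        "ORDER BY event_date"
    )
    for row in cur.fetchall():
        print(row)

Superset would then point at the same Trino endpoint for the BI layer.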
Neon filled their product gap of not having an operational (row-oriented) DB.
Not commoditising for the enterprise. My last gig wouldn’t allow open source software, or any company that might not be there in a decade, or anything that kept data anywhere but our own tenant. We’d look for the “call us” pricing rather than hate it (which I normally do). We added Databricks and it was considered one of my top three achievements, because they don’t have to think about data platforms again, just focus on using it. It’s SO expensive for an enterprise to rejig for a new platform that you can’t rely on (insert open source project here).
I managed to add one startup and so far it’s done very well, but it was an exceptional case and the global CEO wanted the functionality. But it used MongoDB and we didn’t have any skills, so rather than learn one tiny thing for an irrelevant data store, they paid to use Atlas with all the support and RBAC etc. Kept hiring load down, one number to call, job done.
Startups are from Venus, enterprise are from Jupiter.
Totally agree. Happy open source StarRocks user here using the k8s operator for customer-facing analytics on terabytes of data. There's very little need for Databricks in my world.
Looking at the StarRocks site (https://www.starrocks.io/), they compare against ClickHouse, Druid and Trino. They don't even compare against Spark/Databricks! Guess Spark is just not competitive.
It's been a commodity for decades now. Metrics like price-performance have a long history, but the SnowBricks products fail at them quite dramatically. The difference is hard-sell vs. soft or no-sell.
Not having to buy an appliance and pay for it up front is quite a valuable option. Also the split between processing and storage allows for better archival and scaling strategies.
If Databricks just wanted a row DB, they could've done Postgres themselves. Paying this much for Neon, I think, is a sign that Neon has something special they want (which, knowing their marketing line, is "independently scalable storage and compute for Postgres").
For anyone looking for an open-source Cloudera alternative based on Kubernetes operators: we're building one (~5 years old now): https://stackable.tech/ & https://github.com/stackabletech/
On-premise open-source S3 is a problem though. MinIO is not something we're touching and other than that it looks a bit empty with enterprise ready solutions.
Don’t SeaweedFS and ceph/rook also offer this? Ceph/rook is definitely enterprise ready
Wouldn't Rook be a good solution? It's definitely proven in much larger settings than Minio, as it's just Ceph.
What's wrong with minio out of curiosity? Ceph an option?
This is at least partially subjective.
https://news.ycombinator.com/item?id=32148007
https://news.ycombinator.com/item?id=35299665
Ceph would be a theoretical option, but a) we don't have a lot of experience with it and b) it's relatively complex to operate. We'd really love to add a lighter option to our stack that's under the stewardship of a foundation.
Try expanding a cluster, or changing erasure coding configuration, or using anything that needs random access within a file (parquet), or any day 2 operation.
on what?
Look under the hood, the limitations are based on the core, sticking a UI on it does not hide what needs to happen at scale.
Guessing you’re referring to minio not ceph? Have they still not figured out how to do day 2? I mainly avoid them because of their license and the way they interpret it
They are not efficient; they have a one-time static hash to create a cluster. After that, it is all duct tape and glue. Want to expand? Add another cluster (pool) and then look for the cluster that contains the object. They don't know which cluster has the object, and performance does not scale as well with additional clusters. Want to decommission a single node? Drain the whole cluster. They refer to multiple pools as a single cluster, but it is essentially a set of static hashes that lack the intelligence to locate objects. Got the initial EC configuration not quite right? Sorry, you need to redo the entire cluster.
MinIO is a good fit if you want a small cluster that doesn't require day 2 operational complexity, as you only store a few TBs.
I have not looked into them recently, but I doubt the core has changed. Being VC-funded and looking for an exit makes them overinvest in marketing and storytelling.
That tracks with my past analysis as well, thanks
But why would you buy an operational DB from Databricks? The only thing that makes sense is Databricks flailing to maintain market cap.
In addition to the AI use cases, sometimes you want to serve data warehouse data in an OLTP way for fast lookups and high concurrency. Not sure whether Neon will do that, but I hope so.
One example from Snowflake is hybrid tables, which add a rowstore next to the columnar store.
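To make the rowstore-next-to-columnar idea concrete, here is a rough, hypothetical sketch using the Snowflake Python connector; the account, credentials, and table are placeholders, and the point is only that hybrid tables require a primary key to back fast OLTP-style lookups.

    # Hypothetical sketch: a Snowflake hybrid table used for point lookups.
    # Account, credentials, and names are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",
        user="demo_user",
        password="...",
        warehouse="DEMO_WH",
        database="DEMO_DB",
        schema="PUBLIC",
    )
    cur = conn.cursor()
    # Hybrid tables need a primary key; it backs the rowstore index for fast lookups.
    cur.execute("""
        CREATE OR REPLACE HYBRID TABLE user_profiles (
            user_id INT PRIMARY KEY,
            email   VARCHAR,
            plan    VARCHAR
        )
    """)
    cur.execute("SELECT plan FROM user_profiles WHERE user_id = %s", (42,))
    print(cur.fetchone())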
OLAP + OLTP = HTAP
SingleStore has been doing that for years. Unistore has been struggling.
ETL to bring all your data into Databricks/Snowflake is a lot of effort. Much better if your OLTP data already exists in Databricks and you directly access it from your OLAP layer.
With the push towards open table formats (Iceberg) from both Snowflake and Databricks, it's even harder to get your Postgres OLTP tables ready for OLAP.
The problem isn't in the CDC / replication tools in the market.
The problem is that columnar stores (especially Iceberg) are not designed for the write /upserts patterns of OLTP systems.
They just can't keep up...
This is a big problem we're hoping to solve at Mooncake [0]: turn Iceberg into an operational columnstore so that it can keep up (sub-second freshness) with your Postgres.
https://www.mooncake.dev/
I applied to neon last week and then the news broke about the acquisition. They rejected it this morning — I have never been happier to receive a rejection to an application.
This would’ve been three acquisitions straight for me and… I’m okay, they’re awful. I just want stability.
Congrats to the neon team! I use and love neon. Really hope this doesn’t change them too much.
I've been part of an acquisition as a first-year engineering manager, during which I had to navigate two subsequent rounds of layoffs. I was also part of the group that helped restructure teams and make calls on who to keep. Morale was terrible, and the cultures also did not gel at all.
It led to some serious burnout and I took several months off. I'm now happily working as an IC again.
I got hired at Kenna Security a month before they were acquired by Cisco and it was such a miserable experience that I won't work for any company the Kenna leadership are involved with, nor would I ever consider working at Cisco.
> Really hope this doesn’t change them too much.
My guess is that this team gets rolled into Online Tables tech, which would make product sense.
https://docs.databricks.com/aws/en/machine-learning/feature-...
Yes, that is what I expect, too. They have been paying for DynamoDB and CosmosDB for a few years now. However, Neon is not competitive latency/throughput-wise for the real-time workloads needed for high-end AI (like personalized recommendations). There are a few others I would have expected instead, like Cockroach, Aerospike, or RonDB.
I've personally had the opposite experience: acquisitions are one of the most interesting times to be hired into.
In a couple cases I’ve been recruited because I have a history of scaling and integrating acquisitions into companies successfully
The first acquisition I was a part of wasn’t too bad! But we were still culturally very different. So after 2 years and properly transitioning things, I bounced to another startup.
Walking into something like that is tough because the two teams sort of don’t like each other and you’re really “neither”. I’d want to make sure I was interviewed by both teams
>you’re really “neither”.
IMO, this is where the power of being hired into the situation is. No existing bias for either company and all the baggage that comes with that.
Allows a person to see the pros and cons of how things get done on both sides of the fence, and act accordingly
what if you had joined at neon's previous valuation (whatever it was) and got a sudden payday (assuming you had juuuust enough vesting)
I was a very early employee at the other two start ups that were acquired and even with equity it was not worth it. After all the class A shares were paid out, the rest of us got little.
I mean, hindsight 20/20 here, but I would have loved the theoretical money @ 1 billion. But those are so rare and my experience in the past 15 years hasn’t matched those unicorns.
Basically I’ve come to the conclusion that unless you have serious equity or you’re a founder, acquisitions suck. You’re the one doing the work making these two companies come together, while the founders usually bounce or are stripped of any real power to change things.
The $1B is very likely not all cash. Probably a significant portion is illiquid Databricks equity.
Maybe unrelated but Databricks is the most annoying garbage I have ever had to use. It fascinates me how anyone uses it by choice.
Databricks started in 2013 when Spark sucked (it still does) and they aimed to make it better / faster (which they do).
The product is still centered on Spark, but most companies don't want or need Spark, and a combination of Iceberg and DuckDB will work for 95% of companies. It's cheaper, just as fast or faster, and way easier to reason about.
We're building a data platform around that premise at Definite[0]. It includes everything you need to get started with data (ETL, BI, datalake).
0 - https://www.definite.app/
Aren't the alternatives you mentioned - Iceberg and DuckDB - both storage solutions, while Spark is a way to express distributed compute? I'm a bit out of touch with this space; is there a newer way to express distributed compute?
Not a new way like Ray, but a new way to express Spark super-efficiently (GPU-acceleration): https://news.ycombinator.com/item?id=43964505
DuckDB is not only a storage solution. It can directly query a variety of file formats at rest, without having to re-store anything. That's one of its selling points: you can query across archival/log data stored in S3 (or wherever) without needing to "ingest" anything or double-pay to duplicate the data you've already stored.
DuckDB is primarily a query engine. It does have a storage format, but one of its strengths is querying data where it already resides (e.g. a parquet file sitting in S3).
There are some examples[0] of enabling DuckDB to manage distributed workloads, but these are pretty experimental.
0 - https://www.definite.app/blog/smallpond
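As a small sketch of the query-in-place point above (DuckDB scanning Parquet directly in S3, no ingestion step); the bucket, prefix, and column names are hypothetical:

    # Minimal sketch: DuckDB querying Parquet files where they already live in S3.
    # Bucket, prefix, and column names are hypothetical.
    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")   # S3/HTTP support
    con.execute("SET s3_region = 'us-east-1';")   # credentials via env vars or SET

    top_users = con.sql("""
        SELECT user_id, count(*) AS events
        FROM read_parquet('s3://my-data-lake/events/*.parquet')
        GROUP BY user_id
        ORDER BY events DESC
        LIMIT 10
    """)
    print(top_users)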
Flink. It has more momentum than Spark right now.
"momentum" is a tricky word. Zig has more momentum than C++, but will it ever overtake the language? I'd bet not.
Databricks is the Jira of dealing with data. No one wants to use it, it sucks, there are too many features to try to appease all possible users but none of them particularly good, and there are substantially better options now than there were not long ago. I would never, ever use it by choice.
What options do you use? I don't work for Databricks but I am building my own data infra startup, so I'd like to hear what "good" looks like!
I used to be a big fan of the platform because back in 2020 / 2021 it really was the only reasonable choice compared to AWS / Azure / Snowflake for building data platforms.
Today it suffers from feature creep and too many pivots & acquisitions. That they are insanely bad at naming features doesn't help either.
I'm building another Spark-based choice now with ParaQuery (GPU-accelerated Spark): https://news.ycombinator.com/item?id=43964505
I’d settle for only one bad name per feature from them. Alas, they don’t feel so limited
Really hard disagree. Coming from hadoop, databricks is utopia. It's stable, fast, scales really well if you have massive datasets.
The biggest gripe I have is how crazy expensive it is.
If cost (or perf) is the issue, we're building a super-efficient, GPU-accelerated, easy-to-use Spark: https://news.ycombinator.com/item?id=43964505
Spark was a really big step up from hadoop.
But these days just use trino or whatever. There are lots of new ways to work on data that are all bigger steps up - ergonomically, performance and price - over spark as spark was over hadoop.
The nice thing about Spark is the Scala/Python/R APIs. They help avoid lots of the irritating things about SQL (the same transformation applied to multiple columns is a big one).
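For example, a hedged PySpark sketch of applying one transformation to every string column at once, the kind of thing that gets repetitive in plain SQL; the DataFrame here is made up.

    # Sketch: apply the same transformation to many columns with the DataFrame API.
    # The example data is made up.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("multi-column-transform").getOrCreate()
    df = spark.createDataFrame(
        [(" Alice ", " NY ", "  engineer"), ("Bob", "SF ", "analyst ")],
        ["name", "city", "role"],
    )

    string_cols = [c for c, t in df.dtypes if t == "string"]

    # Trim and lowercase every string column in one comprehension,
    # instead of repeating the same expression per column in SQL.
    cleaned = df.select(*[F.lower(F.trim(F.col(c))).alias(c) for c in string_cols])
    cleaned.show()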
Hadoop was fundamentally a batch processing system for large data files; it was never intended for the sort of online reporting and analytics workloads that the DW concept addressed. No amount of Pig and Hive and HBase and subsequent tools layered on top of it could ever change that basic fact.
The market for IBM-like software and platforms (everyone else uses this! It must be good!) apparently wasn't saturated yet
They push Serverless so hard but there are SO MANY limitations and surprise gotchas. It's driving me absolutely insane.
Hey, what are the most painful limitations/gotchas you're hitting? I'm on this team and would like to hear about pain points.
And it tends to be notably more expensive! 4-5x the price for fewer features...
the new cost-optimized mode is very promising, though
Is hosting Spark really that groundbreaking? Also, isn't Spark kind of too complicated for 90% of enterprisey data processing?
I really don't understand the valuation for this company. Why is it so high?
With cookies disabled I get a blank website, which is a massive red flag and an immediate nope from me.
Can't imagine someone incapable of building a website would deliver a good (digital) product.
They did build a website though. It even looks pretty nice. The restriction you've placed on yourself just prevents you from viewing it.
But.. but.... we MUST track you! That's the whole purpose of our site /s
Congrats to the Neon team. They make an awesome product. Obviously it’s sad to see this, but it’s inevitable when you’re VC funded. Let’s hope Nikita and co remain strong and don’t let Databricks bit.io them.
Databricks is Oracle-level bad. They will definitely ruin Neon or make it expensive. In the medium to long term, I will start looking for Neon alternatives.
Definitely agree, their M&A strategy is setup to strangle whoever they buy and they don't even know it. They're struggling in the face of Iceberg, DuckDB and the other tectonic shifts happening in the open source world. They are trying to innovate through acquisition, but can't quite make it because their culture kills the companies they buy.
I'm biased, I'm a big-data-tech refugee (ex-Snowflake) and am working on https://tower.dev right now, but we're definitely seeing the open source trend supported by Iceberg. It'll be really interesting to see how this plays out.
Congratz to neon team (i like what they built), but i don’t see the value or relation to databricks. I hope neon will continue as a standalone product, otherwise we lose a solid postgres provider from the market.
It's pretty heavily used in Azure, so I would be surprised if it went away. This is DBX's play to move into the transactional database space in addition to the analytical one.
They claim they will in the FAQ… but we know how this usually goes
If only companies were held liable for breaking promises they made when acquiring other companies
From the actual article
>As Neon became GA last year, they noticed an interesting stat: 30% of the databases were created by AI agents, not humans. When they looked at their stats again recently, the number went from 30% to over 80%. That is, AI agents were creating 4 times more databases versus humans.
For me this has alarm bells all over it. Databricks is trying to pump postgres as some sort of AI solution. We do live in weird times.
I remember the first post by the Neon team here on HN. I think I commented at the time that I thought it was a great idea. I’ve never had a need to use them yet, but thought I always would.
Cynically, am I the only one who takes pause because of an acquisition like this? It worries me that they will need to be more focused on the needs of their new owners, rather than their users. In theory, the needs should align — but I’m not sure it usually works out that way in practice.
> I remember the first post by the Neon team here on HN. I think I commented at the time that I thought it was a great idea.
Same! I remember it too. I found it quite fascinating. Separation of storage and compute was something new to me, and I was asking them about Pageserver [0]. I also asked for career advice on how to get into database development [1].
Two years later, I ended up working on very similar disaggregated storage at Turso database.
Congrats to the Neon team!
[0] - https://news.ycombinator.com/item?id=31756671
[1] - https://news.ycombinator.com/item?id=31756510
Taking a pause also... I don't believe serving AI can be aligned with serving devs. I hope that the part of the work related to the core of PostgreSQL will help the community.
I am excited to see Databricks turn into the next Oracle. This type of acquisition was inevitable. The king is dead! Long live the king!
And yes, congratulations to the Neon team! (Nikita is, after all, YC)
Neon's blogpost: https://neon.tech/blog/neon-and-databricks
WSJ article: https://www.wsj.com/articles/databricks-to-buy-startup-neon-...
Congratulations to the Neon team.
To be honest this is a little sad for me. I'd hoped that Neon would be able to fill the vacuum left by CockroachDB going "business source"
Being bought by DataBricks makes Neon far less interesting to me. I simply don't trust such a large organisation that has previously had issues acquiring companies, to really care about what is pretty much the most important infrastructure I've got.
There certainly is enough demand for a more "modern" postgresql, but pretty much all of the direct alternatives are straying far from its roots. Whether it be pricing, compatibility, source available etc.
Back when I was looking at alternatives to postgres these were considered:
1. AWS RDS: We were already on AWS RDS, but it is expensive, and has scaling and operations issues
2. AWS Aurora: The one that ended up being recommended, solved some operations issues, but came with other niche downsides. Pretty much the same downsides as other wire compatible postgresql alternatives
3. CockroachDB: Was very interesting and wire compatible, but had deeper compatibility issues; it was open source at the time, but it didn't fit with our tooling
4. Neon: Was considered to be too immature at the time, but certainly interesting, looked to be able to solve most of our challenges, maybe except for some of the operations problems with postgresql, I didn't look deeper into it at the time
5. Yugabyte: interesting technology, had some of the same compatibility issues, but fewer than the others, as they're also using the query engine from PostgreSQL as far as I can tell.
There are also various self hosting utilities for PostgreSQL I looked at, specifically CloudPG, but we didn't have the resources to maintain a stateful deployment of kubernetes and postgres ourselves. It would fulfill most of our requirements, but with extra maintenance burden, both for Kubernetes and PostgreSQL.
Hosting PostgreSQL by itself didn't have mature enough replication and operations features at that point. It is steadily maturing, but as we had many databases, manual upgrades and patches would have been very time-consuming, since PostgreSQL has some not-so-nice upgrade quirks: you basically have to unload and reload all data during major upgrades, unless you use extensions and other services to circumvent this.
In my brief experience as an engineer (2014->), I've learned that the best "modern" alternative to PostgreSQL at year X has been PostgreSQL at year X+5. :)
> 5. Yugabyte: interesting technology, had some of the same compatibility issues, but fewer than the others, as they're also using the query engine from PostgreSQL as far as I can tell.
Neon is Postgres.
That is why I was hopeful for Neon unlike a lot of the other ones. Yugabyte however isn't just postgres.
> same downsides as other wire compatible postgresql alternatives
I'm interested if you'd care to elaborate.
Mainly in relation to notify/listen and advisory locks. Most of our code bases use advisory-lock-based migration tools. It would be a large lift moving to an alternative or building a migration scheduler out of process.
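For context, this is roughly the pattern those migration tools depend on (sketched here with psycopg, a placeholder connection string, and a hypothetical lock key); anything wire-compatible that doesn't faithfully implement advisory locks breaks it.

    # Rough sketch of the advisory-lock pattern migration tools rely on:
    # one cluster-wide lock so only a single process applies migrations at a time.
    # Connection string and lock key are hypothetical.
    import psycopg

    MIGRATION_LOCK_KEY = 72707369  # arbitrary application-chosen key

    with psycopg.connect("postgresql://app@db.internal/appdb") as conn:
        with conn.cursor() as cur:
            # Blocks until no other session holds the same advisory lock.
            cur.execute("SELECT pg_advisory_lock(%s)", (MIGRATION_LOCK_KEY,))
            try:
                cur.execute("ALTER TABLE users ADD COLUMN IF NOT EXISTS plan text")
                conn.commit()
            finally:
                cur.execute("SELECT pg_advisory_unlock(%s)", (MIGRATION_LOCK_KEY,))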
Hey everyone, I'm an engineer at Neon and I wanted to share this FAQ which covers a lot of the questions that are being brought up in the comments here:
https://neon.tech/databricks-faq
We're really excited about this, and will try to respond to some of the questions people have here later.
For what it's worth the questions can't really be answered by a simple FAQ, because history has shown that those answers aren't worth the page they're written on. Many companies that get bought talk all about the fact that nothing is going to change.
Something is always going to change, almost always in a way that impacts customers. In the best case it's something simple like a different name on the bill, other times it will leave customers scrambling for an alternative before a ridiculous deadline. It could happen within weeks, after a month, or it might take a year. The answers at the time of the announcement are the same regardless.
That’s a nice FAQ and all but after what happened to bit.io [0] you have to understand why people (like me) are extremely worried about this.
We’ve all read glowing blog posts and reassuring FAQs enough times after an acquisition only to see a complete about-face a few months or a year later.
I quite enjoyed using Neon but as a solo founder running my business on Neon I can’t help but think it’s insanity to not be looking for alternatives.
Databricks is _not_ a company I trust at all.
[0] if you don’t know, databricks acquired bit.io and shut down all databases within 30 days. Production databases had <30 days to migrate.
Love Neon, but that FAQ is worthless. Every company that gets acquired reassures customers that "nothing will change"... then it does, once the new company is in the acquirer's belly and gets digested.
The FAQ, as meaningless as history has shown such FAQs to be, is missing one key question: why?
Will there be a statement about the OSS nature of Neon?
I'm also an engineer at Neon. The plan is to continue developing Neon as an Apache-2.0 licensed software.
I’ve loved Neon and now I’m a little worried. Are there any alternatives?
[Disclaimer: I work for Xata]
As it happens, we've just launched our new Xata platform (https://xata.io/) which has some of the key Neon features: instant copy-on-write branching and separation of storage and compute. As an extra twist, we also can do anonymization (PII masking) between your production database and developer branches.
The way we do copy-on-write branches is a bit different. We haven't done any modifications to Postgres but do it completely at the storage layer, which is a distributed system in itself. This also brings some I/O performance opportunities.
While Xata has been around for a while, we're just launching this new platform, and it is in Private Beta. But we are happy to work with you if you are interested.
Btw, congrats to the Neon team!
1. Would you sign BAA (for HIPAA) for the Pay As You Go plan? Can't find that anywhere on your site except for that Lite is HIPAA compliant (https://lite.xata.io/security).
2. FYI, couldn't request access via the BYOC form so I sent an email as per the error: There was an error, please try again or contact us at info@xata.io.
1. Yes, we will sign BAA for Pay As You Go.
2. Thanks, I see you sent the email already, not sure why it failed. Will reach out over email.
The PII masking aspect is very interesting and something we couldn't get when we decided on DBLab a month ago. What does the deployment model within AWS look like?
If you want to deploy the whole platform inside your own AWS account, we have a Bring Your Own Cloud model: https://xata.io/byoc
If you want to get anonymization from your RDS/Aurora instance and into Xata branches, then you run only a CLI command (`xata clone`) which does something similar to pg_dump/pg_restore but with masking. It is based on our pgstream open source project.
Happy to organize a demo any time.
Do you support HTTP or WebSocket connections like https://github.com/neondatabase/serverless? In my experience Neon is ultra fast that way in serverless environments, around 1-5 ms per query including the network round trip.
We have support for SQL over HTTP in Xata Lite: https://lite.xata.io/docs/sdk/sql/overview
Neon also supports anonymization: https://neon.tech/docs/extensions/postgresql-anonymizer as well as schema only branching
Hey!
(Disclaimer: I work at Xata.) Just wanted to mention that we also support anonymization, in case that’s something you're looking into: https://xata.io/postgres-data-masking
Is this open source? A major point of Neon is that it's open source and self-hostable.
Several components are open source as their own projects (see below) which will allow you to reproduce most of the features on top of regular Postgres. But the storage part is not open source. We are considering a simpler implementation of it that would be realistic to self-host and still do copy-on-write branching.
These are the open source components:
* pgstream for the anonymization from the production branch
* pgroll for schema changes
* Xata Agent for the LLM-powered optimizations
I think when people look at Neon, the Aurora-style disaggregated compute/data architecture allowing highly scalable read replicas on cloud storage is the defining feature, and it's the only such project that offers it for Postgres. So the storage part is the point.
If all you care about is the forking aspect we use DBLab Engine pretty effectively: https://postgres.ai/products/dblab_engine. Gets deployed within your own infrastructure.
Supabase is your best bet.
I’ve had good experiences with Supabase.
Give Prisma Postgres a shot? https://prisma.io/postgres (I work for Prisma)
geldata.com
It's my understanding that Neon had some tech to basically "wake up" the DB when a request came in -- so you could "scale down to zero," if you will. I was hoping to explore this for small personal projects: I by far prefer Postgres and would love an isolated database per project.
Is there an alternative for that? Scale-to-zero postgres, basically?
For small personal projects, coolify (featured recently here on HN) lets you quickly stand up postgres with SSL, etc. and get a connection string in seconds. You can deploy in the same project or expose pg to the world like neon does.
One click turns it off, or you can just leave it on. A $5 VM will run a lot of small postgres.
I use both neon and coolify, and could live with either, though apples and oranges when it comes to the data branching feature. But a quick pg_dump/restore which could even be scripted solves my problem. Disclaimer: I like devops in addition to just dev.
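A small sketch of what that scripted pg_dump/restore might look like (Python subprocess, placeholder connection strings, and it assumes the target database already exists):

    # Hypothetical sketch: poor man's "branching" by copying one database into
    # another with pg_dump | pg_restore. Connection strings are placeholders; the
    # target database is assumed to already exist (e.g. created with createdb).
    import subprocess

    SOURCE = "postgresql://app@db.internal/prod_db"
    TARGET = "postgresql://app@db.internal/dev_branch"

    dump = subprocess.Popen(
        ["pg_dump", "--format=custom", SOURCE],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["pg_restore", "--no-owner", "--dbname", TARGET],
        stdin=dump.stdout,
        check=True,
    )
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("pg_dump failed")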
AWS Aurora Postgres Serverless v2 has that capability, though it takes multiple seconds.
AWS Aurora is way too expensive and their "serverless" offerings are overly complicated and not worth it IMHO.
I used Serverless v1 and then they doubled the prices for v2 while removing features so I moved to PlanetScale. They were great but as I grew and wanted multiple smaller DBs they didn't really have a good way to do that and I moved to Neon. Now, with this news, I guess I'll be looking for an alternative.
[Neon employee] p99 for Neon compute start is 500ms
Yikes. No real-time ML with that.
If your project database is suspending for lack of requests I doubt a 500ms wake up delay is an issue.
> AWS Aurora Postgres Serverless v2 has that capability
Was just about to react to someone being wrong on the internet and say that this is not true. Instead, TIL that this is, in fact, the case. Since 2024Q4.
Thanks for invalidating my stale cache.
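For anyone else with a stale cache, a hedged boto3 sketch of what opting into scale-to-zero looks like; the identifiers and engine version are placeholders, MinCapacity=0 only works on engine versions that support auto-pause, and a db.serverless instance still has to be added to the cluster separately.

    # Hypothetical sketch: an Aurora PostgreSQL Serverless v2 cluster that can pause
    # to zero ACUs when idle. Identifiers and engine version are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_cluster(
        DBClusterIdentifier="demo-serverless-pg",
        Engine="aurora-postgresql",
        EngineVersion="16.4",           # placeholder; must support auto-pause
        MasterUsername="postgres",
        ManageMasterUserPassword=True,  # let RDS keep the password in Secrets Manager
        ServerlessV2ScalingConfiguration={
            "MinCapacity": 0,           # scale to zero when idle
            "MaxCapacity": 4,
        },
    )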
Congrats folks at Neon! Been following the team and product since the very beginning. Well done, good DX and good education content too :).
This seems like quite the pivot though
So... As someone who's joining databricks in a few weeks, what's with the hate in the comments?
If you're someone who researched the company, enjoyed the interview and accepted an offer, you're probably not going to be in the same group as the people who hate Databricks. Databricks is a 10k-person enterprise software company that just raised $10bn and is using its deep pockets to hoover up smaller companies. If that doesn't scare you, you'll be fine. For many of us, the thought of working with or using the product of a company like that strikes fear into our hearts, because we have different values than you do.
Databricks is the antithesis of Neon. Neon is driven by product, Databricks is driven by sales. Opinions of Databricks in a thread about Neon are going to be on the negative side (but not necessarily representative).
Welcome to Databricks!
I've been an SA at Databricks for the past two years and love it here. The people you get to work with here are world-class and our customers legitimately love our product.
I too am a little confused by the comments in HN threads about Databricks; they seriously don't reflect what I see internally or what my customers say. I don't think I'd be working here if they did.
Hopefully you weren't one of the SAs working on the bit.io migration after databricks acquired them.
It's big, enterprise, and competes aggressively on marketing and hype. Also there have been a string of acquisitions where databricks has kind of just absorbed the team and product and then not done a great job for customers of the old company.
It's fine. Probably actually a good place to work.
In my experience, one factor is databricks releasing features fast but unpolished.
I like how they’re innovating, but it can be rough around the edges sometimes.
Every company gets a ton of hate on Hacker News. Don't let it bother you too much. But the specific concerns may be a directional signal.
Does anyone have insight into Neon's financials - specifically their revenue, COGS, and gross margins? I'm trying to understand what made Databricks value them at $1B. Was it strong unit economics, rapid growth, or mostly strategic/tech value?
Previous discussion a few days ago:
https://news.ycombinator.com/item?id=43899016
Databricks in talks to acquire startup Neon for about $1B (174 comments)
Not too familiar with Neon other than the basics - its premise is that you use S3 as bottomless storage for Postgres and it’s otherwise the same as standard Postgres right? And this is all open source? Why are people paying? Can’t you use a cloud provider and have them host this for you?
Hosting, operating, and autoscaling the various services it takes to make all that work (compute, pageserver, safekeeper, storage broker) is complex enough that most folks would rather not. Same as any other "managed X" service.
> you use S3 as bottomless storage for Postgres [...] Why are people paying?
It's vastly more complicated to do this efficiently than you might imagine. Postgres' internal architecture is built around a very different set of assumptions (pages, WAL, local disk etc.) than what the S3 API offers.
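To make the mismatch concrete: Postgres wants cheap random reads of 8 KB pages, while S3 gives you HTTP GETs. Here's a toy illustration (emphatically not how Neon actually works) of what a naive page read over S3 would look like; the bucket and key are placeholders, and it assumes boto3 plus configured AWS credentials:

    # Toy illustration only: fetch a single 8 KB "page" from an object in S3
    # using a ranged GET. Bucket/key are placeholders; requires boto3 and
    # configured AWS credentials. A real engine has to hide this latency with
    # caching, batching and background layer files -- which is the hard part.
    import boto3

    PAGE_SIZE = 8192
    s3 = boto3.client("s3")

    def read_page(bucket: str, key: str, page_no: int) -> bytes:
        start = page_no * PAGE_SIZE
        end = start + PAGE_SIZE - 1
        resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
        return resp["Body"].read()

    # Each call is a full HTTP round trip (tens of milliseconds), versus a
    # local pread() that returns in microseconds from the OS page cache.
    page = read_page("my-bucket", "relfile.0", 42)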
I get that, but my understanding is that they open-sourced this themselves, no?
The Databricks vs. Snowflake bidding war is probably an insanely good time to be a database startup.
Big congratulations!
I really do hope that their OSS strategy does not change due to this, as it's really friendly to people who want to learn the product and run smaller deployments. It's (intentionally or not) really hard to run at big scale because the control plane is not open source, which is what makes the business model actually work.
How do they know 80% of Neon databases are created by AI agents?
We can see which database creations come from products such as Replit, v0, Same.new, Create.xyz, and a few others.
Of course, there may be other agents creating Neon databases that we can't attribute, so if anything we're under-counting.
> Neon is valued at $1B;
Neon is still early‑stage and, AFAIK, not profitable. It’s a perfect snapshot of 2025: anything that’s (1) serverless, and (2) even vaguely AI‑adjacent is trading at a multiple nobody would have believed two years ago. Also supports my hypothesis that the next 12 months will be filled with cash acquisitions.
> Databricks will ruin Neon;
I certainly hope not. Focus on DX, friendly free tier, and community support is what made it special. If that vanishes behind Databricks’ enterprise guardrails, the goodwill will vanish with it.
Are people still making comments like these in 2025?
What the hell do profits have to do with valuing tech startups?
Profitability might not be as relevant as it used to be in M&A discussions, but it matters when you’re paying $1B.
Valuations like this only make sense if there’s a clear path to significant strategic leverage or future cash flow.
No, it doesn't matter when you're paying $1B. Why would it? Tech companies don't care about profits. It's easy to become profitable - tech margins are obnoxiously high. They're bought and valued for their ability to scale and rapidly absorb market share.
Guess this is the beginning of the end of a great service; not holding my breath. Judging from the WSJ article, it sounds like they'll just become an AI agent backend service for Replit, and the previous conversation on HN suggests that Databricks ruins and shutters its acquisitions. Congrats on the big payout for the employees, though.
https://www.youtube.com/watch?v=QM3VCYA1e-Q
What's the relationship to replit?
At first I thought it had something to do with arm64 SIMD instructions.
What happens to existing customers of Neon?
https://neon.tech/databricks-faq
Ask the bit.io customers….
Most likely a holding state for a bit before databricks ruins it or shuts it down. I started looking around when the news broke last week or so for alternatives.
Any alternatives that you are aware of ? Most search results show me Supabase.
Supabase is one that I'll consider, Xata [0] is another one that is interesting. Thankfully I just need "postgres", I don't need branching/PII-clearing/etc. That's all nice to have but I don't need it for my app.
I really would prefer a managed DB for multiple reasons but I might need to look at just self-hosting. I might have spent less time futzing with my DB if I had done that from the start instead of going Aurora Serverless v1 -> Planetscale -> Neon.
[0] https://xata.io/
Same here...I too just need Postgres... Will check out Xata, My workload isn't super critical.
Prisma Postgres is also an option to dig into: https://prisma.io/postgres
Branching is one of the most useful features of Neon.
I believe that, it's a cool concept. But I was too nervous to build on top of that feature, I wanted to maintain my ability to leave Neon easily. After Planetscale (and using their version of schema branching) I didn't want to get pinched again when I went to switch (PS vs Neon branching was/is very different).
I think one of the coolest features of Neon is being able to quickly spin up new DBs via the API (single-tenant DBs). While that is cool, my client list is small, so manually creating the DBs is not a problem (B2B).
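If it ever does become a problem, that per-client provisioning can be scripted against Neon's HTTP API. The sketch below assumes the v2 "create project" endpoint and payload as I remember them from the docs, so double-check before relying on it; the project name and env var are placeholders:

    # Hedged sketch: create a Neon project (which gives you a fresh Postgres
    # database) via the HTTP API. Endpoint and payload are based on Neon's
    # public v2 API as I understand it and may not match the current docs
    # exactly; NEON_API_KEY and the project name are placeholders.
    import os
    import requests

    API_KEY = os.environ["NEON_API_KEY"]  # API key from the Neon console

    resp = requests.post(
        "https://console.neon.tech/api/v2/projects",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"project": {"name": "tenant-acme-corp"}},  # one project per tenant
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["project"]["id"])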
Crazy how big the data ecosystem has grown. Congrats to the Neon team on a good outcome, but good luck integrating into DBX culture and surviving.
I'm seeing a lot of DBX hate in this thread overall, and I think it's warranted. At Tower[0], we're trying to provide a decent open alternative. It starts with owning your own data, and Iceberg helps you break free.
[0] - https://tower.dev
I’m incredibly disappointed by this news. I really enjoyed Neon, but I seriously doubt I’m going to like Databricks’ stewardship of it. And that’s if they even still care about catering to people like me and don’t jack up the prices on us.
I guess it’s time to go back to the well of managed/serverless Postgres options…
Supabase!
congrats to Nikita and all the wonderful folks at Neon!
A VC-funded company that has never been profitable spending a billion on another startup…
Databricks is profitable afaik.
https://www.wing.vc/content/comparing-the-financials-of-data...