I don't know... 1.2% of GDP just doesn't seem that extreme to me. Certainly nowhere near "eating the economy" level compared to other transformative technologies or programs like:
Yeah, that's my first reaction too. 1.2% doesn't sound like much. It's just people making headlines out of thin air. If the article listed water and energy consumption, I might be more concerned.
Slightly off-topic, but ~9% of GDP is generated by "financial services" in the US. Personally, I think that's a more alarming data point.
I have read, though not verified the figures myself, that if the United States had Australia's healthcare system (universal, government-funded healthcare, excluding dental), then all US citizens would have near-free healthcare, would not need costly insurance, and the government would spend a similar amount to what it does now.
Ultimately, "financial services" is what's downstream of insurance, banking (deposits / money transfers), loans and retirement savings. Also efficient capital allocation and the provision of government services to some extend. Those are things we want, and we want those things to work well.
Why is 9% for financial services bad? This should cover fees/interest from everything like loans, transactions, mortgages, advice, investing, etc. It doesn't seem that surprising to me that the systems that are the backbone for all the money operations that power the rest of the economy make up about 10%.
I get your point, but the flip side is that private companies like Visa and Mastercard get to skim 2.5%+ off the entire economy. Visa has more than a 50% profit margin, and it's not like these companies are innovating with all that extra cash either. It's just money from my pocket to some rich investor somewhere.
Visa and Mastercard aren’t skimming 2.5% off the economy; the majority of the interchange fees go to banks (which Visa is not; their actual product is VisaNet which provides payment infrastructure, broadly.)
Trivially verifiable: Visa's revenue is about $35B, which is not even close to 1% of just US GDP (about $30T).
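Back-of-the-envelope, using those figures:

    print(35e9 / 30e12)   # ~0.0012, i.e. roughly 0.12% of US GDP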
The interest you pay is not necessarily all financial-services revenue. Only the net interest the industry receives counts as revenue. There's a lot of netting going on in finance.
"Inefficient" implies the money is being burned or something. It's flowing into the pockets of people who work in the financial services industry, who then spend it on other things. The economy isn't zero-sum.
And the industry itself greases the wheels of other industries. In other words without financial services like lending and payment processing there would be less spending and investment overall, so other industries would shrink along with it.
You're falling for the broken window fallacy. The sector is inefficient, as demonstrated by automation reducing the percentage of the economy devoted to financial services without any negative effects.
Banking used to really suck. Walk into an old bank building and it looks empty, with spaces for a dozen tellers that are never actually used. This is a good thing, as nobody actually wants to stand in line at a bank. People have largely stopped using cash because swiping a card is just more pleasant.
Meanwhile, payment networks (Visa, Mastercard) have over a 50% profit margin; that's a huge loss for the US economy. Financial services dropping to 1% of the overall economy would represent a vast improvement over the current system.
Is there any evidence that central planning on a much larger scale is drastically more efficient? We're talking about a whole country, after all. I take your point that companies themselves are usually centrally planned internally, but centrally planned economies haven't fared so well.
Some years back I was talking to an acquaintance through my daughter's kindergarten and complaining about this point, and he said it was because financial services was where all the innovation was happening (he was an investment consultant of some sort).
I can't help but wonder if there's a middle ground between people not being able to obtain credit to pursue new enterprises, and entire productive enterprises being swallowed up in the pursuit of short-term rent seeking.
Could such a middle ground exist? Sure. Could someone design a system where that middle ground was a natural equilibrium? Unsure. I don't see how you incentivise the goldilocks behaviour (but I am not the smartest bear so maybe someone else can)
There's a good book on this topic by a Scottish philosopher: An Inquiry into the Nature and Cauſes of the Wealth of Nations by Adam Smith, LL. D. and F. R. S, formerly Profeſſor of Moral Philoſophy in the Univerſity of Glasgow.
The economy is not a fixed pie that you can just take slices from one sector and give them to another. Financial services provide liquidity that supports every other sector; getting rid of them would cause contraction in every other sector.
That's true to a degree. But giving them free rein incentivizes the kind of behavior that gave us the 2008 financial crisis. So the commenter can be forgiven for wanting to limit that sector.
You’re right that it’s large in an absolute sense, but any sector of the US economy is going to be large in an absolute sense. It’s not a very meaningful statement. Using percentages allows comparison to other items, which for some purposes gives a more useful sense of size. For instance, based on your numbers, AI expenditure is about 1/3 the total military expenditure. I tend to agree that this is less than I expected, and generally makes me feel a bit better about the (imo excessive) hype.
It's small as a part of the economy. It's huge as a completely new thing. The US economy in total has been growing something like an average of 2.5% over recent years. Something that is all-the-growth-of-the-last-year-in-one-place is pretty significant.
AI didn’t happen in one year. Netflix’s famous recommender system challenge kicked off in _2006_! And “Big Data” was all the rage ten years ago. The category “AI” includes these things.
The entire population of Norway fits in Queens and Brooklyn. If everyone there decided to whittle spoons, we'd be mildly concerned about just what got in their water, but it wouldn't be an existential crisis for the rest of us.
I will never understand people who use tiny European countries as meaningful comparisons to continent sized ones.
It helps people understand scale, since there is only one other kinda similar economic machine, and it's China. The EU is too loosely coordinated to really compare.
Queens and Brooklyn are among the most densely populated areas on the planet. I will never understand people who use massively outlier-sized cities as meaningful comparisons to nations.
You're looking at dots on a graph when you should be looking at lines and curves (and slopes of curves). The author makes this argument:
* Movement of capital from other fields to "AI".
* Duration of asset value (eg, AI in months/years vs railroad in decades/centuries).
* "Without AI datacenter investment, Q1 GDP contraction could have been closer to –2.1%".
To state it a bit more simply: the rate at which this spending has gone from about 0% to 1.2% is extremely fast, which is the point the author is trying to make.
The Q1 GDP comment is stunning because what it says is that if the same Q1 had happened just two years ago there’s a very good chance we’d be looking at a modestly sized recession. Now of course things aren’t zero-sum and it’s impossible to really make a useful claim like that but it’s still striking.
It's hard for me to tell which is the bigger misspending of money, LLMs or Apollo... At least I have direct access to LLMs. Not sure I would need direct access to moon rocks, though.
It seems quite plausible that if we hadn't done the Apollo program that we'd probably be about 10 to 20 years behind in semiconductors right now (not to mention other technologies).
When you say "we" I assume you are from Taiwan? Good for you people, but it isn't much of a win for US industrial policy when it pushes Taiwan to the ascendant position and seems to be locking in Asian dominance of tech manufacturing.
No, "we" as in humanity. Apollo funding gave the development of integrated circuits a boost. Sure, we would've developed integrated circuits eventually anyway but it would've taken longer to get there.
The crux of the article is asking whether such a large investment is justified; downplaying the article saying it's only X% of the GDP compared to Y doesn't address the issue.
More than a decade long. The technology and industry here were broadly shared. They did things like hijack bra manufacturers to make space suits.
> Railroads: 6% (mentioned by the author)
We're still using this investment today.
> Covid stimulus: 27%
The virus that was killing us then fizzled is probably not the best example... Only arguments will ensue if I even attempt to point things out on this one.
> WW2 defense: 40%
I mean, Russia made its last Lend-Lease payment in 2006. It led to American dominance of the globe. It looks like an investment that paid itself off.
How much of the hardware spend on AI is going to be usable in 5 years?
There are some deep fundamental questions one should be asking if they pay attention to the hardware space. Who is going to solve the power density problem? Does their solution mean we're moving to exotic cooling (hint: yes)? Have we hit another Moore's-law-style wall (IPC is flat, and we don't have a lot of growth left in clock speed; back to that pesky power and cooling problem)? If a lot of it is usable in 5 years, that's great, but then the industry isn't going to get any help from the hardware side, and that's a bad omen for "scaling".
Meanwhile, capex does not include power, data, consumables, or people. It may include training, but we know that can't be amortized (how long does a trained system last before you need another, or before you need a continuation/update?).
Everyone is going after AI under the assumption that they can capture the market, or build some sort of moat, or... The problem is that no one has come up with the killer app where the tech will pay for itself. And many in the industry are smart enough not to build their product on someone else's platform (because rug pulls are a thing).
"AI" could go the way of 3d tv's, VR, metaverse, where the hype never meets up with hope. That doesn't mean we wont get a bunch of great tooling out of it. It is going to need less academics and more engineering (and for that hardware costs have to drop...)
- The birth of the space age, and more realistically the birth of the ICBM and satellite age. Both key to national security, and in the context of a cold war.
- 40% of long-distance ton-miles travel by rail in the US. This represents a VAST part of the economic activity within the country.
- A literal plague, and the cessation of much economic activity, with the goal of avoiding a total collapse.
- ...Come on.
So we're comparing these earth-shaking changes and reactions to crisis with "AI"? Other than the people selling pickaxes and hookers to the prospectors, who is getting rich here exactly? What essential economic activity is AI crucial to? What war is it fighting? It mostly seems to be a toy that costs FAR more than it could ever hope to make, subsidized by some obscenely wealthy speculators, executives fantasizing about savings that don't materialize, and a product looking for a purpose commensurate to the resources it eats.
I agree we don’t have an actual ROI on AI yet. There is a ton of activity but the progress for society is speculation at this point. I don’t think we will have an idea on societal progression for at least 10 years and maybe 30.
It continually surprises me when people are in denial like this.
Literally every profession around me is radically changing due to AI. Legal, tech, marketing etc have adopted AI faster than any technology I have ever witnessed.
Interestingly I just talked to several lawyers who were annoyed at how many mistakes were being made and how much time was being wasted due to use of LLMs. I suppose that still qualifies as radically changing — you didn’t specify for the better.
Within 10 minutes earlier today, I took 1.5 years of raw financial trading data and generated performance stats, graphed out return distributions, ran correlation analysis, and, to top it off, created Monte Carlo shock tests using the base data as an input for the model, running hundreds of simulations with corresponding charts.
Each of the 15 charts would have been a page of boilerplate + Python, and frankly there was a huge amount of interdisciplinary work that went into the hundreds of thought steps in the deep reasoning model. It would have taken days to fill in the gaps and finish the analysis. The new crop of deep reasoning models that can do iteration is powerful.
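For concreteness, the shock-test step amounts to roughly this kind of sketch (the file and column names are hypothetical, and the single -10% shock is an arbitrary choice):

    import numpy as np
    import pandas as pd

    # hypothetical input: daily returns derived from the raw trading data
    returns = pd.read_csv("trades.csv")["daily_return"].to_numpy()

    rng = np.random.default_rng(0)
    n_sims, horizon = 500, 252
    # bootstrap historical returns, then inject a one-day shock mid-path
    paths = rng.choice(returns, size=(n_sims, horizon), replace=True)
    paths[:, horizon // 2] -= 0.10               # the assumed -10% shock
    equity = (1 + paths).cumprod(axis=1)
    print(np.percentile(equity[:, -1], [5, 50, 95]))  # spread of outcomes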
The gap between previous "scratch work" of poking around a spreadsheet, and getting pages of advanced data analytics tabula rasa, is a gap so large I almost don't have words for it. It often seems larger than the gap between pen and paper and a computer.
And then later, after work, I wanted to show real average post-inflation returns for housing areas that gentrify and compare them with non-gentrifying areas. Within a minute all of the hard data was pulled in and summed up. It then coded up a graph for the typical "shape of gentrification," which I didn't even need to clarify to get a good answer. Again, this is as large a jump as moving from an encyclopedia to an internet search engine.
I know it's used all over finance, though. At Jane Street (upper-echelon proprietary trading) they have it baked into their code development in multiple layers, in actually useful ways, not "auto completion" like mass-market tools. It is integrated into the editor and can generate code, but there is also AI that screens all of the code that is submitted, and an AI "director" that tracks all of the code changes from all of the developers. So if a program starts failing an edge case that wasn't apparent earlier, the director can reverse-engineer all of the code commits, find out where the dev went wrong, and explain it.
Then that data generated from all of the engineers and AI agents is fed back into in-house AI model training, which then feeds back into improvements in the systems above.
All of the dismissiveness reminds me of the early days of the internet. On that note, this suite of technologies seems large. Somewhere in-between the introduction of the digital office suite (word/excel/etc) and perhaps the Internet itself. In some respects, when it comes to the iterative nature of it all (which often degrades to noise if mindlessly fed back into itself, but in time will be honed to, say, test thousands of changes to an engineering Digital Twin) it seems like something that may be more powerful than both.
The adoption rate seems driven by a race to the bottom, a desire to control "the next big thing" before someone else does, executive reflex, and some real use cases.
But then we saw the same thing with Crypto, tons of money poured into that, the Metaverse was going to be the next big thing! People who didn't see and accept that must not understand the obvious appeal...
> What essential economic activity is AI crucial to?
The continued devaluing of skilled labor and making smaller pools of workers able to produce at higher levels, if not their automation entirely.
And yeah AI generated code blows. It's verbose and inefficient. So what? The state of mainstream platform web development has been a complete shit show since roughly 2010. Websites for a decade plus just... don't load sometimes. Links don't load right, you get in a never-ending spinning loading wheel, stuff just doesn't render or it crashes the browser tab entirely. That's been status quo for Facebook, YouTube, Instagram, fuck knows how many desktop apps which are just web wrappers around websites, for just.. like I said, over a decade at this point. Nobody even bats an eye.
I don't see how ChatGPT generating all the code is going to make anything substantively worse than hundreds of junior devs educated at StackOverflow university with zero oversight already have.
Stack Overflow university is quite good, honestly. The bigger problem is documentation written by Twitter and Facebook folk: their "solutions" don't even work within those companies, and certainly don't work when other people adopt them. On Stack Overflow, people occasionally point out the bad practices that others try to promote.
You think the world does a good job educating people? I'm very serious about making that wholesale change. Many are functionally illiterate and never reach sophisticated academic levels. Many don't even have parents at home who are qualified to supplement them. Many have teachers who will absolutely never be able to extract their potential. Many are in environments that our current education paradigms will never be able to overcome. LLMs will save a generation of kids.
Railroads led to a distribution of capital into society and to a long-term increase of wealth for many.
AI leads to capital concentration in the hands of those that already have money and might lead to a long-term reduction of wealth for the middle class.
Less purchasing power in the population is usually not good for economic development, so I have my doubts with respect to a boom.
> Railroads led to a distribution of capital into society
Did they? I recall that railroads were monopolies (the Vanderbilts). The government had to pass an act to break them up, because there was price collusion and farmers were forced to pay higher prices to transport their foodstuffs.
> AI leads to capital concentration in the hands of those that already have money
That is true for a lot of other capital-intensive ventures. Why pick AI specifically?
And AI is less monopolistic; at least it's not a natural monopoly. There is competition, and there are alternatives.
Railroads are for moving freight, and you don't move freight if you're not selling goods. Yes, there were trust issues, but fundamentally a railroad requires a surrounding market.
During the 1990s dotcom boom we massively overbuilt fiber networks. It was indiscriminate and most of the fiber was never lit.
After the dotcom crash, much of this infrastructure became distressed assets that could be picked up for peanuts. This fueled a large number of new startups in the aftermath that built business models figuring out how to effectively leverage all of this dead fiber when you don't have to pay the huge capital costs of building it out yourself. At the time, you could essentially build a nationwide fiber network for a few million dollars if you were clever, and people did.
These new data centers will find a use, even if it ends up being by some startup who picks it up for nothing after a crash. This has been a pattern in US tech for a long time. The carcass of the previous boom's whale becomes cheap fuel for the next generation of companies.
I can barely get 50Mbps up/down, and I only have Xfinity in this area. No fiber; I would pay for it, but here we are. 2025 in the good ol' USA. In an urban area, too.
When I moved a few years ago from Eastern Europe (where I had 1Gbit/s to my apartment for years) to the UK, I was surprised that "the best" internet connection I could get was about 40Mbit/s over a phone line. But it's a small town, and in the past few years even we have gotten fiber up to 2Gbit/s.
I'm surprised the US still has the issues you mentioned. Have you considered Starlink (fuck Musk, but the product is decent) or alternatives?
One is, of course, the size of the country, but that's hardly an "excuse." It does contribute though.
The other big reason is lack of competition in the ISP space, and this is compounded by a distinctly American captured system where the owners/operators of the "public" utility poles shut out new entrants and have no incentive to improve the situation.
Meanwhile the nationwide regulatory agencies have been stripped down and courts have de-toothed them, reducing likelihood of top-down reform, and often these sorts of problems inevitably end up running into the local and state government vs national government split that is far more severe in the US.
So it's one of those problems that is surprising to some degree, but when you read about things like public utility telephone poles captured by corporate interests, it's also distinctly ridiculous and American, and not surprising at all.
Telcos lay fiber for free up to the basement electrical closet; from the closet to each unit is on the landlord, and from each unit to the actual wall outlet needs arrangements with tenants. Sometimes ISPs subsidize that cost, but lots of arrangements still need to be made.
For one entire rented or owned house, it's just a call and a drill away.
Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
To me, this all sounds like an “end-of-the-world” nihilistic wet dream, and I don’t buy the hype.
I'm afraid that this might sound flippant, but the answer to your question comes through another question - why were early 19th century industrialists obsessed with replacing textile workers? Replacing workers with machines is not a new phenomenon and we have gone through countless waves of social upheaval as a result of it. The debate we're currently having about AI has been rehearsed many, many times and there are few truly novel points being made.
If you want to understand our current moment, I would urge you to study that history.
Programmers are going to be replaced by AI in the same way accountants got replaced by VisiCalc, engineers by CAD, and mathematicians by calculators and software like Mathematica.
Same reason so many people got excited in the early Internet days of how much work and effort could be saved by interconnecting everyone. So many jobs lost to history due to such an invention. Stock trading floors no longer exist, call centers drastically minimized, retail shopping completely changing, etc.
I had the same thought you did back then. If I could build a company with 3 people pulling a couple million of revenue per year, what did that mean to society when the average before that was maybe a couple dozen folks?
Technology concentrates gains to those that can deploy it - either through knowledge, skill, or pure brute force deployment of capital.
There's a lot of non-engineering people who are very happy to see someone else get unemployed by automation for a change. The people who formerly were automating others out of a job are getting a taste of their own medicine.
I am not an engineer and I expect my white collar job to be automated.
The reason to be excited economically is that, if it happens, it will be massively deflationary. Pretending CEOs are just going to pocket the money is economically stupid.
Being able to use a super intelligence has been a long time dream too.
What is depressing is the amount of tech workers who have no interest in technological advancement.
I'm not sure exactly what you mean by deflationary, but in general deflation in an economy is a very bad thing. The most significant period of economic deflation in the US was 1930-1933, ie, the great depression, and the most recent period was the great recession.
And since when do business executives NOT pocket the money? Pretty much the only exception is when they reinvest the savings into the business for more growth, but that reinvestment and growth usually only matters to the rest of us if it involves hiring.
> that would cause a tremendous drop in demand for the services the schadenfreude folks provide, hurting them as well
You're correct. But it doesn't matter. Remember the San Francisco protests against tech? People will kill a golden goose if it's shinier than their own.
> If this goose is also pricing others out of housing market it's not entirely unreasonable
It's self-defeating but predictable. (Hence why the protests were tolerated, if not backed, by NIMBY interests.)
My point is the same nonsense can be applied to someone not earning a tech wage celebrating tech workers getting replaced by AI. It makes them poorer, ceteris paribus. But they may not understand that. And the few that do may not care (or may have a way to profit off it, directly or indirectly, such that it's acceptable).
I don't quite follow. What exactly have the non-tech people of San Francisco gotten from all the tech people working there? How did they become richer (OK, apart from landlords), and how would they become poorer if the tech workers lose their jobs?
CEOs run every major media outlet and public platform for communication; people that hype AI will get their content promoted and will see more success, which creates an incentive to create content.
This doesn't even require any "conspiracy" among CEOs, just people with a vested interest in AI hype who act in that interest, shaping the type of content their organizations produce. We saw something lesser with the "return to office" frenzy, just because many CEOs realized a large chunk of their investment portfolio was in commercial real estate. That was only less hyped because, I suspect, there were larger numbers of CEOs with an interest in remaining remote.
Outside of the tech scene, AI is far less hyped and in places where CEOs tend to have little impact on the media it tends to be resisted rather than hyped.
I don’t think software developer is a white collar job. It’s essentially manufacturing. There are some white collar workers at the extremes but the overwhelming majority of programmers are doing the IT equivalent of building pickup trucks.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
For the same reason people are obsessed with replacing all blue-collar jobs. Every cent that a company doesn't have to spend on its employees is another cent that can enrich the company's owners.
The general view in my bubble was that blue-collar jobs are seen as dumb, physically demanding, and dangerous, so we're kind of replacing them for their own good, so that they can do something intellectual (aka learn coding). Whereas intellectual labour is kind of what humans exist for, so making intellectual work redundant is truly the end of the world.
Maybe it's my post-communist background though and not relevant for the rest of the world
Nobody wants to be unemployed, but people generally love the idea of getting what they want without having to interact with, let alone pay, other people.
Like, you have a brilliant idea, but unfortunately don't have any hard skills. Now you don't have to pay enormous sums of money to geeks and suffer them to make it come true. Truly a dream!
Ok, but who benefits from these efficiencies? Hint: not the people losing their jobs. The main people that stand to benefit from this don't even need to work.
Producing things cheaper sounds great, but just because it's produced cheaper doesn't mean it is cheaper for people to buy.
And it doesn't matter if things are cheap if a massive number of people don't have incomes at all (or even a reasonable way to find an income - what exactly are white collar professionals supposed to do when their profession is automated away, if all the other professions are also being automated away?)
Sidenote, btw, but I do think it's funny that the investor class doesn't think AI will come for their role...
To me the silver lining is that I don't think most of this comes to pass, because I don't think current approaches to AGI are good enough. But it sure shows some massive structural issues we will eventually face
> I do think it's funny that the investor class doesn't think AI will come for their role..
investors don't perform work (labour); they take capital risk. An ai do not own capital, and thus cannot "take" that role.
If you're asking about the role of a manager of investment, that's not an investor - that's just a worker, which can and would be automated eventually. Robo-advisors are already quite common. The owner of capital can use AI to "think" for them in choosing what capital risk to take.
And as for massive number of people who don't have income - i dont think that will come to pass either (just as you dont think AGI will come to pass). Mostly because the speed of these automation will decline as it's not that trivial to do so - the low hanging fruits would've been picked asap, and the difficult ones left will take ages to automate.
It's allowed our lifestyle for a short period of time, for a small number of people. It's not given to poor people, to people in places we've bombed or extracted resources from, or to people in the future, since it's destroying the planet.
We're all far closer to poor than we are to having enough capital to live off of efficiency increases. AI is the last thing the capitalist class requires to finally throw off the shackles of humanity, of keeping around the filthy masses for their labor.
It is the one thing I believe capitalism at some level works for: invest capital to build something that gives a competitive advantage over other capital, like buying a newer, bigger factory that allows producing more for cheaper to compete with others.
This is exactly it. I was talking to my wife about this this morning. She's a sociologist researcher, and a lot of people that work in her organization are responsible for reading through interviews and doing something called coding, where you look for particular themes and then tag them with a particular name associated with that theme. And this is something that people spend a lot of hours on, along with interview transcription, also done by hand.
And I was explaining that I work in tech, so I live in the future to some degree, but that ultimately, even with HIPAA and other regulations, there's too much of a gain here for it not to be deployed eventually. And those people's time is going to be used differently when that happens. I was speculating that it could be used for interviews as well, but I think I'm less confident there.
Nobody is obsessed with it. People are afraid of it. And yet, what will you do? Will you renounce adopting a tool that can make your work or someone else's work faster, easier, better? It's a trap: once you've seen the possibilities you can't go back; and if you do, you'll have to compete with those who keep using the new tools. Even if you know perfectly well that in a few years the tools will make your own job useless.
Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently. At that point I'd rather get the money anyway and spend the day at the beach.
That's only possible if you as the worker are capturing the efficiencies that the automation provides (i.e., you get RSUs, or you have an equity stake in the business).
Believe it or not, most SWEs, and white-collar workers in general, don't get these perks, especially outside the US, where most firms have made sure tech workers are paid "standard wages" even if they are "good".
Because white collar salaries are extremely high, which makes the services of white collar workers unavailable to many.
If you replace lawyers with AI, poor people will be able to take big companies to court and defend themselves against frivolous lawsuits, instead of giving in and settling. If you replace doctors, the cost of medicine will go down dramatically, and so will waiting times. If you replace financial advisors, everybody will have their money managed in an optimal way, making them richer and less likely to make bad financial decisions. If you replace creative workers, everybody will have access to the exact kind of music, books, movies and video games they want, instead of having to settle for what is available. If you automate away delivery and drivers (particularly with drones), the price of prepared food will fall dramatically.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
> They seem to be totally convinced that this will happen.
The two groups of people are not the same. I, for example, belong to the second but not the first. If you have used the current-gen LLM coding tools, you will realize they have gotten scary good.
Imagine a world where there is 10x as much wealth and 10x as many hard problems being solved. Suppose there's even a 5% chance of that happening. It's clearly worth doing.
> completely obsessed with replacing all developers
I'm paid about 16x what an electronics engineer makes. Salaries in IT are completely unrelated to the person's effort compared to other white-collar jobs. It would take some manager an entire career to reach what I made after 5 years. I may be 140 IQ, but I'm also a dumbass in social terms!
That's cool. It sounds like most of us aren't making what you make. I don't make 16x what someone paid minimum wage makes, much less an electrical engineer.
Especially outside the US where having a 140 IQ isn't really enough to have a high wage. Only social EQ and high capital does that in most of the world.
> Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
Because the only thing that gets the executive class hornier than new iPhone-tier products is getting to lay off tons of staff. It sends the stock price through the roof.
It follows from there that an iPhone-tier product that also lets them lay off tons of staff would be like fucking catnip to them.
The job description of a developer is to replace themselves by automating themselves so they can get promoted/find a new more relevant role. That's the point of compilers and new programming languages.
There's no such thing as taking people's jobs, nobody and nothing is going to take your job except for Jay Powell, and productivity improvements cause employment to increase not decrease.
The dream of many business owners is running their business with no products, no employees, and no customers, where they can just collect money. AI promises to fulfil this dream. AIs selling NFTs to other AIs paying in crypto is the final boss of capitalism.
Notice how it wasn't and isn't a big deal when it's not in your own back yard (i.e., destroying blue-collar professions); our chickens have just come home to roost. It's amazing the number of gullible, naive nerds out there that can't or won't see the forest for the trees. The number of ancap-lite libertarians precipitously drops when it's their own livelihood getting its shit kicked in.
Coming from a very blue-collar background and family, one of the more annoying parts about being in the tech community was listening to the tone-deafness of these sorts of folks.
It's difficult to have much empathy for the "learn to code" crowd who seemingly almost got a sense of joy out of watching those jobs and lifestyles get destroyed. Almost some form of childhood high school revenge fantasy style stuff - the nerd finally gets one up on the prom king. Otherwise I'm not sure where the vitriol came from. Way too many private conversations and overheard discussion in the office to make me think these were isolated opinions.
That said, it's not everyone in tech. Just a much larger percentage than I ever thought, which is depressing to think about.
It's certainly been interesting to watch some folks who a decade ago were all about "only skills matter, if you can be outcompeted by a robot you deserve to lose your job" make a 180 on the whole topic.
The rest of the world has not caught up to current LLM capabilities. If it all stopped tomorrow and we couldn't build anything more intelligent than what we have now: there would be years of work automating away toil across various industries.
My experience using LLM-powered tools (e.g., Copilot in agent mode) has been underwhelming. Like, shockingly so: failing to cd to the dir where a script is located and getting lost, disregarding my instructions to run ./tests.ps1 and running `dotnet test` instead, writing syntactically incorrect scripts and failing to correct them, and getting overwhelmed by verbose logs. Sometimes it even fails to understand the semantic meaning of my prompts.
Whereas my experience describing my problem and actually asking the AI is much, much smoother.
I'm not convinced the "LLM+scaffolding" paradigm will work all that well. Sanity degrades with context length, and even the models with huge context windows don't seem to use them all that effectively. RAG searches often give lackluster results. The models fundamentally seem to do poorly at using commands to accomplish tasks.
I think fundamental model advances are needed to make most things more than superficially automatable: better planning/goal-directed behavior, a more organic connection to RAG context, automatic gym synthesis, and RL-based fine-tuning (that holds up to distribution shift).
I think that will come, but I think if LLMs plateau here they won't have much more impact than Google Search did in the '90s.
As long as liability is clearly assigned, it doesn't have an economic impact. The ambiguity of liability is what creates negative economic impact. Once it's assigned initially through law, then it can be reassigned via contract in exchange for cash to ensure the most productive outcome.
e.g. if OpenAI is responsible for any damages caused by ChatGPT then the service shuts down until you waive liability and then it's back up. Similarly if companies are responsible for the chat bots they deploy then they can buy insurance or put up guard rails around the chat bot, or not use it.
I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress.
Just looking at what happened with chess, go, strategy games, protein folding etc, it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms etc - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
Your examples are not LLMs, though, and don't really behave like them at all. If we take the chess analogy and design an "LLM-like chess engine," it would behave like an average 1400-rated London spammer, not like Stockfish, because it would try to play like the average human in its database plays.
It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines. Do you remember "natural language processing"? What, ehm, happened to it?
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore; maybe they're not even questions anymore, and almost from one day to the next.
How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely it will improve the precision, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT has more use cases in actual text analysis.
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
> I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress
Many of us have been through previous hype-cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing bust (reinforcement learning). A few claims in your note like "it's only a matter of time before we have domain-specific ASI" are jarring - as you are "assuming the sale". LLMs are great as a tool for some usecases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.
It's very different from chess etc. If we could formalise and "solve" software engineering precisely, it would be really cool, and probably indeed just lift programming to a new level of abstraction.
I don't mind if software jobs move from writing software to verifying software either if it makes the whole process more efficient and the software becomes better as a result. Again, not what is happening here.
What is happening, at least in AI optimist CEO minds is "disruption". Drop the quality while cutting costs dramatically.
I mentioned algorithms, not software engineering, precisely for that reason.
But the next step is obviously increased formalism via formal methods, deterministic simulators etc, basically so that one could define an environment for a RL agent.
It's unlikely that LLMs are going to get us there, though. They have ingested all relevant data at this point, and the net effect might very well kill future sources of quality data.
How is, e.g., Stack Overflow going to stay alive if the next generation of programmers relies mainly on Copilot and vibe coding? And what will the LLMs scrape once it's gone?
People assume (rightly so) that the progress in AI should be self-evident. If the whole thing is really working that great, we should expect to see real advances in these fields. Protein-folding AI should lower the prices of drugs and create competitive new treatments at an unprecedented rate. Photo and video AI should be enabling film directors and game directors to release higher-quality content faster than ever before. Text AI should be spitting out Shakespeare-toppling opuses on a monthly basis.
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
The kaboom already happened on user-generated media platforms. YouTube, Facebook, tiktok, and so on are flooded with AI-generated videos, photos, sounds, and so on. The sheer volume of this low-quality slop is because AI lowered the barrier of entry for creating content. In this space the progress is not happening through pushing the upper bound of quality higher but by reducing the cost for minimal quality to down to near-0.
Another perspective for the kaboom is search and programming tasks for the average person.
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need that much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs
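E.g., the kind of stdlib-only scraper meant here, as a sketch (the URL is a placeholder):

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        # print every href found in an anchor tag
        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get("href")
            if tag == "a" and href:
                print(href)

    html = urlopen("https://example.com").read().decode("utf-8", "replace")
    LinkParser().feed(html)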
I live next to an abandoned building from the Spanish property boom. It's now occupied illegally. Hype's over yet the consequence is staring at me every day. I am sure it'll eventually be knocked down or repurposed yet it'd be better had the misallocation never happened.
Very cheap game consoles and VR headsets. Unironically that could really help world peace and QOL: less news and doomscrolling, and people would have an outlet for stress, anger, and boredom.
That's a subset of gamers, specifically gamers who actively want hypersexualized characters, still haven't figured out how to ignore irrelevant products, and still choose to be angered if some product doesn't fit their requirements (despite no shortage of games that would fit the bill).
On the other hand, drug discovery sounds like a candidate for really benefitting from AI. Maybe, to fuel AI model development, there has to be all the garbage that comes with AI.
Drugs to cure the diseases caused by your environment. It's not as if people are suddenly going to be making perfect decisions (e.g. never getting a sunburn, not eating meat, avoiding sugary foods).
>Drugs to cure the diseases caused by your environment
Humans have so far completely failed to develop any drug with minimal side effects that cures lifestyle diseases; it's magical thinking to believe AI can definitely do it.
Everything has side effects. In this case we have three pretty good interventions, Ozempic, FMT, and telling people to drink Coke Zero. The worst "side effect" is just that the first two are expensive.
Oh, in this case GP seems to be including sunscreen as a treatment for lifestyle diseases. Pretty sure those don't have side effects, but Americans don't get the good ones.
Is your objection just over the word "cure"? Because hypertension, depression, arthritis, asthma are a few in an absurdly long list of lifestyle diseases that use drugs as a primary method of treatment.
So all these things that skyrocketed in the span of 75 years are immutable facts of life, but magic drugs are somehow in the realm of possibilities?
What's easier, educate your people and feed them well to build a strong and healthy nation OR let them rot and shovel billions to pharma corps in the hope of finding a magic cure?
>skyrocketed in the span of 75 years are immutable facts of life
A number of them seem to have skyrocketed with quality of life and personal wealth. I suspect my ancestors were skinny not because they were educated on eating well but because they lacked the access to food we have in modern society, especially super-caloric food. I don't super want to go back to an ice-cream-scarce world. Things like meat consumption are linked to colon cancer, and most folks are unwilling to give that up or practice meat-light diets. People generally like smoking! Education campaigns got that down briefly, but it was generally not because people didn't want to smoke; it's because they didn't want cancer. Vaping is clearly popular nowadays. Alcohol, too! The WHO says there is no safe amount of alcohol consumption and attributes lots of cancer to even light drinking. I suspect people would enjoy being able to regularly have a glass of wine or beer and not have it cost them their life.
Bullshit. Heart disease and cancer (and a long tail of medical problems) crop up with age and kill ~everyone inside ~100 years. If you think that environment and exercise can fix this, show me the person who is 200 years old.
We would have to 100x medical research spending before it was clearly overdone.
Obviously healthy habits prolong your life. Nobody argued otherwise. My contention is specifically with the idea that they matter to the exclusion of science and pharma. Healthy habits clearly hit a wall: if they didn't, we would have health and fitness gurus living to 200 and beyond by virtue of having a routine that could actually defeat cancer and heart disease. The absence of these 200 year old gurus indicates that no, 80% of cancer (and heart disease, everyone always forgets heart disease) cannot be avoided with diet and exercise. Hence, work on getting through the wall is valuable.
You can 1000x the research if you want; a 50kg-overweight person who doesn't exercise, drinks alcohol, and lives next to a highway is statistically fucked no matter what. You'd need straight-up magic to undo the damage.
You're not going to fix lifestyle diseases with drugs, and lifestyle diseases are the leading cause of death
That would be $75 billion combined for 2025. A drop in the bucket.
--
> From 2013 to 2020, cloud infrastructure capex rose methodically—from $32 billion to $119 billion. That's significant, but manageable. Post-2020? The curve steepens. By 2024, we hit $285 billion. And in 2025 alone, the top 11 cloud providers are forecasted to deploy a staggering $392 billion—MORE than the entire previous two years combined.
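Rough sanity check on how much the curve steepens (figures from the quote; a sketch):

    # implied compound annual growth rates
    print((119 / 32) ** (1 / 7) - 1)    # 2013 -> 2020: ~20.6% per year
    print((392 / 119) ** (1 / 5) - 1)   # 2020 -> 2025: ~27% per year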
If you have a GPU which you've used for AI training, but that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue.
If you destroy the GPU, you can write it off as a loss, which reduces your taxable income.
It's possible you could come out ahead by selling everything off, but then you'd have to pay expensive people to manage the sell-off, logistics, etc. What a mess. Easier to just destroy everything and take the write-off.
Capital equipment is depreciated over time. By the time you're selling it off, it's pennies on the dollar and a small recoupment of cost. Paying 30% (or whatever) taxes on that small amount of income and having 70% remaining is still better than zero dollars and zero taxes.
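Toy numbers for that comparison (the 30% rate and salvage price are made up):

    # selling fully depreciated gear vs. trashing it
    salvage_price = 200.0        # hypothetical sale price per server
    book_value = 0.0             # fully depreciated
    tax = 0.30 * (salvage_price - book_value)   # assumed tax rate
    print(salvage_price - tax)   # $140 kept, vs. $0 from the dumpster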
It is a giant pain to sell off this gear if you are using in-house folks to do so. Usually not worth it, and why things end up trashed as you state. If I have a dozen 10 year old servers to get rid of - it's usually not worth anyone's time or energy to list them for $200 on ebay and figure out shipping logistics.
However, at scale the situation and numbers change - you can call in an equipment liquidator who can wheel out 500 racks full of gear at a time and you get paid for their disposal on top of it. Usually a win/win situation since you no longer have expensive people trying to figure out who to call to get rid of it, how to do data destruction properly, etc. This usually is a help to the bottom line in almost all cases I've seen, on top of it saving internal man-hours.
If you're in "failed startup being liquidated for asset value" territory, then the receiver/those in charge typically have a fiduciary duty to find the best reasonable outcome for the investors. It's rarely throwing gear with residual value in the trash. See: used Aeron chair market.
> that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue
Unless GPUs are like post-COVID used cars, you're going to sell them at a loss, which can be written off. Write-offs don't have to involve destroying the asset. I don't know where you got that idea.
I just wish we forced every new data center to be built with renewables or something. The marginal cost over a conventional data center can’t be that big compared to the total cost, and these companies can afford it. Maybe it can help advance the next generation of small modular nuclear reactors or something.
Many of these companies are very interested in small nuclear tech as a means to power these facilities. The major bottleneck now for most of them is finding sites where grid capacity exists to power them.
Talk to anyone in the space for more than 30 minutes and nuclear will come up.
I very much hope the hype cycle lasts long enough for some of this capital raining down from the sky to get these reactors deployed in the field, because those will be a lasting positive from this hype cycle - much like laying railroad infrastructure and fiber optic cables came from other hype cycles.
I've often said that the robber barons sucked, but at least they left massive amounts of physical infrastructure for the average person to benefit from. The hype cycles of late leave very little in comparison.
I'm concerned that the move-fast-and-break-things crowd will attempt to ply their trade on reactor tech and leave us with a bunch of failed projects in jurisdictions that won't hold them to account. The space is heavily regulated for a reason.
> I just wish we forced every new data center to be built with renewables or something.
Already the case in Europe. And in the US, most of the biggest players are doing this: Google, Microsoft, Meta, AWS. By now those four are the largest buyers of renewable power purchase agreements in the entire world. MS alone has invested something like $20B in renewable purchases.
But the issue is that installation of renewables in the US is not bottlenecked by lack of demand; it's bottlenecked by permitting, zoning issues, etc. The queue for power deployment right now is something like 100GW (i.e., production that is paid to be built but not yet built), which is around 10% of current total US power capacity. So it's not really clear to me whether buying more renewables makes their deployment faster through economies of scale, or whether the purchase orders just sit in a queue for years and years.
One notable exception is xAI/Grok, which has one of the biggest clusters, is powering it 100% with gas, and afaik did not offset that by buying the equivalent in renewables. Having built the cluster in what was a washing machine factory, which does not have adequate power supply or cooling tech, they have been rolling in 35 mobile gas turbines (large trailer trucks that you connect to gas pipes) and 50+ refrigeration trucks. IMHO, it should be illegal to build such an energy-consuming system with such poor efficiency, but well.
At least in the US, it doesn't need to be forced. In 2024, 94% of new US power plant capacity was renewables or battery storage; we're on track for 93% in 2025, and based on announced plans the next few years will see very similar numbers. What few fossil fuel power plants are being built are exclusively natural gas, and a decent number of them are conversions of former coal plants to natural gas. Planned additional natural gas capacity is lower than at any point since the shale revolution started. Renewables have won.
Data centers want firm power to avoid underutilizing expensive assets. Solar and wind are intermittent. New gas has a years-long lead time. 12+ hours of batteries (for solar in winter) are, in fact, not free or de minimis.
These companies would all LOVE to use small modular nuclear reactors in their datacenters... if the NRC ever got around to approving a license for one.
People like the idea of economical SMRs, but it seems unlikely they will ever be economical. There's no real reason not to just build LMRs instead (e.g., AP1000).
Why renewables? These guys don't have an energy problem. The states have made sure to give them energy credits and other assurances on that front. The problem they do have, though, is siphoning water from everyone around them.
Microsoft actually has a design for mini datacenters that stay cool in the ocean and collect tidal energy. But it's way more fun to have states trying to court you into building datacenters because it'll bring some jobs.
I don't see how you can make the argument that a large portion of funds used for AI capex were diverted from other investments (and starving other industries), while simultaneously applying the economic multiplier to the whole sum when going from the investments to the GDP impact.
Surely you only get one of the two, because for diverted investments the multiplier applies equally on both sides of the equation.
He is making two arguments. One is that AI capex is starving other industries. And the other that AI capex is causing major GDP growth, attributed to both the direct investments themselves, as well as the multiplier effects.
One of those could be true. But I assert that both cannot be true at the same time. If these direct investments were going to happen elsewhere if they weren't happening for AI infrastructure then that counterfactual spending would show up in the GDP instead, as would the multiplier effect from that spending.
The main argument builds on the assumption that the economy is a zero-sum game, when it clearly is not. Just because we invest these resources in AI does not mean we could mobilize the same capital for other pursuits.
Precisely. AI is being built out today because the value returned is expected to be massive. I would argue this value will be far bigger than railroads ever could be.
Overspending will happen, for sure, in certain geographies or for specialty hardware, maybe even capacity will outpace demand for a while, but I don’t think the author makes a good case that we are there yet.
> "... builds on the assumption that the economy is a zero sum game when it clearly is not"
Be cautious making assessments about compounding effects; while compounding remains the critical attribute, the compounding nature of a system is not always obvious. For example, the author is correct that financing for AI CapEx is starving other fields of investment, at least in the short term.
In addition, the massive investment may be overbuilt, but it will likely end up being useful long-term. This is just as the initial buildout of the internet to power pets.com was a bust at the time, but then led to Amazon, YouTube, Zoom, etc., and in fact let us weather Covid better than you'd have expected.
The modern internet came from folks getting connected over-exuberantly based on near-term returns (with a lot of investors losing their shirts), but then humans figured out what the actual best use of the technology was.
That is why I stated that transistor improvements (what was previously known as Moore's law) will continue for at least another 10 years. The smartphone carried us from 2008 to 2023. The money being spent today is already invested in the next 2-3 years of semiconductor manufacturing: that is 2nm or A20 this year, and A18/A14 in two years' time. There is enough momentum to reach A10 and A8 by 2030-2032. Even if things slow down by then, there is enough to run until 2035, unless something catastrophic like WW3 or a market collapse happens.
That said, even if we somehow reach A5 in 2035, that is only about a 12x density increase. Including system packaging, chiplet, and interconnect advancements might push this to 30-40x. That is still a far cry from the 1000-10000x compute demand from a lot of AI companies, and that is assuming memory bandwidth could scale with it.
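For concreteness, here's a back-of-envelope version of that scaling math; the per-node density gain and the packaging multiplier are assumptions chosen to land near the figures above, not sourced numbers:

    # Back-of-envelope for the node-scaling argument above; all inputs assumed.
    node_steps = 5            # e.g. N2 -> A18 -> A14 -> A10 -> A8/A5 (assumed)
    density_per_step = 1.6    # assumed average density gain per full node step

    transistor_gain = density_per_step ** node_steps       # ~10x, near the 12x above
    packaging_multiplier = 3.0                             # assumed chiplet/packaging boost
    system_gain = transistor_gain * packaging_multiplier   # ~30x

    demand = 1000             # low end of the 1000-10000x demand claim
    print(f"density: ~{transistor_gain:.0f}x, system: ~{system_gain:.0f}x")
    print(f"gap vs demand: ~{demand / system_gain:.0f}x")  # ~30x still missing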
The counterintuitive part of automation is that it removes parts of the economy rather than making the economy bigger. You end up with more goods but the value people assign to them goes down as they don't provide additional social advantage.
For example at one point nails were 0.5% of the economy and today owning a nail factory is a low margin business that has no social status.
Similarly, frontend software dev will get automated, and both its share of the economy and the social status associated with it will shrink.
Since social status is a zero-sum game, people shift spending to other areas that can still confer status.
> The counterintuitive part of automation is that it removes parts of the economy rather than making the economy bigger
So you believe the economy is zero-sum? I think new capabilities lead to demand expansion; they mobilize latent demand. There is no limit to desires; not even AI automation could outrun them.
I'm waiting for the shoe to drop when someone comes out with an FPGA optimized for reconfigurable computing and lowers the cost of llm compute by 90% or better.
This is where I do wish we had more people working on the theoretical CS side of things in this space.
Once you recognize that all ML techniques, including LLMs, are fundamentally compression techniques, you should be able to come up with some estimates of the minimum feasible size of an LLM based on: the information that can be encoded in a given parameter count, the relationship between loss of information and model performance, and the information contained in the original data set.
I simultaneously believe LLMs are bigger than they need to be, but suspect they need to be larger than most people think, given that you are trying to store a fantastically large amount of information. Even with lossy compression (which, ironically, is what makes LLMs "generalize"), we're still talking about an enormous corpus of data to represent.
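As a toy version of that estimate, just to show the shape of the argument (every input below is a guess, not a measurement):

    # Toy bound on minimum LLM size from the compression view above.
    corpus_tokens = 15e12     # assumed training tokens
    bits_per_token = 1.0      # assumed irreducible bits per token after compression
    bits_per_param = 2.0      # assumed usable bits stored per parameter

    corpus_bits = corpus_tokens * bits_per_token
    min_params = corpus_bits / bits_per_param
    print(f"lossless-ish floor: ~{min_params / 1e12:.1f}T parameters")

    # Lossy compression shrinks the floor roughly in proportion to
    # the fraction of information actually retained.
    retained = 0.05           # assumed retained fraction
    print(f"lossy floor: ~{retained * min_params / 1e9:.0f}B parameters")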
Raw GEMM computation was never the real bottleneck. Feeding the matmuls, i.e., memory bandwidth, is where it's at, especially on the newer GPUs.
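To put rough numbers on that, a roofline-style sketch; the accelerator figures are assumed round numbers, not vendor specs:

    # Why single-stream LLM decoding is bandwidth-bound, not compute-bound.
    peak_flops = 1000e12   # assumed ~1000 TFLOP/s of matmul throughput
    peak_bw = 3e12         # assumed ~3 TB/s of HBM bandwidth
    ridge = peak_flops / peak_bw         # ~333 FLOPs/byte needed to stay busy

    # Batch-1 decode reads every weight once per token: ~2 FLOPs per parameter.
    bytes_per_param = 2    # fp16/bf16 weights
    intensity = 2 / bytes_per_param      # ~1 FLOP per byte moved

    print(f"ridge: ~{ridge:.0f} FLOP/byte, decode: ~{intensity:.0f} FLOP/byte")
    print(f"compute utilization at batch 1: ~{intensity / ridge:.1%}")  # ~0.3%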
I wouldn't wait. FPGAs weren't designed to serve this model architecture. Yes, they are very power efficient, but the layout/P+R overhead, the memory requirements (very few on-the-market FPGAs have HBM), slower clock speeds, and just an unpleasant developer experience make it a hard sell.
We're already seeing it with DeepSeek's and other optimizations. Like the law of induced demand with highways (the wider the highway, the more traffic it attracts), dropping costs by 90% would open even more use cases.
For white-collar job replacement, we can always evolve up the knowledge/skills/value chain. It is the blue-collar jobs where the bloodbath is coming, with all the robotics.
For a first approximation, yes? Their earnings have been growing steadily for years ($4-5B growth in each of the last 8 quarters), with no seasonal effects.
Q1*4 is highly likely to be a better estimate of their eventual 2025 calendar revenue than their current trailing 12 months revenue would be. Probably still a bit conservative, but easier to justify than projecting that growth continues at exactly the same pace.
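A minimal sketch of the two estimates being compared, with hypothetical quarterly figures rather than actual reported numbers:

    # Trailing-twelve-months vs quarter-annualized run rate, in $B.
    quarters = [18, 22, 26, 30, 35]   # hypothetical revenue, oldest first

    trailing_12m = sum(quarters[-4:])  # last four reported quarters
    run_rate = quarters[-1] * 4        # latest quarter annualized

    print(f"trailing twelve months: ${trailing_12m}B")  # $113B
    print(f"Q*4 run rate:           ${run_rate}B")      # $140B
    # With steady growth, the run rate beats the trailing figure but still
    # undershoots the eventual calendar-year total, as argued above.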
It seems like it's specifically based on Nvidia's sales, which I assume are almost entirely for deep learning. "Regular" data centers don't need many GPUs.
> Because everyone and their mother are labeling regular data centers as AI data centers.
Companies (generally) build things with an expectation of a return on their investment: what "regular" data centre usage would necessitate this kind of build-out?
To sell more Postgres or WordPress VMs/instances? Is that being used to justify the spending in shareholder conference calls and regulatory filings?
Substantially this is Meta, Google, etc. These are advertising companies; they largely build ML models that do ad targeting and the like. Meta, it should be noted, doesn't sell LLMs at all. They sell ads. And though they are doing a lot of AI with this infrastructure, any datacenters they build exist primarily to sell ads.
The premise of AI and certainly what a large subset of executives and investors believe is that AI will provide a significant productivity increase to a significant part of the work force.
If 30% of the work is done 10% faster, that leaves roughly a 3% gain for other economic activities. If that is true, the CapEx is justified.
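Worked out, with the caveat that "10% faster" frees slightly less time than the simple 30% x 10% product suggests:

    # The productivity arithmetic above.
    share = 0.30     # fraction of all work affected
    speedup = 1.10   # affected work done 10% faster

    approx_gain = share * (speedup - 1)     # the simple 30% x 10% = 3%
    exact_gain = share * (1 - 1 / speedup)  # time actually freed: ~2.7%

    print(f"approximate gain: {approx_gain:.1%}")
    print(f"exact time freed: {exact_gain:.1%}")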
Roughly 5% of US energy use goes to AI datacenters, and current usage will double to 70–90 TWh/year by 2026-2027. For software-heavy tech businesses it makes sense to host your own AI data center so you can train on your codebase and have developers build faster. Not sure this benefits humanity that much, though.
Will it? Google originally started trying to sell search servers you'd install on site. And on prem (even on your cloud) is not a model any of the big boys will want to follow.
We've yet to see whether it's going to be a winner-takes-all market, or whether a Linux equivalent will pop up and destroy all the investment from the big players because programmers are too tight to pay for software.
Is Argentina a net positive for our society? There's the grilled beef, but every country has some kind of barbecue. There's a few soccer players I guess? Is Argentina worth the energy expenditure though?
I don't know... 1.2% of GDP just doesn't seem that extreme to me. Certainly nowhere near "eating the economy" level compared to other transformative technologies or programs like:
- Apollo program: 4%
- Railroads: 6% (mentioned by the author)
- Covid stimulus: 27%
- WW2 defense: 40%
Yeah that's my first reaction to. 1.2% doesn't sound much. It's just people making headlines out of thin air. If it lists the water and energy consumption I might be more concerned.
Slightly off-topic, but ~9% of GDP is generated by "financial services" in the US. Personally I think it's a more alarming data point.
Nearly 20% for healthcare gives me some reservations, considering how little we get for our money in America.
It used to be the other way round... 9 for health and 20 for finance, as recently as 2020.
More concerning to me is that these visualizations are not so trivial to find. Here's one:
https://www.bea.gov/system/files/gdp1q25-3rd-chart-03_0.png
Health care is growing but not as much as real estate
We spend about twice as much on average as other developed countries and get far far less in return.
I have read, but not verified the figures myself that if the United States had Australia's healthcare system - universal, government funded healthcare (excluding dental) then all US citizens would have near free healthcare, would not need costly insurance, and the government would spend a similar amount to what it does now
Yes, that stat is spirit crushing.
Ultimately, "financial services" is what's downstream of insurance, banking (deposits / money transfers), loans and retirement savings. Also efficient capital allocation and the provision of government services to some extend. Those are things we want, and we want those things to work well.
Yes, but it is overhead, and higher overhead doesn't suggest efficiency.
Why is 9% for financial services bad? This should cover fees/interest from everything like loans, transactions, mortgages, advice, investing, etc. It doesn't seem that surprising to me that the systems that are the backbone for all the money operations that power the rest of the economy make up about 10%.
I get your point, but the flip side is that private companies like visa and Mastercard get to skip 2.5%+ off the entire economy. Visa has more than 50% profit margin, and it’s not like these companies are innovating with all that extra cash either. It’s just money from my pocket to some rich investor somewhere
Visa and Mastercard aren’t skimming 2.5% off the economy; the majority of the interchange fees go to banks (which Visa is not; their actual product is VisaNet which provides payment infrastructure, broadly.)
Trivially verifiable by Visa’s revenue being $35B, which is not even close to 1% of just US GDP (about $30T).
Interests you pay is not necessarily all financial services revenue. Only the net interests the industry receives count as revenue. There's a lot of netting going on in finance.
9% is very inefficient.
"Inefficient" implies the money is being burned or something. It's flowing into the pockets of people who work in the financial services industry, who then spend it on other things. The economy isn't zero-sum.
And the industry itself greases the wheels of other industries. In other words without financial services like lending and payment processing there would be less spending and investment overall, so other industries would shrink along with it.
You’re falling for the broken window fallacy, it’s inefficient as demonstrated by automation reducing the percentage of the economy devoted to financial services without any negative effects.
Banking used to really suck. Walk into an old bank building and it looks empty with spaces for a dozen tellers never actually used, this is a good thing as nobody actually wants to stand in line at a bank. People have largely stopped using cash because swiping a card is just more pleasant.
Meanwhile payment networks (Visa, Mastercard) have over a 50% profit margin, that’s a huge loss for the US economy. Financial services dropping to 1% of the overall economy would represent a vast improvement over the current system.
That’s a lot of money for “greasing”. Nearly 10% on any kind of overhead is generally considered a lot.
Central planning is drastically more efficient, for example. It’s why large companies use it internally.
Is there any evidence that central planning on a much larger scale is drastically more efficient? We're talking about a whole country, after all. I take your point that companies themselves are usually centrally planned internally, but centrally planned economies haven't fared so well.
Some years back, I was talking to an acquaintance through my daughter's kindergarten and complaining about this point, and he said it was because financial services was where all the innovation was happening (he was an investment consultant of some sort).
He was convicted of fraud a few years later.
https://youtu.be/HA1YKg_OLBw
Financial services makes the unrealistic consumption of rich countries possible. That’s worth 9%.
Nice clip, yet it does not make clear why 9% is the right share of GDP. Why not 7%?
Why not 50%?
Wait, so we could end unrealistic consumption in rich countries and get 9% of our economy back to doing something useful? Sounds like a win-win.
Yes, and you could go back to agrarian life too! Win-win-win!
The finance industry's ability to teleport value across time and space is a massive boon for quality of life across the world.
I can't help but wonder if there's a middle ground between people not being able to obtain credit to pursue new enterprises, and entire productive enterprises being swallowed up in the pursuit of short-term rent seeking.
Could such a middle ground exist? Sure. Could someone design a system where that middle ground was a natural equilibrium? Unsure. I don't see how you incentivise the goldilocks behaviour (but I am not the smartest bear so maybe someone else can)
There's a good book on this topic by a Scottish philosopher: An Inquiry into the Nature and Cauſes of the Wealth of Nations by Adam Smith, LL. D. and F. R. S, formerly Profeſſor of Moral Philoſophy in the Univerſity of Glasgow.
The economy is not a fixed pie that you can just take slices from one sector and give them to another. Financial services provide liquidity that supports every other sector; getting rid of them would cause contraction in every other sector.
That's true to a degree. But giving them free rein incentivizes the kind of behavior that gave us the 2008 financial crisis. So the commenter can be forgiven for wanting to limit that sector.
They don't have "free rein". And who's to say whether 9% is better or worse for society as a whole.
The comment is an uninformed take.
Yes it's like magic! Let's "JUST STOP OIL" while we're at it. Oh and end world hunger too.
How do they do that?
Money multiplier, resource allocation.
Odd, they just seem mostly parasitic to me.
As a % of GDP it doesn’t seem very large, but that’s because our GDP is so massive. This is still an entire Norway’s worth of GDP.
Like 1.2% isn’t a big percentage, but neither is 3.4% - our total military expenditures this year.
You’re right that it’s large in an absolute sense, but any sector of the US economy is going to be large in an absolute sense. It’s not a very meaningful statement. Using percentages allows comparison to other items, which for some purposes gives a more useful sense of size. For instance, based on your numbers, AI expenditure is about 1/3 the total military expenditure. I tend to agree that this is less than I expected, and generally makes me feel a bit better about the (imo excessive) hype.
It's small as a part of the economy. It's huge as a completely new thing. The US economy in total has been growing something like an average of 2.5% over recent years. Something that is all-the-growth-of-the-last-year-in-one-place is pretty significant.
AI didn’t happen in one year. Netflix’s famous recommender system challenge kicked off in _2006_! And “Big Data” was all the rage ten years ago. The category “AI” includes these things.
We both know the NN boom of the 2010s pales in comparison to the post GPT3.5 era of LLMs.
The entire population of Norway fits in Queens and Brooklyn. If everyone there decided to whittle spoons, we'd be mildly concerned about just what got in their water, but it wouldn't be an existential crisis for the rest of us.
I will never understand people who use tiny European countries as meaningful comparisons to continent sized ones.
It helps people understand scale, since there is only one other kinda similar economic machine, and it's China. The EU is too loosely coordinated to really compare.
Queens and Brooklyn are among the most densely populated areas on the planet. I will never understand people who use massively outlier-sized cities as meaningful comparisons to nations.
You're looking at dots on a graph when you should be looking at lines and curves (and slopes of curves). The author makes this argument:
- Movement of capital from other fields to "AI".
- Duration of asset value (e.g., AI in months/years vs railroads in decades/centuries).
- "Without AI datacenter investment, Q1 GDP contraction could have been closer to –2.1%".
To state it a bit more simply: the rate at which this spending has gone from about 0% to 1.2% is extremely fast, which is the point the author is trying to make.
The Q1 GDP comment is stunning because what it says is that if the same Q1 had happened just two years ago there’s a very good chance we’d be looking at a modestly sized recession. Now of course things aren’t zero-sum and it’s impossible to really make a useful claim like that but it’s still striking.
We are only 2 years in! 1.2% of GDP is enormous! The fact that we can even make any of these comparisons is stunning.
1.2% is larger than either agriculture or mining (including O&G).
It's hard for me to tell which is the bigger misspending of money, LLMs or Apollo... At least I have direct access to LLMs. Not sure I need direct access to moon rocks though.
It seems quite plausible that if we hadn't done the Apollo program, we'd be about 10 to 20 years behind in semiconductors right now (not to mention other technologies).
When you say "we" I assume you are from Taiwan? Good for you people, but it isn't much of a win for US industrial policy when it pushes Taiwan to the ascendant position and seems to be locking in Asian dominance of tech manufacturing.
No, "we" as in humanity. Apollo funding gave the development of integrated circuits a boost. Sure, we would've developed integrated circuits eventually anyway but it would've taken longer to get there.
One is a footnote in history on the way to decent ML, and the other is the literal moon in the sky. Your comment must be dripping in sarcasm.
The crux of the article is asking whether such a large investment is justified; downplaying the article saying it's only X% of the GDP compared to Y doesn't address the issue.
Total global military expenditure as a percent of global GDP is about 2.5%.
1/100th of the US economy sounds massive to me.
…for just the cap ex part
the crash is going to be glorious
Those were all mega events though.
> WW2 defense: 40%
Do you think WW3 defense should be on the charts yet?
Given that defence spending is hovering around 2.0-2.5% of GDP, I don't think we are there yet.
https://data.worldbank.org/indicator/MS.MIL.XPND.GD.ZS
But it's not about "wow, how much!", it's about "wow, it even registers!"
Covid stimulus isn't directly comparable to the other three programs you listed. It was more like tax legislation than an industry.
All your other examples were over a period ranging from 3 years to more than a decade.
> Apollo program: 4%
More than a decade long. The technology and industry here were broadly shared; they did things like hijacking bra manufacturers to make space suits.
> Railroads: 6% (mentioned by the author)
We're still using this investment today.
> Covid stimulus: 27%
The virus that was killing us then fizzled, so it's probably not the best example... Only arguments will ensue if I even attempt to point things out in this one.
> WW2 defense: 40%
I mean, Russia made its last Lend-Lease payment in 2006. It led to American dominance of the globe. It looks like an investment that paid itself off.
How much of the hardware spend on AI is going to be usable in 5years?
There are some deep fundamental questions one should be asking if they pay attention to the hardware space. Who is going to solve the power density problem? Does their solution mean we're moving to exotic cooling (hint: yes)? Have we hit another Moore's-law-style wall (IPC is flat and we don't have a lot of growth left in clock, back to that pesky power and cooling problem)? If a lot of it is usable in 5 years that's great, but then the industry isn't going to get any help from the hardware side, and that's a bad omen for "scaling".
Meanwhile capex does not include power, data, consumables, or people. It may include training, but we know that can't be amortized (how long does a trained system last before you need another, or before you need a continuation/update?).
Everyone is going after AI under the assumption that they can market capture, or build some sort of moat, or... The problem is that no one has come up with the killer app where the tech will pay for itself. And many in the industry are smart enough not to try to build their product on someone else's platform (cause rug pulls are a thing).
"AI" could go the way of 3d tv's, VR, metaverse, where the hype never meets up with hope. That doesn't mean we wont get a bunch of great tooling out of it. It is going to need less academics and more engineering (and for that hardware costs have to drop...)
The thing you are missing is opportunity cost.
All these investments are chump change for big tech.
> The thing you are missing is opportunity cost.
So presumably the people spending that money have considered the opportunity cost and reckoned it to be worth it.
Unless you propose some alternative that would've been better, you cannot say that the spending was bad because of "opportunity cost".
- The birth of the space age, and more realistically the birth of the ICBM and satellite age. Both key to national security, and in the context of a cold war.
- 40% of long-distance ton-miles travel by rail in the US. This represents a VAST part of the economic activity within the country.
- A literal plague, and the cessation of much economic activity, with the goal of avoiding a total collapse.
- ...Come on.
So we're comparing these earth-shaking changes and reactions to crisis with "AI"? Other than the people selling pickaxes and hookers to the prospectors, who is getting rich here exactly? What essential economic activity is AI crucial to? What war is it fighting? It mostly seems to be a toy that costs FAR more than it could ever hope to make, subsidized by some obscenely wealthy speculators, executives fantasizing about savings that don't materialize, and a product looking for a purpose commensurate to the resources it eats.
I agree we don’t have an actual ROI on AI yet. There is a ton of activity but the progress for society is speculation at this point. I don’t think we will have an idea on societal progression for at least 10 years and maybe 30.
It continually surprises me when people are in denial like this.
Literally every profession around me is radically changing due to AI. Legal, tech, marketing etc have adopted AI faster than any technology I have ever witnessed.
I’m gobsmacked you’re in denial.
Interestingly I just talked to several lawyers who were annoyed at how many mistakes were being made and how much time was being wasted due to use of LLMs. I suppose that still qualifies as radically changing — you didn’t specify for the better.
You could be right, but I'm not the one here whose paycheck depends on AI being worth more than the cost.
The adoption curve is self-evident to anyone not living under a rock. I suspect you're more of a grumbler than someone making cogent arguments.
I don't live under a rock and I'm also not seeing it.
I mean, companies are trying to force it onto us, but it's not ready for any real work, so the "adoption" is artificial.
Within 10 minutes earlier today, I took 1.5 years of raw financial trading data and generated performance stats, graphed out return distributions, ran correlation analysis, and to top it off created monte-carlo shock tests using the base data as an input for the model and ran hundreds of simulations with corresponding charts.
Each of the 15 charts would have been a page of boilerplate + Python, and frankly there was a huge amount of interdisciplinary work that went into the hundreds of thought steps in the deep reasoning model. It would have taken days to fill in the gaps and finish the analysis. The new crop of deep reasoning models that can do iteration is powerful.
The gap between previous "scratch work" of poking around a spreadsheet, and getting pages of advanced data analytics tabula rasa, is a gap so large I almost don't have words for it. It often seems larger than the gap between pen and paper and a computer.
And then later, off of work, I wanted to show real average post-inflation returns for housing areas that gentrify and compare it with non-gentrifying areas. Within a minute all of the hard data was pulled in and summed up. It then codes up a graph for the typical "shape of gentrification", which I didn't even need to clarify to get a good answer. Again, this is as large a jump as moving from an encyclopedia to an internet search engine.
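For the curious, a minimal sketch of that kind of bootstrap shock test using numpy only; the synthetic returns stand in for the real trading data, and the -10% shock size and other parameters are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=378)   # ~1.5 years of daily returns

    n_sims, horizon = 500, 252
    # Resample historical days, then inject a one-off shock on a random day.
    paths = rng.choice(returns, size=(n_sims, horizon), replace=True)
    shock_day = rng.integers(0, horizon, size=n_sims)
    paths[np.arange(n_sims), shock_day] -= 0.10    # assumed -10% shock

    equity = np.cumprod(1 + paths, axis=1)
    peak = np.maximum.accumulate(equity, axis=1)
    max_drawdown = (equity / peak - 1).min(axis=1)

    print(f"median terminal value: {np.median(equity[:, -1]):.2f}")
    print(f"5th-percentile max drawdown: {np.percentile(max_drawdown, 5):.1%}")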
I know it's used all over finance though. At Jane Street (upper echelon proprietary trading) they have it baked into their code development in multiple layers. In actual useful ways, not "auto completion" like mass market tools. Well it is integrated into the editor and can generate code, but there is also AI that screens all of the code that is submitted, and an AI "director" tracks all of the code changes from all of the developers, so if a program starts failing an edge case that wasn't apparent earlier, the director will be able to reverse engineer all of the code commits, find out where the dev went wrong, and explain it.
Then that data generated from all of the engineers and AI agents is fed back into in-house AI model training, which then feeds back into improvements in the systems above.
All of the dismissiveness reminds me of the early days of the internet. On that note, this suite of technologies seems large. Somewhere in-between the introduction of the digital office suite (word/excel/etc) and perhaps the Internet itself. In some respects, when it comes to the iterative nature of it all (which often degrades to noise if mindlessly fed back into itself, but in time will be honed to, say, test thousands of changes to an engineering Digital Twin) it seems like something that may be more powerful than both.
The adoption rate seems driven by a race to bottom, a desire to control "the next big thing" before someone else does, executive reflex, and some real use cases.
But then we saw the same thing with Crypto, tons of money poured into that, the Metaverse was going to be the next big thing! People who didn't see and accept that must not understand the obvious appeal...
> What essential economic activity is AI crucial to?
The continued devaluing of skilled labor and making smaller pools of workers able to produce at higher levels, if not their automation entirely.
And yeah AI generated code blows. It's verbose and inefficient. So what? The state of mainstream platform web development has been a complete shit show since roughly 2010. Websites for a decade plus just... don't load sometimes. Links don't load right, you get in a never-ending spinning loading wheel, stuff just doesn't render or it crashes the browser tab entirely. That's been status quo for Facebook, YouTube, Instagram, fuck knows how many desktop apps which are just web wrappers around websites, for just.. like I said, over a decade at this point. Nobody even bats an eye.
I don't see how ChatGPT generating all the code is going to make anything substantively worse than hundreds of junior devs educated at StackOverflow university with zero oversight already have.
Stack Overflow university is quite good, honestly. The bigger problem is documentation written by Twitter and Facebook folk: their "solutions" don't even work within those companies, and certainly don't work when other people adopt them. On Stack Overflow, people occasionally point out the bad practices that others try to promote.
Absolutely. Add robotics over vocational training too. I guess religious donations could be rerouted to encoding human consciousness.
Wow, then the cheapest servers which run the poorest uploaded consciousnesses (correct plural ?), would also likely be the worst cooled.
Making it a cold day in hell if the incentives of humanity ever change.
Please tell me this is satire. Surely you can’t honestly mean this, but these days with AI Poe’s law is way too strong.
LLMs don’t educate. They prevent learning in important ways by solving problems for us.
You think the world does a good job educating people? I’m very serious about making that wholesale change. Many are functionally illiterate and never reach sophisticated academic levels. Many don’t even have parents at home that are qualified to supplement them. Many have teachers who absolutely will never be able to extract their potential. Many are in environments where our current education paradigms will never be able to overcome. LLMs will save a generation of kids.
Railroads led to a distribution of capital into society and to a long-term increase in wealth for many.
AI leads to capital concentration in the hands of those that already have money, and might lead to a long-term reduction of wealth for the middle class.
Less purchasing power in the population is usually not good for economic development, so I have my doubts with respect to a boom.
> Railroads led to a distribution of capital into society
Did it? I recall that railroads were monopolies (the Vanderbilts). The gov't had to pass an act to break them up, because there was price collusion and farmers were forced to pay higher prices to transport their foodstuffs.
> AI leads to capital concentration in the hands of those that already have money
That is true for a lot of other capital-intensive ventures. Why pick AI specifically?
And AI is less monopolistic; at least it's not a natural monopoly. There is competition, and there are alternatives.
Railroads are for moving freight, and you don't move freight if you're not selling goods. Yes, there were trust issues, but fundamentally a railroad requires a surrounding market.
AI does not, and seems to suffer the "resource curse": https://www.lesswrong.com/posts/Mak2kZuTq8Hpnqyzb/the-intell...
I just hope when (if) the hype is over, we can repurpose the capacities for something useful (e.g. drug discovery etc.)
During the 1990s dotcom boom we massively overbuilt fiber networks. It was indiscriminate and most of the fiber was never lit.
After the dotcom crash, much of this infrastructure became distressed assets that could be picked up for peanuts. This fueled a large number of new startups in the aftermath that built business models figuring out how to effectively leverage all of this dead fiber when you don't have to pay the huge capital costs of building it out yourself. At the time, you could essentially build a nationwide fiber network for a few million dollars if you were clever, and people did.
These new data centers will find a use, even if it ends up being by some startup who picks it up for nothing after a crash. This has been a pattern in US tech for a long time. The carcass of the previous boom's whale becomes cheap fuel for the next generation of companies.
I can barely get 50Mbps up/down, and I only have Xfinity in this area. No fiber; I would pay for it, but here we are. 2025 in good ol' USA. In an urban area too.
wow
When I moved a few years ago from Eastern Europe (where I had 1Gb/s to my apartment for years) to the UK, I was surprised that "the best" internet connection I could get was an approximately 40Mbit/s phone line. But it's a small town, and in the past few years even we have gotten fiber, up to 2Gb/s now.
I'm surprised the US still has the issues you mentioned. Have you considered Starlink (fuck Musk, but the product is decent) or alternatives?
The US issues have some key driving factors:
One is, of course, the size of the country, but that's hardly an "excuse." It does contribute though.
The other big reason is lack of competition in the ISP space, and this is compounded by a distinctly American captured system where the owners/operators of the "public" utility poles shut out new entrants and have no incentive to improve the situation.
Meanwhile the nationwide regulatory agencies have been stripped down and courts have de-toothed them, reducing likelihood of top-down reform, and often these sorts of problems inevitably end up running into the local and state government vs national government split that is far more severe in the US.
So it's one of those problems that is surprising to some degree, but when you read about things like public utility telephone poles captured by corporate interests, it's also distinctly ridiculous and American, and not surprising at all.
What city, house or apartment? Apartments won’t pull fiber for obvious reasons
Which reasons? I mean for apartments
Telcos lay fiber for free up to the basement electrical closet; from the closet to each unit is on the landlord, and from each unit to the actual wall outlet needs arrangements with tenants. Sometimes ISPs subsidize that cost, but lots of arrangements still need to be made.
For one entire rented or owned house, it's just a call and a drill away.
landlords don't want to spend money, and it's not like you're going to live somewhere else
You are lucky to have 50Mbps up. I have rented in 2-3 big cities and the upload speed was at most 20Mbps.
> previous boom's whale becomes cheap fuel for the next generation of companies.
How often are they the same players in different costumes?
Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
To me, this all sounds like an “end-of-the-world” nihilistic wet dream, and I don’t buy the hype.
Is it’s just me?
I'm afraid that this might sound flippant, but the answer to your question comes through another question - why were early 19th century industrialists obsessed with replacing textile workers? Replacing workers with machines is not a new phenomenon and we have gone through countless waves of social upheaval as a result of it. The debate we're currently having about AI has been rehearsed many, many times and there are few truly novel points being made.
If you want to understand our current moment, I would urge you to study that history.
https://en.wikipedia.org/wiki/Swing_Riots
We could produce more cloth in safer working conditions.
there was nothing safe about those early machines
the AI parallel is quite apt actually
Programmers are going to be replaced by AI in the same way accountants got replaced by VisiCalc, engineers by CAD, and mathematicians by calculators and software like Mathematica.
You mean the part where you only need 5 people doing the job instead of 50?
And yet there are more engineers and accountants employed than ever.
And yet not keeping pace with the overall population increase of America. There are also more companies than ever before.
Same reason so many people got excited in the early Internet days of how much work and effort could be saved by interconnecting everyone. So many jobs lost to history due to such an invention. Stock trading floors no longer exist, call centers drastically minimized, retail shopping completely changing, etc.
I had the same thought you did back then. If I could build a company with 3 people pulling a couple million of revenue per year, what did that mean to society when the average before that was maybe a couple dozen folks?
Technology concentrates gains to those that can deploy it - either through knowledge, skill, or pure brute force deployment of capital.
Because developer and other white-collar job salaries are the top expense of most companies.
Oh boy I can’t wait until we get rid of the highest paying jobs!
Your response doesn’t explain why so many people are hyped about it, just why CEOs are.
There's a lot of non-engineering people who are very happy to see someone else get unemployed by automation for a change. The people who formerly were automating others out of a job are getting a taste of their own medicine.
I am not an engineer and I expect my white collar job to be automated.
The reason to be excited economically is that if it happens, it will be massively deflationary. Pretending CEOs are just going to pocket the money is economically stupid.
Being able to use a super intelligence has been a long time dream too.
What is depressing is the amount of tech workers who have no interest in technological advancement.
I'm not sure exactly what you mean by deflationary, but in general deflation in an economy is a very bad thing. The most significant period of economic deflation in the US was 1930-1933, i.e., the Great Depression, and the most recent period was the Great Recession.
And since when do business executives NOT pocket the money? Pretty much the only exception is when they reinvest the savings into the business for more growth, but that reinvestment and growth is usually only something the rest of us care about if it involves hiring.
Of course, that would cause a tremendous drop in demand for the services the schadenfreude folks provide, hurting them as well.
> that would cause a tremendous drop in demand for the services the schadenfreude folks provide, hurting them as well
You're correct. But it doesn't matter. Remember the San Francisco protests against tech? People will kill a golden goose if it's shinier than their own.
If this goose is also pricing others out of housing market it's not entirely unreasonable
> If this goose is also pricing others out of housing market it's not entirely unreasonable
It's self-defeating but predictable. (Hence why the protests were tolerated, if not backed, by NIMBY interests.)
My point is the same nonsense can be applied to someone not earning a tech wage celebrating tech workers getting replaced by AI. It makes them poorer, ceteris paribus. But they may not understand that. And the few that do may not care (or may have a way to profit off it, directly or indirectly, such that it's acceptable).
I don't quite follow. What exactly have the non-tech people of San Francisco got from all the tech people working there? How did they become richer (ok, apart from landlords), or how would they become poorer if the tech workers lose their jobs?
Tech workers pay a lot of taxes in addition to supporting the local economy.
https://sloanreview.mit.edu/article/the-multiplier-effect-of...
The services will get cheaper, since most companies make a lot of profit and the moat of high salaries will be gone.
CEOs run every major media outlet and public platform for communication; people that hype AI will get their content promoted and will see more success, which creates an incentive to create such content.
This doesn't even require any "conspiracy" among CEOs, just people with a vested interest in AI hype who act in that interest, shaping the type of content their organizations produce. We saw something lesser with the "return to office" frenzy, just because many CEOs realized a large chunk of their investment portfolio was in commercial real estate. That was only less hyped because, I suspect, there were larger numbers of CEOs with an interest in remaining remote.
Outside of the tech scene, AI is far less hyped and in places where CEOs tend to have little impact on the media it tends to be resisted rather than hyped.
White-collar jobs as a whole, yes, but even in software companies it is not uncommon for sales and marketing to cost more than engineering.
I don’t think software developer is a white collar job. It’s essentially manufacturing. There are some white collar workers at the extremes but the overwhelming majority of programmers are doing the IT equivalent of building pickup trucks.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
For the same reason people are obsessed with replacing all blue-collar jobs. Every cent that a company doesn't have to spend on its employees is another cent that can enrich the company's owners.
I have no problem with replacing jobs with automation. The question to ask is: is the investment justified by the efficiency gains?
I’m skeptical that this is a good use of resources or energy consumption.
The general view in my bubble was that blue-collar jobs are seen as dumb, physically demanding, and dangerous, so we are kind of replacing them for their own good, so that those workers can do something intellectual (aka learn coding). Whereas intellectual labour is kind of what humans exist for, so making intellectual work redundant is truly the end of the world.
Maybe it's my post-communist background though and not relevant for the rest of the world
I find it weird and uncomfortable too. Like, why are there a bunch of people excited about mass unemployment?
Nobody wants to be unemployed, but people generally love the idea of getting what they want without having to interact with, not to say pay, other people.
Like, you have a brilliant idea but unfortunately no hard skills. Now you don't have to pay enormous sums of money to geeks, and suffer them, to make it come true. Truly a dream!
It is not about mass unemployment. It is about efficiency: producing the same thing for cheaper, leading to cheaper products to buy.
That is what allowed our current lifestyles. It is a good thing. Now it is just coming to the next area.
Ok, but who benefits from these efficiencies? Hint: not the people losing their jobs. The main people that stand to benefit from this don't even need to work.
Producing things cheaper sounds great, but just because it's produced cheaper doesn't mean it is cheaper for people to buy.
And it doesn't matter if things are cheap if a massive number of people don't have incomes at all (or even a reasonable way to find an income - what exactly are white collar professionals supposed to do when their profession is automated away, if all the other professions are also being automated away?)
Sidenote btw, but I do think it's funny that the investor class doesn't think AI will come for their role..
To me the silver lining is that I don't think most of this comes to pass, because I don't think current approaches to AGI are good enough. But it sure shows some massive structural issues we will eventually face
> I do think it's funny that the investor class doesn't think AI will come for their role..
Investors don't perform work (labour); they take capital risk. An AI does not own capital, and thus cannot "take" that role.
If you're asking about the role of a manager of investment, that's not an investor - that's just a worker, which can and would be automated eventually. Robo-advisors are already quite common. The owner of capital can use AI to "think" for them in choosing what capital risk to take.
And as for a massive number of people having no income, I don't think that will come to pass either (just as you don't think AGI will). Mostly because the speed of automation will decline; it's not trivial to do, the low-hanging fruit will be picked asap, and the difficult tasks left will take ages to automate.
It's allowed our lifestyle for a short period of time, for a small number of people. It's not given to poor people, to people in places we've bombed or extracted resources from, or to people in the future, since it's destroying the planet.
We're all far closer to being poor than to having enough capital to live off of efficiency increases. AI is the last thing the capitalist class requires to finally throw off the shackles of humanity, of keeping around the filthy masses for their labor.
The poor of today (in developed countries) live absolutely lavish lifestyles compared to the middle class of 80 years ago.
In a capitalist society?
It is the one thing I believe capitalism at some level works for. Invest capital to build something that gives competitive advantage over other capital. Buying newer bigger factory that allows producing more for cheaper to compete with others.
With AI it is white collar work.
This is exactly it. I was talking to my wife about this this morning. She's a sociologist researcher, and a lot of people that work in her organization are responsible for reading through interviews and doing something called coding, where you look for particular themes and then tag them with a particular name associated with that theme. And this is something that people spend a lot of hours on, along with interview transcription, also done by hand.
And I was explaining that I work in tech, so I live in the future to some degree, but that ultimately, even with HIPAA and other regulations, there's too much of a gain here for it not to be deployed eventually, and those people and their time are going to be used differently when that happens. I was speculating that it could be used for interviews as well, but I think I'm less confident there.
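To illustrate (not how her organization actually works): a hedged sketch of automating that tagging step, where call_llm is a hypothetical stand-in for whatever LLM API you'd use and the codebook themes are made up:

    import json

    CODEBOOK = ["financial stress", "caregiver burnout", "distrust of providers"]

    def call_llm(prompt: str) -> str:
        # Hypothetical: wire this to your LLM provider of choice.
        raise NotImplementedError

    def code_excerpt(excerpt: str) -> list[str]:
        prompt = (
            "Tag the interview excerpt with every applicable theme from this "
            f"codebook: {CODEBOOK}. Reply with only a JSON list of theme names.\n\n"
            f"Excerpt: {excerpt}"
        )
        tags = json.loads(call_llm(prompt))
        # Discard labels outside the codebook; models invent categories otherwise.
        return [t for t in tags if t in CODEBOOK]

A human would still need to spot-check the tags, which is roughly the "used differently" part.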
In some sense I think it is related to how pyromaniacs like to light houses and watch them burn. Some sort of fetish?
Nobody is obsessed with it. People are afraid of it. And yet, what will you do? Will you renounce adopting a tool that can make your work or someone else's work faster, easier, better? It's a trap: once you've seen the possibilities you can't go back; and if you do, you'll have to compete with those who keep using the new tools. Even if you know perfectly well that in a few years the tools will make your own job useless.
Personally, however, I would find it possibly even more depressing to spend my day doing a job that has economic value only because some regulation prevents it being done more efficiently. At that point I'd rather get the money anyway and spend the day at the beach.
That's only possible if you, as the worker, are capturing the efficiencies that the automation provides (i.e., you get RSUs, or you have an equity stake in the business).
Believe it or not, most SWEs, and white-collar workers in general, don't get these perks, especially outside the US, where most firms have made sure tech workers are paid "standard wages" even if they are "good".
Because white collar salaries are extremely high, which makes the services of white collar workers unavailable to many.
If you replace lawyers with AI, poor people will be able to take big companies to court and defend themselves against frivolous lawsuits, instead of giving in and settling. If you replace doctors, the cost of medicine will go down dramatically, and so will waiting times. If you replace financial advisors, everybody will have their money managed in an optimal way, making them richer and less likely to make bad financial decisions. If you replace creative workers, everybody will have access to the exact kind of music, books, movies and video games they want, instead of having to settle for what is available. If you automate away delivery and drivers (particularly with drones), the price of prepared food will fall dramatically.
> Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job?
> They seem to be totally convinced that this will happen.
The two groups of people are not the same. I, for example, belong to the 2nd but not the 1st. If you have used the current-gen LLM coding tools, you will realize they have gotten scary good.
Imagine a world where there is 10x as much wealth and 10x as many hard problems being solved. Suppose there's even a 5% chance of that happening. It's clearly worth doing
Depends on how that wealth is distributed.
No it is not just you. If true it would be beyond dystopic due to current political immaturity.
Because perceived existential risks capture people’s attention and imagination. If you’re a white collar worker then it’s a big deal to you.
Most people prefer not having to work.
> completely obsessed with replacing all developers
I’m paid about 16x an electronics engineer. Salaries in IT are completely unrelated to the person’s effort compared to other white collar jobs. It would take an entire career to some manager to reach what I made after 5 years. I may be 140IQ but I’m also a dumbass in social terms!
That's cool. It sounds like most of us aren't making what you make. I don't make 16x what someone paid minimum wage makes, much less an electrical engineer.
The median wage for programmers is about 5x the average minimum wage. You're paid significantly higher than the average programmer.
Especially outside the US, where having a 140 IQ isn't really enough to command a high wage. Only social EQ and high capital do that in most of the world.
Jealousy is absolutely a thing.
You're an outlier and you must surely know that.
> Re hype: Why is it that so many people are completely obsessed with replacing all developers and any other white-collar job? They seem to be totally convinced that this will happen. 100%
Because the only thing that gets the executive class hornier than new iPhone-tier products is getting to lay off tons of staff. It sends the stock price through the roof.
It follows from there that an iPhone-tier product that also lets them lay off tons of staff would be like fucking catnip to them.
The job description of a developer is to replace themselves by automating themselves so they can get promoted/find a new more relevant role. That's the point of compilers and new programming languages.
There's no such thing as taking people's jobs, nobody and nothing is going to take your job except for Jay Powell, and productivity improvements cause employment to increase not decrease.
The dream of many business owners is running their business with no products, no employees, and no customers, where they can just collect money. AI promises to fulfil this dream. AIs selling NFTs to other AIs paying in crypto is the final boss of capitalism.
Notice how it wasn't, and isn't, a big deal when it's not in your own back yard (i.e., when it's destroying blue-collar professions); our chickens have just come home to roost. It's amazing the number of gullible, naive nerds out there who can't or won't see the forest for the trees. The number of ancap-lite libertarians drops precipitously when it's their own livelihood getting its shit kicked in.
One of the more annoying parts about being in the tech community was listening to the tone-deafness of these sorts of folks, coming from a very blue-collar background and family myself.
It's difficult to have much empathy for the "learn to code" crowd who seemingly almost got a sense of joy out of watching those jobs and lifestyles get destroyed. Almost some form of childhood high school revenge fantasy style stuff - the nerd finally gets one up on the prom king. Otherwise I'm not sure where the vitriol came from. Way too many private conversations and overheard discussion in the office to make me think these were isolated opinions.
That said, it's not everyone in tech. Just a much larger percentage than I ever thought, which is depressing to think about.
It's certainly been interesting to watch some folks who a decade ago were all about "only skills matter, if you can be outcompeted by a robot you deserve to lose your job" make a 180 on the whole topic.
People really hate late stage capitalism.
The rest of the world has not caught up to current LLM capabilities. If it all stopped tomorrow and we couldn't build anything more intelligent than what we have now: there would be years of work automating away toil across various industries.
My experience using LLM-powered tools (e.g., Copilot in agent mode) has been underwhelming. Like, shockingly so. Things like cd-ing to the wrong dir for a script and getting lost, disregarding my instructions to run ./tests.ps1 and running `dotnet test` instead, writing syntactically incorrect scripts and failing to correct them, and being particularly overwhelmed by verbose logs. Sometimes it even fails to understand the semantic meaning of my prompts.
Whereas my experience describing my problem and actually asking the AI is much, much smoother.
I'm not convinced the "LLM+scaffolding" paradigm will work all that well. sanity degrades with context length, and even the models with huge context windows don't seem to use it all that effectively. RAG searches often give lackluster results. the models fundamentally seem to do poorly with using commands to accomplish tasks.
I think fundamental model advances are needed to make most things more than superficially automatable: better planning/goal-directed behavior, a more organic connection to RAG context, automatic gym synthesis, and RL-based fine tuning (that holds up to distribution shift.)
I think that will come, but I think if LLMs plateau here they won't have much more impact than Google Search did in the '90s.
I’m curious which was the model you used when you ran into the cd-ing bug?
I’d give building with sonnet 4 a fair shot. It’s really good, not accurate all the time but pretty good.
> won't have much more impact than Google Search did in the '90s.
Given that Google was founded in '98 and is one of the biggest tech companies in the world, I'm not sure what you mean by that.
Creating oodles of new jobs in internally QAing LLM results, or finding and suing companies for reckless outcomes. :p
As long as liability is clearly assigned, it doesn't have an economic impact. The ambiguity of liability is what creates negative economic impact. Once it's assigned initially through law, then it can be reassigned via contract in exchange for cash to ensure the most productive outcome.
e.g. if OpenAI is responsible for any damages caused by ChatGPT then the service shuts down until you waive liability and then it's back up. Similarly if companies are responsible for the chat bots they deploy then they can buy insurance or put up guard rails around the chat bot, or not use it.
> As long as liability is clearly assigned, it doesn't have an economic impact
In a reality with perfect knowledge, complete laws always applied, and populated by un-bankrupt-able immortals with infinite lines of credit, yes. :P
I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress.
Just looking at what happened with chess, go, strategy games, protein folding etc, it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms etc - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
Your examples are not LLMs, though, and don't really behave like them at all. If we take the chess analogy and design an "LLM-like chess engine", it would behave like an average 1400-rated London spammer, not like Stockfish, because it would try to play like the average human in its database plays.
It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines. Do you remember "natural language processing"? What, ehm, happened to it?
Sometimes I have the feeling that what happened with LLMs is so enormous that many researches and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore, maybe they're not even questions anymore- and almost from one day to the other.
> Do you remember "natural language processing"? What, ehm, happened to it?
There’s this paper[1] you should read, is sparked an entire new AI dawn, it might answer your question
1. https://arxiv.org/abs/1706.03762
How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely it will improve the precision, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT has more use cases in actual text analysis.
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
> I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress
Many of us have been through previous hype-cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing bust (reinforcement learning). A few claims in your note, like "it's only a matter of time before we have domain-specific ASI", are jarring, as you are "assuming the sale". LLMs are great as a tool for some use cases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.
It's very different from chess etc. If we could formalise and "solve" software engineering precisely, it would be really cool, and probably indeed just lift programming to a new level of abstraction.
I don't mind if software jobs move from writing software to verifying software either if it makes the whole process more efficient and the software becomes better as a result. Again, not what is happening here.
What is happening, at least in AI-optimist CEO minds, is "disruption": drop the quality while cutting costs dramatically.
I mentioned algorithms, not software engineering, precisely for that reason.
But the next step is obviously increased formalism via formal methods, deterministic simulators, etc., basically so that one could define an environment for an RL agent.
It's unlikely that LLMs are gonna get us there though. They've ingested all relevant data at this point, and the net effect might very well kill future sources of quality data. How is e.g. Stack Overflow gonna stay alive if the next generation of programmers relies mainly on Copilot and vibe coding? And what will the LLMs scrape once it's gone?
I'll bet you $1,000*10^32 that AI never formalizes a novel FFT algorithm worth more than a dime.
We need to stop calling what we have AI. LLMs can't reliably reason. Until they can the progress is far from unstoppable.
I love it how people are transitioning from “LLMs can’t reason” to “LLMs can’t reliably reason”.
Have you ever seen a company say "welp, we wrote all the code. Now we're done?"
People assume (rightly so) that the progress in AI should be self-evident. If the whole thing is really working that great, we should expect to see real advances in these fields. Protein-folding AI should lower the prices of drugs and create competitive new treatments at an unprecedented rate. Photo and video AI should be enabling film directors and game directors to release higher-quality content faster than ever before. Text AI should be spitting out Shakespeare-toppling opuses on a monthly basis.
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
The kaboom already happened on user-generated media platforms. YouTube, Facebook, TikTok, and so on are flooded with AI-generated videos, photos, sounds, and so on. The sheer volume of this low-quality slop exists because AI lowered the barrier of entry for creating content. In this space the progress is not happening by pushing the upper bound of quality higher but by driving the cost of minimal quality down to near zero.
Another perspective for the kaboom is search and programming tasks for the average person.
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, and neither does a Python scraper using only the stdlib - but these were locked behind programming knowledge before LLMs.
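For concreteness, here's a minimal sketch of the kind of stdlib-only scraper meant here; the target URL and the tag being extracted are placeholders, not anything from the thread:

```python
# A throwaway scraper using only the Python standard library -- no pip
# installs, which is exactly what makes it easy to dictate to an LLM.
from html.parser import HTMLParser
from urllib.request import urlopen


class HeadingCollector(HTMLParser):
    """Collects the text inside every <h1> tag on a page."""

    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.headings.append(data.strip())


html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
parser = HeadingCollector()
parser.feed(html)
print(parser.headings)  # ['Example Domain']
```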
We won't. We'll lay off engineers to balance the books and destroy unneeded capacity.
Very Grapes of Wrath.
I live next to an abandoned building from the Spanish property boom. It's now occupied illegally. Hype's over yet the consequence is staring at me every day. I am sure it'll eventually be knocked down or repurposed yet it'd be better had the misallocation never happened.
> It's now occupied illegally.
So it’s still being used now. That’s good right?
Very cheap game consoles and VR headsets. Unironically that could really help world peace and QOL: less news and doomscrolling, and people would have an outlet for stress, anger, and boredom.
This is one of the more hopeful explanations of the Fermi paradox.
https://en.wikipedia.org/wiki/Fermi_paradox#Hypothetical_exp...
"Alien species may isolate themselves in virtual worlds"
Is this the technosolutionist version of "just one more lane and we'll fix traffic jams for good"?
I've never seen a group more easily angered than gamers.
"What do you mean the women in this game have proportions roughly equivalent to what's actually possible in nature?!?!"
That's a subset of gamers: specifically, gamers who actively want hypersexualized characters, who still haven't figured out how to ignore the products that aren't for them, and who still choose to be angered if some product doesn't fit their requirements (despite no shortage of games that would fit the bill).
Some of the most trust-and-safety research I’ve seen - on things like recidivism, ban success, pro-social behavior - comes from gaming.
Dota, League, hell - Roblox, Twitch, Discord - have some of the most data on how angry humans are when they play vidya.
Capital G "Gamers", if you will.
It's about ethics in game journalism!
Have you read Ready Player One or watched The Matrix?
You may be interested in alternative approaches such as playing chess, running, singing, dancing, or reading literature.
On the other hand, drug discovery sounds like a candidate for really benefiting from AI. To fuel AI model development, maybe there has to be all the garbage that comes with AI.
What drugs? The leading causes of death are lifestyle-induced; 80% of cancers are caused by your environment, &c.
We have much better things to do with these billions
Drugs to cure the diseases caused by your environment. It's not as if people are suddenly going to be making perfect decisions (e.g. never getting a sunburn, not eating meat, avoiding sugary foods).
>Drugs to cure the diseases caused by your environment
Humans have so far completely failed to develop any drug with minimal side effects to cure lifestyle diseases; it's magical to think AI can definitely do it.
Everything has side effects. In this case we have three pretty good interventions, Ozempic, FMT, and telling people to drink Coke Zero. The worst "side effect" is just that the first two are expensive.
Oh, in this case GP seems to be including sunscreen as a treatment for lifestyle diseases. Pretty sure those don't have side effects, but Americans don't get the good ones.
> including sunscreen as a treatment for lifestyle diseases
HN, where "going outside" is considered a lifestyle.
Is your objection just over the word "cure"? Because hypertension, depression, arthritis, asthma are a few in an absurdly long list of lifestyle diseases that use drugs as a primary method of treatment.
So all these things that skyrocketed in the span of 75 years are immutable facts of life, but magic drugs are somehow in the realm of possibilities?
What's easier, educate your people and feed them well to build a strong and healthy nation OR let them rot and shovel billions to pharma corps in the hope of finding a magic cure?
>skyrocketed in the span of 75 years are immutable facts of life
A number of them seem to have skyrocketed with quality of life and personal wealth. I suspect my ancestors were skinny not because they were educated on eating well but because they lacked the same access to food we have in modern society, especially super caloric ones. I don't super want to go back to an ice cream scarce world. Things like meat consumption are linked to colon cancer and most folk are unwilling to give that up or practice meat-light diets. People generally like smoking! Education campaigns got that down briefly but it was generally not because people didn't want to smoke, it's because they didn't want cancer. Vaping is clearly popular nowadays. Alcohol, too! The WHO says there is no safe amount of alcohol consumption and attributes lots of cancer to even light drinking. I suspect people would enjoy being able to regularly have a glass of wine or beer and not have it cost them their life.
Second one's easier. Technological improvements are always easier than social change.
> shovel billions to pharma corps in the hope of finding a magic cure?
What do you mean, finding? We already found it (GLP-1 agonists). Ozempic's maker is even majority-owned by a nonprofit (the Novo Nordisk Foundation). See, everything's fine.
Yes. Let’s ignore 20% of people with cancer. That’s cool.
Bullshit. Heart disease and cancer (and a long tail of medical problems) come with age and kill ~everyone inside ~100 years. If you think that environment and exercise can fix this, show me the person who is 200 years old.
We would have to 100x medical research spending before it was clearly overdone.
“inside 100 years” is a veeeeery large span… there is a huge difference if inside 100 is 45 or 95 :)
Obviously healthy habits prolong your life. Nobody argued otherwise. My contention is specifically with the idea that they matter to the exclusion of science and pharma. Healthy habits clearly hit a wall: if they didn't, we would have health and fitness gurus living to 200 and beyond by virtue of having a routine that could actually defeat cancer and heart disease. The absence of these 200 year old gurus indicates that no, 80% of cancer (and heart disease, everyone always forgets heart disease) cannot be avoided with diet and exercise. Hence, work on getting through the wall is valuable.
You can 1000x the research if you want; a 50kg-overweight person who doesn't exercise, drinks alcohol, and lives next to a highway is statistically fucked no matter what. You'd need straight-up magic to undo the damage.
You're not going to fix lifestyle diseases with drugs, and lifestyle diseases are the leading cause of death
Show me the 200 year old person.
As the author says: this capex isn’t a railroad, it’s a very expensive immediately depreciating asset.
We should be so lucky
Maybe there will be a benefit from the additional power generation and infrastructure that will still be available if the data centers don’t need it.
Can’t believe I have to state the obvious and say that is only a potential gain if the power/cooling is from renewable sources. But I do
That’s a mighty big if.
This implies that a human life is valuable but what they spend it doing is not.
Aren't advancements in AI actually helping drug discovery?
yearly AI capex is more than NASA and NIH combined
That would be $75 billion combined for 2025. A drop in the bucket.
> From 2013 to 2020, cloud infrastructure capex rose methodically—from $32 billion to $119 billion. That's significant, but manageable. Post-2020? The curve steepens. By 2024, we hit $285 billion. And in 2025 alone, the top 11 cloud providers are forecasted to deploy a staggering $392 billion—MORE than the entire previous two years combined.
https://www.wisdomtree.com/investments/blog/2025/05/21/this-...
So, not very much, all things considered.
Tell that to Congress.
Cloud gaming 2.0
If you have a GPU which you've used for AI training, but that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue.
If you destroy the GPU, you can write it off as a loss, which reduces your taxable income.
It's possible you could come out ahead by selling everything off, but then you'd have to pay expensive people to manage the sell-off, logistics, etc. What a mess. Easier to just destroy everything and take the write-off.
Capital equipment is depreciated over time. By the time you're selling it off, it's pennies on the dollar and a small recoupment of cost. Paying 30% (or whatever) taxes on that small amount of income and having 70% remaining is still better than zero dollars and zero taxes.
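A back-of-envelope version of that comparison; the tax rate and residual value below are made-up numbers, just to show the shape of the argument:

```python
# Selling fully depreciated gear vs. trashing it. The 30% tax rate and
# $200 residual value are illustrative assumptions, not real figures.
residual_sale_price = 200.0  # what the old server fetches on the used market
book_value = 0.0             # fully depreciated, so the whole sale is gain
tax_rate = 0.30

taxable_gain = residual_sale_price - book_value
cash_after_tax = residual_sale_price - tax_rate * taxable_gain

print(f"sell:  ${cash_after_tax:.2f} after tax")  # sell:  $140.00 after tax
print("trash: $0.00 (no sale, and nothing left to write off)")
```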
It is a giant pain to sell off this gear if you are using in-house folks to do so. Usually not worth it, and why things end up trashed as you state. If I have a dozen 10 year old servers to get rid of - it's usually not worth anyone's time or energy to list them for $200 on ebay and figure out shipping logistics.
However, at scale the situation and numbers change - you can call in an equipment liquidator who can wheel out 500 racks full of gear at a time and you get paid for their disposal on top of it. Usually a win/win situation since you no longer have expensive people trying to figure out who to call to get rid of it, how to do data destruction properly, etc. This usually is a help to the bottom line in almost all cases I've seen, on top of it saving internal man-hours.
If you're in "failed startup being liquidated for asset value" territory, then the receiver/those in charge typically have a fiduciary duty to find the best reasonable outcome for the investors. It's rarely throwing gear with residual value in the trash. See: used Aeron chair market.
> that's no longer valuable, you could sell that GPU; but then you'd incur taxable revenue
Unless GPUs are like post-Covid used cars you're going to sell them at a loss which can be written off. Write-offs don't have to involve destroying the asset. I don't know where you got that idea.
A competent government might rent these companies’ GPUs to cure major diseases once they’re sitting idle.
(or they'll end up mining crypto as the only viable alternative, crashing the whole thing)
Bitcoin still needs a-minin’.
GPUs haven't been useful for bitcoin mining for a long time.
I just wish we forced every new data center to be built with renewables or something. The marginal cost over a conventional data center can’t be that big compared to the total cost, and these companies can afford it. Maybe it can help advance the next generation of small modular nuclear reactors or something.
Many of these companies are very interested in small nuclear tech as a means to power these facilities. The major bottleneck now for most of them is finding sites where grid capacity exists to power them.
Talk to anyone in the space for more than 30 minutes and nuclear will come up.
I very much hope the hype cycle lasts long enough for some of this capital raining down from the sky to get these reactors deployed in the field, because those will be a lasting positive from this hype cycle - much like laying railroad infrastructure and fiber optic cables came from other hype cycles.
I've often said that the robber barons sucked, but at least they left massive amounts of physical infrastructure for the average person to benefit from. The hype cycles of late leave very little in comparison.
I'm concerned that the move-fast-and-break-things crowd will attempt to ply their trade on reactor tech and leave us with a bunch of failed projects in jurisdictions that won't hold them to account. The space is heavily regulated for a reason.
> I just wish we forced every new data center to be built with renewables or something.
Already the case in Europe. And in the US, most of the biggest players are doing this: Google, Microsoft, Meta, AWS. By now those 4 are the largest buyers of renewable purchase agreements in the entire world. MS alone has invested something like $20B in renewable purchases.
But the issue is that installation of renewables in the US is not bottlenecked by lack of demand; it's bottlenecked by permitting, zoning issues, etc. The queue for power deployment right now is something like 100GW (i.e. production that is paid for but not yet built), around 10% of total current US power capacity. So it's not really clear to me whether buying more renewables makes deployment faster through economies of scale, or whether the purchase order just sits in a queue for years and years.
One notable exception is xAI/Grok, which has one of the biggest clusters, is powering it 100% with gas, and afaik did not offset that by buying the equivalent in renewables. Having built the cluster in what was a washing machine factory, without adequate power supply or cooling tech, they have been rolling in 35 mobile gas turbines (large trailer trucks that connect to gas pipes) and 50+ refrigeration trucks. IMHO it should be illegal to build such an energy-consuming system with such poor efficiency, but well.
At least in the US, it doesn't need to be forced. In 2024, 94% of new US power plant capacity was renewables or battery storage; we're on track for 93% in 2025, and based on announced plans the next few years will see very similar numbers. What few fossil fuel plants are being built are exclusively natural gas, and a decent number of them are conversions of former coal plants. Planned additional natural gas capacity is lower than at any point since the shale revolution started. Renewables have won.
Data centers want firm power to avoid underutilizing expensive assets. Solar and wind are intermittent. New gas has a years-long lead time. And 12+ hours of batteries (to cover solar in winter) are, in fact, not free or de minimis.
I wish the hardware itself were renewable too.
These companies would all LOVE to use small modular nuclear reactors in their datacenters... if the NRC ever got around to approving a license for one.
People like the idea of economical SMRs, but it seems unlikely they will ever be economical. There's no real reason not to just build large reactors instead (e.g., the AP1000).
Why renewables? These guys don't have an energy problem; the states have made sure to give them energy credits and other assurances. The problem they do have is siphoning the water from everyone around them.
Microsoft actually has a design for mini datacenters that stay cool in the ocean and collect tidal energy. But it's way more fun to have states trying to court you into building datacenters because it'll bring some jobs.
Actually energy is a real problem for them and water consumption isn't.
If energy wasn't a problem then they could just recirculate a coolant
I don't see how you can make the argument that a large portion of funds used for AI capex were diverted from other investments (and starving other industries), while simultaneously applying the economic multiplier to the whole sum when going from the investments to the GDP impact.
Surely you only get one of the two, because for diverted investments the multiplier applies equally on both sides of the equation.
Hence the title, "Honey, AI Capex is Eating the Economy."
That seems like a non sequitur.
He is making two arguments. One is that AI capex is starving other industries. And the other that AI capex is causing major GDP growth, attributed to both the direct investments themselves, as well as the multiplier effects.
One of those could be true. But I assert that both cannot be true at the same time. If these direct investments were going to happen elsewhere if they weren't happening for AI infrastructure then that counterfactual spending would show up in the GDP instead, as would the multiplier effect from that spending.
The main argument builds on the assumption that the economy is a zero-sum game, when it clearly is not. Just because we invest these resources in AI does not mean we could mobilize the same capital for other pursuits.
AI is being built out today precisely because the value returned is expected to be massive. I would argue this value will be far bigger than railroads ever could be.
Overspending will happen, for sure, in certain geographies or for specialty hardware, maybe even capacity will outpace demand for a while, but I don’t think the author makes a good case that we are there yet.
> "... builds on the assumption that the economy is a zero sum game when it clearly is not"
Be cautious making assessments as to compounding effects; while it remains the critical attribute, the compounding nature of a system is not always obvious. For example, the author is correct that financing for AI CapEx is starving other fields of investment at least in the short term.
In addition, the massive investment may be overbuilt, but then likely will end up being useful long-term. This is just as the initial buildout of the internet to power pets.com was a bust at the time, but then led to Amazon, YouTube, Zoom, etc and in fact led to us being able to weather Covid better than you'd have expected.
The modern internet came from folks getting connected over-exuberantly based on near-term returns (with a lot of investors losing their shirts), but then humans figured out what the actual best use of the technology was.
Highly recommend this book for more, Carlota Perez is very insightful: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
That is why I stated that transistor improvements (what was previously known as Moore's Law) will continue for at least another 10 years. The smartphone carried us from 2008 to 2023. The money being spent today has already been invested into the next 2-3 years of semiconductor manufacturing: 2nm or A20 this year, and A18/A14 in two years' time. There is enough momentum to reach A10 and A8 by 2030-2032. Even if things slow down by then, that is enough to run until 2035, barring something catastrophic like WW3 or a market collapse.
That said, even if we somehow reach A5 in 2035, that is only about a 12x density increase. Including system packaging, chiplet, and interconnect advancements pushes this to maybe 30-40x. That is still a far cry from the 1000-10000x compute demands of a lot of AI companies, and that assumes memory bandwidth could scale with it.
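Rough arithmetic behind those numbers; every factor below is an illustrative assumption, not a vendor figure:

```python
# Density-vs-demand gap, back of the envelope. The per-node scaling factor,
# step count, and packaging multiplier are all assumptions for illustration.
density_per_node_step = 1.6   # assumed density gain per major node step
steps_to_a5 = 5               # e.g. 2nm -> A14 -> A10 -> A8 -> A5-ish
packaging_gain = 3.0          # chiplets, stacking, interconnect (assumed)

density_gain = density_per_node_step ** steps_to_a5  # ~10.5x
system_gain = density_gain * packaging_gain          # ~31x
demand = 1000                                        # low end of "1000-10000x"

print(f"system-level gain: ~{system_gain:.0f}x")             # ~31x
print(f"shortfall vs demand: ~{demand / system_gain:.0f}x")  # ~32x short
```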
The LLM crash is gonna be glorious
For who?
The counterintuitive part of automation is that it removes parts of the economy rather than making the economy bigger. You end up with more goods but the value people assign to them goes down as they don't provide additional social advantage.
For example at one point nails were 0.5% of the economy and today owning a nail factory is a low margin business that has no social status.
Similarly, frontend software dev will get automated and become a smaller percentage of the economy, along with the social status associated with it.
Since social status is a zero-sum game, people shift spending to other areas where social status can be gained.
> The counterintuitive part of automation is that it removes parts of the economy rather than making the economy bigger
So you believe in a zero-sum economy? I think new capabilities lead to demand expansion; they mobilize latent demand that was sleeping. There is no limit to desires; not even AI automation could outrun them.
I'm waiting for the shoe to drop when someone comes out with an FPGA optimized for reconfigurable computing and lowers the cost of LLM compute by 90% or better.
This is where I do wish we had more people working on the theoretical CS side of things in this space.
Once you recognize that all ML techniques, including LLMs, are fundamentally compression techniques, you should be able to come up with some estimates of the minimum feasible size of an LLM based on: the information that can be encoded in a given number of parameters, the relationship between loss of information and model performance, and the information contained in the original data set.
I simultaneously believe LLMs are bigger than they need to be, but suspect they need to be larger than most people think, given that you are trying to store a fantastically large amount of information. Even with lossy compression (which ironically is what makes LLMs "generalize"), we're still talking about an enormous corpus of data to represent.
Getting theoretical results along these lines that can be operationalized meaningfully is... really hard.
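Still, a crude back-of-envelope version makes the shape of the argument concrete; every constant below is an assumption, not a measurement:

```python
# Lower-bound LLM size from compression arithmetic. All constants are
# illustrative assumptions: corpus size, post-compression entropy per
# token, and per-parameter storage capacity are all debatable.
corpus_tokens = 15e12          # ~15T training tokens (assumed)
bits_per_token = 1.0           # entropy of text after good compression (assumed)
capacity_bits_per_param = 2.0  # rough memorization capacity per parameter (assumed)

corpus_bits = corpus_tokens * bits_per_token
min_params_lossless = corpus_bits / capacity_bits_per_param

print(f"corpus information: ~{corpus_bits / 8e12:.1f} TB")                 # ~1.9 TB
print(f"params for lossless storage: ~{min_params_lossless / 1e12:.1f}T")  # ~7.5T
# Lossy compression (generalization) cuts this by orders of magnitude,
# which is exactly the gap the theory would need to pin down.
```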
Where is that improvement coming from? The hardware is already here to compute GEMMs as fast as possible.
Raw GEMM computation was never the real bottleneck. Feeding the matmuls, i.e. memory bandwidth, is where it’s at, especially on the newer GPUs.
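A roofline-style sketch of why that is; the hardware specs are approximate public figures for one current GPU (an H100 SXM), so treat them as ballpark:

```python
# Roofline break-even: FLOPs you must perform per byte moved from memory
# before compute, rather than bandwidth, becomes the limit. Specs below
# are approximate public H100 SXM figures, used only for illustration.
peak_flops = 989e12      # ~989 TFLOP/s dense BF16
mem_bandwidth = 3.35e12  # ~3.35 TB/s HBM3

break_even = peak_flops / mem_bandwidth  # FLOPs per byte
print(f"~{break_even:.0f} FLOPs/byte needed to saturate compute")  # ~295

# LLM decode does roughly 1 FLOP per byte of weights read (a multiply-add
# is 2 FLOPs per 2-byte BF16 weight), two orders of magnitude below the
# break-even point -- hence "feeding the matmuls" is the bottleneck.
```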
My chip friends constantly complain about Qualcomm owning a bunch of FPGA related patents that stifle any meaningful FPGA progress.
I wouldn't wait. FPGAs weren't designed to serve this model architecture. Yes, they are very power efficient, but the layout/place-and-route overhead, the memory requirements (very few on-the-market FPGAs have HBM), slower clock speeds, and just an unpleasant developer experience make them a hard sell.
We do already have ASICs, see Google's TPU to get some cost estimates.
HBM is also very expensive.
We're already seeing it with DeepSeek's and other optimizations. It's like induced demand with highways: the wider the highway, the more it gets used. Dropping costs by 90% would open up even more use cases.
As for white-collar job replacement, we can always evolve up the knowledge/skills/value chain. It is the blue-collar jobs where the bloodbath is coming, with all the robotics.
Can you annualize Nvidia's Q1 results simply by multiplying them by 4?
To a first approximation, yes. Their earnings have been growing steadily for years ($4-5B growth in each of the last 8 quarters), with no seasonal effects.
Q1*4 is highly likely to be a better estimate of their eventual 2025 calendar revenue than their current trailing 12 months revenue would be. Probably still a bit conservative, but easier to justify than projecting that growth continues at exactly the same pace.
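A toy illustration of why Q1*4 beats trailing-twelve-months under steady growth; the quarterly figures below are hypothetical, chosen only to mimic a ~$4-5B-per-quarter trend, not actual Nvidia numbers:

```python
# Comparing three annual-revenue estimates under steady linear growth.
# All dollar figures are hypothetical stand-ins, not Nvidia's actuals.
past_quarters = [26, 30, 35, 39]  # trailing four quarters, $B (made up)
q1 = 44                           # most recent quarter, $B (made up)
growth = 4.5                      # $B per quarter, the assumed trend

ttm = sum(past_quarters)                        # lags the trend
naive = q1 * 4                                  # ignores further growth
trend = sum(q1 + growth * i for i in range(4))  # extrapolates the trend

print(f"trailing 12 months:    ${ttm}B")        # $130B -- too low
print(f"Q1 * 4:                ${naive}B")      # $176B -- closer, conservative
print(f"constant-growth model: ${trend:.0f}B")  # $203B -- if growth holds
```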
how many signs of a bubble do we need?
What future evidence would convince you it's not a bubble?
A positive ROI
The only sign you'll ever get is the pop, no?
Fails to talk about the opportunity cost.
What, if anything, would it take to actually change the market's perception that expectations may not be met in a significant way?
Now is this AI CapEx or data and IT CapEx? Because everyone and their mother are labeling regular data centers as AI data centers.
It seems like it's specifically based on Nvidia's sales, which I assume are almost entirely for deep learning. "Regular" data centers don't need many GPUs.
> Because everyone and their mother are labeling regular data centers as AI data centers.
Companies (generally) build things with an expectation for a return on their investment: what "regular" data centre usage would necessitate these kind of build-outs?
To sell more Postgres or WordPress VMs/instances? Is that being used to justify the spending in shareholder conference calls and regulatory filings?
Substantially this is Meta, Google, etc. These are advertising companies. They substantially create ML models that do advertising targeting, etc. Meta, it should be noted, doesn't sell LLMs at all. They sell ads. And though they are doing a lot of stuff with AI using this infrastructure... any datacenters they build primarily exist to sell ads.
Apparently the telecom boost was 2020? What am I missing?
Using sus statistics to draw weird conclusions.
There were some major network upgrades during COVID times due to traffic growth induced by work from home.
The premise of AI and certainly what a large subset of executives and investors believe is that AI will provide a significant productivity increase to a significant part of the work force.
If 30% of the work is done 10% faster, that leaves a 3% gain for other economic activities. If that is true, the CapEx is justified.
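The arithmetic spelled out, using the parent comment's figures as the assumptions:

```python
# Economy-wide gain = share of work affected * speedup on that share.
# The 30% and 10% figures are the parent comment's assumptions.
share_affected = 0.30  # fraction of all work touched by AI
speedup = 0.10         # that work gets done 10% faster

freed = share_affected * speedup
print(f"{freed:.0%} of total labor freed for other activities")  # 3%
```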
> the scale and pace of capital deployment into a rapidly depreciating technology is remarkable
That’s an interesting perspective. It does feel a bit like we’re setting money on fire.
And they're not even making paperclips, as in https://www.google.com/search?q=ai+paperclipping
The 1880s figure of 6% of GDP on railroads is an interesting number; I didn't know it was that much.
Why does this link open up a different app on my iphone?
I hear AI data centers are consuming more power than the entire country of Argentina /s
But seriously: I don't hear anyone weighing the massive power consumption against any clear indication that this is a net positive for our society.
Roughly 5% of US energy is dedicated to AI datacenters, and current usage is projected to double to 70–90 TWh/year by 2026-2027. For software-heavy tech businesses it makes sense to host your own AI data center so that you can train on your codebase and have developers build faster. Not sure if this benefits humanity that much, though.
Will it? Google originally started trying to sell search servers you'd install on site. And on prem (even on your cloud) is not a model any of the big boys will want to follow.
We've yet to see whether it's going to be a winner-takes-all market or whether a Linux equivalent will pop up that destroys all the investment from the big players, because programmers are too tight to pay for software.
> if this is a net positive for our society
Maybe that's something that can only be determined looking back. There are so many unknown unknowns.
Is Argentina a net positive for our society? There's the grilled beef, but every country has some kind of barbecue. There's a few soccer players I guess? Is Argentina worth the energy expenditure though?
That was a joke about Bitcoin, not Argentina. Also: congrats on jumping to the worst possible interpretation.