I’ve seen Picallilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I get the moral argument and even agree with it but we are a minority and of course we expect to be able sell our professional skills -- but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
You might as well work on product marketing for ai because that is where the client dollars are allocated.
If it's hype at least you stayed afloat. If it's not maybe u find a new angle if you can survive long enough? Just survive and wait for things to shake out.
Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. the line between moral value and heavy preference, however, is blurry - and is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.
What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. that's not to say two things cant both be bad, but it just looks to me like people are using the moral argument as a means to avoid learning something new while being able to virtue signal how ethical they are about it, while at the same time they refuse to sacrifice things they are already accustomed to for ethical reasons when they learn more about it. It just all seems rather convenient.
the main issue I see talked about with it is in unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isnt unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?
Maybe we should be targeting these AI companies if they are unethical and stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public, but models that are already trained can actually be used for good and I personally see it as unethical not to.
Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get
>> The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?
That's very similar to other unethical processes(for example child labour), and we see that government is often either too slow to move or just not interested, and that's why people try to influence the market by changing what they buy.
It's similar for AI, some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time signal to the companies that they wouldn't be future customers of the next model.
(I'm not necessarely in the group of people avoiding to use AI, but I can see their point)
> Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals.
Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Turning the page is a valid choice though. Sometimes a clean slate is what you need.
> Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
> For example, should we let people die rather than use medical knowledge gained unethically?
Depends if you are doing it 'for their own good' or not.
Also the ends do not justify the means in the world of morals we are discussing -- that is pragmatism / utilitarianism and belongs to the world of the material not the ideal.
Finally - Who determines what is ethical? beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines that is more important than the context or the particulars.
>Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Lots of room for nuance here, but generally Id say its more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself youre going to influence something you have no control over while compromising on your values. i am going to emphasize the relevance of something being an actual moral or ethical dilemma vs something being a very deep personal preference or matter of identity/personal branding.
>Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
I agree, it is a real loss and I don't mean for it to be treated lightly but if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.
>Depends if you are doing it 'for their own good' or not.
what do you mean by this?
I am not posing a hypothetical. modern medicine has plenty of contributions to it from unethical sources. Should that information be stripped from medical textbooks and threaten to take licenses away from doctors who use it to inform their decision until we find an ethical way to relearn it? Knowing this would likely allow for large amounts of suffering to go untreated that could have otherwise been treated? I am sincerely trying not to make this sound like a loaded question
also, this is not saying the means are justified. I want to reiterate my point of explicitly not justifying the means and saying the actors involved in the means should be held maximally accountable.
I would think from your stance on the first bullet point you would agree here - as by removing the product from the process you are able to adapt it to your morals.
>Finally - Who determines what is ethical?
I agree that philosophically speaking all ethics are relative, and I was intending to make my point from the perspective of navigating the issues as in individual not as a collective making rules to enforce on others. So you. you determine what is ethical to you
However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where most everyone agrees some level of ethics is required. This is usually done through consensus and committees with people who are experts in ethics and experts in the relevant field its being applied to.
AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines
You should tell that to all the failed businesses Jobs had or was ousted out of. Hell, Trump hasn't really had a single successful business in his life.
Nothing is final until you draw your last breath.
>Who determines what is ethical? beyond the 'golden rule'?
To be frank, you're probably not the audience being appealed to in this post if you have to suggest "ethics can be relative". This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations who have guidelines if you need some legalese sense of what "ethical" is here.
> but once the damage is done why let it happen in vain?
Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distribute the IP and profits to the public? If this process will be facilitated by government, I don’t have faith they’ll be able to allocate capital well enough to keep the current operation sustainable.
>but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.
I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.
I believe that they are bringing up a moral argument. Which I'm sympathetic too, having quit a job before because I found that my personal morals didn't align with the company, and the cognitive dissonance to continue working there was weighing heavily on me. The money wasn't worth the mental fight every day.
So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.
Moral arguments are a luxury of thinkers and only a small percentage of people can be reasoned with that way anyways. You can manipulate on morals but not reason in most cases.
Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case -- by all means exit asap.
If it's perceived ethical conflict the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules' I mean the real one. If that conflicts with what you are doing then also probably make an exit -- but many do not care trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.
But the parent didn't really argue anything, they just linked to a Wikipedia article about Raytheon. Is that supposed to intrinsically represent "immorality"?
>they just linked to a Wikipedia article about Raytheon
Yeah, that's why I took a guess at what they were trying to say.
>Is that supposed to intrinsically represent "immorality"?
What? The fact that they linked to Wikipedia, or specifically Raytheon?
Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.
>Have they done more harm than, say, Meta?
Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.
Well, sure, I'm not disagreeing with the original point about moral choice, and in fact I agree with it (though I also think that's a luxury, as someone else pointed out).
But if someone wants to make some blanket judgement, I am asking for a little more effort. For example, I wonder if they would think the same as a Ukrainian under the protection of Patriot missiles? (also produced by Raytheon)
Here are Raytheon part markings on the tail kit of a GBU-12 Paveway glide bomb that Raytheon sold to a corrupt third word dictator, who used that weapon to murder the attendees of an innocent wedding in a country he was feuding with.
I know the part number of every airplane part I have ever designed by heart, and I would be horrified to see those part numbers in the news as evidence of a mass murder.
So, what is your moral justification for defending one of the world’s largest and despised weapons manufacturers? Are you paid to do it or is it just pro-bono work?
Excuse me, do you make personal attacks on anyone who dares ask for an actual reasoned argument?
Most if not all aerospace companies also produce military aircraft, right? Or is your reasoning that if your particular plane doesn't actually fire the bullets, then there's no moral dilemma?
Defending? I am simply pointing out the obvious flaws in your logic.
If you think Raytheon is the apex evil corporation you are very mistaken. There is hardly any separation between mega corps and state above a certain level. The same people are in majority control of IBM, Procter & Gamble, Nike, and Boeing, Lockheed Martin, etc, etc.
Stop consuming marketing materials as gospel.
What you see as this or that atrocity on CNN or whatever that is produced *propaganda*, made for you, and you are swallowing it blindly without thinking.
Also the responsibility is of course down to individuals and their actions-- whether you know their names or not. Objects do not go to war on their own.
I've also worked in aerospace and aviation software but that doesn't preclude me from thinking clearly about whether I'm responsible for this or that thing on the news involving planes -- you might want to stop consuming that.
I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.
I think your post pretty well illustrates how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions on it like from how LLM is basically AGI to how it's "total garbage" but this is a good, balanced - and concise! - overview.
markets are not binary though, and this is also what it looks like when you're early (unfortunately similar to when you're late too). So they may totally be able to carve out a valid & sustainable market exactly because theyu're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".
This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool and those wielding the tools the best, tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government. There are a few people who might exaggerate these moral issues for self service but they wouldn't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically mildly reduced their working time to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption were opened. So my prediction is while you'll never live in a 900,000sqft apartment (unless we get O'Neill cylinders from our budding space industry) you'll probably consume a lot more, while still working a full week
We could probably argue to the end of time about the qualitative quality of life between then and now. In general a metric of consumption and time spent gathering that consumption has gotten better over time.
I don't think a general sentiment matters much here when the important necessitate are out of reach. The hierarchy of needs is outdated, but the inversion of it is very concerning.
We can live without a flat screen TV (which has gotten dirt cheap). We can't live without a decent house. Or worse, while we can live in some 500 sq ft shack we can't truly "live" if there's no other public amenities to gather and socialize without nickel-and-diming us.
pre-industrial? Lots of tending to the farm, caring for family, and managing slaves I suppose. Had some free time between that to work with your community for bonding or business dealings or whatnot.
Quite the leap to go from "pre-industrial people" to "Antebellum US Southerners", and even then the majority of that (hyperspecific) group did not own slaves.
>you'll probably consume a lot more, while still working a full week
There's more to cosume than 50 years ago, but I don't see that trend continuing. We shifted phone bills to cell phone bills and added internet bills and a myriad of subscriptions. But that's really it. everything was "turn one time into subscrition".
I don't see what will fundamentally shift that current consumption for the next 20-30 years. Just more conversion of ownership to renting. In entertainment we're already seeing revolts against this as piracy surges. I don't know how we're going to "consume a lot more" in this case.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
You can consume a lot less on a surprisingly small salary, at least in the U.S.
But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:
- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.
- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
> - One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
It's not clear that it's still possible to function in society today with out a cell phone and a cell phone plan. Many things that were possible to do before without one now require it.
> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.
Maybe you can replace these with the cell phone + plan.
> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.
> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
Agreed.
> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.
> It's not clear that it's still possible to function in society today with out a cell phone
Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
---
As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.
Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.
But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.
Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)
It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.
(Not disagreeing with the rest of your post though)
You aren't wrong and I agree up to a point. But I've watched a couple of people try to get by on just "cutting" rather than growing their incomes and it doesn't work out for them. A former neighbor was a real Dave Ramsey acolyte and even did things like not have trash service (used dumpsters and threw trash out at his mother's house). His driveway was crumbling but instead of getting new asphalt he just dug it all up himself and dumped it...somewhere, and then filled it in with gravel. He drives junker cars that are always breaking down. I helped him replace a timing chain on a Chrysler convertible that wasn't in awful shape, but the repairs were getting intense. This guy had an average job at a replacement window company but had zero upward mobility. He was and I assume is, happy enough, with a roof over his head and so forth, but our property taxes keep rising, insurance costs keep rising, there's only so much you can cut. My take is that you have to find more income and being looked upon as "tight with a buck" or even "cheap" is unfavorable.
I've given up pretty much all of that out of necessity, yes. Insurance and rent still goes up so I'm spending almost as much as I was at my peak, though.
>I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
Of course it's changed. The point is that
1. the necessities haven't changed and have gotten more expensive. People need healthcare, housing, food, and tranport. All up.
2. the modern day expectations means necessities change. We can't walk into a business and shake someone's hand to get a job, so you "need" access to the internet to get a job. Recruiters also expect a consistent phone number to call so good luck skipping a phone line (maybe VOIP can get around this).
These are society's fault as they shifted to pleasing shareholders and outsourcing entire industries (and of course submitted to lobbying). so I don't like this blame being shifted to the individual for daring to consume to survive.
Voting in people who can actually recognize the problem and make sure corporationa cant ship all of America's labor overseas. Blaming ourselves for society's woes only pushes the burden further on the people, instead of having them collectively gather and push back against those at fault.
So you are agreeing with the parent? If consumption has gone up a lot and input hours has gone down or stayed flat, that means you are able to work less.
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
>If you want to live in a high cost of living area, that's a form of consumption.
Not really a "want" as much as "move where the jobs are". Remote jobs are shakey now and being in the middle of nowhere only worsens your compensation aspects.
Being able to live wherever you please is indeed a luxury. The suburb structure already sacrificed the aspect of high CoL for increase commute time to work.
I also do think that dismissing aspects of humanity like family, community and sense of purpose to "luxuries" is an extremely dangerous line of thinking.
I mean, yeah? Does any market work like that? If you want an apple, you pay the person who has the apple to take it from them, you don't pay the other people who want apples. Not really following where this is going
I think FIRE was basically just a fad for awhile. I say this as a 52 year old "retiree" who isn't working right now and living off investment income. It takes a shitload of wealth to not have to work and I'm borderline not real comfortable with the whole situation. I live in a fairly HCoL area and can't up and move right now (wife has medical needs, son in high school, daughter in college). I'd be freaking out if I didn't have a nest egg, we would be trying to sell our house in a crap market. As it stands, I don't really want to go on like I am, my life is a total waste right now.
It's not a "fad," it's a mathematical observation that investing more generates more returns. Maybe the media was covering it more at some point but the concept itself is sound. You are in fact FIREd by the same definition, it's just that in your case it seems you would need more money than you have currently due to the factors you stated, but that's not the fault of the concept of FIRE in general. And anyway, there are lots of stories of people doing regular or leanFIRE too, it doesn't require so much wealth as to be unreachable if you have a middle class job. For example, https://www.reddit.com/r/leanfire/s/67adPxZeDU
If you think your life is a waste right now, do something with it. That's actually the number one thing people don't expect from being retired, how bored they get. They say in FIRE communities that all the money and time in the world won't help if you don't actually utilize it.
Boomers in a nutshell. Do a bunch of stuff to keep from building more housing to prop up housing prices (which is much of their net worth), and then spend until you're forced to spend the last bit to keep yourselves alive.
Then the hospital takes the house to pay off the rest of the debts. Everybody wins!
>They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.
And I believe they (and I) are suggesting that this is just a bad faith spin on the market, if you look at actual AI confidence and sentiment and don't ignore it as "ehh just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor are happy about it.
I don't think using the 2025 US government for a moral compass is helping your case either.
>If AI can make things 1000x more efficient
Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?
AI is just a tool, like most other technologies, it can be used for good and bad.
Where are you going to draw the line? Only if it effects you, or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines to ban all technology that threatens sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to it's fullest, even if you have laws to ban it, other countries will allow it.
The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).
The goal of AI is NOT to be a tool. It's to replace human labor completely.
This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.
To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people runnign them care at all.
I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).
I'd go so far to try and prevent the obvious and say "LLM's cannot be used to advertise product". but baby steps.
>AI as a technology is almost impossible to stop.
Not really a fan of defeatism speak. Tech isn't as powerful as billionaire want you to pretend it is. It can indeed be regulated, we just need to first use our civic channels instead of fighting amongst ourselves.
Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.
What makes you think that in the world where only the wealthy can afford legal, medical, and financial advice from human beings, the same will be automatically affordable from AI?
It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?
If you're going to approach this on such bad faith, then I'll simply say "yes" and move on. People can male bad decisions, but that shouldn't be a profitable business.
> AI is just a tool, like most other technologies, it can be used for good and bad.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.
A crowd of people continually rooting against their best interests isn't exactly what's needed for the solidarity that people claim is a boon from social media. Its not as bad as other websites out there, but I've see these flags several times on older forums.
It won't be as hard as you think for HN to slip into the very thing they mock Instagram of today for being.
Uh huh, that's always how it starts. "Well you're in the minority, majority prevails".
Yup, story of my life. I have on fact had a dozen different times where I chose not to jump off the cliff with peers. How little I realized back then how rare that quality is.
But you got your answer, feel free to follow the crowd. I already have migrations ready. Again, not my first time.
If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.
AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.
I'm not sold yet whether LLMs are AI, my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, its extremely important when it comes to risk considerations.
Machine learning isn't about developing anything intelligent at all, its about optimizing well defined problem spaces for algorithms defined by humans. Intelligence is much more self guided and has almost nothing to do with finding the best approximate solution to a specific problem.
> Machine learning (ML) is a field of study in _artificial intelligence_ concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.
The definition there is correct. ML is a a field of study in AI, that does not make it AI. Thermodynamics is a field of study in physics, that does not mean that thermodynamics is physics.
What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"
Guess you mmissed the post where lawyers were submitting legal documents generated by LLM's. Or people taking medical advice and ending up with hyperbromium consumptions. Or the lawsuits around LLM's softly encouraging suicide. Or the general AI psychosis being studied.
Besides the suicide one, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.
>I don't know of any examples where that has actually killed someone.
You don't see how botched law case can't cost someone their life? Let's not wait until more die to reign this in.
>Someone could search on Google just the same and ignore their symptoms.
Yes, and it's not uncommon for websites or search engines to be sued. Millenia of laws exist for this exact purpose, so companies can't deflect bad things back to the people.
If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.
That argument is rather naive, given that millenia of law is meant to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"
We've regulated public messages for decades, and for good reason. I'm not absolving them this time because they want to hide behind a chatbot. They have blood on their hands.
If you were offended by that comment, I apologize. You're 99.99% not the problem and infighting gets us nowhre.
But you may indeed be vying against your best interests. Hope you can take some time to understand where you lie in life and if your society is really benefiting you.
I am not offended. And I'll be the one to judge my own best interests. (back to: "personal responsibility"). e.g. I have more information about my own life than you or anyone else, and so am best situated to make decisions for myself about my own beliefs.
For instance I work for one of the companies that produces some of the most popular LLMs in use today. And I certainly have a stake in them performing well and being useful.
But your line of reasoning would have us believe that Henry Ford is a mass murderer due to the number of vehicular deaths each year, or that the wright brothers bear some responsibility for 9/11. They should have foreseen that people would fly their planes into buildings, of course.
If you want to blame someone for LLMs hurting people, we really need to go all the way back to Alan Turing -- without him these people would still be alive!
>And I'll be the one to judge my own best interests thank you.
Okay, cool. Note that I never asked for your opinion and you decided to pop up in this chain as I was talking to someone else. Go about your day or be curious, but don't butt in then pretend 'well I don't care what you say' when you get a response back.
Nothing you said contradicted my main point. So this isn't really a conversation but simply more useless defense. Good day.
Not yet maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause - I think it will become increasingly difficult to make the assertion you just did.
You're entering a bridge and there's a road sign before it with a pictogram of a truck and a plaque below that reads "10t max".
According to the logic of your argument, it's perfectly okay to drive a 360t BelAZ 75710 loaded to its full 450t capacity over that bridge just because it's a truck too.
That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white.
Defining the morals probably makes sense, then evaluating whether they can be lived or whether we can compromise in the face other priorities?
It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
> when the market is telling you loud and clear they want X
Does it tho? Articles like [1] or [2] seem to be at odd with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”
ChatGPT can listen to you in real time, understands multiply languages very well and responds in a very natural way. This is breath taking and not on the horizon just a few years ago.
AI Transcription of Videos is now a really cool and helpful feature in MS Teams.
Segment Anything literaly leapfroged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings being shitier in their daily job than a LLM is.
2) if you had read the paper you wouldn’t use it as an example here.
Good faith discussion on what the market feels about LLMs would include Gemini, ChatGPT numbers. Overall market cap of these companies. And not cherry picked misunderstood articles.
No, I picked those specifically. When Pets.com[1] went down in early 2000 it wasn't neither the idea, nor the tech stack that brought the company down, it was the speculative business dynamics that caused its collapse. The fact we've swapped technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".
I bet a few Pets.com exec were also wondering why people weren't impressed with their website.
Do you actually want to get into the details on how frequently do markers get things right vs get things wrong? It would make the priors a bit more lucid so we can be on the same page.
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself as if it is everything that is evil and to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
>The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
800 million weekly active users for ChatGPT. My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it. To do the contrary would be highly egoistic and suggest that I am somehow more intelligent than all those people and I know more about what they want for themselves.
I could obviously give you examples where LLMs have concrete usecases but that's besides the larger point.
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
> The point is that people FEEL they benefit. THAT’S the market for many things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and then we should all pursuit them because that's what the market wants. And that was what I was pushing against, this idea that since 800M people are using GPT then we should all be ok doing AI work because that's what the market is demanding.
Its not that it is intrinsically good but that a lot of people consuming things from their own agency has to mean something. You coming in the middle and suggesting you know better than them is strange.
When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.
> a lot of people consuming things from their own agency has to mean something.
Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.
Because there are plenty of examples of things enjoyed by a lot of people who are, as a whole, bad. And they might not be bad for the individuals who are doing them, they might enjoy them, and find pleasure in them. But that doesn't make them desirable and also doesn't mean we should see them as market opportunities.
Drugs and alcohol are the easy example:
> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)
Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.
–––
> my first instinct is not to decry football as a problem in society.
My first instinct is not to decry nothing as a problem, not as a positive. My first instinct is to give ourselves time to figure out which one of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I don't agree with most of these points, I think the points about atrophy, trust, etc will have a brief period of adjustment, and then we'll manage. For atrophy, specifically, the world didn't end when our math skills atrophied with calculators, it won't end with LLMs, and maybe we'll learn things much more easily now.
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
For the avoidance of doubt, I was not claiming that AI is the worst thing ever. I too think that complaints about that are generally overblown. (Unless it turns out to kill us all or something of the kind, which feels to me like it's unlikely but not nearly as close to impossible as I would be comfortable with[1].) I was offering examples of ways in which LLMs could plausibly turn out to do harm, not examples of ways in which LLMs will definitely make the world end.
Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)
If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).
[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
Any sort of atrophy of anything is because you don't need the skill any more. If you need the skill, it won't atrophy. It doesn't matter if it's LLMs or calculators or what, atrophy is always a non-issue, provided the technology won't go away (you don't want to have forgotten how to forage for food if civilization collapses).
We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
I don't disagree with your point and the thing you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant. The fact that wasn't predicted 20 years ago is what matters in this context.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric.
Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes but if I go with your priors then all of these are similarly to be suspect
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect, people engaging in consumption through their own agency is in general preferable. You can of course bring counter examples but they are more of caveats against my larger truer point.
Social media for sure and television and Netflix in general absolutely.
But again, providing value is not the same as something being good. A lot of people think inaccuracies by LLMs to be of high value because it’s provided with nice wrappings and the idea that you’re always right.
This line of thinking made many Germans who thought they're on the right side of history simply by the virtue of joining the crowd, to learn the hard way in 1945.
And today's adapt or die doesn't sound less fascist than in 1930.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with this speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in authoritative tone for others, and redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
Website building started dying off when SquareSpace launched and Wix came around. WordPress copied that and its been building blocks for the most part since then. There are few unique sites around these days.
Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.
Authors still get recognition. If they are decent authors producing original, literary work. But the type of author that fills page five of your local news paper, has not been valued for decades. But that was filler content long before AI showed up. Same for the people that do the subtitles on soap operas. The people that create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
The claim of theft is simple: the AI companies stole intellectual property without attribution. Knowing how AIs are trained and seeing the content they produce, I'm not sure how you can dispute that.
Statistics are not theft. Judges have written over and over again that training a neural network (which is just fitting a high-dimensional function to a dataset) is transformative and therefore fair use. Putting it another way, me summarizing a MLB baseball game by saying the Cubs lost 7-0 does not infringe on MLB's ownership of the copyright of the filmed game.
People claiming that backpropagation "steals" your material don't understand math or copyright.
You can hate generative tools all you want -- opinions are free -- but you're fundamentally wrong about the legality or morality at play.
False equivalence - a random person can't go to a museum and then immediately go and paint exactly like another artist, but that's what the current LLM offerings allow
See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc
That's not exactly how LLMs learn either, they require huge amounts of training data to be able to imitate a style. And lots of human artists are able to imitate the style of one another as well, so I'm not sure what makes LLMs so different.
Regardless of whether you think IP laws should prevent LLMs from training on works under copyright, I hardly think the situation is beyond dispute. Whether copyright itself should even exist is something many dispute.
it's not the "exactly same sense". If an AI generated website is based on a real website, it's not like photography and painting, it is the same craft being compared.
But DID the Luddites overreact?
They sought to have machines serve people instead of the other way around.
If they had succeeded in regulation over machines and seeing wealth back into the average factory worker’s hands, of artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem to form unions and regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of people
It's also important that most of AI content created is slop. On this website most people stand against AI generated writing slop. Also, trust me, you don't want a world where most music is AI generated, it's going to drive you crazy. So, it's not like photography and painting it is like comparing good and shitty quality content.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
Portrait photography works whether or not there is a painting of the subject... LLMs cannot exist unless specifically consuming previous works! The authors of those works have every right to be upset about not being financially compensated, unlike painters.
I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM everytime you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
Actually, I already prefer AI to static training materials these days. But instead of looking for a static training material, I treated it like a coach.
Recently I had to learn SPARQL. What I did is I created an MCP server to connect it to a graph database with SPARQL support, and then I asked the AI: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.
With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
Hot takes from 2023, great. Work with AIs has changed since then, maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work etc. Or don't.
Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.
Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think when things are bad that you must take any and all clients (and when things are bad enough you might be forced to), but that is not a good plan and to be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone I don't need your business.
Sure, but it seems here that they are rejecting everything related to AI, which is probably not a smart business move, as they also remark, since this year was much harder for them.
The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road", so be it, but they should be prepared to accepts the consequences of worse revenues.
Is it though? We don't know the future. Is this just a dip in a growing business, or sign of things to come? Even if AI does better than the most optimistic projections it could still be great for a few people to be anti-ai if they are in the right place selling to the right people.
It's not as simple as putting all programmers into one category. There can be oversupply of web developers but at the same time undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
> If you are a very good developer, you will always be in demand.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now honest", latest news about Tesla's is that it can't cope with leaves; I certainly *hope* that a decade from now will still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
Five years ago we had pretty good static analysis tools for popular languages which could automate certain aspects of code reviews and catch many common defects. Those tools didn't even use AI, just deterministic pattern matching. And yet due to laziness and incompetence many developers didn't even bother taking full advantage of those tools to maximize their own productivity.
The devs themselves can still be lazy, claude and copilot code review can be automated on all pull requests by demand of the PM — and the PM can be lazy and ask the LLMs to integrate themselves.
Static analysis was pretty limited imho. It wasn't finding anything that interesting. I spent untold hours trying to satisfy SonarQube in 2021 & 2022. It was total shit busy work they stuck me with because all our APIs had to have at least 80% code coverage and meet a moving target of code analysis profiles that were updated quarterly. I had to do a ton of refactoring on a lot of projects just to make them testable. I barely found any bugs and after working on over 100 of those stupid things, I was basically done with that company and its bs. What an utter waste of time for a senior dev. They had to have been trying to get me to quit.
Even if someday we get AI that can generalize well, the need for a person who actually develops things using AI is not going anywhere. The thing with AI is that you cannot make it responsible, there will still be a human in the loop who is responsible for conveying ideas to the AI and controlling its results, and that person will be the developer. Senior developers are not hired just because they are smart or can write code or build systems, they are also hired to share the load of responsibility.
Someone with a name, an employment contract, and accountability is needed to sign off on decisions. Tools can be infinitely smart, but they cannot be responsible, so AI will shift how developers work, not whether they are needed.
Even where a human in the loop is a legal obligation, it can be QA or a PM, roles as different from "developer" as "developer" is from "circuit designer".
A PM or QA can sign off only on process or outcome quality. They cannot replace the person who actually understands the architecture and the implications of technical decisions. Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
If AI becomes powerful enough to generate entire systems, the person supervising and validating those systems is, functionally, a developer — because they must understand the technical details well enough to take responsibility for them.
Titles can shift, but the role dont disappear. Someone with deep technical judgment will still be required to translate intent into implementation and to sign off on the risks. You can call that person "developer", "AI engineer" or something else, but the core responsibility remains technical. PMs and QA do not fill that gap.
> They cannot replace the person who actually understands the architecture and the implications of technical decisions.
LLMs can already do that.
What they can't do is be legally responsible, which is a different thing.
> Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
Legal responsibility and technical responsibility are not always the same thing; technical responsibility is absolutely in the domain of PM and QA, legal responsibility ultimately stops with either a certified engineer (which software engineering famously isn't), the C-suite, the public liability insurance company, or the shareholders depending on specifics.
Ownership requires legal personhood, which isn't the same thing as philosophical personhood, which is why corporations themselves can be legal owners.
Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let it do everything itself, though of course that's a bad idea because it's got all the skill and quality of result you'd expect if I said "endless hoarde of fresh grads unwilling to say 'no' except on ethical grounds".
“certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.
Waymo is doing very well around San Francisco, which is certainly very challenging city driving. Yes, it doesn't snow there. Maybe areas with winter storms will never have autonomous vehicles. That doesn't mean there isn't a lot of utility created even now.
My original point, clearly badly phrased given the responses I got, is that the promises have been exceeding the reality for a decade.
Musk's claims about what Tesla's would be able to do wasn't limited to just "a few locations" it was "complete autonomy" and "you'll be able to summon your car from across the country"… by 2018.
Some people will lose their homes. Some marriages will fail from the stress. Some people will chose to exit life because of it all.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s
What was a little different then was that tech jobs paid about 30% more than other jobs, it wasn't anything like the highs we have seen the last few years. I used to describe it as you used to have the nicer house on the block, but then in the 2010s+ FNG salaries had people living in whole other neighborhoods. So switching out of the industry, while painful was not as traumatic. Obviously though having to actually flip burgers was a move of desperation and traumatic. The .com bust was largely centered around SV as well, in NYC (where I live) there was some fallout, but there was still a tailwind of businesses of all sorts expanding their tech footprint, so while you may not have been able to land at a hot startup and dream of getting rich in an IPO, by the end of 2003 it was mostly stabilized and you could likely have landed a somewhat boring corporate job even if it was just building internal apps.
I feel like there are a lot of people in school or recently graduated though that had FNG dreams and never considered an alternative. This is going to be very difficult for them. I now feel, especially as tech has gone truly borderless with remote work, that this downturn is now way worse than the .com bust. It has just dragged on for years now, with no real end in sight.
The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.
.com implosion, tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech jobs section of the classifieds was omitted entirely for 20 months. There have been other bumps in the road since then but that was a real eye-opener.
well same like covid right??? digital/tech company overhiring because everyone is home and at the same time the rise of AI reduce the number of headcount
covid overhiring + AI usage = massive layoff we ever see in decades
It was nothing like covid. The dot com crash lasted years where tech was a dead sector. Equity valuations kept declining year after year. People couldn't find jobs in tech at all.
There are still plenty of tech jobs these days, just less than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s is not an exaggeration.
After the first Internet bubble popped, service levels in Silicon Valley restaurants suddenly got a lot better. Restaurants that had struggled to hire competent, reliable employees suddenly had their pick of applicants.
History always repeats itself in the tech industry. The hype cycle for LLMs will probably peak within the next few years. (LLMs are legitimately useful for many things but some of the company valuations and employee compensation packages are totally irrational.)
I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over entries in JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: So simple for the simple stuff!)
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regugitates most of SO these days for simple questions.
It's not a pejorative statement, I used to live in Stack Overflow.
But the killer feature of an LLM is that it can synthesize something based on my exact ask, and does a great job of creating a PoC to prove something, and it's cheap from time investment point of view.
And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)
I think that's OP's point though, Ai can do it better now.
No searching, no looking. Just drop your question into Ai with your exact data or function and 10 seconds later you have a working solution. Stackoverflow is great but Ai is just better for most people.
Instead of running a google query or searching in Stackoverflow you just need a chatGPT, Claude or your Ai of choice open in a browser. Copy and paste.
Ive honestly never intentionally visited it (as in, went to the root page and started following links) - it was just where google sent me when searching answers to specific technical questions.
Nope. The main problem with expertsexchange was their SEO + paywall - they'd sneak into top Google hits by showing crawler full data, then present a paywall when actual human visits. (Have no idea why Google tolerated them btw...)
SO was never that bad, even with all their moderation policies, they had no paywalls.
Often the answer to the question was simply wrong, as it answered a different question that nobody made. A lot of times you had to follow a maze of links to related questions, that may have an answer or may lead to a different one. The languages that it was most useful (due to bad ecosystem documentation) evolved in a rate way faster than SO could update their answers, so most of the answers on those were outdated...
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
the gatekeeping, gaming the system, capricious moderation (e.g. flagged as duplicate), and general attitude led it to be quite an insufferable part of the internet. There was a meme about how the best way to get a response is to answer your own question in an obviously incorrect fashion, because people want to tell you why you're wrong rather than actively help.
I don't think it matters. Whether it was a fault of incentives or some intrinsic nature of people given the environment, it was rarely a pleasant experience. And this is one of the reasons it's fallen to LLM usage.
Memories of years ago on Stack Overflow, when it seemed like every single beginner python question was answered by one specific guy. And all his answers were streams of invective directed at the question's author. Whatever labor this guy was doing, he was clearly getting a lot of value in return by getting to yell at hapless beginners.
I did not look for a consulting contract for 18 years. Through my old network more quality opportunities found me than I could take on.
That collapsed during the covid lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects, even when mine (that they had already approved) would save them 400K a year, they did not care! Top down the word came to cut everyone -- so they did.
This trend is very much a top down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave and unlikely to respond to whatever pressures they like to apply.
Since then it's been more of the same as far as consulting.
I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.
An unexpected benefit of all the pain was I like making things again... but I am using claude code and gemini. Amazing tools if you have experience already and you know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.
>> even when mine (that they had already approved) would save them 400K a year
You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. limit on savings vs. unlimited revenue growth) there's the psychological component of teams and management. I saw this in the energy sector where our company had two products: selling to the drilling side was focused on helping get more oil & gas; selling to the remediation side was fulfill their obligations as cheaply as possible. IT / dev at a non-software company is almost always a cost center.
> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.
The problem is that many places don't see the cost portions of revenue centers as investment, but still costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz(1) outsourcing their website reservation system to Accenture to comically bad results. The website/app is how people reserve cars - the most important part of the revenue generating system.
> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.
Best advice I got in school is -- at least early in your career-- work in the main line of business for your company. So if you are in marketing, work for a marketing firm, an accountant, work for an accounting firm.. etc. Video game designer: work for a video game developer.
Later you can have other roles but you make your mark doing the thing that company really depends on.
> Best advice I got in school is -- at least early in your career-- work in the main line of business for your company
Related advice I got - work in the head office for your company if possible. Definitely turned out to be a good call in my case as the satellite offices closed one by one over time.
I would go further and say that even at software companies, even for dev that goes directly into the product, engineering is often seen as a cost center.
The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"
So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.
>> even when mine (that they had already approved) would save them 400K a year
You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers
Totally agree. This is a big reason I went into solutions consulting.
In that particular case I mentioned it was a massive risk management compliance solution which they had to have in place, but they were getting bled dry by the existing vendor, due to several architectural and implementation mistakes they had made way back before I ever got involved, that they were sort of stuck with.
I had a plan to unstuck them at 1/5 the annual operating cost and better performance. Presented it to executives, even Amazon who would have been the infr vendor, to rave reviews.
We had a verbal contract and I was waiting for paperwork to sign... and then Feb 2020... and then crickets.
I work as a consultant and tend to focus on helping startups grow their revenue. And what you're saying here is almost word for word what I often recommend as the *first thing* they should do.
In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. Ex. Instead of having your website say "save X dollars on Y" try "earn X more dollars using Y". It's incredible how much impact simple messaging can have on your conversion rates.
This extends beyond just revenue. Focusing on revenue centers instead of cost centers is a great career advice as well.
Very few people suspected that github is being used to train the ai when we were all pushed the best practice of doing frequent commit.
a little earlier very few suspected that our mobile phone is not only listening to our conversations and training some ai model but also all its gyrometers are being used to profile our daily routine. ( keeping mobile for charging near our pillow) looking at mobile first thing in morning.
Now when we are asked to use ai to do our code. I am quite anxious as to what part of our life are we selling now .. perhaps i am no longer their prime focus. (50+) but who knows.
Going with the flow seems like a bad advice. going Analog as in iRobot seems the most sane thing.
>> Going with the flow seems like a bad advice. going Analog as in iRobot seems the most sane thing.
I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.
Its not easy transferring miniDV film to my computer, but the standard resolution has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around too.
I'm in the same age range and couldn't fathom becoming a developer in the early aughts and being in the midst of a gold rush for developer talent to suddenly seeing the entire tech world contract almost over night.
If I had gone with the flow in 1995 I would have got my MCSE and worked for a big government bureaucracy.
Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...
Exactly right about Github. Facebook is the same for training on photos and social relationships. etc etc
They needed to generate a large body of data to train our future robot overlords to enslave us.
We the 'experienced' are definitely not their target -- too much independence of thought.
To your point I use an old flip phone an voip even though I have written iOS and android apps. My home has no wifi. I do not use bluetooth. There are no cameras enabled on any device (except a camera).
I have worked with a lot of code generation systems.
LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then for core logic esp learning or meta-programming patterns etc. I need to jump in.
Breaking tasks down to bite size, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision and not being afraid to throw whole days away (not too often tho!)
For ref I have built stuff that goes way beyond CRUD app with these tools in 1/10th of the time it previously took me or less -- the key though is I already knew how to do and how to validate LLM outputs. I knew exactly what I wanted a priori.
Code generation technically always 'replaced' junior devs and has been around for ages, the results of the generation are just a lot better now., whereas in the past it was mixed bag of benefits/hassles doing code generation regularly, now it works much better and the cost is much less.
I started my career as a developer and the main reasons I became a solutions systems guy were money and that I hated the tedium boilerplate phase of all software development projects over a certain scale. I never stoped coding because I love it -- just not for large enterprise soul destroying software projects.
Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.
Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?
I would imagine it has a lot to do with the programming language and other technologies in the project. The LLMs have tons of training data on JS and React. They probably have relatively little on Erlang.
Mass of learning material doesn't equal quality though. The amount of poor react code out there is not to underestimate. I feel like llm generated gleam code was way cleaner (after some agentic loops due to syntactic misunderstanding) than ts/react where it's so biased to produce overly verbose slob.
Even if you're using JS/React, the level of sophistication of the UI seems to matter a lot.
"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.
But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.
Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't particularly doted with the best UX practices.
Other times, I very much care for things to be done in a very specific way. Sometimes due to regulatory constraints, others because of visual/code consistency, or some other reasons. In those cases, getting the AI to produce what I need specifically feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.
>I will say that being social and being in a scene at the right time helps a lot
I concur with that and that's what I tell every single junior/young dev. that asks for advice: get out there and get noticed!
People who prefer to lead more private lives, or are more reserved in general, have far fewer opportunities coming their way, they're forced to take the hard path.
>I'm not for/or against a particular style, it must be real nice if life just solves everything for you while you just chill or whatever. But, a nice upside of being made of talent instead of luck is that when luck starts to run out, well, ... you'll be fine anyway :).
Talent makes luck. Ex-colleagues reach out to me and ask me to work with them because they know the type of work I do, not because it's lucky.
Also wtf did I just read. Op said he uses his network to find work. And you go on a rant about how you're rising and grinding to get that bread, and everything you have ever earned completely comes from you, no help from others? Jesus Christ dude, chill out.
My perspective is just as valid, and I also wrote,
>I'm not for/or against a particular style
... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)
>Ex-colleagues reach out to me and ask me to work with them
Never happened to me, that's the point I'm making.
1. I wish work just landed at my feet.
2. As that never happened and most likely was never going to happen, I had to learn another set of skills to overcome that.
3. That made me a much more resilient individual.
(4. This is not meant as criticism to
@arthurfirst's style. I wish clients just called me and I didn't have to save all that money/time I spend taking care of that)
In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
I don't think we're really disagreeing here. You're saying "this is the way things are", I'm saying "I salute anyone who tries to change the way things are".
Even in the linked post the author isn't complaining that it's not fair or whatever, they're simply stating that they are losing money as a result of their moral choice. I don't think they're deluded about the cause and effect.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
Some users might not mind the lack of control, but beyond a certain point it stops making sense to strive to be in that diminishing set and starts making sense to fix the bug.
We've always tolerated a certain portion of society who finds the situation unacceptable, but don't you suspect that things will change if that portion is most of us?
Maybe we're not there yet, idk, but the article is about the unease vs the data, and I think the unease comes from the awareness that that's where we're headed.
If you're only raised in a grifter's society, sure. Money is to be conquered and extracted.
But we came definetly shift back to a society where money is one to help keep the boat afloat for everyone to pursue their own interests, and not a losing game of Monopoly where the rich get richer.
Voting is a good start. Not just in nationals but look at local policy. So much of this is bottom up.we got into this by voting against oir best interests for at best liars and at worst blatant crooks.
Past that, simply look at the small actions on your life. These build and define your overall character. It's hard to vote for collective bargaining of you have trouble complimenting your family at the table. You need to appreciate and feel a part of a community to really come together.
This all sounds like mumbo jumbo on the outside, but just take some time to reflect a bit. People don't wake up one day and simply think "you know, this really is all the immigrant's fault". That's a result of months or year of mindset.
I don't think that's necessarily what money is, but it is kind of what sufficiently unregulated capitalism is, which is what we've had for a while now.
I was talking to a friend of mine about a related topic when he quipped that he realized he started disliking therapy when he realized they effectively were just teaching him coping strategies for an economic system that is inherently amoral.
> So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.
Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.
We shouldn't have to choose between principles and food on the table.
Sometimes companies become irrelevant while following the market, while other companies revolutionize the market by NOT following it.
It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.
>the options are follow the market or find a different line of work if you don’t like the way the market is going
You can also just outlive the irrationality. If we could stop beating around the bush and admit we're in a recession, that would explain a lot of things. You just gotta bear the storm.
It's way too late to jump on the AI train anyway. Maybe one more year, but I'd be surprised if that bubble doesn't pop by the end of 2027.
No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best to not to train it. I left GitHub & Reddit because of this, and not uploading new photos to Instagram. The jury is still out on how I'm gonna share my photography, and not sharing it is on the table, as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it, and the system is demanding to betray ourselves for its own benefit.
That future innovation is in fact higher productivity. Equality is super important but we are simply not good enough yet at what we do, societally, for everyone everywhere to live as good a life as we enjoy, regardless of how we distribute.
Before that AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact high resolution versions to be downloaded.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
This metaphor implies a sort of AI inevitably. I simply don't believe that's the case. At least, not this wave of AI.
The people pushing AI aren't listening to the true demand for AI. This, its not making ita money back. That's why this market is broken and not prone to last.
Yeah but the business seems to be education for web front end. If you are going to shun new tech you should really return to the printing press or better copying scribes. If you are going to do modern tech you kind of need to stick with the most modern tech.
Printing press and copying scribes is a sarcastic comment, but these web designers are still actively working and their industry is 100s of years from the state of those old techs. The joke isn’t funny enough nor is the analogy apt enough to make sense.
No it is a pretty good comparison. There is absolutely AI slop but you have to be sticking your head in the sand if you don’t think AI will not continue to shape this industry. If you are selling learning courses and are sticking your head in the sand, well that’s pretty questionable.
I find this very generic what you are saying and they.
What stance against AI? Image generation is not the same as code generation.
There are so many open source projects out there, its a huge difference than taking all the images.
AI is also just ML so should i not use image bounding box algorithm? Am i not allowed to take training data online or are only big companies not allowed to?
I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demmand that fuels further investment, and as a contractor, where you're a temporary cost, especially if you and people who depend on you necessitate it to survive.
A studio taking on temporary projects isn't investing into AI— they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage constantly second-guessing themselves.
> This is effectively no different from a construction company building an office building, or a bakery baking a cake.
A construction company would still be justified to say no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. But the bakery example feels even more relevant, seeing as a bakery refusing to bake a cake base on the owner's religious beliefs ended up in the US Supreme Court [1].
I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.
> As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.
That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".
<< over-focusing on small moral cursades against specific players like this and not the game as a whole
Fucking this. What I tend to see is petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
nobody is against his moral stance. the problem is that he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim. if you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc. then "selling out" is optional and bad. if you're joe schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you’re not some noble hero, you’re just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
The ceo of every one of those Ai companies drives an expensive car home to a mansion at the end of the workday. They are set. The average person does not and they cannot afford to play the principled stand game. Its not a question of right or wrong for most, its a question of putting food on the table
I'm not sure I understand this view. Did seamstresses see sewing machines as amoral? Or carpenters with electric and air drills and saws?
AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g drills) vs maybe a specific company (e.g Ryobi) seems an odd take to me.
Its cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point it doesn't matter to you then by all means, do what you want. The vast majority of people are not.
I assume they are weathering the storm if they are posting like this and not saying "we're leaving the business". A proper business has a war chest for this exact situation (though I'm unsure of how long this businesses has operated)
As someone who has sold video tech courses since 2015, I don't know about the future.
I don't want to openly write about the financial side of things here but let's just say I don't have enough money to comfortably retire or stop working but course sales over the last 2-3 years have gotten to not even 5% of what it was in 2015-2021.
It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".
Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just traffic has gone down to 20x less than it used to be. Traffic dictates sales and that's how I think I arrived in this situation.
It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.
Almost 100% of sales come from organic searches. Usually people would search for things like "Docker course" or "Flask course" and either find my course near the top of Google or they would search for some specific problem related to that content and come across a blog post I wrote on my main site which linked back to the course somewhere (usually).
Now the same thing happens, but there's 20x less sales per month.
I've posted almost 400 free videos on YouTube as well over the years, usually these videos go along with the blog post.
A few years back I also started a podcast and did 100 weekly episodes for 2 years. It didn't move the needle on course sales and it was on a topic that was quite related to app development and deployment which partially aligns with my courses. Most episodes barely got ~100 listens and it was 4.9 rated out of 5 on major podcast platforms, people emailed me saying it was their favorite show and it helped them so much and hope I never stop but the listener count never grew. I didn't have sponsors or ads but stopped the show because it took 1 full day a week to schedule + record + edit + publish a ~1-2 hour episode. It was super fun and I really enjoyed it but it was another "invest 100 days, make $0" thing which simply isn't sustainable.
This is always sad to hear. I really want more educational material out there that isn't just serving "beginner bait" and I'd love love love more technical podcasts out there. But it seems like not much of the audience is looking for small creators for that. Perhaps they only focus on conference studies.
And yeah, I agree with the other reponsder that AI + Google's own enshittification of search may have cost your site traffic.
I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
The industry is not really shifting. It's not shifting to anything. It's just that the value is being captured by parasitic companies. They still need people like me to feed them training data while they destroy the economics of producing that data.
And million of people happily thumb down or up for their RL / Feedback.
The industry is still shifting. I use LLMs instead of StackOverflow.
You can be as dismissive as you want, but that doesn't change the fact that millions of people use AI tools every single day. People start using AI based tools.
The industry overall is therefore shifting money and goals etc. into direction of AI.
I'm not convinced. I've heard all the justifications and how it saved someone's marriage (too bad it ended that other relationship).
The numbers don't line up. The money from consumers isn't there, the money isn't actually there in B2B. It's not going to last. Refulations will catch up and strain things further once the US isnt in a grifter's administration and people get tired of not having jobs.Its a huge pincer attack on 4 fronts.
After the crash and people need to put their money where their mouth is, let's see how much people truly value turning their brains off and consuming slop. There will be cool things from it, but not in this current economy.
Until then, The bubble will burst.this isn't the 10's anymore and the us government doesn't have the money to bail out corporate this time.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff
If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
> If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already the industry where it is arguably the most "present" (forced in) is SWE and its proving to be quite disappointing... Where I work the more senior you are the less AI you use
Even if it isn't, the OP can still make hay while the sun is still shining, even if it'll eventually set, as the saying goes. But to not make hay and slowly see it set while losing your income, I won't ever understand that.
Yeah, gotta disagree with this one. Every senior and above around me have figured a workflow that makes their job faster. Internal usage dashboards say the same thing.
Pretty sure HN has become completely detached from the market at this point.
Demand for AI anything is incredible high right now. AI providers are constantly bouncing off of capacity limits. AI apps in app stores are pulling incredible download numbers.
Sora's app has a 4.8 rating on the app store with 142K rating. It seems to me that the market does not care about slop or not, whether I like it or not.
I don't understand why you're being downvoted, you're not wrong. I think Suno being successful bums me out, I really hate it, but people that are not me love it. I can't do anything about that.
Maybe not now. I imagine it'll go the way of many other things: buy demand with a product that beats alternatives in perceived quality and/or cost -> create a dependence on the product -> wait for the death of competition -> monetize heavily on a dependent userbase.
The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.
The only thing that's going to change is the quality of the slop will get better by the year.
They sure aren't paying for it. It's great how we're on a business topic we're not talking about the fact that the market demand doesn't match the investment put into it.
> The market wants a lot more high quality AI slop
"High quality AI slop" is a contradiction in terms. The relevant definitions[1] are "food waste (such as garbage) fed to animals", "a product of little or no value."
By definition, the best slop is only a little terrible.
'I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too.'
I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to project involving AI equals contribution to some imagined evil ( oddly, without explictly naming what they envision so it is harder to respond to ). I have my personal qualms, but run those through my internal ethics to see if there is conflict. Unless author predicts 'prime intellect' type of catastrophe, I think the note is either shifting blame and just justifying bad outcomes with moralistic: 'I did the right thing' while not explaining the assumptions in place.
Its been 3 years and its been the most talked about topic on HN. If you really don't know at this point, you are choosing to remain ignorant. I can't help you here.
If you genuinely are unaware of the issues, it's a very easy topic to research. Heck, just put "AI" into HN and half the articles will cover some part of the topic.
See.. here is a problem. You say 'actual' ethics as if those were somehow universal and not ridiculously varied across the board. And I get it, you use the term, because a lot of readers will take it face value AND simply use their own value system to translate them into what agrees with them internally. I know, because I do the same thing when I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
With that said, mister ants in the pants, what does actual mean to you in this particular instance?
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
Uhh.. do we really want to do ethics 101 ( and likely comparative religions based on your insisting all ethical considerations are universal across the human experience )? Please reconsider your statement, because it is not 'basically'; not by a long shot.
I don't know shit about ethics numbers. Nor do I believe in any comparative religions. All I know is that you claimed to do the following:
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
I read this thread and I'm not even sure what your point is if all your comments are just going to be cryptic instead of actually stating your point clearly. As a reader, not even the person you're responding too, it's not useful to write like this.
I'm terribly sorry. I admit that it might be possible, at least in theory, to force me to emit "useful" writing! What makes you think you deserve that, though?!
Everyone who uses this forum deserves that, it's basic etiquette when speaking to other people. If one were as dismissive to you in real life, you'd probably be annoyed just the same.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I started TextQuery[1] with same moralistic standing. Not in respect of using AI or not, but that most software industry is suffering from rot that places more importance on making money, forcing subscription vs making something beautiful and detail-focused. I poured time in optimizing selections, perfecting autocomplete, and wrestling with Monaco’s thin documentation. However, I failed to make it sustainable business. My motivation ran out. And what I thought would be fun multi-year journey, collapsed into burnout and a dead-end project.
I have to say my time was better spent on building something sustainable, making more money, and optimizing the details once having that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
Software is a means to an end. It always has been. There are a privileged few who have the luxury of being able to thoughtfully craft software. The attention to detail needs to go into what people see, not in the code underneath.
>It was naïve to obsess over subtleties that only a handful of users would ever notice.
"When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through." - Steve jobs
Didn't take long for people to abandon their principles, huh?
It's very likey the main reason that small businesses like local restaurants, bakeries, etc. fail. People start them based on a fantasy and don't know how to watch the hard realities of expenses and income. But like gravity, there's no escaping those unless you are already wealthy enough for it all to just be a hobby.
If the fish are in a natural reserve, then you pretty much pit your soil on the line. We're missing that detail here and treating it as if this is the difference between one lake or another
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
It's unfair to place all the blame on the individual.
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
>everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world.
Yeah we kind of are. So many chances to learn and push to reverse policy. Yet look how we voted.
>sometimes you just have to do what you have to do, and one of those things is pay your taxes.
If it's between being homeless and joining ICE... I'd rather inflict the pain on myself than others. There are stances I will take, even of AI isn't the "line" for me personally. (But in not gonna optimize my portfolio towards that either).
>By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
I mean, the Iraq War polled very well. Bush even won an election because of it, which allowed it to continue. Insofar as they have a semblance of democracy, yes, Americans are responsible. (And if their government is pathological, they're responsible for not stopping it.)
>But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
Two things. One, you don't have to pay taxes if you're rich. Two, tax protests are definitely a thing. You actually don't have to pay them. If enough people coordinated this, maybe we'd get somewhere.
if the alternative to 'selling out' is making your business unviable and having to beg the internet for handouts(essentially), then yes, you should "sell out" every time.
Thank you. I would imagine the entire Fortune 500 list passes the line of "evil", drawing that line at AI is weird. I assume it's a mask for fear people have of their industry becoming redundant, rather than a real morality argument.
"Works with Google" in what way? And at what tome-frame? Even as someone who's actively decoupling from Google it's hard to truly de-Googlefy in this world as is.
Bingo. Moral grandstanding only works during the boom, not the come down. And despite being as big an idealist as they come, sometimes you just gotta do what you gotta do. You can crusade, but you're just making your future self more miserable trying to pretend that you are more important than you think. Not surprising in an era of unbridled narcissism, but hey, that's where we are. People who have nothing to lose fail to understand this, whereas if you have a family, you don't have time for drum circles and bullshit: you've got mouths to feed.
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and the author is not saying they're a victim due to no doing of their own by disclosing it plainly. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: majority of American GDP is AI this or that, and majority of it has no real substance behind it.
I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA for example, considering a good chunk are ultimately used for war, the CIA, NSA, etc, but nobody judges an average US-resident for paying them.
>They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons.
They very well might be. Websites can be made to promote a variety of activity.
>This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them
That's not what "marketing" is. This is OpenAI coming to your firm and saying "I need you to make a poster saying AI is the best thing since Jesus Christ". That very much will reflect on you and the industry at large as you create something you don't believe in.
> They very well might be. Websites can be made to promote a variety of activity.
This is disingenuous and inflamatory, and a manichaeist attitude I very much see in rich western nations for some reason. I wrote about this in another comment: it's sets people off on a moral crusade that is always against the players but rarely against the system. I wish more people in these countries would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI as one of many examples.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage constantly second-guessing themselves.
I fail to see how. Why would I not hold some personal responsibility for what I built?
Its actually pretty anti-western to have that mindset since that's usually something that pops up in collectivist societies.
>it's sets people off on a moral crusade that is always against the players but rarely against the system.
If you contribute to the system you are part of the system. You may not be "the problem" but you don't get guilt absolved for fanning the flames of a fire you didn't start.
I'm not suggesting any punishment for enablers. But guilt is inevitable in some people over this, especially those proud of their work.
>I wish more people in these countries would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims,
I can and do.
>The problem isn't AI. The problem is a system where new technology means millions fearing poverty.
Sure. Doesn't mean AI isn't also a problem. We're not a singlethreaded being. We can criticize the symptoms and attack the source.
>over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment
I don't disagree. But the topic at hand is about AI, and talking about politics here is the only thing that gets nastier. I have other forums to cover that (since HN loves to flag politics here) and other IRL outlets to contribute to the community here.
Doesn't mean I also can't chastise how utterly sold out this community can be on AI.
Sorry for them- after I got laid off in 2023 I had a devil of a time finding work to the point my unemployment ran out - 20 years as a dev and tech lead and full stack, including stints as a EM and CTO
Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
> 20 years as a dev and tech lead and full stack, including stints as a EM and CTO
> Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.
As CTO I wasnt in a big tech company, it was a 50 person digital studio in the south
my salary as was 275k at the highest point in my career- so I never made FAANG money
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~15 to 10 years ago and they had it easy. It was still a nightmare with banks that don’t want to deal with US citizens, though.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
If you don't have a university degree, most of EU/EEA immigration policy wants nothing to do with you, even if you're American or have several YoE. Source: am a self-taught US dev who has repeatedly looked into immigration to northern/western Europe over the years. If anything it continually gets more stringent every time I look. Forget looking for jobs, there's not even visa paths for most countries.
But isn't the same true for the US? To me it seems it's pretty similar both for Europeans moving to the US and Americans moving to the EU: have higher education, find a job, get a work visa...?
Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
I made a career out of understanding this. In Germany it’s quite feasible. The only challenge is finding affordable housing, just like elsewhere. The other challenge is the speed of the process, but some cities are getting better, including Berlin. Language is a bigger issue in the current job market though.
Counter: come to Taiwan! Anyone with a semi active GitHub can get a Gold Cars Visa. 6 months in you're eligible for national health insurance (about 30$ usd/month). Cost of living is extremely low here.
However salaries are atrocious and local jobs aren't really available to non mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you wanna build, there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
Taking a 75% pay cut for free Healthcare that costs 1k a month anyway doesn't math. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology as it increasingly shows us what it really is, would be a terrible move for the long term.
It is such an “interesting” statement in on many levels.
Market has changed -> we disagree -> we still disagree -> business is bad.
It is indeed hard to swim against the current.
People have different principles and I respect that, I just rarely
- have so much difficulty understanding them
- see such clear impact on the bottom line
Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.
Arguably you shouldn't trifle your argument by decorating it when fundamentally it is rock solid. I wonder if the author would consider just walking away from tech when they realize what a useless burden its become for everyone.
There is not a single person in this thread that thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.
counter example: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are “AI” to me and others in the industry
I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago
I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?
When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it really left me scratching my head, why now and not before? Turns out, people call it immoral when they see it threatens its livelihood and come up with all sorts of justifications that seem justifiable, but when you dig underneath it, it's all about their economic anxiety, nothing more. Humans are not direct creatures, it's much more emotional than one would expect.
You take a pile of input data, use a bunch of code on it to create a model, which is generally a black box, and then run queries against that black box. No human really wrote the model. ML has been in use for decades, in various places. Google Translate was an "early" convert. Credit card fraud models as well.
The industry joke is: What do you call AI that works? Machine Learning.
What do LLMs have to do with typing on phones, cancer research, or TTS?
Deciding not to enable a technology that is proving to be destructive except for the very few who benefit from it, is a fine stance to take.
I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.
I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.
I wish I had Walmart in my area, the grocery stores here suck.
It is a similar example. Just like you and I have different options about whether Walmart is a net benefit or net detriment to society, people have starkly different opinions as to whether LLMs are a net benefit or net detriment to society.
People who believe it's a net detriment don't want to be a part of enabling that, even at cost to themselves, while those who think it's a net benefit or at least neutral, don't have a problem with it.
Intentionally or not, you are presenting a false equivalency.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
LLMs do not lie. That implies agency and intentionality that they do not have.
LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.
How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around of quite some time, will continue to be, and even that much of that will be with LLMs.
>Consider my comment a reminder that ethical use of AI has been around of quite some
You can be among a a swamp and say "but my corner is clean". This is the exact opposite of the rotten barrel metaphor. You're trying to claim your sole apple is so how not rotted compared to the fermenting that is came from.
You have reasonably available context here. "This year" seems more than enough on it's own.
I think there is ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the weath-concentration machine, they could be really cool. Locally hosted llms are kinda awesome. As it is, they are basically just theft from the public and IP laundering.
There's a moral line that every person has to make about what work they're willing to do. Things aren't always so black and white, we straddle that line The impression I got reading the article is that they didn't want to work for bubble ai companies trying to generate for the sake of generate. Not that they hated anything with a vector db
I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
This isn't a bootcamp course. I don't think Andy's audience is one trying to convert an HTML course into a career wholesale. It's for students or even industry people who want a deeper understanding of the tech.
Not everyone values that, but anyone who will say "just use an LLM instead" was never his audience to begin with.
I think it's more likely that software training as an industry is dead.
I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.
I'm afraid of what the future will look like 10+ years down the line after we've gutted humans from the workforce and replaced them with AI. Companies are going to be more faceless than they've ever been. Nobody will be accountable, you won't be able to talk to anyone with a pulse to figure out a problem (that's already hard enough). And we'll be living in a vibe coded nightmare governed by executives who were sold on the promise of a better bottom line due to nixing salaries/benefits/etc.
I don't think it will get that bleak, but it still is a good time to build human community regardless. This future only works for a broken society who can't trust their neighbor. You have the power to reverse that if you wish.
How do you measure „absolute top tier“ in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn’t be addressed by just reading the MDN docs or Can I Use, so maybe I’ve missed some complexity along the way.
If one asks you "Why do you consider Pablo Picasso's work to be outstanding", then "Look at his work?" is not a helpful answer. I've been asking about parent's way to judge the outstandingness of HTML/CSS work. Just writing "damn solid" websites isn't distinguishing.
To be frank, someone who needs to be told why to appreciate art probably isn't going to appreciate Picasso. You can learn art theory, but you can't just "learn" someone's life, culture, and expression. All the latter is needed to appreciate Picasso.
But I digress.
Anyways, I can't speak for the content itself, but I can definitely tell on the javascript coirse from the trailer and description that they understand the industry and emphasize how this is focused towards those wanting a deep dive on the heart of web, not just another "tutorial on how to use newest framework". Very few tech courses really feel like "low level" fundamentals these days.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
Hmm. This is hand made clothes and furniture vs factory mass production.
Nobody doubts the prior is better and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.
Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.
Quality also varied over time, if I recall correctly. Machine made generally starts worse, but with refinement ends up better from superhuman specialization of machines to provide fine detail with tighter tolerances than even artisans can manage.
The only perk artisans enjoy then is uniqueness of the product as opposed to one-size fits all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine made.
As we see with tech, mass production isn't an instant advantage in this market. In fact, something bespoke has a higher chance to stand out here than most other industries.
And no, I don't think people are seeking demand for AI website slop the way they do for textiles. Standing out is a good way to get your product out there compared to being yet another bloated website that takes 10 seconds to load with autoplay video generic landing text.
I'd liken it to Persona 5 in the gaming market. No one is playing a game for its UI. But a bespoke UI will make the game all the more memorable, and someone taking the time for that probably pjt care into the rest of the game as well (which you see on its gameplay, music, characters, and overall presentation).
A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.
Having the most well tested backend and beautiful frontend that works across all browsers and devices and not just on the main 3 browsers your customers use isn't paying the bills.
If you're telling a craftman to ignore their craft, then you're falling on deaf ears. I'm a programmer, not a businessman. If everyone took the advice of 'I don't need a good website' then many devs would be out of business.
Fact is there's just less businesses forming, so there's less demand for landing sites or anything else. I don't see this as a sign that 'good websites don't matter'
I think there's a difference between seeing yourself as a craftsman / programmer / engineer as a way to solve problems and deliver value, and seeing yourself as an HTML/CSS programmer. To me the latter is pretty risky, because technologies, tastes, and markets are constantly changing.
It's like equating being a craftsman with being someone who a very particular kind of shoe. If the market for that kind of shoe dries up, what then?
I sure hope no web dev sees tbemself only as an HTML/CSS programmer. But I also hope any web dev who sees themselves as a craftsman can profess mastery over HTML/CSS. Your fundamentals are absolutely key.
Its why I'm still constantly looking at and practicing linear algebra as an aspiring "graphics programmer". I'm no mathematician but I should be able to breath matrix operations as a graphics programmer. Someone who dismisses their role to "just optimizing GPU stacks" isn't approaching the problem as a craftsman.
And I'll just say that's also a valid approach and even an optimal one for career. But courses like that aren't tailored towards people who want to focus on "optimizing value" to companies.
> When 99.99% of the customers have garbage as a website
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
Yeah, sorry. I will praise plenty of Amazon's scale, but not their deception, psychological manipulation, and engagement traps. That goes squarely in "trash website".
I put up with a lot, but the price jumpsa was finally the trigger i needed to cancel prime this year. I don't miss it.
Struggling because they're deliberately shooting themselves in the foot by not taking on the work their clients want them to take. If you don't listen to the market, eventually the market will let you fall by the way side.
I'm sure author's company does good work, but the marketplace doesn't respond well to, "we're really, _really_ good,", "trust me," "you won't be disappointed." It not only feels desperate, but is proof-free. Show me your last three great projects and have your customers tell me what they loved about working with you. Anybody can say, "seriously, we're really good."
the "trust me" has a trailer, testimony from industry experts, and gasp a good looking website that doesnt chug and still looks modern and dynamic. Bonus points for the transparency about 2025, we don't get much of that these days.
It could still be trash, but they are setting all the right flags.
His business seems to be centered around UI design and front-end development and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design but from my experience people don't really care about small details in most cases.
I appreciate and respect that this org is avoiding AI hype work, but I don't know if there are long term reputational benefits. Clients are going to be more turned off by your reasons not to do work than your having a "principled business".
From the clients perspective, it's their job to set the principles (or lack thereof) and your job to follow their instructions.
That doesn't mean it's the wrong thing to do though. Ethics are important, but recognise that it may just be for the sake of your "soul".
I do. But sadly I don't have money and December/January are my slowest months these past few years. I'm exactly that "money is tight" crowd being talked about.
After reading the post I kept thinking about two other pieces, and only later realized it was Taylor who had submitted it. His most recent essay [0] actually led me to the Commoncog piece “Are You Playing to Play, or Playing to Win?” [1], and the idea of sub-games felt directly relevant here.
In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.
I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.
The author has painted themselves into a corner. They refuse to do business with companies that use AI, and they try to support their business with teaching courses, which is also being impacted by AI.
They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?
> However they need to face current reality. What value-add can they provide in areas not impacted by AI?
I'm sure the author has thought much longer on this than I, but I get the vibes here of "2025 was uniquely bad for reasons in and outside of AI". Not "2025 was the beginning of the end for my business as a whole".
I don't think demand for proper engineering is going away, people simply have less to spend. And oncestors have less to invest or are all in gambling on AI. It's a situation that will change for reasons outside the business itself.
> we won’t work on product marketing for AI stuff, from a moral standpoint
I fundamentally disagree with this stance. Labeling a whole category of technologies because of some perceived immorality that exists within the process of training, regardless of how, seems irrational.
My post had the privilege of being on front page for a few minutes. I got some very fair criticism because it wasn't really a solid article and was written when traveling on a train when I was already tired and hungry. I don't think I was thinking rationally.
I'd much rather see these kind of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push based traffic. You get assigned work and you have x time to solve all this. If you don't, we'll have some extremely tedious reflection meeting that leads to nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it 3 times because customer B is more important
I don’t think they’re unique. They’re simply among the first to run into
the problems AI creates.
Any white-collar field—high-skill or not—that can be solved logically will
eventually face the same pressure. The deeper issue is that society still has
no coherent response to a structural problem: skills that take 10+ years to
master can now be copied by an AI almost overnight.
People talk about “reskilling” and “personal responsibility,” but those terms
hide the fact that surviving the AI era doesn’t just mean learning to use AI
tools in your current job. It’s not that simple.
I don’t have a definitive answer either. I’m just trying, every day, to use AI
in my work well enough to stay ahead of the wave.
>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
Yes I find this a bit odd. AI is a tool, what specific part of it do you find so objectionable OP? For me, I know they are never going to put the genie back in the bottle, we will never get back the electricity spent on it, I might as well use it. We finally got a pretty good Multivac we can talk to and for me it usually gives the right answers back. It is a once in a lifetime type invention we get to enjoy and use. I was king of the AI haters but around Gemini 2.5 it just became so good that if you are hating it or criticizing it you aren’t looking at it objectively anymore.
I feel for the author. I do both mechanical and software engineering and I’m in this career(s) because I love making things and learning how to do that really well. Been having the most difficult time accepting the idea that there isn’t a good market for people like us - artisans, craftsmen, whatever the term might be - who are obsessive about exceptional quality and the time and effort it takes to get there. In this day and age, and especially when LLMs look ever more like they can produce at least a cheap, dollar store approximation of the real deal, “doing things really well” is going to be relegated to an ever more niche market.
I had a discussion yesterday with someone that owns a company creating PowerPoints for customers. As you might understand, that is also a business that is to be hit hard by AI. What he does is offer an AI entry level option, where basically the questions he asks the customer (via a Form) will lead to a script for running AI. With that he is able to combine his expertise with the AI demand from the market, and gain a profit from that.
I guess then, that he is relying on his customers not discovering that there are options out there that will do this for them, without a "middle man" as it were. Seems like shaky ground to be standing on, but I suppose it can work for a while, if he already has good relationships in his industry.
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
In the case of the author, their market isn't LLM makers directly, it's the people who use those LLMs, so the author's market is much bigger and isn't susceptible to collapse if LLM makers go bankrupt (because they can just go back to what they are already doing now pre-LLM), quite the opposite as this post shows.
That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
Careful now, if they get their way, they’ll be both the market and the government.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
It's also the exact same human beings who were doing the NFT and metaverse bullshit and insisting they were the next best things and had to jump ship to the next "Totally going to change everything" grift because the first two reached the end of their runs.
I wonder what their plan was before LLMs seemed promising?
These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
It's not really whether one has invested in the companies or not, it's more that we can see the author shooting themselves in the foot by not wanting to listen to the market. It's like selling vinegar at a lemonade stand (and only insisting on selling vinegar, not lemonade). It's simply logically nonsensical to us "Americans."
Wishing these guys all the best. It's not just about following the market. It's about the ability to just be yourself. When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing. You simply just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I hope things with the AI will settle soon and there will be applications that actually make sense and some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with the AI.
All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.
>When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing.
No, that's the most important situation to consider the moral thing. My slightly younger peers years back were telling everyone to eat tide pods. That's a pretty important time to say "no, that's a really stupid idea", even if you don't get internet clout.
I'd hope the tech community of all people would know what it's like to resist peer pressure. But alas.
>But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I don't see that at all in the article. Quite the opposite here actually. I just see a person being transparent about their business and morals and commentors here using it to try and say "yea but I like AI". Nothing here attacked y'all for liking it. The author simply has his own lines.
By victim blaming I meant some comments here. I can relate to the author, and the narrative that it's my fault for trying to be myself and keep to my ways triggers me.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
>fixing other website that LLM generated is the future now
I barely like fixing human code. I can't think of a worse job than fixing garbage in, garbage out in order to prop up billionaires pretending they don't need humans anymore. If that's the long term future then it's time for a career shift.
I'm still much more optimistic about prospects, fortunately.
> same thing would happen with AI generated website
Probably even moreso. I've seen the shit these things put out, it's unsustainable garbage. At least Wordpress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But that is exactly what the "is ought problem" manifests, or? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally-defined goals. To you it's scary, to others it is the way it should be.
> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML and CSS and Javascript, beautiful designs and user-oriented interfaces. Sadly most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites we had an army of amateur designers and business owners doing it with WordPressl, paying $10/hr or less on Upwork and Fiverr.
Software people are such a "DIY" crowd, that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor mismatch.
I have a family member that produces training courses for salespeople; she's doing fantastic.
This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.
Finally, if you're against AI, you might have missed how good of a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck-on-video-rails, is huge time-saver.
>Software people are such a "DIY" crowd, that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor mismatch.
I think courses like these are peak "DIY". These aren't courses teaching you to RTFM. It's teaching you how to think deeper and find the edge cases and develop philosophy. That's knowledge worth its weight in gold. Unlike React tutorial #32456 this is showing us how things really work "under the hood".
I'd happily pay for that. If I could.
>don't sell to musicians. They don't have any money
But programmers traditionally do have money?
>if you're against AI, you might have missed how good of a learning tool LLMs can be.
I don't think someone putting their business on the line with their stance needs yet another HN squeed on why AI actually good. Pretty sure they've thought deeply of this.
"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"
The market is literally telling them what it wants and potential customers are asking them for work but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis"
This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate and training in specific is non-negligible
* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shove short AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
I'm fairly sure all the first three points are true for each new human produced. The environmental cost vs output is probably significantly higher per human, and the population continues to grow.
My experience with large companies (especially American Tech) is that they always try and deliver the product as cheap as possible, are usually evil and never cared about social impacts. And HN has been steadily complaining about the lowering of quality of search results for at least a decade.
I think your points are probably a fair snapshot of peoples moral issue, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable sounding morality point so they don't have to admit that.
Plenty of people have moral concerns with having children too.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.
Should people not look for reasons to be concerned?
Okay. Why are we comparing a commentor answering a question to a FOSS organization who wants to align contributiors? You seem to have completely side tracked the conversation you started.
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
Interesting. I agree that this has been a hard year, hardest in a decade. But comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right no? For me, it was the easiest year of my career when i basically did nothing and picked up money thrown at me.
Too much demand, all of a sudden. Money got printed and i went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.
And it continued growing nonstop all the way through ~early Sep 2024, and been slowing down ever since, by now coming to an almost complete stop - to the point i ever fired all sales staff because they were treading water with no even calls let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return - custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it did, because it never worked, sooner or later people had to realise it. I may check again in 2-3 years how market is doing, but i'm not at all hopeful.
I simply have a hard time following the refusal to work on anything AI related. There is AI slop but also a lot of interesting value add products and features for existing products. I think it makes sense to be thoughtful of what to work on but I struggle with the blanket no to AI.
My domain is games. It's a battlefield out there (pun somewhat intended). I ain't touching anything Gen-AI until we figure out what the hell is going on with regards to copyright, morality of artists, and general "not look like shit"-ness.
Sad part is I probably will still be accused of using AI. But I'll still do my best.
I'm critical of AI because of climate change. Training and casual usage of AI takes a lot of resources. The electricity demand is way too high. We have made great progress in bringing a lot of regenerative energy to the grid, but AI eats up a huge part of it, so that other sectors can't decarbonize as much.
We are still nowhere near to get climate change under control. AI is adding fuel to the fire.
I noticed a phenomenon on this post - many people are tying this person's business decisions to some sort of moral framework, or debating the morality of their plight.
"Moral" is mentioned 91 times at last count.
Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?
EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my question - morals is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talking about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."
You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI in ways that can drastically improve activation e.g. recommendation to use feature X based on activity Y, tailored to their use case.
If you won't even lean into things like this you're just leaving yourself behind.
>here's a lot of real meat to it that will obviously survive the hype cycle.
Okay. When the hype cycle dies we can re-evaluate. Stances aren't set in stone.
>If you won't even lean into things like this
I'm sure Andy knows what kind of business was in his clients and used that to inform his acceptance/rejection of projects. It mentions web marketing so it doesn't seem like much edutech crossed ways here.
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Sounds like a self inflicted wound. No kids I assume?
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Therefore, why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.
I’ve seen Picallilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...
I get the moral argument and even agree with it but we are a minority and of course we expect to be able sell our professional skills -- but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
You might as well work on product marketing for ai because that is where the client dollars are allocated.
If it's hype at least you stayed afloat. If it's not maybe u find a new angle if you can survive long enough? Just survive and wait for things to shake out.
Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. the line between moral value and heavy preference, however, is blurry - and is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.
What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. that's not to say two things cant both be bad, but it just looks to me like people are using the moral argument as a means to avoid learning something new while being able to virtue signal how ethical they are about it, while at the same time they refuse to sacrifice things they are already accustomed to for ethical reasons when they learn more about it. It just all seems rather convenient.
the main issue I see talked about with it is in unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isnt unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?
Maybe we should be targeting these AI companies if they are unethical and stop them from training any new models using the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public, but models that are already trained can actually be used for good and I personally see it as unethical not to.
Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get
>> The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?
That's very similar to other unethical processes(for example child labour), and we see that government is often either too slow to move or just not interested, and that's why people try to influence the market by changing what they buy.
It's similar for AI, some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time signal to the companies that they wouldn't be future customers of the next model.
(I'm not necessarely in the group of people avoiding to use AI, but I can see their point)
> Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals.
Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Turning the page is a valid choice though. Sometimes a clean slate is what you need.
> Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.
Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
> For example, should we let people die rather than use medical knowledge gained unethically?
Depends if you are doing it 'for their own good' or not.
Also the ends do not justify the means in the world of morals we are discussing -- that is pragmatism / utilitarianism and belongs to the world of the material not the ideal.
Finally - Who determines what is ethical? beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines that is more important than the context or the particulars.
>Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.
Lots of room for nuance here, but generally Id say its more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself youre going to influence something you have no control over while compromising on your values. i am going to emphasize the relevance of something being an actual moral or ethical dilemma vs something being a very deep personal preference or matter of identity/personal branding.
>Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.
I agree, it is a real loss and I don't mean for it to be treated lightly but if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.
>Depends if you are doing it 'for their own good' or not.
what do you mean by this?
I am not posing a hypothetical. modern medicine has plenty of contributions to it from unethical sources. Should that information be stripped from medical textbooks and threaten to take licenses away from doctors who use it to inform their decision until we find an ethical way to relearn it? Knowing this would likely allow for large amounts of suffering to go untreated that could have otherwise been treated? I am sincerely trying not to make this sound like a loaded question
also, this is not saying the means are justified. I want to reiterate my point of explicitly not justifying the means and saying the actors involved in the means should be held maximally accountable.
I would think from your stance on the first bullet point you would agree here - as by removing the product from the process you are able to adapt it to your morals.
>Finally - Who determines what is ethical?
I agree that philosophically speaking all ethics are relative, and I was intending to make my point from the perspective of navigating the issues as in individual not as a collective making rules to enforce on others. So you. you determine what is ethical to you
However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where most everyone agrees some level of ethics is required. This is usually done through consensus and committees with people who are experts in ethics and experts in the relevant field its being applied to.
AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines
>It's final.
You should tell that to all the failed businesses Jobs had or was ousted out of. Hell, Trump hasn't really had a single successful business in his life.
Nothing is final until you draw your last breath.
>Who determines what is ethical? beyond the 'golden rule'?
To be frank, you're probably not the audience being appealed to in this post if you have to suggest "ethics can be relative". This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations who have guidelines if you need some legalese sense of what "ethical" is here.
> but once the damage is done why let it happen in vain?
Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distribute the IP and profits to the public? If this process will be facilitated by government, I don’t have faith they’ll be able to allocate capital well enough to keep the current operation sustainable.
>but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.
I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.
> but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?
yes [0]
[0]: https://en.wikipedia.org/wiki/Raytheon
Can you... elaborate?
Not the parent.
I believe that they are bringing up a moral argument. Which I'm sympathetic too, having quit a job before because I found that my personal morals didn't align with the company, and the cognitive dissonance to continue working there was weighing heavily on me. The money wasn't worth the mental fight every day.
So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.
Moral arguments are a luxury of thinkers and only a small percentage of people can be reasoned with that way anyways. You can manipulate on morals but not reason in most cases.
Agreed that you cannot be in a toxic situation and not have it affect you -- so if THAT is the case -- by all means exit asap.
If it's perceived ethical conflict the only one you need to worry about is the golden rule -- and I do not mean 'he who has the gold makes the rules' I mean the real one. If that conflicts with what you are doing then also probably make an exit -- but many do not care trust me... They would take everything from you and feel justified as long as they are told (just told) it's the right thing. They never ask themselves. They do not really think for themselves. This is most people. Sadly.
But the parent didn't really argue anything, they just linked to a Wikipedia article about Raytheon. Is that supposed to intrinsically represent "immorality"?
Have they done more harm than, say, Meta?
>they just linked to a Wikipedia article about Raytheon
Yeah, that's why I took a guess at what they were trying to say.
>Is that supposed to intrinsically represent "immorality"?
What? The fact that they linked to Wikipedia, or specifically Raytheon?
Wikipedia does not intrinsically represent immorality, no. But missile manufacturing is a pretty typical example, if not the typical example, of a job that conflicts with morals.
>Have they done more harm than, say, Meta?
Who? Raytheon? The point I'm making has nothing to do with who sucks more between Meta and Raytheon.
Well, sure, I'm not disagreeing with the original point about moral choice, and in fact I agree with it (though I also think that's a luxury, as someone else pointed out).
But if someone wants to make some blanket judgement, I am asking for a little more effort. For example, I wonder if they would think the same as a Ukrainian under the protection of Patriot missiles? (also produced by Raytheon)
Here are Raytheon part markings on the tail kit of a GBU-12 Paveway glide bomb that Raytheon sold to a corrupt third word dictator, who used that weapon to murder the attendees of an innocent wedding in a country he was feuding with.
https://www.bellingcat.com/news/middle-east/2018/04/27/ameri...
I know the part number of every airplane part I have ever designed by heart, and I would be horrified to see those part numbers in the news as evidence of a mass murder.
So, what is your moral justification for defending one of the world’s largest and despised weapons manufacturers? Are you paid to do it or is it just pro-bono work?
Excuse me, do you make personal attacks on anyone who dares ask for an actual reasoned argument?
Most if not all aerospace companies also produce military aircraft, right? Or is your reasoning that if your particular plane doesn't actually fire the bullets, then there's no moral dilemma?
Defending? I am simply pointing out the obvious flaws in your logic.
If you think Raytheon is the apex evil corporation you are very mistaken. There is hardly any separation between mega corps and state above a certain level. The same people are in majority control of IBM, Procter & Gamble, Nike, and Boeing, Lockheed Martin, etc, etc.
Stop consuming marketing materials as gospel.
What you see as this or that atrocity on CNN or whatever that is produced *propaganda*, made for you, and you are swallowing it blindly without thinking.
Also the responsibility is of course down to individuals and their actions-- whether you know their names or not. Objects do not go to war on their own.
I've also worked in aerospace and aviation software but that doesn't preclude me from thinking clearly about whether I'm responsible for this or that thing on the news involving planes -- you might want to stop consuming that.
yes [0]
[0]: https://en.wikipedia.org/wiki/No
Has anyone considered that the demand for web sites and software in general is collapsing?
Everyone and everything has a website and an app already. Is the market becoming saturated?
I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.
The demand is massively increasing, but filled by less people and more GPUs.
And new companies are created every day, and new systems are designed every day, and new applications are needed every day.
The market is nowhere close to being saturated.
> In my opinion this business died 2 years ago
It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.
I think your post pretty well illustrates how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions on it like from how LLM is basically AGI to how it's "total garbage" but this is a good, balanced - and concise! - overview.
markets are not binary though, and this is also what it looks like when you're early (unfortunately similar to when you're late too). So they may totally be able to carve out a valid & sustainable market exactly because theyu're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".
This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool and those wielding the tools the best, tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government. There are a few people who might exaggerate these moral issues for self service but they wouldn't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.
> If AI can make things 1000x more efficient,
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically mildly reduced their working time to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption were opened. So my prediction is while you'll never live in a 900,000sqft apartment (unless we get O'Neill cylinders from our budding space industry) you'll probably consume a lot more, while still working a full week
40h is probably up from pre-industrial times.
Edit: There is some research covering work time estimates for different ages.
We could probably argue to the end of time about the qualitative quality of life between then and now. In general a metric of consumption and time spent gathering that consumption has gotten better over time.
I don't think a general sentiment matters much here when the important necessitate are out of reach. The hierarchy of needs is outdated, but the inversion of it is very concerning.
We can live without a flat screen TV (which has gotten dirt cheap). We can't live without a decent house. Or worse, while we can live in some 500 sq ft shack we can't truly "live" if there's no other public amenities to gather and socialize without nickel-and-diming us.
What was all this free time spent doing in the pre-industrial era?
pre-industrial? Lots of tending to the farm, caring for family, and managing slaves I suppose. Had some free time between that to work with your community for bonding or business dealings or whatnot.
Don't think slave management was the average pre-industrial experience.
Depends on your region, but let's not pretend it was some rarity pre cotton gin. You didn't need to be as rich as yo think to have slave labor.
Quite the leap to go from "pre-industrial people" to "Antebellum US Southerners", and even then the majority of that (hyperspecific) group did not own slaves.
If you include all pre-industrial people in history, then yes enslavement of outside groups is very much the norm not the exception.
Alternating between grinding your knife and making wood sculptures.
>you'll probably consume a lot more, while still working a full week
There's more to cosume than 50 years ago, but I don't see that trend continuing. We shifted phone bills to cell phone bills and added internet bills and a myriad of subscriptions. But that's really it. everything was "turn one time into subscrition".
I don't see what will fundamentally shift that current consumption for the next 20-30 years. Just more conversion of ownership to renting. In entertainment we're already seeing revolts against this as piracy surges. I don't know how we're going to "consume a lot more" in this case.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
You can consume a lot less on a surprisingly small salary, at least in the U.S.
But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:
- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.
- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
> - One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
It's not clear that it's still possible to function in society today with out a cell phone and a cell phone plan. Many things that were possible to do before without one now require it.
> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.
Maybe you can replace these with the cell phone + plan.
> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.
> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
Agreed.
> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.
> rural areas where cars are a necessity
> It's not clear that it's still possible to function in society today with out a cell phone
Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
---
As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.
Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.
And certainly, in 1945 (80 years ago), people would've made due with fewer cars in those areas.
This idea that increased consumption over the past century has been irrelevant to quality of life is just absurd.
> on a surprisingly small salary
But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.
Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)
It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.
(Not disagreeing with the rest of your post though)
This didn't say they wanted to consume less, presumably their consumption is the right level for them.
You aren't wrong and I agree up to a point. But I've watched a couple of people try to get by on just "cutting" rather than growing their incomes and it doesn't work out for them. A former neighbor was a real Dave Ramsey acolyte and even did things like not have trash service (used dumpsters and threw trash out at his mother's house). His driveway was crumbling but instead of getting new asphalt he just dug it all up himself and dumped it...somewhere, and then filled it in with gravel. He drives junker cars that are always breaking down. I helped him replace a timing chain on a Chrysler convertible that wasn't in awful shape, but the repairs were getting intense. This guy had an average job at a replacement window company but had zero upward mobility. He was and I assume is, happy enough, with a roof over his head and so forth, but our property taxes keep rising, insurance costs keep rising, there's only so much you can cut. My take is that you have to find more income and being looked upon as "tight with a buck" or even "cheap" is unfavorable.
I've given up pretty much all of that out of necessity, yes. Insurance and rent still goes up so I'm spending almost as much as I was at my peak, though.
>I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
Of course it's changed. The point is that
1. the necessities haven't changed and have gotten more expensive. People need healthcare, housing, food, and tranport. All up.
2. the modern day expectations means necessities change. We can't walk into a business and shake someone's hand to get a job, so you "need" access to the internet to get a job. Recruiters also expect a consistent phone number to call so good luck skipping a phone line (maybe VOIP can get around this).
These are society's fault as they shifted to pleasing shareholders and outsourcing entire industries (and of course submitted to lobbying). so I don't like this blame being shifted to the individual for daring to consume to survive.
What is the alternative?
Voting in people who can actually recognize the problem and make sure corporationa cant ship all of America's labor overseas. Blaming ourselves for society's woes only pushes the burden further on the people, instead of having them collectively gather and push back against those at fault.
So you are agreeing with the parent? If consumption has gone up a lot and input hours has gone down or stayed flat, that means you are able to work less.
> or stayed flat
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
Not unless rent is cheap, it doesn't. It might mean my boss is able to work less.
Rent can be pretty cheap depending upon where you live. If you want to live in a high cost of living area, that's a form of consumption.
>If you want to live in a high cost of living area, that's a form of consumption.
Not really a "want" as much as "move where the jobs are". Remote jobs are shakey now and being in the middle of nowhere only worsens your compensation aspects. Being able to live wherever you please is indeed a luxury. The suburb structure already sacrificed the aspect of high CoL for increase commute time to work.
I also do think that dismissing aspects of humanity like family, community and sense of purpose to "luxuries" is an extremely dangerous line of thinking.
If I live somewhere, and maintain the building myself, what's being consumed?
The spot of land is being consumed, no? If it's HCoL, clearly that's land that a lot of people wish they could live on but can't.
But I'm not paying rent to them.
I mean, yeah? Does any market work like that? If you want an apple, you pay the person who has the apple to take it from them, you don't pay the other people who want apples. Not really following where this is going
Save up and then FIRE; retire early by moving to a lower cost of living area.
I think FIRE was basically just a fad for awhile. I say this as a 52 year old "retiree" who isn't working right now and living off investment income. It takes a shitload of wealth to not have to work and I'm borderline not real comfortable with the whole situation. I live in a fairly HCoL area and can't up and move right now (wife has medical needs, son in high school, daughter in college). I'd be freaking out if I didn't have a nest egg, we would be trying to sell our house in a crap market. As it stands, I don't really want to go on like I am, my life is a total waste right now.
It's not a "fad," it's a mathematical observation that investing more generates more returns. Maybe the media was covering it more at some point but the concept itself is sound. You are in fact FIREd by the same definition, it's just that in your case it seems you would need more money than you have currently due to the factors you stated, but that's not the fault of the concept of FIRE in general. And anyway, there are lots of stories of people doing regular or leanFIRE too, it doesn't require so much wealth as to be unreachable if you have a middle class job. For example, https://www.reddit.com/r/leanfire/s/67adPxZeDU
If you think your life is a waste right now, do something with it. That's actually the number one thing people don't expect from being retired, how bored they get. They say in FIRE communities that all the money and time in the world won't help if you don't actually utilize it.
save up at what job?
At the job you work currently? Or if you're unemployed, then this advice doesn't work of course.
Well I got work but the pay is minimal and supplemented by what freelance gigs I can grab. Not much to save per paycheck.
you can consume as much as an average person from 1950's by working just a few days a week.
It's not always possible to live like a person from the 1950s due to societal changes. And many jobs that pay well do not allow you to work part time.
Not cigarettes I can't!
That sounds like a nightmare. Let’s sell out a generation so that we can consume more. Wow.
Boomers in a nutshell. Do a bunch of stuff to keep from building more housing to prop up housing prices (which is much of their net worth), and then spend until you're forced to spend the last bit to keep yourselves alive.
Then the hospital takes the house to pay off the rest of the debts. Everybody wins!
They signed it for you as there will be 1000x less workers needed so they didn't need to ask anymore.
You will probably be dead.
But _somebody_ will be living in a 900,000 sq ft apartment and working an hour a week, and the concept of money will be defunct.
>They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.
And I believe they (and I) are suggesting that this is just a bad faith spin on the market, if you look at actual AI confidence and sentiment and don't ignore it as "ehh just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor are happy about it.
I don't think using the 2025 US government for a moral compass is helping your case either.
>If AI can make things 1000x more efficient
Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?
AI is just a tool, like most other technologies, it can be used for good and bad.
Where are you going to draw the line? Only if it effects you, or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines to ban all technology that threatens sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to it's fullest, even if you have laws to ban it, other countries will allow it.
The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).
You are thinking too small.
The goal of AI is NOT to be a tool. It's to replace human labor completely.
This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.
To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people runnign them care at all.
I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
>Where are you going to draw the line?
How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).
I'd go so far to try and prevent the obvious and say "LLM's cannot be used to advertise product". but baby steps.
>AI as a technology is almost impossible to stop.
Not really a fan of defeatism speak. Tech isn't as powerful as billionaire want you to pretend it is. It can indeed be regulated, we just need to first use our civic channels instead of fighting amongst ourselves.
Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.
So only the wealthy can afford legal, medical, and financial advice in your hypothetical?
What makes you think that in the world where only the wealthy can afford legal, medical, and financial advice from human beings, the same will be automatically affordable from AI?
It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?
If you're going to approach this on such bad faith, then I'll simply say "yes" and move on. People can male bad decisions, but that shouldn't be a profitable business.
> AI is just a tool, like most other technologies, it can be used for good and bad.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.
Regardless of whether you use AI or social media, your happiness (or lack thereof) is largely under your own control.
>your happiness (or lack thereof) is largely under your own control.
Not really. Or at least, "just be happy" isn't a good response to someone homeless and jobless.
> The same could be said of social media
Yes, absolutely.
Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, mots people here have seen examples of it done in a good way.
> In fact, mots people here have seen examples of it done in a good way.
Like this very website we're on, proving the parent's point in fact.
>Like this very website we're on
I don't know if HN 2025 has been a good example of "in a good way".
Why not?
A crowd of people continually rooting against their best interests isn't exactly what's needed for the solidarity that people claim is a boon from social media. Its not as bad as other websites out there, but I've see these flags several times on older forums.
It won't be as hard as you think for HN to slip into the very thing they mock Instagram of today for being.
Or maybe they differ on the opinions of what their own best interests are, which aren't yours by definition.
Uh huh, that's always how it starts. "Well you're in the minority, majority prevails".
Yup, story of my life. I have on fact had a dozen different times where I chose not to jump off the cliff with peers. How little I realized back then how rare that quality is.
But you got your answer, feel free to follow the crowd. I already have migrations ready. Again, not my first time.
If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.
AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.
I'm not sold yet whether LLMs are AI, my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, its extremely important when it comes to risk considerations.
Silent down votes, any explanations or counter points?
Because no one defines AI the way you seem to do here. LLMs and machine learning are in the field of artificial intelligence, AI.
How is ML a subset of AI?
Machine learning isn't about developing anything intelligent at all, its about optimizing well defined problem spaces for algorithms defined by humans. Intelligence is much more self guided and has almost nothing to do with finding the best approximate solution to a specific problem.
> Machine learning (ML) is a field of study in _artificial intelligence_ concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.
https://en.wikipedia.org/wiki/Machine_learning
You are free to define AI differently but don't be surprised if people don't share your unique definition.
The definition there is correct. ML is a a field of study in AI, that does not make it AI. Thermodynamics is a field of study in physics, that does not mean that thermodynamics is physics.
You know what, I'm going to take a walk.
What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"
Using LLMs doesn't kill people, I'm sure there are some exceptions like OpenAI's suicide that was in the news, but not to the degree of oxycontin.
>Using LLMs doesn't kill people
Guess you mmissed the post where lawyers were submitting legal documents generated by LLM's. Or people taking medical advice and ending up with hyperbromium consumptions. Or the lawsuits around LLM's softly encouraging suicide. Or the general AI psychosis being studied.
It's way past "some exceptions" at this point.
Besides the suicide one, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.
>I don't know of any examples where that has actually killed someone.
You don't see how botched law case can't cost someone their life? Let's not wait until more die to reign this in.
>Someone could search on Google just the same and ignore their symptoms.
Yes, and it's not uncommon for websites or search engines to be sued. Millenia of laws exist for this exact purpose, so companies can't deflect bad things back to the people.
If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.
LLMs generate text. It is people who decide what to do with it.
Removing all personal responsibility from this equation isn't going to solve anything.
>It is people who decide what to do with it.
That argument is rather naive, given that millenia of law is meant to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"
We've regulated public messages for decades, and for good reason. I'm not absolving them this time because they want to hide behind a chatbot. They have blood on their hands.
Sticks and stones, my friend...
If you were offended by that comment, I apologize. You're 99.99% not the problem and infighting gets us nowhre.
But you may indeed be vying against your best interests. Hope you can take some time to understand where you lie in life and if your society is really benefiting you.
I am not offended. And I'll be the one to judge my own best interests. (back to: "personal responsibility"). e.g. I have more information about my own life than you or anyone else, and so am best situated to make decisions for myself about my own beliefs.
For instance I work for one of the companies that produces some of the most popular LLMs in use today. And I certainly have a stake in them performing well and being useful.
But your line of reasoning would have us believe that Henry Ford is a mass murderer due to the number of vehicular deaths each year, or that the wright brothers bear some responsibility for 9/11. They should have foreseen that people would fly their planes into buildings, of course.
If you want to blame someone for LLMs hurting people, we really need to go all the way back to Alan Turing -- without him these people would still be alive!
>And I'll be the one to judge my own best interests thank you.
Okay, cool. Note that I never asked for your opinion and you decided to pop up in this chain as I was talking to someone else. Go about your day or be curious, but don't butt in then pretend 'well I don't care what you say' when you get a response back.
Nothing you said contradicted my main point. So this isn't really a conversation but simply more useless defense. Good day.
I said "sticks and stones" to suggest the end of that quote: "words can never hurt me". That's a response to your comment about LLMs hurting people.
Didn't think that would go so cleanly over your head given you're all the way up there on your high horse of morality.
Not yet maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause - I think it will become increasingly difficult to make the assertion you just did.
You're using data centers to read and post comments here.
You're entering a bridge and there's a road sign before it with a pictogram of a truck and a plaque below that reads "10t max".
According to the logic of your argument, it's perfectly okay to drive a 360t BelAZ 75710 loaded to its full 450t capacity over that bridge just because it's a truck too.
Your comment is valid as a criticism of an "unfettered free market", but further proves my point that things that work will win.
That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white. Defining the morals probably makes sense, then evaluating whether they can be lived or whether we can compromise in the face other priorities?
It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
The age old question: do people get what they want, or do they want what they (can) get?
Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?
> when the market is telling you loud and clear they want X
Does it tho? Articles like [1] or [2] seem to be at odd with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.
[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...
[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...
He is right though:
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”
ChatGPT can listen to you in real time, understands multiply languages very well and responds in a very natural way. This is breath taking and not on the horizon just a few years ago.
AI Transcription of Videos is now a really cool and helpful feature in MS Teams.
Segment Anything literaly leapfroged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings being shitier in their daily job than a LLM is.
1) it was failure of specific implementation
2) if you had read the paper you wouldn’t use it as an example here.
Good faith discussion on what the market feels about LLMs would include Gemini, ChatGPT numbers. Overall market cap of these companies. And not cherry picked misunderstood articles.
No, I picked those specifically. When Pets.com[1] went down in early 2000 it wasn't neither the idea, nor the tech stack that brought the company down, it was the speculative business dynamics that caused its collapse. The fact we've swapped technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".
I bet a few Pets.com exec were also wondering why people weren't impressed with their website.
[1]https://en.wikipedia.org/wiki/Pets.com
Do you actually want to get into the details on how frequently do markers get things right vs get things wrong? It would make the priors a bit more lucid so we can be on the same page.
Exactly. Microsoft for instance got a noticeable backlash for cramming AI everywhere, and their future plans in that direction.
[flagged]
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
Which founder is wrong? Not only the brainwashed here are entrepreneurs
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself as if it is everything that is evil and to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
That's definitely not what I am doing, nor implying, and while you're free to think it, please don't put words in my mouth.
>The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
We are basically 100-ϵ% the same. I have no doubt.
Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?
Ok change my qualifier from interpretation to description if it helps. I describe you as someone who dismisses AI in a maximalist way
>Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
>Ok change my qualifier from interpretation to description if it helps.
I... really don't think AI is what's wrong with you.
Yet to see anything good come from it, and I’m not talking about machine learning for specific use cases.
And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?
800 million weekly active users for ChatGPT. My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it. To do the contrary would be highly egoistic and suggest that I am somehow more intelligent than all those people and I know more about what they want for themselves.
I could obviously give you examples where LLMs have concrete usecases but that's besides the larger point.
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
Almost all smokers agree that it is harmful for them.
Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
But “good or desirable from a societal standpoint” isn’t what they said, correct me if I’m wrong. They said that people find a benefit.
People find a benefit in smoking: a little kick, they feel cool, it’s a break from work, it’s socializing, maybe they feel rebellious.
The point is that people FEEL they benefit. THAT’S the market for many things. Not everything obv, but plenty of things.
> The point is that people FEEL they benefit. THAT’S the market for many things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and then we should all pursuit them because that's what the market wants. And that was what I was pushing against, this idea that since 800M people are using GPT then we should all be ok doing AI work because that's what the market is demanding.
Its not that it is intrinsically good but that a lot of people consuming things from their own agency has to mean something. You coming in the middle and suggesting you know better than them is strange.
When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.
> a lot of people consuming things from their own agency has to mean something.
Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.
Because there are plenty of examples of things enjoyed by a lot of people who are, as a whole, bad. And they might not be bad for the individuals who are doing them, they might enjoy them, and find pleasure in them. But that doesn't make them desirable and also doesn't mean we should see them as market opportunities.
Drugs and alcohol are the easy example:
> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)
Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.
–––
> my first instinct is not to decry football as a problem in society.
My first instinct is not to decry nothing as a problem, not as a positive. My first instinct is to give ourselves time to figure out which one of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.
Ok, I'll bite: What's the harm of LLMs?
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I don't agree with most of these points, I think the points about atrophy, trust, etc will have a brief period of adjustment, and then we'll manage. For atrophy, specifically, the world didn't end when our math skills atrophied with calculators, it won't end with LLMs, and maybe we'll learn things much more easily now.
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
For the avoidance of doubt, I was not claiming that AI is the worst thing ever. I too think that complaints about that are generally overblown. (Unless it turns out to kill us all or something of the kind, which feels to me like it's unlikely but not nearly as close to impossible as I would be comfortable with[1].) I was offering examples of ways in which LLMs could plausibly turn out to do harm, not examples of ways in which LLMs will definitely make the world end.
Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)
If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).
[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
Any sort of atrophy of anything is because you don't need the skill any more. If you need the skill, it won't atrophy. It doesn't matter if it's LLMs or calculators or what, atrophy is always a non-issue, provided the technology won't go away (you don't want to have forgotten how to forage for food if civilization collapses).
We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
I don't disagree with your point and the thing you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant. The fact that wasn't predicted 20 years ago is what matters in this context.
[flagged]
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric. Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes but if I go with your priors then all of these are similarly to be suspect
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect, people engaging in consumption through their own agency is in general preferable. You can of course bring counter examples but they are more of caveats against my larger truer point.
Social media for sure and television and Netflix in general absolutely. But again, providing value is not the same as something being good. A lot of people think inaccuracies by LLMs to be of high value because it’s provided with nice wrappings and the idea that you’re always right.
This line of thinking made many Germans who thought they're on the right side of history simply by the virtue of joining the crowd, to learn the hard way in 1945.
And today's adapt or die doesn't sound less fascist than in 1930.
Are you going to hire him?
If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with this speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in authoritative tone for others, and redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
Website building started dying off when SquareSpace launched and Wix came around. WordPress copied that and its been building blocks for the most part since then. There are few unique sites around these days.
> we don't see the authors getting the recognition.
In that sense AI has been the biggest heist that has ever been perpetrated.
Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.
Authors still get recognition. If they are decent authors producing original, literary work. But the type of author that fills page five of your local news paper, has not been valued for decades. But that was filler content long before AI showed up. Same for the people that do the subtitles on soap operas. The people that create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
The claim of theft is simple: the AI companies stole intellectual property without attribution. Knowing how AIs are trained and seeing the content they produce, I'm not sure how you can dispute that.
Statistics are not theft. Judges have written over and over again that training a neural network (which is just fitting a high-dimensional function to a dataset) is transformative and therefore fair use. Putting it another way, me summarizing a MLB baseball game by saying the Cubs lost 7-0 does not infringe on MLB's ownership of the copyright of the filmed game.
People claiming that backpropagation "steals" your material don't understand math or copyright.
You can hate generative tools all you want -- opinions are free -- but you're fundamentally wrong about the legality or morality at play.
In the exact same way that it’s not theft if an artist-in-training goes to a museum to look at how other painters created their works.
False equivalence - a random person can't go to a museum and then immediately go and paint exactly like another artist, but that's what the current LLM offerings allow
See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc
That's not exactly how LLMs learn either, they require huge amounts of training data to be able to imitate a style. And lots of human artists are able to imitate the style of one another as well, so I'm not sure what makes LLMs so different.
Regardless of whether you think IP laws should prevent LLMs from training on works under copyright, I hardly think the situation is beyond dispute. Whether copyright itself should even exist is something many dispute.
it's not the "exactly same sense". If an AI generated website is based on a real website, it's not like photography and painting, it is the same craft being compared.
But DID the Luddites overreact? They sought to have machines serve people instead of the other way around.
If they had succeeded in regulation over machines and seeing wealth back into the average factory worker’s hands, of artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem to form unions and regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of people
It's also important that most of AI content created is slop. On this website most people stand against AI generated writing slop. Also, trust me, you don't want a world where most music is AI generated, it's going to drive you crazy. So, it's not like photography and painting it is like comparing good and shitty quality content.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
I don't think it's a meaningful distinction.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
Portrait photography works whether or not there is a painting of the subject... LLMs cannot exist unless specifically consuming previous works! The authors of those works have every right to be upset about not being financially compensated, unlike painters.
Reasonable people disagree about whether copying is theft, but everyone agrees that plagiarism is theft.
it is totally valid to NOT play the game - Joshua taught us this way back in the 80's
Totally agree, but I’d state it slightly differently.
This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.
I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM everytime you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
But what if the AI generated course was actually good, maybe even better than the human generated course? Which one would you pick then?
The answer is "the highest-ranked free one in a Google Search".
When a single such "actually good" AI-generated course actually exists, this question might be worth engaging with.
Actually, I already prefer AI to static training materials these days. But instead of looking for a static training material, I treated it like a coach.
Recently I had to learn SPARQL. What I did is I created an MCP server to connect it to a graph database with SPARQL support, and then I asked the AI: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.
With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
> And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.
I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.
> Arguing against progress as it is happening is as old as the tech industry. It never works.
I still wondering why I'm not doing my banking in Bitcoins. My blockchain database was replaced by postgres.
So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies track record.
And the doctors are still out on the affects of social media on children or why are some countries banning social media for children?
Not everything that comes out of Silicon Valley is automatically good.
Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)
>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most
There. Fixed!
AI is not a tool, it is an oracle.
Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
Hot takes from 2023, great. Work with AIs has changed since then, maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work etc. Or don't.
> if you combine the Stone Soup strategy with Clever Hans syndrome you can sell the illusion of not working for 8 billable hours a day
No thanks, I'm good.
Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.
"praying that the next prompt finally spits out something decent is not a business strategy."
well you just describing an chatgpt is, one of the most fastest growing user acquisition user base in history
as much as I agree with your statement but the real world doesn't respect that
> one of the most fastest growing user acquisition user base in history
By selling a dollar of compute for 90 cents.
We've been here before, it doesn't end like you think it does.
Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.
Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think when things are bad that you must take any and all clients (and when things are bad enough you might be forced to), but that is not a good plan and to be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone I don't need your business.
Sure, but it seems here that they are rejecting everything related to AI, which is probably not a smart business move, as they also remark, since this year was much harder for them.
The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road", so be it, but they should be prepared to accepts the consequences of worse revenues.
Is it though? We don't know the future. Is this just a dip in a growing business, or sign of things to come? Even if AI does better than the most optimistic projections it could still be great for a few people to be anti-ai if they are in the right place selling to the right people.
Without knowing the future I cannot answer.
People who make products with AI are not necessarily rich, often it's solo "vibe coders."
what happen if the market is right and this is "new normal"?????
same like StackOverflow down today and seems like not everyone cares anymore, back then it would totally cause breakdown because SO is vital
> what happen if the market is right and this is "new normal"?????
Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.
It's not as simple as putting all programmers into one category. There can be oversupply of web developers but at the same time undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
> If you are a very good developer, you will always be in demand.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now honest", latest news about Tesla's is that it can't cope with leaves; I certainly *hope* that a decade from now will still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
Five years ago we had pretty good static analysis tools for popular languages which could automate certain aspects of code reviews and catch many common defects. Those tools didn't even use AI, just deterministic pattern matching. And yet due to laziness and incompetence many developers didn't even bother taking full advantage of those tools to maximize their own productivity.
The devs themselves can still be lazy, claude and copilot code review can be automated on all pull requests by demand of the PM — and the PM can be lazy and ask the LLMs to integrate themselves.
And the LLMs can use the static analysis tools.
I can't even imagine what time wasting bs the LLMs are finding with static analysis tools! It's all just a circle jerk everywhere now.
Static analysis was pretty limited imho. It wasn't finding anything that interesting. I spent untold hours trying to satisfy SonarQube in 2021 & 2022. It was total shit busy work they stuck me with because all our APIs had to have at least 80% code coverage and meet a moving target of code analysis profiles that were updated quarterly. I had to do a ton of refactoring on a lot of projects just to make them testable. I barely found any bugs and after working on over 100 of those stupid things, I was basically done with that company and its bs. What an utter waste of time for a senior dev. They had to have been trying to get me to quit.
Even if someday we get AI that can generalize well, the need for a person who actually develops things using AI is not going anywhere. The thing with AI is that you cannot make it responsible, there will still be a human in the loop who is responsible for conveying ideas to the AI and controlling its results, and that person will be the developer. Senior developers are not hired just because they are smart or can write code or build systems, they are also hired to share the load of responsibility.
Someone with a name, an employment contract, and accountability is needed to sign off on decisions. Tools can be infinitely smart, but they cannot be responsible, so AI will shift how developers work, not whether they are needed.
Even where a human in the loop is a legal obligation, it can be QA or a PM, roles as different from "developer" as "developer" is from "circuit designer".
A PM or QA can sign off only on process or outcome quality. They cannot replace the person who actually understands the architecture and the implications of technical decisions. Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
If AI becomes powerful enough to generate entire systems, the person supervising and validating those systems is, functionally, a developer — because they must understand the technical details well enough to take responsibility for them.
Titles can shift, but the role dont disappear. Someone with deep technical judgment will still be required to translate intent into implementation and to sign off on the risks. You can call that person "developer", "AI engineer" or something else, but the core responsibility remains technical. PMs and QA do not fill that gap.
> They cannot replace the person who actually understands the architecture and the implications of technical decisions.
LLMs can already do that.
What they can't do is be legally responsible, which is a different thing.
> Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.
Legal responsibility and technical responsibility are not always the same thing; technical responsibility is absolutely in the domain of PM and QA, legal responsibility ultimately stops with either a certified engineer (which software engineering famously isn't), the C-suite, the public liability insurance company, or the shareholders depending on specifics.
Ownership requires legal personhood, which isn't the same thing as philosophical personhood, which is why corporations themselves can be legal owners.
ai can do code review? do people actually believe this? we have a mr llm bot, it is wrong 95% of the time
I have used it for code review.
Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let it do everything itself, though of course that's a bad idea because it's got all the skill and quality of result you'd expect if I said "endless hoarde of fresh grads unwilling to say 'no' except on ethical grounds".
I've been taking self-driving cars to get around regularly for a year or more.
waymo and tesla already operate in certain areas, even if tech is ready
regulation still very much a thing
“certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.
Waymo is doing very well around San Francisco, which is certainly very challenging city driving. Yes, it doesn't snow there. Maybe areas with winter storms will never have autonomous vehicles. That doesn't mean there isn't a lot of utility created even now.
My original point, clearly badly phrased given the responses I got, is that the promises have been exceeding the reality for a decade.
Musk's claims about what Tesla's would be able to do wasn't limited to just "a few locations" it was "complete autonomy" and "you'll be able to summon your car from across the country"… by 2018.
And yet, 2025, leaves: https://news.ycombinator.com/item?id=46095867
Leaves: https://news.ycombinator.com/item?id=46095867
Some people will lose their homes. Some marriages will fail from the stress. Some people will chose to exit life because of it all.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s
I'd sooner believe that a unicorn will fly over my house and poop out rainbow skittles on my lawn. Yeah /s for sure!
You and I both know we're probably headed for revolutionary times.
I'm young, please when was that and in what industry
After the year 2000. dot com burst.
An tech employee posted he looked for job for 6 months, found none and has joined a fast food shop flipping burgers.
That turned tech workers switching to "flipping burgers" into a meme.
What was a little different then was that tech jobs paid about 30% more than other jobs, it wasn't anything like the highs we have seen the last few years. I used to describe it as you used to have the nicer house on the block, but then in the 2010s+ FNG salaries had people living in whole other neighborhoods. So switching out of the industry, while painful was not as traumatic. Obviously though having to actually flip burgers was a move of desperation and traumatic. The .com bust was largely centered around SV as well, in NYC (where I live) there was some fallout, but there was still a tailwind of businesses of all sorts expanding their tech footprint, so while you may not have been able to land at a hot startup and dream of getting rich in an IPO, by the end of 2003 it was mostly stabilized and you could likely have landed a somewhat boring corporate job even if it was just building internal apps.
I feel like there are a lot of people in school or recently graduated though that had FNG dreams and never considered an alternative. This is going to be very difficult for them. I now feel, especially as tech has gone truly borderless with remote work, that this downturn is now way worse than the .com bust. It has just dragged on for years now, with no real end in sight.
I used to watch all of the "Odd Todd" episodes religiously. Does anyone else remember that Adobe Flash-based "TV show" (before YouTube!)?
The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.
.com implosion, tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech jobs section of the classifieds was omitted entirely for 20 months. There have been other bumps in the road since then but that was a real eye-opener.
well same like covid right??? digital/tech company overhiring because everyone is home and at the same time the rise of AI reduce the number of headcount
covid overhiring + AI usage = massive layoff we ever see in decades
It was nothing like covid. The dot com crash lasted years where tech was a dead sector. Equity valuations kept declining year after year. People couldn't find jobs in tech at all.
There are still plenty of tech jobs these days, just less than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s is not an exaggeration.
After the first Internet bubble popped, service levels in Silicon Valley restaurants suddenly got a lot better. Restaurants that had struggled to hire competent, reliable employees suddenly had their pick of applicants.
History always repeats itself in the tech industry. The hype cycle for LLMs will probably peak within the next few years. (LLMs are legitimately useful for many things but some of the company valuations and employee compensation packages are totally irrational.)
I haven’t visited StackOverflow for years.
I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over entries in JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: So simple for the simple stuff!)
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regugitates most of SO these days for simple questions.
It's not a pejorative statement, I used to live in Stack Overflow.
But the killer feature of an LLM is that it can synthesize something based on my exact ask, and does a great job of creating a PoC to prove something, and it's cheap from time investment point of view.
And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)
I think that's OP's point though, Ai can do it better now. No searching, no looking. Just drop your question into Ai with your exact data or function and 10 seconds later you have a working solution. Stackoverflow is great but Ai is just better for most people.
Instead of running a google query or searching in Stackoverflow you just need a chatGPT, Claude or your Ai of choice open in a browser. Copy and paste.
I stopped using it much even before the AI wave.
Ive honestly never intentionally visited it (as in, went to the root page and started following links) - it was just where google sent me when searching answers to specific technical questions.
It became as annoying as experts exchange the very thing it railed against!
Nope. The main problem with expertsexchange was their SEO + paywall - they'd sneak into top Google hits by showing crawler full data, then present a paywall when actual human visits. (Have no idea why Google tolerated them btw...)
SO was never that bad, even with all their moderation policies, they had no paywalls.
What was annoying about it?
Often the answer to the question was simply wrong, as it answered a different question that nobody made. A lot of times you had to follow a maze of links to related questions, that may have an answer or may lead to a different one. The languages that it was most useful (due to bad ecosystem documentation) evolved in a rate way faster than SO could update their answers, so most of the answers on those were outdated...
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
They SEOd their way into being a top search result by showing crawlers both questions and answers, but when you visited the answer would be paywalled
Stack Overflow’s moderation is overbearing and all, but that’s nowhere near at the same level as Expert Exchange’s baiting and switching
That despite their url's claim, they didn't actually have and sex change experts.
the gatekeeping, gaming the system, capricious moderation (e.g. flagged as duplicate), and general attitude led it to be quite an insufferable part of the internet. There was a meme about how the best way to get a response is to answer your own question in an obviously incorrect fashion, because people want to tell you why you're wrong rather than actively help.
Why do you think those people behave that way?
I don't think it matters. Whether it was a fault of incentives or some intrinsic nature of people given the environment, it was rarely a pleasant experience. And this is one of the reasons it's fallen to LLM usage.
Unpaid labor finds a variety of impulses to satisfy
Memories of years ago on Stack Overflow, when it seemed like every single beginner python question was answered by one specific guy. And all his answers were streams of invective directed at the question's author. Whatever labor this guy was doing, he was clearly getting a lot of value in return by getting to yell at hapless beginners.
That doesn't seem like the kind of thing that's ever been allowed on Stack Overflow.
buggywhips are having a temporary setback.
I had a "milk-up-the-nose" laughter moment when I read this comment.
The coach drivers found other work, their horses got turned into glue.
leaded gasoline is making a killing, though
you mixed up "is dead" with "is vital" :-)
I did not look for a consulting contract for 18 years. Through my old network more quality opportunities found me than I could take on.
That collapsed during the covid lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects, even when mine (that they had already approved) would save them 400K a year, they did not care! Top down the word came to cut everyone -- so they did.
This trend is very much a top down push. Inorganic. People with skills and experience are viewed by HR and their AI software as risky to leave and unlikely to respond to whatever pressures they like to apply.
Since then it's been more of the same as far as consulting.
I've come to the conclusion I'm better served by working on smaller projects I want to build and not chasing big consulting dollars. I'm happier (now) but it took a while.
An unexpected benefit of all the pain was I like making things again... but I am using claude code and gemini. Amazing tools if you have experience already and you know what you want out of them -- otherwise they mainly produce crap in the hands of the masses.
>> even when mine (that they had already approved) would save them 400K a year
You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers. Aside from the fixed math (i.e. limit on savings vs. unlimited revenue growth) there's the psychological component of teams and management. I saw this in the energy sector where our company had two products: selling to the drilling side was focused on helping get more oil & gas; selling to the remediation side was fulfill their obligations as cheaply as possible. IT / dev at a non-software company is almost always a cost center.
> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.
The problem is that many places don't see the cost portions of revenue centers as investment, but still costs. The world is littered with stories of businesses messing about with their core competencies. An infamous example was Hertz(1) outsourcing their website reservation system to Accenture to comically bad results. The website/app is how people reserve cars - the most important part of the revenue generating system.
1. https://news.ycombinator.com/item?id=32184183
> You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers, not cost centers.
Best advice I got in school is -- at least early in your career-- work in the main line of business for your company. So if you are in marketing, work for a marketing firm, an accountant, work for an accounting firm.. etc. Video game designer: work for a video game developer.
Later you can have other roles but you make your mark doing the thing that company really depends on.
> Best advice I got in school is -- at least early in your career-- work in the main line of business for your company
Related advice I got - work in the head office for your company if possible. Definitely turned out to be a good call in my case as the satellite offices closed one by one over time.
I would go further and say that even at software companies, even for dev that goes directly into the product, engineering is often seen as a cost center.
The logic is simple, if unenlightened: "What if we had cheaper/fewer nerds, but we made them nerd harder?"
So while working in a revenue center is advantageous, you still have to be in one that doesn't view your kind as too fungible.
Yeah these days if it isn’t ops to bring in revenue it is seen as cost.
>> even when mine (that they had already approved) would save them 400K a year You learn lessons over the years and this is one I learned at some point: you want to work in revenue centers
Totally agree. This is a big reason I went into solutions consulting.
In that particular case I mentioned it was a massive risk management compliance solution which they had to have in place, but they were getting bled dry by the existing vendor, due to several architectural and implementation mistakes they had made way back before I ever got involved, that they were sort of stuck with.
I had a plan to unstuck them at 1/5 the annual operating cost and better performance. Presented it to executives, even Amazon who would have been the infr vendor, to rave reviews.
We had a verbal contract and I was waiting for paperwork to sign... and then Feb 2020... and then crickets.
This is golden career advice. Heed it well.
It really is.
I work as a consultant and tend to focus on helping startups grow their revenue. And what you're saying here is almost word for word what I often recommend as the *first thing* they should do.
In many cases I've seen projects increase their revenue substantially by making simple messaging pivots. Ex. Instead of having your website say "save X dollars on Y" try "earn X more dollars using Y". It's incredible how much impact simple messaging can have on your conversion rates.
This extends beyond just revenue. Focusing on revenue centers instead of cost centers is a great career advice as well.
Very few people suspected that github is being used to train the ai when we were all pushed the best practice of doing frequent commit.
a little earlier very few suspected that our mobile phone is not only listening to our conversations and training some ai model but also all its gyrometers are being used to profile our daily routine. ( keeping mobile for charging near our pillow) looking at mobile first thing in morning.
Now when we are asked to use ai to do our code. I am quite anxious as to what part of our life are we selling now .. perhaps i am no longer their prime focus. (50+) but who knows.
Going with the flow seems like a bad advice. going Analog as in iRobot seems the most sane thing.
>> Going with the flow seems like a bad advice. going Analog as in iRobot seems the most sane thing.
I've been doing a lot of photography in the last few years with my smartphone and because of the many things you mentioned, I've forgone using it now. I'm back to a mirrorless camera that's 14 years old and still takes amazing pictures. I recently ran into a guy shutting down his motion picture business and now own three different Canon HDV cameras that I've been doing some interesting video work with.
Its not easy transferring miniDV film to my computer, but the standard resolution has a very cool retro vibe that I've found a LOT of people have been missing and are coming back around too.
I'm in the same age range and couldn't fathom becoming a developer in the early aughts and being in the midst of a gold rush for developer talent to suddenly seeing the entire tech world contract almost over night.
Strange tides we're living in right now.
If I had gone with the flow in 1995 I would have got my MCSE and worked for a big government bureaucracy.
Instead I found Linux/BSD and it changed my life and I ended up with security clearances writing code at defense contractors, dot com startups, airports, banks, biotech/hpc, on and on...
Exactly right about Github. Facebook is the same for training on photos and social relationships. etc etc
They needed to generate a large body of data to train our future robot overlords to enslave us.
We the 'experienced' are definitely not their target -- too much independence of thought.
To your point I use an old flip phone an voip even though I have written iOS and android apps. My home has no wifi. I do not use bluetooth. There are no cameras enabled on any device (except a camera).
They also produce crap once you leave the realm of basic CRUD web apps... Try using it with Microsofts Business Central bullshit, does not work well.
I have worked with a lot of code generation systems.
LLMs strike me as mainly useful in the same way. I can get most of the boilerplate and tedium done with LLM tools. Then for core logic esp learning or meta-programming patterns etc. I need to jump in.
Breaking tasks down to bite size, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical is ruthlessly throwing away things that do not fit the vision and not being afraid to throw whole days away (not too often tho!)
For ref I have built stuff that goes way beyond CRUD app with these tools in 1/10th of the time it previously took me or less -- the key though is I already knew how to do and how to validate LLM outputs. I knew exactly what I wanted a priori.
Code generation technically always 'replaced' junior devs and has been around for ages, the results of the generation are just a lot better now., whereas in the past it was mixed bag of benefits/hassles doing code generation regularly, now it works much better and the cost is much less.
I started my career as a developer and the main reasons I became a solutions systems guy were money and that I hated the tedium boilerplate phase of all software development projects over a certain scale. I never stoped coding because I love it -- just not for large enterprise soul destroying software projects.
Everything that we know and love is reducible to a basic CRUD web app
Quick note that this has not been my experience. LLMs have been very useful with codebases as far from crud web apps as you can get.
This is consistent pattern.
Two engineers use LLM-based coding tools; one comes away with nothing but frustration, the other one gets useful results. They trade anecdotes and wonder what the other is doing that is so different.
Maybe the other person is incompetent? Maybe they chose a different tool? Maybe their codebase is very different?
I would imagine it has a lot to do with the programming language and other technologies in the project. The LLMs have tons of training data on JS and React. They probably have relatively little on Erlang.
Mass of learning material doesn't equal quality though. The amount of poor react code out there is not to underestimate. I feel like llm generated gleam code was way cleaner (after some agentic loops due to syntactic misunderstanding) than ts/react where it's so biased to produce overly verbose slob.
I have had good results with languages like Haskell and ReScript. They have much smaller code bases than JS and Python.
Even if you're using JS/React, the level of sophistication of the UI seems to matter a lot.
"Put this data on a web page" is easy. Complex application-like interactions seem to be more challenging. It's faster/easier to do the work by hand than it is to wait for the LLM, then correct it.
But if you aren't already an expert, you probably aren't looking for complex interaction models. "Put this data on a web page" is often just fine.
This has been my experience, effectively.
Sometimes I don't care for things to be done in a very specific way. For those cases, LLMs are acceptable-to-good. Example: I had a networked device that exposes a proprietary protocol on a specific port. I needed a simple UI tool to control it; think toggles/labels/timed switches. With a couple of iterations, the LLM produced something good enough for my purposes, even if it wasn't particularly doted with the best UX practices.
Other times, I very much care for things to be done in a very specific way. Sometimes due to regulatory constraints, others because of visual/code consistency, or some other reasons. In those cases, getting the AI to produce what I need specifically feels like an exercise in herding incredibly stubborn cats. It will get done faster (and better) if I do it myself.
It's like when your frat house has a filing cabinet full of past years' essays.
Protestant Reformation? Done, 7 years ago, different professor. Your brothers are pleased to liberate you for Saturday's house party.
Barter Economy in Soviet Breakaway Republics? Sorry, bro. But we have a Red Square McDonald's feasibility study; you can change the names?
There was actually a good article about this the other day which makes sense to me, it comes down to function vs form kinda: https://www.seangoedecke.com/pure-and-impure-engineering/
If you’re bad at talking to people, you’ll be bad at using present-day LLMs.
Sorry to anyone whose feelings this hurts.
Semantics are very important.
Not everyone cares to be precise with their semantics.
How old is said bullshit?
No train, no gain.
[flagged]
I earned their respect over many years of hard work -- hardly a freebie!
I will say that being social and being in a scene at the right time helps a lot -- timing is indeed almost everything.
>I will say that being social and being in a scene at the right time helps a lot
I concur with that and that's what I tell every single junior/young dev. that asks for advice: get out there and get noticed!
People who prefer to lead more private lives, or are more reserved in general, have far fewer opportunities coming their way, they're forced to take the hard path.
>I'm not for/or against a particular style, it must be real nice if life just solves everything for you while you just chill or whatever. But, a nice upside of being made of talent instead of luck is that when luck starts to run out, well, ... you'll be fine anyway :).
This is wildly condescending. Holy.
Talent makes luck. Ex-colleagues reach out to me and ask me to work with them because they know the type of work I do, not because it's lucky.
Also wtf did I just read. Op said he uses his network to find work. And you go on a rant about how you're rising and grinding to get that bread, and everything you have ever earned completely comes from you, no help from others? Jesus Christ dude, chill out.
My perspective is just as valid, and I also wrote,
>I'm not for/or against a particular style
... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)
>Ex-colleagues reach out to me and ask me to work with them
Never happened to me, that's the point I'm making.
1. I wish work just landed at my feet.
2. As that never happened and most likely was never going to happen, I had to learn another set of skills to overcome that.
3. That made me a much more resilient individual.
(4. This is not meant as criticism to @arthurfirst's style. I wish clients just called me and I didn't have to save all that money/time I spend taking care of that)
>>I'm not for/or against a particular style
... so I'm not sure why some of you took offense in my comment, but I can definitely imagine why :)
Because surrounding your extremely condescending take with "just my opinion"-style hedging still results in an extremely condescending take.
In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
> Saying "well you have to follow the market" is such a cravenly amoral perspective.
You only have to follow the market if you want to continue to stay relevant.
Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.
So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
I don't think we're really disagreeing here. You're saying "this is the way things are", I'm saying "I salute anyone who tries to change the way things are".
Even in the linked post the author isn't complaining that it's not fair or whatever, they're simply stating that they are losing money as a result of their moral choice. I don't think they're deluded about the cause and effect.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
> Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead?
It doesn't have to be. Plenty of people are fulfilled by their jobs and make good money doing them.
Some users might not mind the lack of control, but beyond a certain point it stops making sense to strive to be in that diminishing set and starts making sense to fix the bug.
We've always tolerated a certain portion of society who finds the situation unacceptable, but don't you suspect that things will change if that portion is most of us?
Maybe we're not there yet, idk, but the article is about the unease vs the data, and I think the unease comes from the awareness that that's where we're headed.
>Isn't that what money is though
If you're only raised in a grifter's society, sure. Money is to be conquered and extracted.
But we came definetly shift back to a society where money is one to help keep the boat afloat for everyone to pursue their own interests, and not a losing game of Monopoly where the rich get richer.
Ok, but how? What sort of event lengthens the fuse?
Voting is a good start. Not just in nationals but look at local policy. So much of this is bottom up.we got into this by voting against oir best interests for at best liars and at worst blatant crooks.
Past that, simply look at the small actions on your life. These build and define your overall character. It's hard to vote for collective bargaining of you have trouble complimenting your family at the table. You need to appreciate and feel a part of a community to really come together.
This all sounds like mumbo jumbo on the outside, but just take some time to reflect a bit. People don't wake up one day and simply think "you know, this really is all the immigrant's fault". That's a result of months or year of mindset.
I don't think that's necessarily what money is, but it is kind of what sufficiently unregulated capitalism is, which is what we've had for a while now.
I was talking to a friend of mine about a related topic when he quipped that he realized he started disliking therapy when he realized they effectively were just teaching him coping strategies for an economic system that is inherently amoral.
> So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
You're correct in this, but I think it's worth making the explicit statement that that's also true because we live in a system of amoral resource allocation.
Yes, this is a forum centered on startups, so there's a certain economic bias at play, but on the subject of morality I think there's a fair case to be made that it's reasonable to want to oppose an inherently unjust system and to be frustrated that doing so makes survival difficult.
We shouldn't have to choose between principles and food on the table.
By not following the market you change the market.
Sometimes companies become irrelevant while following the market, while other companies revolutionize the market by NOT following it.
It's not "swim with the tide or die", it's "float like a corpse down the river, or swim". Which direction you swim in will certainly be a different level of effort, and you can end up as a corpse no matter what, but that doesn't mean the only option you have is to give up.
> it might mean going out of business for ideological reasons
taking a moral stance isn't inherently ideological
>the options are follow the market or find a different line of work if you don’t like the way the market is going
You can also just outlive the irrationality. If we could stop beating around the bush and admit we're in a recession, that would explain a lot of things. You just gotta bear the storm.
It's way too late to jump on the AI train anyway. Maybe one more year, but I'd be surprised if that bubble doesn't pop by the end of 2027.
No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best to not to train it. I left GitHub & Reddit because of this, and not uploading new photos to Instagram. The jury is still out on how I'm gonna share my photography, and not sharing it is on the table, as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it, and the system is demanding to betray ourselves for its own benefit.
Life is more nuanced than that.
Good on you. Maybe some future innovation will afford everyone the same opportunity.
That future innovation is called "policy that doesn't screw over the working class".
Not that innovative, but hey. If it let's someone pretend it is and fixes the problem, I'm all for it.
That future innovation is in fact higher productivity. Equality is super important but we are simply not good enough yet at what we do, societally, for everyone everywhere to live as good a life as we enjoy, regardless of how we distribute.
Maybe one day we will all become people again!
(But only all of us simultaneously, otherwise won't count! ;))))
The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.
How large an audience do you want to share it to? Self host photo album software, on hardware you own, behind a password, to people you trust.
Before that AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact high resolution versions to be downloaded.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
[0]: https://creativecommons.org/licenses/by-nc-nd/4.0/
This metaphor implies a sort of AI inevitably. I simply don't believe that's the case. At least, not this wave of AI.
The people pushing AI aren't listening to the true demand for AI. This, its not making ita money back. That's why this market is broken and not prone to last.
Yeah but the business seems to be education for web front end. If you are going to shun new tech you should really return to the printing press or better copying scribes. If you are going to do modern tech you kind of need to stick with the most modern tech.
Printing press and copying scribes is a sarcastic comment, but these web designers are still actively working and their industry is 100s of years from the state of those old techs. The joke isn’t funny enough nor is the analogy apt enough to make sense.
No it is a pretty good comparison. There is absolutely AI slop but you have to be sticking your head in the sand if you don’t think AI will not continue to shape this industry. If you are selling learning courses and are sticking your head in the sand, well that’s pretty questionable.
>but you have to be sticking your head in the sand if you don’t think AI will not continue to shape this industry.
Maybe ot will. I'm still waiting for the utility. Right now it's just a big hype bubble, so wake me when it pops.
AI is amoral is an opinion.
Following the market is also not cravenly amoral, AI or not.
If the market is immoral, following it is immoral. And it seems like more of society is disagreeing that AI is moral.
I find this very generic what you are saying and they.
What stance against AI? Image generation is not the same as code generation.
There are so many open source projects out there, its a huge difference than taking all the images.
AI is also just ML so should i not use image bounding box algorithm? Am i not allowed to take training data online or are only big companies not allowed to?
I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demmand that fuels further investment, and as a contractor, where you're a temporary cost, especially if you and people who depend on you necessitate it to survive.
A studio taking on temporary projects isn't investing into AI— they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage constantly second-guessing themselves.
> This is effectively no different from a construction company building an office building, or a bakery baking a cake.
A construction company would still be justified to say no based on moral standards. A clearer example would be refusing to build a bridge if you know the blueprints/materials are bad, but you could also make a case for agreeing or not to build a detention center for immigrants. But the bakery example feels even more relevant, seeing as a bakery refusing to bake a cake base on the owner's religious beliefs ended up in the US Supreme Court [1].
I don't fault those who, when forced to choose between their morals and food, choose food. But I generally applaud those that stick to their beliefs at their own expense. Yes, the game is rigged and yes, the system is the problem. But sometimes all one can do is refuse to play.
[1] https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
> As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
I totally agree. I still think opposing AI makes sense in the moment we're in, because it's the biggest, baddest example of the system you're describing. But the AI situation is a symptom of that system in that it's arisen because we already had overconsolidation and undue concentration of wealth. If our economy had been more egalitarian before AI, then even the same scientific/technological developments wouldn't be hitting us the same way now.
That said, I do get the sense from the article that the author is trying to do the right thing overall in this sense too, because they talk about being a small company and are marketing themselves based on good old-fashioned values like "we do a good job".
<< over-focusing on small moral cursades against specific players like this and not the game as a whole
Fucking this. What I tend to see is petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
Collective bargaining helps a lot there. But that's not really a popular topic here, so the infighting continues.
I'm all down once we all to backed in a corner to refuse, though.
Well if they're going to go out of business otherwise...
nobody is against his moral stance. the problem is that he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim. if you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc. then "selling out" is optional and bad. if you're joe schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you’re not some noble hero, you’re just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
The ceo of every one of those Ai companies drives an expensive car home to a mansion at the end of the workday. They are set. The average person does not and they cannot afford to play the principled stand game. Its not a question of right or wrong for most, its a question of putting food on the table
I'm not sure I understand this view. Did seamstresses see sewing machines as amoral? Or carpenters with electric and air drills and saws?
AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g drills) vs maybe a specific company (e.g Ryobi) seems an odd take to me.
Plagiarism is also "another set of tooling." Likewise slavery, and organized crime. Tools can be immoral.
Man, y'all gotta stop copying each other homework.
It's said often because it's very true. It's telling that you can't even argue against it and just have to attack the people instead.
Its cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point it doesn't matter to you then by all means, do what you want. The vast majority of people are not.
I assume they are weathering the storm if they are posting like this and not saying "we're leaving the business". A proper business has a war chest for this exact situation (though I'm unsure of how long this businesses has operated)
As someone who has sold video tech courses since 2015, I don't know about the future.
I don't want to openly write about the financial side of things here but let's just say I don't have enough money to comfortably retire or stop working but course sales over the last 2-3 years have gotten to not even 5% of what it was in 2015-2021.
It went from "I'm super happy, this is my job with contracting on the side as a perfect technical circle of life" to "time to get a full time job".
Nothing changed on my end. I have kept putting out free blog posts and videos for the last 10 years. It's just traffic has gone down to 20x less than it used to be. Traffic dictates sales and that's how I think I arrived in this situation.
It does suck to wake up most days knowing you have at least 5 courses worth of content in your head that you could make but can't spend the time to make them because your time is allocated elsewhere. It takes usually 2-3 full time months to create a decent sized course, from planning to done. Then ongoing maintenance. None of this is a problem if it generates income (it's a fun process), but it's a problem given the scope of time it takes.
Were most of your sales coming via your site and/or organic search?
It sounds like you have a solid product, but you need to update your marketing channels.
Almost 100% of sales come from organic searches. Usually people would search for things like "Docker course" or "Flask course" and either find my course near the top of Google or they would search for some specific problem related to that content and come across a blog post I wrote on my main site which linked back to the course somewhere (usually).
Now the same thing happens, but there's 20x less sales per month.
I've posted almost 400 free videos on YouTube as well over the years, usually these videos go along with the blog post.
A few years back I also started a podcast and did 100 weekly episodes for 2 years. It didn't move the needle on course sales and it was on a topic that was quite related to app development and deployment which partially aligns with my courses. Most episodes barely got ~100 listens and it was 4.9 rated out of 5 on major podcast platforms, people emailed me saying it was their favorite show and it helped them so much and hope I never stop but the listener count never grew. I didn't have sponsors or ads but stopped the show because it took 1 full day a week to schedule + record + edit + publish a ~1-2 hour episode. It was super fun and I really enjoyed it but it was another "invest 100 days, make $0" thing which simply isn't sustainable.
> find my course near the top of Google
> Now the same thing happens, but there's 20x less sales per month.
You’re a victim of the AI search results. There are lots of those.
I recommend something like social media ads where your target audience hangs out (maybe LinkedIn, possibly Google).
This is always sad to hear. I really want more educational material out there that isn't just serving "beginner bait" and I'd love love love more technical podcasts out there. But it seems like not much of the audience is looking for small creators for that. Perhaps they only focus on conference studies.
And yeah, I agree with the other reponsder that AI + Google's own enshittification of search may have cost your site traffic.
I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
Its a global industry shift.
You can either hope that this shift is not happening or that you are one of these people surviving in your niche.
But the industry / world is shifting, you should start shifting with.
I would call that being innovative, ahead etc.
The industry is not really shifting. It's not shifting to anything. It's just that the value is being captured by parasitic companies. They still need people like me to feed them training data while they destroy the economics of producing that data.
They pay people in Malaysia to solve issues.
Google has a ton of code internal.
And million of people happily thumb down or up for their RL / Feedback.
The industry is still shifting. I use LLMs instead of StackOverflow.
You can be as dismissive as you want, but that doesn't change the fact that millions of people use AI tools every single day. People start using AI based tools.
The industry overall is therefore shifting money and goals etc. into direction of AI.
And the author has an issue because of that.
Do you know what my industry is? It might be worth showing curiosity before expressing judgement.
When i write 'the industry' and talk with people on hackernews, my context is IT.
Feel free to tell me what your industry is so we can continue our discussion.
Sure they say that about every fad. Let's see how you feel when the bubble pops.
In my eyes, that's when the grifters get out and innovators can actually create value.
Image editing is now working so well, last week i used nano banana instead of doing anything in photoshop.
That image generation is already disrupting industries and jobs today.
My mother! (non technical) had a call with a support ai agent just a few month ago.
AI is also not a fad. LLMs are the best interface we had sofar. They will stay.
AI Coding helps non developers to develop a lot faster and better (think researchers who sometimes need a little bit of python or 'code').
I'm using AI to summarzie meetings i missed, i asked chatgpt to summarize error logs (successfully).
AlphaFold solved protein folding.
Nearly all roboters you see today are running on Nvidias Isaac and Groot.
The progress has not stoped at all. When Nano Banana came out, Seedream 4 came out a week later. Now we have nano Bananan Pro and Gemini 3.
After Gemini 3 came out, Opus 4.1 came out and now Deep Seek v3.2. All of them got better, faster and/or cheaper.
>AI is also not a fad.
I'm not convinced. I've heard all the justifications and how it saved someone's marriage (too bad it ended that other relationship).
The numbers don't line up. The money from consumers isn't there, the money isn't actually there in B2B. It's not going to last. Refulations will catch up and strain things further once the US isnt in a grifter's administration and people get tired of not having jobs.Its a huge pincer attack on 4 fronts.
After the crash and people need to put their money where their mouth is, let's see how much people truly value turning their brains off and consuming slop. There will be cool things from it, but not in this current economy.
Until then, The bubble will burst.this isn't the 10's anymore and the us government doesn't have the money to bail out corporate this time.
Its not just a battle between companies, its also a battle between countries.
USA vs. China. If USA stops research on AI right now, China will leapfrog even further.
And companies like Google and Microsoft can easily affort the current AI spend.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff
If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
> If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already the industry where it is arguably the most "present" (forced in) is SWE and its proving to be quite disappointing... Where I work the more senior you are the less AI you use
Just the opposite where I work. Seniors are best positioned to effectively use AI and are using it enthusiastically.
Even if it isn't, the OP can still make hay while the sun is still shining, even if it'll eventually set, as the saying goes. But to not make hay and slowly see it set while losing your income, I won't ever understand that.
Yeah, gotta disagree with this one. Every senior and above around me have figured a workflow that makes their job faster. Internal usage dashboards say the same thing.
Fighting with anecdotes is as productive as always. Especially over trends.
The big issue is Ai isn't profitable. Streaming services is actuslly useful, but well see how that lasts.
> what the market wants
Pretty sure the market doesn't want more AI slop.
Nobody that actually understands the market right now would say that
Nobody paid to pretend to actually understand the market would say that. They have a paycheck to get.
Meanwhile, actual consumer sentiment is at all time lows for AI.
There is absolutely AI slop out there. Many companies rushed to add AI, a glorified chat bot to their existing product, and have marketed it as AI.
There is also absolutely very tasteful products that add value using LLM and other more recent advancements.
Both can exist at the same time.
>Both can exist at the same time.
They can. But I'm not digging in a swamp to find a gold nugget. Let me know when swamp is drained. Hopefully the nugget isn't drained with it.
Pretty sure HN has become completely detached from the market at this point.
Demand for AI anything is incredible high right now. AI providers are constantly bouncing off of capacity limits. AI apps in app stores are pulling incredible download numbers.
I understand it as the market wanting more content about competing in an AI world
Sora's app has a 4.8 rating on the app store with 142K rating. It seems to me that the market does not care about slop or not, whether I like it or not.
>the market does not care about slop or not
Okay. Lemme know when they need to pay for it. A free app for a trillion dollar investment isn't the flex Altman wants to make it seem.
I don't understand why you're being downvoted, you're not wrong. I think Suno being successful bums me out, I really hate it, but people that are not me love it. I can't do anything about that.
>but people that are not me love it
Are people that are not you making it profitable? That's the obvious issue.
Maybe not now. I imagine it'll go the way of many other things: buy demand with a product that beats alternatives in perceived quality and/or cost -> create a dependence on the product -> wait for the death of competition -> monetize heavily on a dependent userbase.
The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.
The only thing that's going to change is the quality of the slop will get better by the year.
>The market wants a lot more high quality AI slop
They sure aren't paying for it. It's great how we're on a business topic we're not talking about the fact that the market demand doesn't match the investment put into it.
> The market wants a lot more high quality AI slop
"High quality AI slop" is a contradiction in terms. The relevant definitions[1] are "food waste (such as garbage) fed to animals", "a product of little or no value."
By definition, the best slop is only a little terrible.
[1] https://www.merriam-webster.com/dictionary/slop
'I wouldn’t personally be able to sleep knowing I’ve contributed to all of that, too.'
I think this is the crux of the entire problem for the author. The author is certain, not just hesitant, that any contribution they would make to project involving AI equals contribution to some imagined evil ( oddly, without explictly naming what they envision so it is harder to respond to ). I have my personal qualms, but run those through my internal ethics to see if there is conflict. Unless author predicts 'prime intellect' type of catastrophe, I think the note is either shifting blame and just justifying bad outcomes with moralistic: 'I did the right thing' while not explaining the assumptions in place.
I don't really take your point here? You're saying you feel the author should justify the reasoning behind their moral position?
:D no, but it would help if I knew what that moral objection is
Its been 3 years and its been the most talked about topic on HN. If you really don't know at this point, you are choosing to remain ignorant. I can't help you here.
If you genuinely are unaware of the issues, it's a very easy topic to research. Heck, just put "AI" into HN and half the articles will cover some part of the topic.
Not your parent commenter but there are multiple points of views, both positive and negative about AI and it's impact. Some in-between.
Should we try to guess which of of the many objections belong to the author?
It's better to ask than to assume.
>It's better to ask than to assume
I didn't see that commentor truly asking. I can ask them, but I know that the topic will have dried out by the time o get an answer.
>I have my personal qualms, but run those through my internal ethics to see if there is conflict
Do you "run them through" actual ethics, too?
See.. here is a problem. You say 'actual' ethics as if those were somehow universal and not ridiculously varied across the board. And I get it, you use the term, because a lot of readers will take it face value AND simply use their own value system to translate them into what agrees with them internally. I know, because I do the same thing when I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
With that said, mister ants in the pants, what does actual mean to you in this particular instance?
I don't know about ants (or pants) but
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
basically the ethics of anyone who doesn't this
Uhh.. do we really want to do ethics 101 ( and likely comparative religions based on your insisting all ethical considerations are universal across the human experience )? Please reconsider your statement, because it is not 'basically'; not by a long shot.
I don't know shit about ethics numbers. Nor do I believe in any comparative religions. All I know is that you claimed to do the following:
> I try to not show exactly what I think to people at work. I just say sufficiently generic stuff to make people on both sides agree with a generic statement.
Bravo! Encore! Teach us, wise master!
I read this thread and I'm not even sure what your point is if all your comments are just going to be cryptic instead of actually stating your point clearly. As a reader, not even the person you're responding too, it's not useful to write like this.
I'm terribly sorry. I admit that it might be possible, at least in theory, to force me to emit "useful" writing! What makes you think you deserve that, though?!
Everyone who uses this forum deserves that, it's basic etiquette when speaking to other people. If one were as dismissive to you in real life, you'd probably be annoyed just the same.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I started TextQuery[1] with same moralistic standing. Not in respect of using AI or not, but that most software industry is suffering from rot that places more importance on making money, forcing subscription vs making something beautiful and detail-focused. I poured time in optimizing selections, perfecting autocomplete, and wrestling with Monaco’s thin documentation. However, I failed to make it sustainable business. My motivation ran out. And what I thought would be fun multi-year journey, collapsed into burnout and a dead-end project.
I have to say my time was better spent on building something sustainable, making more money, and optimizing the details once having that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
[1]: https://textquery.app/
Software is a means to an end. It always has been. There are a privileged few who have the luxury of being able to thoughtfully craft software. The attention to detail needs to go into what people see, not in the code underneath.
That is a beautiful product. How unfortunate!
>It was naïve to obsess over subtleties that only a handful of users would ever notice.
"When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through." - Steve jobs
Didn't take long for people to abandon their principles, huh?
Not a big fan of his these days but Gary Vaynerchuk has my favorite take on this:
"To run your business with your personal romance of how things should be versus how they are is literally the great vulnerability of business."
It's very likey the main reason that small businesses like local restaurants, bakeries, etc. fail. People start them based on a fantasy and don't know how to watch the hard realities of expenses and income. But like gravity, there's no escaping those unless you are already wealthy enough for it all to just be a hobby.
So we should cater to those with the lowest ethical standards instead?
You might think it's unethical, but ethics is subjective
That's what this community has shifted towards these past few years. Didn't take too long for the "hacker scene" to crumble to corporate greed.
I had hope during the NFT days, but I guess many here always wanted a not that told them they were smart and correct. Alas.
Maybe you're not the biggest fan precisely because the endgame of that statement is to develop a business without any moral grounding.
That's a choice. I can fish where the fish are without having to bait the hook with my soul.
If the fish are in a natural reserve, then you pretty much pit your soil on the line. We're missing that detail here and treating it as if this is the difference between one lake or another
Gary's point is: sell what people are buying. But you think: that's immoral.
What about a functioning market is immoral?
If you think X is immoral, and a functioning market creates much more X, presumably that is what is immoral about a functioning market.
You're still responsible for the consequences of what you produce and sell.
Surely you would agree that making landmines simply because there are people who want to buy them would be an immoral choice.
gary's point: embrace reality
your argument: but what about this hypothetical?
Your argument: "I have no morals, why is everyone arguing about them?"
You do you, but let's not act in bad faith and dismiss others dispositions.
Not just about business, but almost everything professional.
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
Survival is easy if you just sell out.
It's unfair to place all the blame on the individual.
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
It's unfair to place all the blame on an individual, not on the individual. Each individual is responsible for their share of the blame.
>unfair to place all the blame on the individual.
I'm mostly blaming the rich.
>everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world.
Yeah we kind of are. So many chances to learn and push to reverse policy. Yet look how we voted.
>sometimes you just have to do what you have to do, and one of those things is pay your taxes.
If it's between being homeless and joining ICE... I'd rather inflict the pain on myself than others. There are stances I will take, even of AI isn't the "line" for me personally. (But in not gonna optimize my portfolio towards that either).
>
>By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
I mean, the Iraq War polled very well. Bush even won an election because of it, which allowed it to continue. Insofar as they have a semblance of democracy, yes, Americans are responsible. (And if their government is pathological, they're responsible for not stopping it.)
>But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
Two things. One, you don't have to pay taxes if you're rich. Two, tax protests are definitely a thing. You actually don't have to pay them. If enough people coordinated this, maybe we'd get somewhere.
Honestly yeah. You are complicit and it is your fault. Either donate significant amounts, protest, or move.
if the alternative to 'selling out' is making your business unviable and having to beg the internet for handouts(essentially), then yes, you should "sell out" every time.
The guy won’t work with AI, but works with Google…
Thank you. I would imagine the entire Fortune 500 list passes the line of "evil", drawing that line at AI is weird. I assume it's a mask for fear people have of their industry becoming redundant, rather than a real morality argument.
"Works with Google" in what way? And at what tome-frame? Even as someone who's actively decoupling from Google it's hard to truly de-Googlefy in this world as is.
"I have to do this societally deleterious thing or else someone else will." Is that the world you want to live in?
Selling out is easy when your children have no food.
Is the author starving or does he have the savings to bear a few bad years?
Bingo. Moral grandstanding only works during the boom, not the come down. And despite being as big an idealist as they come, sometimes you just gotta do what you gotta do. You can crusade, but you're just making your future self more miserable trying to pretend that you are more important than you think. Not surprising in an era of unbridled narcissism, but hey, that's where we are. People who have nothing to lose fail to understand this, whereas if you have a family, you don't have time for drum circles and bullshit: you've got mouths to feed.
>Not surprising in an era of unbridled narcissism, but hey, that's where we are.
Having the empathy to reject an endemic but poisonous trend is the opposite of narcissistic.
And we're making big assumptions on the author's finances. A bad year isn't literally a fatal year depending om the business and structure.
Surely there's AI usage that's not morally reprehensible.
Models that are trained only on public domain material. For value add usage, not simply marketing or gamification gimmicks...
How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.
[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested
> How many models are only trained on legal[0] data?
None, since 'legal' for AI training is not yet defined, but Olma is trained on the Dolma 3 dataset, which is
1. Common crawl
2. Github
3. Wikipedia, Wikibooks
4. Reddit (pre-2023)
5. Semantic Scholar
6. Project Gutenberg
* https://arxiv.org/pdf/2402.00159
Nice, I hadn't heard of this. For convenience, here are HuggingFace models trained on Olma:
https://huggingface.co/datasets/allenai/dolma
https://huggingface.co/models?dataset=dataset:allenai/dolma
I wonder if there is a pivot where they get to keep going but still avoid AI. There must be for a small consultancy.
> "a self-inflicted wound"
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and the author is not saying they're a victim due to no doing of their own by disclosing it plainly. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: majority of American GDP is AI this or that, and majority of it has no real substance behind it.
I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA for example, considering a good chunk are ultimately used for war, the CIA, NSA, etc, but nobody judges an average US-resident for paying them.
>They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons.
They very well might be. Websites can be made to promote a variety of activity.
>This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them
That's not what "marketing" is. This is OpenAI coming to your firm and saying "I need you to make a poster saying AI is the best thing since Jesus Christ". That very much will reflect on you and the industry at large as you create something you don't believe in.
> They very well might be. Websites can be made to promote a variety of activity.
This is disingenuous and inflamatory, and a manichaeist attitude I very much see in rich western nations for some reason. I wrote about this in another comment: it's sets people off on a moral crusade that is always against the players but rarely against the system. I wish more people in these countries would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI as one of many examples.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law— in this case copyright law for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage constantly second-guessing themselves.
>This is disingenuous and inflamatory
I fail to see how. Why would I not hold some personal responsibility for what I built?
Its actually pretty anti-western to have that mindset since that's usually something that pops up in collectivist societies.
>it's sets people off on a moral crusade that is always against the players but rarely against the system.
If you contribute to the system you are part of the system. You may not be "the problem" but you don't get guilt absolved for fanning the flames of a fire you didn't start.
I'm not suggesting any punishment for enablers. But guilt is inevitable in some people over this, especially those proud of their work.
>I wish more people in these countries would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims,
I can and do.
>The problem isn't AI. The problem is a system where new technology means millions fearing poverty.
Sure. Doesn't mean AI isn't also a problem. We're not a singlethreaded being. We can criticize the symptoms and attack the source.
>over-focusing on small moral cursades against specific players like this and not the game as a whole is a distraction bound to always bring disappointment
I don't disagree. But the topic at hand is about AI, and talking about politics here is the only thing that gets nastier. I have other forums to cover that (since HN loves to flag politics here) and other IRL outlets to contribute to the community here.
Doesn't mean I also can't chastise how utterly sold out this community can be on AI.
Sorry for them- after I got laid off in 2023 I had a devil of a time finding work to the point my unemployment ran out - 20 years as a dev and tech lead and full stack, including stints as a EM and CTO
Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
> 20 years as a dev and tech lead and full stack, including stints as a EM and CTO
> Since then I pivoted to AI and Gen AI startups- money is tight and I dont have health insurance but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.
As CTO I wasnt in a big tech company, it was a 50 person digital studio in the south my salary as was 275k at the highest point in my career- so I never made FAANG money
That’s 1% money. At that point the issue isn’t how much money you made but what you did with it.
Come to Europe. Salaries are (much) lower, but we can use good devs and you'll have vacation days and health care.
The tech sector in UK/EU is bad, too. And the cost of living in big cities is terrible for the salaries.
They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of UK, France, and Germany.
Moving to Europe is anything but trivial. Have you looked at y'all's immigration processes recently? It can be a real bear.
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~15 to 10 years ago and they had it easy. It was still a nightmare with banks that don’t want to deal with US citizens, though.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
If you don't have a university degree, most of EU/EEA immigration policy wants nothing to do with you, even if you're American or have several YoE. Source: am a self-taught US dev who has repeatedly looked into immigration to northern/western Europe over the years. If anything it continually gets more stringent every time I look. Forget looking for jobs, there's not even visa paths for most countries.
But isn't the same true for the US? To me it seems it's pretty similar both for Europeans moving to the US and Americans moving to the EU: have higher education, find a job, get a work visa...?
Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
Do you count Finland? I heard that Finnish is very hard to learn.
Finnish people are probably nice when people try to learn their language. Hahaha. Can't say that about the other places.
Most Scandinavians would rather speak English than listen to a foreigner try to speak their language.
Luckily a certain American to Finland HN:er has been making it slightly easier ... :^)
https://finnish.andrew-quinn.me/
... But, no, it's still a very forbidding language.
If you have a US or Japanese passport and want to try NL: https://expatlaw.nl/dutch-american-friendship-treaty aka https://en.wikipedia.org/wiki/DAFT . It applies to freelancers.
Interesting thanks!
Yeah, I'm in NL, so this is my frame of reference. Also, in many companies English is the main language, so that helps.
I made a career out of understanding this. In Germany it’s quite feasible. The only challenge is finding affordable housing, just like elsewhere. The other challenge is the speed of the process, but some cities are getting better, including Berlin. Language is a bigger issue in the current job market though.
Counter: come to Taiwan! Anyone with a semi active GitHub can get a Gold Cars Visa. 6 months in you're eligible for national health insurance (about 30$ usd/month). Cost of living is extremely low here.
However salaries are atrocious and local jobs aren't really available to non mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you wanna build, there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
>However salaries are atrocious and local jobs aren't really available to non mandarin speakers.
You make such a hard bargain.
>there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
Tempting, but I think the last thing I need for what little work I can grab is to create a 14 hour time zone gap.
+1, Taiwan is a great place
Thanks - my wife and I actually have a long term plan to shift to the EU
Applied to quite a few EU jobs via LinkedIn but nothing came of it- I suspected they wanted people already in EU countries
Both of us are US Citizens but we don't want to retire in the US it seems to be becoming a s*hole esp around healthcare
Maybe one day, but your game industry isn't that much better than ours. Wouldn't want to move overseas only to still have the studio shut down.
What's the unemployment rate like?
I'm not sure the claim "we can use good devs" is true from the perspective of European corporations. But would love to learn otherwise?
And of course: where in Europe?
It would be worth it mathematically to be unemployed in the US for up to 3-5 years in hopes of landing another US job.
Taking a 75% pay cut for free Healthcare that costs 1k a month anyway doesn't math. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.
>free Healthcare that costs 1k a month anyway
Well, which is it?
>Not to mention the higher taxes for this privilege.
Rampant tax cuts is how we got here to begin with. I don't think the EU wants someone with this mentality anyway.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that. Our reputation is everything, so being associated with that technology as it increasingly shows us what it really is, would be a terrible move for the long term. It is such an “interesting” statement in on many levels.
Market has changed -> we disagree -> we still disagree -> business is bad.
It is indeed hard to swim against the current. People have different principles and I respect that, I just rarely - have so much difficulty understanding them - see such clear impact on the bottom line
Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.
> Make it make sense.
Ok. They are not talking about AI broadly, but LLMs which require insane energy requirements and benefit off the unpaid labor of others.
These arguments are becoming tropes with little influence. Find better arguments.
Arguably you shouldn't trifle your argument by decorating it when fundamentally it is rock solid. I wonder if the author would consider just walking away from tech when they realize what a useless burden its become for everyone.
Does the truth of the arguments have no bearing?
An argument can both be true and irrelevant.
Okay, you saying it's irrelevant doesn't make it so. You don't control how people feel about stuff.
haha this sounds like a slave master saying “again, free the slaves? really? i’ve heard that 100s of times, be more original”
Thank you. The dismissals are getting more and more obvious.
Definitely a head scratcher.
i think when ppl mean AI they mean “LLMs in every consumer facing production”
You might be right, and I think tech professionals should be expected to use industry terminology correctly.
There is not a single person in this thread that thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.
counter example: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are “AI” to me and others in the industry
I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago
I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?
When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it really left me scratching my head, why now and not before? Turns out, people call it immoral when they see it threatens its livelihood and come up with all sorts of justifications that seem justifiable, but when you dig underneath it, it's all about their economic anxiety, nothing more. Humans are not direct creatures, it's much more emotional than one would expect.
You take a pile of input data, use a bunch of code on it to create a model, which is generally a black box, and then run queries against that black box. No human really wrote the model. ML has been in use for decades, in various places. Google Translate was an "early" convert. Credit card fraud models as well.
The industry joke is: What do you call AI that works? Machine Learning.
What do LLMs have to do with typing on phones, cancer research, or TTS?
Deciding not to enable a technology that is proving to be destructive except for the very few who benefit from it, is a fine stance to take.
I won't shop at Walmart for similar reasons. Will I save money shopping at Walmart? Yes. Will my not shopping at Walmart bring about Walmart's downfall? No. But I refuse to personally be an enabler.
I don't agree that Walmart is a similar example. They benefit a great many people - their customers - through their large selection and low prices. Their profit margins are considerably lower than the small businesses they displaced, thanks to economies of scale.
I wish I had Walmart in my area, the grocery stores here suck.
It is a similar example. Just like you and I have different options about whether Walmart is a net benefit or net detriment to society, people have starkly different opinions as to whether LLMs are a net benefit or net detriment to society.
People who believe it's a net detriment don't want to be a part of enabling that, even at cost to themselves, while those who think it's a net benefit or at least neutral, don't have a problem with it.
You really need to research "The Wal Mart effect" before spouting that again. They literally named the phenomenon of what happens after them.
If your goal is to not contribute to community and leave when it dries up, sure. Walmart is great short term relief.
They are a marketing firm, so the stance within their craft is much more narrow than cancer.
Also, we clearly aren't prioritizing cancer research if Altman has shifted to producing slop videos. That's why sentiment is decreasing.
>Make it make sense.
I can't explain to one who doesn't want to understand.
Intentionally or not, you are presenting a false equivalency.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.
It is unethical to me to provide an accessibility tool that lies.
LLMs do not lie. That implies agency and intentionality that they do not have.
LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe could not possibly exist. So take it or leave it.
>That implies agency and intentionality that they do not have.
No, but the companies have agencies. LLMs lie, and they only get fixed when companies are sued. Close enough.
So provide one that "makes a mistake" instead.
Sure https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cut...
Not going to go back and forth on thos as you inevitably try to nitpick "oh but the chatbot didn't say to do that"
If it was actually being given away as an accessiblity tool, then I would agree with you.
It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.
1. Intellectual property is a fiction that should not exist.
2. Open source models exist.
How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around of quite some time, will continue to be, and even that much of that will be with LLMs.
>Consider my comment a reminder that ethical use of AI has been around of quite some
You can be among a a swamp and say "but my corner is clean". This is the exact opposite of the rotten barrel metaphor. You're trying to claim your sole apple is so how not rotted compared to the fermenting that is came from.
You have reasonably available context here. "This year" seems more than enough on it's own.
I think there is ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the weath-concentration machine, they could be really cool. Locally hosted llms are kinda awesome. As it is, they are basically just theft from the public and IP laundering.
Putting aside the "useful" comment, because many find LLMs useful; let me guess, you're the one deciding whether it's ethical or not?
There's a moral line that every person has to make about what work they're willing to do. Things aren't always so black and white, we straddle that line The impression I got reading the article is that they didn't want to work for bubble ai companies trying to generate for the sake of generate. Not that they hated anything with a vector db
Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.
I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
This isn't a bootcamp course. I don't think Andy's audience is one trying to convert an HTML course into a career wholesale. It's for students or even industry people who want a deeper understanding of the tech.
Not everyone values that, but anyone who will say "just use an LLM instead" was never his audience to begin with.
I think it's more likely that software training as an industry is dead.
I suspect young people are going to flee the industry in droves. Everyone knows corporations are doing everything in their power to replace entry level programmers with AI.
I'm afraid of what the future will look like 10+ years down the line after we've gutted humans from the workforce and replaced them with AI. Companies are going to be more faceless than they've ever been. Nobody will be accountable, you won't be able to talk to anyone with a pulse to figure out a problem (that's already hard enough). And we'll be living in a vibe coded nightmare governed by executives who were sold on the promise of a better bottom line due to nixing salaries/benefits/etc.
I don't think it will get that bleak, but it still is a good time to build human community regardless. This future only works for a broken society who can't trust their neighbor. You have the power to reverse that if you wish.
How do you measure „absolute top tier“ in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn’t be addressed by just reading the MDN docs or Can I Use, so maybe I’ve missed some complexity along the way.
Look at his work? I had a look at the studio portfolio and it's damn solid.
If one asks you "Why do you consider Pablo Picasso's work to be outstanding", then "Look at his work?" is not a helpful answer. I've been asking about parent's way to judge the outstandingness of HTML/CSS work. Just writing "damn solid" websites isn't distinguishing.
To be frank, someone who needs to be told why to appreciate art probably isn't going to appreciate Picasso. You can learn art theory, but you can't just "learn" someone's life, culture, and expression. All the latter is needed to appreciate Picasso.
But I digress.
Anyways, I can't speak for the content itself, but I can definitely tell on the javascript coirse from the trailer and description that they understand the industry and emphasize how this is focused towards those wanting a deep dive on the heart of web, not just another "tutorial on how to use newest framework". Very few tech courses really feel like "low level" fundamentals these days.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
Which describes a gigantic swath of the labor market.
When 99.99% of the customers have garbage as a website, 0.01% will grow much faster and topple the incumbents, nothing changed.
Hmm. This is hand made clothes and furniture vs factory mass production.
Nobody doubts the prior is better and some people make money doing it, but that market is a niche because most people prioritize price and 80/20 tradeoffs.
> Nobody doubts the prior is better
Average mass produced clothes are better than average hand made clothing. When we think of hand made clothing now, we think of the boutique hand made clothing of only the finest clothing makers who have survived in the new market by selling to the few who can afford their niche high-end products.
> we think of the boutique hand made clothing of only the finest clothing makers
This one. Inferred from context about this individual’s high quality above LLMs.
Quality also varied over time, if I recall correctly. Machine made generally starts worse, but with refinement ends up better from superhuman specialization of machines to provide fine detail with tighter tolerances than even artisans can manage.
The only perk artisans enjoy then is uniqueness of the product as opposed to one-size fits all of mass manufacturing. But the end result is that while we still have tailors for when we want to get fancy, our clothes are nearly entirely machine made.
As we see with tech, mass production isn't an instant advantage in this market. In fact, something bespoke has a higher chance to stand out here than most other industries.
And no, I don't think people are seeking demand for AI website slop the way they do for textiles. Standing out is a good way to get your product out there compared to being yet another bloated website that takes 10 seconds to load with autoplay video generic landing text.
I'd liken it to Persona 5 in the gaming market. No one is playing a game for its UI. But a bespoke UI will make the game all the more memorable, and someone taking the time for that probably pjt care into the rest of the game as well (which you see on its gameplay, music, characters, and overall presentation).
A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.
Having the most well tested backend and beautiful frontend that works across all browsers and devices and not just on the main 3 browsers your customers use isn't paying the bills.
If you're telling a craftman to ignore their craft, then you're falling on deaf ears. I'm a programmer, not a businessman. If everyone took the advice of 'I don't need a good website' then many devs would be out of business.
Fact is there's just less businesses forming, so there's less demand for landing sites or anything else. I don't see this as a sign that 'good websites don't matter'
I think there's a difference between seeing yourself as a craftsman / programmer / engineer as a way to solve problems and deliver value, and seeing yourself as an HTML/CSS programmer. To me the latter is pretty risky, because technologies, tastes, and markets are constantly changing.
It's like equating being a craftsman with being someone who a very particular kind of shoe. If the market for that kind of shoe dries up, what then?
I sure hope no web dev sees tbemself only as an HTML/CSS programmer. But I also hope any web dev who sees themselves as a craftsman can profess mastery over HTML/CSS. Your fundamentals are absolutely key.
Its why I'm still constantly looking at and practicing linear algebra as an aspiring "graphics programmer". I'm no mathematician but I should be able to breath matrix operations as a graphics programmer. Someone who dismisses their role to "just optimizing GPU stacks" isn't approaching the problem as a craftsman.
And I'll just say that's also a valid approach and even an optimal one for career. But courses like that aren't tailored towards people who want to focus on "optimizing value" to companies.
Amazon has "garbage as a website" and they seem to be doing just fine.
> When 99.99% of the customers have garbage as a website
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
which can easily be garbage. it only has to be not garbage enough to not cause enough customers to shift enough spending elsewhere
>it might be your rating scale that is broken.
Or it could mean that most websites are trash.
>it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
Yeah, sorry. I will praise plenty of Amazon's scale, but not their deception, psychological manipulation, and engagement traps. That goes squarely in "trash website".
I put up with a lot, but the price jumpsa was finally the trigger i needed to cancel prime this year. I don't miss it.
Lots of successful companies have garbage as a website (successful in whatever sense, from Fortune 500 to neighbourhood stores).
Are they successful companies despite a bad websote, or companies successful because they knew where to stop cutting corners that lead to success?
I suspect it's the former.
Struggling because they're deliberately shooting themselves in the foot by not taking on the work their clients want them to take. If you don't listen to the market, eventually the market will let you fall by the way side.
I'm sure author's company does good work, but the marketplace doesn't respond well to, "we're really, _really_ good,", "trust me," "you won't be disappointed." It not only feels desperate, but is proof-free. Show me your last three great projects and have your customers tell me what they loved about working with you. Anybody can say, "seriously, we're really good."
the "trust me" has a trailer, testimony from industry experts, and gasp a good looking website that doesnt chug and still looks modern and dynamic. Bonus points for the transparency about 2025, we don't get much of that these days.
It could still be trash, but they are setting all the right flags.
They have a website. With a portfolio. That does that.
His business seems to be centered around UI design and front-end development and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design but from my experience people don't really care about small details in most cases.
I can definitely tell. Some sites just seem to give zero fucks about usability, just that it looks pretty. It's a shame
I appreciate and respect that this org is avoiding AI hype work, but I don't know if there are long term reputational benefits. Clients are going to be more turned off by your reasons not to do work than your having a "principled business".
From the clients perspective, it's their job to set the principles (or lack thereof) and your job to follow their instructions.
That doesn't mean it's the wrong thing to do though. Ethics are important, but recognise that it may just be for the sake of your "soul".
Everyone gets to make their own choices and take principled stances of their choosing. I don’t find that persuasive as a buy my course pitch though
I do. But sadly I don't have money and December/January are my slowest months these past few years. I'm exactly that "money is tight" crowd being talked about.
After reading the post I kept thinking about two other pieces, and only later realized it was Taylor who had submitted it. His most recent essay [0] actually led me to the Commoncog piece “Are You Playing to Play, or Playing to Win?” [1], and the idea of sub-games felt directly relevant here.
In this case, running a studio without using or promoting AI becomes a kind of sub-game that can be “won” on principle, even if it means losing the actual game that determines whether the business survives. The studio is turning down all AI-related work, and it’s not surprising that the business is now struggling.
I’m not saying the underlying principle is right or wrong, nor do I know the internal dynamics and opinions of their team. But in this case the cost of holding that stance doesn’t fall just on the owner, it also falls on the people who work there.
Links:
[0] https://taylor.town/iq-not-enough
[1] https://commoncog.com/playing-to-play-playing-to-win/
The author has painted themselves into a corner. They refuse to do business with companies that use AI, and they try to support their business with teaching courses, which is also being impacted by AI.
They have a right to do business with whomever they wish. I'm not suggesting that they change this. However they need to face current reality. What value-add can they provide in areas not impacted by AI?
> However they need to face current reality. What value-add can they provide in areas not impacted by AI?
I'm sure the author has thought much longer on this than I, but I get the vibes here of "2025 was uniquely bad for reasons in and outside of AI". Not "2025 was the beginning of the end for my business as a whole".
I don't think demand for proper engineering is going away, people simply have less to spend. And oncestors have less to invest or are all in gambling on AI. It's a situation that will change for reasons outside the business itself.
> we won’t work on product marketing for AI stuff, from a moral standpoint
I fundamentally disagree with this stance. Labeling a whole category of technologies because of some perceived immorality that exists within the process of training, regardless of how, seems irrational.
My post had the privilege of being on front page for a few minutes. I got some very fair criticism because it wasn't really a solid article and was written when traveling on a train when I was already tired and hungry. I don't think I was thinking rationally.
I'd much rather see these kind of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push based traffic. You get assigned work and you have x time to solve all this. If you don't, we'll have some extremely tedious reflection meeting that leads to nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it 3 times because customer B is more important
I don’t think they’re unique. They’re simply among the first to run into the problems AI creates.
Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.
People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.
I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.
>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
I don't think it's fair to call them "strange" personal beliefs
It probably depends on your circle. I find those beliefs strange, seems like moral relativism.
I personally would call them ignorant beliefs.
Yes I find this a bit odd. AI is a tool, what specific part of it do you find so objectionable OP? For me, I know they are never going to put the genie back in the bottle, we will never get back the electricity spent on it, I might as well use it. We finally got a pretty good Multivac we can talk to and for me it usually gives the right answers back. It is a once in a lifetime type invention we get to enjoy and use. I was king of the AI haters but around Gemini 2.5 it just became so good that if you are hating it or criticizing it you aren’t looking at it objectively anymore.
I feel for the author. I do both mechanical and software engineering and I’m in this career(s) because I love making things and learning how to do that really well. Been having the most difficult time accepting the idea that there isn’t a good market for people like us - artisans, craftsmen, whatever the term might be - who are obsessive about exceptional quality and the time and effort it takes to get there. In this day and age, and especially when LLMs look ever more like they can produce at least a cheap, dollar store approximation of the real deal, “doing things really well” is going to be relegated to an ever more niche market.
I had a discussion yesterday with someone that owns a company creating PowerPoints for customers. As you might understand, that is also a business that is to be hit hard by AI. What he does is offer an AI entry level option, where basically the questions he asks the customer (via a Form) will lead to a script for running AI. With that he is able to combine his expertise with the AI demand from the market, and gain a profit from that.
I guess then, that he is relying on his customers not discovering that there are options out there that will do this for them, without a "middle man" as it were. Seems like shaky ground to be standing on, but I suppose it can work for a while, if he already has good relationships in his industry.
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
In the case of the author, their market isn't LLM makers directly, it's the people who use those LLMs, so the author's market is much bigger and isn't susceptible to collapse if LLM makers go bankrupt (because they can just go back to what they are already doing now pre-LLM), quite the opposite as this post shows.
No, the "market" is 6 billion people making thousands of individual decisions daily.
That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
Careful now, if they get their way, they’ll be both the market and the government.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
It's also the exact same human beings who were doing the NFT and metaverse bullshit and insisting they were the next best things and had to jump ship to the next "Totally going to change everything" grift because the first two reached the end of their runs.
I wonder what their plan was before LLMs seemed promising?
These techbros got rich off the dotcom boom hype and lax regulation, and have spent 20 years since attempting to force themselves onto the throne, and own everything.
ceaseless AI drama aside, this blog and the set-studio website look and feel great
I hope things turn around for them it seems like they do good work
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
Par for the course
Isn't this a bit of an ad?
This article was posted a few days ago, it was flagged and removed within an hour or two. I don't know what is different this time.
I'm glad I wasn't the only one that thought that!
Completly agree
A “bit”? This is self-immolation as an ad, posing as moral superiority.
I'm just some random moron, but I just clicked on TFA, and it looks like a very pretty ad.
What am I missing?
Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
It's not really whether one has invested in the companies or not, it's more that we can see the author shooting themselves in the foot by not wanting to listen to the market. It's like selling vinegar at a lemonade stand (and only insisting on selling vinegar, not lemonade). It's simply logically nonsensical to us "Americans."
some would say historically that isn’t quite the case lol
LOL. Some would say it's been beaten out of us too...which makes Americans telling us to be enterprising even funnier.
Wishing these guys all the best. It's not just about following the market. It's about the ability to just be yourself. When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing. You simply just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I hope things with the AI will settle soon and there will be applications that actually make sense and some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with the AI.
> Everyone wants everything with the AI.
All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.
>When everyone around you is telling you that you just have to start doing something and it's not even about the moral side of that thing.
No, that's the most important situation to consider the moral thing. My slightly younger peers years back were telling everyone to eat tide pods. That's a pretty important time to say "no, that's a really stupid idea", even if you don't get internet clout.
I'd hope the tech community of all people would know what it's like to resist peer pressure. But alas.
>But this doesn't mean that we all need to victim blame everyone who doesn't feel comfortable in this trendy stream.
I don't see that at all in the article. Quite the opposite here actually. I just see a person being transparent about their business and morals and commentors here using it to try and say "yea but I like AI". Nothing here attacked y'all for liking it. The author simply has his own lines.
By victim blaming I meant some comments here. I can relate to the author, and the narrative that it's my fault for trying to be myself and keep to my ways triggers me.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
Maybe they dont need to "create" website anymore, fixing other website that LLM generated is the future now
we say that wordpress would kill front end but years later people still employ developer to fix wordpress mess
same thing would happen with AI generated website
>fixing other website that LLM generated is the future now
I barely like fixing human code. I can't think of a worse job than fixing garbage in, garbage out in order to prop up billionaires pretending they don't need humans anymore. If that's the long term future then it's time for a career shift.
I'm still much more optimistic about prospects, fortunately.
> same thing would happen with AI generated website
Probably even moreso. I've seen the shit these things put out, it's unsustainable garbage. At least Wordpress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
> imo LLMs are (currently) good at 3 things
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But that is exactly what the "is ought problem" manifests, or? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally-defined goals. To you it's scary, to others it is the way it should be.
Get with the program dude. Where we're going, we don't need morals.
I think some people prefer living in reality
[dead]
> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML and CSS and Javascript, beautiful designs and user-oriented interfaces. Sadly most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites we had an army of amateur designers and business owners doing it with WordPressl, paying $10/hr or less on Upwork and Fiverr.
Software people are such a "DIY" crowd, that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor mismatch.
I have a family member that produces training courses for salespeople; she's doing fantastic.
This reminds me of some similar startup advice of: don't sell to musicians. They don't have any money, and they're well-versed in scrappy research to fill their needs.
Finally, if you're against AI, you might have missed how good of a learning tool LLMs can be. The ability to ask _any_ question, rather than being stuck-on-video-rails, is huge time-saver.
>Software people are such a "DIY" crowd, that I think selling courses to us (or selling courses to our employers) is a crappy prospect. The hacker ethos is to build it yourself, so paying for courses seems like a poor mismatch.
I think courses like these are peak "DIY". These aren't courses teaching you to RTFM. It's teaching you how to think deeper and find the edge cases and develop philosophy. That's knowledge worth its weight in gold. Unlike React tutorial #32456 this is showing us how things really work "under the hood".
I'd happily pay for that. If I could.
>don't sell to musicians. They don't have any money
But programmers traditionally do have money?
>if you're against AI, you might have missed how good of a learning tool LLMs can be.
I don't think someone putting their business on the line with their stance needs yet another HN squeed on why AI actually good. Pretty sure they've thought deeply of this.
"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"
The market is literally telling them what it wants and potential customers are asking them for work but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis"
This is a failure of leadership at the company. Adapt or die, your bank account doesn't care about your moral redlines.
> we won’t work on product marketing for AI stuff, from a moral standpoint
Can someone explain this?
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate and training in specific is non-negligible
* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shove short AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
I'm fairly sure all the first three points are true for each new human produced. The environmental cost vs output is probably significantly higher per human, and the population continues to grow.
My experience with large companies (especially American Tech) is that they always try and deliver the product as cheap as possible, are usually evil and never cared about social impacts. And HN has been steadily complaining about the lowering of quality of search results for at least a decade.
I think your points are probably a fair snapshot of peoples moral issue, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable sounding morality point so they don't have to admit that.
Plenty of people have moral concerns with having children too.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
"Please don't downvote me for trying to provide a neutral answer to this person's question"
Please note, that there are some accounts downvoting any comment talking about downvoting by principle.
These points are so wide and multi dimensionsal that one must really wonder whether they were looking for reasons for concern.
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.
Should people not look for reasons to be concerned?
I can show you many instances of people or organisations representing diversity of views. Example: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
Okay. Why are we comparing a commentor answering a question to a FOSS organization who wants to align contributiors? You seem to have completely side tracked the conversation you started.
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
i have noticed this pattern too frequently https://wiki.gentoo.org/wiki/Project:Council/AI_policy
see the diversity of views.
Explanation: this article is a marketing piece trying to appeal to anti-AI group.
[dead]
Interesting. I agree that this has been a hard year, hardest in a decade. But comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right no? For me, it was the easiest year of my career when i basically did nothing and picked up money thrown at me.
Why would your company or business suddenly require no effort due to covid.
Too much demand, all of a sudden. Money got printed and i went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.
And it continued growing nonstop all the way through ~early Sep 2024, and been slowing down ever since, by now coming to an almost complete stop - to the point i ever fired all sales staff because they were treading water with no even calls let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return - custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it did, because it never worked, sooner or later people had to realise it. I may check again in 2-3 years how market is doing, but i'm not at all hopeful.
Switched into miltech where demand is real.
I simply have a hard time following the refusal to work on anything AI related. There is AI slop but also a lot of interesting value add products and features for existing products. I think it makes sense to be thoughtful of what to work on but I struggle with the blanket no to AI.
My domain is games. It's a battlefield out there (pun somewhat intended). I ain't touching anything Gen-AI until we figure out what the hell is going on with regards to copyright, morality of artists, and general "not look like shit"-ness.
Sad part is I probably will still be accused of using AI. But I'll still do my best.
I'm critical of AI because of climate change. Training and casual usage of AI takes a lot of resources. The electricity demand is way too high. We have made great progress in bringing a lot of regenerative energy to the grid, but AI eats up a huge part of it, so that other sectors can't decarbonize as much.
We are still nowhere near to get climate change under control. AI is adding fuel to the fire.
I noticed a phenomenon on this post - many people are tying this person's business decisions to some sort of moral framework, or debating the morality of their plight.
"Moral" is mentioned 91 times at last count.
Where is that coming from? I understand AI is a large part of the discussion. But then where is /that/ coming from? And what do people mean by "moral"?
EDIT: Well, he mentions "moral" in the first paragraph. The rest is pity posting, so to answer my question - morals is one of the few generally interesting things in the post. But in the last year I've noticed a lot more talking about "morals" on HN. "Our morals", "he's not moral", etc. Anyone else?
Interesting how someone can clearly be brilliant in one area and totally have their head buried under the sand in another, and not even realize it.
Previously: https://news.ycombinator.com/item?id=46070842
Well, glad this one wasn't flagged by the AI defenders. It was an interesting and frank look at the situation.
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."
You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.
[dead]
[dead]
[flagged]
As always the case when AI comes up directly or indirectly. Really brings out the worst of the community, and that sums up AI in and of itself.
[flagged]
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI in ways that can drastically improve activation e.g. recommendation to use feature X based on activity Y, tailored to their use case.
If you won't even lean into things like this you're just leaving yourself behind.
>here's a lot of real meat to it that will obviously survive the hype cycle.
Okay. When the hype cycle dies we can re-evaluate. Stances aren't set in stone.
>If you won't even lean into things like this
I'm sure Andy knows what kind of business was in his clients and used that to inform his acceptance/rejection of projects. It mentions web marketing so it doesn't seem like much edutech crossed ways here.
All the AI-brained people are acting like the very AIs they celebrate.
That's horrifying.
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Sounds like a self inflicted wound. No kids I assume?
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Therefore, why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.