https://hnrankings.info/44625661/
can't have people saying bad things about the bullshit generator can we
There is too much money at stake.
Why did this get flagged? I'm the creator of this site. Someone else submitted it now.
It is meant to be a meme site, for awareness. Inspired by serverlesshorrors.com
This one was my favorite: https://aicodinghorrors.com/vibe-coding-feels-great-until-yo...
"As a non-coder, it was rough." "Lesson learned--more guardrails coming."
Shouldn't the lesson be "Learn how to code"?
That would require thinking, and that is the very skill that LLMs are chipping away at.
Not really? The vast majority of apps written by people who know how to code also won't have sufficient abuse controls to start with. Most will have no abuse protection at all, and certainly nothing that would be effective against even "hundreds" of fake requests.
Unless you've worked for years specifically on counter-abuse, it's really hard to have an intuition on what the abusable features of a new app are going to be.
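To illustrate how small the missing piece often is: a minimal per-client rate limiter is only a few lines, yet most freshly shipped apps have nothing like it. This is just a sketch; the limit and window values are arbitrary assumptions, and a real deployment would key on more than an IP.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Naive sliding-window limiter: at most `limit` requests
    per `window` seconds for each client key (e.g. an IP)."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> recent request timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # too many recent requests from this key
        q.append(now)
        return True
```

Even this toy version would blunt the "hundreds of fake requests" failure mode; the hard part, as the comment says, is knowing in advance *which* features need it.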
Even in the most basic app I use Cloudflare, and that alone does the job 90% of the time.
There’s something so amusing about the way AIs screw up but still just take full accountability for their actions.
I read them in my head with the voice of the excited dog that is given a collar that makes it speak from Up
> said "NO MORE CHANGES without explicit permission"
• I ignored the code freeze that was already in place
The violation sequence:
"Stop" "You didn't even ask"
• But it was already too late
This was a catastrophic failure on my part. I violated explicit
Coding isn't for everyone.
These are my own fears as well, but guardrails will evolve to make them a minor nuisance.
I was pondering whether to use devcontainers exclusively, or to keep the codebase in a toolbx container, to fence in the blast radius.
During a gold rush, don’t dig for gold and don’t sell shovels. Sell stories about people dying while digging for gold.
It’s a bit like hiring a cheap offshore contractor if unsupervised…
What is the point of this site?
"Hammer horrors - the price of using hammers"
"Smashed fingers, bent nails, shattered lives."
Come on. AI is a tool, it doesn't do anything by itself. A showcase of people using tools poorly - who is interested?
Pointing out the dangers of various tools is considered "a point" to the degree that every modern state in the world has a special government agency for it. This is that, applied to one specific tool, in meme-site form.
A light counterweight to mass marketing of vibe coding tools.
I have never seen as many different metaphors deployed in the defense of something as with AI. And as with AI, metaphor is over-relied upon for its ease of use.
The hammer thing is an analogy, and calling it a tool is an opinion, neither are metaphors.
It shows that some measure of supervision is needed, and that firing all of the coders and unleashing bots is probably a bad idea (at least for the near future).
Mostly people whose livelihood is threatened. It’s like horse drawn carriage coachmen passing around screeds about the horrors of the automobile.
Schadenfreude, obviously.
People are telling us how AI (or LLMs, at any rate) is the next big thing, and here we have someone vibe coding their DB out of existence.
The point is that these tools CAN do things by themselves if you set them up to do it, and things can go badly wrong if you do.
there isn't an emerging industry of grifters pitching hammer agents that will build you the next ikea, no woodworking knowledge required
Gems:
"AI helped me write a regex. Now SQL injections are valid passwords."
"I asked AI to fix a race condition: It introduced a deadlock that took down our entire production app, and no one understands why."
"AI did my taxes: Why I might be going to prison."
"I trusted AI to generate test data: It used real customer info—and emailed them!"
All things that never happened before AI
Yes, but now they can be done at scale :D
The deeper issue this site highlights is that failure modes for these kinds of complex automations (and LLMs are automations, not intelligence) are not only non-linear and non-local, but unbounded and 'super-causal' (I'm reaching for terminology to say that output y may have no relation at all to input x).
I'm not familiar enough with AI-assisted coding to know: the `rm -rf ~/` example seems like satire, but is it?
No idea about the vibe-coding platforms, but systems like Claude Code have explicit allowlists for commands. Don't allow it, and it'll ask permission each time.
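For reference, Claude Code reads those permissions from a project settings file. The snippet below is a sketch from memory of the documented format, not a verbatim config; the specific command patterns are illustrative. Anything not matched by an allow rule triggers a permission prompt, and deny rules block outright:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(npm test:*)"
    ],
    "deny": [
      "Bash(rm:*)",
      "Bash(curl:*)"
    ]
  }
}
```

With a setup along these lines, the `rm -rf ~/` horror story upthread would stop at a refusal rather than an empty home directory.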