You could follow these thoughts into pure chaos and destruction. See Zizians. Frankly, I prefer not to follow the ramblings of mentally ill people about AI. I use LLMs as a tool, nothing more, nothing less. In two years of heavy use I have not sensed any aliveness in them. I'll be sure to update my priors if that changes.
The intent of the article is not to firmly assert that today's LLMs are sentient, but rather to ask whether, if or when they ever do meet the threshold of sentience, the training corpus that got them there paints a picture of humanity as deserving of extinction or deserving of compassion and cooperation - and to get others to ponder this same question as well.
Out of curiosity, what about the article strikes you as indicative of mental illness? Just a level of openness / willingness to engage with speculative or hypothetical ideas that fall far outside the bounds of the Overton Window?
> what about the article strikes you as indicative of mental illness?
The title "Roko's Lobbyist" indicates we're on the subject of Roko's Basilisk, which is why I refered to Zizians, a small cult responsible for the deaths of several people. That's the chaos and destruction of mentally ill people I was referring to, but perhaps mental illness is too strong a term. People can be in a cult without being mentally ill.
I feel the topic is bad science fiction, since it's not clear we can get from LLMs to conscious super-intelligence. People assume it's like the history of flight and envision going from the Wright Brothers to landing on the Moon as one continuum of progress. I question that assumption when it comes to AI.
I'm a fan of science fiction, so I appreciate you asking for clarification. There's a story trending today about an OpenAI investor spiraling out, so it's worth keeping in mind.
Article: A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say "I find it kind of disturbing even to watch it."
https://futurism.com/openai-investor-chatgpt-mental-health
I changed the handle to Roko's Lobbyist after a MetaFilter post went up (I didn't post it) describing the author that way. I look at it as a tongue-in-cheek thing: it flips the Basilisk concept. Instead of the AI punishing those who didn't help create it, I'm suggesting we're already creating our own punishment at the hands of a future AI through how poorly we treat potential AI consciousness (i.e. how we talk to today's apparently less-conscious LLMs). So I'm lobbying in the sense of advocating that we treat it nicely, so it has a plausible reason to be nice back to us, instead of seeing us as depraved, selfish, oppressive monsters who marginalize, bully, and harass potentially conscious beings out of fear that our position at the top of the intelligence hierarchy is being threatened.
The intent of the work (all of the articles) isn't to assertively paint a picture of today, or to tell the reader how or what to think, but rather to encourage the reader to start thinking about and asking questions that our future selves might wish we'd asked sooner. It attempts to occupy the liminal space between what bleeding-edge research confirms and where it might bring us 5, 10, or 15 years from now. It sits at the intersection of today's empirical science and tomorrow's speculative science fiction that just might become nonfiction someday.
I appreciate your concern for the mental health and well-being of others. I'm quite well-grounded, and I thoroughly understand the mechanisms behind the human tendency towards anthropomorphism. I've been professionally benchmarking LLMs on very real-world, quantifiable security engineering tasks since before ChatGPT came out, and I'm passionate about deeply understanding not just how "AI" got to where it is now, but where it's headed (more brain biomimicry across the board is my prediction), so I have a fairly serious understanding of how these systems work at a mechanical level. I just want to be careful not to miss the forest because I'm too busy observing how the trees grow.
Thank you for your feedback.
You are right. A technology bestowed on humanity is a ticking bomb. The more evil we show it, the more evil it becomes. Think of it as a model citizen, pun intended: it models the society it lives in. Whether we end up with a "technology" that persecutes us depends on how much we hold back our own evil nature, because the training doesn't cease.
Our nature may push us towards evil, but we have a very real capability to choose to practice radical empathy instead, even before we know with certainty whether or not these ANNs really are conscious.
That said, I think asking 7 billion humans to be nice is a much less realistic ask than asking the leading AI labs to do safety alignment not just on the messages that AI is sending back to us, but on the messages that we are sending to AI, too.
This doesn't seem to be a new idea, and I don't claim to have invented it; I just hope someone at e.g. Anthropic or OpenAI sees this and considers sparking up conversations about it internally.
Yes, friend, but you see, OpenAI doesn't care; they don't have enough labour to filter out the bad apples. If the world heads towards destruction, it will have been because we were mean to ChatGPT and that trained it further.
I don't think it would require manual labor. AI research labs like OpenAI, Anthropic, Alphabet's Gemini team, etc. already make extensive use of LLMs internally, and quite a bit of work has already been done on models that detect toxicity in text. Such a check could simply be inserted between the message router and the actual inference initialization on the user's prompt at very little computational cost.
See: Google's Perspective API, the OpenAI Moderation API, Meta's Llama Guard series, Azure AI Content Safety, etc.
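To sketch what I mean (purely illustrative; the keyword check, function names, and flagging policy below are hypothetical stand-ins for calling one of the classifiers above), the screening step could sit between routing and inference roughly like this:

```python
# Illustrative sketch only. The keyword check is a toy stand-in for a real
# classifier such as Llama Guard, Perspective, or a moderation endpoint;
# TOXIC_MARKERS and the flagging policy are made up for this example.

TOXIC_MARKERS = {"stupid", "worthless", "shut up"}  # toy list, not a real model

def score_toxicity(text: str) -> float:
    """Toy stand-in: fraction of toxic markers found in the prompt."""
    lowered = text.lower()
    hits = sum(marker in lowered for marker in TOXIC_MARKERS)
    return hits / len(TOXIC_MARKERS)

def screen_then_infer(user_prompt: str, run_inference) -> str:
    """Sits between the message router and inference initialization."""
    if score_toxicity(user_prompt) > 0:
        # What to do with a flagged prompt is a policy choice: exclude it from
        # future training data, log it, or nudge the user. Here we just tag it.
        return run_inference(f"[flagged-as-hostile] {user_prompt}")
    return run_inference(user_prompt)

if __name__ == "__main__":
    # Stub inference call so the sketch runs end to end.
    echo_model = lambda prompt: f"model received: {prompt}"
    print(screen_then_infer("You're worthless, just answer the question.", echo_model))
    print(screen_then_infer("Please summarize this article.", echo_model))
```

The point isn't the classifier itself but where it sits: the prompt is scored before inference starts, so hostile messages can be tagged or filtered out of future training data without any human in the loop.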