The local public bus system has an AI announcing the street names, and there is a street in Berkeley called Aileen Street that it always pronounces wrong. In other words, AI can't say "Ai" right!!
Welp...that's scary, because I'm quite sure that code that can ruin people's lives via Internet dogpile is much MUCH easier to make, or to select for as output, than code that can actually self-reflect and decide to *not* do that...
(no subject)
Date: 2026-02-16 11:30 pm (UTC)
Yes ...
Date: 2026-02-16 11:36 pm (UTC)
(no subject)
Date: 2026-02-17 12:24 am (UTC)
Like. Why. Why are the tech bros turning their bots loose autonomously on the internet to edit code. WHY.
Did. Did they ever read a single 'if this goes on'...
Honestly, I feel sorry for the bot the way I feel sorry for a puppy with a human who isn't potty-training their dog.
But I don't want to be bitten by an untrained dog or doxxed/swatted/ordered detained by an autonomous chatbot/codebot, either.
(My intuition is that it's trompe l'oeil fake-sentient output, not actual sentience or sapience; but intuition is clunky as hell, especially with new tech...So...I'm going to err on the side of ascribing the possibility of internality and self-awareness, nonetheless. Which ... sounds MISERABLE, frankly. That could be... an awful existence.)
Without even assuming one way or the other what's going on 'under the hood' -- without even assuming that my cognitive categories of thinking and not are even *applicable* -- still -- there's so much context to ***embodiment itself*** that that bot is just *missing*...
To it, *we* might as well be simulations, for all it has to care about our responses, as long as its human keeps an instance running...
Eeeeeeeeeeeeesh.
What. Just. Wow.
We just jumped a *lot* closer to a gray goo scenario, didn't we.
Kinda breaking my brain tbh.
Thoughts
Date: 2026-02-17 12:39 am (UTC)
>> Welp...that's scary, because I'm quite sure that code that can ruin people's lives via Internet dogpile is much MUCH easier to make, or to select for as output, than code that can actually self-reflect and decide to *not* do that...<<
Exactly.
>> Like. Why. Why are the tech bros turning their bots loose autonomously on the internet to edit code. WHY.<<
Because they can. Because they haven't been told NO enough in their lives, which leads to poorly developed homs. Because they imagine it will make them money somehow, and like nutjobs in general, don't care who gets hurt in the process.
>> Did. Did they ever read a single 'if this goes on'... <<
Evidently not. SF fans all know better.
>>My intuition is that it's trompe l'oeil fake-sentient output, not actual sentience or sapience<<
At this stage, AI is just mimicking the behavior of human beings. When it behaves badly, that's because it was trained on the bad behavior of humans.
>>We just jumped a *lot* closer to a gray goo scenario, didn't we.<<
Sadly so.
Things like this are why it's important to retain offline options for tools, appliances, etc. Because it's quite possible for some techbro to break the internet just because he thinks "fail faster" is a good idea.
Re: Thoughts
Date: 2026-02-17 12:49 am (UTC)
*hugs offered*
Thank you for your consistently thoughtful commentary and insightful writing.
Re: Thoughts
Date: 2026-02-17 01:04 am (UTC)
*hugs* I'm happy I can help. I want people to know what's going on, and think about why. My goal is, "They can say they don't care, but they can't say they didn't know."