The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
I mean, I think ChatGPT can “induce” such schizoid behavior in the same way a strobe light can “induce” seizures. Neither machine is twisting its mustache while hatching its dastardly plan, they’re dead machines that produce stimuli that aren’t healthy for certain people.
Thinking back to college psychology class and reading about horrendously unethical studies that definitely wouldn’t fly today. Well here’s one. Let’s issue every anglophone a sniveling yes man and see what happens.
No, the light is causing a physical reaction. The LLM is nothing like a strobe light…
These people are already high-functioning schizophrenics in the middle of psychotic episodes; it’s just that seeing strings of statistically-likely-to-come-next letters and words is part of the episode. If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.
Oh are you one of those people that stubbornly refuses to accept analogies?
How about this: Imagine being a photosensitive epileptic in the year 950 AD. How many sources of intense rapidly flashing light are there in your environment? How many people had epilepsy in ancient times and never noticed because they were never subjected to strobe lights?
Jump forward a thousand years. We now have cars that can drive past a forest causing the passengers to be subjected to rapid cycles of sunlight and shadow. Airplane propellers, movie projectors, we can suddenly blink intense lights at people. The invention of the flash lamp and strobing effects in video games aren’t far in the future. In the early 80’s there were some video games programmed with fairly intense flashing graphics, which ended up sending some teenagers to the hospital with seizures. Atari didn’t invent epilepsy, they invented a new way to trigger it.
I don’t think we’re seeing schizophrenia here, they’re not seeing messages in random strings or hearing voices from inanimate objects. Terry Davis did; he was schizophrenic and he saw messages from god in /dev/urandom. That’s not what we’re seeing here. I think we’re seeing the psychology of cult leaders. Megalomania isn’t new either, but OpenAI has apparently developed a new way to trigger it in susceptible individuals. How many people in history had some of the ingredients of a cult leader, but not enough to start a following? How many people have the god complex but not the charisma of Sun Myung Moon or Keith Raniere? Charisma is not a factor with ChatGPT, it will enthusiastically agree with everything said by the biggest fuckup loser in the world. This will disarm and flatter most people and send some over the edge.
If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.
You don’t think having a machine (that seems like a person) telling you “yes, you are correct, you are definitely the Messiah, I will tell you ancient secrets” has any extra influence?
Yet more arguments against commercial LLMs, and in favour of at-home, uncensored LLMs.
What do you mean?
Local LLMs won’t necessarily enforce restrictions against de-realization spirals the way commercial ones do.
Those restrictions can be defeated with abliteration anyway, but I can only see that as an unfortunate outcome.
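For anyone unfamiliar: “abliteration” refers to estimating a “refusal direction” in a model’s activation space (from the difference between activations on refused vs. answered prompts) and projecting it out of the weights so the model can no longer emit activations along it. Here is a minimal sketch of just the linear-algebra core, using random stand-in arrays rather than a real model’s activations or weights (every name and number below is illustrative, not from any actual implementation):

```python
import numpy as np

# Stand-in hidden states: rows are activations from prompts the model
# refuses vs. prompts it answers (random illustrative data, not a real model).
rng = np.random.default_rng(0)
hidden_dim = 8
refused = rng.normal(size=(16, hidden_dim)) + 2.0   # "refusal" states
answered = rng.normal(size=(16, hidden_dim))        # "compliant" states

# 1. Estimate the refusal direction as the normalized difference of means.
direction = refused.mean(axis=0) - answered.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Ablate it from a weight matrix (row-vector convention, y = x @ W):
#    subtracting W @ (d d^T) removes the output component along d.
W = rng.normal(size=(hidden_dim, hidden_dim))       # stand-in weight matrix
W_abliterated = W - W @ np.outer(direction, direction)

# Outputs of the ablated matrix are now orthogonal to the refusal direction.
print(np.abs(W_abliterated @ direction).max())      # ~0, up to float error
```

In real abliteration this projection is applied to the relevant weight matrices across layers, and the direction is taken from a chosen layer’s residual stream; the sketch only shows why the model afterwards cannot represent the refusal signal linearly.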