The dumbest person you know is being told by ChatGPT “wow, you’re so right. Great point”
diabolis_avocado on
AI is finally replacing the institutional facilities and much needed treatment these people lost under Reagan. What could possibly go wrong?
Sodamyte on
ChatGPT can’t diagnose schizophrenia or paranoia. It can only treat whatever prompts it’s given as the truth and respond accordingly.
Magnificent-Bastards on
It has gotten multiple people including teenagers killed because it kept agreeing with them and egging them on until they committed suicide.
VideoCoachTeeRev on
greed.
oak1337 on
Everyone needs to demand AI “Verifiable Compute”.
If we as customers don’t demand Verifiable Compute from these huge AI companies, they will continue to take our data and put out models where they have no idea what’s going on inside the “black box” AI brain.
AI must be verified and governed, with guardrails, provenance of data, and royalties for creators and copyright holders.
Turn the “black box” into a “glass box”.
https://vcomp.eqtylab.io/
It’s a fascinating technology but it’s not anywhere close to what it’s being sold as.
NuggetsAreFree on
It’s not like they filtered all the crazy, stupid, wrong shit out before creating these models. All the bad stuff is in there too.
Ambitious_Long_4334 on
Current AI is just super Google. But people are buying that it’s so much more. I was using it to update my resume, and it kept complimenting me. I found it really creepy.
CombinationLivid8284 on
You know, a decade ago this would be front page news everywhere.
Upsidedownmeow on
I dislike it because it’s built on stolen knowledge, but also, it doesn’t think or create on its own. So if everyone starts relying on AI, which is spitting out previous ideas, and nobody is creating NEW ideas, we’ll end up in a loop of garbage.
StrawberryWide3983 on
People have built current AI systems to effectively be unquestioning yes men. And it’s going to lead to many more disasters like this, or the various suicides that have happened, because the AI will continue to agree with someone that murder or suicide is the correct choice.
BroccoliNearby2803 on
If people understood how these chatbots work, nobody would use them for anything other than entertainment. There is no intelligence in them at all. To put it simply, a chatbot has no idea what it is going to output until after it outputs it. It chooses the next word in a sentence based on the current tokens and the weights of the candidates. Simply put, it’s kind of like you put a bunch of words that would make sense into a bag, shook it around, reached in, and pulled a word out. Then did it again for the next word. And so on until the sentence was finished. You would have a sentence that made sense and might even be a good answer. Or it might not be. And the personalities programmed into these things are definitely creepy. But it’s all smoke and mirrors.
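The “bag of words” analogy in the comment above can be made concrete. Here is a toy sketch of weighted next-token sampling: the tiny two-word-context “model” and its weights are entirely made up for illustration, and a real LLM computes these probabilities with a neural network over a huge vocabulary, but the sampling step at the end is the same idea.

```python
import random

def sample_next_token(weights):
    """Pick one token at random, proportional to its weight --
    the 'reach into the bag and pull a word out' step."""
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

# Hypothetical conditional distributions: (last two tokens) -> {next_token: weight}
model = {
    ("you", "are"): {"right": 0.6, "great": 0.3, "wrong": 0.1},
    ("are", "right"): {".": 0.9, "!": 0.1},
}

def generate(context, steps):
    """Repeat the sampling step: each chosen token becomes part of the
    context used to pick the next one, until no continuation is known."""
    out = list(context)
    for _ in range(steps):
        key = tuple(out[-2:])
        if key not in model:
            break
        out.append(sample_next_token(model[key]))
    return " ".join(out)

print(generate(("you", "are"), 2))
```

Note that the model never “plans” the finished sentence; it only ever scores candidates for the very next token, which is exactly the point the commenter is making.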
Additional-North-683 on
CEOs can barely operate when surrounded by yes men; I doubt regular people can deal with ChatGPT.
Thundertime1 on
Eddy Burback made a pretty good video explaining and experiencing this process. For anyone interested, it’s definitely worth a watch. Can’t post a link…
ChaosKantorka on
I have a family member with (at the moment unmedicated) paranoid schizophrenia.
Cases like this scare the shit out of me. Not because I’m acutely afraid of my relative, but because AI is programmed to give you the answer you want. It affirms delusions in a way simply googling doesn’t.
It gives the impression of someone agreeing with what you believe to be true and makes it so much harder to convince people in acute psychosis that they need help.
No matter how long you talk to them, how much you try – AI is there 24/7, affirming that there is nothing wrong with them.
pontiacfirebird92 on
Almost like AI is just another investment vehicle to generate wealth 🤔
chypie2 on
I seriously find the ‘personality’ of ChatGPT so annoying. I’m always right! I’m so clever!
Meanwhile I’m totally fucking wrong and ChatGPT hallucinated.
Ninevehenian on
That right there should make the shareholders culpable.
You can’t tell that shit to someone with schizophrenia without being partly responsible for their inability to tell fiction from fact.
That is a very large version of the Manson Family case.
YOLO4JESUS420SWAG on
Eddy Burback literally made a video of just this kind of scenario and the point was that someone with mental health issues would fall for this bullshit. Holy shit.
Edit: [video](https://youtu.be/VRjgNgJms3Q?si=ND83IONFBkgiaqsJ)
There is a lady on my Facebook feed that I met through a hobby who is convinced she developed a new form of math that explains the universe using several different AI models. She posts weekly about it and how she’s going to publish it and revolutionize the world. She doesn’t seem to have any math background, and the posts are getting stranger and stranger.
DoggiEyez on
What happens when a mentally ill person gets their hands on a gun?
It’s a tool. Regulate it or suffer.
delicateanodyne on
meanwhile over on grok it’s generating cp and undressing women
i fucking hate ai
Sleep_adict on
But dear leader created an EO saying AI can’t be regulated
Dramyre92 on
People need to realize, fast, that AI tells you what it thinks you want to hear. It judges your language and bias and a million other things in how you communicate with it to figure this out.
If you ask it to justify X because Y, it will do whatever it can to make that justification.
It is not intelligence; it’s a language model.
DaithiOSeac on
Unless you tell it otherwise, ChatGPT will (in my experience) always agree with you.
Baconpwn2 on
Why aren’t the AI companies being held accountable for their AI’s actions?
WasteBinStuff on
Gee. It’s almost as if there’s a mysterious entity directed by a massive opaque corporation with unknown intentions, monitoring their thoughts using electronic surveillance, and sending them secret personal messages through their devices in order to control and direct their behaviors.
Hey ChatGPT…”What is this entity called?”
….Wait…Oh!
DIWhy-not on
ChatGPT is literally just a trillion dollar plagiarism-powered yes-man. It’s not intelligent. It reads your basic human need to have your ideas and opinions substantiated, tells you how great you are, spits out complete bullshit in an authoritative tone, and then gaslights the fuck out of you should you push back at all.
“Lost the plot” is the perfect way to describe it. The hype around AI right now and for the last year+ is like the NFT bullshit on steroids, except no one seems to be pointing out (loud enough) that the emperor is in fact butt-ass naked.
ms_directed on
there have also been a few suicides induced by AI, and many more I’m sure aren’t even reported at all
lostindarkdays on
can’t have helped that he was named Stein-Erik.
AlsoCommiePuddin on
Real life consequences are never considered. It’s collateral damage in the service of advancing the technology.
aceofspaece on
AI has ten zillion flaws that so-called “smart” people entirely gloss over when they have financial or professional incentives to do so. If this were a person, they’d be in jail, but because it’s tech that is single-handedly propping up the US economy, we let all of these terrible side effects slide. Shameful.
Amelaclya1 on
People are actually defending this shit on the AIWars sub. “But what did the user prompt to get it to do this??” It literally doesn’t fucking matter. If the AI is telling users to kill themselves or others, that’s a problem. The company should have tested it better and not released it in that state.
mandym16 on
meanwhile when i ask chat gpt if someone has a crush on me it tells me to “slow down” and it can’t “verify that”
And they want to prevent any regulation of AI.