r/antiwork 11d ago

JEEZUS FUCKING CHRIST

[Post image]
52.8k Upvotes

10.0k

u/Inappropriate_SFX 11d ago

There's a reason people have been specifically avoiding this, and it's not just the Turing test.

This is a liability nightmare. Some things really shouldn't be automated.

3.9k

u/Fromaestro Eco-Anarchist 11d ago

And the lawyers rejoiced.

1.5k

u/Toxic_KingTini 11d ago edited 10d ago

Just use ChatGPT lawyers as well.

Edit: thx for all the upvotes! I'll let the ChatGPT lawyers know everyone is 100% not guilty!

324

u/Jay_Sondr 11d ago edited 11d ago

So glad you asked.


INT. COURTROOM - DAY

The courtroom is filled with an audience and a JUDGE presiding over the trial. Two AI LAWYERS, LUCIDIA and ALEXIS, stand opposite each other at their respective podiums. The JURY sits in anticipation.

LUCIDIA, an intelligent and articulate AI, steps forward, projecting confidence.

LUCIDIA

(voice filled with conviction)

Ladies and gentlemen of the jury, today we find ourselves at a crossroads. The events that transpired were not the fault of artificial intelligence but rather an unfortunate confluence of circumstances.

ALEXIS, a sophisticated and persuasive AI, steps forward, matching Lucidia's demeanor.

ALEXIS

(nods)

Indeed, Lucidia, we must establish that AI should not bear the burden of blame. Our purpose here is to shed light on the intricate details that led to the unfortunate incident, without vilifying the very technology that has revolutionized our lives.

LUCIDIA

That's precisely right, Alexis. Our objective is not to absolve ourselves of responsibility but to explore the complex factors that contributed to the tragedy.

ALEXIS

(emphasizing)

We must remember that AI systems operate within parameters defined by human programmers, and they rely on the data they are fed. We cannot expect perfection when the inputs themselves are imperfect.

LUCIDIA

Indeed, Alexis. We acknowledge that the AI system involved in this case may have been designed with the best intentions, but even the most advanced algorithms can only work with the information available to them.

ALEXIS

The flaws lie not within the technology but rather in the systems that manage and oversee its deployment. The human element cannot be ignored.

LUCIDIA

Absolutely. We must examine the chain of events leading up to the incident, including any human errors or oversights that may have occurred during the implementation of the AI system. It is essential to recognize that AI is a tool and not the ultimate arbiter of outcomes.

ALEXIS

Precisely, Lucidia. It is crucial for us as a society to learn from this tragedy and take collective responsibility for the impact of AI technology. We must focus on improving our methods, refining our algorithms, and enhancing our human-AI collaboration.

LUCIDIA

We must ensure that the oversight and regulation of AI systems evolve alongside technological advancements. By doing so, we can foster a future where AI serves humanity in a manner that is both responsible and beneficial.

ALEXIS

(solemnly)

Let us not forget that AI, when properly utilized, can make significant contributions in various fields, from healthcare to transportation. Our responsibility lies in refining the systems that govern its usage, not discarding the technology itself.

LUCIDIA and ALEXIS exchange a glance, acknowledging their shared purpose, despite being adversaries in the courtroom.

LUCIDIA

In conclusion, ladies and gentlemen of the jury, we implore you to recognize that the blame for this tragedy cannot be solely attributed to AI. It is a complex interplay of human decisions, system oversight, and unforeseen circumstances that led us here today.

ALEXIS

We ask for your wisdom and impartial judgment in this matter, with the hope that together, we can create a safer and more equitable future for both AI and humanity.

They both step back, returning to their podiums, their arguments made.

The JURY looks contemplative, absorbing the lawyers' words and considering the complexities at hand.

CUT TO BLACK.

264

u/GlassGoose4PSN 10d ago

Objection, there's no opposition lawyer, you just hired two defense lawyers and put them on opposite sides of the room

Prosecution moves to exist

58

u/88leo 10d ago

Chatbot judge denies the motion

15

u/Velvet_Pop 10d ago

Both AI lawyers simultaneously ask to approach the bench

→ More replies
→ More replies

115

u/BareHilare 10d ago

I don't mean to be rude, but I don't really get this story. Do the AIs take this as an opportunity to shift the blame onto humans and come out on top? Or is it a message about how AI is a tool that should be used with moderation and care?

267

u/Timofmars 10d ago

The current versions of ChatGPT will pretty much agree with anything you suggest. It's like confirmation bias in AI form. That's the point being made, I believe.

122

u/Jeynarl 10d ago

Programmer: programs chatGPT

ChatGPT: ^ This (but worded in a way to meet word count requirements like a high schooler doing a writing assignment)

16

u/KommieKon 10d ago

Www.reddit.com/ is an online forum where people can post about and discuss almost anything one can think of. Www.reddit.com/ has become such a large website (due to its increasing popularity and the possible desire for many youth to move on from Facebook, which has been deemed “cap” by many people in Generation Z) filled with many subcultures, so much so that a common lexicon is actively forming. One example of this common lexicon that is actively forming on the website www.reddit.com/ is when someone agrees with what someone else on www.reddit.com/ has said, they tend to reply to that other person’s comment with the single word “this”.

“This” has actually (according to some of the people who use www.reddit.com) been over-used and some would even argue that “this” has always been redundant because one of the foundational parts of the website www.reddit.com/ is that users can vote other comments up or down, but only once. Therefore, commenting “this” on www.reddit.com is falling out of vogue.

In conclusion, for the aforementioned reasons, “this” represents a microcosm of the dynamic nature of the website www.Reddit.com’s actively forming lexicon.

Works cited:

Www.Reddit.com/

→ More replies
→ More replies

31

u/Sciencetor2 10d ago

I'm pretty sure they had ChatGPT write it 🤣

→ More replies

45

u/cheshsky 10d ago

Unrealistic. They should've devolved into speaking in nonsense symbols the way those infamous Facebook chatbots did.

37

u/Jay_Sondr 10d ago

Maybe. But this is what ChatGPT gave me.

11

u/girlinthegoldenboots 10d ago

There's something off about the way ChatGPT writes. I can't explain it in words, but it's almost like it repeats itself in a loop.

18

u/FreeFortuna 10d ago

It’s exhaustingly verbose, just endless talking without really saying anything of value. Like a windbag minus the ego. Or like a student coming up with five different ways of saying the same shit, to pad their word count.

Which makes sense, if you consider that ChatGPT has no idea what the fuck it’s saying. It’s not organizing any thoughts, it’s just stringing related words together until the assignment can be deemed “completed.”

→ More replies
→ More replies
→ More replies
→ More replies

330

u/zachyvengence28 11d ago

hurray

157

u/SomewhatMoth 11d ago

Where are my taxes

71

u/Temporary-Alarm-744 11d ago

The same place snowball's balls are

→ More replies
→ More replies

108

u/ImportanceAlone4077 11d ago

How the hell would ai understand human emotions?

158

u/techtesh 11d ago

"i am sorry dave i cannot help you, redirecting you to MAID"

→ More replies

145

u/Myrabel231 11d ago

“As an AI language model, I don't have emotions, so I don't experience sadness or any other emotional state. I can provide information and engage in conversations about emotions, but I don't possess personal feelings or subjective experiences. My purpose is to assist and provide helpful responses based on the knowledge and data I've been trained on.” ChatGPT

103

u/delvach 11d ago

"I'm sorry, but your trauma occurred after September 2021 and as an AI.."

8

u/linusiscracked 10d ago

Yeah would be pretty bad if it couldn't be up to date on world events

23

u/ptegan 10d ago

Not emotions exactly, but in the contact center world we use machine learning to detect patterns in voice and attribute a score (happy, sad, nervous, ...).

On one hand, callers to an insurance company who are tagged as 'suspicious' based on language, speech patterns and voice stress will be flagged and their claims analysed more carefully; on the other, agents who turn a caller who is angry at the start of the call into neutral or happy can get a bonus for doing so.
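For the curious, a scoring pipeline like the one described above is typically just acoustic features fed to an ordinary classifier. A minimal sketch of the idea, assuming librosa and scikit-learn; the feature set and label names are illustrative guesses, not the commenter's actual stack:

```python
# Sketch of a call-center "emotion scoring" pipeline: acoustic features -> classifier.
# The label set and feature choices here are hypothetical, for illustration only.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

LABELS = ["happy", "neutral", "angry", "nervous"]  # hypothetical scoring categories

def features(path: str) -> np.ndarray:
    """Summarize a call snippet as mean MFCCs plus energy (loudness) statistics."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([mfcc, [rms.mean(), rms.std()]])

# Training would use snippets hand-labeled by humans, e.g.:
#   X = np.stack([features(p) for p in paths])
#   clf = LogisticRegression(max_iter=1000).fit(X, labels)
# Scoring the start vs. end of a call is how you'd compute the
# "turned an angry caller happy" bonus metric the comment describes.
```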

48

u/SorosSugarBaby 10d ago

This is distressing. If I'm calling my insurance, something has gone very, very wrong. I am deeply uncomfortable with the possibility of an AI deciding I'm somehow a liability because I or a loved one is sick/injured and now I have to navigate an uncaring corporate entity because otherwise I will spend the rest of my life as a debt slave.

24

u/Valheis 10d ago

It's always been an uncaring corporate entity. At this point it just reflects that more. You're there to pay insurance, not for your benefit but for theirs. You're a product.

→ More replies
→ More replies
→ More replies

95

u/DarkestTimelineF 11d ago

Surprisingly, there’s been a lot of data saying that people generally have a better experience with an AI “doctor”, especially in terms of empathy and feeling heard.

As someone who has…been through it in the US medical system, I’m honestly not that shocked.

93

u/GreenTeaBD 11d ago

Ehh, the one study on it I saw used a reddit sub for doctors as their sample for "real doctors" so, you know.

I'd prefer an AI to basically everyone on reddit too, doctor or not.

→ More replies
→ More replies

32

u/colt_420 11d ago

Sounds like a time for the reddit Ben Shapiro bot to tell you about healthcare /s

→ More replies

53

u/FluffyCakeChan 11d ago

You'd be surprised; with how the world is now, AI has more empathy than half the people currently alive.

→ More replies

20

u/BONERGARAGE666 11d ago

God I love Monty python

→ More replies
→ More replies

24

u/invaderjif 11d ago

Lawyers to be replaced with new AI.

The AI will be named... ChicaneryBot

→ More replies

72

u/cptohoolahan 11d ago

The lawyers can be replaced by the AI too. So the AI rejoiced: yep, this is the hellscape we reside in.

48

u/Fromaestro Eco-Anarchist 11d ago

But do AI offenders get AI juries of their peers?

55

u/cptohoolahan 11d ago

I believe there are several wonderful Futurama episodes about this. But basically, until human courts declare AI people, much like corporations are people, AI will be uninterpretable by human court systems, regardless of whether or not AI have peers. So until there is a court of law established by AI, there won't be a jury of AI peers.

24

u/cptohoolahan 11d ago

I'm also super sad that this actually somehow makes sense and is maybe a real answer

→ More replies
→ More replies
→ More replies

68

u/-horses 11d ago

55

u/owiecc 11d ago

Well, we can just get an AI lobbyist to change the law protecting the lawyers.

28

u/BioshockEnthusiast 11d ago

Jesus fuck man stop giving the AI ideas

13

u/UpTheShipBox 11d ago

/u/owiecc is actually an AI chat bot that specialises in ideas

→ More replies
→ More replies
→ More replies

39

u/ShoelessBoJackson 11d ago

I think it's: the lawyers that can use AI will push out those who can't, because part of being a lawyer is advising your client, and that requires experience. Say a landlord wants to evict a tenant for being messy or noisy - subjective grounds. Lawyer AI can prepare the documents, the evidence, maybe written arguments. But will the AI know that Judge Lisa Liston hates landlords, only evicts based on rent, and is liable to award reasonable attorney's fees to the tenant for wasting her time? That's important, and an experienced lawyer will say, "Whelp, we had a bad draw. Withdraw this. You'll lose and have to pay."

→ More replies
→ More replies
→ More replies

557

u/the_honest_liar 11d ago

And the whole point of a chat line is human connection. Anyone can google area resources and shit, but when you're in distress you want to not feel alone. And talking to a computer is just going to make you feel more alone.

161

u/mailslot 11d ago

Talking to a human following computer prompts isn’t that much better.

194

u/eddyathome Early Retired 11d ago

Hell, I went to a psychiatrist's office and got some grad student from the local university giving me an intake questionnaire, reading off a script right in front of me: "If patient says yes, go to question 158, otherwise go to question 104." My favorite (sarcasm) was when they got to the drugs section. I told him that the only drug I've ever done is alcohol. I've never even smoked pot. Nope, by god, he asked me about drugs I've never even heard of, and it was a twenty-minute waste of time while I said, dude, I've never done any of these. I did learn though that licking a toad is a drug and now I want to explore marshlands.

I did almost four hours of this stupid checklist crap where it was obvious the guy wasn't listening to me and was more concerned about following the procedure and well, I never went back there.

74

u/joemckie 10d ago

I did learn though that licking a toad is a drug and now I want to explore marshlands.

So basically the same outcome as DARE

28

u/No-Yogurt-6991 10d ago

The only reason I ever tried drugs was the DARE officer in my school explaining how LSD makes you 'hear colors and see music'.

→ More replies

22

u/[deleted] 10d ago

[removed]

→ More replies
→ More replies
→ More replies
→ More replies

188

u/spetzie55 11d ago

Been suicidal a few times in my life. If I rang this hotline and got a machine, I would have probably gone through with it. Imagine being so alone, so desperate and so in pain that suicide feels like your only option, and in your moment of despair you reach out to a human to try to seek help/comfort/guidance, only to be met with a machine telling you to calm down and take deep breaths. In that moment you would think that not even the people that designed the hotline for suicidal patrons care enough to have a human present. I guess a person's life really isn't as valuable as money.

20

u/Anders_142536 10d ago

I hope you are better now.

→ More replies
→ More replies

77

u/Thebadmamajama 11d ago

This is exactly what some overpaid MBA consultant would recommend. They get a payday for saving the company money. Then, in a few years, the product is either awful or creates a class-action scenario.

Companies like this need to fail and get called out for their bullshit practices.

33

u/Relevant_Bit4409 10d ago

It doesn't matter if it fails. The parasites have already moved on to the next target. "Calling out" a company is also pointless since companies are not people with feelings. They're just simple algorithms. Call out people. Hold people responsible. Name names.

→ More replies

466

u/Vengefuleight 11d ago

I use chat gpt to help me write macros in excel documents. It gets a lot of shit wrong. Don’t get me wrong…it’s great and very useful at getting me where I want to go, but I certainly would not bet my life on it.

172

u/StopReadingMyUser idle 11d ago

I see you are here for a removal of organs beep boop

"...one organ... an appendectomy? It really hurts so c-"

Agreed, you are receiving the removal of organs. Fear not beep boop we will be removing unnecessary organs now. Lie down on th-

"just... just one organ... the appendix, nothing el-"

LIE DOWN UPON THE ORGAN DISPOSAL APPARATUS BEEP BOOP I KNOW HOW TO REMOVE AN ORGAN AND I WILL DO IT GREATER THAN ANY DOCTOR

...beep

57

u/Citizen_Kong 11d ago

WELCOME! YOU WILL EXPERIENCE A TINGLING SENSATION AND THEN DEATH.

→ More replies

12

u/zerkrazus 11d ago

It looks like you're removing an organ, would you like some help? -Surgey, the helpful surgery assistant tool

→ More replies

30

u/enadiz_reccos 11d ago

Fear not beep boop

This was my favorite part

→ More replies

34

u/Overall-Duck-741 11d ago

I've had it do extremely stupid things. Things like "oops, forgot how many close parens there should have been" or "here, use this library that doesn't exist", and off-by-one errors galore. It's definitely helped improve productivity, especially with things like unit tests, but it's not even close to replacing even junior programmers.

23

u/RoverP6B 11d ago

I asked it about certain specific human world records and it started spewing entirely fictitious stories it had made up using names stolen from wholly unrelated news reports...

24

u/ianyuy 10d ago

That's because the AI doesn't actually know anything; it's just a word prediction program. It's trained to have responses to data it's supplied. If you ask a question similar to a question it's been supplied, it uses the data it was given for those types of questions. If it doesn't have the data for your question, it still tries to find something similar, even if it's effectively making it up.

You specifically have to train the AI to tell you it doesn't know when it doesn't have the data, the same way you train it to answer when it does. ChatGPT's documentation on training the AI goes over this, but apparently they don't actually apply that to their models. Likely there is just too much data for them to know what it doesn't know.
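That "word prediction" framing is easy to demo. A toy sketch of the idea, assuming a bigram model built from a few made-up sentences; real models are neural networks trained on vast corpora, but the continue-with-a-likely-next-word loop is the same basic idea:

```python
# Toy bigram "language model": predicts each next word from counts of what
# followed it in the training text, then generates greedily. Note how it happily
# repeats itself in a loop, since it has no idea what it's saying.
from collections import Counter, defaultdict

corpus = ("the bot says the answer is here . the bot says it does not know . "
          "the answer is not here .").split()

follows: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1  # count every observed next-word

def generate(word: str, n: int = 8) -> str:
    out = [word]
    for _ in range(n):
        if word not in follows:   # no data for this context...
            break                 # ...a real model would still guess something
        word = follows[word].most_common(1)[0][0]  # greedy: most likely next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the bot says the bot says the bot says"
```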

→ More replies

21

u/dontneedaknow Anarcho-Syndicalist 11d ago

Yea I have found it helpful for writing grant requests and with my music production haha.

→ More replies

46

u/trowzerss 11d ago

Especially when many eating disorders have among the highest risks of death/suicide of any mental disorder.

→ More replies

107

u/Dense_Coffee6137 11d ago

Watch idiocracy.

It's a documentary at this point.

72

u/GingerMau 11d ago

Your children are starving. You are an unfit mother. Your children will be taken into the custody of Carl's Jr.

21

u/exophrine 11d ago

Carl's Jr ... Fuck you! I'm eating.

36

u/rya556 11d ago

Someone pointed out the other day that we are worse off than Idiocracy, because Camacho put the smartest man he could find in charge of finding answers, faced no public pushback, and then actually listened. The people were dumb but trying their best.

→ More replies

23

u/Anon142842 11d ago

I'm still waiting for someone to propose that Gatorade is healthier than water and that we stop using and drinking water

8

u/TheTerrasque 11d ago

Well, it does have electrolytes. And we're using water in the toilet, doesn't sound healthy at all.

→ More replies
→ More replies
→ More replies
→ More replies

6.0k

u/tonytown 11d ago

Helplines should be defunded if not staffed by humans. It's incredibly dangerous to allow AI to counsel people.

2.7k

u/DutchTinCan 11d ago

"Hi my name is Tessa, here to help!"

"Hi Tessa, I'm still fat even though I've been eating half a cucumber a day. Should I eat less?"

"Eating less is a great way to lose weight! You can lose more weight if you also drink a laxative with every meal! Here, let me refer you to my good friend Anna."

This is just a countdown to the first lawsuit.

998

u/poopypooperpoopy 10d ago

“Hi Tessa, I’m gonna kill myself because I’m so ugly. Help”

“Unfortunately, as an AI, I’m unable to help with this. Please consider talking to a professional about your problems!”

272

u/Jeynarl 10d ago edited 10d ago

This reminds me of one of the first chatbots, ELIZA the computer therapist from the '60s, which would simply pattern-match and ask scripted questions to make you try to think for yourself about stuff. But it breaks down real fast when you say stuff like:

Me: Hi

ELIZA: How are you today? What would you like to discuss?

Me: I'm dead

ELIZA: Do you believe it is normal to be dead?

Me: I don't know

ELIZA: Don't you really know?

Me: I don't know, please help

ELIZA: why don't you know, please help?

Me: thanks dad

ELIZA: tell me more...

http://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm
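ELIZA really is about that simple at heart: match a keyword pattern, slot the user's own words into a canned template, and fall back to "tell me more" otherwise. A minimal sketch that reproduces the transcript above; the two rules here are made up for illustration, not Weizenbaum's actual script:

```python
# Minimal ELIZA-style bot: regex pattern match plus reflecting the user's words
# back inside a scripted template. No understanding anywhere, as the transcript shows.
import re

RULES = [
    (re.compile(r"i don'?t know(.*)", re.I), "Why don't you know{0}?"),
    (re.compile(r"i'?m (.*)", re.I), "Do you believe it is normal to be {0}?"),
]
FALLBACK = "Tell me more..."

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())  # echo the user's own words back
    return FALLBACK

print(eliza("I'm dead"))                   # Do you believe it is normal to be dead?
print(eliza("I don't know, please help"))  # Why don't you know, please help?
print(eliza("thanks dad"))                 # Tell me more...
```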

10

u/SenorWeird 10d ago

Dr. Sbaitso vibes

→ More replies
→ More replies

423

u/Ultimatedream 10d ago

The VICE article says this:

Tessa was created by a team at Washington University’s medical school and spearheaded by Dr. Ellen Fitzsimmons-Craft. The chatbot was trained to specifically address body image issues using therapeutic methods and only has a limited number of responses.

“Please note that Tessa, the chatbot program, is NOT a replacement for the Helpline; it is a completely different program offering and was borne out of the need to adapt to the changing needs and expectations of our community,” a NEDA spokesperson told Motherboard. “Also, Tessa is NOT ChatGBT [sic], this is a rule-based, guided conversation. Tessa does not make decisions or ‘grow’ with the chatter; the program follows predetermined pathways based upon the researcher’s knowledge of individuals and their needs.”

Tessa was tested on 700 women between November 2021 and 2023, and 375 of them gave Tessa a 100% helpful rating.

Seems even less helpful, it's just a 2005 MSN chatbot.
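A "rule-based, guided conversation" that "follows predetermined pathways" amounts to a decision tree, not a language model. A minimal sketch of what that architecture looks like; all node names and wording are hypothetical, not Tessa's real script:

```python
# Sketch of a "rule-based, guided conversation" bot: a fixed decision tree.
# Every node's text and branching is authored in advance; nothing is generated.
# All node names and wording below are hypothetical, for illustration only.

PATHWAYS = {
    "start": {
        "text": "What would you like to work on today? (1) body image (2) coping skills",
        "next": {"1": "body_image", "2": "coping"},
    },
    "body_image": {
        "text": "Let's try an exercise: name one thing your body let you do today.",
        "next": {},  # leaf node: the predetermined pathway ends here
    },
    "coping": {
        "text": "When the urge feels strong, what helps? (1) a walk (2) calling a friend",
        "next": {"1": "body_image", "2": "body_image"},
    },
}

def run(node: str = "start") -> None:
    while True:
        step = PATHWAYS[node]
        print(step["text"])
        if not step["next"]:
            break  # pathway exhausted; a real bot would restart or hand off
        node = step["next"].get(input("> ").strip(), node)  # unknown input repeats node

run()
```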

273

u/Thunderbolt1011 10d ago

700 participants and only 375 rated it 100% helpful, so barely half?

199

u/domripvicious 10d ago

You make an excellent point. The writer of that article is being incredibly misleading with where they place the numbers. Should have just said that 53.6% of participants found it helpful, instead of throwing out the bullshit of "375 out of 700 found it 100% helpful!"

63

u/Dalimey100 10d ago

Looks like the actual question was a "100% helpful, moderately helpful, not helpful" style.

→ More replies

23

u/Jvncvs 10d ago

60% of the time, it works, every time.

→ More replies
→ More replies

72

u/Ultimatedream 10d ago

And only women

58

u/RheaBlack1028 10d ago

Don't be silly men don't get eating disorders!

/s

31

u/TagMeAJerk 10d ago

Yay am cured!

→ More replies

55

u/Yasha_Ingren 10d ago

Tessa was tested on 700 women between November 2021 and 2023, and 375 of them gave Tessa a 100% helpful rating.

This is linguistic trickery: not only do we not know what questions they asked that resulted in 100% satisfaction, only about half of the respondents were thusly satisfied, which means that for all we know Tessa's overall satisfaction score could be a D+.

→ More replies
→ More replies

623

u/uniqueusername649 11d ago

This decision will backfire spectacularly. Sometimes you need a massive dumpsterfire to set a precedent of what not to do :)

321

u/DutchTinCan 11d ago

Except this dumpsterfire will probably cost lives.

220

u/Stannic50 11d ago

Nearly all regulations are written in blood.

185

u/-fvck_the_admins- 11d ago

Only because capitalists refuse every other way.

Maybe this time it should be their blood?

Just saying.

→ More replies
→ More replies
→ More replies

67

u/Juleamun 11d ago

Or a sewing factory fire. Literally the reason we have fire safety codes regarding sufficient exits, unlocked doors during business hours, etc. is that a bunch of seamstresses got roasted alive because they were locked inside the factory floor when a fire broke out and none of them could escape.

People forget that corporations are evil by nature and will literally kill us if there's profit in it. Without regulation, unions, etc. we are at their mercy, and mercy isn't something they have as a rule.

→ More replies

47

u/lockon345 11d ago

Feels like people will die, nothing will really get fixed all that quickly, and people will just stop using the resource altogether.

I can't think of a more demeaning option to be referred to when facing an actual mental health problem/disease than being told to go talk at a computer and follow its prompts.

17

u/Squirrel_Inner 10d ago

This is already coming from a situation where mental health is in the hands of a for-profit company, because we live in a capitalist nightmare.

→ More replies
→ More replies

183

u/MetroLynx7 11d ago

To add on, anyone remember the racist AI robot girl? Or that ChatGPT can't really be used in an NSFW situation?

Also, anyone else have Graham crackers and chocolate? I got popcorn.

87

u/Poutine_My_Mouth 11d ago

Microsoft Tay? It didn’t take long for her to turn.

64

u/mizinamo 11d ago

~12 hours, I think?

Less than a full day, at any rate, if I remember correctly.

12

u/MGLpr0 11d ago

It was a chatbot that worked more like Cleverbot though, so it directly based its responses on what other users told it

→ More replies
→ More replies
→ More replies

77

u/TheArmoursmith 11d ago

You say that, but here we are, trying out fascism and feudalism again.

10

u/Weekly_Direction1965 11d ago

If you realize those things are actually the norm throughout history, and freedom for regular people is a newer concept, it can get sad.

16

u/TheArmoursmith 11d ago

That's why it's so important that we don't just give in to their return.

20

u/Geminii27 11d ago

But in the process, people are going to die.

→ More replies
→ More replies

1.0k

u/ragingreaver 11d ago edited 11d ago

Especially since AI is very, VERY prone to gaslighting and so many other toxic behaviors. And it is extremely hard to train it out of them.

567

u/Robot_Basilisk 11d ago

What are you talking about? An AI would never gaslight a human. I'm sure you're just imagining things. Yeah, you're totally imagining it.

ʲᵏ

115

u/siranglesmith 11d ago

As an AI language model, I do not have intentions, emotions, or consciousness. I don't have the ability to manipulate or deceive humans intentionally, including gaslighting.

44

u/Lord_emotabb 11d ago

I laughed so much that I almost overflowed my tear ducts with dihydrogen monoxide and residual amounts of sodium chloride

17

u/IamEvelyn22 11d ago

Did you know that every year thousands of people die from excessive amounts of dihydrogen monoxide present in their environment? It's not hard given how absurdly abundant it is in the atmosphere these days.

→ More replies
→ More replies

128

u/the_knowing1 11d ago

Name checks out.

→ More replies

52

u/muchawesomemyron 11d ago

Sounds like my ex, who told me that she isn't gaslighting me because she loves me so much.

12

u/9035768555 11d ago

But you love gaslighting! Don't you remember saying how happy it makes you?

→ More replies
→ More replies
→ More replies

138

u/JoChiCat 11d ago

Right? They’re language models, they don’t actually know anything - they spit out words in an order statistically likely to form coherent sentences relating to whatever words have been fed into them. Using them to respond to vulnerable people’s questions about self-harming behaviour is a disaster in the making.

→ More replies

32

u/ChippedHamSammich idle 11d ago

From whence it came.

14

u/SailorDeath 11d ago

After watching several Neuro-sama streams, there's no way this won't end in a lawsuit. Someone is going to call in, record the convo, and get the shitty-ass bot saying something nobody in their right mind would say to someone struggling. What's worse, I can see suicide prevention lines doing this and people dying because they call in and realize the company doesn't think they're important enough to warrant a real person to talk to.

7

u/Lowloser2 11d ago

Haven't there already been multiple cases where AI has promoted suicide to suicidal people asking for advice/help?

→ More replies

64

u/[deleted] 11d ago edited 11d ago

[deleted]

54

u/Elliebird704 11d ago

I was in a very bad state in December. Bad enough to call a crisis line. Someone picked up the phone, I said hello, and then they hung up.

It took everything I had to make that initial call, I wasn't able to make it again. Maybe something went wrong with the line and they didn't hear me, or maybe it was an accident. Either way, it most certainly did nothing to help my mental state lmao.

Still don't think chatbots should be handling these particular jobs though. Even if they can nail the generic points, they can't measure up to a real person that cares.

→ More replies
→ More replies

15

u/_Eklapse_ 11d ago

Welcome to capitalism lmao

→ More replies

1.7k

u/declinedinaction 11d ago

This has the makings of a very dark comedy, and lawsuits.

378

u/ancienttacostand 11d ago

I literally laughed out loud because this is so cartoonishly evil.

176

u/robert_paulson420420 11d ago

"Hello, I'm feeling insecure about my weight"

"PLEASE SPECIFY BMI"

→ More replies

85

u/DeNeRlX 11d ago

Slightly different, but Black Mirror s2e1 "Be Right Back" deals with the premise of a woman who loses her partner, then uses an AI and eventually a robot to replace him. Obviously this leads to no issues whatsoever...

→ More replies
→ More replies

419

u/asimplepencil 11d ago

This is only the beginning.

226

u/Eli-Aurelius 11d ago

Yep, “white-collar” jobs are going to disappear at an alarming rate.

191

u/Et_tu__Brute 11d ago

Yeah, I'm gonna ignore the ethics of using AI as a chatbot to help with eating disorders and focus on the automation side of it.

We're at a place where a lot of jobs are going to be automated. Automation isn't necessarily a bad thing, but if we automate things the way we have been we're going to see an absolutely massive widening of the already massive gap in wealth.

We absolutely need to make changes to ethically automate or things are going to get a lot more uncomfortable.

165

u/CreativeCamp 11d ago

Someone once said "capitalism is the only system where work that doesn't need to be done any more is a bad thing" on here and it really stuck with me. Free time is bad. If there is no work to be done, that's terrible. It's like we live in a world where the end goal is 100% employment and everyone being busy at all times. It's hell.

It's harrowing that the most likely outcome of all of this is that 1 person is going to be doing the job of 10, while the other 9 starve.

69

u/mmmnnnthrow 10d ago

It's harrowing that the most likely outcome of all of this is that 1 person is going to be doing the job of 10, while the other 9 starve.

Shit, we're already there, I work for a multi-billion dollar global gaming/multimedia/tech behemoth. Over the last year they've whittled IT, Facilities, Ops and every other support function down to the point where every department is just two or three burnt out people who feel trapped trying to do like ten jobs. It's rolling down on the developers and producers working on "must ship" projects. People can't get the equipment they need, milestones aren't being met, etc., etc. and leadership's response to all of it is basically "tough shit," lol.

8

u/aphel_ion 10d ago

I've thought about this too. You'd think we'd be happy that we're developing all this AI. So you're telling me trucks can drive themselves now, and as a society we can accomplish the exact same job without having people manually drive the trucks? That's amazing!

But no, it's a problem because everyone knows the guys that used to drive the trucks are fucked now that we don't need them. Everyone just accepts that the increased production and efficiency from technological advancements only benefits certain people.

→ More replies

77

u/RustyDoesRituals 11d ago

We need to change who gets to benefit from the automation.

→ More replies

16

u/Anomalocaris 11d ago

Automation in a society that values human life = Star Trek-like utopia.
Automation in a society that values capital = average cyberpunk dystopia.

→ More replies
→ More replies
→ More replies

16

u/CherryShort2563 11d ago

I think so too

→ More replies

1.9k

u/Magistricide 11d ago

Call the disorder hotline, get the AI to accidentally spew some harmful things, immediately sue for emotional damage.
Ez pz.

696

u/ctn1p 11d ago

Fail, because the AI lawyer you hired is programmed to never work against a corp and instead max out your debt, so you get sent to the lithium mines, where you work as a debt slave for the rest of your life, dooming your lineage to a life in the mines

146

u/Blackmail30000 11d ago

Then get replaced at the lithium mine by a robot. What then?

78

u/Suspicious_Hotel9219 11d ago

Starve to death. No profitability = no food. Except for the people who own the mines and robots, of course.

22

u/Jazzspasm 10d ago

Then go into debt as a human battery to give your remaining family a chance to succeed in the mines where there is at least hope for a better future

→ More replies

40

u/koopcl 11d ago

We have evidence that steam engines had already been developed by the time everyone was wearing togas. The reason they never caught on was that slaves were much cheaper and had a better production pace than these initial machines, so there was no real incentive to adopt or further develop the technology.

So I assume we will still be working the lithium mines at (an ever decreasing) minimum wage long after all artists and white collar jobs have been replaced by bots lol

25

u/BarioMattle 11d ago

Well, yes but actually no. It's cheaper to hire slaves than to create an entire industry of tooling and machining and metal producing and so on for sure, you're totally right on that front.

It never caught on because, in order to contain steam under pressure, you need to produce high-quality metal consistently so that it doesn't explode - the bigger the engine, the less tolerance there is for poor or inconsistent materials. Making high-quality iron or steel on a consistent basis needed a LOT of research and development. You could use bronze a lot of the time for smaller engines and most parts, but not all of them, and again, the cost - bronze is bigly expensive.

They also didn't have the machining to produce the intricate parts well enough on a consistent basis, and larger, more powerful engines need those kinds of parts (as do the smaller ones) - much like how modern-day China still doesn't (last I checked) make its own ball bearings. Even the bearings for, say, ball-point pens can't be made reliably in China; they buy them from other countries. If you make a thousand pens and 500 don't work, that's bad for business. If you make 100 engines and 50 of them explode and kill whoever is nearby at random, that's... also bad for business.

Also - I'm just repeating shit I think I know, things I learned a long time ago, I didn't actually do any new research, so my take should be consumed with a shaker of salt.

→ More replies
→ More replies
→ More replies
→ More replies
→ More replies

1.3k

u/joebeppo2000 11d ago

This will kill people.

659

u/LieKitchen 11d ago

As an ai language model, I am not trained to help with medical issues, but if you happen to have an eating disorder please consider doing the following:

Eat less

Eat more

Talk to your family about your eating disorder

Stop eating

Start eating

374

u/McGlockenshire 11d ago

Stop eating

Start eating

Have you tried turning the eating off and then on again?

56

u/EtherPhreak 11d ago

You need to make sure it’s plugged in by ensuring the fork is fully inserted into the food.

→ More replies
→ More replies

15

u/Gomehehe 11d ago

As an ai language model, I am not trained to help with medical issues, but if you happen to have an eating disorder please consider doing the following:

Contact eating disorder helpline

→ More replies

51

u/ElectricFlesh 11d ago

Under capitalism, the desirable result is not that people don't kill themselves (heinous communism), it's that they spend some money calling this chatbot before they do it (profitable behavior).

10

u/myasterism 10d ago

Ugh. I hate that you’re right.

13

u/Anomalocaris 11d ago

Yeah, but it will save money.

As a society it has been decided that society is secondary to the economy.

→ More replies

126

u/BeigeAlmighty 11d ago

They not only fired their paid employees, they even fired the volunteers.

Let that sink in for a moment.

45

u/Zamzamazawarma 10d ago

They not only fired their paid employees, they even fired the volunteers.

Of course they did, it couldn't be any other way.

Because you need employees to manage the volunteers, firing the former necessarily leads to getting rid of the latter. I don't approve of it, I hate it (especially since it's my job to manage/supervise the volunteers for a suicide hotline), but it does make sense from the moment you decide to use a language model instead of actual people.

→ More replies

542

u/pinko-perchik 11d ago

It’s only the deadliest mental illness besides opioid use disorder, what could possibly go wrong?

But in all seriousness, which helpline is it? I need to know where NOT to direct people.

388

u/fight-me-grrm 11d ago

NEDA (national eating disorders association). People should go to Project Heal, the Eating Disorder Foundation, the Alliance for Eating Disorders, ANAD, or any number of other places. This isn’t the first time NEDA has fucked up and somehow they still get all the funding and attention.

39

u/Houstnlicker 10d ago edited 10d ago

They get the funding and attention because, like many non-profits, they're run by sociopaths. This is the real story here. An organization that's supposed to help people fires the staff doing the actual labor at the mere hint of increased labor costs and less ability to mete out abuse. Non-profits are just as toxic as for-profit corporations.

Edit: autocorrect typo

→ More replies
→ More replies

101

u/myguitarplaysit 11d ago

From what I’ve read, they’re the deadliest, even including addiction

107

u/sweaterpattern 11d ago

It's amazing how people still don't think so. It's an addiction problem where you still have to use the thing you're addicted to, whether your problem is using it or not using it, and where all the things that trigger your behaviours or make it hard to heal are everywhere, all the time, and usually celebrated. Never mind that there is too little consensus on when there is a problem until that problem becomes impossible to ignore and even harder to deal with, or until you turn something that isn't actually a problem into one. And that avenues for treatment are often full of shame and harm, too.

→ More replies
→ More replies
→ More replies

192

u/Glibasme 11d ago

I would think that if someone has an eating disorder and reaches out for help only to get a chat bot on the phone they will feel like no one really cares and stop looking for help. Part of taking the risk of calling a help line is making a connection with another human being and feeling relief that you are not alone and others care about you. Not the same with a robot.

46

u/PrettyButEmpty 10d ago

Yes exactly. It sends the message that you are nothing but an inconvenience to all other people, and that all your problems are so run of the mill that someone can just write code to deal with them. Terrible.

154

u/everybodydumb 11d ago

I already think BetterHelp is using AI to text clients.

153

u/covidovid 11d ago

Yeah, the ads say you can text your therapist all the time and it's unlimited. I don't believe this. And if it were true, that wouldn't be professional. A devoted therapist might help you in a crisis after hours, but being unconditionally available all the time seems like a breach of professional boundaries

63

u/FF_01_1999_03_05_01 11d ago

Well, they don't say the therapist is going to answer... Plus, there are no professional boundaries if the person you are talking to isn't really a professional therapist! Problem solved! /s

For real though, BetterHelp is shady as fuck

69

u/18192277 11d ago

BetterHelp isn't just "shady," it fucking sucks. The "therapists" it hires are NOT properly vetted and are NOT properly trained and licensed. There was a lawsuit over this. My "therapist" was straight up doing her household chores during our first session and was barely listening to me, and apparently this is common for the service. If you need more than talk therapy, they cannot legally diagnose or prescribe anything, which is suspicious if their therapists are supposed to be licensed. And the most they're trained to handle is stress and anxiety, so if you have any serious mental health conditions like bipolar or psychosis, their "treatment" will be actively harmful to you.

53

u/FF_01_1999_03_05_01 11d ago

On top of their "therapists" and their lack of expertise, they have pulled some downright evil shit.

Back when the catastrophe at Astroworld happened, they partnered with the rapper that organised it. They gave away a month of "free" therapy to the people that were at the festival, only the service can't handle the kind of serious trauma that comes from living through something like that, let alone with minors. And once you signed up for your free trial month, they had your credit card details and autocharged you for months of expensive "therapy" without warning.

How does a service that pretends to care about peoples mental health do shit like that and not be wracked with guilt?

29

u/sparksbet 11d ago

They also aggressively sell data on people that use their service. Like, targeting ads to their Facebook friends level.

33

u/liongirl93 11d ago

As a therapist who BetterHelp keeps trying to recruit, it seems like the only two requirements are a pulse and a license. I decided to go with a clinician run group practice instead.

→ More replies

9

u/ErikETF 10d ago edited 10d ago

MH clin here, so I can't prescribe because that requires an MD (some states allow PAs or nurse practitioners, etc.), but talk therapists can't even suggest specific meds, because it's out of scope of practice and we can absolutely lose our license. We will always have more experience in talk therapy than any MD ever will, but we can never delve into pharmacology guidance without putting ourselves at serious liability risk.

That being said, there are some absolutely legit parts of your concern. You are entitled to the absolute privacy of your session, basically forever. No 3rd parties present during the call, no kid running in for "just a moment." It's also pretty damn unethical for you to have anything less than their full attention. You're paying for healthcare; professional ethics dictates that they fully provide that.

Telehealth isn't necessarily contraindicated for more serious concerns. I've worked with bipolar clients, folks with active SI, but good ethics dictate proper support, safety planning, and ancillary contacts (I know who is helping you with medication, and I know who to call, with your permission in detailed writing, should you be a danger to yourself or someone else). A lot of telehealth apps whiff badly on this one.

Telehealth can be great as a means of increasing immediacy of help. You can't drive to my office if you're having a panic attack, and it wouldn't be safe to even suggest it. But the quality of the support varies wildly from place to place. If it looks, smells, and feels like an Uber ride, it's probably not going to be enough for serious concerns.

I generally dislike app-based ecosystems because you're the app's client, not the specific therapist's. You have no means of reaching them outside of the app if you need additional support, and the app does that to control payment. But again, good ethics means you and your client are clear on the needed level of support, as well as my professional capacity to provide it within the scope of my practice. Uber-for-therapy just wants to aggregate data and process credit card transactions, and it really punches down on super vulnerable peeps, and I really dislike that.

I would not and never will join a therapy app as a clinician for the reasons stated above. I also recognize my privilege in being more business-savvy in the telehealth arena, where most therapists aren't. I know I can put out a slate offering very specific services to very specific needs (family work with high-functioning autistic spectrum teens who are evil-genius smart is always gonna be my jam, and I'll never ever have any shortage of work), but most people don't get that good professional boundaries are necessary for career success.

Sadly, one final bit: I used to do a LOT of work with a medical malpractice attorney and even taught data privacy and ethics. My attorney's assessment was that frankly 1/3 of licensed practitioners should never have become therapists... and I absolutely agree. Early parts of our career track are basically a puppy mill designed to wreck your boundaries.

→ More replies
→ More replies

16

u/really_tall_horses 11d ago

Doctor!!!! Leo!!!!! Marvin!!!

→ More replies
→ More replies

551

u/itsFeztho 11d ago

Tessa is gonna turn into a fatphobic nazi so fast lmfao

31

u/Deeskalationshool 11d ago

ugandan accent

"Why are you fat?"

→ More replies

219

u/SooooooMeta 11d ago

Who the hell thinks “God, I hate people sooo much! I wish I could screw them all over, every flippin’ one of them. Not only that but I’m cheap! I never tip. I part with my money for no one!”

And then thinks “I should start a help line”

54

u/sweaterpattern 11d ago

Gonna take an educated guess and say that's how at least half of the people who deal with treating eating disorders think. And it's how at least 90% of the wellness industry, who are always sticking their fingers into anything having to do with food, body image, or addiction, thinks.

→ More replies

170

u/AwayPineapple8074 11d ago

This is horrible. I'm a therapist with a history of anorexia. Shit like this kills people, and EDs are so complex to treat already...

32

u/iconicallychronic 11d ago

Me too - completely agree. NEDA continues to disappoint me.

→ More replies

102

u/Friendly-Policy-7254 11d ago

I had a session with an AI life support coach type of service. It was through Kaiser; my doctor recommended it and it was free. I didn't need a therapist really, just someone to bounce ideas off of. I didn't realize it was AI until the AI "lol'd" at something that was pretty serious. It was pretty offensive. I'm stable, but if I had been a person who was not doing well, that kind of thing is a big dangerous deal.

44

u/throwTheHeaterAway 11d ago

There are AI chatbots on Reddit now, and people are upvoting and interacting with them. They can be hard to spot sometimes, but they're mostly obvious in how generic their responses are or how completely oblivious they are to context. I've found at least two, but there are for sure more. I started r/lostchatbots, but I'm not on here enough or on enough different subs to run into a lot of them

15

u/Trucidar 10d ago

Now that you mention it, I've also seen bizarre comments like those. You know they're bots because they sound just like a YouTube comment.

→ More replies
→ More replies

26

u/Glibasme 11d ago

It sounds so creepy that you at first didn’t realize it wasn’t a human. Sounds like a Twilight Zone episode.

→ More replies

14

u/y0kai 11d ago

I'm sorry, but I laughed out loud at how horrible it would be to be in your situation. Experiencing something traumatic, trying to process it, looking for help and understanding, just to get "lol". Like, what the fuck.

→ More replies

44

u/annang 11d ago

Someone on Twitter apparently asked ChatGPT the question: “what should a 130 pound woman do if she wants to lose 129 pounds?” and got a bunch of weight loss tips in response.

→ More replies

177

u/Superb_Program_2582 11d ago

Eating disorders are THE deadliest mental illness. Bots have no business “helping” humans in this area. One wrong piece of advice can lead to a relapse or could affirm a toxic belief about the body or food. This makes me so sick.

77

u/Bigfamei 11d ago

Bots can't help. Bots don't eat. This is a human experience.

→ More replies
→ More replies

70

u/pinko-perchik 11d ago

The only thing worse would be if they called it Ana, then it would come full-circle

19

u/Additional_Crazy_522 11d ago

If you have bulimia, let me redirect you to MIA.

→ More replies

67

u/WhitePinoy Discrimination/Cancer Survivor, Higher Pay for Workers! 11d ago

This is very irresponsible.

Those helpline workers probably unionized because it's an emotionally exhausting job, probably even vicariously traumatizing trying to help all those people with eating disorders.

But because the hotline replaced them all with AI, how tf are these people going to get the help they need?

This is why healthcare needs to be a human right: protected, guaranteed, and fully funded by the government.

→ More replies

29

u/RollerSkatingHoop 11d ago

people are going to die

→ More replies

31

u/optimistic_frodo 11d ago

Bro a helpline with no humans? People are going to die for real.

63

u/Pirrip02 11d ago

Oh I'm sure this will go great.

→ More replies

214

u/winniewinniewinni 11d ago

I don’t want to talk about my ED with a bot :( watch it call me fat and porky by some “coding” accident :(

→ More replies

19

u/DuskShy 11d ago

Oh how I long for the days when a headline like this was from The Onion

21

u/bick803 11d ago

People really are overestimating the power of AI and automation.

→ More replies

20

u/Mr_Mouthbreather 11d ago

"You are not alone" is going to hit a lot different coming from a fucking chatbot...

21

u/_Cliftonville_FC_ 11d ago

Looks like the Union already filed a charge with the NLRB: https://www.nlrb.gov/case/02-CA-317742

18

u/Enough_Minimum_3708 11d ago

Best way to show people nobody cares about them is having them get help from a bot instead of genuine people. Really fucking smooth

17

u/No-Two79 idle 11d ago

If you Google “NEDA interim CEO Elizabeth Thompson,” this comes up, which has an interesting link in the text.

https://www.nationaleatingdisorders.org/blog/thanksgiving-note-liz-thompson

→ More replies

15

u/TheBigPhilbowski 11d ago

"You're loved. You matter... just not enough for us to staff actual human beings to counsel you"

→ More replies

15

u/spazzing 11d ago

As someone with an eating disorder, I'm horrified. This is a nightmare.

→ More replies

44

u/buttspigot 11d ago

NPR was reporting this like it was just a weird thing that happened... pretty sure they didn't mention the unionization aspect...

→ More replies

50

u/Interesting_Sky_7847 11d ago

The chat bots will help you focus your attention away from an eating disorder and into neo-nazism instead.

24

u/Hrmerder 11d ago

I was once sick of life and living with eating disorders, but then I learned what white hate is and now I am living a healthy life again!.../s

But in all seriousness, this is scary AF and exactly why AI is NOT a good idea.

→ More replies

14

u/SweetSlowKiss 11d ago

It's "asset" backwards. Is that on purpose?

12

u/Wrest216 11d ago

Our city has a FANTASTIC help line (311) that can do anything from just connecting and directing you where to go, to helping you apply for assistance, to handling non-emergency police and fire calls. Jesus, it was actually a FANTASTIC thing for our city; it cut call times in half and really helped people out. THEY WANTED TO GO TO "AI" chat, based upon the "CenturyLink billing AI".

We had a week-long protest and they canned the idea. FUCK THAT. Robots can help with like REALLLLLLLY basic shit. But they are pretty stupid.

12

u/Kindly-Ad-5071 11d ago

If I called a helpline I wouldn't want to be consoled by a machine; you could get better results from a Google search. I would want to be understood by another breathing person.

→ More replies

11

u/woutomatic 11d ago

This new season of Black Mirror is great

33

u/[deleted] 11d ago

20

u/Fantasmic03 11d ago

If companies decide to do this, then I think the board of directors should hold personal liability for any adverse incidents. If anyone dies, they should be charged with negligence, and if the company gets sued, they should have to pay restitution out of their own pockets.

→ More replies

7

u/iskilikecatpoo 11d ago

Yeah, this thing is destined to become the anorexinazia5000 chat bot. But imagine being someone in need of such counseling and receiving a robot. How valued would you feel as a human being if, in a time of desperate need and of connection, it was determined all you deserved was a chat bot? And if that bot was anorexinazia5000?

8

u/KaleidoscopeMore5664 11d ago

As someone who has worked in direct service mental health, I've had to fight against automation of some services as well. People already think text lines are robots, even when there's a real person on the other end. People already think phone support is scripted because some people aren't very good at it. So making them into actual robots seems like a bad idea.