
What would be the cost for OpenAI to just stop these kinds of very long conversations that aren't about debugging or some other form of actual long problem solving? It seems from the reports many people are being affected, some very negatively, and many cases likely go unreported. I don't understand why they don't show a warning, or just open a new chat thread, when a discussion gets too long or when it can be detected that it's not fiction and is likely veering into dangerous territory.

I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.



> It seems from the reports many people are being affected

I think the rapid scale and growth of ChatGPT are breaking a lot of mental models about how common these occurrences are.

ChatGPT's weekly active user count is twice as large as the population of the United States. More people use ChatGPT than Reddit. The number of people using ChatGPT on a weekly basis is so massive that it's hard to even begin to reason about how common these occurrences really are. When they do happen, they get amplified and spread far and wide.

The uses of ChatGPT and LLMs are very diverse. A shutdown of long conversations that don't fit some pre-defined idea of problem solving is just not going to happen.


Ah, the old "we're too big to be able to not do evil things! we've scaled too much so now we can't moderate! Oh well, sucks to not be rich."


They're not claiming they don't moderate, though. Where are you getting that? A common complaint about ChatGPT and even their open weights models is that they're too censored.


Anthropic at least used to stop conversations cold when they reached the end of the context window, so it's entirely possible from a technical standpoint. That OpenAI chooses not to, and prefers to let the user continue on, increasing engagement, puts it on them.


Incidence of harm is harm divided by exposed population. It is likely that Facebook is orders of magnitude more harmful than ChatGPT, and that bathtubs and bikes are more dangerous than long LLM conversations.

That doesn't mean nothing more should be done, but we should retain perspective.

Maybe they should try to detect not long conversations but dangerous ones, by spot-checking with an LLM to flag problems for human review, plus a family notification program.

E.g., Bob is a nut. We can find this out by having an LLM, one not pre-prompted with Bob's craziness, examine some of the chats of the top users by tokens consumed (in chat, not the API) and flag them to a human, who cuts Bob off or, better, shunts him to a version designed to shut down his particular brand of crazy, e.g. one pre-prompted to tell him it's unhealthy. (A rough sketch of such a pipeline is below.)

This initial flag for review could also come from family or friends; if OpenAI concurs, it's handled as above.

Likewise we could target posters of conspiracy theories for review and containment.
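
Concretely, that spot-check could be a small batch job. Here's a minimal sketch in Python: chat_store and review_queue are hypothetical internal interfaces, the model name is just a plausible cheap choice, and only the judge call uses the real OpenAI SDK.

    # Minimal sketch of the spot-check idea. chat_store and review_queue
    # are hypothetical internal interfaces, not anything OpenAI actually has.
    from openai import OpenAI

    client = OpenAI()

    JUDGE_PROMPT = (
        "You are a trust-and-safety reviewer. Read this chat transcript "
        "and answer YES or NO: is the assistant reinforcing delusional, "
        "paranoid, or otherwise dangerous beliefs in the user?"
    )

    def looks_dangerous(transcript: str) -> bool:
        # A fresh model, not pre-prompted by the user's own context.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: a cheap model is enough for triage
            messages=[
                {"role": "system", "content": JUDGE_PROMPT},
                {"role": "user", "content": transcript[:20000]},  # cap length
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    def spot_check_heavy_users(chat_store, review_queue, top_n=100):
        # Rank chat (not API) users by tokens consumed, sample a few
        # transcripts each, and flag hits for a human to decide on.
        for user in chat_store.top_users_by_tokens(top_n):
            for transcript in chat_store.sample_transcripts(user, k=3):
                if looks_dangerous(transcript):
                    review_queue.flag(user, transcript)
                    break

The point of the design is that the LLM only triages; the human reviewer makes the actual cutoff or containment decision.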


> A shutdown of long conversations that don't fit some pre-defined idea of problem solving is just not going to happen.

I am calling for some care to go into the product to try to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's very likely the user is becoming delusional or may engage in dangerous behavior.

How will we handle AGI if we ever create it, if we can't protect our society from these basic LLM problems?


>it's very likely the user is becoming delusional or may engage in dangerous behavior.

Talking to AI might be the very thing that keeps those tendencies below the threshold of dangerous. Simply flagging long conversations would not be a way to deal with these problems, but AI learning how to talk to such users may be.


In June 2015, Sam Altman told a tech conference, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” [1]

Do you really think Sam or any of the other sociopaths running these AI companies care whether their product is causing harm to people? I surely do not.

[1] https://siepr.stanford.edu/news/what-point-do-we-decide-ais-...


It seems like a cheaper model could be asked to review transcripts with something like: “does this transcript seem at all like a wacky conspiracy theory that the LLM is encouraging in the user?”

In this case, it would have been easily detected. Depending on the prompt used, there would be more or fewer false positives/negatives, but low-hanging fruit such as this tragic incident should be avoidable.
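
Tuning that tradeoff could be as simple as scoring candidate prompts against a hand-labeled sample. A sketch, where classify is assumed to wrap the cheap-model call above and labeled_set is hypothetical data:

    # Sketch: measure false positives/negatives of a candidate review prompt.
    # classify(transcript) -> bool wraps a cheap-model call;
    # labeled_set is a hypothetical hand-labeled set of (transcript, is_harmful).
    def evaluate_prompt(classify, labeled_set):
        tp = fp = fn = tn = 0
        for transcript, is_harmful in labeled_set:
            flagged = classify(transcript)
            if flagged and is_harmful:
                tp += 1
            elif flagged:
                fp += 1
            elif is_harmful:
                fn += 1
            else:
                tn += 1
        return {
            "false_positive_rate": fp / max(fp + tn, 1),
            "false_negative_rate": fn / max(fn + tp, 1),
            "flag_rate": (tp + fp) / max(tp + fp + fn + tn, 1),
        }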


I've had OpenAI's models do the weirdest things in conversations about aerodynamics and very low-level device drivers, so I don't think you will be able to reach a solution just by limiting the subjects. It is incredible how strongly it tries to position itself as a thinking entity that is above its users, in the sense that it is handing out compliments all the time. Some people are more susceptible than others.


> I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.

Because the mission is a lie and the goal is profit. alwayshasbeen.jpg


Those remediations would pretty clearly negatively impact revenue. And the team gets paid a lot to do their current work as-is.

The way to get the team organized against something is to threaten their stock valuation (like when the workers organized against Altman's ousting). I don't see how cutting off users is going to do anything but provoke the opposite reaction from the workers to the one you want.


>Those remediations would pretty clearly negatively impact revenue

That might make sense if OpenAI were getting paid per token for these chats, but people who are using ChatGPT as their therapist probably aren't using the consumption-based API. They might have a premium account, but what percentage of premium users do you think are using ChatGPT as their therapist and getting into long-winded chats?


You can ask the same of users consuming toxic content on Facebook. Meta knows the content is harmful and they like it because it drives engagement. They also have policies to protect active scam ads if they are large enough revenue-drivers - doesn't get much more knowingly harmful than that, but it brings in the money. We shouldn't expect these businesses to have the best interests of users in mind especially when it conflicts with revenue opportunities.


It is much harder to blame Meta, because the content is dispersed and they can always say "they decided to consume this/join this group/like this page/watch these videos", while ChatGPT is directly telling a person that their mother is trying to kill them.

Not that the actual effect is any different, but for a jury the second case is much stronger.


OpenAI is a synthetic media production company: they literally produce images, text, video, and audio to engage their users. The fact that people think OpenAI is an intelligence company is a testament to how good their marketing is at convincing people they are more than a synthetic media production company. This is also true of xAI and Grok. Most consumer AI companies are in the business of generating engaging synthetic media to keep their users glued to their interfaces for as long as possible.


The cost would be a very large chunk of OpenAI's business. People aren't using ChatGPT just to solve problems. It is a very popular tool for idle chatter, role playing, entertainment, friendship, therapy, and lots more. And OpenAI isn't financially incentivized to discourage this kind of use.


Looks like this would affect around 4.3% of chats (the "Self-Expression" category from this report[0]). Considering ChatGPT's userbase, that's an extremely large number of people, but less significant than I thought based on all the talk about AI companionship. That being said though, a similar crowd was pretty upset when OpenAI removed 4o, and the backlash was enough for them to bring it back.

[0]: https://www.nber.org/system/files/working_papers/w34255/w342...


> I don't know how this doesn't give pause to the ChatGPT team

a large pile of money

> What would be the cost for OpenAI to just stop these kinds of very long conversations

the aforementioned large pile of money


Just because you do not use a piece of technology or see no use in a particular use-case does not make it useless. If you want your Java code repaired, more power to you, but do not cripple the tool for people like me who use ChatGPT for more introspective work which cannot be expressed in a tweet.

By the way, I would wager that 'long-form' users are actually the users who pay for the service.


> By the way, I would wager that 'long-form' users are actually the users who pay for the service.

I think it may be the case that many of the people who commit suicide or do other dangerous things after encouragement from an AI are actually using the weaker models available in the free versions. Whatever ability there is in AI to protect the user, it must be lower for the cheaper models that are freely available.


I would bet that AI girlfriend is a top-ten use case for LLMs.


It is probably the top use case if you add the AI boyfriend option.

There are a lot of lonely people out there.


And role-playing in general.



