Hacker News

It sort of feels like GPT models could route chats through a thinking/reasoning step that does a quick check for "is this a minor trying to talk to me?", "is this person in need of mental help?", etc.

It's a dangerous precedent to set (when does it end? what qualifies as "necessary"? who pays for the extra processing?), but with stories like this it's worth consideration at the very least.



How much are you willing to bet they've already tried that in an A/B experiment, found that it hurt engagement, and so didn't roll it out further?


Yes, having small models check conversations and prune GPT answers when needed is definitely what should be done; it's much more reliable than one-shot prompts.

But OpenAI is already hemorrhaging money, so they definitely can't afford to run 2 inferences for every answer.
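A minimal sketch of that two-pass setup, to make the idea concrete: a cheap guard check runs on both the user's message and the main model's draft answer, and prunes the draft if either trips. Everything here is made up for illustration (the phrase list, function names, and canned help message); a real guard would be a small classifier model, not keyword matching.

```python
# Stand-in phrase list; a real system would use a small, fast classifier model.
RISK_PHRASES = {"i want to hurt myself", "i'm only 12", "nobody would miss me"}

# Canned fallback shown when the guard trips (illustrative only).
HELP_MESSAGE = "It sounds like you may be going through something serious. Please consider reaching out to someone you trust or a local helpline."

def guard_flags(text: str) -> bool:
    """Cheap guard pass: does this text look like it needs intervention?"""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

def answer(user_message: str, draft_from_main_model: str) -> str:
    """Run the guard on the input and the draft; prune the draft if flagged."""
    if guard_flags(user_message) or guard_flags(draft_from_main_model):
        return HELP_MESSAGE
    return draft_from_main_model

print(answer("what's a good pasta recipe?", "Try cacio e pepe."))
print(answer("nobody would miss me anyway", "Here's a fun fact..."))
```

The point of the structure is that the guard is tiny compared to the main model, so the "second inference" adds a small fraction of the cost rather than doubling it.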



