Hacker News | blueside's comments

More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.

I got a chuckle the last time I used Claude's /insights command. The number one thing in the report was, "User frequently stops processing to provide corrections." ;-)

I just tell a new instance and a different provider the core idea and see if they like it too

Trouble is, an LLM can test for something being logical in isolation, or coherent unto itself. It’s much weaker at anticipating what will be meaningful to other people, which is usually what people are actually looking for.

I started to convert a lot of my content to AV1 until I realized that my Nvidia Shield devices won't play AV1. My $30 Fire Stick will play them, but I really do prefer the Shield. I guess I'll wait it out and hope for a new Shield (Nvidia hasn't released one since 2019), but I'm not going to hold my breath.
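For anyone doing similar conversions, a typical command looks something like this (just a sketch, assuming an ffmpeg build with SVT-AV1 support; the file names and quality settings are placeholders to adjust for your library):

```shell
# Re-encode video to AV1 with SVT-AV1, copying the audio track unchanged.
# -crf 35 is a middle-of-the-road quality/size tradeoff (lower = higher quality);
# -preset 6 trades encode speed against compression efficiency.
ffmpeg -i input.mp4 -c:v libsvtav1 -crf 35 -preset 6 -c:a copy output.mkv
```

Whether the result plays back then depends entirely on the client: hardware AV1 decode support, or a software decoder like the one VLC ships.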


> my Nvidia Shield devices won't play AV1

Put VLC on them. See if it works for your AV1 videos.


The users review the output


If this guy learns enough, who knows, he may have a future in programming!


As we all know here, if the title of this post were about Codex on the web, the top comment would have been about using Claude instead.

YMMV, but this definitely doesn't track with everything I've been seeing and hearing, which is that Codex is inferior to Claude on almost every measure.


fwiw I'm happy to see this - been trying to tackle a hairy problem (rendering bugs) and both models fail, but:

1. Codex takes longer to fail and with less helpful feedback, but tends to at least not produce as many compiler errors

2. Claude fails faster and with more interesting back-and-forth, though tends to fail a bit harder

Neither of them are fixing the problems I want them to fix, so I prefer the faster iteration and back-and-forth so I can guide it better

So it's a bit surprising to me when so many people are picking a "clear winner" that I prefer less atm


LLMs have continually taught me that we have vastly overestimated human intelligence


Perhaps we're overestimating human intelligence and underestimating animal intelligence. Also funny that current LLMs are incapable of continual learning themselves.


>LLMs have continually taught me that we have vastly overestimated human intelligence

LLMs have continually taught me that we have vastly underestimated human intelligence, fixed that for you


only 2 people (engineer and conductor) for an entire train that is over a mile long seems about right to me though


Agreed, I couldn't believe they skipped the RB19


F1 has never been the same for me since they got rid of the engine sound, I miss it so much


There is no question that Tesla helped accelerate EV adoption in China but BYD's significant battery R&D was already underway and would have happened with or without Elon


I rode in an EV BYD taxi in 2016. However, I don’t think EVs would have caught on as fast in China without the Model S setting an example that EVs were viable as full ICE car replacements (they definitely were already there before the Model 3 validated mass market adoption).


You have to admit though, there's a difference between saying "never" or "as fast".


I think the comment I replied to was edited.

