Hacker News | rarisma's comments

I love AI, think it's super useful, I use Claude daily and follow the industry closely, but I would love to go a day without hearing about it.

No 100b model.

My disappointment is immeasurable and my day is ruined.


Give me a GrapheneOS phone, Moto, and my money will be yours.


Every man and his dog can vibe code.


I swear this happens at least once a year.

Where's my futuristic storage, guys?


If you watch the movie Johnny Mnemonic, they throw around data cubes the size of a stamp. Modern NVMe SSDs weigh around 4 grams per TB, so we've already achieved sci-fi movie parity :D. The only problem is the price.


in your hands :)


I'm a little bit old. When I ordered my first M.2 drive, I had never seen one IRL. I'd assumed about RAM-stick size. Nope! Thumb-sized! The future is amazing! So... give it enough time and eventually the mundane will scratch that itch, I guess?


Reality is tearing at the seams.


Great, I can now combine the potential maliciousness of a script with the potential vulnerabilities of an AI Agent!

Jokes aside, this seems like a really weird thing to leave to agents. I'm sure it's useful, but how exactly is this more secure? A bad actor could just prompt-inject Claude (an issue I'm not sure can ever be fixed with our current model of LLMs).

And surely this is significantly slower than a script: Claude can take 10-20 seconds to check the Node version, if not longer with human approval for each command, while a script could do that in milliseconds.

Sure it could help it work on more environments, but stuff is pretty well standardised and we have containers.
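For contrast, here's a minimal sketch of the kind of version check a plain script handles in milliseconds. The version string is hard-coded as a stand-in; a real installer would substitute `node_version="$(node --version)"`:

```shell
#!/bin/sh
# Stand-in for the output of `node --version`; a real script would run
# node_version="$(node --version)" instead of hard-coding it.
node_version="v20.11.1"

required_major=18
node_major=${node_version#v}      # strip leading "v" -> 20.11.1
node_major=${node_major%%.*}      # keep the major version -> 20

if [ "$node_major" -ge "$required_major" ]; then
  echo "ok: Node v$node_major >= v$required_major"
else
  echo "error: need Node >= v$required_major, found $node_version" >&2
  exit 1
fi
```

Three lines of parameter expansion, no network round-trip, no token budget.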

I think this part in the FAQ wraps it up neatly:

""" What about security? Isn't this just curl | bash with extra steps? This is a fair concern. A few things make install.md different:

    Human-readable by design. Users can review the instructions before execution. Unlike obfuscated scripts, the intent is clear.

    Step-by-step approval. LLMs in agentic contexts can be configured to request approval before running commands. Users see each action and can reject it.

    No hidden behavior. install.md describes outcomes in natural language. Malicious intent is harder to hide than in a shell script.
Install.md doesn't eliminate trust requirements. Users should only use install.md files from sources they trust—same as any installation method. """

So it is just curl with extra steps. Scripts aren't inherently obfuscated; you can read them. And if a script is obfuscated, its author isn't going to use an install.md anyway, and you (the user) should really think thrice before installing.

Step-by-step approval also sort of betrays the initial pitch about leaving installation to AI and not wasting time reading instructions.

Malicious intent is harder to hide, sure. But if you have any doubt about an author's potential malfeasance, you shouldn't be running their software at all. Wrapping Claude around the install doesn't make it any safer when the exploits and malware are likely baked into the software you're installing, not the installer.

tl;dr: why not just "@grok is this script safe?"

Ten more glorious years to installer.sh


This is some really fantastic feedback, thank you!

I personally think that prose is significantly easier to read than complex bash and there are at least some benefits to it. They may not outweigh the cons, but it's interesting to at least consider.

That said, this is a proposal and something we plan to iterate on. Generating install.sh scripts instead of markdown is something we're at least thinking about.


I think this was written wholly by deep research.

It just reads like a clunky, low-quality article.


It's clearly AI writing ("hum", "delve") but oddly I don't think deep research models use those words.


I think relying on the vocabulary to indicate AI is pointless (unless they're actually using words that AI made up). There's a reason they use words such as those you've pointed out: because they're words, and their training material (a.k.a. output by humans) use them.


No American used "delve" before ChatGPT 3.5, and nobody outside fanfiction uses the metaphors it does (which are always about "secrets" "quiet" "humming" "whispers" etc). It's really very noticeable.

https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...


The link you posted doesn't back up the statement that "No American used "delve" before ChatGPT 3.5". Instead it states that _few_ people used it in _biomedical papers_. I've seen it (and metaphors using the other words you noted) used in fiction for my entire life, and I sure as hell predate chatgpt. This is why it's a bad idea to consider every use of particular words to be AI generated. There are always some people who have larger vocabularies than others and use more words, including words some people have deemed giveaways of AI use.

That said, their use may raise suspicion of AI, but they are _not_ proof of AI. I don't want to live in a world where people with large vocabularies are not taken seriously. Such an anti-intellectual stance is extremely dangerous.


I've been reading deep research results every day for months now and I promise I know what AI writing style looks like.

It has nothing to do with "large vocabularies". I know who the people with large vocabularies were that originally caused the delving thing too, and they weren't American. (Mostly they were Nigerian.) I'm confused what you think specific kinds of metaphors involving sounds have to do with large vocabularies though.

> I've seen it (and metaphors using the other words you noted) used in fiction for my entire life

And the point is that this article isn't fiction. Or not supposed to be anyway.


People with large vocabularies tend to be heavy readers, and therefore encounter these words and metaphors more than people with smaller vocabularies. I think there's a direct link between people treating certain words as proof of AI and the fact that younger generations aren't reading as much as older generations.

https://www.nytimes.com/2025/12/12/us/high-school-english-te...

Somewhat contradictorily, I don't think you can ignore fiction when discussing technical writing, since technical writing (especially online) has become far more casual, and influenced by conversation, pop culture, and yes, even fiction, than it ever was before. So while younger people are reading less these days, people are also less strict about how formal technical writing needs to be, and may well include words and expressions not commonly seen in that style of writing in the past.

I'm not arguing that these things can't be indicators of AI generation. I'm just arguing that they can't be proof of AI generation. And that argument only gets stronger as time goes on and more people are (sadly) influenced by things AI has generated.


I bet the LLM is biased towards the MTG card Delver of Secrets.


But now Americans do use "delve" since 3.5. So what? No Americans used "cromulent" as a word either until The Simpsons invented it. Is it not a real word? Does using it mean The Simpsons wrote it?


Future of humanity, btw.


now do the rest of the world

