I don't see how to interpret your claims. How do you yourself know that you're right when you "recognize" Claude or ChatGPT? How do you know how much of the text you don't recognize as any LLM is actually LLM-generated? My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.
I'm not claiming to be able to spot 100% of LLM-written output. However, the default tone and output style of Claude and ChatGPT are very obvious.
> My recollection is whenever I've seen data on this--the educators who think they can spot students cheating--the conclusion is people are really bad at identifying LLM-generated content.
If you can share that data we can discuss it, but there's really nothing to discuss here without a source.