
> usually I’m the bottleneck

This is my experience now too. The degree to which we are bottlenecks comes down to how good we are at finding the right balance between micromanaging the models (doesn't work well - a massive waste of time; most of the issues you spend time correcting are things the models can correct themselves) vs. abandoning all oversight (also doesn't work well; it will entrench major architectural problems that take a lot of effort to fix).

I spend a fairly significant amount of time revising agents, skills etc. to take myself out of the loop as much as possible, reviewing what has worked and what hasn't, so the model fixes what it can before I have to review its code. My experience is that this time has a high ROI.

It doesn't matter if the steps I add waste lots of the model's time cleaning up code I ultimately end up rejecting, because its time is cheap and mine is not, and the cleanups also tend to shorten the time it takes for it to realise it has done something stupid.

Getting to a point where I'm comfortable "letting go" - letting the model write stupid code and then fix it, before I even look at it - has been the hardest part of accelerating my AI use.

If I keep reading as Claude Code runs, the model often infuriates me and I start typing messages telling it to fix something tremendously idiotic it has just done, only to have it realise and fix it before I get to pressing enter. There's no point doing that, so increasingly I put my sessions on other virtual desktops and try to forget about them while they're working.

It still does stupid stuff, but the proportion of stupid stuff I need to manually review and reject keeps dropping.
