Hacker News

We are at a weird moment where the latency of the response is slow enough that we're anthropomorphizing AI code assistants into employees. We don't talk about image generation this way. With images, it's batching up a few jobs and reviewing the results later. We don't say "I spun up a bunch of AI artists."


As a follow-up, how would this workflow feel if LLM generation were instantaneous or cost nothing? What would the new bottleneck be? Running the tests? Network speed? The human reviewer?
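One way to think about that question is as a back-of-envelope latency budget: if each iteration is generation plus tests plus human review, making generation free just shifts the dominant share to the slowest remaining stage. A minimal sketch, with entirely made-up illustrative durations:

```python
# Back-of-envelope sketch of where the bottleneck moves if LLM
# generation were instant. All durations are hypothetical examples.

def pipeline_latency(gen_s: float, tests_s: float, review_s: float) -> dict:
    """Return each stage's share of one generate-test-review iteration."""
    total = gen_s + tests_s + review_s
    return {stage: round(t / total, 2)
            for stage, t in [("generation", gen_s),
                             ("tests", tests_s),
                             ("review", review_s)]}

# Today: generation is slow enough to "send the agent off" and wait.
print(pipeline_latency(gen_s=60, tests_s=30, review_s=120))

# Hypothetical near-instant generation: review dominates the loop.
print(pipeline_latency(gen_s=0.1, tests_s=30, review_s=120))
```

Under these assumed numbers the human reviewer ends up as roughly 80% of the loop once generation time goes to zero, which suggests the "agent" framing would fade and the review UI would become the thing worth optimizing.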


You can get a glimpse of that by trying one of the wildly performant LLM providers - most notably Cerebras and Groq, or the Gemini Diffusion preview.

I have videos showing Cerebras: https://simonwillison.net/2024/Oct/31/cerebras-coder/ and Gemini Diffusion: https://simonwillison.net/2025/May/21/gemini-diffusion/


Are there any semi-autonomous agentic systems for image generation? I feel like it's still mostly a one-shot deal, but maybe there's an idea there.

I guess Adobe is working on it. Maybe Figma too.


That's part of my point. You don't need to conceptualize something as an "agent" that goes off and does work on its own when the latency is less than 2 seconds.


A lot of coding tasks involve looking stuff up, so the latency of loading, rendering, and using a page is the bottleneck.

