Curious what you think.

We made LLMunix, an experimental system where you define AI agents once in markdown and a local model executes them. No API calls after setup.
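
To give a rough flavor, an agent spec reads something like the sketch below. The field names here are illustrative, not the exact schema:

    # Agent: summarizer

    ## Goal
    Summarize whatever text the user pipes in, in three bullets.

    ## Tools
    - read_file
    - write_file

    ## Steps
    1. Read the input.
    2. Draft a summary with the local model.
    3. Write the result and stop.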

The strange part: it also generates mobile apps. Some are tiny; some bundle a local LLM for offline reasoning. Either way, they run entirely on-device.

Everything is pure markdown: the "OS" boots when an LLM runtime reads the spec files and interprets them.
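
In other words, "booting" is basically a loop. Here's a minimal Python sketch of the idea, not our actual implementation; run_local_model is a placeholder for whatever local runtime (llama.cpp, Ollama, MLX, etc.) you'd wire in:

    # Minimal sketch of the "boot" step, not the real implementation.
    from pathlib import Path

    def run_local_model(prompt: str) -> str:
        # Placeholder: plug in llama.cpp, Ollama, MLX, etc.
        raise NotImplementedError

    def boot(spec_dir: str, task: str) -> str:
        # The "OS" is just every markdown spec, concatenated into context.
        specs = "\n\n".join(
            p.read_text() for p in sorted(Path(spec_dir).glob("*.md"))
        )
        prompt = f"{specs}\n\nUser task: {task}\nFollow the specs above."
        return run_local_model(prompt)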

We're still figuring out where this breaks. Edge models are less accurate than hosted ones. Apps that bundle a local model come in at 600 MB+. There are probably lots of edge cases we haven't hit yet.

But the idea is interesting: what if workflows could learn and improve locally? What if apps reasoned on your device instead of in the cloud?

Try it if you're curious. Break it if you can. We genuinely want to know what we're missing.

What would you build with fully offline AI?



