Hacker News | rozgo's comments

World model for AI agents: process mining of missions, operations, and logistics to turn them into digital twins. AI agents can then use these digital twins as world models for control or prediction.


I develop simulators for digital twins and games. Currently working on a simulator for LLMs to use as world models.


It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer it in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately improving results: cargo, the expected code structure, the examples, docs, and error messages are so well thought out in Rust that coding agents can get very far. I work on 2-3 projects at once, cycling through them and supervising their work. Most of my work is simulation, physics, and complex robotics frameworks. It works for me.


Working on "context engineering" for coding agents, specifically for complex dev environments and targets like robotics, digital twins, and games. I've watched agents go from a 100% failure rate to contributing nearly 90% of the plumbing code. I'm helping agents understand how to use simulators and game engines, and how to configure, build, and deploy DevOps/MLOps pipelines.


Yang, this is really well done. I work with Rust and robotics all day, and I would never have imagined seeing all this in one file. It's a great learning tool. If you complete the MPC controller, it could also serve as a basis for training AI pilots.


Do you have any suggestions for completing the MPC controller? Anything helps, for example: a reference Rust repo, an optimization engine, or an optimization algorithm (linear/nonlinear). For the nonlinear case I am deciding between PANOC and ADMM.

It is hard to demonstrate MPC in a simple way, but I want to take on the challenge!


PANOC: fast, efficient, and good for real-time, embedded, and adaptive control.

I’ll check what I can dig up for Rust examples.


We ask it to predict, and in doing so it sometimes builds a model of the world it can use to "think" about what comes next, within the forward pass.


The Yoneda lemma: an object is entirely determined by its relationships to other objects.
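For reference, the standard statement behind that slogan (nothing here is specific to this thread; $\mathcal{C}$ is any locally small category and $F : \mathcal{C} \to \mathbf{Set}$ any functor):

```latex
% Yoneda lemma: natural transformations out of the hom-functor
% Hom_C(A, -) correspond exactly to elements of F(A):
\operatorname{Nat}\big(\operatorname{Hom}_{\mathcal{C}}(A,-),\; F\big) \;\cong\; F(A)
```

Taking $F = \operatorname{Hom}_{\mathcal{C}}(B,-)$ gives $\operatorname{Hom}_{\mathcal{C}}(A,-) \cong \operatorname{Hom}_{\mathcal{C}}(B,-)$ exactly when $A \cong B$: an object is determined, up to isomorphism, by its relationships.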


That’s exactly right. We, just another games and simulation studio, are integrating LLMs into several games and engines: dialog systems, hero journeys, behavior trees, and tech-art pipelines. Most don't even expose a chatbot to the player; LLMs are also used by game systems to chatter among themselves.


The challenge is making it fun. I'd compare it to procedural generation. There are some big success stories there, and lots of games where it just generates infinite bland content.


Who are "we"?


6mo+ is spot on. This is true for most dev work that deviates from the CRUD path, like gaming, simulation, and robotics. We might as well use the time to train them in Rust too.


In startups and projects where Rust is a premature optimization, this makes sense. But some startups and projects are all about the competitive advantage created by optimizing from day one. In those cases, choosing Rust and other early optimizations is the main enabler of a unique product.

