
I have successfully vibe-coded features in C. I still don't like C. The agent forgets to free memory just like a human would and has to go back and fix it later.

On the other hand, I've enjoyed vibe coding Rust more, because I'm interested in Rust and felt like my understanding improved along the way as I saw what code was produced.

A lot of coding "talent" isn't skill with the language, it's learning all the particularities of the dependencies: the details of the Smithay package in Rust, the complex set of GTK modules, or the Wayland protocol implementation.

On a good day, AI can help navigate all that "book knowledge" faster.





Something I've noticed that I never really see called out is how easy it is to review Rust code diffs. I spent a lot of my career maintaining company-internal forks of large open-source C programs, but recently have been working in Rust. The thing I spent a lot of time chasing down while reviewing C code diffs, particularly from newer team members, was whether they paid attention to all the memory assumptions that were non-local to the change they made. E.g. I'd ask them: "the way you called this function implies it _always_ frees the memory behind that char*. Is that the case?" If they didn't know the answer immediately, I'd be worried and spend a lot more time investigating the change before approving.

With Rust, what I see is generally what I get. I'm not worried about heisenbug gotchas lurking in innocent-looking changes. If someone is going to be vibe coding, and truly doesn't care about the language the product ends up in, they might as well do it in a language that has rigid guardrails.


How do LLMs deal with Rust (compared to other languages)? I think this might actually be the time to finally give the language a try. LLMs really lowered the barrier for staying productive while learning.

This is extremely limited-scope anecdata, but I've spent a few tens of hours each testing LLM coding agents in Rust for personal projects and in Python at work. My impression is that LLMs are far more productive in Rust. I attribute this to the far more structured nature of Rust compared to Python, and possibly the excellent compiler error messages as well.

The LLM gets stuck in unproductive loops all the time in Python. In Rust, it generally converges to a result that compiles and passes unit tests. Of course the code quality is still variable. My experience is that it works best when prompts are restricted to a very small unit of work. Asking an LLM to write an entire library/module/application from scratch virtually never results in usable code.


sometimes they randomly choose the ugliest possible way to do pattern matching, e.g. multiple blocks of nested "if let" instead of a "match", or a "match" instead of a single "if let"

otherwise, it works great; much easier to un-vibe the code compared to, e.g., Python

(gpt 5.* in codex/sonnet 4.5 in cc/glm 4.6)


It's really funny how much better the AI is at writing Python and JavaScript than it is at C/C++. For one thing, it proves the point that those languages really are just way harder to write. For another, it's funny that the AI makes the exact same mistakes a human would in C++. I don't know if it's that the AI was trained on human mistakes, or just that these languages have such strong wells of footguns that even an alien intelligence gets trapped in them.

So in essence I have to disagree with the author's suggestion to vibe code in C instead of Python. I think the Python usability features that were made for humans actually help the AI in the exact same ways.

There are all kinds of other ways that vibe coding should change one's design though. It's way easier now to roll your own version of some UI or utility library instead of importing one to save time. It's way easier now to drop down into C++ for a critical section and have the AI handle the annoying data marshalling. Things like that are the real unlock in my opinion.


I don't think it has much to do with the languages being harder: the training sets for JS and Python are probably an order of magnitude larger.

More examples/better models and fewer footguns. In programming, the fewer (assumed-correct) abstractions, the more room for error. Humans learned this a while ago, which is why your average programmer doesn't remember a lick of ASM, or need to. One of the reasons I don't trust vibe coding in lower-level languages is that I don't have multiple tools with which to cross-check the AI output. Even the best AI models routinely produce code that does not compile, much less account for all side effects. Often, the output outright curtails functionality. It casually makes tradeoffs that a human (usually) would not make. In C, AI use is a dangerous proposition.

The amount of freely available C code must be very large. Good C code, significantly smaller :-\

> I don't know if it's that the AI was trained on human mistakes, or just that these languages have such strong wells of footguns that even an alien intelligence gets trapped in them.

First one. Most of the C code you can find out there is either one-liners or shit; there are fewer big projects for the LLMs to train on, compared to Python and TypeScript.

And once we get to the embedded space, the LLMs are trained on manufacturer-written/autogenerated code, which is usually full of inaccuracies (mismatched comments), bugs, and bad practices.


> It's really funny how much better the AI is at writing python and javascript than it is C/C++. For one thing it proves the point that those languages really are just way harder to write.

I have not found this to be the case. I mean, yeah, they're really good with Python and yeah that's a lot easier, but I had one recently (IIRC it was the pre-release GPT5.1) code me up a simulator for a kind of microcoded state machine in C++ and it did amazingly well, almost in one shot. It can single-step through the microcode, examine IOs, allows you to set input values, etc. I was quite impressed. (I had asked it to look at the C code for a compiler that targets this microcoded state machine, in addition to some Verilog that implements the machine, in order for it to figure out what the simulator should be doing.) I didn't have high expectations going in, but was very pleasantly surprised to have a working simulator with single-stepping capabilities within an afternoon, all in what seems to be pretty well-written C++.


I mean, there's C, and then there's C++. I've found AI to be pretty okay at C.

> I have successfully vibe-coded features in C. I still don't like C.

Same here. I've been vibe-coding in C for the sake of others in my group who only know C (no C++ or Rust). And I have to say that the agent did do pretty well with memory management. There were some early problems, but it was able to debug them pretty quickly (and certainly if I had had to dig into the intricacies of GDB to do that on my own, it would've taken a lot longer). I'm glad that it takes care of things like memory management and dealing with strings in C (things that I do not find pleasant).


Lately I have been learning assembly more deeply, and I sometimes let an AI code up the same thing I did, just to compare.

Not that my own code is good, but every single time the assembly output from an optimizing compiler beats the AI, as it "forgets" all the little tricks involved. It may still be about how I prompt it, though. If I tell it to solve the actual challenge in assembly, it does do that; it's just not good or efficient code.

On the other hand, because I take the time to proofread it, I learn from its mistakes just as I would from my own.
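One concrete instance of those "little tricks": an optimizing compiler replaces division by a constant with a multiply-and-shift, while naive hand- or LLM-written assembly tends to emit an actual divide instruction. A small C sketch of the transformation (the magic constant is the standard reciprocal for unsigned 32-bit division by 10):

```c
#include <stdint.h>

/* Naive form: under -O2, compilers turn this into a multiply-and-shift,
 * not a DIV instruction. */
static uint32_t div10_naive(uint32_t x) { return x / 10; }

/* The trick spelled out: 0xCCCCCCCD == ceil(2^35 / 10), so
 * (x * 0xCCCCCCCD) >> 35 == x / 10 for every 32-bit x. */
static uint32_t div10_trick(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}
```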


Shouldn't we try vibe coding on IR then? Basically assembly before compiler optimizations?

Yeah, I suppose one would need not only the source and binaries but also the IR in the AI training data, which may be rare but could probably be generated easily for a lot of software.

> The agent forgets to free memory just like a human would and has to go back and fix it later.

I highly recommend people learn how to write their own agents. It's really not that hard. You can do it with any LLM, even ones that run locally.

I.e. you can automate things like checking for memory freeing.


Why would I want to have an extra thing to maintain, on top of having to manually review, debug, and write tests for a language I don't like that much?

You don't have to maintain it. LLMs are really good at following directions.

I have a custom agent that takes Python code, translates it to C, does a refactoring pass to include a mempool implementation (so that memory is allocated once at the start of the program, and instead of malloc it grabs chunks out of the mempool), runs cppcheck, uploads it to a container, and runs it with valgrind.

Been using it since ChatGPT-3; the only updates I've made were API changes to call different providers. It doesn't use any agent/MCP/tools thing either, pure chat.
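For what it's worth, the mempool scheme described above can be sketched in a few lines of C. The names (`pool_init`, `pool_alloc`) are illustrative, not the poster's actual code: one buffer is obtained up front, and `pool_alloc` stands in for `malloc` by bumping an offset.

```c
#include <stddef.h>

typedef struct {
    unsigned char *base;   /* single upfront buffer */
    size_t cap;            /* total capacity */
    size_t used;           /* bump offset */
} Pool;

static void pool_init(Pool *p, unsigned char *buf, size_t cap) {
    p->base = buf;
    p->cap = cap;
    p->used = 0;
}

/* Drop-in for malloc: carve the next chunk out of the pool.
 * Returns NULL when the pool is exhausted. */
static void *pool_alloc(Pool *p, size_t n) {
    n = (n + 7) & ~(size_t)7;            /* keep offsets 8-byte aligned */
    if (n > p->cap - p->used) return NULL;
    void *out = p->base + p->used;
    p->used += n;
    return out;
}
```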


There's always going to be some maintenance, at the very least the API changes for providers you mentioned, and then there's still the reviews and testing of the C.

A mempool seems very much like a DIY implementation of malloc, unless you have fixed-size allocations or something else that would make things different; not sure why I'd want that in the general case.

For "non hacker style" production code it just seems like a lot of extra steps.


> I.e. you can automate things like checking for memory freeing.

Or, if you don't need to use C (e.g. for FFI or platform compatibility reasons), you could use a language with a compiler that does it for you.


Right, a lot of the promise of AI can be (and has been) achieved with better tool design. If we get the AI to start writing assembly or machine code as some people want it to, we're going to have the same problems with AI writing in those languages as we did when humans had to use them raw. We invented new languages because we didn't find the old ones expressive enough, so I don't exactly understand the idea that LLMs will have a better time expressing themselves in those languages. The AI forgetting to free memory in C and having to go back and correct itself is a perfect example of this. We invented new tools so we wouldn't have to do that anymore, and they work. Now we are going backwards, building giant AI datacenters that suck up all the RAM in the world just to make up for lost ground? Weak.

> We invented new languages because we didn't find those old ones expressive enough

Not quite. It's not about being expressive enough to define algorithms; it's about simplification, organization, and avoidance of repetition. We invented languages to automate a lot of the work that programmers had to do in a lower-level language.

C abstracts away handling memory addresses and setting up stack frames like you would in assembly.

Rust makes handling memory more restrictive so you don't run into issues.

Java abstracts away memory management completely, so you don't need to manage memory, freeing you up to design algorithms without worrying about memory leaks (although apparently you do have to worry about whether your log statements can execute arbitrary code).

JavaScript and Python abstract type declarations away through dynamic typing.

Likewise, OOP/typing, functional programming, and other styles were introduced for better organization.

LLMs are right in line with this. There is no difference between you using a compiler to compile a program, a sufficiently advanced LLM writing said compiler and using it to compile your program, or an LLM compiling the program directly with agentic loops for accuracy.

Once we get past the hype of big LLMs, the next chapter is gonna be much smaller, specialized LLMs with architecture that is more deterministic than probabilistic that are gonna replace a lot of tools. The future of programming will be you defining code in a high level language like Python, then the LLM will be able to infer a lot of the information (for example, the task of finding how variables relate to each other is right in line with what transformers do) just from the code and do things like auto infer types, write template code, then adapt it to the specific needs.

In fact, CPUs already do this to a certain extent - modern branch predictors are basically miniature neural networks.


I use rust. The compiler is my agent.

Or to quote Rick and Morty, “that’s just rust with extra steps!”


On a related note, I've always regarded Python as the best IDE for writing C. :)

Replace memory with one of the dozen common issues the Rust compiler does nothing for, like deadlocks.

Well, the case would still stand, wouldn't it? Unless C is free of these dozen common issues.

Sure. Or you can let the language do that for you and spend your tokens on something else. Like, do you want your LLM to generate LLVM bytecode? It could, right? But why wouldn't you let the compiler do that?

Unless I'm writing something like code for a video game in a game engine that uses C++, most of the stuff I need C for is compartmentalized enough that it's much faster to have an LLM write it.

For example, the last C code I wrote was TCP over Ethernet, bypassing the IP layer, so I can be connected to the VPN while still being able to access local machines on my network.

If I'm writing it in Rust, I have to do a lot of research, think about code structure, and so on. With LLMs, it took me an hour to write, and that with no memory leaks or any other safety issues.
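A hedged sketch of the kind of building block such a tunnel needs: constructing a raw Ethernet II header to carry the payload without an IP layer. The function name is illustrative, and 0x88B5 (the IEEE "local experimental" EtherType) is an assumed but plausible choice; actually sending the frame would additionally need an AF_PACKET socket and CAP_NET_RAW on Linux.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { ETH_HDR_LEN = 14 };   /* dst(6) + src(6) + ethertype(2) */

/* Build an Ethernet II header in `out` (must hold ETH_HDR_LEN bytes).
 * The EtherType is written big-endian, as it appears on the wire. */
static size_t eth_header(uint8_t out[ETH_HDR_LEN],
                         const uint8_t dst[6], const uint8_t src[6],
                         uint16_t ethertype) {
    memcpy(out, dst, 6);
    memcpy(out + 6, src, 6);
    out[12] = (uint8_t)(ethertype >> 8);
    out[13] = (uint8_t)(ethertype & 0xFF);
    return ETH_HDR_LEN;
}
```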


Interesting. I find that Claude 4.5 has a ridiculous amount of knowledge and “I don’t know how to do that in Rust” is exactly what it’s good at. Also, have you tried just modifying your route table?

>Also, have you tried just modifying your route table?

The problem is I want to run VNC on my home computer against the server on my work Mac, so I can just access everything from one screen and m+b combo without having to use a USB switch and a second monitor. With the VPN on, it basically just does not allow any inbound connections.

So I run a localhost tunnel: it's a generic Ethernet listener that basically takes data, initiates a connection to localhost from localhost, and proxies the data. On my desktop side, it's the same thing, just in reverse.


Do you have any good starting points? For example, if someone had an Ollama or LM Studio daemon running, where would they go from there?

I think arenas might be a better memory-management technique when vibe coding C, for this reason.
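The reason arenas fit this failure mode: individual allocations are never freed, so there is nothing for the agent to forget; the whole arena is released (or reset) at a single place. A minimal sketch, with illustrative names:

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    char *buf;
    size_t cap;
    size_t used;
} Arena;

static Arena arena_new(size_t cap) {
    Arena a = { malloc(cap), cap, 0 };
    return a;
}

/* Allocate from the arena; note there is no per-object free to forget. */
static void *arena_push(Arena *a, size_t n) {
    if (!a->buf || n > a->cap - a->used) return NULL;
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

/* The only two "free" operations: reset everything, or release everything. */
static void arena_reset(Arena *a) { a->used = 0; }
static void arena_free(Arena *a)  { free(a->buf); a->buf = NULL; }
```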

I just wrote a piece on this specific C issue the other day https://news.ycombinator.com/item?id=46186930

Well, glib is terrible for anything important; it's really just for desktop apps. When there is a memory error, glib does not really handle it, it just aborts. OK for desktop, not OK for anything else.

I addressed this in the first sentence of the second post (g_try_malloc) in a direct reply to my original post: https://news.ycombinator.com/item?id=46186931


