Hacker News

Have you tried having an LLM write significant amounts of, say, F#? Real language, lots of documentation, definitely in the pre-training corpus, but I've never had much luck with even mid-sized problems in languages like it -- ones where today's models absolutely wipe the floor in JavaScript or Python.


Even best-in-class LLMs like GPT-5 or Sonnet 4.5 do noticeably worse in languages like C#, which is pretty mainstream but not on the level of TypeScript and Python -- to the degree that I don't think they can reliably output production-level code without a crazy level of oversight.

And this is for generic backend stuff, like a CRUD server with a REST API; the same task with an Express/Node backend works no trouble.


I’m doing Zig and it’s fine, though not significant amounts yet. I just had to have it synthesize the latest release changelog (0.15) into a short summary.

To be clear, I mean specifically using Claude Code, with sample context preloaded and the ability to call the compiler and iterate on its errors.

I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) would go much worse. But if it has the compiler available and writes tests, it shouldn’t be an issue. It might cost 2-3 extra back-and-forths with the compiler, but that’s an extra couple of minutes, tops.
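The loop described above (write code, call the compiler, read the errors, retry a few times) can be sketched roughly like this. Everything here is illustrative, not Claude Code's actual internals: `run_build` stands in for the real compiler invocation (e.g. `subprocess.run(["zig", "build", "test"])`) and is stubbed to fail once before passing, just to show the iteration.

```python
# Hypothetical sketch of an agent's edit-compile-retry loop.

def run_build(attempt: int) -> bool:
    # Stand-in for invoking the real compiler and test suite;
    # stubbed to fail on the first attempt, then pass.
    return attempt >= 2

def build_with_retries(max_attempts: int = 3) -> int:
    """Return the attempt number on which the build passed."""
    for attempt in range(1, max_attempts + 1):
        if run_build(attempt):
            return attempt  # compiler (and tests) happy
        # A real agent would read the compiler output here and patch the code.
    raise RuntimeError(f"gave up after {max_attempts} attempts")

print(build_with_retries())  # → 2
```

The point of the sketch is just that each extra round trip is cheap relative to the session: one more compile, one more patch.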

In general, even if working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.
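One common way to front-load that kind of context is a CLAUDE.md file at the repo root, which Claude Code reads into context at session start. A hypothetical sketch for a Go code base (every path and rule below is illustrative, not from any real project):

```markdown
# CLAUDE.md (illustrative example)

## Code style
- Wrap errors with fmt.Errorf("context: %w", err); never discard them.
- Table-driven tests only.

## Layout
- cmd/       entry points
- internal/  private packages; most logic lives here
- api/       protobuf / OpenAPI definitions
```

This keeps the style and "what's where" notes out of the per-session prompt.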


Humans can barely untangle F# code...



