
Anecdotal data, based on a sample of 1 (aka me). I'm originally Polish, but I would say my mother tongue is English. I also learned Latin as a kid/teen. After that, learning any other language is much easier; I also learned German and some Swiss German dialects. I can also do Spanish, Italian, French, Dutch, Czech, and some Serbo-Croatian. I think being Polish makes learning languages easy, as we have a lot of constructions in Polish that do not translate easily to other languages. I think in my case it's the same part of the brain that processes both human language and computer language. My brain can do another fun party trick: I never learned Cyrillic, but I can read it just fine; my brain does something like pattern matching and statistical analysis when reading Cyrillic.

I also learned to think in, hmm, "concepts", and then apply a language of my choice to express them. It's a fun skill to have :) Obviously the works of Chomsky are great, especially where they explore whether language evolves the mind or it's the other way around: does the mind evolve language? [let's skip his rather controversial political views lately].


I speak several languages too, though definitely not as many as you do. I'm also in the process of learning a completely new one, at an advanced age relative to when I last learned a new one (I was in my thirties then). To me, my brain most definitely doesn't process human language the way it handles computer language. It's about as different as it can get. The latter is "learning", the former is "burn patterns into the brain", and learning a language can take years, at least at this age. Computer languages? Those can be picked up in as little as a weekend, and getting proficient isn't a multi-year or decade long process. It feels totally different for me (I've been learning new computer languages at the same time as I've been trying to get up to speed with a new human language).


Computer languages are much simpler than human languages, and they also operate in similarly logical ways. I definitely remember how hard it was to go from Pascal to C to C++ to Python to Prolog to Haskell to SQL... until at some point nothing was new.


To me, working with a computer language involves specific thinking, constructing stuff in my mind. But human language is nothing of the sort, though it's possible to do kind of the same thing if I sit down and try to polish a written sentence. Talking in it, and understanding a conversation, is as far from this as I can imagine. And the learning process is so extremely different.


I completely understand! I'm also Polish American. I have to say it helps when my mother's side of the family is Gdańsk+west and my father's Lublin+east. My wife's family is all from the Warsaw area, and I had to translate for my father-in-law during a holiday to Władysławowo-Hel (it probably helps that my aunt's father's side is Kashubian too, mmm... dessert first).

I was blown away on a holiday to Croatia. It was so unexpectedly easy to understand, relatively speaking, after Czechia, Austria, and Slovenia. I was all, "What just happened!? Shouldn't this be something more like Italian?"

It took only a month for me to be able to communicate in Ukrainian with my ESL students, you're totally right about Cyrillic. And I too think in concepts but switch my brain to express them externally via language, whatever that language may be at the moment. I am terrible at translating OTOH, so unnatural!

But it has its limits. I got to a point after German and Norwegian where I thought I harbored a super-power. Then I went to school in Hungary ;) I also had an ESL student from Lithuania; yep, incomprehensible.


Kudos to you too. Finnish, Hungarian and Estonian I wouldn’t be able to comprehend ;) these are different beasts


Is there a PDF somewhere? I'm not really able to follow YT videos.


There's a link to the AoCO2025 tag for his blog posts in the OP.


This. We could be good buddies.

I'm not understanding this OS and I'm extremely confused that this post got so much traction on HN. Gosh, either I'm too young, or too old, or too nerdy.

I'm not critiquing the project itself; it's more that I'm surprised [very surprised] that it got so much traction on HN. It's not the usual news.


Same feeling here. Realistically, what is this distro doing that couldn't just be done with a quick bash script on whatever the current 'new user distro' is?

The wording is weird too: "Comes with Lutris preinstalled!". Would Windows users switch to a different hypothetical "Windows Distro" that was optimised for gaming?

None of this makes any sense to me.


[flagged]


No project needs to justify its existence. I'm wondering why people would use it.


I doubt many people think about the long-term implications of using a small Linux distro like this. They see something "cool", so they use it.


I don't really care about long term implications of the OS for my standalone gaming rig in my living room. If it works, it works.

Bazzite works, so I'm happily using it. If it stops working, I'll just install another distro. Easy as


[flagged]


Why would I be an LLM?


The comments here are hilarious. Something to go with my morning coffee. We live in the universe where you cannot distinguish between a human or an AI.


The more I read, the more confused I get. Either I live in some bubble, or I truly don't understand the world. It's a Linux distro based on Fedora, for gamers or something - did all the Linux gamers gather on HN today!? I'm not into gaming, hey, I'm into Linux; I don't understand why this gets so much traction. Like yeah, I cannot comprehend that.


Is this an experiment? Is my mind broken? If 1) cool, if 2) I probably need help. @dang wtf is going on in here!?


This looks like babysitting a kid. If that's what CHAT/"vibe coding" looks like - no thank you. I would be frustrated all the time.


I am super efficient these days. But that's exactly what it feels like. Coding is not fun anymore and needs a lot of stress resistance now.

However, doing actual manual coding starts to feel weird as well.


Someone I know who is all-in on AI—the same person who literally said you're not a real engineer if you're not using LLMs—also made a passing remark about how exhausted he was at the end of a workday talking to the clankers.

I have a feeling the job's about to get a whole lot shittier.


> Coding is not fun anymore

The thing that set our crowd apart in working society is now vanishing; that's sad.

I wonder how future movies will depict programmers: depressed faces getting angrier and angrier chatting with a CLI coding agent! This will not inspire future generations!


Ever seen Fritz Lang's Metropolis? Like that, but sitting down.


I'm finding a good balance by relying on LLMs only for the stuff that used to get me stuck because it was just boring to do. The process of reading code, reasoning about it, and designing a solution in my head is still absolutely needed; after that it's quite easy to start hammering out a solution. Sometimes I'd get a bit stuck when I noticed it would need major changes across multiple places; I can let an LLM do that for me and get back on track.

What I can't stand, even though I tried quite a bit, is talking to the damn clanker at length to describe step by step what I believe needs to be done, then waiting, reviewing, telling it what's wrong, waiting, reviewing. I don't think I'm at a stage where I have the mental capacity to run dozens of clankers at once, letting them make all their changes on their own and just reviewing later. It's absolutely exhausting and joyless; I've tried, and at the moment it's not for me.


You could... not use them. The tool makes you less efficient, it is unpleasant to use... there's no upside here.


Unless part of your compensation, or even your company's policy for letting you keep working there, is tied to it.

And just up and changing companies is also not a realistic option for many, before people start bleating that tired, privileged trope.


What exactly isn’t fun about coding?


Well, when you’re coding on your own you can get into the zone and just “flow”. With an LLM you’re waiting for the result; you see it has changed things it shouldn’t have changed, and while the overall result is a step in the right direction, you have to go back and fix a lot of the LLM’s “corrections”, which is super tedious.

I asked Claude to help me out with an issue I was having with a small renderer I was working on. It fixed the issue but also added a memory leak. The leak was easy enough to fix because I fully understood what was going on, but if you’re vibe coding and don’t have the skills to debug it yourself, you’re going to have a bad time.

In the end I don’t like using LLMs for coding but they are good at solving isolated problems when I get stuck. I prefer it when they review my code instead of writing it for me.


I’ve tried the paid models through GitHub copilot and I just can’t find any of them actually useful for anything more than generating tests.

They can generate stuff, but generally I spend so long fixing it manually that it makes the time savings zero or negative.

Only thing I have found useful is the PR review bot. That thing is genuinely incredible at spotting tiny mistakes in massive PRs that make your eyes glaze over.


Not GP, and I don't use LLMs either, but I find software design far more fun than software engineering. Software engineering feels like I was put in a zoo with Lego pieces scattered by all the animals and I have to recreate the model in six hours.


EAZZY MISTAKE /s


I use the Noir extension on my iPhone/iPad; it does it automatically when iOS/iPadOS goes into night/dark mode (which is also geo-based). Works great on OLED screens!


Is there a joke I'm not getting? Or is there some paper that I can read and understand this?

Reading the README and related link [0] I have no idea if this is some serious math concept that I never considered, or is it some sarcastic manifesto.

[0]: https://gist.github.com/mxfactorial/c151619d22ef6603a557dbf3...


It is an AI fever dream. The readme suggests all dimensions are just a single angle transformation, yet the gist says you have to stack the tuples into vectors to increase the dimensionality.

There are physics systems that are simplified by operating with phase vectors. It is not a magical constant time dimension hack.

https://en.wikipedia.org/wiki/Phasor
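To see what phasors actually buy you, here's a minimal sketch in plain Python (nothing from the library under discussion): a sinusoid of a fixed frequency collapses to one complex amplitude, so same-frequency sinusoids add by ordinary complex addition. It's a genuine simplification, not a constant-time dimension hack.

```python
import cmath
import math

# A phasor represents A*cos(w*t + phi) as the complex number A*e^{i*phi}.
# Sinusoids at the SAME frequency add by plain complex addition.
a = cmath.rect(2.0, math.pi / 6)    # phasor of 2*cos(wt + pi/6)
b = cmath.rect(1.0, -math.pi / 3)   # phasor of 1*cos(wt - pi/3)
s = a + b                           # phasor of the summed signal
amplitude, phase = abs(s), cmath.phase(s)
```

Sampling both representations at any t gives the same value, which is easy to verify numerically.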


> Is there a joke I'm not getting? Or is there some paper that I can read and understand this?

This is exactly what I wished to have answered for myself by submitting this to HN! I came across it in an unrelated PR on GitHub, didn't understand enough to figure out if it's actually something noteworthy or not, but sounded like it, so here we are.

Now someone just has to figure out whether this is actually sound or not :) My hunch from looking through the commits is that it's made by someone with an unsound mind, but you never know; it could just be that I don't understand enough.


It’s unsound, but thankfully there are many sound geometric algebra libraries.


> This is exactly what I wished to have answered for myself by submitting this to HN!

dont ask social media technical questions. stay empirical and draft your own tests. welcome to push a failing test discrediting the lib


Could be sarcasm, could be the product of a working manic episode. If it's a joke it's very dry.

But ultimately the key data structure is trivial; it's a 2d vector(?) that splits the angle information into quotient and remainder assuming a divisor of PI / 2. This is hardly a novel construction.
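For the curious, my reading of that construction fits in a few lines of Python (the function name is mine, not the gist's): the angle is just split by Euclidean division with divisor pi/2.

```python
import math

def decompose(theta):
    # Split an angle into (quotient, remainder) with divisor pi/2,
    # which appears to be all the "key data structure" amounts to.
    q = math.floor(theta / (math.pi / 2))
    r = theta - q * (math.pi / 2)
    return q, r
```

For example, `decompose(math.pi)` gives `(2, 0.0)`: ordinary divmod, nothing more.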


Doesn't read like a joke. Did Kanye post it?

They claim negative numbers and matrices are unnecessary because they figured out a better way to do math.


It reads like the Time Cube... which wasn't un-funny until we learned more about the author.

As for the math itself, you can put any amount of data you like into this 2-component vector, but nowhere is it claimed that you can get that data back out.

After vigorously shoveling your data onto the head of a pin, you can do any number of operations on that pin in O(1) time. And as long as you don't ask for an answer, you'll be satisfied that your calculation was executed with the utmost alacrity.
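The irrecoverability is easy to make concrete with a toy stand-in for any such fixed-size packing (the `fold` below is my invention, not the library's actual mapping): once arbitrarily many inputs map to one small pair, distinct data must collide, so the operation cannot be inverted.

```python
import math

def fold(vec):
    # Toy stand-in for any scheme that crushes an arbitrary-length
    # vector into one fixed-size (quotient, remainder) pair.
    # NOT the library's actual mapping - just a pigeonhole illustration.
    total = sum(vec)
    q = math.floor(total / (math.pi / 2))
    return q, total - q * (math.pi / 2)

# Three different inputs, one output: the data cannot come back out.
assert fold([0.25, 0.75]) == fold([0.5, 0.5]) == fold([1.0])
```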

For once I'm almost curious what an LLM has to say about this bullshit.

Amusing tidbit from the copilot instructions:

> avoid words like "proper", "correct", "appropriate" and "valid" in your comments AND responses. these weasel words only create confusion in a lib challenging convention

Ya know what, they claim to be packing thousand-dimensional vectors into a pair of 64 bit floats. Great. The author should compress an entire set of LLM weights and then show us how their LLM performs with this O(1) magic.


> It reads like the Time Cube

push a rust test proving what it reads like


If the author agrees, I could try to learn Serbo-Croatian (I'm Polish, good with languages) and translate it to English. I'm kind of a burnt-out Linux geek who can't look at computers much anymore. Translating a book would be fun, but I would need some sponsoring. Amadeusz at [the old name of icloud].com


You may want to find an email provider that has a better spam filter if you want people to actually contact you.


The book is licensed under CC BY-SA, so you should be OK with translating as long as you follow the licence terms.

You could try doing a first pass through an AI model to translate, and then proof-read it, for a quicker translation. Good luck, it would be fun and potentially impactful ;)


I used to work as a "Kernel/Hypervisor Engineer" at that big company that sells books. People from outside tech always thought I was some kind of supervisor's supervisor ;)


Just make sure you look busy when the Ultravisor's walking around.

