Is this getting traction? The front page of HN and some meta-debate is a pretty low bar for what I’d consider traction if I were one of the richest companies on Earth.
> Microsoft is Windows. Anyone saying otherwise is completely delusional.
What's delusional is making unsubstantiated claims and then dismissing any counterarguments before they're made.
> Most of M$ office software has alternatives (Google Docs, OpenOffice...)
True. Yet MS Office is still the de facto standard.
> Github is constantly crashing and burning
True. But that doesn't mean it isn't still a business strategy for MS.
> Azure is garbage
Also true. But that doesn't mean it isn't profitable: "Microsoft Cloud revenue increased 23% to $168.9 billion."
> and they uttery killed Xbox
Quite the opposite. Xbox is thriving: "Xbox content and services revenue increased 16%."
> Oh and Linkedin is for actual psychopaths.
That's subjective. And even if it were true, that's got nothing to do with profitability (eg look at Facebook).
> If Windows dies, all of their other junk that is attached to the platform will die as well.
First off, literally no-one is claiming Windows is going to "die".
Secondly, even if it were to "die", you've provided no evidence why their other revenue streams wouldn't succeed when it's already been demonstrated that those revenue streams are growing, and in some cases, have already overtaken Windows.
I know devs are a different market, but how many folks do we know who daily drive Mac/Linux and use MS dev tools? VS Code, TypeScript, .NET?
I think they'll do just fine if Windows dies on the vine. They'll keep selling all the same software; even for PC gaming they already have their titles on Steam.
IRC for main chat, Mumble for voice chat when gaming. Been solid for decades. I have at least 3 functional Mumble servers saved (including my own) in my client, most of them are associated with an IRC community. I occasionally hear "Anyone down for some Quake? Hop on Mumble." or something to that effect. Mumble is pretty easy to host, so if you're using it with a small to medium group of friends, I'd say just throw up a server on your LAN somewhere. It's got decent mobile clients on F-Droid as well if you need one.
Some of my gaming buddies on Discord needed help getting that properly working. Asking them to set up and use both IRC and Mumble would be a step too far.
This is a common trap HN falls into. Stuff that’s easy and practical for people of our capabilities can be a nightmarish hellscape for other people.
8 years is not that long. If it can still compile in, say, 20 years then sure, but 8 years in this industry isn't that long at all (unless you're into self-flagellation by working on the web).
Except 8 years is impressive by modern standards. These days, most popular ecosystems have breaking changes that would cause even just 2-year-old code bases to fail to compile. It's shit and I hate it. But that's one of the reasons I favour Go and Perl -- I know my code will continue to compile with very little maintenance years later.
Plus 8 years was just an example, not the furthest back Go will support. I've just pulled a project I'd written against Go 1.0 (the literal first release of Golang). It's 16 years old now, uses C interop too (so not a trivial Go program), and I've not touched the code in the years since. It compiled without any issues.
Go is one of the very few programming languages that has an official backwards compatibility guarantee. This does lead to some issues of its own (eg some implementations of new features have been somewhat less elegant because the Go team favoured an approach that didn't introduce changes to the existing syntax).
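To make the claim concrete, here's the kind of program that has built unchanged across every Go release since 1.0. (This is an illustrative sketch, not the commenter's actual project; it uses cgo so it isn't a trivial Go program either.)

    package main

    /*
    #include <stdio.h>
    #include <stdlib.h>
    static void greet(const char *s) { printf("hello, %s\n", s); }
    */
    import "C"

    import "unsafe"

    func main() {
        // C.CString, C.free, and calling into the preamble have worked the
        // same way since Go 1.0, which is why code like this keeps compiling.
        name := C.CString("world")
        defer C.free(unsafe.Pointer(name))
        C.greet(name)
    }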
Have we though? I feel the opposite is true. These days developers expect that users of their modules and frameworks will be regularly updating those dependencies, and doing so dynamically from the web.
While this is true for active code bases, you'll quickly find that stable but unmaintained code eventually rots as its dependencies are deprecated.
There aren't many languages out there whose wider ecosystem thinks about API stability in terms of years.
If they change the syntax, sure, but you can always use today's compiler if necessary. I generally find Go binaries have even fewer external dependencies than a C/C++ project.
It depends on your threat model. Mine includes the compiler vendors abandoning the project and me needing to make my own implementation. Obviously unlikely, and someone else would likely step in for all the major languages, but I'm not convinced Go adds enough over C to give away that control.
As long as I have a stack of esp32s and a working C compiler, no one can take away my ability to make useful programs, including maintaining the compiler itself.
I think relatively few programs need to be large. Most complexity in software today comes from scale, which usually results in an inferior UX. Take Google Drive for example: it's very complicated to build a system like that, but most people would be better served by a WebDAV server hosted by a local company. You'd get way better latency and file transfer speeds, and the company could use off-the-shelf OSS or write their own.
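For a sense of scale, a bare-bones WebDAV server in Go is only a handful of lines using the golang.org/x/net/webdav package (the directory path and port below are placeholders; a real deployment would add auth and TLS):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/webdav"
    )

    func main() {
        h := &webdav.Handler{
            FileSystem: webdav.Dir("/srv/files"), // directory to expose (placeholder path)
            LockSystem: webdav.NewMemLS(),        // in-memory WebDAV locks
        }
        log.Fatal(http.ListenAndServe(":8080", h)) // plain HTTP for brevity
    }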
The golden age for me is any period where you have fully documented systems.
Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.
And software that’s open and can be modified.
Instead what we have is:
- AI models which are little black boxes, beyond our ability to fully reason about.
- perpetual subscription services for the same software we used to “own”.
- hardware that is completely undocumented to all but a small few, who must sign an NDA beforehand
- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”
- and distributed systems becoming centralised around the likes of GitHub, CloudFlare, AWS, and so forth.
The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a very special breakthrough, but it’s not enough by itself to overlook all the other problems with modern computing.
My take on the difference between now and then is “effort”. All those things mentioned above are now effortless, but the door to “effort” remains open as it always has been. Take the first point for example: those little black boxes of AI can be significantly demystified by watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself. We used to purchase software, or write it ourselves, before it became effortless to get it for free in exchange for ads, and then for a subscription once we grew tired of ads or were tricked by a bait and switch. You can also argue that it has never been easier to write your own software than it is today.
Hostile operating systems. Take the effort to switch to Linux.
Undocumented hardware? Well, there is far more open source hardware out there today, and back in the day it was fun to reverse engineer hardware. Now we just expect it to be open because we can’t be bothered to put in the effort anymore.
Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.
I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level. But we cannot know how well it will perform at inference until we test it. So much of AI research is just trial and error, with different dials repeatedly tweaked until we get something desirable. So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cypher. Or any other hand-programmed algorithm.
I also run Linux. But that doesn’t change how the two major platforms behave and that, as software developers, we have to support those platforms.
Open source hardware is great, but it’s not in the same league as proprietary hardware on price and performance.
Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.
but isn't this like a lot of other CS-related "gradient descent"?
when someone invents a new scheduling algorithm or a new concurrent data structure, it's usually based on hunches and empirical results (benchmarks) too. nobody sits down and mathematically proves their new linux scheduler is optimal before shipping it. they test it against representative workloads and see if there is uplift.
we understand transformer architectures at the same theoretical level we understand most complex systems. we know the principles, we have solid intuitions about why certain things work, but the emergent behavior of any sufficiently complex system isn't fully predictable from first principles.
that's true of operating systems, distributed databases, and most software above a certain complexity threshold.
No. Algorithm analysis is much more sophisticated and well defined than that. Most algorithms are deterministic, and it is relatively straightforward to identify their complexity in big-O terms. Even for nondeterministic algorithms we can evaluate asymptotic performance under different categories of input. We know a lot about how an algorithm will perform under a wide variety of input distributions, regardless of determinism. In the case of schedulers and other critical concurrency algorithms, performance is well known before release. There is a whole subfield of computer science dedicated to it. You don't have to "prove optimality" to know a lot about how an algorithm will perform. What's missing in neural networks is the why and how of inputs propagating through the network during inference. It is a black box of understandability: under a great deal of study, but still very poorly understood.
i agree w/ the complexity analysis point, but how well that theoretical understanding actually translates to real world deployment decisions is questionable in both subfields. knowing an algorithm's big-O tells you surprisingly little about whether it'll actually outperform alternatives on real hardware with real cache hierarchies, branch predictors, and memory access patterns. same thing with ML (just with the very different nature of GPU hw): both subfields have massive graveyards of "improvements" that looked great on paper (or in controlled environments) but never made it into production systems. arxiv is full of architecture tweaks showing SOTA on some benchmark, and the same goes for novel data structures/algorithms that nobody ever uses at scale.
I think you missed the point. Proving something is optimal is a much higher bar than just knowing how the hell the algorithm gets from inputs to outputs in a reasonable way. Even concurrent systems, and algorithm bounds under input distributions, have well established ways to evaluate them. There is literally no theoretical framework for how a neural network churns out answers from inputs, other than the most fundamental "matrix algebra". Big O, Theta, Omega, and asymptotic performance are all sound theoretical methods for evaluating algorithms. We don't have anything even that good for neural networks.
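To illustrate the asymmetry: for a hand-written algorithm like the one below, the bound falls out of the code before you ever run it, whereas for a trained network the only comparable statement comes from empirical testing. (Binary search is just a stand-in example here.)

    package main

    import "fmt"

    // binarySearch halves the search space on every iteration, so for any
    // sorted input of length n it performs at most ~log2(n)+1 comparisons.
    // That bound is derivable from the code alone; no benchmarking needed.
    func binarySearch(xs []int, target int) int {
        lo, hi := 0, len(xs)-1
        for lo <= hi {
            mid := lo + (hi-lo)/2
            switch {
            case xs[mid] == target:
                return mid
            case xs[mid] < target:
                lo = mid + 1
            default:
                hi = mid - 1
            }
        }
        return -1
    }

    func main() {
        fmt.Println(binarySearch([]int{1, 3, 5, 7, 9, 11}, 7)) // prints 3
    }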
>Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself.
That's like saying you can understand humans by watching some physics or biology videos.
Except it's not. Traditional algorithms are well understood because they're deterministic formulas. We know what the output is if we know the input. The surprises that happen with traditional algorithms are when they're applied in non-traditional scenarios as an experiment.
Whereas with LLMs, we get surprised even when using them in an expected way. This is why so much research happens investigating how these models work even after they've been released to the public. And it's also why prompt engineering can feel like black magic.
I think the historical record pushes back pretty strongly on the idea that determinism in engineering is new. Early computing basically depended on it. Take the Apollo guidance software in the 60s. Those engineers absolutely could not afford "surprising" runtime behavior. They designed systems where the same inputs reliably produced the same outputs because human lives depended on it.
That doesn't mean complex systems never behaved unexpectedly, but the engineering goal was explicit determinism wherever possible: predictable execution, bounded failure modes, reproducible debugging. That tradition carried through operating systems, compilers, finance software, avionics, etc.
What is newer is our comfort with probabilistic or emergent systems, especially in AI/ML. LLMs are deterministic mathematically, but in practice they behave probabilistically from a user perspective, which makes them feel different from classical algorithms.
So I'd frame it less as "determinism is new" and more as "we're now building more systems where strict determinism isn't always the primary goal."
Going back to the original point, getting educated on LLMs will help you demystify some of the non-determinism but as I mentioned in a previous comment, even the people who literally built the LLMs get surprised by the behavior of their own software.
That’s some epic goal post shifting going on there!!
We’re talking about software algorithms. Chemical and biomedical engineering are entirely different fields. As are psychology, gardening, and morris dancing.
Yeah. Which any normal person would take to mean “all technologies in software engineering” because talking about any other unrelated field would just be silly.
We know why they work, but not how. SotA models are an empirical goldmine, we are learning a lot about how information and intelligence organize themselves under various constraints. This is why there are new papers published every single day which further explore the capabilities and inner-workings of these models.
Ok, but the art and science of understanding what we're even looking at is actively being developed. What I said stands, we are still learning the how. Things like circuits, dependencies, grokking, etc.
Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.
If GenAI could only write documentation it would still be a game changer.
But it writes mostly useless documentation, which takes time to read and decipher.
And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it happened recently to a fairly widely used B2B SaaS).
Loop it. Use another agent (one from a different company helps) to review the code and documentation and call out any inconsistencies.
I run a bunch of jobs weekly to review docs for inconsistencies and write a plan to fix them. It still needs humans in the loop if the agents don’t converge after a few turns, but it’s largely automatic (I babysat it for a few months, validating each change).
That might work for hallucinations, but it doesn't work for useless verbosity. And the main issue is that LLMs don't always distinguish useless verbosity from necessary detail, so even when I ask it to cut the verbosity, it removes everything save a few useful comments/docstrings, and some of the comments it removes are ones I deemed useful. In the end I have to do the work of cutting the verbosity manually anyway.
It can generate useful documentation or useless documentation. It doesn't take very long to instruct the LLM to generate the documentation, and then check later whether it matches your understanding of the project. Most real documentation is about as wrong as LLM-generated documentation anyway. Documenting code is a language-to-language translation task that LLMs are designed for.
The problem with documentation that I described wasn’t about the effort of writing it. It was that modern chipsets are trade secrets.
When you bought a computer in the 80s, you’d get a technical manual about the internal workings of the hardware. In some cases even going as far as detailing what the registers did on their graphics chipset or CPU.
GenAI wouldn’t help here for modern hardware, because GenAI doesn’t have access to those specifications. And if it did, then it would already be documented, so we wouldn’t need GenAI to write it ;)
> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.
I agree that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source code (which can help if some parts of the documentation are unclear, but can also be used to make your own modifications if needed).) (In some cases we have some of this already, but not entirely. Not all hardware and software has the problems you list, although they are too common now. Making a better computer will not prevent such problematic things on other computers, and will not entirely prevent such problems on the new computer design either, but it would help a bit, especially if it is actually designed well rather than badly.)
I’ve heard this argument made before and it’s the only side of AI software development that excites me.
Using AI to write yet another run-of-the-mill web service, in the same bloated frameworks and programming languages designed for the lowest common denominator of developers, really doesn’t feel like it’s taking advantage of the leap in capabilities that AI brings.
But using AI to write native applications in low-level languages, built for performance and memory efficiency, does at least feel like we are getting some actual quality-of-life gains in exchange for all those fossil fuels burnt crunching LLM tokens.
> perpetual subscription services for the same software we used to “own”.
In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?
If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.
People have been building alternatives to MS Office, Adobe Creative Suite, and so on for literally decades, and yet they’re still the de facto standard.
Turns out it’s a lot harder to disrupt than it sounds.
It's really hard. But not impossible. Figma managed to. What's different this time around is AI assisted programming means that people can go in and fix bugs, and the interchange becomes the important part.
Figma is another subscription-only service with no native applications.
The closest thing we get to “disruption” these days are web services with complementary Electron apps, which basically just serve the same content as the website while duplicating the memory overhead of running a fresh browser instance.
Dropbox may not be a great example, either. It's storage and bandwidth, and both are expensive, even if the software wasn't being worked on.
But for application software that is, or should be, running locally, I agree. Charge for upgrades, by all means, but not for the privilege of continued use of an old, unmaintained version.
Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.
You’re both right. It just depends on the problems you’re solving and the languages you use.
I find languages like JavaScript promote the idea of “Lego programming”, because you’re encouraged to use a module for everything.
But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature is more likely to increase the odds of unexecutable pseudocode than it is to produce a valid but more esoteric implementation of a problem.
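For anyone unfamiliar with what the temperature knob mechanically does, here's a toy sketch (not any vendor's actual sampling code): logits are divided by the temperature before the softmax, so T > 1 flattens the next-token distribution and gives low-probability tokens a much bigger share.

    package main

    import (
        "fmt"
        "math"
    )

    // softmaxWithTemperature converts raw next-token scores into probabilities.
    // Higher temperature flattens the distribution, so unlikely tokens
    // (including ones that break the syntax) get sampled far more often.
    func softmaxWithTemperature(logits []float64, temperature float64) []float64 {
        probs := make([]float64, len(logits))
        var sum float64
        for i, l := range logits {
            probs[i] = math.Exp(l / temperature)
            sum += probs[i]
        }
        for i := range probs {
            probs[i] /= sum
        }
        return probs
    }

    func main() {
        logits := []float64{4.0, 2.0, 0.5} // toy scores for three candidate tokens
        fmt.Println(softmaxWithTemperature(logits, 0.7)) // sharp: top token dominates
        fmt.Println(softmaxWithTemperature(logits, 1.5)) // flat: tail tokens gain weight
    }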
High temperature seems fine for my coding uses on GPT5.2.
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.
I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
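The loop itself is simple enough to sketch. Everything model-related below is hypothetical (llmPropose stands in for whatever API you actually call); the only real command is `go build`:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // llmPropose is a hypothetical stand-in for your model call: it takes the
    // task plus the previous attempt's compiler output and returns new source.
    func llmPropose(task, previousErrors string) string {
        // ... call your model of choice here ...
        return "package main\n\nfunc main() {}\n"
    }

    func main() {
        task := "write a CLI that prints its arguments"
        errors := ""
        for attempt := 1; attempt <= 5; attempt++ {
            src := llmPropose(task, errors)
            _ = os.WriteFile("main.go", []byte(src), 0o644)
            out, err := exec.Command("go", "build", "./...").CombinedOutput()
            if err == nil {
                fmt.Println("builds cleanly after", attempt, "attempt(s)")
                return
            }
            errors = string(out) // feed compiler output back into the next prompt
        }
        fmt.Println("did not converge; hand back to a human")
    }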
Those silly words only come up in discussions like this. I have never heard them uttered in real life. I don't think my experience is bizarre here - actual usage is what matters in my book.
To be honest, I think the power-ten SI people might have won the war against the power-two people if they'd just chosen a prefix that sounded slightly less ridiculous than "kibibyte".
What the hell is a "kibibyte"? Sounds like a brand of dog food.
I genuinely believe you're right. It comes across like "the people who are right can use the disputed word, and the people who are wrong can use this infantile one".
I don't know what the better alternative would have been, but this certainly wasn't it.
I'd have:
1. defined traditional suffixes and abbreviations to mean powers of two, not ten, aligning with most existing usage, but...
2. deprecated their use, especially in formal settings...
3. defined new spelled-out vocabulary for both pow10 and pow2 units, e.g. in English "two megabytes" becomes "two binary megabytes" or "two decimal megabytes", and...
4. defined new unambiguous abbreviations for both decimal and binary units, e.g. "5MB" (traditional) becomes "5bMB" (simplified, binary) or "5dMB" (simplified, decimal)
This way, most people most of the time could keep using the traditional units and be understood just fine, but in formal contexts in which precision is paramount, you'd have a standard way of spelling out exactly what you meant.
I'd have gone one step further too and stipulated that truth in advertising would require storage makers to use "5dMB" or "5 decimal megabytes" or whatever in advertising and specifications, if that's what they meant. No cheating by using traditional units.
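To put a number on the ambiguity being disambiguated (using the proposed decimal/binary split above):

    package main

    import "fmt"

    func main() {
        decimal := 5 * 1000 * 1000 // "5 decimal megabytes": 5,000,000 bytes
        binary := 5 * 1024 * 1024  // "5 binary megabytes":  5,242,880 bytes
        fmt.Println(binary - decimal) // 242,880 bytes of wiggle room, ~4.6%
    }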
(We could also split bits versus bytes using similar principles, e.g. "bi" vs "by".)
I mean, consider the UK, which still uses pounds, stone, and miles. In contexts where you'd use those units, writing "10KB" or "one megabyte" would be fine too.
Well behaved CLI tools have for years already been changing their UX depending on whether STDOUT is a TTY or a pipe.
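For reference, the check is a one-liner in most languages; here's a sketch in Go using only the standard library:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        fi, err := os.Stdout.Stat()
        isTTY := err == nil && fi.Mode()&os.ModeCharDevice != 0
        if isTTY {
            fmt.Println("stdout is a terminal: colours, progress bars, human-friendly layout")
        } else {
            fmt.Println("stdout is a pipe/file: plain, machine-parseable output")
        }
    }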