New technology does not eliminate old technology or craftsmanship. It just shifts who uses it and what for.
- Power tools didn't annihilate the craftsmanship of hand-tool woodworking. Fine woodworkers are still around making money with hand tools, and so are hobbyists. But contractors universally switched to power tools because they help them make more money with less labor, cost, and time.
- A friend of mine still knits on a loom because she likes using a loom. Some people knit by hand because they like that better. Neither of them stopped just because of large automated looms.
- Blacksmiths still exist and make amazing metal crafts. That doesn't mean there isn't a huge market for machine cast or forged metal parts.
In the future there'll just be the "IDE people" and the "Agent Prompt people", both plugging away at whatever they do.
You give examples where crafts based on pre-industrial technology still exist. You're right, but you're proving the GP's point.
200 years ago, being a blacksmith was a viable career path. Now it's not. The use of hand tools, hand knitting, and hand forging is limited to niche, exotic, or hobbyist areas. The same could be said of making clothes by hand or developing film photographs. Coding will be relegated to the same purgatory: not completely forgotten, but considered an obsolete eccentricity. Effectively all software will be made by AI. Students will not study coding, the knowledge of our generation will be lost.
I know people who make their living doing those niche things. So what if they're niche? Enterprise Software Architect is niche. Aerospace Engineer is niche. Hell, finding somebody under the age of 40 who can write Assembly is niche.
Everything gets worse over time. Even before AI, I was constantly complaining about how technology is enshittifying. I'm sure my parents complained about things getting worse, and their parents before them. Yet here we are, the peak achievement of living beings on this planet, making do. I think we will be OK without typing by hand a thing that didn't even exist 70 years ago.
> So what if they're niche? Enterprise Software Architect is niche.
It's a question of supply and demand in the labor market. Right now, we are paid well and afforded respect because demand for our service is higher than the supply. When anyone can use AI to do our job, the supply will exceed the demand.
There are blacksmiths still working today. Their work is niche. And although blacksmithing today requires no less skill than it did 200 years ago, there is significantly less demand, and very few can make a living at it.
> I doubt hobbyists would describe their hobby as purgatory.
Programmers have become accustomed to a lot of cultural and financial respect for their work. That's about to disappear. How do you think radio actors felt when they were displaced by movies? Or silent film actors when they were displaced by talkies?
> I doubt the laborer would describe their toil as "craft".
Intellectual labor is labor. I'm a laborer in programming and I definitely consider it a craft. I think a lot of people here at HN do.
And they were and are of course right to feel those feelings, but it doesn't change the fact that the world is changing. Rarely do large changes benefit everyone in the world.
> And they were and are of course right to feel those feelings, but it doesn't change the fact that the world is changing. Rarely do large changes benefit everyone in the world.
I'm not sure who you are arguing against. No one here said that the world isn't changing. But it seems to me that the people who are disadvantaged by AI, which is potentially everyone who doesn't own a data center, should make an effort to ensure their continued survival, instead of merely becoming serfs to the ruling oligarchs.
I don't think that's a good comparison, though. We shouldn't compare AI/software to handcrafting one item; we should compare it to handcrafting the machine that crafts the items.
If I knit a hat, I can sell it once, but if I make a game, I can run or sell it repeatedly.
However, I still agree with the outcome: if AI becomes even better and is economically viable, the number of people handcrafting software will drop drastically.
> Effectively all software will be made by AI. Students will not study coding, the knowledge of our generation will be lost.
Given the echo chamber of HN when it comes to AI, that certainly seems inevitable. The question is: who would work on novel things or further AI model improvements if knowledge of writing software by hand disappears?
A select few, just like some mechatronics engineers get to develop new factory robots, and a few lucky ones stay around to do the manual tasks they can still perform or to press the big red button when something goes wrong.
The examples given are using tools to do well-defined, repeatable processes. So far, despite many attempts by upper management to make software the same way, it hasn't happened, and AI doesn't appear to be any different.
I don't see a huge difference between people writing in a high-level language and people writing complex prompts.
As someone who has been coding since 1986, I certainly see it in the time it takes to get something done.
Using AI agents isn't like coding in a home-made Common Lisp macro DSL; it's me doing in one hour something that could have taken a couple of days, even if I have to fix some slop along the way.
Thus I can already see the trend that started with MACH architecture and SaaS products going even further, decreasing the team sizes required for project delivery.
Projects where I used to be part of a 10-person team are now being sized for 5 people or fewer.
Sorta? I mean, I want my problem fixed, regardless of whether it's a person doing the fixing or not. Having a person listen to me complain about my problems might soothe my conscience, but when I can't pay my bill, or want to know why it was so high, having that handled by a system that has the context of my problem and is empowered to fix it, rather than talking to a brick wall? I wouldn't say I'm totally fine with it, but at the end of the day, if my problem or query gets solved, even a weird one, I can't say I really needed the voice on the other end of the phone to come from a human. If a company's business model isn't sustainable without AI agents, that's not really my problem, but if I'm using their product, presumably I don't want it to go away.
Isn't the real scary thing here that the AI agent is empowered to control your life?
You're imagining that if you get the answer you want from the AI you hang up the phone, and that if you don't, a human will pick up who has the political power and will to overrule the AI. I think what's more realistic is the way things have played out here: nobody took or had any responsibility because "they made the AI responsible," and second-guessing that choice isn't second-guessing the AI, it's second-guessing the human leaders who decreed that human support had no value. The result is that the humans would let the AI go about as far as setting fire to the building before some kind of human element imbued with any real accountability steps in. The evidence of this is all the ignored requests presented here.
Function names are limited. E.g. you can't provide a circuit diagram of what you're controlling in a function name. But you can do that in a comment (either with ASCII art or an image link).
In addition to that, if the Why ever changes (maybe the issue was in an external dependency that finally got patched), you'd have to update the name or else leave it incorrect. Mildly annoying if just in one codebase, but a needlessly breaking change if that function is exported.
Agreed. So why not stuff as much as possible into the name before resorting to a comment? Prose looks ugly as a name but the utility is not diminished.
That embeds the "why" into your API. If it ever changes, the function no longer serves as an abstraction over that underlying reason, and changing the function name breaks your API.
That's not to say embed nothing into the names. I'm quite fond of the "Long Names are Long" blog post[1]: names need to clearly refer to what the named thing does, and be precise enough to exclude stuff it doesn't do. Names can certainly get too short, e.g. the C "sprint fast" function `sprintf` is probably too short to be easily understood.
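To make the trade-off concrete, here's a tiny hypothetical sketch (the function names and linked issue are invented) of keeping the "what" in the name and the "why" in a comment:

```python
# Stuffing the "why" into the name couples every caller to a reason that may go away:
def flush_cache_because_vendor_sdk_leaks_handles_after_reconnect(conn):
    ...

# Name says "what"; the comment carries the "why" and can change without
# breaking anyone who calls the function.
def flush_cache(conn):
    # Why: the vendor SDK (< 2.3, hypothetical) leaks file handles after a
    # reconnect, so we drop cached connections instead of reusing them.
    # See https://example.com/vendor-issue-1234 (hypothetical link).
    ...
```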
> React performance concerns in the real world are typically measured in, at worst, hundreds of milliseconds.
Many years ago I worked at a wonderful company that made the terrible decision to rebuild the frontend in React. We had a performance dashboard permanently displayed on a TV in the office, prominently showing the p99 time-to-interactive of our home page. It sat at TWENTY SECONDS for at least 2 years. No progress was ever made, to the best of my knowledge.
This was an e-commerce site, more or less. As per the author's reasoning, it absolutely should not have been an SPA.
The only client-side component of interest would be filtering and sorting (although the server could render the new state too). I would choose traditional server-side rendering plus a little bit of client-side code here.
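For what it's worth, a minimal sketch of that approach (Flask, with invented product data): the filter and sort state live in the URL, and the server just renders the new page.

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Hypothetical catalog data standing in for a real database query.
PRODUCTS = [
    {"name": "Kettle", "price": 35},
    {"name": "Toaster", "price": 25},
    {"name": "Blender", "price": 60},
]

PAGE = """
<ul>
{% for p in products %}
  <li>{{ p.name }}: ${{ p.price }}</li>
{% endfor %}
</ul>
"""

@app.route("/products")
def products():
    # Filtering and sorting are plain query parameters, so every state is a URL
    # the server can render directly; a sprinkle of client-side JS is optional.
    max_price = request.args.get("max_price", type=int)
    sort_key = request.args.get("sort", "name")
    if sort_key not in ("name", "price"):
        sort_key = "name"
    items = [p for p in PRODUCTS if max_price is None or p["price"] <= max_price]
    items.sort(key=lambda p: p[sort_key])
    return render_template_string(PAGE, products=items)
```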
It is often the case that the nifty Python thing you want to pass around uses one or more nifty Python libraries that have no C/C++/Rust/Golang equivalent (or no obvious equivalent), and so rewriting it becomes a herculean task.
They’re not wrong. If you’ve ever spent meaningful time administering both, you’ll know that Postgres takes far more hands-on work to keep it going.
To be clear, I like both. Postgres has a lot more features, and is far more extensible. But there’s no getting around the fact that its MVCC implementation means that at scale, you have to worry about things that simply do not exist for MySQL: vacuuming, txid wraparound, etc.
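For example, this is the sort of check that has no MySQL counterpart at all; a rough sketch (psycopg2, hypothetical connection string) of watching transaction-ID age so you stay well clear of wraparound:

```python
import psycopg2

conn = psycopg2.connect("dbname=appdb")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # age(datfrozenxid) shows how far each database is from needing an
    # aggressive (anti-wraparound) vacuum.
    cur.execute("""
        SELECT datname, age(datfrozenxid) AS xid_age
        FROM pg_database
        ORDER BY xid_age DESC;
    """)
    for datname, xid_age in cur.fetchall():
        # Autovacuum forces a freeze around 200M by default; the hard stop is ~2^31.
        print(f"{datname}: frozen-xid age = {xid_age:,}")
```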
I doubt it was true in 2012, because sysadmins would be the ones trying to make it run reliably, including things like replication, upgrades, etc.
Pretty sure that even in 2012 MySQL had very easy-to-use replication, which Postgres didn't have well into the late 2010s (does it today? It's been a while since I've run any databases).
> I doubt it was true in 2012, because sysadmins would be the ones trying to make it run reliably, including things like replication, upgrades, etc.
Possibly I got it wrong and switched around which was easier on the devs and which was easier on the sysads?
In my defence, ISTR that when talking to sysads about MySQL vs PostgreSQL, they preferred the latter due to having less to worry about once deployed (MySQL would apparently magically lose data sometimes).
MyISAM in the olden days could/would magically lose data. InnoDB has been the de facto standard for a while and I haven't seen data loss attributed to it.
In 2012 MySQL had several flavors of replications, each with its own very serious pitfalls that could introduce corruption or loss of data. I saw enough of MySQL replication issues in those days that I wouldn't want to use it.
But sure, it was easy to get a proof of concept working. When you tried to break it by turning off the network and/or machines, though, shit broke down in ways that were not recoverable. I'm guessing most people who set up MySQL replication didn't actually verify that it worked well when the SHTF.
> pitfalls that could introduce corruption or loss of data
Sometimes, repairing broken data is easier than, say, upgrading a goddamn hot DB.
MVCC is overrated. Not every row in a busy MySQL table is your transactional wallet balance. But to upgrade a DB you have to deal with every field, every row, every table, and the data keeps changing, which is a real headache.
Fixing a range of broken data, however, can be done by a junior developer. If you rely on the RDBMS as a single source of truth, you are probably fucked anyway.
My experience has been exactly the opposite. The ability to do vacuums is good. MySQL doesn't free up space taken by deleted rows. The only option to free up the space is to mysqldump the db and load it again. Not practical in most situations.
VACUUM rarely reclaims space from the OS's perspective, if that's what you meant. It can in certain circumstances, but they're rare. VACUUM FULL is the equivalent of OPTIMIZE TABLE: both lock the table to do a full rewrite, and optimally bin-pack it to the extent possible.
EDIT: my mistake, OPTIMIZE TABLE is an online DDL operation. I've been burned in the past by foreign key constraint metadata locks essentially turning it into a blocking operation.
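If it helps, here's a minimal sketch (psycopg2 and mysql-connector-python, hypothetical `orders` table) of the two reclaim operations being compared:

```python
import psycopg2
import mysql.connector

# Postgres: VACUUM FULL rewrites and repacks the table under an exclusive lock.
pg = psycopg2.connect("dbname=appdb")  # hypothetical DSN
pg.autocommit = True                   # VACUUM can't run inside a transaction block
with pg.cursor() as cur:
    cur.execute("VACUUM FULL orders;")

# MySQL/InnoDB: OPTIMIZE TABLE maps to an online ALTER TABLE ... FORCE rebuild;
# writes continue except for brief metadata locks at the start and end.
my = mysql.connector.connect(database="appdb")  # hypothetical connection
cur = my.cursor()
cur.execute("OPTIMIZE TABLE orders;")
print(cur.fetchall())                  # OPTIMIZE returns a small status result set
```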
That helps a lot, thanks. I'll summarize it quickly for those who come later: MySQL (InnoDB really) and Postgres both use MVCC, so they write a new row version on update. InnoDB, however, also writes a record marking the old version for deletion.
To do a cleanup, InnoDB uses the records it kept to delete old data, while Postgres must do a scan. So InnoDB pays a record-keeping price as part of the update that makes it easier to clear data, while Postgres pays that price later through occasional scanning.
I don't know how VACUUM works, so I couldn't tell you about the differences.
OPTIMIZE TABLE works almost exclusively via online DDL. There's only a brief table lock held during the table metadata operations, and I haven't found that to be a problem in practice. (https://dev.mysql.com/doc/refman/8.4/en/optimize-table.html#...)
Not in around 15 years. You're thinking of when MyISAM was the default storage engine for MySQL. It has been InnoDB for over a decade. InnoDB is very reliable - I've never had a single data loss incident in all that time, and I've managed some very large (PB-scale) and active databases.
Postgres is definitely more difficult to administer.
MySQL used to have horrible and very unsafe defaults for new installations that persisted well after the introduction of InnoDB. Those went unfixed for a very long time.
I recall this being the case A LOOOONG time ago but I haven't heard of, read about, been warned to look out for or personally seen such a thing in forever. Have you?
* I'm running a lot of MySQL stuff and such a topic might be of interest to me
Yes, it is messy when you want your MySQL databases to be mission-critical in production, e.g. handling a large amount of customer data. Historically, MySQL's high-availability architecture has had a lot of design and implementation issues because it was an afterthought. Dealing with large amounts of critical data means you need it to be performant, reliable, and available at the same time, which is hard and requires you to deal with caching, sharding, replication, network issues, zone/resource planning, failovers, leader elections and semi-sync bugs, corrupted logs, manually fixing bad queries that killed the database, data migration, version upgrades, etc. There is a reason why big corps like Google/Meta have dedicated teams of experts (like the people who actually wrote the HA features) to maintain their mission-critical MySQL deployments.
From what I can tell, MySQL is supposed to be safe since 2018 if you have no data from before 2010.
The fact that you still can't use DDL in transactions makes life exceedingly painful, but it's technically safe if you write your migration code carefully enough.
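A small sketch (psycopg2, hypothetical `orders` table) of why that matters: in Postgres the whole migration either lands or rolls back, whereas MySQL implicitly commits after each DDL statement, so a failed migration can be left half-applied.

```python
import psycopg2

conn = psycopg2.connect("dbname=appdb")  # hypothetical DSN
try:
    with conn:  # one transaction wrapping the whole migration
        with conn.cursor() as cur:
            cur.execute("ALTER TABLE orders ADD COLUMN shipped_at timestamptz;")
            cur.execute("CREATE INDEX orders_shipped_idx ON orders (shipped_at);")
            # If this (or anything above) fails, *both* DDL statements roll back.
            cur.execute("UPDATE orders SET shipped_at = now() WHERE status = 'shipped';")
except psycopg2.Error as exc:
    print(f"migration rolled back cleanly: {exc}")
```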
Some places still have columns declared as utf8 instead of utf8mb4, and there's a special place in hell for the authors of MySQL's general clusterfuck regarding encodings. It was all nice and great if you didn't care about anything other than latin1 or ASCII; go outside that before the utf8 option existed and it was a horror that even experienced operators managed to fuck up (I have a badge from a Google conference in 2017 with the nicely visible effect of "we mixed up one of the three separate encoding settings in MySQL and now you have mojibake on your badge").
And then there's utf8 not actually being UTF-8, which can result in a total lockup of a table if someone inputs a character that does not fit in UCS-2, and now you need to recover the database from backup and preferably convert all instances of utf8 to utf8mb4, because fuck you, that's why.
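The conversion itself is a one-liner per table, though getting there is the painful part; a rough sketch (mysql-connector-python, hypothetical `comments` table):

```python
import mysql.connector

conn = mysql.connector.connect(database="appdb")  # hypothetical connection
cur = conn.cursor()
# MySQL's "utf8" is really the 3-byte utf8mb3, so 4-byte characters (emoji,
# supplementary-plane CJK) need the column converted to utf8mb4.
cur.execute(
    "ALTER TABLE comments "
    "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
)
# No explicit commit needed: DDL commits implicitly in MySQL
# (see the transactional-DDL complaint above).
```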
In fairness, reasoning about collations is like peering into the abyss. I get why they’re required to have so many levels of detail, and the Unicode Consortium has done a fantastic job, but to say they’re complicated is putting it mildly.
Oracle also didn't support a Boolean data type for a long time, and had a 20-some-odd-year public thread arguing that no one needed one (https://asktom.oracle.com/ords/f?p=100:11:0::::P11_QUESTION_...). They finally added it in Oracle 23, which is nice, but I wouldn't consider sharing a gap with Oracle to be good company.
Not having a boolean data type is IMHO just an annoyance, not comparable to the lack of transactional DDL.
But to the point: people often use this to claim that MySQL is a toy database, not usable for real-world production use. I use Oracle as a counterpoint; it also has a lot of warts but is pretty much the archetype of an enterprise-grade DB engine.
Early MySQL versions made egregious design choices like quietly ignoring missing foreign keys and enum typos, truncating long strings, and randomly choosing rows from groups.
Yeah, it was bad. What kills me is SQLite has its own absurd set of gotchas [0] yet is seen as amazing and wonderful by devs. PKs can have NULLs? Sure! Strings can have \0 in the middle of them? Why not? FKs aren’t enforced by default? Yeah, who needs referential integrity, anyway?
My only conclusion is that the majority of devs don’t actually read documentation, and rely purely on the last blog post they read to influence their infrastructure decisions.
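The foreign-key one is easy to demonstrate with nothing but the standard library; enforcement is off unless you opt in per connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (
        id INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(id)
    );
""")

# Default behaviour: this orphaned row is accepted silently.
conn.execute("INSERT INTO books (author_id) VALUES (999);")

# Opt in, and the same insert is rejected as you'd expect.
conn.execute("PRAGMA foreign_keys = ON;")
try:
    conn.execute("INSERT INTO books (author_id) VALUES (999);")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # FOREIGN KEY constraint failed
```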
That change from LGPL to GPL affected only the client library (the server was always GPL + commercial), and the MySQL company reacted relatively quickly with a FOSS exception to the GPL and by providing a reimplementation of the client library under the PHP license (mysqlnd) to serve that market.
(I joined MySQL shortly after that mess, before the Sun acquisition)
They also didn't like updating software: too likely that an update to PHP or MySQL or something would break some bad script written by a customer, who'd then complain to the host.
I am a database specialist and have worn the DBA hat for many years. I have run MySQL and Postgres in production, both self-hosted and using managed services. Postgres wins on every single dimension that matters, every time. Yes, MySQL is easier to set up for non-experts. That counts for nothing.
If you are sticking up for MySQL in this thread... I just don't even know, man.
Right now I rely on TabsOutliner when using Chrome (which I only use for work). It lets me keep 400+ tabs open and stay sane. I like it so much that I've paid for it 3 times, and would have paid a 4th time but it seems you can't anymore, and I fear it's abandoned.
In any case, this is how I work. I use browser tabs as a kind of short- to medium-term memory. I open a bunch of things, and keep them open for as long as I might plausibly need them. To me this is just normal. I don't know how anyone lives with only 10 or 20 tabs open, or 50 tabs in a single window. How could you remember anything? But without TabsOutliner or something like it, this becomes a sprawling mess, because the browser gives you no native means to search it or "zoom out".
Unfortunately TabsOutliner isn't available for Firefox, which I use when I have a choice. So seeing SavaOS promote Chrome... I lose a little interest right away. If it doesn't work in Firefox it's not worth getting excited about, because Chrome as a piece of software treats me like an enemy and I don't like that. So: support Firefox!
That said, if SavaOS gives me the capability to organize my tabs, maybe treat them like files I can put in directories etc etc, that sounds awesome and I want to try it. At the very least maybe it's better than TabsOutliner.
I completely understand what you mean regarding Firefox. While everything already works fine there, the extension isn't available yet. We had to make a decision regarding the tab-management feature (tab grouping), which posed some challenges for Firefox.
That said, Firefox extension support is on the way! Even if we start with a stripped-down version focused on 'bookmarking' for now. We can’t be a privacy-centric company without supporting Firefox, that’s for sure.