Hacker News

People don't realize how much software engineering has improved. I remember when most teams didn't use version control, and if we did have it, it was crappy. Go through the Joel Test [1] and think about what it was like at companies where the answers to most of those questions were "no."

[1] https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...



At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.

Sure, everything seems to have gotten better, and that's why we now need AIs to understand our code bases - the ones we created with our great version control tooling.

Fundamentally we're still monkeys at keyboards; it's just that now there are infinitely many digital monkeys.


Perrow’s book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s, with oil tankers exploding; now you just hear about the odd exceptional event).


> Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around.[1]

This is definitely something that is happening with software systems. The question is: is having an AI that is fundamentally undecipherable in its intention extend these systems a good approach? Or is slowing down and fundamentally trying to understand the systems we have created a better approach?

Has software become safer? Well, planes don't fall from the sky, but the number of zero-day exploits built into our devices has vastly increased. Is this an issue? Does it matter that software is shipped broken, only to be fixed with the next update?

I think it's hard to have the same measure of safety for software. A bridge is safe because it doesn't fall down. Is email safe when there are spam and phishing attacks? Fundamentally, email is a safe technology; it merely allows attacks via phishing. Is that an email safety problem? Probably not, just as someone having a car accident on a bridge is generally not the fault of the bridge.

I think that we don't learn from our mistakes. As developers we tend to gloss over the accidents of our software. When was the last time a developer was sued for shipping broken software? When was the last time an engineer was sued for building a broken bridge? Notice that there is an incentive for an engineer to build better and safer bridges; for developers those incentives don't exist.

[1]: https://en.wikipedia.org/wiki/Normal_Accidents


The other day I was thinking about how the stupid little things in the Javascript ecosystem, where you have to change your configuration file "just because," are a real billion-dollar mistake, and speculating that I could sue some of the developers in small claims court.

Right away I scoffed when I heard people had 20 agents running in parallel because I've been at my share of startups with 20 person teams that tend to break down somewhere between:

- 20 people that get about as much done as an optimal 5 person team with a lot more burnout and backlash

- There is a sprint every two weeks but the product is never done

and people who are running those teams don't know which one they are!

I'm sure there are better ones out there, but even one or two SD north of the mean you find that people are in over their heads. All the ceremony of agile hypnotizes people into thinking they are making progress (we closed tickets!) and have a plan (sprint board!) and know what they are doing (user stories!)

Put on your fieldworker hat and interview the manager about how the team works [1] and the state of the code base, then compare that to the ground truth of the code, and you tend to find the manager's mental model is somewhere between "just plain wrong" and "not even wrong". Teams like that get things done because there are a few members, maybe even dyads and triads, who know what time it is and quietly make sure the things that are important-but-ignored-by-management are taken care of.

Take away those moral subjects and eliminate the filtering mechanisms that make that 20-person manager better than average, and I can't help but think 'gas town' is a joke that isn't even funny. Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.

[1] I'll take even odds he doesn't know how long the build takes!


> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if wasn't for all that weed there wouldn't be any Google today.

I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a Javascript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)

I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?


The most obvious one is this brilliant piece on complexity:

https://steve-yegge.blogspot.com/2009/04/have-you-ever-legal...

It doesn't match OP's description, but it certainly contains talk about his pot use.

There may be others.


I remember thinking of him as a skillful writer and a sometimes incisive thinker, back then. Apparently my taste has significantly improved in the interim; for a piece ostensibly about complexity, this is an embarrassingly superficial analysis from priors that already don't make any sense.

I'm not going to knock a guy today based on an almost twenty-year-old piece, especially on subjects (cannabis legalization, the quality and direction of Obama administration policy initiatives) that were widely misunderstood at the time, including by such luminaries as the Nobel committee. But Yegge really wasn't starting from so strong a position as I had misrecalled. Thanks for the link.


I haven't read it in at least ten years myself - maybe it's not as good as I recall.

I do remember that I appreciated his grasp of the fact that if you aren't deep in the weeds, you really cannot understand just how complex a system really is.

I also appreciated the slow build to the actual point, which I think could help people who wouldn't hear a direct explanation understand what he was getting at.

"'Shit's Easy' syndrome" is real, and I wonder if the prevalence of LLMs doing the scutwork will lead to an entire generation of programmers who suffer from it.


Well, sure. Trying to plan events at incomprehensibly large scale is like that, as the 20th century collectivist states failed largely in consequence of too late discovering. You have to retain a sense of scale in these things, not to say humility. Meanwhile, cannabis legalization in the US proceeds apace as a fifty-state patchwork, with simple possession still a major felony some places, while commercial distribution in others is a wholly legitimate storefront affair, and someone will eventually reap a small political windfall through federal recognition of the situation in being. No one is really planning anything. It is the assumption someone must that I'm criticizing, because for all the decades of planning indulged by the interminable old-times legalization advocates, their desideratum in practice looks nothing like they ever came close to seriously imagining or predicting.

To his dubious credit, I think Yegge has in the interim learned this lesson, possibly at the cost of some others. Looking at his "Gas Town" makes the hair stand up on the back of my neck, not least for that I once had ferrets and I know what chaos they embody and wreak (and how f—ing expensive they are!); I'm sure he was intentional in his choice of the metaphor, but he's always been one of those for whom consensus reality and good sense are likewise mostly optional. So in entire fairness I have to admit I really can't see any just criticism that he's planning too much these days. But the value in such a swing from one extreme to another, versus something more closely resembling moderation, charitably has yet to be demonstrated.

(As a programmer of both fintech and actual finance experience, btw, it's very comical to me to see the Big Design Up Front approach being applied in this way to this specific example, precisely because it so little resembles how anyone genuinely approaching the task does so. It is very much how I would expect the Google of 2009 to look at things. It isn't that much like how a bank or a startup does. But I said I wasn't going to beat up on old work, and I can't pretend I had so broad a perspective myself so long ago.)


Good points.

I was similarly appalled and shocked at Gas Town. Maybe something like it is the future, but I really didn't expect Yegge to be a genAI booster.

If Gas Town has "the Quality Without a Name," I will eat my hat.

https://sites.google.com/site/steveyegge2/tour-de-babel


(Thank you for a pleasant and thought-provoking conversation, by the way! All hopes for a favorable Friday and weekend.)


Likewise!

Oh, God, spare me from the architect who must be sure he is seen to be one with the Tao. Its name is 无为 and Emacs, which I have used exclusively since 2010, does not "have" it, although a given human Emacs user may. (But see previously my comments with respect to js2-mode; Yegge's enthusiasm of the moment notwithstanding, he was at least not then the most obviously reliable judge.)

It isn't something that can exist in the absence of consciousness, because only in the presence of consciousness can it not exist. I grant some computer programs sensu lato may conceivably experience qualia, but even today would be taken sorely aback to discover Emacs among them.


I did not realize that Yegge was referencing the Tao with that, though it certainly had some of that aesthetic flavor to my untutored Western ears.

I can roughly intuit how it might be something which can only be relevant in the presence of consciousness, despite my near-total lack of knowledge of any religious tradition outside the Western ones.

I agree that conscious programs in some sense are conceivable, but I'm skeptical of it myself especially in comprehensible programs, however large - something self-documenting and readable is nearly the opposite of the human brain, which is the only thing we really have strong reason to believe is conscious (by way of each possessing one).


Properly, with "the Quality without a Name" Yegge was referencing Christopher Alexander's The Timeless Way of Building (1979) wherein that phrase is - one would ordinarily say 'defined,' but in this case the author strove with what I consider deeply tasteless artifice to inflict a mostly ersatz epiphany. (It is an extremely 2009 Google or "Chocolate Factory" kind of book.) It was Alexander whom I excoriated as the architect who etc., since he was that. (His work on the U of O campus gets too much credit; Eugene could not but have been lovely, anyway, and it was not the town's fault I wilt for want of full sun.) In any case to construct the idea as "religious" obscures a trivially essential point, in that to do so is like saying you're worried the Name might get mad if you pick up a hammer. Oh, if with a heart of hate or concupiscence then sure, that's a problem, but Jimmy Carter built houses with Habitat for about a million years and I know flights of angels sang that man to his rest. The "Tao," if we like, is a hammer. Anyone is free to believe in it or not. It drives nails just the same either way. 'The rest is commentary.' Don't worry about it too much.

I'm not actually much of a mystic, though some who've known me might disagree, especially after that last paragraph! My concept of consciousness is broadly both mechanistic and scalar, which having arisen is reliably conserved because abstraction, reflection, and introspection are behaviors whose adaptive benefit easily compounds on itself. (The singularitarians aren't wrong that getting smarter makes you better at getting smarter; they just have no idea what "smart" means.) I am also wholly unapologetic about the wholly intuitive and qualitative nature of that understanding, not least because to be both at once places me serenely beyond the moist and smelly grasp of rote scientism. For example, my friends who've been wasps were not less conscious in my estimation than myself and my friends who are human, but I would say they perhaps reflect and ramify less deeply. One might resort for a mental model to the concept of a space-filling or Peano curve [1]: we iterate many orders more deeply than even the most capacious of social wasps, to be sure! But I have seen a Polistes exclamans wasp comfort her anxious and frightened sister with a hug in my kitchen (2), and I've seen them learn me as the final waypoint of what, given the unusually capable aerialism and extensive navigational skills of the average Polistes metricus forager, could well have been a longer and more complex daily commute than mine. (And I never have to deal with birds trying to eat me!)

So these are not at all stupid or robotic animals, the social wasps. As terrestrial predators and foragers who hunt energetically expensive prey by sight, they experience many of the same selection pressures as we do toward episodic memory, constructive theory of mind, kinship recognition by sight rather than odor (and thus at much greater distance,) and other such relatively complex cognitive skills. Also, I have watched a wasp sleep, and seen the rate of her breathing oscillate in a fairly close parallel to the periodicity one sees in the stages of mammalian sleep. I believe they may experience something very like the voluntary paralysis of our REM sleep. I believe there is no reason for such an inhibitory circuit to develop and be conserved, other than the reason we have it. In short I believe they may very well dream, in some way meaningfully like we do, and again for the same reasons. (I incline, in my incompetently autodidactic manner, toward the "integrated information theory" expounded by Hoel, at least inasmuch as I borrow the need for balancing surprise minimization versus overfitting avoidance, but I'm not really dogmatic about it.) And finally, ineluctably, I defy anyone anywhere to show me that of any kind which dreams yet is not conscious.

These are not only (or not all only) individual observations via personal correspondence, either; I'm happy to cite and discuss at length the specific details of the ethology and neurology underlying such complex behavior, which I may not be the first to observe is strongly suggestive of social wasps exercising a constructive theory of mind for a species deeply dissimilar to themselves, ie we H. sap. A good lay overview, written much from a love which I recognize, is Seirian Sumner's 2022 Endless Forms. I forget offhand if she is as explicit as I'm being, but that's okay; no one of whom I'm aware is really making the kind of (what is arguably a) leap that I am, to treat consciousness in this way; an unkind critic might accuse me of half-assing my way to some half-baked animism, through a daytime-TV pop science conception of consciousness as waves hands I dunno...holographic? Luckily, with no costly postnominals to defend nor student loans to defray, I suppose I'm free to say more or less what I like. (Such as that, if Sumner leaves you wanting more, a good next step into entomology proper - and one of my own first sources! - is 2018's The Social Biology of Wasps.)

Even the largest and fanciest frontier model (properly the vast infrastructure which serves it, which may to some useful ends be considered as a kind of organism) is many orders of magnitude less complex in both "neurome" and connectome than even the most basal of social wasps, and there is no real cause to expect this will change in our lifetime. (Wasps are not getting simpler, and God as yet still stubbornly refuses to be invented by Sam Altman.) A human's brain of course ramifies as many orders further still, but no matter; if there was only ever one example of "Shit's Easy syndrome," I must surely be making fun of it now, in the idea that our programs express our minds more magically than any other form of human mechanism or artifice, so much so as to encapsulate much less surpass.

If a conscious computer system ever arises - and note by that 'in the broad sense,' I include eg the idea of the entire planetary network considered as "a" consciousness, so we're definitely not aiming for any immediate or concrete mapping for that intentionally nebulous concept - then I confide there will also arise humans able to recognize it as like themselves, and vice versa. I would not expect them to find it more comprehensible than they find themselves, or for that matter than it would likely find itself or them. Good grief, who ever does in this life?

(And at no doubt welcome last, thanks once more for the nudge to further work in clarifying my thesis and its argument, perhaps not without interest. I regret if I've given the impression of making light of your faith despite that I do not share it. Oh, I have my differences with Them Upstairs, and we'll work those out by and by - but that is no fault of yours so far as I know, and I hope I haven't made it too much your problem.)

[1] https://en.wikipedia.org/wiki/Space-filling_curve

(2) I was sheltering them from a cold snap, an experience more or less semiotically indistinguishable for them from an alien abduction, although I of course had the grace as a host not to stress them unnecessarily. We all had a hell of a fight on our hands anyway, the night the local pavement ant supercolony caught wind and mounted an invasion, but the next morning was finally warm and mild enough for them to disperse. I suppose things turned out well enough in their eyes, since the family stuck around and we were porch neighbors for a few years after that.


> planes don’t fall from the sky

Boeing would like a word (; https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...


> that's why we now need AIs to understand our code bases

I don't need an AI to understand my code base, and neither do you. You're smarter than you give yourself credit for.


The better processes and tools made larger projects possible.


Version control is useful but it has nothing to do with software engineering per se. Most software development is craft work which doesn't meet the definition of engineering (and that's usually fine). Conversely, it's possible to do real software engineering without having a modern version control system.


And maybe it's dangerous for one to think they're doing engineering when in reality they're doing craft work.


... but it helps tremendously to have a solid computer engineering background, since you are (finding and) transforming hard facts of reality into working code. I'd say it's a mix of both: you can't just vibecode (or, before current times, hack together) a properly beautiful design (whatever that means in a given instance).


What is your definition of engineering, that "craft work" doesn't meet?


The answer to those questions is _still_ 'no' at a lot of companies.

> People don't realize how much software engineering has improved.

It has, but we have gotten there by stacking turtles, by building so many layers of abstraction that things no longer make sense.

Think about this: hardware -> hypervisor -> VM -> container -> Python/Node/Ruby runtime, all to compile it back down to bytecode to run on a CPU.
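That last hop is easy to see from inside Python itself: the interpreter compiles your source down to bytecode before anything runs, on top of all the layers below it. A minimal sketch using only the standard library's `dis` module (the function name here is illustrative):

```python
import dis

def add(a, b):
    return a + b

# Every Python function carries compiled bytecode in __code__;
# dis renders it as a readable list of opcodes.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)
```

The exact opcode names vary by CPython version (e.g. BINARY_ADD vs. BINARY_OP), which is itself a reminder that even this bottom layer keeps shifting under us.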

Some layers exist because of the push/pull between systems being single user (PC) and multi user (mainframe). We exacerbated the problem when "installable software" became a "hard problem" and wanted to mix in "isolation".

And most of that software is written on another pile of abstractions. Most codebases have disgustingly large dependency trees. People keep talking about how "no one is reviewing all this AI-generated code"... Well, the majority of devs sure as shit aren't reviewing that dependency tree... Just yesterday there was yet another "supply chain attack".
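To put a rough number on that unreviewed surface, you can ask a Python environment what it is carrying; a hedged sketch using only the standard library's `importlib.metadata`:

```python
from importlib.metadata import distributions

# Tally every distribution installed in this environment and the
# requirements each one declares - the tree nobody is reviewing.
installed = {dist.metadata["Name"]: (dist.requires or []) for dist in distributions()}
declared = sum(len(reqs) for reqs in installed.values())
print(f"{len(installed)} packages declaring {declared} requirements")
```

Even a modest project's environment will report dozens of packages, each pulling in requirements that almost no one reads before the next `pip install`.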

How do you protect yourself from such a thing? Stack on more software. You can't really use "sub-repositories/modules" in git; it was never built that way because Linus didn't need that. The rest of us really do... so we add something like Artifactory to protect us from the massive pile of stuff that you're dependent on but NOT looking at. It's all just more turtles on more piles.

Lots of corporate devs I know are really bad at reviewing code (open source much less so). The PR code review process in many orgs is to find the person who rubber-stamps and avoid the people who only bikeshed. I suspect it's because we have spent the last 20 years on the leetcode interview, where memorizing algorithms and answering brain teasers was the filter - not reading, reviewing, debugging, and stepping through code... Our entire industry is "what is the new thing," "next framework" pilled because of this.

You are right that it got better, but we got there by doing all the wrong things, and we're going to have to rip a lot of things apart and "do better".




