> we have never before applied a killswitch to a rule with an action of “execute”.
I was surprised that a rules-based system was not tested completely, perhaps because the Lua code is legacy relative to the newer Rust implementation?
It tracks with what I've seen elsewhere: quality engineering can't keep up with production engineering. It's just that I think of CloudFlare as an infrastructure place, where that shouldn't be true.
I had a manager who came from defense electronics in the 1980s. He said that in that context, the quality engineering team was always in charge, and always more skilled. For him, software is backwards.
In the post they described observing errors in their testing env, but deciding to ignore them because they were rolling out a security fix. I am sure there is more nuance to this, but I don't know whether that makes it better or worse.
"Kudos"? This is like the South Park episode in which the oil company guy just excuses himself while the company just continues to fuck up over and over again.
There's nothing to praise; this shouldn't happen twice in a month. It's inexcusable.
We still have two holidays and associated vacations and vacation brain to go. And then the January hangover.
Every company that has ignored my following advice has experienced a day-for-day slip in first-quarter scheduling. And that advice is: not much work gets done between Dec 15 and Jan 15. You can rely on a week's worth; more than that is optimistic. People are taking it easy, and they need to verify things with someone who is on vacation, so they are blocked. And when that person gets back, it's two days until their own vacation, so it's a crap shoot.
NB: there’s work happening on Jan 10, for certain, but it’s not getting finished until the 15th. People are often still cleaning up after bad decisions they made during the holidays and the subsequent hangover.
This is funny, considering that someone who worked in the defense industry (guided missile systems) found a memory leak in one of their products at the time. They told him that they knew about it, but that it was timed just right for the range the system would be used at, so it didn't matter.
It tracks with my experience in software quality engineering. Asked to find problems with something already working well in the field. Dutifully find bugs/etc. Get told that it's working though so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.
It is not dysfunctional to ignore unreachable "bugs". A memory leak in a missile that will never manifest, because the missile explodes long before that much time has passed, is not a bug.
It's a debt, though, because people will forget it's there, and then at some point someone changes a counter from milliseconds to microseconds and the issue hits 1000 times sooner (sketched below).
It's never right to leave structural issues even if "they don't happen under normal conditions".
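A minimal sketch of that milliseconds-to-microseconds trap, with made-up numbers and names (this isn't from any real system):

    // Hypothetical leak budget that silently assumes one tick == 1 ms.
    public class LeakBudget {
        static final long LEAK_PER_TICK_BYTES = 64;   // leaked on every control-loop tick
        static final long FLIGHT_SECONDS = 300;
        static final long TICKS_PER_SECOND = 1_000;   // ms ticks; a "harmless" refactor to us ticks makes this 1_000_000

        public static void main(String[] args) {
            long worstCaseBytes = LEAK_PER_TICK_BYTES * FLIGHT_SECONDS * TICKS_PER_SECOND;
            // ~19 MB with millisecond ticks -- fine. With microsecond ticks the same
            // "ignorable" leak needs ~19 GB before the flight ends.
            System.out.println("worst-case leak: " + worstCaseBytes + " bytes");
        }
    }

Nothing changes except the tick definition, which is exactly the kind of assumption that lives only in someone's head or in stale docs.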
In hard real-time software, you have a performance budget; otherwise the missile fails.
It might be more maintainable to have leaks instead of elaborate destruction routines, because then you only have to consider the costs of allocations.
Java has a no-op garbage collector (Epsilon GC) for the same reason. If your financial application really needs good performance at any cost and you don't want to rewrite it, you can throw money at the problem to make it go away.
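If memory serves, the opt-in looks roughly like this on JDK 11+ (the heap size and main class here are placeholders, not a recommendation):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC \
         -Xmx64g -XX:+HeapDumpOnOutOfMemoryError MyLowLatencyApp

Epsilon never reclaims anything, so when the heap runs out the JVM simply dies. The bet is the same as the missile's: the run ends before the allocations catch up with you.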
I don't think this argument makes sense. You wouldn't provision a 100GB server for a service where 1GB would do, just in case unexpected conditions come up. If the requirements change, then the setup can change; doing it just because is wasteful. "What if we forget" is not a valid argument for over-engineering and over-provisioning.
If a fix is relatively low cost and improves the software in a way that makes it easier to modify in the future, it makes it easier to change the requirements. In aggregate these pay off.
If a missile passes the long hurdles and hoops built into modern Defence T&E procurement it will only ever be considered out of spec once it fails.
For a good portion of platforms they will go into service, be used for a decade or longer, and not once will the design be modified before going end of life and replaced.
If you wanted to progressively iterate or improve on these platforms, then yes continual updates and investing in the eradication of tech debt is well worth the cost.
If you're strapping explosives attached to a rocket engine to your vehicle and pointing it at someone, there is merit in knowing it will behave exactly the same way it has done the past 1000 times.
Neither ethos in modifying a system is necessarily wrong, but you do have to choose which you're going with, and what the merits and drawbacks of that are.
Having observed an average of two mgmt rotations at most of the clients our company is working for, this comes as absolutely no surprise to me.
Engineering is acting perfectly reasonable, optimizing for cost and time within the constraints they were given. Constraints are updated at a (marketing or investor pleasure) whim without consulting engineering, cue disaster. Not even surprising to me anymore...
When people don't read the documentation, discovery is a real problem. When people do read the documentation, things are different. Many software engineers do not read the documentation, and then complain to you if they break something in a documented way. If you compare to hardware engineers, whose vendors put out tens of thousands of pages of documentation for single parts, they have a lot of skill at reading documentation (and the vendors at writing it).
I used to take this approach when building new integrations. Then I realized (1) most documentation sucks (2) there's far too much to remember (3) much of it is conditional (4) you don't always know what matters until it matters (e.g. using different paths of implementation).
What works much better is having an intentional review step that you come back to.
Most of the time QA can tell you exactly how the product works, regardless of what the documentation says. But many of us haven’t seen a QA team in five, ten years.
You say this like trivial mistakes did not happen all the time in classical engineering as well.
If there is a memory leak, then that is a flaw. It might not matter much for a specific product, but I can also easily see it being forgotten: maybe it was mentioned somewhere in the documentation, but not clearly enough, and deadlines and the pressure to ship are a thing there as well.
Just try harder. And if it still breaks, clearly you weren't trying hard enough!
At some point you have to admit that humans are pretty bad at some things. Keeping documentation up to date and coherent is one of those things, especially in the age of TikTok.
Better to live in the world we have and do the best you can, than to endlessly argue about how things should be but never will become.
Shouldn't grey beards, grizzled by years of practicing rigorous engineering, be passing this knowledge on to the next generation? How did they learn it when just starting out? They weren't born with it. Maybe engineering has actually improved so much that we only need to experience outages this frequently, and such feelings of nostalgia are born from never having to deal with systems having such high degrees of complexity and, realistically, 100% availability expectations on a global scale.
They may not have learned it but being thorough in general was more of a thing. These days things are far more rushed. And I say that as a relatively young engineer.
The amount of dedication and meticulous, concentrated work I knew from older engineers when I started working, and that I remember from my grandfathers, is something I very rarely observe these days, either in engineering-specific fields or in general.
We were talking about making a missile (v2) with an extended range, and ensuring that the developers who work on it understand the assumption of the prior model: that it doesn't use free because it's expected to blow up before that would become an issue (a perfectly valid approach, I might add). And to ensure that this assumption still holds in the v2 extended range model. The analogy to Ariane 5 is very apt.
Now, there can be tens of thousands of similar considerations to document. And keeping up that documentation with the actual state of the world is a full time job in itself.
You can argue all you want that folks "should" do this or that, but all I've seen in my entire career is that documentation is almost universally out of date and not worth relying on, because it's actively steering you in the wrong direction. And I actually disagree (as someone with some gray in my beard) with your premise that this is part of "rigorous engineering" as it is practiced today. I wish it were, but the reality is you have to read the code, read it again, see what it does on your desk, see what it does in the wild, and still not trust it.
We "should" be nice to each other, I "should" make more money, and it "should" be sunny more often. And we "should" have well written, accurate and reliable docs, but I'm too old to be waiting around for that day to come, especially in the age of zero attention and AI generated shite.
If ownerless code doesn’t result in discoverability efforts then the whole thing goes off the rails.
I won’t remember this block of code because five other people have touched it. So I need to be able to see what has changed and what it talks to so I can quickly verify if my old assumptions still hold true
>I wonder how that information is easily found afterwards.
Military hardware is produced with engineering design practices that look nothing at all like what most of the HN crowd is used to. There is an extraordinary amount of documentation, requirements, and validation done for everything.
There is a MIL-SPEC for pop tarts which defines all part sizes, tolerances, etc.
Unlike a lot in the software world military hardware gets DONE with design and then they just manufacture it.
For the new system to be approved, you need to document the properties of the software component that are deemed relevant. The software system uses dynamic allocation, so "what do the allocation patterns look like? are there leaks, risks of fragmentation, etc, and how do we characterise those?" is on the checklist. The new developer could try to figure this all out from scratch, but if they're copying the old system's code, they're most likely just going to copy the existing paperwork, with a cursory check to verify that their modifications haven't changed the properties.
They're going to see "oh, it leaks 3MiB per minute… and this system runs for twice as long as the old system", and then they're going to think for five seconds, copy-paste the appropriate paragraph, double the memory requirements in the new system's paperwork, and call it a day.
> It might be the case that real revenue is worse than hypothetical revenue.
Because Altman is eyeing an IPO, and controlling the valuation narrative.
It's a bit like keeping rents high and apartments empty to build average rents while hiding the vacancy rate to project a good multiple (and avoid rent control from user-facing businesses).
They'll never earn or borrow enough for their current spend; it has to come from equity sales.
> It's a bit like keeping rents high and apartments empty to build average rents
with very particular exceptions at the high end (like those 8-figure $ apartments by Central Park that are little more than international money laundering schemes) this doesn't really happen irl
This does not happen. If you forgo one month of rent, you have to have kept prices up significantly to make up for the loss. The only reason this could happen is if your loan terms are pegged to rent roll (usually only on commercial properties).
An example: a $5000/mo apartment generates $60,000 a year; forgoing one month of rent means you now have to generate that $60,000 in 11 months, i.e. about $5,450/month, and in a bad market a unit that didn't rent for $5,000 is unlikely to rent for $5,450. Your mortgage still continues to pile up along with insurance and taxes, so you can't escape the hole.
> changing the habits of 800 million+ people who use ChatGPT every week, however, is a battle that can only be fought individual by individual
That's the basis for his conclusions about both OpenAI and Google, but is it true?
It's precisely because uptake has been so rapid that I believe it can change rapidly.
I also think worldwide consumers no longer view US tech as some savior of humanity that they need to join or be left behind. They're likely to jump to any local viable competitor.
Still the adtech/advertiser consumers who pay the bills are likely to stay even if users wander, so we're back to the battle of business models.
Uptake of Google was rapid, but nobody managed to overcome their advantage. It wasn't even a first-mover advantage.
The problem for alternatives is they have to answer the question of why they are better than ChatGPT. ChatGPT only had to answer the question of why it was better than <anything before AI> and for most people that was obvious.
Underlying this seems to be a hard engineering problem: how to run a SaaS within UI timeframes that can store or ferry enough context to tailor responses to individual users, with privacy.
While Eddy Cue seems to be Apple's SaaS man, I can't say I'm confident that separating AI development and implementation is a good idea, or that Apple's implementation will not fall outside UI timeframes, given their other availability issues.
Unstated really is how good local models will be as an alternative to SaaS. That's been the gambit, and perhaps the prospective hardware-engineering CEO signals some breakthrough in the pipeline.
The title is misleading, and HN comments don't seem to relate to the article.
The misleading part: the actual finding is that organoid cells fire in patterns that are "like" the patterns in the brain's default mode network. That says nothing about whether there's any relationship between the phenomena of a few hundred organoid cells and those of millions of neurons in the brain.
As a reminder, heart pacing cells are automatically firing long before anything like a heart actually forms. It's silly to call that a heartbeat because they're not actually driving anything like a heart.
So this is not evidence of "firmware" or "prewired" or "preconfigured" or any instructions whatsoever.
This is evidence that a bunch of neurons will fall into patterns when interacting with each other -- no surprise since they have dendrites and firing thresholds and axons connected via neural junctions.
The real claim is that organoids are a viable model since they exhibit emergent phenomena, but whether any experiments can lead to applicable science is an open question.
I think a helpful conclusion is that while the firing pattern in organoids doesn't preclude a wetware of complex programmed instructions, it could just be an emergent property of the underlying physics and electrochemistry of the neurons; analogous to the synchronization of pendulums mounted on a common support.
"Bad" regulation just raises the question what would be better for all concerned. Sometimes that means reducing the weight and impact of a concern (redefining the problem), but more often it means a different approach or more information.
In this case, pumping first-ever possible toxins into the ground could be toxic, destructive, and irreversible, in ways that are hard to test or understand in a field with few experts. The benefit is mainly a new financial quirk, to meet carbon accounting with uncertain gains for the environment. It's not hard to see why there's a delay, which would only be made worse with an oppositional company on a short financial leash pushing the burden back onto regulators.
The regulation that needs attention is not the unique weird case, but the slow expansion of the under-represented, high-frequency or high-traffic cases - exactly like cellular roaming charges or housing permits or cookies. It's all too easy to learn to live with small burdens.
Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
The traffic stop is for breaking some kind of traffic law, usually.
I suppose you could have a reasonable suspicion stop, but it would have to be something like "a hit and run just happened nearby, no vehicle description", and you witness a car with a smashed grill and leaking radiator fluid, but not breaking any traffic laws.
Reasonable suspicion might develop over the course of the stop, e.g. driver is super nervous, the back seat is full of overstuffed black duffel bags, there is a powerful chemical air freshener odor, and the vehicle has just crossed the Mexico border.
It may be that some media or some alcohol is more toxic than others, but it's still fair to test whether the mode of administration has an independent or enhancing effect.
E.g., crack cocaine is more addictive than snorted cocaine, and extended-release Adderall is less addictive than immediate-release. So there's good reason to hypothesize that short-form video (SFV) has similar addiction-enhancing effects over long-form, and the article's meta-analysis says problems in inhibition and cognition are among the strongest effects.
wrt choice, the thing about addiction is that while becoming addicted results from a series of choices, being addicted impairs your choice-making executive functions. Addicts use even when they don't like it, and to the exclusion of other things they prefer, and often switch from expensive drugs to cheap ones just to maximize use.
So in the same way that society would prefer to prevent rather than treat legions of fentanyl addicts infecting cities or meth addicts roaming the countryside, society would like to avoid the cognitive decline and productivity loss of a generation lost to scrolling.