> Fluid intelligence, which peaks near age 20 and declines materially across adulthood [...] while fluid intelligence may decline with age, other dimensions improve (e.g., crystallized intelligence, emotional intelligence)
As someone well past "peak" fluid intelligence at this point, I always hate reading research like this. "Crystallized intelligence" and "emotional intelligence" are the consolation prizes no one really wants.
I'd rather we instead perform research to identify how one might reverse the decline of fluid intelligence...
> "Crystallized intelligence" and "emotional intelligence" are the consolation prizes no one really wants.
Strongly disagree.
Crystallized intelligence lets me see analogies and relations between disparate domains, abstract patterns that repeat everywhere, broadening my vision from a blinkered must-finish-this-task to a broader what-the-hell-is-this-world-I'm-in. I'm old enough to realise life is finite. Nothing satisfies like understanding.
Emotional intelligence lets me actually behave more like what I know a sane person should behave like. It lets me see I don't have to act on every passing whim and fancy, which are more like external noise than something of an essential expression from my inner self (which is a culturally-instigated fantasy). It lets me see how I'm connected to everyone else and everything in the world. Why I shouldn't stuff my own pockets at everyone else's expense. Why making other people unhappy ultimately makes myself unhappy. It wouldn't have been that hard to spot if I hadn't been caught up in fluid intelligence feats of strength.
These are the real rewards of middle age, not anyone's consolation prizes.
That said, I respect your right to disagree. But I feel this particular way.
If you can't figure out how to use accumulated knowledge and advanced people skills by your late 30s, then maybe you weren't so rational or adaptable to new situations in the first place. Things may not click for me like they did when I was 25, but I usually see right away when I have relevant knowledge to solve a problem or when I know someone who can help.
In my 20s, I could learn a programming language in a weekend by reading a book. I could write code fast. I could figure out bugs. I felt so fast and so smart.
In my 40s and 50s, I looked back at that guy with some amusement. Sure, I didn't type as fast. But I spent a lot less time debugging because I wrote it right the first time, because I could just see what the right thing to do was. Net result was that I produced working code in less time. 48 might have been my peak year.
Are you sure you really learn slower in your 40s due to biological decline? I have the feeling my lower speed is mostly due to circumstances: too many life responsibilities and too little focus time.
Crystallized intelligence sounds like “wisdom” to me, and emotional intelligence sounds like “charisma + tact + empathy.” Those are all things a person should definitely want, probably even more than raw intelligence itself.
Crystallized intelligence makes you good at solving problems, emotional intelligence makes you good at life, fluid intelligence makes you good at solving puzzles.
I'd gladly trade in some of the fluid intelligence I have left for more emotional intelligence.
I'm only half joking. I think it's notable that chess players tend to peak in their mid to late thirties. But that's only looking at world class players who have reached something relatively close to their genetic potential for the understanding we have today. It's entirely possible for 'regular' humans to continue seeing major improvement well past 40. I know that some players have achieved the GM title in their 50s and 60s. These were already strong players beforehand, but maintaining the level of play to get those norms and ratings is a very significant task for anybody.
It's entirely possible that these observations are 100% consistent with the reported observations and analyses, but if so then those analyses don't really matter in the way that we intuitively think they'd matter.
An individual can improve their fluid intelligence (“variance”) through a variety of means well into adulthood. Yes, more research is needed (and I’m sure a lot of research is being done), but I can guarantee you can already do this reliably right now.
I’m not a medical professional. I know nothing about your biopharmacological profile and I don’t know what I am talking about in general.
Micro-dose (sub-threshold) 5-MeO-DMT for a couple of days, then solidify during re-integration with something that increases BDNF, like intranasal Semax. You can mix and match substances, but a principle I found works well is similar to training a muscle (break and rebuild).
The most important thing is to stay grounded, always keep learning and engaging in mentally challenging tasks, don't completely isolate socially, and watch hard biological limits like nutrition and sleep. Otherwise you might go psychotic.
What they call "fluid intelligence" is just intelligence and the rest are skills/aptitudes.
"Crystallized intelligence" is more plainly: efficacy/productivity and it's common knowledge that people are most productive during the middle of their lives. When they have the best balance of knowledge accumulated and raw intelligence.
In humans, intelligence manifests as memory, spatial and verbal reasoning, pattern recognition, etc.
What is so interesting about IQ and g (the general factor) is that all of these abilities trend together.
A score in one area is a good prediction of the score in another area.
There is no reason why that must be the case a priori, and LLMs are a great example of an intelligent system which is much better at recalling information than it is at reasoning.
Human aging doesn't seem to affect all of these abilities uniformly.
E.g., everyone seems to complain about memory the most (and that matches my experience), but I've been pleasantly surprised by how well neuroplasticity and pattern recognition have held up.
LLMs, in my opinion, are pattern recognition of text sequences at an almost infinite scale. My understanding is that "world models" are an attempt to replace the text sequences with more realistic approximations of the world. But they still plan to use pattern recognition.
In the meantime, humans would still need to do the reasoning.
We do pattern recognition at an unconscious level all the time. When we perform a task using "system 2" of our mind we are rational and conscious, but when we perform a task using system 1 we do it automatically, intuitively, and with no effort. System 1 often relies on unconscious pattern matching.
For example, a chess grandmaster can instinctively tell a good move when seeing a board during a real game. But if they see a random chess board that was not part of a real game, they regress to almost novice level at this task, because there are no patterns to recognize
(patterns such as tactics like pins and skewers, or common mating patterns like the back-rank mate, or common pawn structures and corresponding pawn breaks, things like that - all those patterns can get recognized unconsciously)
Then perhaps the peak identification is wrong -- surely they haven't tested solid comparison groups for such claims, like individuals that didn't receive education later in life.
Really? What did you achieve in those times of high fluid low emotional intelligence?
I played a whole lot of video games myself. It’s nice to look back at what I could have achieved with my current perspective, but that’s kind of the point of this.
Emotional intelligence is pretty useless (not really, especially since it’s important for career progression) but crystallized intelligence seems pretty solid.
> As someone well past "peak" fluid intelligence at this point, I always hate reading research like this. "Crystallized intelligence" and "emotional intelligence" are the consolation prizes no one really wants.
At the end, I agree with you, but for a different reason. My fluid intelligence is still doing well, but my newly acquired “crystallized” and “emotional” intelligence are just good to let me understand why people want to write existential horror stories. Hell, I now realize that some of the dark stuff I didn’t want to touch with a long pole three years ago are in fact escapism to a rosier parallel universe. I liked myself better when I was sixteen years old and I couldn’t understand that boy one year older than me who said he despised our prisons of flesh. May you be doing well Y.P., and if you happen to stumble upon this paragraph, know it took me 25 years to see what you saw so clearly.
Sort of like the people who work in big tech and the people who post on Hacker News. You'd think the intersection is an empty set, but it's probably pretty large.
> Every time you see collaboration happening, speak up and destroy it. Say “there are too many people involved. X, you are the driver, you decide.” (This is a great way to make friends btw).
Corollary for managers: Do not say "it's your call", then once the decision has been made (and you skipped all the meetings pertaining to that decision), comment about how you would have done it differently and then retroactively request your report to go back and make changes. This is a great way to lose employees.
One of many reasons I left Apple. My manager's manager would say stuff like this all the time, and then when I actually made my PR he would basically have me redesign stuff from scratch. It made me dread working on projects because I knew that no matter what I did I would be forced to rewrite it from scratch anyway.
"Second-guessing works by forcing someone to reverse acts of destruction. If I delegate a decision to you, you quickly spin up a set of relevant mental models, work to get a lot of momentum into them, pay the cost of killing many possible worlds, and experience the relief of a lightened load to carry. Then, by second guessing, I suddenly demand that you resurrect dead models, so I can validate or override your decision. Next time, you won’t put so much momentum in to begin with." (Venkatesh Rao, Tempo: timing, tactics and strategy in narrative-driven decision-making)
Yep. There are many, many teams at Apple. Your manager makes all the difference in the world. Hated working on the Photos team at Apple, loved all the other teams I worked on. (So I left the Photos team to go work on a team where the manager was cool. I was able to stay at Apple, just move about.)
I don't dispute that. I wish I had been on a better team. My team had a famously high turnover rate, so it wasn't just me. I liked my direct manager just fine, he's a decent dude, but I thought his manager, who I had to deal with a lot, was kind of a dumbass and I did not enjoy working with him at all.
I tried joining other teams but without going into elaborate detail it didn't pan out.
I normally move within a company when I want to quit a manager. It's much easier than getting an entirely new job usually. And you have a lot more information about the potential role.
It's also a good way to get into areas you have no experience of.
I tried that, multiple times actually. My options were already pretty limited because I didn't want to move to California, and without going into elaborate detail, the interviews for other teams just didn't work out.
The attitude I like to have is that the author can choose to do the design (doc + approval, or live discussion, some kind of buy in) first or go straight to the PR.
If the design is agreed on first, reviewers would need a really good reason to make the author go back and rethink the design—it happens, sometimes a whole team misses something, but it should be very rare. Usually there are just implementation-level comments, ones that are objective improvements or optional. (For project style preferences, there should be a guide to avoid surprises.)
If the author goes straight to a PR, it's an experiment, so they should be willing to throw it away if someone says "did you think about this completely different design (that might be simpler/more robust/whatever)".
This is not the approach suggested by this article, and I'm okay with that. I tend to work on high reliability infrastructure, so quality over velocity, within reason.
I like this - and I think it’s a natural reality. When trust is low (for many reasons, including joining a new team), it may reduce risk to start with a design doc.
There are a lot of reasons anyway I like to have the design doc around. A few:
* I think the designs are often better when people write down their goals, non-goals, assumptions, and alternatives rather than just writing code.
* Reading previous designs helps new people (or even LLMs I guess) understand the system and team design philosophy.
* It helps everyone evaluate if the design still makes sense after goals change.
* It helps explain to upper management (or promotion committee in a large company) the work the author is doing. They're not gonna dig into the code!
...so it's usually worth writing up even if not as a stage before implementation starts. It can be a quick thing. If people start using an LLM for the writing, your format is too heavy-weight or focused on style over substance.
There's definitely a negative side to approval stages before shipping, as this article points out, but when quality (reliability, privacy/security, ...) is the system's most important attribute, I can't justify having zero. And while getting the design approved before starting implementation isn't necessary, it should avoid the bad experience tombert had of having to redo everything.
Yeah, my org had a pretty high turnover rate. I didn't enjoy it, I wish my transfer had gone through because I suspect I would have enjoyed the team I was applying for considerably more. Not how it worked out though, and after a certain point I couldn't take it anymore.
There are worse outcomes than that. Software devs are clever people. Not all of us can be confrontational, and confrontation is not the only tool available to those who can.
If you as a boss find yourself to be very busy all of a sudden, it is likely because you have pissed off and alienated your reports by questioning and overriding their judgment too many times. Suddenly the team needs your “help” to make every decision, and every bad outcome of those decisions suddenly becomes a surprise to them.
They’re letting you choke to death on your own arrogance and control issues.
We are by and large hired for cleverness, so there’s a lot of selection that makes that true even if undergrads are not far off from average.
It would be better if we were hired for wisdom. Don’t assume cleverness rules out foolishness; you can be both.
But devs aren’t usually the ones treating their reports like children and then acting surprised when their prophecies become self fulfilling. You can blame Taylor for that.
No, you can't. Taylor was a huge advocate for standardizing people's work so it could be studied and improved. He was also an advocate for well-studied people going out to teach workers how to do their jobs, and a mild advocate of thinking ill of workers, in all the ways you'd expect from a rich 19th-century guy.
What he advocated strongly against was playing power games against workers or automatically dismissing everything they say.
Standardizing people’s work turned them into automatons to be studied and improved by a management elite.
Which all came crashing down when Deming had to go to Japan to get anyone to listen to his ideas, and the Japanese competition that followed triggered a massive recession in the US.
Deming and (to a lesser extent) Goldratt pull the employees back into the conversation. They are closest to the problem, and even if they can’t solve it, they can help you shape the answer. Taylor was neofeudalism and he can rot.
The one thing people are complaining about upthread is not Taylor's fault. He actively fought against it.
The stuff you're complaining about in the first line is. But also, Taylor was an advocate for listening to what the workers had to say too. You can't really blame Taylorism on him; he invented only the mildest parts of it.
And that said, Deming advocated standardizing work too. You just can't run a factory without doing that.
Collaboration is between peers. Taylor was top-down. That’s dictatorial, not collaboration. When you take collab out of the mix it’s a product manager and one dev and that’s a power imbalance.
Developers may be hired for cleverness, but cleverness in code and technical matters does not necessarily carry over into cleverness with respect to office politics or good management.
Hired, ultimately, for a fairly narrow field of expertise. But they do need support in the sense of building the right thing: ensuring that thing aligns with business objectives, and that constraints, requirements, and customer needs have been communicated.
So in that light - either you give engineers the support they need (which can be quite a lot, more than I think most care to admit), or accept they're going to get a lot of stuff wrong. Probably technically correct and good, but still wrong.
Cleverness in code actually correlates with cleverness in social aspects. People come up with this artificial dichotomy of the awkward nerd with a high IQ but zero social skill, but the reality is the two correlate slightly.
At the very least we know there isn’t an inverse correlation, meaning the stereotype isn’t really true.
It’s not clever to brag about how smart you are or imply you and your entire cohort are smarter than other occupations. It’s a sign of how much you are the opposite of clever.
Additionally, the average IQ of software developers is measured at 110-113. That’s less than one standard deviation above the mean, so you’re actually wrong. Software devs aren’t particularly clever, but a disproportionate number like to think they are smarter than normal.
So bragging about something that you’re completely wrong about… is that clever? No.
While I mostly agree with your original post (egos in this field are huge), it’s hard to interpret that IQ stat without more data. Kinda low IQ to present it like that without additional info cause it’s impossible to really evaluate.
Only because while you don’t brag about it you truly do believe you’re smarter. It’s not even about acting humble. The ego is real because despite me saying software engineers have roughly above average intelligence you couldn’t take it.
Did you deep dive into this? Given the amount of downvotes I have I’m sure you weren’t the only one if you did.
About 30 percent of developers have no degree. Which means… if the average CS graduate is at 130, how low does the average SWE without a degree have to be in order to bring the overall average down to 110?
I mean, I spelled out the math. Dear readers, draw what conclusion you want from it while asking yourself: “Do I have a degree?”
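For what it's worth, here is that weighted average spelled out, taking the comment's own figures (the 130 and 110 numbers are the commenter's assumptions, not established facts):

    # Hypothetical figures from the comment above: 70% of devs hold
    # degrees at an assumed mean IQ of 130; the overall mean is 110.
    # Solve 0.7 * 130 + 0.3 * x = 110 for x:
    x = (110 - 0.7 * 130) / 0.3
    print(round(x))  # ~63 -- the implied average for devs without degrees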
I will say that iq varies heavily across schools as well.
Exactly, these two sentences seem at first related,
> No deadlines, minimal coordination, and no managers telling you what to do.
> In return, we ask for extraordinarily high ownership and the ability to get a lot done by yourself.
but can be insidious if implemented incorrectly. High ownership to do what you want, but what happens if what you decide goes against the goals of the manager or the company itself? No company can succeed without at least some sort of overarching goal structure, which employees will naturally avail themselves of and seek to benefit from.
I think if you are empowered to make decisions as an employee it's YOUR responsibility to know the limitations of your scope and when to seek feedback and approvals from architecture, management, business or whoever.
So if your decisions are getting turned over, you are either making decisions outside of your scope or your management is genuinely micromanaging you.
"Delivering this feature goes against everything I know to be right and true. And I will sooner lay you into this barren earth, than entertain your folly for a moment longer."
That’s the only way I would utter it — if I can then sit down and do it myself. If I am asking someone else to do it, I would ask them how hard it would be and whether they need help or would suggest a different approach.
I once worked at a place where one of the partners consistently claimed the engineering team over-built and over-thought everything (reality: almost everything was under-engineered and hanging on by a thread.)
His catch phrase was "all you gotta do is [insert dumb idea here.]"
It was anxiety inducing for a while, then it turned into a big joke amongst the engineering staff, where we would compete to come up with the most ridiculous "all you gotta do is ..." idea.
Similar to my experience doing low-level systems work, being prodded by a "manager" with a fifth of my experience. No, I'm not going to implement something you heard about from a candidate in an interview, an individual whom we passed on within the first 30 minutes. No, you reading out the AI overview of a google search to me for a problem that I've thought about for days ain't gonna work, nor will it get us closer to a solution. Get the fuck out of the way.
I'm there right now at my current job. It's always the same engineer, and they always get a pass because (for some reason) they don't have to do design reviews for anything they do, but they go concern-troll everyone else's designs.
Last week, after 3 near-misses that would have brought down our service for hours if not days from a corner this engineer cut, I chaired a meeting to decide how we were going to improve this particular component. This engineer got invited, and spent the entire allocated meeting time spreading FUD about all the options we gathered. Management decided on inaction.
People think management sucks at hiring good talent (which is sometimes true, but I have worked with some truly incredible people), but one of the most consistent and costly mistakes I’ve observed over my career has been management's poor ability to identify and fire nuisance employees.
I don’t mean people who “aren't rockstars” or people for whom some things take too long, or people who get things wrong occasionally (we all do).
I mean people who, like you describe, systemically undermine the rest of the team’s work.
I’ve been on teams where a single person managed to derail an entire team’s delivery for the better part of a year, despite the rest of the team screaming at management that this person was taking huge shortcuts, trying to undermine other people’s designs in bad faith, bypassing agreed-upon practices and rules and then lying about it, pushing stuff to production without understanding it, etc.
Management continued to deflect and defer until the team lead and another senior engineer ragequit over management’s inaction and we missed multiple deadlines at which point they started to realize we weren’t just making this up for fun.
“It’s your call” specifically means all the decisions on the table are valid and fit the requirements and the employee is being granted permission to make a judgment call. Questioning that judgment call afterwards is shitty and leads to an erosion of trust because the employee will thereon second guess themselves and also try to avoid making any decisions (because they expect them to be overridden).
And that is where the friction is. A lot of these requirements are implicit.
For example: if an application is built around Protobuf / gRPC and you suddenly start doing JSON / REST, you are going to need a really, really good reason for that, since you are introducing a new technology with all its glory and all its horror. So your judgment is going to be questioned on that, and most likely the reaction will be: we are not going to do that.
If you know they are going to be like that, you could try getting the information out of them as early as possible. Ask a bit too much about every detail, find the point where it annoys them, then try not to cross it. When you inevitably do, remind them of the previous rewrites and the time they consumed. It is our job to perfect this system. It gets considerably harder to request changes after you've asked not to be bothered with every detail.
That said, I will never work for a company unless I get to make all of the decisions, write all of the code and do all of the maintenance. The work one person can get done cowboy coding a pile of spaghetti is mind blowing. Cleaning up the mess later is so much easier and so satisfying if it was your own making. Until recently this was a bad formula as it makes for a terrible bus factor but now that we have LLMs it suddenly seems entirely reasonable.
I'm not sure that's a corollary, it seems to have tension with "Prefer to give feedback after something has shipped (but before the next iteration) rather than reviewing it before it ships. Front-loading your feedback can turn it into a quasi-approval process."
Though I guess it is in tune with "no managers telling you what to do."
Ah, ChatGPT’s hidden INTP mode. We’ll finally get the right theory for ASI, but it will provide no clues on how to actually implement it in a timely manner.
I saw that Reddit post a while back. It’s interesting, but I wonder how much it really applies to all of the super wealthy. There are certainly billionaires and centimillionaires who reject that lifestyle out of hand (I know I certainly would). The average person doesn’t know their name and they prefer it that way. Even the local billionaire near where I lived for a while was pretty modest, all considered (his kids not so much). I was surprised to see him and his family sit down next to mine at a restaurant one day. Could overhear him talking about the local farmers market and commenting about the tomatoes of the season haha
There is certainly a wide spectrum in how people behave, and an important factor is also how they would like to be perceived by others. Rundowns on how wealthy people live will tend to overindex on the behavior of people who want others to know they’re wealthy.
As an example, there’s a culture among what people refer to as "old money" families in the US northeast (with generational wealth from long ago), wherein they tend to avoid seeming outwardly wealthy or really talking about money at all. They generally aim to project an unpretentious vibe, eschewing designer clothes and driving 20-year-old Volvos, but still spend vacations at long-owned family getaways worth tens of millions, fly first or charter, and send their kids to specific, expensive schools to socialize with others of similar backgrounds.
What wealth buys you is the freedom and opportunity to make that choice. It’s not that you have to use your influence; it’s that if your local billionaire whose name nobody knew decided to make a phone call or two, people would still pick up his call.
The local billionaires here own a large family business. The founders (parents) were great people. They lived life poor for the first 50 years of their lives. They were super down to earth.
The next generation are mostly good people. They're involved in politics at the state level and have some philanthropic organizations that really do good work with zero strings attached.
The grandkids, who all have known nothing but having immense wealth are garbage humans. They're entitled, awful, mean spirited assholes. Every. Last. One. They frequent local businesses, and the number of times I've heard, "don't you know who I am" is astounding.
The business mostly runs itself at this point, but I genuinely fear for the future in this area. There are already undercurrents of the family using its connections to bail out one of the grandkids when he was caught drunk driving. I believe the first murder will happen within the next decade.
The grandparents would be absolutely horrified if they saw what their family was turning into.
> The grandparents would be absolutely horrified if they saw what their family was turning into.
I wonder if they somewhat expected it. The 'third generation curse' is a widely known effect. Question is whether there is anything that could have been done to avoid it.
Donate their wealth before they die and leave only a token amount for the kids. Enough to give them a headstart, but not enough that they never have to try.
There's definitely a whole spectrum, and not every ultra-wealthy person is out there collecting yachts and private islands. Some just want comfort, privacy, and to be left alone to enjoy the little things.
Do most people really want an internet without ads, flights with large seats and plenty of space, high-quality local food — or do most people just say they want that? Because when push comes to shove and these options temporarily become available for some reason (e.g., a new farmers market, a premium streaming plan that removes ads, etc.), most people don't spring for the higher quality option. The cheapest option still seems to consistently win out overall.
I'm certainly not saying "blame the consumer", but if people really don't like ads so much (to the extent that they stop clicking on them), really dislike the subpar streaming services so much (to the extent that they unsubscribe) — then why haven't they abandoned these products?
There are other countries where valuing quality seems to be more deeply embedded in the culture, and most people in these countries will reject subpar offerings altogether. I think the U.S. has had a uniquely precipitous fall in this regard — the average person just doesn't seem to care that much. Why this is the case, I'm not sure, but it's not surprising that since Silicon Valley is located in the U.S., the region simply optimizes on whatever (revealed) consumer preferences return the most. Tech companies are certainly not unique in this regard.
Why not? You’re exactly right that people will rant and rave about wanting a higher quality option all day long, but as soon as one comes along very few people will actually pay for it.
This happens with niche product preferences too. For years it felt like the consensus across the internet was that phones are too big and if Apple would make a smaller phone it would sell like hotcakes. Apple finally did make a smaller phone and it had relatively few sales.
> I think the U.S. has had a uniquely precipitous fall in this regard.
I disagree about this, though. The more I’ve traveled and been exposed to other cultures, the more I appreciate how much choice and opportunity we have. I have slowly learned that U.S. consumers have some of the most insatiably high expectations, though. It leads to a lot of disappointment, but when you go below the surface you discover the wants are for something that checks all boxes without any compromises (good, fast, cheap), or they want what we already have but think the cost should be negligible or even free. There’s another variation where we want quality to go up, the workers to be paid more, but the prices to go down.
> I'm certainly not saying "blame the consumer", but if people really don't like ads so much (to the extent that they stop clicking on them), really disliked streaming so much (to the extent that they unsubscribe) — then why haven't they?
In my observation/bubble, people actually do:
- I rarely click on ads (though I admit the reason is typically much more mundane: nearly all ad networks don't really "get" my interests. On the rare occasions they do, it's usually because I recently bought such a product — by the time the networks register my interest and show me ads, I clearly don't need another one).
- Many people install ad blockers.
- Many people I'm aware of who are annoyed by streaming either did cancel some subscription(s) or never got one.
Well I thought so too. I match those behaviors, and I don't even watch television. But then I worked at a tech company where I could see the actual data on consumer preferences and behaviors, and it's fairly undeniable: most people aren't like you, me, or the average commenter on Hacker News.
Exactly this. Because capitalism tells us to pay people the minimum (which isn’t enough) and charge them the maximum (which is too much) and suddenly living is unaffordable for the majority.
I mean, my simple theory is people buy everything cheap because most people are broke. Small businesses die because as much as people want to support them, they can't spend more. They can only afford to buy goods from businesses that take advantage of economies of scale, and small businesses by definition are usually locked out of that.
That's a fair point actually, and perhaps we are only seeing these problems increase recently because "locally optimal" capitalism had historically sort of prevented the global algorithmic optimizations we're seeing now across industries. E.g., rental price fixing via algorithms.
It will be interesting if/when these models start proving major open problems, e.g. the Riemann Hypothesis. The sociological impact on the mathematical community would certainly be acute, and likely lead to a seismic shift in the understanding of what research-level mathematics is 'for'. This discussion already appears to be in progress. As an outsider I have no idea what the timeline is for such things (2 years? 10? 100?).
On the plus side, AlphaProof has the benefit over ordinary LLMs in their current form that it does not pollute our common epistemological well, and its output is eminently interrogable (if you know Lean, at least).
Humans are terrible at anything you learn at university and incredibly good at most things you learn at trade school. In absolute terms, mathematics is much easier than laying bricks or cutting hair.
I would say that "narrow" mathematics (finding a proof of a given statement that we suspect has a proof, using a formal language) is much easier than "generally" laying bricks or cutting hair.
But I cannot see how consistently doing general mathematics (as in finding interesting and useful statements to prove, and then finding the proofs) is easier than consistently cutting hair or driving a car.
We might get LLM level mathematics, but not Human level mathematics, in the same way that we can get LLM level films (something like Avengers, or the final season of GoT), but we are not going to get Human level films.
I suspect that there are no general level mathematics without the geometric experience of humans, so for general level mathematics one has to go through perceptions and interactions with reality first. In that case, general mathematics is one level over "laying bricks or cutting hair", so more complex. And the paradox is only a paradox for superficial reasoning.
> But I cannot see how consistently doing general mathematics (as in finding interesting and useful statements to prove, and then finding the proofs) is easier than consistently cutting hair/driving a car.
The main "absolute" difficulty there is in understanding and shaping what the mathematical audience thinks is "interesting". So it's really a marketing problem. Given how these tools are being used for marketing, I would have high hopes, at least for this particular aspect...
Is it really marketing in general? I can agree with some of it, but for me the existence of the term "low hanging fruit" to describe some statements says otherwise...
Sure, but checking that everything is correctly wired, plugged in, cut, etc.? That everything needed has been thought of? There are plenty of things an AI could do to help a tradesman.
Not the endgame by far. Maybe the endgame for LLMs, and I am not even convinced.
Maths is detached from reality. An AI capable of doing math better than humans may not be able to drive a car, as driving a car requires a good understanding of the world: it has to recognize objects and understand their behavior — for example, understanding that a tree won't move but a person might, though slower than another car. It has to understand the physics of the car: inertia, grip, control,... It may even have to understand human psychology and make ethical choices.
If "better than humans" means when you give it a real world problem, it gives you a mathematical model to describe it (and does it better than human experts), then yes, it's the end game.
If it just solves a few formalized problems with formalized theorems, not so much. You can write a program that solves ALL the problems under formalized theorems already. It just runs very slowly.
I don’t think you can gloss over the importance of computational tractability here. A human could also start enumerating every possible statement in ZFC, but clearly that doesn’t make them a mathematician.
I doubt it. Math has the property that you have a way to 100% verify that what you're doing is correct with little cost (as it is done with Lean). Most problems don't have anything close to that.
Math doesn't have a property that you can verify everything you're doing is correct with little cost. Humans simply tend to prefer theorems and proofs that are simpler.
You can, in principle, formalize any correct mathematical proof and verify its validity procedurally with a "simple" algorithm, that actually exists (See Coq, Lean...). Coming up with the proof is much harder, and deciding what to attempt to prove even harder, though.
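As a toy illustration of that mechanical checking (Lean 4 syntax; `my_add_comm` is just a name made up here, and `Nat.add_comm` is a lemma from the core library), the kernel verifies these proof terms procedurally, with no human judgment in the loop:

    -- Lean 4: the kernel checks this proof term mechanically.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- Even an arithmetic fact is checked, not trusted:
    example : 2 + 2 = 4 := rfl

Finding the proof term is the hard part; checking it is the "simple" algorithm.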
You can verify it with a simple algorithm, but that verification won't always be cheap. If it were, Curry–Howard would be incredibly uninteresting. It only seems cheap because we usually have little interest in exploring the expensive theorems. Sometimes we do, though, and get things like the four color theorem, whose proof verification amounts to combinatorial search.
These kind of experiments are many times orders of magnitude more costly (time, energy, money, safety, etc.) than verifying a mathematical proof with something like Lean.
That's why many think math will be one of the first domains AI cracks, as there is a relatively cheap and fast feedback loop available.
Computers have been better than us at calculation since about a week after computers were invented.
If a computer proves the Riemann Hypothesis, someone will say "Oh of course, writing a proof doesn't require intelligence, it's merely the dumb application of logical rules, but only a human could have thought of the conjecture to begin with."
The quality of AI algorithms is not based on formal mathematics at all. (For example, I'm unaware of even one theorem relevant to going from GPT-1 to GPT-4.) Possibly in the future it'll be otherwise though.
This concept of a blameless culture reminds me of one time when I was talking to a SWE at Facebook around 2010. I don’t know whether the story is actually true or just folklore, but apparently someone brought down the whole site by accident once, and it was pretty obvious who did it.
Zuckerberg was in the office and walked up to the guy and said something along the lines of “Just so you are aware, it would probably take a lifetime or more to recoup the revenue lost during that outage. But we don’t assign blame during these sorts of events, so let’s just consider it an expensive learning opportunity to redesign the system so it can’t happen again.”
Whenever this happens to someone, it's always a horrible feeling: you feel guilty and ashamed no matter what people say to you. Unfortunately, mistakes like these are almost the bread and butter of any extremely experienced greybeard, so it's kind of normal that something like this happens to someone at some point sooner or later. The only people who never make costly mistakes are those who were never trusted with responsibility in the first place.
Having said that, I would like to emphasise that the cost which often gets quoted with these mistakes is not a real cost; it's an unrealised opportunity cost. Sure, it hurts, but the same company culture that allows such mistakes (and missed opportunity) to happen is the culture which also allows engineers to quickly roll out important features and updates, and therefore create more opportunity in the first place, and much faster as a whole. In theory the cost doesn't come without the opportunity, and it all evens itself out. Don't feel too bad about it.
> the cost which often gets quoted with those mistakes is not a real cost
It is still money they would have made that now wasn't made. It is very important to explain to people how much value is lost during these events so that we also correctly value the work to prevent such events in the future.
You're comparing "reality where accident happened" to "an alternate reality where everything is exactly the same but the accident did not happen" and this is not a sensible comparison.
The reality we have produced the accident. You can't have that reality and have it not produce the accident, because it was set up to produce the accident. Proof: it produced the accident.
To avoid the accident, you need an alternative reality that is sufficiently different so as not to produce the accident, and some of those differences may well have resulted in lower profit overall.
(You may argue that you're able to set up an alternate reality that does not produce the accident and results in higher profit overall – that's a completely different argument, but it also requires you to specify some more details to make it a falsifiable hypothesis. Without those details we can not guarantee a higher profit in that alternate reality.)
And to add to that - the number is almost always wrong because people tend to just count the money hose throughput times the downtime. But many of the people who would have spent money on the downtime will do so later. I guess maybe that's not true of advertising revenue? Although I imagine advertisers tend to have some monthly spend.
Sure, the probability that things that have happened will have happened is 1.
The real test for hard determinists is being able to conclude that the probability of things that will happen is also 1. At that point there's no such thing as "falsifiable".
If your shop takes $3600 an hour in revenue, but there's a problem with the till which means that people can't pay for 10 seconds, you haven't lost $10 in revenue, you've just shifted revenue from $1/second to $0/second for 10 seconds and $2/second for the next 10 seconds.
Yup, the only "real" cost there is a customer who decides not to buy after all, or buys elsewhere instead. But that's pretty unlikely, especially for short outages. And it's even less of an issue for entities with a lot of stickiness like social networks (Facebook, Twitter) or shopping websites with robust loyalty programs (Amazon Prime).
It's also hard to understand because it's largely illusory. If, say, Facebook is down and ad spending ceases for an hour, that money didn't just go up in smoke. It's still in somebody's ad budget, and there's a very good chance they're still going to spend it on ads. Thus, while there will be a temporary slow down in ad spend rate, over the course of the quarter the average may be completely unaffected due to catch up spending to use the budget.
Some usage (sales, ad views, whatever) will be delayed, some usage will be done somewhere else, some usage will be abandoned.
But costs are likely down too. If there's any paid transit, that usually drops during an incident. If you're paying for electricity usage, that drops too.
And significant outages can generate news stories which can lead to more usage later. Of course, it can also contribute to a brand of unreliability that may deter some use.
> the cost which often gets quoted with those mistakes is not a real cost
Oh, that depends entirely on the industry... In social media, maybe not; in banking and fintech those can most certainly be real costs. And I can tell you - that feels even worse.
But that isn't quite what you want in a blameless culture. The right response looks something like: ignore the engineer, gather the tech leads, and have an extremely detailed walkthrough of exactly what went wrong, how they managed to put an engineer in a position where an expensive outage happened, and then have them explain why it is never going to happen again. And if anyone talks about disciplining the responsible engineer, shout at them.
Also, maybe check a month later, and if anything bad happened to the engineer responsible as a result of the outage, probably sack their manager. That manager is a threat to the corporate culture.
Maybe Zuck did all that too of course. What do I know. But the story emphasises inaction and inaction after a crisis is bad.
They'll also be the person most able to identify what went wrong with your processes to allow the failure to occur and think through a mechanism to systematically avoid it happening again.
Also, they're probably the person least likely to make that class of mistake again. If you can keep them, you've added a lot of experiential value to your team.
Perhaps one slight amendment - maybe don't ignore the engineer, but ask them (in a separate, private meeting) if they have any thoughts on the factors that led to it, and any ideas on how it could be avoided in future. Could be useful when sanity-checking the tech leads' ideas.
Describing my last company’s incident process exactly.
We’d have like 3 levels of peer review on the breakdown too.
Once there was an incorrect environment variable configured for a big client’s instance which caused 2 hours of downtime (as we figured out what was wrong) and I had to write a 2 page report on why it happened.
That whole thing got tossed into our incident report black hole.
Personally I feel like the right thing to do is let the engineer closest to the incident lead the response and subsequent action items. If they do well commend them, if they don't take it seriously then it may be time to look for a new job.
I don’t think “blameless” and “shared responsibility” are mutually exclusive, in fact, they are two halves to this same coin. The dictionary definition of “blameless” does not encompass the practical application of a “blameless” culture, which can be confusing.
The “blameless” part here means the individual who directly triggered the event is not culpable as long as they acted reasonably and per procedure. The “shared responsibility” part is how the organization views the problem and thus how they approach mitigating for the future.
But when I think of “shared responsibility”, I think of everyone as sharing fault.
When something goes wrong, I think someone, somewhere likely could have mitigated it to some degree. Even if you’re following procedures, you could question the procedure if you don’t fully understand the implications. Sure, that’s a high bar, but I think it’s preferable to pointing the finger at the people who wrote the procedures.
On that note, someone or some group being at fault doesn’t necessitate punitive action.
> ... but I think it’s preferable to pointing the finger at the people who wrote the procedures ...
It is better to point the finger at the people who wrote the procedures. Their work resulted in a system failure.
If the person doing the work is expected to second guess the procedures, then there was little point having procedures in the first place, and management loses all control of the situation because they can't expect people to follow procedures any more.
Sure the person involved can literally ask questions, but after they ask questions the only option they have is to follow the procedure, so there isn't much they can do to avert problems.
When I was only a few years into my career, I accidentally deleted all the Cisco phones in the municipality where I was a software developer. I did it following the instructions of the IT operations guy in charge of them, but it was still my fault. My reaction was to go directly to the IT boss (who wasn’t my boss) and tell him about it.
He told me he wasn’t happy about the clean up they now needed to do, but that he was very happy about my way of handling the situation. He told me that everyone makes mistakes, but as long as you’re capable of owning them as quickly as possible, then you’re the best type of employee because then we can get to fix what is wrong fast, and nobody has to investigate. He also told me that he expected me to learn from it. Then he sent me on my way. A few hours later they had restored the most vital phone lines, but it took a week to get it all back up.
It was a good response, and it’s stuck with me since. It was also something I made sure to bring into my own management style for the period I was in management.
So I think it’s perfectly natural to react this way. It’s also why CEOs who fuck up have an easy time finding new jobs, despite a lot of people wondering why that is. It’s because mistakes are learning experiences.
I'd much rather hear about a problem from a team member than hear about it from the alert system, or an angry customer.
Plus, when the big fuckup happens and the person causing it is there, there is an immediate root cause, and I can save cycles on diagnosis; straight into troubleshooting and remedy.
I don’t know when this was turned into a Facebook trope, but I’ve heard it before as an engineer asking “Am I being fired?”, to which the director responds “We just invested four million dollars in your education. You are now one of our most valuable employees!”
Four million is definitely in the range of an outage at peak, that's not counting reallocated engineering resources to root cause and fix the problem, the opportunity cost of that fix in lost features, extra work by PR, potential contractual obligations for uptime, outage aftershocks, recruiting implications, customer support implications, etc.
If you have a once a year outage, how many employee-hours do you think you are going to lose to socially talking about it and not getting work done that day?
$116.6 billion in revenue is ~13 million an hour. Outages usually happen under greater load, so very likely closer to ~25 mil an hour in practice.
> revenue wouldn't be lost if you had a 100ms outage
If that little blip cascades briefly and causes 1000 users to see an error page, and a mere five of them (0.5%) to give up on purchasing, boom you just lost those $700 (at least in the travel industry where ticket size is very high). Probably much more.
An error page can be enough for a handful of customers to decide to “come back later” or go with a competitor website.
If you think about experiments with button colors and other nearly imperceptible adjustments, that we know can affect conversion rates, an error page is orders of magnitude more impactful.
Probably, though when your business is making billions this is still just a few hours outage, or one long-running experiment dragging your conversion down by a few percentage points.
> Just so you are aware, it would probably take a lifetime or more to recoup the revenue lost during that outage. But we don’t assign blame
Assuming that’s accurate, it’s a pretty shitty way to put it. “Hey man, just so you know you should owe me for life (and I pay your salary so I decide that), but instead of demanding your unending servitude, I’m going to be a chill dude and let it slide. I’m still going to point it out so you feel worse than you already do and think about it every time you see me or make even the smallest mistake, though. Take care, see you around”.
It’s the kind of response someone would give after reading the Bob Hoover fuelling incident¹ or the similar Thomas Watson quote² and trying to be as magnanimous in their forgiveness while making it a priority that everyone knows how nice they were (thus completely undermining the gesture).
But it’s just as likely (if not more so) the Zuckerberg event never happened and it’s just someone bungling the Thomas Watson story.
I was an FB infra engineer in 2010. It's not accurate, there was already a "retro" SEV analysis process with a formal meeting run by Mike Schroepfer, who was then Director of Engineering. I attended many of them. He is a genuinely kind person who wouldn't have said anything so passive-aggressive. Also, many engineers broke the site at one time or another. I agree this is just a mutation of the Watson quote.
The only time I ever saw an engineer get roasted in the meeting was when they broke the site via some poor engineering (it happens), acknowledged the problem (great), promised to fix it, then the site went down two weeks later for the same reason (not great but it happens) and they tried to weasel out of responsibility by lying. Unfortunately for them there were a bunch of smart people in the room who saw right through it.
Look to your left, look to your right, count the heads. Now divide the money that was lost by the number of heads. This is the theoretical ceiling - how much you could make if there were no shareholders and you had your own company - or had a union.
> Now divide the money that was lost by the number of heads. This is the theoretical ceiling
So, if we assume a $10 million loss divided by 100 heads, that means your ceiling is -$100,000 if you were to organize yourself.
Let's see: Six months to build a Facebook clone on an average developer salary plus some other business costs will put you in the red by approximately $100k, and then you'll give up when you realize that the world doesn't need another Facebook clone. So, yeah, a -$100,000 ceiling sounds just about right.
Eh, that’s a really strange way to phrase it. Singling out the engineer isn’t blameless. Sure it’s a learning opportunity but it’s a learning opportunity for everyone involved. One person shouldn’t be able to take the site down. I have always thought of those situations as “failing together.”
Considering that everyone already knew who was responsible, I think saying "you won't be held accountable for this mistake" is the most blameless thing you can do.
> Sure it's a learning opportunity but it's a learning opportunity for everyone involved. One person shouldn't be able to take the site down.
The way I read the comment, it sounds to me exactly like what Zuckerberg said.
> Considering that everyone already knew who was responsible, I think saying "you won't be held accountable for this mistake" is the most blameless thing you can do.
What you’re describing isn’t blamelessness, it’s forgiveness. It’s still putting the blame on someone but not punishing them for it (except making them feel worse by pointing it out). Blamelessness would be not singling them out in any way, treating the event as if no one person had caused it.
> The way I read the comment, it sounds to me exactly like what Zuckerberg said.
Allegedly. Let’s also keep in mind we only have a rumour as the source of this story. It’s more likely that it never happened and this is a recounting of the Thomas Watson quote in other comments.
> But we don’t assign blame during these sorts of events, so let’s just consider it an expensive learning opportunity to redesign the system so it can’t happen again.
It's the latter half of the sentence that makes it blameless. Zuckerberg is very clearly saying the problem is that it was allowed at all.
Sometimes the root cause is someone fucking up. If you're not willing to attribute the root cause to someone making a mistake, then being blameless is far less useful.
What part of "so let’s just consider it an expensive learning opportunity to redesign the system so it can’t happen again" doesn't mean that it's happened, but let's see how we get there.
"It should not have been possible for one person to take the site down" - yes, and that's exactly what Zuck is addressing here? May be such controls are there across the development teams and some SRE did something to bring it down and now there needs to be even better controls in that department as well?
As told this is clearly not a Zuck quote because it’s shitty leadership. There’s no way Facebook got where it is with such incompetence. This is clearly a mistelling of older more coherent anecdotes.
Not really. If you, as CEO, single someone out, that's a punishment. Even if your words are superficially nice, what he really did was blame the engineer and tell him not to do it again. He should have left it to the engineer's line manager to make that comment, if at all, because essentially he's telling the employee nothing that he didn't know already.
> because essentially he's telling the employee nothing that he didn't know already.
The employee did not know that the CEO would be so forgiving. And it helps set that culture as others hear about the incident and the response.
Also, why is this so important? If your punishment for bringing down Facebook is your boss' boss telling you "Hey, even if this is a serious mistake, I don't want you to worry that you're going to be out of a job. Consider this a learning opportunity," then that seems more than fair to me.
> Even if your words are superficially nice what he really did was blame the engineer and told him not to do it again.
The person being told that may feel that way, but IMO nothing from his phrasing implies that:
"let's just consider it an expensive learning opportunity to redesign the system so it can't happen again"
Note the "can't" in the "can't happen again" - he isn't telling the employee "don't you dare do that again!" as you seem to be saying, he's saying "let's all figure out how to protect our systems from such mistakes".
Strange way to describe the same situation and Zuck's thrust there in different words. Zuck is literally saying "failing together" and "learning together".
The other story you are referencing is "this was an expensive education in which you have learnt to not do stupid stuff".
This is framed as "this was an expensive learning opportunity for us to learn that we have a gap in our systems that allowed this downtime to happen".
These are different sentiments! To me the above quote is very explicitly the latter and directly refutes the notion of "this was expensive training for you" by stating that it's impossible for an individual to apply that learning in a way that would recoup the loss.
Related to this problem is the interesting tradeoff between fairness and harm minimization. The idea of fairness is that no individual should have a higher probability of a guilty verdict or punishment than any other individual due to factors that are outside of their control. But there is an inherent conflict between fairness and our ability to reduce harm that results from crime.
For example, consider two hypothetical but identical individuals: one born into a low-income neighborhood and one born into a high-income neighborhood. If you develop a model to predict what we currently categorize as "crime" (the definition of which is its own separate issue), you will find that the income of a neighborhood is inversely correlated with the density of crime. If this is the only factor in your predictive model, then you will more effectively reduce crime by directing attention toward the low-income neighborhood. But now there is an inherent unfairness, because the additional scrutiny toward the low-income neighborhood means that individual 1 is more likely to be caught for a crime than individual 2, despite both individuals having an equal likelihood of committing a crime. This also creates a self-reinforcing situation where having more statistics on the low-income subset of the population now allows you to improve your predictive model even further by using additional variables that are only relevant to that subset of the population, meanwhile neglecting other variables that would be relevant to predicting crime in high-income neighborhoods. Repeat this process a few times and soon you have a massive amount of unfairness in society.
It's probably impossible to eliminate all unfairness while still maintaining any sort of ability to control crime, but what is the appropriate threshold for this tradeoff?
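A minimal sketch of that self-reinforcing loop (hypothetical numbers throughout; both neighborhoods are given the same true offense rate, and only recorded offenses feed the next patrol allocation):

    import random

    random.seed(0)
    TRUE_RATE = 0.05                 # identical underlying offense rate
    POP = 1000                       # residents per neighborhood
    patrol = {"low_income": 0.8, "high_income": 0.2}  # initial allocation
    recorded = {"low_income": 0, "high_income": 0}

    for cycle in range(10):
        for hood in recorded:
            # offenses actually committed (same distribution in both)
            offenses = sum(random.random() < TRUE_RATE for _ in range(POP))
            # only offenses that patrols witness enter the statistics
            recorded[hood] += int(offenses * patrol[hood])
        # next cycle's patrols follow the recorded (biased) statistics
        total = sum(recorded.values())
        patrol = {h: recorded[h] / total for h in recorded}

    print(recorded)  # low_income dominates despite equal true rates

The initial 80/20 split never corrects itself, because the statistics used to "learn" the allocation were generated by the allocation.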
I work at one of the well-known tech companies. We have recently gone on a consultant hiring spree after previously having avoided external contractors for years.
With a team of five consultants and their manager, months of useless hour long meetings to answer simple questions that can be looked up in the documentation, and plenty of slideshows and design docs that are mostly filled with fluff, they’ve produced... a dashboard that I could have put together in about 30 minutes.
Whatever social purpose this contract fulfills is a total mystery to me, but all I can think of is what a waste of human time this is.