Daniel Nettle gives a great layperson's explanation of the Five Factor model in his book "Personality," and the first thing he explains is in line with exactly what you ended on. We exhibit a variety of personalities because different personalities are useful in different environments. Sometimes it's GOOD to be highly neurotic, or low in extraversion. Natural selection doesn't care about your internal conscious experience of life; it will make you miserable if that helps you survive.
I'm an avid reader (several dozen books a year at least), and one of the things that bums me out is all of the moralizing around my hobby. Three or four times out of five, when I talk to people about it, the reaction is "oh man, I'm such a bad person because I don't read enough books."
It's fine! The number of books you read is not a reflection on your quality as a person.
Reading absolutely has benefits, but really it's exactly what you said: it's just more interesting than other options out there. The tradeoff is that, yes, it can require some effort, but that's the same as any other effortful activity. You have to get past the cost, but there's a really nice reward on the other side.
And for what it's worth, there ARE television shows, movies, etc. that have more value than many books. ("The Wire" is a prime example, probably better than 70-80% of the books out there.) The point is just that, in general, more cognitively demanding avocations can offer a better benefit-to-cost ratio than cheaper ones like TV. On average, books fall into this category more often than other media, but that's just on average.
Anyway this is a long way of saying that feeling bad about the media you consume is counterproductive. The message should be that there is potentially a more rewarding experience out there, but whether you pursue it or not is totally up to you and doesn't make you a good or bad person either way.
Yes to all of that. My biggest pet peeve is the Goodreads reading challenge; I cringe at it every year. Imagine the equivalent "TV show challenge": it would be absurd. Yet this is how people think about books.
Read what you want, how you want. Pick up the same book five times. Do whatever. Forget arbitrary challenges.
I always laugh when people say something like, "Oh wow, you must be so smart, reading all those books." Nah, I'm reading about goblins, gnomes, and vampires in space; it's really not groundbreaking intellectual stuff. I enjoy reading, but in my eyes it's similar to sitting down and watching a movie or TV show.
I agree. Books have a higher intellectual ceiling than most things, but there is, as always, a mountain of slop too. I'd rather someone spend a year interrogating Plato or Moby Dick than read 300 Agatha Christie or Stephen King-type novels. There is nothing virtuous about reading in itself.
I echo the sentiment of the sibling comment: book count challenges are foolish and missing the point.
I'm sorry for your loss. It sounds like your father was a great man. No need for apologies, I think what you said is very poignant and relevant to the topic at hand. We should all be so lucky to live such full lives.
My take (no more informed than anyone else's) is that the range indicates this is a complex phenomenon that people are still making sense of. My suspicion is that something like the following is going on:
1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.
2. Other engineers see that in real production systems, LLMs lack sufficient background / domain knowledge to iterate effectively. The models still produce output, but it's verbose and essentially misses the point of the desired change.
3. LLMs can also be used by people who are not knowledgeable to "fake it" and produce huge amounts of output that is basically beside-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used, and so it gets tarnished by association.
4. There is a ridiculous amount of complexity in some of the tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves, people are skeptical of those trying to become thought leaders in this space and the sort of wild hacks they're coming up with.
5. There are real macro questions about whether these tools can be made economical enough that whatever value they do produce justifies their cost, and broader questions about their net impact on society.
6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.
That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.
The first part is surely true if you change it to "the hardEST part" (I'm a huge believer in "Programming as Theory Building"), but there are plenty of other hard or just downright tedious/expensive aspects of software development. I'm still not fully bought in on some of the AI stuff: I haven't had a chance to really apply an agentic flow to anything professional, I pretty much always get errors even when one-shotting, and who knows if even the productive stuff is big-picture economical. But I've already done some professional "mini projects" that just would not have gotten done without an AI. A simple example: I converted a C# UI to Java Swing in less than a day, a few thousand lines of code, a simple utility but important to my current project for <reasons>. Assuming tasks like these can be done economically over time, I don't see any reason why small and medium difficulty programming tasks can't be handled efficiently with these tools.
My framing for this is "mass production of stimuli." Before industrialization, the number of things grabbing your attention at any given moment wasn't super high. But once you had mass production, and especially the innovation of extrinsic advertising (associating a product with psychological properties that aren't intrinsic to the product itself), we were all suddenly awash in stimulating signals. But as this article notes, those stimuli go mostly unfulfilled by the actions we take (buying the thing, opening the app), and so we all have this low-level background noise of frustration and dissatisfaction.
EDIT: Some later posts mentioned it, but philosophers and religions have contemplated this stuff for centuries. Nevertheless I do think it's an exacerbated problem in the modern world due to technology and scale.
I hit the same roadblock unfortunately. My academic references were all in a different field and I hadn't really stayed in contact except with one professor, who sadly has died. I did see that there's an option to use professional references, so even though I haven't done this myself, one route you could consider taking is to get references from managers, colleagues etc. who can speak to your technical knowledge. I agree though with your general point that after being out of an academic environment for a while that requirement becomes challenging.
The best system I ever worked with looked incredibly simple. Small, clear functions. Lots of "set a few variables, run some if statements." Incredibly unassuming, humble code. But it handled tens of millions of transactions per day elegantly and correctly. Every weird edge case or subtle concurrency bug or whatever else you could think of had been squeezed out of the system. Everything fit together like LEGO blocks, seamlessly coming together into a comprehensible, functional, performant system. I loved it. After years of accepting mediocre code as the cost of doing business, seeing this thing in a corporate environment inspired me to fall in love with software again and commit to always doing my best to write high quality code.
EDIT: I think what made that code so good is that there was absolutely nothing unnecessary in the whole system. Every variable, every function, every class was absolutely necessary to deliver the required functionality or to ensure some technical constraint was respected. Everything in that system belonged, and nothing didn't.
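To make that concrete, here's a purely hypothetical sketch (in Python, every name invented, not from that system) of the "set a few variables, run some if statements" shape I mean:

    # Hypothetical sketch, not the actual system: one small, flat function,
    # a few locals, a few if statements, nothing that doesn't need to be there.
    from dataclasses import dataclass

    @dataclass
    class Account:
        balance: int       # in cents
        credit_limit: int  # in cents

    def settle(account: Account, amount: int) -> str:
        """Apply a debit to the account and return a short status."""
        available = account.balance + account.credit_limit

        if amount <= 0:
            return "rejected: non-positive amount"
        if amount > available:
            return "rejected: insufficient funds"

        account.balance -= amount
        return "settled"

Nothing clever in isolation, but when an entire system is built out of pieces like this, every edge case has an obvious home and the whole thing stays comprehensible.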
I had the pleasure of working with a handful of Pivots for about 2 years, and I have to say that felt like the closest I ever got to a healthy engineering culture. Delightful people, superb engineers, always focused on working and learning together. I feel really privileged to have worked in that environment.
I had the same experience and also dropped out after my MA. It's pretty sad. One of my professors told me, "You should have been here in the 70s, you would have loved it."
An older CS professor (whose book, I’m guessing, about half of HN posters have read) told me essentially the same thing.
He’s one of the best people to talk to in the department. Kind, passionate and compassionate, interested first and foremost in ideas and people. No ego, doesn’t care about telling anyone he’s smarter than them (he is though), just wants to figure things out together.
I agree that this is very important. The flip side is that you will also have entrenched lazies who refuse to keep up with new knowledge, get comfy in their chair, grow a big ego, etc. It's a tradeoff.
You have to give breathing room for creativity to unfold, but the breathing room can also be taken advantage of.
Also, it used to be more accepted to play elite inside baseball: hiring based on prestige, gut feel, and recommendation. Today it's not too different in reality, but we expect more egalitarianism and objectivity, and so literature metrics become emphasized. And therefore those must be chased.
It's similar to the test-prep grind more broadly. More egalitarianism and accountability lead to tougher competition and more justice, but also to less breathing room, more grind, and less time for creative freedom.
In the 70s, academia in general was still growing, so there were opportunities for many of the people who wanted a career in that field. Now that the field is shrinking due to demographic changes, the competition has become much more vicious.
The baby boomers were going to college, ergo colleges and universities were expanding. The Ph.D. from a Tier-N school who didn't catch on there could find a tenure-track position in a Tier-N+M school.
Back in those years, at I suppose a Tier-3 school, I went to some academic ceremony where the professors wore their robes. I was impressed at how spiffy the crimson Harvard robes looked. Somebody more sociologically aware would have thought, Hmmm, there sure are a lot of Harvard Ph.D.s on the faculty here, and considered why.
How was it before then? Surely you can't expect that N PhDs minted by one doctoral advisor will each be able to take an equivalent spot at the same institution as the doctoral advisor. Or did people expect that? Unless the population is growing, the steady state is that one prof can only mint one prof-descendant in their lifetime on average. That means, maybe some can create more, but then some will not have any mentees that ever become professors. It is very basic math, but the emotions and egos seem to make this discussion "complex".
>Unless the population is growing, the steady state is that one prof can only mint one prof-descendant in their lifetime on average. That means, maybe some can create more, but then some will not have any mentees that ever become professors. It is very basic math
Yes, and the US population went from about 130 million in 1940 to 330 million in 2020, while the percent of adults with a college degree went from about 5% to about 40%. There were a few decades of particularly rapid growth.
I think that the American college and university system had previously been expanding slowly. The GI Bill and then the baby boom greatly increased the rate of expansion. Expansion still goes on, but maybe at quite a low rate.
Colleges and universities have, out of necessity, started thinking more like companies. Part of that is often new accounting models. One such way of modeling costs ascribes indirect costs (utilities, building maintenance, etc.) to individual programs. Low-enrollment graduate and doctoral programs look really bad on a balance sheet when you factor in these indirect costs, and they will never look good. In fact, they will always lose millions per year under this model. It is frankly an inappropriate budgeting model for colleges to adopt, because academic programs are not product lines, but here we are.
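To illustrate with completely made-up numbers (a hypothetical sketch, not any real university's figures), this is roughly how full indirect-cost allocation sinks a small program on paper:

    # Hypothetical illustration; every figure below is invented.
    enrolled_students = 8
    tuition_per_student = 40_000     # direct revenue per student
    faculty_cost = 600_000           # direct instructional cost
    allocated_overhead = 900_000     # the program's assigned share of utilities,
                                     # buildings, and central administration

    revenue = enrolled_students * tuition_per_student   # 320,000
    net = revenue - faculty_cost - allocated_overhead   # -1,180,000

    # Most of that overhead exists whether or not the program does, but once
    # it is charged to the program, a small program can never "break even."
    print(f"paper result: {net:,}")

Under that lens the program "loses" well over a million dollars a year, even though cutting it would eliminate only a fraction of those costs.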
It seems like it's just poor management. I understand they are not product lines, but a university has bills to pay. They have to pay salaries and benefits, maintain buildings, labs, libraries, etc. The money to do that has to come from somewhere, and in hard times the fields with the least chance of generating revenue to keep the university afloat will take the hit. It seems, though, that the university put itself in hard times by taking on a large amount of debt: https://chicagomaroon.com/43960/news/get-up-to-date-on-the-u.... It seems less malicious and more like risk-taking gone wrong.
It's not that different in the corporate world. Lots of companies make bad bets that then lead to layoffs, but not always in the orgs that actually were part of the bad bet. I've seen many startups take on too much risk, then have to perform layoffs in orgs like marketing, recruiting, sales, HR, etc. even if those orgs weren't responsible for the issues that the company is facing.
When I first heard Jimi Hendrix's Purple Haze blasted out as I walked in darkness down the hillside to the women's dorms, I realized it was a new age and a good time to be alive! 8-))