I have a journalist friend with 0 coding experience who has used ChatGPT to help them build tools to scrape data for their work. They run the code, report the errors, repeat, until something usable results. An agent would do an even better job. Current LLMs are pretty good at spotting their own hallucinations if they're given the ability to execute code.
The author seems to have a bias. The truth is that we _do not know_ what is going to happen. It's still too early to judge the economic impact of current technology - companies need time to understand how to use this technology. And, research is still making progress. Scaling of the current paradigms (e.g. reasoning RL) could make the technology more useful/reliable. The enormous amount of investment could yield further breakthroughs. Or.. not! Given the uncertainty, one should be both appropriately invested and diversified.
The thing that doesn't click in articles like this is the advice section afterwards. Do you think the people described thought about how to increase their practice surface area? No, they were simply /interested/. And part of the reason they were interested was because of natural talent! Do you want to know how to increase your practice surface area? Find things you're interested in.
But the reality is that many people just aren't as interested in anything as some people are interested in something. And that's okay. The real advice is to learn to accept yourself as you are, whether you're an obsessive or not.
The article’s advice section is actually quite practical, and directly addresses your point:
> It should go without saying that the best way to increase your practice surface area in a given field is to be obsessed with that field. Obsession makes quick work of formal and bounded training sessions, and it doesn't need "tips" on how to do so.
> So the question then becomes, "How do I increase my practice surface area if I'm not already obsessed?"
> Do you think the people described thought about how to increase their practice surface area? No, they were simply /interested/.
In fact, many of them did think about it. Many of them designed, tested, experimented, and tweaked their approaches to their crafts endlessly throughout their lives.
It can be comforting (because it lets us off the hook) to think that masters of their craft all follow the "just do it" ethos while simply "accepting themselves as they are," but usually the opposite is true. At least at the margin.
Being interested in something is a skill that can be cultivated in itself. Of course, you probably won't be able to convince yourself that something you find unbearably dull is interesting, but you can deepen a vague interest into curiosity towards and appreciation of the details and nuances.
It's not about joyless grinding or forcing yourself; it's more like putting yourself in a space where you engage with the thing, and deciding to go just one step further than the point where your attention initially starts to drift. Or just putting yourself back in the space of thinking about it when you have a free moment, like waiting in a queue. You can use that time to, for example, make up a few sentences in the language you're learning (perhaps about how annoying the queue is), or play music in your head.
It is slightly more difficult in the moment, but in the long term it makes your life experience richer and more fulfilling than if you pulled out your phone and started scrolling (which you can still do afterward). You don't have to mercilessly beat yourself up about productivity, but if you develop these kinds of habits you tend to naturally start doing it more often.
Came here to write this. To add to that, the book "The Art of Learning" is a great example of this; Joshua Waitzkin tells how his approach to learning saw him reach the pinnacle of both Chess and Tai Chi. From the outside it might just seem like a "genius" who just so happened to have innate talents for both of these highly disparate forms of competition, but his story paints a different picture.
My own approach to learning is very similar, and that's before I read this book. But it was not an approach that came naturally to me. I was never good at school. I was never good at paying attention to things I didn't see a _reason_ to pay attention to. But I realized early on that my propensity for "hyperactive behavior" (as they called it at school in the early 90's) meant I couldn't rely on the "standard" way of learning to work for me, and that if I wanted to learn the things I did care about, I needed to find my own way.
This is in no way me trying to brag. But to give some context for why I challenge this kind of mindset whenever I'm faced with it (including, without much success, with my own mother, who wonders aloud "who you inherited this skill from" whenever I learn something new), let me list some of the activities other people tell me I'm talented in:
* Software architecture and design
* Indie game development, including 2D and 3D engines from scratch in multiple languages as well as game design
* Videogames -- I'm particularly good at Smash Bros. Melee
* Hiking
* Body control - I can do push-ups while in a handstand
* Conversation and hanging out - I make people laugh and have a good time because I have a lot of topics of interest to keep conversation flowing, and I set aside my ego to make sure the group has fun as a whole
Again, I'm not bragging, because I'm not a master in any of these disparate activities, and I also firmly believe this is something almost everyone can do. There is no "genetic" or inherited talent here. I was terrible at all these things and had to apply myself and experiment with constantly adaptive reinforcement techniques to get good at every single one of them. I grew up with asthma, but started practicing martial arts in spite of it. If I have a talent, it's simply curiosity. Everything else was an uphill struggle where I spent longer getting _OK_ at every single activity than most, but because I kept at it, I eventually got better than the average.
And one of the major factors of that is increasing my practice surface area in everything, which often ends up overlapping existing knowledge in surprisingly deep and fascinating ways, which in turn keeps me interested and curious. It's a feedback loop of learning.
As someone who also juggles (literally and figuratively) a dozen hobbies, I call this learning by "teetering on the edge". It's not just about being curious. It's about constantly figuring out unorthodox ways of challenging the same skill.
You want to force yourself to the very precipice of discomfort. This is an ongoing process, since our brains are designed to ruthlessly optimize any skill you are learning.
Let me give an actual specific example. When I was a teenager I became interested in learning the piano. I quickly found myself plateauing where sheet reading was concerned. In an effort to scale the difficulty up, I used to take the organ sheet music from our local church and try to play it on the fly on a piano. The challenge? There is an extra bottom staff representing the notes to be played on the pedals - thus you had to transpose that bottom staff up while you were playing and merge them together in a pianistic manner.
Another example? I used to play a LOT of Tetris for the NES. As soon as I could feel my internal system optimizing - I wrote a tiny little program that would use text-to-speech to read out random arithmetic problems that I would try to answer while playing. I remember the first time I tried this it felt like somebody had poured miracle-gro on the dendritic trees within my mind.
What goes on in your head when you attempt to increase your mastery? What is your own loop from your perspective? What emotions are you feeling? Thank you.
A main concept I have in mind when learning new things, no matter whether it's a language, physical exercise routines, or an instrument, is "micro, macro, meta".
Early on in my life I didn't have concrete words for these, and I'll explain what I mean by each. To me this isn't anything special, mind, and it might be extremely obvious, although based on conversations with friends and family, it doesn't always seem to be.
Micro: The fundamentals and specifics of each activity and skill. For piano, for example, it would be concepts like "where are the keys", "how does the sound differ with pressure or speed", "how does rhythm notation work", etc. The micro often has immutable principles: A specific muscle _will_ have a limit (even if that limit changes over time). Piano sheet music notation's structure works a specific way, and the rules for existing sheet music are inflexible. It is often as close to "fact" as you get.
Macro: The application and combination of the micro skills. Hitting keys in the right order to make a pleasing tune is a macro skill using a string of micro interactions. To hit a key at the right rhythm, with the appropriate force, is a macro application of the micro skill of knowing what key to hit based on the notation or memory, and the learned and practiced macro skill of rapidly applying varying force.
Meta: The often surprising connection between disparate micro and macro concepts from seemingly unrelated activities. Keeping a subconscious rhythm, something I learned over time whilst playing piano -- first using a metronome, then counting out loud, then internalizing it -- is also useful when exercising or when meditating, because it helps you more effectively align different muscle groups and tendons in unison when, say, doing hand-stand push-ups, or breathing exercises to force your brain into a relaxed state during meditation. Keeping an internal rhythm that can "beat" separate from your heart and external stimuli makes so many physical acts easier. But conversely, having a body that is used to various body parts acting in unison to a beat makes playing the piano easier as well.
The meta aspect is also observing myself as I'm learning new skills. Listening to when I get frustrated or lose motivation, and analyzing why. I learned early that losing interest didn't mean I'd lost interest in wanting to get good at the activity itself; it simply meant I was hitting a specific wall in the micro or macro, and I needed to change my approach. This is a key insight: I was told by teachers that when I lost interest, I just needed to "focus and apply myself". But it felt _wrong_ to force myself to repeat something that felt like it was going nowhere. When I realized it was my brain and/or body effectively telling me "we're going in circles by repeating this right now", all it took was to change the activity. For example when learning Japanese; when I hit a wall during kanji memorization study, I'd shift focus to how I could combine the kanji and kana I knew into sentences, and test it out. The next day I'd usually be much better at kanji memorization again. If not I'd shift to something else, like re-reading the literature on language structure, to give myself macro applications of the micro specifics.
These three concepts are something my focus will flow between as I learn something new. I'll often ping-pong my focus from day to day between micro and macro, while always being conscious of the meta aspect. And the interesting thing to note is that as you get good at things like the piano, macro skills often turn into micro skills as the brain and body optimize memory and nerve signals. For example, playing a sequence that requires one hand to jump between distant keys while the other plays a steady tune will at first be a series of micro skills executed in order, slowly, as part of a macro series of maneuvers requiring great focus. But over time, it becomes a micro tool to skip all of those keys without requiring any focus at all -- in essence what we often call muscle memory, although it goes deeper than that.
Finally, I'd say that for the brain and body to most effectively optimize learning, it needs constant reinforcement, but it doesn't need to be for long. You just need to indicate that this activity is the new normal; then the body and brain will go out of their way to optimize so that it becomes easier and easier to do, expending less energy.
When I'm learning something new, some days I might spend a lot of time on it. But many days I'll spend fifteen minutes, and that'll be that. Life gets in the way, but usually not so much that we can't spend fifteen minutes at _least_ every other day or so. Even five minutes is better than nothing. It's enough to signal that this is the new normal, so best keep optimizing for it, because we'll keep doing it until it's cheaper energy wise.
I think that’s down to two things: adaptability and survivorship bias. If you have ADHD (mine was never medicated, nor have I bothered to get a re-evaluation as an adult; they called it “slightly hyperactive” when I was a kid, and I have no idea if I would still be considered to have ADHD today), then most education isn’t effective, so most likely you either adapt by becoming an autodidact or you fail.
Maybe, or maybe ADHD is the term for what happens when you take somebody who is best at learning on their own and force them to sit in a classroom and listen to boring shit all day.
Ok let me restate the point more clearly: Of course there's _some_ genetics in play; if I was born without arms, a handstand would be very hard -- depending on factors, _maybe_ possible with robotic limbs but doubtful.
My point was, there's nothing genetic about my ability to teach myself the skills. In fact in many cases -- including being born with asthma -- I had to work around weaknesses many of my peers didn't have. I've always had weak balance, so it took me much longer to get used to acrobatics to build up to handstands than my friends. I have innate low blood pressure which means I more easily get dizzy when I handstand if I'm not very particular about how I breathe. In most things, I'm slower to learn them, but I'm generally better at perseverance and structured learning.
I see what you're getting at, but I'd disagree. I think accepting who you are also comes with commitment to pursue the things you're interested in, because non-acceptance would be "ignoring this part of me I wanted to pursue but didn't think I should".
Pursuing an interest isn't the same as practicing things to get better at them. Things can be superficially interesting, and it takes accepting that the self isn't perfect to understand that the process of change is not necessarily interesting in itself, even though it has rewards.
Accepting yourself as you are is the worst poison you can feed your child or someone you care about. Life is about inventing yourself through craft, doing, trying, discovering, challenging, achieving goals, and self-improvement.
Interested is the wrong word. Motivated is a better one, with interest being a subcategory of motivation.
The kind of intrinsic practice is even easier today than it was back then because we carry powerful computers in our pocket. If, like Orwell, you want to be a writer, you can literally work on it anywhere at any time. If you want to learn a language, you can spend time practicing it. If you want to learn music or art, the tools for explicitly practicing those things are in your actual pocket.
“Talent” is a myth. We all have our aptitudes, but in most subjects that only gets you a small advantage. Maybe you can remember Spanish vocab better than me, but if I practice every day and you don’t, you’ll reach the limits of “talent” quickly. Putting in the reps is the most important part of learning anything. The irony is that when you DO get good at something people forget that you were ever bad at it, and they will quickly dismiss your hard work as talent.
In high school, I had an ongoing argument with my dad about this. I got really into music and practiced daily for hours, wanting to be good enough to play in a band. By the time I graduated, I wasn’t bad. My dad would pontificate about which relative I had inherited this talent from, and I would get annoyed because it undermined the 100s of hours I spent locked in a music room. It did NOT come easy; I wanted to be able to do something and worked my ass off to get there.
I've been reading about expertise and deliberate/purposeful practice, starting with "Peak" by Ericsson.
Experts, as a rule, augment their practice with a coach, and they never stop doing that. The quintessential example is the Olympian.
So, that is how they increase their practice surface area.
I think interest is also generally relevant, but it's not the core in established fields: coaching is.
I am interested in unestablished fields too, which may be fundamentally interest driven. Although IMO that interest may be more about establishing frontiers than the specific topic.
Man, talk about the opposite of a growth mindset. What's that called? Stagnation mindset? Mediocrity mindset? How depressing.
People can change and get better at things. They do it all the time. Plenty of people _do_ actively think about things like how to increase their surface area to improve, because _they want to get better_.
In the long run, absent intervention, virtually all income flows to the owners of compute.
We need more than UBI. AGI is the culmination of all human activity up to that point, and all humanity deserves ownership of it. It should not belong solely to those who put the cherry on top, with the rest of us at their mercy. They don't deserve to control humanity's destiny. AGI, at some point, has to be made into ... I don't know. Not nationalized - something more. A force of pure good for all humans, unaffiliated with any corporation or state.
The same reason why owning a business is less of a commodity than electricity.
It's the ultimate monopoly. Anyone with more compute will ultimately be able to outperform any business you could invent, locking you out of competition.
The owners of compute will make a killing and can set whatever price they like. But if the owner is someone like, say, Amazon, then what actually stops them from using the massive compute army they already own to enter the most lucrative compute-dependent businesses, slowly dominating everything?
Because compute is owned and sold by people whose own businesses are built on top of that compute, they only sell you their excess; it follows that their needs will come before yours.
Of course. If AGI becomes real, I don't see any reason to keep a capitalist society. Ultimately capitalism works because it incentivizes people to produce goods and services efficiently. If AI is more efficient than humans in every single aspect, then what's the point of giving people economic incentives?
Things still cost money. There are always scarcities. For example cells already reached exponential self replication capabilities long ago, but eventually hit environment constraints. It became a struggle for survival, but under infinite resources they would infinitely replicate without effort instead of evolving.
It's funny that just over 100 years ago, they were saying EXACTLY the same thing about electricity. Ultimately, history has shown all the reasons to keep a capitalist society.
100 years ago people said electricity would be more efficient than humans in every single aspect? They said electricity would invent more efficient ways to generate electricity itself?
That's news to me! Some people were really ahead of their time.
Let's call it Commonism then. Where we recognize the need for economic activity that furthers whatever we humans have in common, instead of tumour-like, zero-sum, number-must-go-up turbocapitalism that just concentrates wealth.
If AGI occurs, some form of communism will be necessary, no? How else will they cover all the costs of UBI?
It's our work, the earth's resources, and the internet it's been born from; it should benefit us all.
I'm assuming there won't be more meaningful work for most of population to do that AI can't do. Some people think the opposite. That seems to be the main point of contention.
And some hybrid of capitalism and socialism will eventually happen. The goal would be to prevent the rich few from hoarding wealth and to force them to put it back into the economy. Otherwise people with nothing to lose will just repeat the social revolutions of the 19th-20th centuries.
Because too many HN folk see that word and recoil as they're only casually "familiar" with human attempts at it during the era of scarcity as told by mass media, with no understanding as to the reality of why said systems failed or succeeded.
In a post-scarcity society (which we're technically in now, if we took this seriously), Communism is a more appropriate model of governance than Capitalism. It would ensure a more equitable distribution of resources, incentivize stronger environmental policies to minimize waste, and drive technological innovation towards preservation (of truly scarce resources - rare elements, for instance) over extraction.
The problem is that humans desire power for themselves and the humiliation of others, which results in every method of governance becoming corrupted over time, especially if it doesn't see regular change to address its weaknesses (as we see now with neoliberal societies resisting populism on both extremes of the political scale). Combine that with centuries of nation-states lumbering onwards and fighting for their own survival in an increasingly nebulous and ever-shifting digital landscape, and it's no wonder things are a tinderbox.
All that being said, Communism is an (maybe not the, but an) appropriate choice for a post-scarcity, post-AGI society. It's something we need to discuss in earnest now, and start dismantling Capitalism where feasible to lay the foundation for what comes next. As others (myself included) have pointed out repeatedly, this is likely the last warning we'll get before AGI arrives. It's highly unlikely LLMs and current technology will give rise to AGI, but it's almost a certainty that we'll see actual glimmers of AGI within the next fifty years - and once that genie is out of the bottle, we'll be "locked in" to whatever society we've created for ourselves, until and unless we leave our planet behind and can experiment with our own alternatives at scale.
Good craftsmen know when they've reached the limits of their current tooling. We need to recognize that Capitalism is the wrong tool for an AGI era if we value our humanity.
Human needs are unlimited. There can't be any "post-scarcity society".
The transition point to a post-scarcity society is in the eyes of the beholder, and moves away from them at the same speed with which they approach it.
From the perspective of the hundreds of millions of people working for 10 cents an hour, any American, even the poorest of them, whose only available job is a minimum wage of $8 an hour, has long since passed that point of post-scarcity society.
But try convincing a minimum-wage American that he's beyond that point and that he needs to give up $6 out of his $8 because "there is no scarcity after $2 per hour". Then you will know the real opinion of people about "more equitable distribution of resources, stronger environmental policies to minimize waste, and technological innovation driven towards preservation".
If I'm understanding your broken sentences correctly, you're seemingly trying to parrot the same "insatiable appetite of humanity" that all proponents of Capitalism like to trot out as some sort of defense of the (otherwise) indefensible; same with your misleading comparison of income and cost of living across national boundaries.
The fundamental needs of humanity aren't infinite: a safe home, nutritious food, healthcare, and education are the sum total of human needs. Everything else is superfluous to survival, albeit not self-fulfillment or personal enrichment. We're post-scarcity in the sense that, on a planetary scale, we have enough food, shelter, healthcare, and education for every single inhabitant on Earth, but Capitalism incentivizes the misuse of these surplus resources to create value for existing stakeholders.
This is where I flatly reject any notion of Capitalism being viable, suitable, or acceptable in a post-AGI society, and rail against it in the present day. Its incentives no longer align with human needs or challenges, and in fact harm humanity as a whole by promoting zero-sum wealth extraction rather than a reconciling of the gap between human needs and Capital's desires. As much pro-Capitalism content as I consume in an effort to better my perspective, the reality is that it is rapidly outliving its usefulness as a tool, lumbering on like a shambling zombie, wholly divorced from human survival and soldiering onward solely as a means to prop up the existing power structures.
> as some sort of defense of the (otherwise) indefensible
Trying to dispute this fact reveals your inexperience and out-of-touch worldview. And frankly, I don't entirely understand what you think this fact is defending.
> The fundamental needs of humanity aren't infinite: a safe home, nutritious food, healthcare, and education are the sum total of human needs
You are literally listing needs that are insatiable.
Let me repeat, there are hundreds of millions of people in the world working for 10-20 cents an hour. Try talking to them and ask them what salary a person needs to get "safe home, nutritious food, healthcare, and education". You'll almost certainly hear a figure around a couple of dollars per hour, maybe even less.
And no, it's not about the cost of living, it's about the fact that in their opinion, anyone with access to the American labor market (even as an illegal worker) makes several times more money than they need to get "safe home, nutritious food, healthcare, and education". Because these human needs are exactly as infinite as other "superfluous to survival" needs.
> Capitalism incentivizes the misuse of these surplus resources to create value for existing stakeholders
Or, in the opinion of people earning 10-20 cents an hour, the enormous salaries of American workers earning the American minimum wage. American workers EVERY year earn more money than American billionaires have accumulated over generations. What an enormous source for redistribution and building a fair post-capitalist society!
But alas, if the builders of a post-capitalist society cannot convince even the most needy workers to show class solidarity and give up $6 of their $8 minimum wage in the name of avoiding "the misuse of these surplus resources" and "safe home, nutritious food, healthcare, and education" for every human on the planet, what can we expect from capital, which is in a much more advantageous negotiating position?
To verify your account during online customer service calls, Comcast will text you a six-digit, 2FA-looking auth code which you must read back to the Comcast customer support agent. Guys.
This is interesting, and I have always thought this approach worth exploring given the "bitter lesson" in other ML domains, but I think we should be skeptical until we see such models deployed and operating effectively on real-world vehicles.
Would be really interesting to see a quantitative analysis of how this acquisition impacts the podcast/YouTube ecosystem. I feel like Squarespace's podcast ad market share was a big deal 5 years ago, but podcasts and YouTubers have grown and diversified a lot since then.
Counterpoint: I take probably 3/4 of my meetings on Zoom and 1/4 on Meet, so on any given day I'm probably doing at least one on Meet. If I look back on any day at all the meetings with unacceptable audio lag or badly degraded video quality, they are always Meet. It is just hands-down worse when networks are unreliable.
In addition, Meet insists that I click through the same 4 or 5 "Got it" feature popups every single call, and every call also asks me if I want to use Duet AI to make my background look shit, which just adds to the annoyance.
It's a lot better than it used to be. In 2020, universities that already had GSuite (which includes Meet) still paid to put their classes on Zoom. Personally I like Zoom more today, mostly because even my high-end laptop can struggle with Meet.
This is conflating a lot of concepts. There's nothing wrong with being rational with regard to the development of your own beliefs, and there's nothing wrong with using rationality to determine the best way to achieve your goals. When considering whether to be blunt with someone, you should consider your goals. If telling them the bare truth is going to offend them, and that's not in your interest, it's not rational to do so. Being rational doesn't mean acting like a truth-telling robot.
All fair -- except the context is there's sort of a self-proclaimed "rationalist" community which is known for eschewing social norms to the point of seemingly deliberately over-considering controversial ideas as a reaction (e.g. certain racial topics, pandemic, etc). I think those people are the type we're talking about.
A truly rational person would potentially never call themselves "rational" because they would know most people find it an alienating term and kinda an unfair self-assessment.