When you are developing an autocratic regime within an elected system, criminal, extralegal military action reveals which leaders will act criminally (are "loyal"), separating them from the constitutional adherents who oppose you.
It's a final step to overthrowing the US's elected officials or rendering them powerless.
Well, two things: it's the last sentence of the film, and being an hour into something you're calling propaganda is brave.
Anyways. I thought the documentary was inspiring. DeepMind is the only lab that has historically prioritized science over consumer-facing products (that's changing now, however). I think their work on AlphaFold is commendable.
It's science under the creative boundary of binary/symbols. And as analog thinkers, we should be developing far greater tools than these glass ceilings allow. And yes, having finished the film, it's far more propagandistic than it began as.
Science is about exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.
Everything between your ears is an electrochemical process. It's all math, and there is no "creative boundary." There's plenty to criticize in the AI hype claiming we'll get to machine intelligence very soon; I suspect a lot of it is oriented toward getting favorable treatment from the government, if not outright subsidies. But claiming that there are fundamental barriers is a losing bet.
It doesn't happen "between ears," and math is an illusion of imprecision. The fundamental barrier is frameworks, and computers will not be involved. There will be software, obviously. But it will never be computed.
Your mind emerges from a network of neurons. Machine models are probably far from enabling that kind of emergence, but if what's going on between our ears isn't computation, it's magic.
It's not magic. It's neural syntax. And nothing trapped by computation is occurring. It's not a model, it is the world as actions.
The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (i.e., they are models), and humans are in our adolescent stage, using these toys.
Only analog correlation gets us to agency and thought.
Agency will emerge from exceeding the bottleneck of evolution's hand-me-down tools: binary, symbols, metaphors. As long as these unconscious sportscasters for thought "explain" to us what thought "is", we are trapped. DeepMind is simply another circular hamster wheel of evolution. Just look at the status-propaganda the film heightens in order to justify the magic.
Quite honestly, it's about time the penny dropped.
Look around you, look at the absolute shit people are believing, the hope that we have any more agency than machines... to use the language of the kids, is cope.
I have never considered myself particularly intelligent, which, I feel, puts me at odds with much of the HN readership, but I do always try to surround myself with the smartest people I can.
The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency.
Not sure why this is downvoted. The comment cuts to the core of the "intelligence vs. curve-fitting" debate. From my humble perspective as a PhD in the molecular biology/biophysics field, you are fundamentally correct: AlphaFold is optimization (curve-fitting), not thinking. But calling it "propaganda" might be a slight oversimplification of why that optimization is useful. If you ask AlphaFold to predict a protein that violates the laws of physics (e.g., a designed sequence with impossible steric clashes), it will sometimes still confidently predict a folded structure, because it is optimizing for "looking like a protein," not for "obeying physics." The "propaganda" label likely comes from DeepMind's marketing, which uses words like "solved"; in reality, DeepMind found a way to bypass the protein folding problem rather than solve it.
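To make that concrete, here's a minimal sketch of the kind of independent sanity check you'd run on a predicted model instead of trusting its confidence score. It uses Biopython; the "model.pdb" filename and the 2.0 Å cutoff are assumptions for illustration, not AlphaFold outputs.

```python
# Flag suspicious heavy-atom clashes in a predicted structure
# (a rough geometric heuristic, not a physics validation).
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("pred", "model.pdb")
atoms = [a for a in structure.get_atoms() if a.element != "H"]

def covalently_plausible(a1, a2):
    # Contacts within a residue, or between sequence-adjacent residues
    # (e.g. the peptide bond), are legitimately short; skip them.
    r1, r2 = a1.get_parent(), a2.get_parent()
    if r1 is r2:
        return True
    same_chain = r1.get_parent() is r2.get_parent()
    return same_chain and abs(r1.id[1] - r2.id[1]) == 1

clashes = [(a1, a2) for a1, a2 in NeighborSearch(atoms).search_all(2.0)
           if not covalently_plausible(a1, a2)]
print(f"{len(clashes)} heavy-atom pairs closer than 2.0 Å")
```

A physically impossible design can still come back with a high confidence score, so a check like this catches exactly what the optimizer was never asked to care about.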
If there's one thing I wish DeepMind did less of, it's conflating the protein folding problem with static structure prediction. The former is a grand-challenge problem that remains 'unsolved', while the latter is an impressive achievement that really is optimization using a huge collection of prior knowledge. I've told John Moult, the organizer of CASP, this (I used to "compete" in these things), and I think most people know he's overstating the significance of static structure prediction.
Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.
Right now, the techniques that exist and are in use are mostly around target discovery (identifying proteins in humans that can be targeted by a drug), protein structure prediction, and function prediction. Identifying sites on the protein that can be bound by a drug is also pretty common. I worked on a project recently where our goal was to identify useful mutations to make to an engineered antibody so that it bound to a specific protein in the body that is linked to cancer.
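For a sense of what that mutation-scanning loop looks like, here's a toy sketch; `score_binding` is a hypothetical placeholder for whatever learned affinity/structure predictor a real project would plug in, and the sequence is a toy fragment.

```python
# Enumerate all single-point mutants of an antibody sequence and rank
# them by a (placeholder) predicted binding score.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score_binding(seq: str) -> float:
    # Placeholder heuristic; in practice this would be a learned
    # binding/affinity predictor, not a residue count.
    return sum(seq.count(aa) for aa in "YWF") / len(seq)

def scan_point_mutants(wildtype: str, top_k: int = 10):
    candidates = []
    for i, wt_aa in enumerate(wildtype):
        for aa in AMINO_ACIDS:
            if aa == wt_aa:
                continue
            mutant = wildtype[:i] + aa + wildtype[i + 1:]
            candidates.append((score_binding(mutant), f"{wt_aa}{i + 1}{aa}"))
    return sorted(candidates, reverse=True)[:top_k]

print(scan_point_mutants("QVQLVQSGAEVKKPG"))  # toy fragment, not a real antibody
```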
If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).
Getting even more out there, you could in principle imagine an extremely high fidelity simulation model of humans that gave you detailed explanations of why a drug works but has side effects, and which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large drug-like molecule libraries and just pick successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.
"Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.
Thank you for the detailed answer! I'm just about to start college, and I've been wanting to research molecular dynamics, as well as building a quantitative pathway database. My hope is to speed up the research pipeline, so it's heartening to know that it's not a complete dead end!
It seems that solving the protein folding problem in a fundamental way would require solving chemistry. Yet the big lie (or false hope) of reductionism is that discovering the fundamental laws of the universe, such as quantum theory, lets you derive the laws/dynamics at higher levels of abstraction such as chemistry; in practice it doesn't help that much.
So, in the meantime (or perhaps for ever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.
Of course, ANNs need massive amounts of data to "generalize" well, while protein folding had only a small amount available, due to the months of effort needed to experimentally determine how any one protein folds. So DeepMind threw the kitchen sink at the problem, apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.
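The coarse-to-fine intuition is easy to show with a toy: start from noise and repeatedly apply a denoiser whose step size shrinks over time. The "denoiser" here cheats by knowing the target, whereas a real model predicts the update from the sequence and other inputs; this illustrates only the schedule, not AlphaFold 3's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(100, 3))              # stand-in C-alpha coordinates
coords = rng.normal(scale=10.0, size=(100, 3))  # start from pure noise

for step in np.linspace(0.3, 0.02, 60):
    coords += step * (target - coords)                       # pull toward signal
    coords += rng.normal(scale=step / 3, size=coords.shape)  # partial re-noising

rmsd = np.sqrt(((coords - target) ** 2).sum(axis=1).mean())
print(f"final RMSD to target: {rmsd:.3f}")  # large-scale shape locks in early
```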
So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100% based on having solved chemistry / molecular geometry would be better.
We have seen some suggestion that the classical molecular dynamics force fields are sufficient to predict protein folding (in the case of stable, soluble, globular proteins), in the sense that we don't need to solve chemistry but only need to know a coarse approximation of it.
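For reference, this is roughly what running such a classical-force-field simulation looks like with OpenMM; a vacuum run on a hypothetical "protein.pdb" for brevity, whereas the folding studies in question use explicit solvent and timescales many orders of magnitude longer.

```python
from openmm import LangevinMiddleIntegrator
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff, HBonds
from openmm.unit import kelvin, picosecond, picoseconds

pdb = PDBFile("protein.pdb")                # assumed starting structure
forcefield = ForceField("amber14-all.xml")  # classical force field
system = forcefield.createSystem(pdb.topology, nonbondedMethod=NoCutoff,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond,
                                      0.002 * picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(50_000)  # ~100 ps; actual folding needs micro- to milliseconds
```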
I'm concerned that coders and the general public will confuse optimization with intelligence. That's the nature of propaganda: using sleight of hand to create a false narrative.
There is quite a bit of bait-and-switch in AI, isn't there?
"Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"
"Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"
"Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"
One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be)
Maybe, but the film is about Hassabis thinking about thinking and working toward a general intelligence that can think. It doesn't really make claims about their existing software in that regard.
"I believe that artificial general intelligence (AGI) is achievable."
And the problem is circular: as the Riley Verge piece citing Fedorenko (https://news.ycombinator.com/item?id=46072838) shows, words aren't related to thoughts. And if you do the thought experiment, neither are symbols, math, or binary. They're all factorized, linear models of real events, which are manifold, nested, and scale-invariant; in other words, analog correlations.
AGI isn't achievable under these current technologies.
The nativist cog-sci program doesn't run in our heads, nor when externalized. It's false.
The key is that there is no content to thought. It's all nested oscillations. It can't be extracted as symbols, so there is no connection between them. Words play the role of a sportscaster reading the minds of the players by observing their behavior. How accurate are they, or are we, about ourselves? Not very.
Language may ultimately be maladaptive, as it is arbitrary and disconnected from thought. Who cares about the gibberish of logic/philosophy when survival is at stake in ecological balance? The key idea is: there are events. They are real. The words we use are false/inaccurate externalizations of those events. Words and symbols are bottlenecks that place the events out of analog reach but, via our own simulation processes, fool us into thinking they are accurate.
Words are essentially very poor forms of interoception or metacognition. They "explain" our thoughts to us by fooling us. Yet how much of the senses/perceptions is accessible in consciousness? Not very much. The computer furthers the maladaptation by both accelerating the symbols and automating them, which puts the initial real events even further from reach. The only game is how much we can fool the species through the low-res inputs the PFC demands. This appears to be a sizable value center for Silicon Valley, and it seems to require coders to ignore the whole of experience and rely solely on the bottlenecked simulation centers of the PFC, which themselves are disconnected from direct sensory access. Computers, 'social' media, AI, code, and VR essentially "play" the PFC.
How these basic thought experiments, tested in cognitive neuroscience since the '90s during the overthrow of the cog-sci models of the '40s-'80s, were never taught as primer classes in AI and comp sci is beyond me. It now takes third-generation neurobiology crossed with linguistics to set the record straight.
As there are no symbols in our brains, using symbols as transport for 'intelligence' is oxymoronic. Symbols are the bottleneck to intelligence, not the referent or representation of it. This is basic stuff, folks. The brain doesn't 'store information'; it just does stuff, and as a byproduct it uses differences held in a library (the allocortex) for the action-memory (cortex).