AIPedant's comments | Hacker News

Articles like this indicate we should lock down a definition of "computation" that meaningfully distinguishes computing machines from other physical phenomena - a computation is a process that maps symbols (or strings of symbols) to other symbols, obeying certain simple rules[1]. A computer is a machine that does computations.

In that sense life is obviously not a computation: it makes some sense to view DNA as symbolic, but it is misleading to do the same for the proteins it encodes. These proteins are solving physical problems, not expressing symbolic solutions to symbolic problems - a wrench is not a symbolic solution to the problem of a symbolic lug nut. From this POV the analogy of DNA to a computer program is just wrong: they are both analogous to blueprints, but not particularly analogous to each other. We should insist that DNA is no more "computational" than the rules that dictate how elements are formed from subatomic particles.

[1] Turing computability, lambda definability, primitive recursion, whatever.
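
A minimal sketch of that definition (my own toy example, nothing from the article): a small table of rules that maps strings of symbols to strings of symbols, in the Turing-machine sense.

  # Hypothetical toy example: a "computation" as rule-driven symbol rewriting.
  # The rule table maps (state, symbol) -> (new state, new symbol, head move).
  RULES = {
      ("start", "1"): ("start", "0", +1),  # flip 1 -> 0, move right
      ("start", "0"): ("start", "1", +1),  # flip 0 -> 1, move right
      ("start", " "): ("halt",  " ",  0),  # blank cell: stop
  }

  def run(tape: str) -> str:
      cells, state, head = list(tape) + [" "], "start", 0
      while state != "halt":
          state, cells[head], move = RULES[(state, cells[head])]
          head += move
      return "".join(cells).rstrip()

  print(run("1011"))  # -> "0100": symbols in, symbols out, fixed simple rules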


I don't think it's necessary to completely discard the idea. However, I do think it's important, at the end of it all, to ask: Okay, so what's the utility of this framework? What am I getting out of setting up my point of view this way?

I'm reminded of an old YouTube video [0] that I rewatched recently. That video is "Every Zelda is the Darkest Zelda." Topically, it's completely different. But in it Jacob Geller talks about how there are many videos with fan theories about Zelda games where they're talking about how messed up the game is. Except, that's their only point. If you frame the game in some way, it's really messed up. It doesn't extract any additional meaning, and textually it's not what's present. So you're going through all this decoding and framing, and at the end your conclusion is... nothing. The Mario characters represent the seven deadly sins? Well, that's messed up. That's maybe fun, but it's an empty analysis. It has no insight. No bite.

So, what's the result here other than: Well, that's neat. It's an interesting frame. But other than the thought to construct it, does it inform us of anything? Honestly, I'm not even sure it's really saying life is a form of programming. It seems equally likely it's saying programming is a form of biochemistry (which, honestly, makes more sense given the origins of programming). But even if that were so, what does that give us that we didn't already know? I'm going to bake a pie, so I guess I should learn Go? No, the idea feels descriptive rather than a synthesis. Like an analogy without the conclusion. The pie has no bite.

[0]: https://youtu.be/O2tXLsEUpaQ


> I don't think it's necessary to completely discard the idea. However, I do think it's important, at the end of it all, to ask: Okay, so what's the utility of this framework? What am I getting out of setting up my point of view this way?

That's the important question indeed. In particular, classing life as a computation means that it's amenable to general theories of computation. Can we make a given computation--an individual--non-halting? Can we configure a desirable attractor, i.e. remaining "healthy" or "young"? Those are monumentally complex problems, and nobody is going to even try to tackle them while we still believe that life is a mixture of molecules dunked in unknowable divine aether.

Beyond that, the current crop of AI gets closer than anything we have had before to general intelligence, and when you look under the hood, it's literally a symbols-in symbols-out machine. To me, that's evidence that symbol-in symbol-out machines are a pretty general conceptual framework for computation, even if concrete computation is actually implemented in CPUs, GPUs, or membrane-delimited blobs of metabolites.


The most immediate utility, if life is computation, would be to tell us that life is possible to simulate, and that AGI is possible (because if there is no "magic spark" of life, then the human brain would be an existence proof that a power- and space-efficient computer capable of general intelligence can be constructed, however hard it might be).

If life is not a computation, then neither of those are a given.

But it has other impacts too, such as moral impacts. If life is a computation, then that rules out any version of free will that involves effective agency (a compatibilist conception of free will is still possible, but that does not involve effective agency, merely the illusion of agency), and so blaming people for their actions would be immoral, as they could not at any point have chosen differently, and moral frameworks for punishment would need to center on minimising harm to everyone, including perpetrators. That is a hard pill to swallow for most.

It has philosophical implications as well, in that proof that life is computation would mean the simulation argument becomes more likely to hold.


> a computation is a process that maps symbols (or strings of symbols) to other symbols, obeying certain simple rules[1]

There are quite a number of people who believe this is the universe. Namely, that the universe is the manifestation of all rule sets on all inputs at all points in time. How you extract quantum mechanics out of that... not so sure


> In that sense life is obviously not a computation: it makes some sense to view DNA as symbolic but it is misleading to do the same for the proteins they encode.

Proteins can also be seen as a sequence of symbols: one symbol for each amino acid. But that's beside the point. Computational theory uses Turing machines as a conceptual model. The theories employ some human-imposed conceptual translation to encode what happens in a digital processor or a Lego computer, even if those are not made with a tape and a head. Anybody who actually understands these theories could try to make a rigorous argument for why biological systems are Turing machines, and I give them very high chances of succeeding.

> These proteins are solving physical problems, not expressing symbolic solutions to symbolic problems

This sentence is self-contradictory. If a protein solves a physical problem and it can only do so because of its particular structure, then its particular structure is an encoding of the solution to the physical problem. How that encoding can be "symbolic" is more of a problem for the beholder (us, humans), but as stated before, using the amino acid sequence gives one such symbolic encoding. Another symbolic encoding could be the local coordinates of each atom of the protein, up to the precision limits allowed by quantum physics.

The article correctly states that biological computation is full of randomness, but it also explains that computational theories are well furnished with revolving doors between randomness and determinism (Pseudo-random numbers and Hopfield networks are good examples of conduits in either direction).
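
To make the determinism-to-apparent-randomness direction concrete (my own toy sketch, not from the article): a pseudo-random number generator is a completely deterministic symbol-to-symbol rule whose output nevertheless looks random.

  # Toy sketch: a linear congruential generator. Same seed -> same sequence,
  # every time, yet the output passes casual inspection as "random".
  def lcg(seed: int, n: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
      x, out = seed, []
      for _ in range(n):
          x = (a * x + c) % m
          out.append(x / m)  # pseudo-random floats in [0, 1)
      return out

  print(lcg(42, 5))
  print(lcg(42, 5))  # identical list: determinism wearing a randomness costume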

> ... whatever.

Please don't use this word to finish an argument where there are actual scientists who care about the subject.


our relationship to computation got weird when we moved to digital computers. Like, I don’t think anyone was saying “life is like millions of slide-rules solving logarithms in parallel”. but now that computers are de-materialized, they can be a metaphor for pretty much anything


Good point - maybe the analogy to computation arises simply because digital computation and the synthesis of DNA, RNA and proteins are all performed by discrete-state machines?


does DNA/RNA keep state other than the position of the read head?


Not as far as I know, but that is not saying much.


By your definition, life is obviously a computation.

The symbolic nature of digital computers is our interpretation on top of physical "problems". If we attribute symbols to the proteins encoded by DNA, symbolic computation takes place. If we don't attribute symbols to the voltages in a digital computer, we could equally dismiss them as not being computers.

And we have a history of analogue computers as well, e.g. water-based computation[1][2], to drive home that computers are solving physical problems in the process of producing what we then interpret as symbols.

There is no meaningful distinction.

The question of whether life is a computation hinges largely on whether life can produce outputs that can not be simulated by a Turing complete computer, and that can not be replicated by an artificial computer without some "magic spark" unique to life.

Even in that case, there'd be the question of whether those outputs were simply the result of some form of computation, just outside the computable set inside our universe, but at least in that case there'd be a reasonable case for saying life isn't a computation.

As it is, we have zero evidence to suggest life exceeds the Turing computable.

[1] https://en.wikipedia.org/wiki/Water_integrator

[2] https://news.stanford.edu/stories/2015/06/computer-water-dro...


I think you may be forgetting about analog computers https://en.wikipedia.org/wiki/Analog_computer


I don't think they are. The things analog computers work on are still symbolic - we don't care about the length of the rod or what have you, we care about the thing the length of the rod represents.


analog computers don't generally compute by operating on symbols. For example see the classic video on fire control computers https://youtu.be/s1i-dnAH9Y4?t=496

OP's specific phrasing is that they "map symbols to symbols". Analog computers don't do that. Some can, but that's not their definition.

Turing machines et al. are a model of computation in mathematics. Humans do math by operating on symbols, so that's why that model operates on symbols. It's not an inherent part of the definition.


> analog computers don't generally compute by operating on symbols. For example see the classic video on fire control computers https://youtu.be/s1i-dnAH9Y4?t=496

> OP's specific phrasing is that they "map symbols to symbols". Analog computers don't do that. Some can, but that's not their definition.

How is that not symbolic? Fundamentally that kind of computer maps the positions of some rods or gears or what have you to the positions of some other rods or gears or what have you, and the first rods or gears are symbolising motion or elevation or what have you and the final one is symbolising barrel angle or what have you. (And sure, you might physically connect the final gear directly to the actual gun barrel, but that's not the part that's computation; the computation is the part happening with the little gears and rods in the middle, and they have symbolic meanings).


There's a confusion of nomenclature.

Computers are functional mappings from inputs to outputs, sure.

Analog fire computers are continuous mappings from a continuum, a line segment (curved about a cam), to another continuum, a dial perhaps.

Symbolic operations, mapping from patterns of 0s and 1s (say) to other patterns are discrete, countable mappings.

With a real valued electrical current, discrete symbols are forced by threshold levels.


> Analog fire computers are continuous mappings from a continuum, a line segment (curved about a cam), to another continuum, a dial perhaps.

> Symbolic operations, mapping from patterns of 0s and 1s (say) to other patterns are discrete, countable mappings.

What definition of "symbolic" are you using that draws a distinction between these two cases? If it means merely something that symbolises something else (as I would usually use it), then both a position on a line segment and a pattern of voltage levels qualify. If you mean it in the narrow sense of a textual mark, that pattern of voltage levels is just as much not a "symbol" as the position on the line segment.


To what degree is the threshold precise? Maybe fundamentally there's not that much difference.


No, analog computers truly are symbolic. The simplest analog computer - the abacus - is obviously symbolic, and this is also true for WW2 gun fire control computers, ball-and-shaft integrators, etc. They do not use inscriptions, which is maybe where you're getting confused. But the turning of a differential gear to perform an addition is a symbolic operation: we are no more interested in the mechanics of the gear than we are the calligraphy of a written computation or the construction of an abacus bead; we are interested in the physical quantity that gear is symbolically representing.

Your comment is only true if you take an excessively reductive view of "symbol."


I'm not confused, and an abacus is a digital computer.

You keep referring to what we are interested in, but that's not a relevant quantity here.

A symbol is a discrete sign that has some sort of symbol table (explicit or not) describing the mapping of the sign to the intended interpretation. An analog computer often directly solves the physical problem (e.g. an ODE) by building a device whose behavior is governed by that ODE. That is, it solves the ODE by just applying the laws of physics directly to the world.

If your claim is that analog computers are symbolic but the same physical process is not merely because we are "interested in" the result then I don't agree. And you'd also be committed to saying proteins are symbolic if we build an analog computer that runs on DNA and proteins. In which case it seems like they become always symbolic if we're always interested in life as computation.


This is where you are confused - in fact just plain wrong:

  A symbol is a discrete sign that has some sort of symbol table (explicit or not) describing the mapping of the sign to the intended interpretation
Symbols do not have to be discrete signs. You are thinking of inscriptions, not symbols. Symbols are impossible for humans to define. For an analog computer, the physical system of gears / etc symbolically represents the physical problem you are trying to solve. X turns of the gear symbolizes Y physical kilometers.


Surely an abacus is a simple form of digital computer? The position/state of the beads is not continuous, ignoring the necessary changes of position/state.


I think the notion largely boils down to another dogmatic display of the tech industry's megalomania.


In what sense? I agree the tech industry fucking sucks right now, but I don't see how this has anything to do with that.

A physical computer is still a computer, no matter what it's computing. The only use a computer has to us is to compute things relative to physical reality, so a physical computer seems even closer to a "real computer" or "real computation" to me than our sad little hot rocks, which can barely simulate anything real to any degree of accuracy, when compared to reality.


I suspect what the parent is alluding to is that we tend to reduce everything to computer-world analogies, which we believe we're uniquely qualified to analyze.

It's sort of like a car mechanic telling you "SQL query, eh? It must be similar to what happens in an intake manifold." For all I know, there might be Turing-equivalency between databases and the inner workings of internal combustion engines, but you wouldn't consider that to be a useful take.


I understand the broader point but it is not actually constitutionally problematic for the executive branch to assert that a suspect committed a crime - of course they believe that, that's why the suspect was arrested! It is better for an elected official to preface things with "allegedly" "we believe" etc, but the governor is ultimately speaking on behalf of the prosecution, not the judge. The first half of this article is based on a bad-faith misreading of the governor's words.


Well, I don't care about splitting hairs but using the arrest of the suspect as an indictment of America as an ideal is a major warning sign and I can't think of another "broader point" that would be applicable to this article. To me that's not only bad faith, it's dirty and malicious propaganda.


"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.


> Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

He made it because he predicted that it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume that he made it because he predicted other people would like it. Although of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that is somehow better than the current world without. He might or might not have been aware of it, though.

Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.


> how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive

As it stands, AI is a tool and requires artists/individuals to initiate a process. How many AI-made artifacts do you know that enjoy the same cultural relevance as their human-made counterparts? Novels, music, movies, shows, games... anything?

You're arguing that the types of film cameras play some part in the significant identity that makes Mulholland Drive a work of art, and I'd disagree. While artists/individuals might gain cultural recognition, the tool on its own rarely will. A tool of choice can be an inspiration for a work and gain a certain significance (e.g. the Honda CB77 Super Hawk[0]), but it seems that people always strive to look for the human individual behind any process, as it is generally accepted that the complete body of works tells a different story than any one artifact ever can.

Marcel Duchamp's Readymade[1] (and the mere choice of the artist) gave impetus to this cultural shift more than a century ago, and I see similarities in economic and scientific efforts as well. Apple isn't Apple without the influence of a "Steve Jobs" or a "Jony Ive" - people are interested in the individuals behind companies and institutions, while at the same time tending to underestimate the number of individuals it takes to make any work an artifact - but that's a different topic.

If some future form of AI will transcend into a sentient object that isn't a plain tool anymore, I'd guess (in stark contrast to popular perception) we'll all lose interest rather quickly.

[0]: https://en.wikipedia.org/wiki/Honda_CB77#Zen_and_the_Art_of_...

[1]: https://en.wikipedia.org/wiki/Fountain_(Duchamp)


> unless he was completely unlike anyone I've ever met,

I mean ... he is David Lynch.

We seem to be defining "predicted" to mean "any vision or idea I have of the future". Hopefully film directors have _some_ idea of what their film should look like, but that seems distinct from how they expect it will end up.


I look at it the complete opposite way: humans are defining intelligence upwards to make sure they can perceive themselves better than a computer.

It's clear that humans consider humans as intelligent. Is a monkey intelligent? A dolphin? A crow? An ant?

So I ask you, what is the lowest form of intelligence to you?

(I'm also a huge David Lynch fan by the way :D)


Intelligence has been a poorly defined moving goal post for as long as AI research has been around.

Originally they thought: chess takes intelligence, so if computers can play chess, they must be intelligent. Eventually they could, and later even better than humans, but it's a very narrow aspect of intelligence.

Struggling to define what we mean by intelligence has always been part of AI research. Except when researchers stopped worrying about intelligence and started focusing on more well-defined tasks, like chess, translation, image recognition, driving, etc.

I don't know if we'll ever reach AGI, but on the way we'll discover a lot more about what we mean by intelligence.


If you look at my comment history you will see that I don't think LLMs are nearly as intelligent as rats or pigeons. Rats and pigeons have an intuitive understanding of quantity and LLMs do not.

I don't know what "the lowest form of intelligence" is, nobody has a clue what cognition means in lampreys and hagfish.


I'm not sure what that gets you. I think most people would suggest that it appears to be a sliding scale. Humans, dolphins / crows, ants, etc. What does that get us?


Well, is an LLM more intelligent than an ant?


I would say yes. But is it more intelligent than an ant hill?


Well yes, any creation tries to anticipate some reaction, be it the audience's, the environment's, or only the creator's own.

A prediction is just a reaction to a present state, which is the simplest definition of intelligence: the ability to (sense and) react to something. I like to use this definition, instead of "being able to predict", because it's more generic.

The more sophisticated (and directed) the reaction is, the more intelligent the system must be. Following this logic, even a traffic light is intelligent, at least more intelligent than a simple rock.

From that perspective, the question of why a creator produced a piece of art becomes unimportant to determining intelligence, since the simple fact that he did is a sign of intelligence already.


"David Lynch made Mullholland Drive because he was intelligent" is also absurd.


But "An intelligent creature made Mullholland Drive" is not


It may be reductive but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brain's desire for approval from our tribal peer group.

Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs for example.


> "Making predictions about the world" is a reductive and childish way to describe intelligence in humans.

It also happens to be a leading theory in neuroscience: https://news.ycombinator.com/item?id=45058056


How would you define intelligence? Surely not by the ability to make a critically acclaimed movie, right?


He was trying to predict what movie would create the desired reaction from his own brain. That's how creativity works, it's just prediction.


Putin and Xi fantasizing about immortality via 3D-printed organs quite starkly illustrated that many adults do not understand the difference between science and science fiction.


Keep in mind that we also have no clue how general anesthesia works! It's not just psychiatry, many medications targeting the nervous system (e.g. muscle relaxants) have unknown mechanisms of action https://en.wikipedia.org/wiki/Category:Drugs_with_unknown_me...

I think you're being extremely reductive about what neuropsychiatry actually entails.


It didn't come completely out of nowhere, Euler and Bernoulli had looked at trigonometric series for studying the elastic motion of a deformed beam or rod. In that case, physical intuition about adding together sine waves is much more obvious. https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli_beam_t...

Other mathematicians before Fourier had used trigonometric series to study waves, and physicists already understood harmonic superposition on eg a vibrating string. I don't have the source but I believe Gauss even noted that trigonometric series were a solution to the heat equation. Fourier's contribution was discovering that almost any function, including the general solution to the heat equation, could be modelled this way, and he provided machinery that let mathematicians apply the idea to an enormous range of problems.
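
For concreteness, here is the standard textbook form of that solution (my own summary, not the parent's): for a rod of length L with ends held at zero temperature and initial temperature profile f(x),

  u_t = k u_{xx}, \qquad u(0, t) = u(L, t) = 0, \qquad u(x, 0) = f(x)

  u(x, t) = \sum_{n=1}^{\infty} b_n \sin(n \pi x / L) \, e^{-k (n \pi / L)^2 t},
  \qquad b_n = \frac{2}{L} \int_0^L f(x) \sin(n \pi x / L) \, dx

Fourier's claim was that the coefficients b_n can be computed for essentially any initial profile f, which is what turned a trick for special cases into general machinery.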


On the simplest end of that spectrum, Taylor series are useful because many real-world dynamics can be approximated as a "primarily linear behavior" + "nonlinear effects."

(And cases where that isn't true can still be instructive - a Taylor series expansion for air resistance gives a linear term representing the viscosity of the air and a quadratic term representing displacement of volumes of air. For ordinary air the linear component will have a small coefficient compared to the quadratic component.)
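
Spelled out (my own sketch of the standard drag model, not something from the parent comment), for a sphere of radius r moving at speed v the expansion is

  F_{drag}(v) \approx b v + c v^2, \qquad b = 6 \pi \mu r \;\; \text{(Stokes / viscous term)}, \qquad c = \tfrac{1}{2} \rho C_d A \;\; \text{(inertial term: air density } \rho, \text{ drag coefficient } C_d, \text{ cross-section } A)

For everyday objects and speeds in air, the quadratic term dominates, which is the parent's point.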


Caves of Qud is quite good, though a bit less traditional in being a big open world vs a dungeon. There are a few quirks and bugs, but the game is very fun and creative, and it has excellent music. I also love the graphics, but it is an acquired taste.

I played the Dwarf Fortress roguelike mode several years ago, and it was really more of a toy - nifty to play around with the mechanics but too dry and arbitrarily difficult to be a fun game. But almost all the dev focus was on fortress management, maybe they’ve spruced up the roguelike with the Steam release.


The permadeath mode is really not that well suited to CoQ, I find. It's a long game and most playthroughs are substantially the same for the first couple of hours. It doesn't have the fun "fresh start" feel of the early dungeon in other RLs. It's a cool feature to include for experienced players though.


I've played a lot of CoQ and totally see your point, but isn't that the same in most roguelikes? To be fair, DCSS and CoQ are the ones I've spent the most time in my life on, but in my experience with DCSS, Nethack, and Slash'Em the first few hours are pretty much the same "opening". Though it's been over a decade since I've touched most roguelikes from my youth other than Pixel Dungeon and CoQ.


I don't think so really. In those others you mentioned early drops (or other random choices like altars) can have a big impact on which direction your build goes. CoQ also has a more traditional rpg approach to quests which is completely different from the others and adds to the repetition. I think ToME is probably its closest relation, also having an overworld, skill trees you can plan out in advance, reliably placed towns & npcs. And it also has a goofy relationship to permadeath.


Good point yeah. I never played ToME but have read discussions about permadeath in ToME. Maybe I should turn off permadeath next time I play CoQ.


Human perception is not 2D, touch and proprioception[1] are three-dimensional senses.

And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.

[1] https://en.wikipedia.org/wiki/Proprioception


the sensors are 2D


Two of them, giving us stereo vision. We are provided visual cues that encode depth. The ideal world model would at least have this. A world model for a video game on a monitor might be able to get away with no depth information, but a) normal engines do have this information and it would make sense to provide as much data to a general model as possible, and b) the models wouldn't work on AR/VR. Training on stereo captures seems like a win all around.
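
As a toy illustration of how a stereo pair encodes depth (my own sketch, with made-up camera numbers): for rectified cameras, depth falls out of the disparity between the two views via depth = focal_length * baseline / disparity.

  # Toy sketch: depth from disparity for a rectified stereo pair.
  # The focal length and baseline below are hypothetical placeholder values.
  def depth_from_disparity(disparity_px: float,
                           focal_length_px: float = 700.0,
                           baseline_m: float = 0.064) -> float:
      """Depth in meters: Z = f * B / d."""
      return focal_length_px * baseline_m / disparity_px

  print(depth_from_disparity(32.0))  # -> 1.4 (meters, for these made-up numbers)

Real pipelines estimate a disparity per pixel, but the geometry is the same; this is the extra signal a monocular model has to infer instead.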


> We are provided visual cues that encode depth. The ideal world model would at least have this.

None of these world models have explicit concepts of depth or 3D structure, and adding it would go against the principle of the Bitter Lesson. Even with 2 stereo captures there is no explicit 3D structure.


Increasing the fidelity and richness of training data does not go against the bitter lesson.

The model can learn 3D representation on its own from stereo captures, but there is still richer, more connected data to learn from with stereo captures vs monocular captures. This is unarguable.

You're needlessly making things harder by forcing the model to also learn to estimate depth from monocular images, and robbing it of a channel for error-correction in the case of faulty real-world data.


Stereo images have no explicit 3D information and are just 2D sensor data. But even if you wanted to use stereo data, you would restrict yourself to stereo datasets and wouldn't be able to use 99.9% of video data out there to train on which wasn't captured in stereo, that's the part that's against the Bitter Lesson.


You don't have to restrict yourself to that, you can create synthetic data or just train on both kinds of data.

I still don't understand what the bitter lesson has to do with this. First of all, it's only a piece of writing, not dogma, and second of all it concerns itself with algorithms and model structure itself, increasing the amount of data available to train on does not conflict with it.


Incorrect. My sense of touch can be activated in 3 dimensions by placing my hand near a heat source. Which radiates in 3 dimensions.


You are still sensing heat across 2 dimensions of skin.

The 3rd dimension gets inferred from that data.

(Unless you have a supernatural sensory aura!)


The point is that knowing where your hand is in space relative to the rest of your body is a distinct sense which is directly three-dimensional. This information is not inferred, it is measured with receptors in your joints and ligaments.


No it is inferred.

You are inferring 3D positions based on many sensory signals combined.

From mechanoreceptors and proprioceptors located in our skin, joints, and muscles.

We don’t have 3-element position sensors, nor do we have 3-d sensor volumes, in terms of how information is transferred to the brain. Which is primarily in 1D (audio) or 2D (sensory surface) layouts.

From that we learn a sense of how our body is arranged very early in life.

EDIT: I was wrong about one thing. Muscle nerve endings are distributed throughout the muscle volume. So 3D positioning is not sensed, but we do have sensor locations distributed in rough and malleable 3D topologies.

Those don't give us any direct 3D positioning. In fact, we are notoriously bad at knowing which individual muscles we are using. Much less which feelings correspond to which 3D coordinates within each specific muscle, generally. But we do learn to identify anatomical locations and then infer positioning from all that information.


Your analysis is incorrect again. Having sensors spread out across a volume is, by definition, measuring 3D space. It’s a volume. Not a surface. Humans are actually really good at knowing which muscles we are using. It’s called body sculpting. Lifting. Body building. And all of that. So nice try.


Ah good point. 3D in terms of anatomy, yes.

Then the mapping of those sensors to the body's anatomical state in 3D space is learned.

A surprising number of kinds of dimension are involved in categorizing sensors.


Agreed :)

It doesn’t make it any less 3d though. It’s the additive sensing of all sensors within a region that gives you that perception. Fascinating stuff.


The GPCRs [1] that do most of our sense signalling are each individually complicated machines.

Many of our signals are "on" and are instead suppressed by detection. Ligand binding, suppression, the signalling cascade, all sorts of encoding, ...

In any case, when all of our senses are integrated, we have rich n-dimensional input.

- stereo vision for depth

- monocular vision optics cues (shading, parallax, etc.)

- proprioception

- vestibular sensing

- binaural hearing

- time

I would not say that we sense in three dimensions. It's much more.

[1] https://en.m.wikipedia.org/wiki/G_protein-coupled_receptor


And the brain does sensor fusion to build a 3D model that we perceive. We don't perceive in 2D.

There are other sensors as well. Is the inner ear a 2D sensor?


Inner ear is a great example! I mentioned in another comment that if you want to be reductive the sensors in the inner ear - the hairs themselves - are one dimensional, but the overall sense is directly three dimensional. (In a way it's six dimensional since it includes direct information about angular motion, but I don't think it actually has six independent degrees of freedom. E.g. it might be hard to tell the difference between spinning right-side-up and upside-down with only the inner ear; you'll need additional sense information.)


It is simply wrong to describe touch and proprioception receptors as 2D.

a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.

b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.

Maybe you mean that the surface of the skin is two dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion" https://en.wikipedia.org/wiki/Body_transfer_illusion


I think you mean 0D for individual receptors.

Point (i.e. single point/element) receptors that each encode a single magnitude of perception.

The cochlea could be thought of 1D. Magnitude (audio volume) measured across 1D = N frequencies. So a 1D vector.

Vision and (locally) touch/pressure/heat maps would be 2D, together.


No, the sensors measure a continuum of force or displacement along a line or rotational axis, 1D is correct.


That would be a different use of dimension.

The measurement of any one of those is a 0 dimensional tensor, a single number.

But then you are right, what is being measured by that one sensor is 1 dimensional.

But all single sensors measure across a 1 dimensional variable. Whether it’s linear pressure, rotation, light intensity, audio volume at 1 frequency, etc.


That's not what "coherent computational representation" means in this context. It means being able to reliably apply the rules of Othello / chess / etc to the current state of the board. Any competent amateur can do this without studying thousands of board positions - in fact you can do it just from the written rules, without ever having seen a game - they have a causal, non-heuristic understanding of the rules. LLMs have much more trouble: they don't learn how knights move, they learn how white knights move when they're in position d5, then in position g4, etc etc, a "bag of heuristics."

Notably this is also true for MuZero, though at that scale the heuristics become "dense" enough that an apparent causal understanding seems to emerge. But it is quite brittle: my favorite example involves the arcade game Breakout, where MuZero can attain superhuman performance on Level 1 and still be unable to do Level 2. Healthy human children are not like this - they figure out "the trick" in Level 1 and quickly generalize.
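
To make the "just from the written rules" point concrete (my own toy sketch, not from any paper under discussion): the knight's move is one small general rule applied uniformly to every square and both colors, rather than a table of per-square cases.

  # Toy sketch: the knight rule, written once, works for any square and color.
  FILES = "abcdefgh"

  def knight_moves(square: str) -> list[str]:
      f, r = FILES.index(square[0]), int(square[1])
      jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
               (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
      return [FILES[f + df] + str(r + dr)
              for df, dr in jumps
              if 0 <= f + df < 8 and 1 <= r + dr <= 8]

  print(knight_moves("d5"))  # ['e7', 'f6', 'f4', 'e3', 'c3', 'b4', 'b6', 'c7']
  print(knight_moves("a1"))  # ['b3', 'c2'] - corners need no special casing

The "bag of heuristics" failure mode is roughly the opposite: memorizing the d5 list, the g4 list, and so on, without ever compressing them into the rule.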

