Do you have a source for this? Most information I’ve seen around this (e.g. Acquired podcast, from the Costco side) claims strong positive relationships.
Their discussion is super relevant to exactly what I wrote --
* They note speed benefits
* The quality benefit they note is synonym search... which agentic text search can do: agents can guess synonyms on the first shot for you, e.g., `navigation` -> `nav|header|footer`, and they'll be iterating anyway
To truly do better, without making the infra experience stink, is real work. We do it in our product (louie.ai) and our service engagements, but there are real costs and benefits.
I made an Obsidian extension that does semantic and hybrid search (RRF with FTS) using local models. I have done some knowledge-graph and ontology experimentation around this, but nothing that I’d like to include yet.
This is specifically a “remembrance agent”, so it surfaces related atoms to what you’re writing rather than doing anything generative.
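For the curious, the Reciprocal Rank Fusion mentioned above can be sketched in a few lines (a hedged sketch, not the extension's actual code; `k = 60` is the conventional constant from the original RRF paper, and the ranked lists here are hypothetical):

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc ids (best first) via RRF.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly by multiple retrievers bubble to the top.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a semantic retriever and an FTS retriever:
semantic = ["a", "b", "c"]
fts = ["b", "d", "a"]
fused = rrf_fuse([semantic, fts])  # "b" wins: ranked well by both lists
```

The appeal of RRF is that it needs only ranks, not scores, so the semantic similarity scores and FTS relevance scores never have to be normalized against each other.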
I had the good fortune of seeing Lawrence of Arabia in 70mm in a theater and then going to watch Prometheus within the same two week span. It gave me a much greater appreciation for the movie [Prometheus], and what it was trying to do.
Also, in practice we work with representations of numbers, not the numbers themselves. So there are some patterns where the representation is influenced by the base we encode them in. That's not specific to primes, of course.
For example, length in digits, or equivalently weight in bits, will vary depending on the base, or more generally on which encoding system is used. Most encodings, though, require that the convention itself also be transmitted at some point. Primes, on the other hand, are supposedly already accessible from anywhere in the universe. That's probably part of what makes them so fascinating.
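The base-dependence of "length" is easy to see in a couple of lines (a minimal illustration; the choice of 104729, the 10,000th prime, is arbitrary):

```python
# The same prime has a different digit length in every base,
# even though the number itself is unchanged.
p = 104729  # the 10,000th prime

digits_base10 = len(str(p))         # decimal representation
digits_base2 = len(format(p, "b"))  # binary representation
digits_base16 = len(format(p, "x")) # hexadecimal representation
# 6 decimal digits, 17 bits, 5 hex digits -- the "size" is a
# property of the encoding, not of the prime.
```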
I like the idea of self-hostability, but not having to think about the deployment of the frontend piece has been a huge accelerant for me, someone who typically thinks only of ML and backend components.
> You have to remember EVERYTHING. Only then you can perform the cognitive tasks necessary to perform meaningful knowledge work.
You don't have to remember everything. You have to remember enough entry points and the shape of what follows, trained through experience and going through the process of thinking and writing, to reason your way through meaningful knowledge work.
"It is requisite that a man should arrange the things he wishes to remember in a certain order, so that from one he may come to another: for order is a kind of chain for memory" – Thomas Aquinas, Summa Theologiae. Not ironically, I found the passage in my Zettelkasten.
It's weird to read this from zettelkasten.de, given that the method is precisely about cultivating such a graph of knowledge. "Knowing enough to begin" seems to me to be the express purpose of writing and maintaining a zettelkasten and other such tools.
I arrange my code to follow a certain order, so that I can get my head back into a given module quickly. I don't remember everything; there's too much over the weeks, months, and years. But I can remember enough to find what I need to know if I structure it properly. Not unlike, you know, a Zettelkasten.
This is task-specific. Consider having a conversation in a foreign language. You don't have time to use a dictionary, so you must have learned words to be able to use them. Similarly for other live performances like playing music.
When you're writing, you can often take your time. Too little knowledge, though, and it will require a lot of homework.
There might be words I don’t use or chords I don’t know. It doesn’t matter though because part of expertise is being able to consult a reference and go “of course”, implement it, and keep moving.
Actually, this is how LLMs (with reasoning) work as well. Pre-training is analogous to the human brain getting trained on as much information as possible. There is a yet-unknown threshold of what counts as enough pre-training, after which the models can start reasoning and using tools and the feedback from them to do something that resembles human thinking. So if we don't pre-train our brains with enough information, we will have a weak base model. Again, this is of course more of an analogy, as we don't yet know how our brains really work, but more and more it looks remarkably aligned with this hypothesis.
Of course you have to remember everything. Your brain stores everything, and you then get to add things by forgetting, but that does not mean you erase things. The brain is oscillatory, it works somehow by using ripples that encode everything within differences, just in case you have to remember that obscure action-syntax...a knot, a grip, a pivot that might let you escape death. Get to know the brain, folks.
Interesting take.
I respectfully differ. IIRC, Feynman said something akin to my POV:
Brains are for thinking.
Documents / PKM systems / tools are for remembering.
IOW: take notes, write things down.
FWIW I have a degree in cognitive psychology (psychobiology, neuroanatomy, human perception) and am an amateur neuroscientist. Somewhat familiar w/ the brain. :)
I'd read The Spontaneous Brain by Northoff (Copernican, irreducible neuroscience), or Buzsáki on oscillatory neurobiology.
The brain is lossless.
I would agree that external forms of memory are evolutionarily progressive, but the ability to utilize those external forms requires a lossless relationship.
Until we can externalize that direct perception, and grasp that the infinitely inferior, arbitrary externals (symbols, words) are correlated through superior, lossless, concatenated internals (action-neural-spatial syntax), the externals remain deeply inferior, lossy forms.
But taking notes and writing ideas out requires that we think them through...which we usually don't do otherwise. This has been a commonplace of the intellectual life for centuries.
Words and thoughts are wholly separate. Notes aren't the direct results of perception, they are more like sportscasters reading the mind of pitchers.
Notes point to thoughts or observations, they aren't the thoughts themselves.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.”
– Ev Fedorenko, Language Lab, MIT, 2024
I did not say that my brain uses linguistic representations internally when I think; I said that the process of turning my ideas into words helps me think.
Actually you said "writing ideas out requires that we think them through" and this isn't what's happening in brains. In actuality, words interfere with our ability to think.
Fourier transforms are lossless. If it entered the oscillations of senses, it's still there in your brain. You may never need it, but every action is detailed by difference.
A Fourier transform is just a change of coordinates. It has nothing to do with the signal per se. If you have a signal which was measured or recorded with finite precision (as any signal must be), then the Fourier transform, as a pure mathematical object, simply preserves the same amount of loss that the original signal had. But to do even that, we would need to compute the transform on hardware that could represent real numbers. That hardware does not exist in computers or in your brain, so a Fourier transform is lossy in those cases.

Still, the idea that your brain encodes all information in oscillations is not accurate: your temporary electrical activity can be substantially disrupted without you losing your memories, which suggests very strongly (to put it mildly) that some of your memories are encoded chemically and physically, in changes to the connectivity between neurons that do not depend on persistent electrical activity. That encoding scheme must be lossy.
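The finite-precision point is easy to demonstrate numerically (a small illustration using NumPy's FFT; the signal is just random noise):

```python
import numpy as np

# A discrete Fourier transform is a change of coordinates:
# ifft(fft(x)) recovers x, but only up to floating-point error,
# because the transform runs on hardware with finite precision.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

x_roundtrip = np.fft.ifft(np.fft.fft(x)).real
err = float(np.max(np.abs(x - x_roundtrip)))
# err is tiny, but on real hardware it is not exactly zero.
```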
Fourier transforms are simply ways of interpreting. The senses restrict the existing memory/engram/affinity of those senses: we don't yet know how much is lost, or whether any of it is lost. In the schema, we must assume sensory inputs are preserved in some way (study those with photographic or eidetic memory as an example), and it is assumed the preservation is chemical in nature, representing or co-representing the oscillations, yet we're not sure what exactly this is as of 2024 (I haven't finished reading this year's literature). Of course there are disruptions, and sleep: you do not remember the sounds around you when asleep. But if you recorded an event, it is still there.

By lossless, I am referring to the idea that there is no true separation between the world and the brain. While the senses mediate, the fields interact. Would the toroidal manifold of the entorhinal cortex exist without a Copernican-Mach universe? Of course not. In essence, as Northoff illustrates on the cover of his latest book, the universe is within us. He is being neither poetic nor ironic, rather paradoxical. Brains are oscillatory Copernican bulbs; the design elements of the brain reflect the Machian idea that everything affects everything.
Without direct perception, and using such poor tools as symbols and narratives to externalize memory, we're deeply impoverished as to the nature of memory and our ability to access it. But once we have a better grasp of the neuronal units, spatial-syntax, we will unlock every memory.
Also to consider are the shapes and phases between oscillations. "It’s high-dimensional complexity; the mind is an attractor in high-dimensional phase space formed between neural oscillators." Emergent properties are not reducible to their constituent parts.
Fourier transforms are lossless, but what implementation are you referring to that losslessly implements one?

To my knowledge, practical Fourier transforms fix the number of sine waves they will compute and a window of time to look at. Those limitations result in loss.
But taking just the brain: at some point the person will die and decompose. How are you going to get the oscillations back out of the rotted flesh? There has to be some form of loss to the brain.
We only need brains when we're alive, so extracting the points isn't required.
In terms of brains, the math is used to model the irreducible occurrences in brains: that everything is still in there. So the math only gives us a window into the complexity. Brains don't necessarily compute or calculate. As an analog, an analogue of differences, the brain never has to exclude, or experience loss.
For the details: Rhythms of the Brain, or both volumes of Unlocking the Brain.
I don't think that the point of the article was "you are dumb if you don't remember absolutely everything".
The point, I believe, was that the more you remember, the better you can think. As in you should strive to remember stuff, and not just be lazy and rely on LLMs. I agree with that.
You only need the initial seed to restore the full state, provided you can reason your way from there. If you haven't applied yourself to problem solving, then perhaps you might need to memorize the full state.
Executing on meaningful knowledge work also might require many different paths, depending on the context and the environment. To me it's more about the method of inquiry and how you begin than it is the specific content. Sure, more individual facts help to guide that inquiry, but at any given moment you're only truly going to be able to recall a subset of those.
Fundamentally the goal of an insurance company is to pool risk and distribute it so that catastrophic events can be covered. These areas have too much risk (and certainty in the occurrence of catastrophic events) for the pooling to be viable.
Homeowners in low-risk regions like upstate NY actually have to subsidize the solvency of insurers offering coverage in extreme-risk regions like Florida and the south.
Everyone's home insurance premiums should be lower than they actually are. Except they're not, because we have to pay to offset the cost of all the people rebuilding their houses on the Florida coast every few years.
I mean that's kind of a strong claim. Premiums are priced by risk. If insurees of one company in Florida are getting more than they pay in aggregate, then other insurance companies which correctly price the risk in NY will take all their customers.
This only holds for government insurance programs. And in that case all taxpayers are subsidizing them.
I can't say I understand this. To me it seems like if you know the average of claims across time will be $X, then you just sell the premium for more than $X. It could be seemingly absurd: if you expect to replace a house every 10 years, then you set the annual premium >= ($REPLACEMENT / 10), all averaged across the pool.
The pools can be different based on the US state and the type of insurance based on the location where the insurer is doing business. For a region where a major dangerous weather event is a certainty on a yearly basis, I (a layman in insurance) would expect the premium would approach just the full cost of replacing the property rather than being something you barely think about once a year. Hence the pricing out.
Part of the problem is that insurers are in the business of predicting the future, which has gotten very hard recently (global warming, cost shocks).
The other constraint is that, jurisdiction dependent, rate hikes are subject to regulatory approval.
Most importantly, the insurance industry does not make much if any money on underwriting. They make their money on investing the float, which only exists if they sell policies. That’s why insurers for the most part are ok posting slightly negative margins on their underwriting.
Well, there's overhead like the cost of administering claims, etc., so actually if it were literally replacing the house every 20 years, the annual premium would be more like house$ / 10, not / 20.
But if it gets even remotely close to that point, the yearly insurance is a significant portion of the house's worth, and it's obviously not affordable/insurable, whatever term one wants to use.
At that point, if one can't self-insure, then they can't afford the house.