Actually, this is how LLMs with reasoning work as well. Pre-training is analogous to a human brain getting trained on as much information as possible. There is a still-unknown threshold for how much pre-training is enough, and past it the models can start reasoning, use tools, and feed the results back in a way that resembles human thinking and reasoning. So if we don't pre-train our own brains with enough information, we end up with a weak base model. This is of course only an analogy, since we don't yet know how our brains really work, but increasingly it looks remarkably aligned with this hypothesis.
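To make the "reasoning plus tools plus feedback" loop concrete, here is a minimal toy sketch in Python. Everything in it (the stand-in model, the calculator tool, the CALL syntax) is hypothetical, just an illustration of the loop shape, not any real provider's API:

```python
# Hypothetical sketch of the loop described above: a pre-trained model
# alternates reasoning with tool calls, and each tool result is fed back
# into its context. The toy model and tool below are stand-ins.

import re

def toy_model(context: str) -> str:
    """Stand-in for a pre-trained model: it 'decides' to call a
    calculator once, then answers from the tool result in its context."""
    if "[tool result]" not in context:
        return "CALL calc: 6 * 7"
    return "Final answer: 42"

TOOLS = {"calc": lambda expr: str(eval(expr))}  # toy calculator tool

def reasoning_loop(prompt: str, max_steps: int = 5) -> str:
    context = prompt
    for _ in range(max_steps):
        step = toy_model(context)
        match = re.match(r"CALL (\w+): (.+)", step)
        if match is None:
            return step                         # no tool call: final answer
        tool, args = match.groups()
        result = TOOLS[tool](args)              # run the requested tool
        context += f"\n{step}\n[tool result]: {result}"  # feedback loop
    return context

print(reasoning_loop("What is 6 * 7?"))  # -> Final answer: 42
```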
For blog posts like this, where the content is very good but not very approachable because it assumes extensive prior knowledge, I find an AI tool very useful for explaining and simplifying. I just used the new Dia browser for this and it worked really well for me; you could also copy and paste into your favorite model provider. This way the post stays concise, and you can still use your AI tools to ask questions and get clarifications.