2. The world is a latticework of facts
I have always found claims of knowledge to be a little weird. I mean, sure, it's great that I know a lot about the history of marmalade, and the Aztec civilisation, and the fall of the gold standard. But I'm not sure what knowing these things means. Do I have some sort of internal implicit model that these new pieces of information glom on to? Or do I have a simplified version of the events that, once stripped of all the 'non-essential' bits, creates an internal story representation, which I can then use and reuse like a knowledge baller? Or is it all immaterial, and all the pieces of information have done is create and strengthen my internal model of the world, so that the next time an unknown comes up the neural net that is my brain pops out a better answer? Is it all of the above together, and a whole lot of other things? What does knowing something even mean?
So I come back to my original problem. Claims of knowledge feel weird. What does weird mean? Incomplete, unattached, as if they are super interesting pieces of art, but unclear on what type of gallery they should be placed in. As I write all of this down, throwing my confusion out on paper in case it evaporates, the main utility I can identify in learning something is signalling. I like knowing things, and therefore I try to know them. Post hoc, knowing things feels nice. Depending on the time, place and circumstance, knowing things could be highly useful, though except for carefully calibrated times and places and circumstances, this is more an axiom than a firm conclusion. I believe it to be so because it has been true about several things in the past, and so it shall be.
So what's the use of knowing about the Aztecs? There's some (very minor) signalling utility in being able to whip it out and impress people at the right times at the right parties. But honestly, that feels like a stretch. By that metric, I could just as easily learn about several hundred other things. There's potentially some self-improvement I've gained through knowledge of Tenochtitlan deities, but I kind of have to take that on faith at this point, since there isn't much direct evidence of it.
All of which brings up three questions then: 1) why learn anything at all? 2) how to learn or know about something? and 3) what happens when you do learn something?
And that's the latticework we speak about.
All knowledge that we acquire gets hung on our own version of a lattice. Most of the time we might not even know that that's where it's getting hung, but it exists nevertheless. That becomes the definition of our causal-web. Each additional piece of knowledge helps expand the web. With the causal-web in place, created through and interacting across multiple large clusters, it stands to reason that the webs interact with each other to create a system.
When we try to understand how something happens, we have to dig deep to understand all the relevant factors. But the causal-web is usually incredibly difficult to decipher in full. How do we solve for this? We assume multiple parallel worlds in which randomness plays a part, and write a story.
Facetiousness aside, we look for entire groups of variables, whole branches and leaves of the causal-web, to brush aside and say, "we don't know how all of that will actually play out and how they will inter-affect each other, so let's just say that usually it does this thing, at least seven out of ten times, because that's what we've seen happen before." And that becomes a way to reason about those insanely complex causal-webs which would otherwise be indecipherable.
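That "brushing aside" can be made concrete with a toy sketch (entirely my own illustration, with made-up probabilities): simulate a small causal branch in full detail, then replace the whole branch with a single "it works about seven times out of ten" number, and check that the one-number summary tracks the detailed version closely enough to reason with.

```python
import random

random.seed(42)

def full_branch():
    """Simulate a small causal branch in detail: three noisy,
    interacting factors that jointly determine an outcome."""
    a = random.random() < 0.9                   # factor A usually fires
    b = random.random() < (0.8 if a else 0.3)   # factor B depends on A
    c = random.random() < 0.7                   # factor C is independent noise
    return b and (a or c)                       # the outcome we care about

def abstracted_branch(p=0.7):
    """Brush the whole branch aside: 'seven out of ten times it
    does this thing' becomes a single coin-flip with bias p."""
    return random.random() < p

n = 100_000
full = sum(full_branch() for _ in range(n)) / n
abstracted = sum(abstracted_branch() for _ in range(n)) / n
print(f"detailed simulation: {full:.2f}, one-number summary: {abstracted:.2f}")
```

The detailed branch lands at roughly 0.74, so a flat "seven out of ten" summary loses a little fidelity but remains a serviceable stand-in for the whole tangle.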
There exists a large (and confusing) literature on whether probability is a state of the world or a state of one's own uncertainty. While the answers might elude us, they don't affect the conclusion: this is the only way we have to reason through the uncertainty brought about by trying to untangle the intricate web of causality in our universe.
When smaller components interact and affect each other, the predictability of the overall impact drops sharply as the size and number of those underlying components increase. Despite the popularisation of chaos theory as a symbol of dinosaur survival, it remains true that emergent phenomena are incredibly difficult to predict in advance.
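That unpredictability falls out of even the simplest systems. A minimal sketch using the logistic map (the textbook chaos example, chosen by me as an illustration rather than anything the text specifies) shows two trajectories that start a billionth apart ending up nowhere near each other:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# At r = 4.0 the map is fully chaotic: nearby starting points
# diverge exponentially until the difference saturates.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)   # perturb the start by one part in a billion
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 50 steps: {max(gap):.3f}")
```

One rule, one variable, no randomness at all, and yet knowing the starting point to nine decimal places still doesn't let you predict where the system will be fifty steps later.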
Individual threads, as they flourish and branch and combine and recombine with each other, create highly detailed tapestries that become intertwined in ways that aren't easily separable. The connections amongst them could be in the form of loops that provide feedback to the nodes themselves - either positive or negative. Some aspects increase with every positive response and create a runaway positive feedback loop, at least until it hits a threshold and shuts down. Others end up becoming self-policing and keep parts of systems under control, at least until a regime change.
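A minimal sketch of those two loop types, under entirely made-up numbers: a node whose level is multiplied by a gain each step. A gain above one is the runaway positive loop, growing until a threshold "shuts it down"; a gain below one is the self-policing, damping loop.

```python
def run_feedback(gain, steps=40, threshold=100.0):
    """Toy feedback node: each step the level is multiplied by `gain`.
    gain > 1 models a runaway positive loop (capped by the threshold);
    gain < 1 models a self-policing, negative loop that damps out."""
    level = 1.0
    history = [level]
    for _ in range(steps):
        level = min(level * gain, threshold)  # the threshold caps the runaway
        history.append(level)
    return history

runaway = run_feedback(gain=1.3)   # positive feedback: grows, then hits the cap
damped  = run_feedback(gain=0.7)  # negative feedback: decays toward zero
print(f"runaway loop ends at {runaway[-1]}, damped loop ends near {damped[-1]:.6f}")
```

The runaway trajectory saturates at the threshold while the damped one collapses toward zero, the two basic behaviours the paragraph describes, before any of the loop-on-loop interactions that make real causal-webs hard to separate.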
You could try to control the interactions, at least in smaller sub-segments of the causal-web. But eventually the outside will seep in, and the world will get more confusing, and the easily visible cause-and-effect patterns will fade, and following the thread of a particular event will become harder and harder. As the world encompassed inside the causal-web expands, the 'surface area' of exposure increases and the number of potential outcomes increases dramatically. Applying a selection pressure in localised regions could help "prune" the causal-web and shape the form into which the surface area evolves, but this would have to be done as a feedback mechanism that helps shape the evolution, not as a predictive process. This also means that the narrower the vision of the end state, the tougher any 'shaping' will be, since the pathways will be that much more unclear.
And that's the price. Despite the incredible flourishing of chaos and randomness working together to create wonderful and diverse outcomes at every endpoint of the causal-web, the costs come in the form of opacity to immediately penetrable reason, and the inability to predict potential outcomes in any easily systematisable fashion. The price of complexity is uncertainty.