A real-time instance (an image or a question) goes to an area and tries to connect with related things there, something like a vector-embedding correlation. From there the process repeats: the strengthened signal/vector from memory now tries to connect with other major pathways under the new context (say 3-5 specialties in addition to the main thought), and the unconscious is just 3-5 of these parallel ideas running at once. We branch maybe 3-5 more times, so we've explored 20-30 major contexts along with their specialties, and then there's some language mechanism for relating what we've found. As we do that and select words, we get echoes of this process (based on the contexts those words invoke). In fact we sometimes even switch words before we communicate, based on whether they mesh with the original results of our thoughts.
There's always that refinement... and artificially, that could be enhanced dramatically at each of those levels.
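A minimal sketch of that branching loop, assuming "memory" is just a set of labeled embedding vectors and "connecting" means cosine-similarity nearest neighbors; the branching factor, depth, and the blend-and-recurse step are illustrative assumptions, not a claimed model of cognition:

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def branch_search(query, memory, branch=4, depth=3):
    """Iteratively expand a query vector through labeled memory.

    At each level, find the `branch` nearest contexts to each signal,
    blend them back into the signal (the "strengthened" vector), and
    recurse, so depth 3 with branch 4 touches roughly 20-30 contexts.
    """
    explored = []          # (label, level) pairs, in discovery order
    frontier = [query]
    for level in range(depth):
        next_frontier = []
        for vec in frontier[:branch]:       # cap parallel ideas per level
            scored = sorted(memory.items(),
                            key=lambda kv: -cosine_sim(vec, kv[1]))
            for label, emb in scored[:branch]:
                if all(label != seen for seen, _ in explored):
                    explored.append((label, level))
                    next_frontier.append((vec + emb) / 2.0)
        frontier = next_frontier
    return explored

# toy demo on random vectors
rng = np.random.default_rng(0)
memory = {f"ctx_{i}": rng.normal(size=64) for i in range(200)}
hits = branch_search(rng.normal(size=64), memory)
print(len(hits), hits[:3])
```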
But why aren't we building neural networks with that repeated search and growing "context"? Or are we? And why not structure long-term memory into these large, frequently-referenced pathways, with short-term memory just being an LRU "cache": not merely a memoized lookup, but a smaller graph to search first with the mechanism above, before searching the full long-term graph?
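One way that two-tier layout could look, building on the branch_search and cosine_sim sketch above; OrderedDict stands in for the LRU, and the hit threshold and capacity are arbitrary assumptions:

```python
from collections import OrderedDict

class TwoTierMemory:
    """Small short-term LRU graph searched before the full long-term graph."""

    def __init__(self, long_term, capacity=32):
        self.long_term = long_term        # full graph: label -> embedding
        self.short_term = OrderedDict()   # small LRU "cache" graph
        self.capacity = capacity

    def search(self, query, hit=0.5):
        # try the smaller short-term graph first, with the same mechanism;
        # only fall through to long-term if nothing there is close enough
        use_short = (
            self.short_term
            and max(cosine_sim(query, v) for v in self.short_term.values()) >= hit
        )
        store = dict(self.short_term) if use_short else self.long_term
        results = branch_search(query, store)
        # promote whatever we touched into short-term memory, LRU-style
        for label, _ in results:
            self.short_term[label] = self.long_term[label]
            self.short_term.move_to_end(label)
        while len(self.short_term) > self.capacity:
            self.short_term.popitem(last=False)   # evict least recently used
        return results
```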
Word2vec does something like what you're describing, for words and their contexts.
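For reference, a tiny word2vec run with gensim showing that context behavior; the toy corpus is invented, and results on something this small will be noisy:

```python
from gensim.models import Word2Vec

# toy corpus (made up): words that share contexts end up near each other
sentences = [
    ["cat", "chases", "mouse"],
    ["dog", "chases", "cat"],
    ["mouse", "eats", "cheese"],
    ["dog", "eats", "bone"],
] * 50  # repeat so the tiny model sees enough examples

model = Word2Vec(sentences, vector_size=32, window=2,
                 min_count=1, seed=0, workers=1)

# words become vectors; context-mates land close in that space
print(model.wv.most_similar("cat", topn=3))
```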