This is a follow-up to the Computational Metaphor thread, describing the differences in processing between Ni/Se (V) and Ne/Si (M) by way of analogy. This OP won't contain computer code yet, but that may come in later posts. For a start, I aim to represent the differences using visualizations in a PowerPoint presentation, which can be found at the links below. This remains a draft and may still need further refinement, but it's presented here for initial feedback. Thank you so much for reading.
It's better when looked at in PowerPoint, or in Google Slides by saying yes to this prompt box:
You'll notice the terms V and M are used in this document, divided into "+" and "-" (for e and i). The Jungian names of the functions are mentioned in parentheses only below the titles.
This is not meant as any disrespect to Jung; it has been done for the purpose of offering complete clarity in terminology. If I'm to make a serious academic case for these cognitive dynamics, I will need to describe them in a way that's not contingent on older and hazier material like Jung's, which unfortunately lacks scientific standing as it stands. Science is also very precise, so each term has to be well defined as "just" what it is, and nothing else. In that spirit, what I mean by V and M is properly described only in the following documents, and nowhere else in Jungian literature, at least not in this computational format.
Thanks again, and let me know your thoughts!
I'll reserve this post as well, for the computer code to follow once it's fully written.
Nice! I would love to be able to connect this up to some real life examples, so it could really come to life for me (sorry, I'm a sensor lel).
So far, it reminds me of the idea that occurred to me while talking to someone with Ne, that Ne/Si users perceive things as discrete, and Se/Ni users perceive things as continuous.
Sorry! I gave almost no context or introduction to what this is. It's probably undecipherable as it is. Here's a bit of background:
The above two documents are first drafts towards describing a cognitive architecture (thank you Supah for pointing me to this field). When completed and refined I am hoping it can form something which can maybe be called the Cognitive Typology Architecture (CTA). It would be a program-set meant to be an analogy for cognitive computation, which could also be tested in computer code.
The translation of the metabolism of functions into computer language opens up so many possibilities, both in terms of application but also in scientific testing.
Now, since the hypothesis is that these cognitive processes exist at the pre-thought and pre-context level, as operations that happen on the millisecond scale, describing them with tangible examples is always going to be a problem. Hence nothing in the provided PowerPoints deals with specifics, just the computational differences. But I can try to give some examples, so long as they're understood as stand-ins for general variables. 🙂
The first thing I need to explain further is the definition of "objects." Variables such as A, B, C are objects. However, to understand objects one has to understand the emergent layering of complexity that the brain undergoes. This can be explained through an introduction to neural networks, where the layering becomes evident: lower-level neural layers process smaller bits, while higher-order layers process the aggregate of those outcomes. This is a diagram by Kurzweil:
^ ..in "How to Create a Mind" (chapter: A MODEL OF THE NEOCORTEX: THE PATTERN RECOGNITION THEORY OF MIND) ...showing an oversimplification of how the brain might come to register a "pattern" (or object) such as the letter "A", or the higher-order object "Apple."
Humans don't initially think in language/letters; we think in symbolism more generally. Language is one form of symbology. When we see a dog or a cat, we're registering nameless objects (fur, claws, nose, ears), which then get bundled into higher-order objects like "my dog Spot."
To understand how neural layers create higher-order objects, this video presents a good introduction for those interested. It goes over how neural networks might create the number categories 0 through 9:
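The layering idea can also be put in toy code. This is only an illustrative sketch of hierarchical aggregation, loosely in the spirit of Kurzweil's pattern recognizers; all the feature names, layer contents, and the 0.75 threshold are invented for the example:

```python
# Layer 1: raw low-level features detected in the input (illustrative names)
# Layer 2: mid-level objects defined as bundles of lower-level features
MID_LEVEL = {
    "dog": {"fur", "wet-nose", "floppy-ears", "tail"},
    "cat": {"fur", "claws", "tail"},
}

# Layer 3: a higher-order object defined over mid-level objects
HIGH_LEVEL = {
    "my dog Spot": {"dog", "red-collar"},
}

def recognize(features, layer, threshold=0.75):
    """Return the objects in `layer` whose defining features are
    sufficiently present (>= threshold fraction) in the input set."""
    hits = set()
    for name, required in layer.items():
        overlap = len(required & features) / len(required)
        if overlap >= threshold:
            hits.add(name)
    return hits

# Nameless low-level registrations bundle upward into named objects:
seen = {"fur", "wet-nose", "floppy-ears", "tail", "red-collar"}
mid = recognize(seen, MID_LEVEL)          # -> {"dog"}
high = recognize(seen | mid, HIGH_LEVEL)  # -> {"my dog Spot"}
```

The point is only the shape of the process: each layer's outputs become the next layer's inputs, which is the sense in which "objects" exist at every scale.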
Now, as it relates to the computations in the two documents of the OP, the scale of the layer in question is unspecified. This is because V and M (as well as ALE and EDA) would essentially be self-similar at every scale, when coded properly. Hence, scale does not matter for comprehending the general data structure.
I wonder if that makes any more sense. It's a complicated topic and I have a grasp of it personally but maybe I haven't learned how to communicate it to others yet. But I hope this adds a little clarity.
As for tangible examples, I won't succeed in giving a good one off the top of my head, but I'm gonna try anyhow. Going back to the PowerPoint document, let's say we plug some examples into the variables:
[ [walking, shoes, sidewalk, leash, dog, bark],   <-- array 1
  [driver, car, road, wheels, jaguar, luxury] ]   <-- array 2
Here we have two arrays: the first can be summarized as "walking the dog" and the second as "rich person driving a luxury Jaguar." These are 'episodes' which Pi (Ni/V- and Si/M-) has archived.
Now let's say there are "hits" for [sidewalk] and [leash], but nothing else from that array: a woman is walking on the sidewalk with a leash in hand, but no dog or pet. Let's also say there are "hits" for [luxury], [car], and [driver]: a man is parked outside in his fancy car.
So the situation is like this: A person is walking on the sidewalk with a leash in hand, but no dog. There's also a person parked on the road in a luxury car.
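The matching step in this situation can be sketched in a few lines. This is a hedged toy model, not the promised CTA code: `hits_per_episode` is an invented helper, and "full match" is deliberately the crudest possible criterion:

```python
# Two archived Pi episodes, as in the example arrays above
episodes = [
    ["walking", "shoes", "sidewalk", "leash", "dog", "bark"],  # "walking the dog"
    ["driver", "car", "road", "wheels", "jaguar", "luxury"],   # "luxury Jaguar"
]

def hits_per_episode(present, episodes):
    """Count how many objects of each archived episode are present now."""
    return [len(set(ep) & set(present)) for ep in episodes]

# Objects "hit" in the present scene
present = ["sidewalk", "leash", "luxury", "car", "driver"]

scores = hits_per_episode(present, episodes)  # partial hits in both episodes
full_match = any(score == len(ep) for score, ep in zip(scores, episodes))
# full_match is False: no archived episode covers this exact combination,
# which is the condition under which both V+ and M+ would activate
# ("What's going on?") and ask for more objects.
```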
There is no episode archived with this exact combination of elements yet, which means both V+ and M+ activate. "What's going on?" is the question. And they ask for more objects (triggering eye-toggles). V+ is looking for more hits that would pull up a theme, so it "tunnels" forward, such as by taking a closer look at the driver, his expression, what he might be doing there, in this neighborhood that doesn't match the high status of his car. He finds out the driver is looking back at the person walking on the sidewalk, as if waiting for her to get into the car. He looks nice. The woman has an angry look on her face.
V+ and V- synthesize into an understanding (at a glance) that this is probably a search for a lost dog, and this rich couple is looking for him in this neighborhood. She's upset due to the lost dog.
Oppositely, M+ looks for more information of an associative nature, and also notices that the lady looks angry. The leash and the angry look form an association between themselves (a cluster). For example: "Who is she going to strangle with that leash? Maybe the guy in the car? I bet she could benefit from the financial upgrade of stealing that car too, to get out of this neighborhood."
M+ and M- then synthesize into an understanding (at a glance) that this lady may be up to no good, and she might be going after the guy in the car.
The reality is: the woman with the leash and the man in the car don't know each other. The woman is from this neighborhood and she did lose her dog, and is upset, but the man in the luxury car is not helping her look for it; he's just checking her out.
In the V scenario, the person was correct to assume the lady had lost her dog and was upset and looking for it, but wrong to assume the luxury-car guy was helping her look for it, because for V it was a "better fit" to the theme to also explain the luxury car in relation to a rich couple looking for their dog. It would answer more things in fewer threads.
In the M scenario, the person was correct not to correlate the lady and the man, but the connection of her anger with the leash, and of that to a murderous rage, was farfetched. This was an "isolated association" (a mini-skit) that played in the Ne user's mind, but which lacked deeper Pi support. In reality, the Ne user may never have actually seen anyone act that way in a murderous rage, but "whip/leash" and "angry face" belong to the same category of "hostile" and so they associated, even though the two have never been seen together in reality before.
Moving on, after this situation is over, without knowing what the truth was, the changes to the V data structure can roughly be described as a new scenario in which rich people are now understood as sometimes going so far as to look in old neighborhoods for their beloved dog.
And moving on, for the M data structure, the changes can roughly be encapsulated by an additional episode/anecdote that can be recalled later as a stand-alone event. For example "Hey this morning I almost saw some lady strangle a guy." This doesn't necessarily change the M user's overall data structure as a whole, although over time the aggregate activity of these anecdotes does affect what they recall later, and therefore what their overall worldview is.
But at each step, and in each situation, contextual associations are made, based on what is in front of them, and that's archived independent from what happened before.
The reason it's an anecdote for Si is that the same episode will not be remembered the same way twice, unless most or all variables are the same. Every time there's a recall, it's reshuffled in Si/Ne because not all objects are utilized, only the highly associated ones. This means that, by necessity, each episode is its own standalone instance that is not replicated later.
Hence, really stepping back into the episode in full may require greater insistence on all variables being the same as before. Contingency on discrete data in storage creates a need for more exactitude in the present context for a complete repetition to occur.
With Ni, new objects are appended onto the same episodes (loops), rather than forming new arrays at the bottom of the matrix. This creates a 'theme' which keeps getting recalled in full every time any of the elements within it are present in the 'now'. Si, by contrast, only recalls/keeps the discrete elements that relate to the present.
Therefore we can codify this by saying that Pi has episodes. These episodes are "themes" if new objects keep getting appended to the same general episode, expanding it, and recalling the same episode. These episodes are called "anecdotes" if new objects are only discretely recalled from an episode, as needed, and re-settle into new episodes (anecdotes) each time. The mathematics of the array system in the document naturally gives rise to this emergent difference.
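The two update rules above can be contrasted in a minimal sketch. Both start from the same archived episode; only the write path differs. The function names `v_update` and `m_update` are my own invented labels, not part of any formal spec:

```python
def v_update(matrix, new_objects, episode_index):
    """Ni/V-: append new objects onto an existing episode, expanding it
    into a growing 'theme'. The matrix keeps the same number of rows."""
    matrix[episode_index].extend(new_objects)
    return matrix

def m_update(matrix, new_objects):
    """Si/M-: archive the new combination as its own standalone episode
    ('anecdote'), appended at the bottom of the matrix."""
    matrix.append(list(new_objects))
    return matrix

v = [["walking", "dog", "leash"]]
m = [["walking", "dog", "leash"]]

v_update(v, ["rich-couple", "lost-dog-search"], 0)
m_update(m, ["angry-woman", "leash", "strangle?"])

# v is still one (expanded) episode; m has gained a new discrete row
```

Played over many situations, the first rule keeps compounding the same rows while the second rule keeps growing new ones, which is the emergent "theme vs anecdote" difference.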
(I'm gonna continue dropping off notes here, sorry for the multi-post.)
The idea behind this equation is to describe the key computational structure which, when played over and over, even over itself, hundreds of times, would give rise to all of the emergent effects we then identify as the 'properties' of the function axes. If done right, a simple algorithm for each function and axis should be able to generate all of the effects we see. And this is the core aim of this endeavor. Similar to how a Mandelbrot set produces emergent complexity from a simple equation...
I am confident that all the various features we identify, at the phenomenological and behavioral level, can be traced back to a single equation-set. So I am actually trying to describe the function as literally only the order of operation at root to everything else. But I also need more training in neural networks and computational models before I can succeed at this.
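To make the Mandelbrot analogy concrete: the entire set falls out of iterating one simple equation, z → z² + c. This is standard math, included here only to illustrate the "simple rule, emergent complexity" point, not as part of the CTA itself:

```python
def escape_time(c, max_iter=50):
    """Iterate z = z**2 + c from z = 0; return how many steps it takes
    for |z| to exceed 2 (points that never escape are 'in' the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(escape_time(0j))      # 50: the origin never escapes (in the set)
print(escape_time(1 + 1j))  # 1: escapes almost immediately (outside)
```

The famous infinitely detailed boundary is just this one rule evaluated at every point of the plane, which is the kind of relationship I'm hoping holds between a function's root algorithm and its phenomenological 'properties'.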
So far, in the two OP documents, I go a little into how the provided order of operations would create modularity vs linearity, anecdotal vs thematic episodes, tangent-hopping vs fatalism, full admission vs selective omission, etc. We can also see Ne's optimism in association formation at the local scale, and Se's immersive/honed data mining.
When the algorithm is written into computer code, I hope to also write down a formal essay on how each effect would physically and necessarily come about from the given equation.
Pardon again, I can't seem to stop myself from sharing more unfinished material; hope you'll forgive me for anything that remains cryptic. It's a very complex idea to communicate, and I find myself struggling to find the perfect medium of expression. I mentioned in the two PowerPoints in the OP that this was a two-dimensional analogy using arrays as a matrix, and that its limitation is that in actuality V and M exist three-dimensionally (or more, depending on how you count dimensions, but spatially at least, it's certainly not two). I'd like to represent that, if I can, and capture the bits that are still missing from the metaphor.
First, I'm going to take a more organic representation of the cognitive architecture, by representing it like so:
Working memory can be roughly thought of as the net result of all of the neural circuits which are excited at any given time, in other words what we mean when we say something is "on our mind."
Now, here you see the V-/Ni array enter into working memory. Notice that it's one unbroken thread. I believe the holism of Ni/Se is due to the fact that the entire data structure is one unbroken array. The little circles here ("beads") represent objects, which you can think of as discrete parameters (typically sourced by V+/Se).
This diagram above is essentially the same diagram as this:
Here, when you see A through Y with the strand crossing through them all, that's the same unbroken thread we see in the first graphic. So you can think of the insertion of [A,B,C,D,E,F] into working memory as "spooling" that part of the array into working memory, while it never breaks itself away from the entire web.
What then happens is something like this:
On the other side, V+ (Se) has collected an array of objects that are lined up in a sequence by the discrete properties they adjacently share. (a discrete sequence can be, for example, [finger-hand-elbow-shoulder-neck-head]).
The new "beads" then get appended into the V- array in the appropriate place, growing the loop:
^ This diagram above is essentially the same idea as:
^ The new data structure, as one continuous array that has expanded to make way for the linear sequence appended by V+/Se. As you can see, Se's tunneling is compatible with Ni's convergent sequences.
The other part of this that needs to be mentioned is the recall of multiple array strands:
Because of how V- is one continuous and unbroken array, different parts of it are "spooled" out into working memory at the same time, each one undergoing an appending process in their given section. The same process I just described above applies also to multiple array appendings at a time.
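The spool-and-append process can be sketched minimally, under the (strong) simplifying assumption that the V- structure is just one flat list; `spool` and `append_sequence` are invented names for illustration:

```python
v_minus = ["A", "B", "C", "D", "E", "F"]        # one unbroken thread
v_plus_sequence = ["finger", "hand", "elbow"]   # a discrete adjacent sequence from V+

def spool(array, start, length):
    """Pull a section of the continuous array into working memory."""
    return array[start:start + length]

def append_sequence(array, anchor, sequence):
    """Splice a V+ sequence into the thread right after `anchor`,
    growing the loop without ever breaking it."""
    i = array.index(anchor) + 1
    return array[:i] + sequence + array[i:]

wm = spool(v_minus, 0, 3)                              # ["A", "B", "C"]
v_minus = append_sequence(v_minus, "C", v_plus_sequence)
# -> ["A", "B", "C", "finger", "hand", "elbow", "D", "E", "F"]
```

Note the result is still a single continuous array: the new beads sit inside the old thread rather than starting a new row, which is the structural difference from the M update.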
That brings us more or less up to speed with the powerpoint in the OP, but it doesn't explain how this manifests 3 dimensionally. (p.s. no, this is not a socionics term at all, the way I'm using it. I'm literally talking about x-y-z coordinates ;p).
The need for this other dimension arises when we consider the matter of redundancies. With Ne/Si, redundancies naturally exist because new sets are created and appended at the bottom of the matrix. But for Ni/Se, I mention in the document that redundancies are not a thing. At the same time, it's possible for an [object] to belong in two different "situations" for an Ni/Se user, just as for anyone.
I have to try to model this first in an array, then in a graphic. Beginning with the array:
[[ A, B*, C, D,  E, F ],
 [ G, H,  I, J,  K, L ],
 [ M, N,  O, P,  Q, R ],
 [ S, T,  U, B*, V, W ]]
Here we see a seeming redundancy of B, as it exists both in [A,B,C,D,E,F] and in [S,T,U,B,V,W]. If B represented something like a dog, then a dog can be part of more than one thematic strand, for example "walking the dog" as well as "hunting companion." I placed an asterisk on both because the two are actually the same object, not two copies of the same object.
This cannot be modeled in 2D, but in 3D it can.
^ This is a diagram that represents what I'd call a "conjunction", where the same object in V is used in two sequences which ultimately connect. This is also what we might call a "loop", and the V- data structure is composed of such loops, based on conjunctions where the same [objects] are used in different sequences, while always being part of the same meta-sequence, which remains unbroken.
There are many other things that emerge from this data structure, such as the notion, so repeated by our Ni/Se users, that the end and the beginning are the same (Bruhh). Or that Ni is experienced like a web (Ash). From what I understand of the implications of this data structure, that's entirely an appropriate way to think of it.
Over time, with more of these loops, the V- data structure would become a sort of web with many loops within loops, while everything can ultimately be traced (through V+ tunneling, and the way the data was composed to begin with) from one pole to the other pole.
This is also what leads to behaviorisms such as numerological focus, a unique phenomenology of Ni convergent synchronicity, etc.
Lastly, we have to model what this looks like with self-similarity, scaling across different neural network layer embeddings:
^ Pardon my horrible artistry here, this was just a quick 5-minute sketch. But the idea is, as you can see here, that the 'loops' described in the above post, at higher orders, themselves become equivalent to [objects] within meta-loops, which themselves become [objects] in even more meta-loops, etc.
At this scale, certain loops/episodes themselves are part of multiple loop-sequences, such as how, for instance, [death] may be a part of the cycle of [birth-baby-toddler-child-teen-adult-elder-death] as well as part of the meta-loop that includes societal death/rebirth/transformation, in self-similarity, when [individual-death/birth] is seen as one bead along with other beads, in a greater loop.
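The self-similar embedding can be expressed very directly as nesting: a whole loop at one layer is treated as a single bead inside the loop at the next layer up. A speculative sketch, with illustrative labels:

```python
# A loop at one scale: the individual life cycle
life_cycle = ["birth", "baby", "toddler", "child",
              "teen", "adult", "elder", "death"]

# At the next scale up, that entire loop is one bead among others:
societal_cycle = [life_cycle, "societal-death", "rebirth", "transformation"]

# And that meta-loop can itself become a bead, and so on upward:
meta_meta = [societal_cycle, "cosmic-cycle"]

def depth(loop):
    """Count nesting levels: how many scales of embedding this structure spans."""
    inner = [depth(x) for x in loop if isinstance(x, list)]
    return 1 + (max(inner) if inner else 0)

print(depth(meta_meta))  # 3 scales of self-similar embedding
```

The same `depth`-style recursion would apply however far up the abstraction goes, which is the sense in which the structure is self-similar at every scale.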
This process can become abstracted up towards the highest levels we know of.
I love the bit about Si rambles and how it gets further and further from the topic. I do that a lot where I sprinkle random anecdotes in certain topics I love (like Star Wars) and it confuses people. They’re like, why is this relevant? To me it feels like just providing the background info, like I’m immersing them in the story. I wonder if that sort of off-topic Si is more common when the function is more mythologized due to not having conscious development, or being under Ne. And if that’s true, I wonder if it would even be possible to give computational correlates to conscious and more realistic function development versus more mythologized usage that still carries a lot of unconscious baggage.
I’m happy to see CT is evolving into something that aims at getting even more scientific than it was before, and trying to get rid of everything that is not supported by clear evidence. I’m glad it moved further away from the constraining Jungian heritage. It feels a lot cleaner and that’s great. I still have a lot of articles to catch up on, but I can already say I like the computational metaphor (well, part 2; I haven't gone through part 1 yet). It’s helpful for getting the two processes as you understand them.
I don’t know where to put this, but I also think the impact of the context’s different parameters on typing interviews should be measured at some point. More precisely, I think CT would probably benefit from verifying whether specific sets of questions (asked during interviews) stimulate specific functions.
It would clarify:
1) Whether some questions/contents do indeed stimulate some functions more than others.
2) If 1) happens to be true, whether, when particular functions are stimulated by specific questions/contents, the typings and dev. levels remain consistent across contexts.