Having arrived where we are currently with CT, and moving forward, I wanted to share some personal reflections from a meta perspective on the whole typology enterprise. 🙂
As I was telling Alice here, lately I've been feeling the contrast between how low-res old models are/were, and how non-cognitive they really are. Barring perhaps some versions of socionics (but even then..) most typology theories are high-level behavioral, not really cognitive, because they don't address how thoughts fundamentally work, how they're formed and so forth. In most cases the so-called "cognitive functions" are actually just two sets of behavioral concepts merged with a bit of very basic logic. "IF (N with I) then (S with E)." But I'll get to that a bit later.
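To make that claim concrete, here's a minimal sketch (my own illustration, not code from any actual typology system) of the kind of "basic logic" these function models rest on: every function is just a letter pair, and its counterpart is found by flipping both letters along their axes, e.g. Ni pairs with Se, Fe pairs with Ti. That's the entire "cognitive" machinery:

```python
# Illustration only: the "IF (N with I) then (S with E)" rule as code.
# Each axis has two poles; the complementary function flips both poles.
FLIP = {"N": "S", "S": "N", "T": "F", "F": "T", "I": "E", "E": "I"}

def complementary(function: str) -> str:
    """Return the axis-opposite of a two-letter function code, e.g. 'Ni' -> 'Se'."""
    letter, attitude = function[0].upper(), function[1].upper()
    return FLIP[letter] + FLIP[attitude].lower()

print(complementary("Ni"))  # Se
print(complementary("Fe"))  # Ti
```

A lookup table and one flip rule: that's how little actual "cognition" is in the structure itself; everything else is behavioral description attached to the labels.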
I don't want to make this a bash post though, since I actually am sympathetic to the process of evolution in human thought. And I think we've all been trying to crack the psyche this whole time. But we can see our infantile attempts (mine also) once we've grown up a bit, the same way we might look at old photos of ourselves and cringe lol. So here goes.
Most typology systems that are popular right now are very broad-strokes. They are attempts to make sense of phenomenology by dividing phenomenological aspects off from one another, and then talking about them that way. For example, MBTI systems often use "F" to describe the phenomenology of affect. They use "S" to describe the phenomenology of sense-perception. They use "T" to describe the phenomenology of contemplation/thought, and so forth. But of course human experience is not as simple as a division into four. I believe this is a linguistic convenience, an easy sort of heuristic that Jung developed. Compared to having nothing at all, it's useful to parse out feelings, sensations, thoughts and intuitions - as four core types of phenomenology - which leads me to:
Imagine for a moment that we could capture human phenomenology as an array of bubbles. All of it. All in one whole diagram. And let's say it looked something like this (even though this diagram is very simplified and was put together in five minutes):
^ So, let's pretend everything that humans experience is here. Simple enough, right? Well, no... The mind does a lot of different things, for all sorts of different reasons, to meet all sorts of different tasks. It is so complex, and we have very little grasp on it. Okay, so then someone comes along and develops a heuristic. Jung says "hey, let's group these together roughly into four camps like this"
^ Okay, so now we have aspects of phenomenology grouped together into camps. What does that mean? Well, a bunch of different mental functions are assumed to belong with each other. This is maybe useful if you wanted to classify people, in the event that a person seems to have a greater experience of phenomenological weight in one of these groups. But this diagram won't work very well for many people because the divisions don't match up to them. In the words of David Deutsch -- it's "easy to vary." The pie could have been cut differently, and different things could go together. A person may be high in one bubble of the quadrant, but super low in another of the same quadrant. You could have major representation in eros + body but not in memory. Or you could have major representation in spirituality but not in intuitions. So the system 'breaks down' when examined more closely, but it 'works' as a sort of first-pass broad-strokes device. This is how most typology models work. They cut up the pie differently but they're all trying to account for a vast array of very different things using rule-of-thumb categories.
The next step that seems to have developed, in this heuristic refinement, is to take the faint and under-developed notions that Jung introduced of compounds -- I(N|S|T|F) and E(N|S|T|F) -- and use those as an even more useful toolset from which to try to dissect the mind. I use the word "toolset" here deliberately, because a model is like a set of instruments you use to slice up reality. And the toolbox of 8 functions was more comprehensive than that of 4 attitudes. So then practitioners started to hack away at the psyche with this new toolset, each cutting it up differently:
Some decided to put socialization into the E bubble, others didn't. Some put memory into the SI bubble, some instead put sentimentality and nostalgia into the SI bubble. In one way or another, they all tried to shoehorn human phenomenology into an 8 slice pie. But again it was "easy-to-vary", one could parse it out all sorts of ways and nobody can prove what's really better. But that wasn't really the point. The point is that this exercise provided an avenue from which people can converse about the mind -- in an age where psychology was just rising into human consciousness.
Answers about where thoughts come from, how we form a sense of the passage of time, how memory works, etc - none of that is explained very well. And that wasn't really the aim. These were not really theories of cognition, but of human behavioral patterns. Even the models that tried to divorce themselves from behavior, and state their models as being "structure, not content", had only a few verbal axioms to rely on for their 'structure', while all the rest was behavior. Even those who would compare their 'cognitive' functions to programs in the brain... had no actual computer code to show for it... so it was an empty statement. At core, all they had were principles which held behaviors together in a scaffold. That is what JCF amounted to, especially compared to the work cognitive scientists do. If you show a computer scientist a JCF structure, I think they'd quickly point out how little it has in the way of cognitive theory -- how little it can actually model a real, living mind.
But again, that wasn't the deep point of JCF. The personal benefit and clarity that people might receive from having this heuristic and applying it to their lives was enough justification to continue this culture/tradition. It wasn't really about objective truth, I would argue; it was about having "some" means, any means, to grapple with the mind in a common language. Jung gave us this basic language and we rolled with it and expanded it. And I find this valuable because it's historically been the case that astrologers paved the way for astronomers. Alchemists became chemists. By immersing yourself in the subject using a poor toolset, you are equipped to find out that the tool is inadequate, and what tool you really need. The limits become clearest to those who have earnestly tried to solve a problem with the current means. But moving toolsets becomes inevitable at some point.
Meanwhile neuroscience was starting to take off and was telling us how very, very complex the brain is. Any respect that the old heuristic approach could have had was tossed into the pseudoscience bin. But that didn't make the human need to understand our own consciousness go away. For one, neurological data could not be accepted as the source of qualia by all parties, so some disregard any parallels between the two. And on the other hand, those who wished to use neuroscience to understand consciousness were not really served either, because of the baffling array of data and how hard it is to make sense of it. No unified theory has emerged from neuroscience yet.
So the disciplines broke up. The neuroscientists went their own way, with the scientists, trudging along study by study, trying to set some sort of foundation to this multi-variable mess. And a section of the population -- the psychodynamic theorists and psychologists -- continued to explore consciousness in first-person and with patients, and the typologists were close by. They were still using rudimentary tools and heuristics that they knew, deep down, were inadequate, but they chose to believe in their pragmatic value in a world where we have pain and suffering that could benefit from any sort of knowledge at hand.
Then we had another player in the game. Alan Turing, and those after him, proposed methods of engineering consciousness from the ground up. Turing put aside the philosophical problems about 'being' or 'soul' and suggested that if the result was convincing enough to be indistinguishable from a human mind (i.e. passing the Turing test), then for all practical purposes you've succeeded in making a mind. Efforts to engineer consciousness from the ground up led to neural networks, machine learning, and eventually projects like IBM Watson and AlphaGo. My own sense is that AI research has been the most successful so far at yielding functional results on the question of 'mind' - by reproducing it in rudimentary forms. Often the researchers themselves don't know exactly how it happens, but they can set the parameters from which something like intelligence emerges.
What needs to happen now is a synthesis of these domains. The psychodynamic theorists need to step it up and learn from the cognitive scientists, and frame their phenomenological elements in a way that can explain the mind procedurally. Their tools are way outdated. And I don't think the psychodynamic domain has much of a future without moving forward with the rest of cognitive science and AI research. We need to move past those old tools and start integrating ideas from these newer fields into our understanding of the mind, to have any chance of making real progress. Rather than thinking about the mind through an old tradition of quaternary phenomenologies, a new premise can be used where the core aspects of the model are not categories of mind, but necessary operations for the computation of reality's information. And this transition needs to happen in the typology domain as well -- which is where CT comes in.
By my new understanding of the word 'cognitive', CT is only just now starting to really become a cognitive theory, with the introduction of Model 2. We have a lot of work to do, but it's becoming capable of modeling aspects of the human mind (attention, curiosity, imagination, memory, short-term memory, causality, episodic time, etc) as more than just principles, and as complex sets of operations with emergent effects. And in that sense, I believe it can start to live up to its name as "cognitive typology." What it has been before, and certainly what Jungian 'cognitive function' theories are now -- are not truly cognitive by modern standards.
This thread is just a ramble... 🙂 So, pardon the multi-posting. Err, I have a few more thoughts.
One of the key, necessary changes that needs to happen in the field is that typological systems need to stop swallowing up whole chunks of the mind into their pie slices. To avoid this, what is meant by 'type' needs to be defined more discretely. For instance...
...above we see a diagram of how CT is conceptualized. Notice that CT is not taking up everything. Instead it is localized around "objects" - which are one of the phenomenologies humans have. CT is a description of how we build up and represent objects in our mind (and thus a situation), what properties they have and what forms they take. The dotted lines represent some of the other processing regions the CTA makes tangential use of, in order to accomplish this task, but they stand on their own as well.
Notice that the CTA, as an object-management process, does not have much to say about anger, fear, love, sensations, self-image, and so on. If a person wants to have a description of these elements, they have to look to another complementary system. (No longer can convos about Fi and Fe be stand-ins for this matter.) And in that sense CT is more specialized in its scope - but it tries to do "one" thing right, rather than trying to be a whole-human outline using archetypal, but ultimately heuristic, dichotomies. There is a lot of complexity in human nature and each individual subject - the body, emotions, sense perception, logic, dreams, etc -- is its own enormous field of study which can, and should, be treated rigorously, because each one may vary independent of any other one. (In fact, that's what we keep seeing, with clumsy Se-leads, emotionally dissociated Fi-leads, etc.) Good science focuses on identifying these contingencies or non-contingencies and examining each closely.
Another one is absolutist thinking. It's a human inclination to want to form clean, closed systems. Absolutist thinking is one of the J system's greatest indulgences, and one I've been the most guilty of. But this won't do. There are hardly any things in nature that conform to monolithic architectures without some exceptions here and there. The more absolute the system is, the longer it takes to update itself, because erroneous data is looped back and re-explained within the absolutist framework, which is held as axiomatically true. I think this happens everywhere humans engage in activities -- from government to religion to ideologies of all sorts, but also in the sciences and forms of reason. The hard pill to swallow here is that the multi-variability of complex phenomena like the psyche makes it so that, at best, we can only approximate "most" cases. So we should never close the door to the exact opposite of our theory being true. This is SO hard to do, and it's so easy to slip into myopic thinking. We have to actively combat this human tendency toward certainty by fostering a culture of skepticism and reiteration. It has to be possible for your model to evolve -- it has to be seen as a work-in-progress, always. So that's also part of the newest CT efforts, where Model 2 is built to be upgradeable at the core.
That's it for now! Now that I've written it, I'm not really sure why I wrote all this ..or who I write it for. Maybe just for myself? ...Maybe it's just been swimming in my mind and I needed to spew it out. Sorry guys! >.> Err, still I hope some of these thoughts are interesting to some members here lol! Or not.
What do you guys think of all this?
How do you see the evolution of typology?
And how do you see its future?
This may be unrelated, and I think I might harp on him a lot, but it seems like you are undergoing what Carl Rogers defined as development through the human process. He theorized that as we grow we challenge deeply embedded constructs in our psyches, and as a result, we come to appreciate ambiguity, contradiction, and holding multiple mutually exclusive ideas at the same time. He calls it richness, and becoming more of a human being, more of oneself, and he makes sure to tell his readers that it is not for the faint of heart.
The kind of broadening taking place here, in order to do a better job at something smaller, is a good example of this kind of trajectory, I think. The theory needed to be humbler, less broad, in order to accomplish more. Paradoxically, it also zooms out - instead of studying the phenomenology of chunked sections of the human experience, we are now dealing with something broader, but purer. We're just dealing with the whole of human phenomenology. Not by breaking it into computable pieces with awkward divisions, but by devising a tool that will allow us to interact with it as a whole. Thus we can retain the richness of the whole experience, and be able to talk about whole selves and the natural aspects of each of us, not just the artifice that is layered on top of what is already observable.
That got a little abstract and maybe hard to grasp, but I hope it made sense. This theory is getting more mature and deep, more interested in the genuine study of humanity than the study of broad constructs, and it is refreshing and genuinely exciting to see.