In the rise of our modern scientific era, Charles Darwin was among the first to notice and document the connection between facial expressions and mental and emotional states. He theorized that emotional expressions served some adaptive purpose in animals and arose from inborn instincts. But it was his other works – most famously On the Origin of Species (1859) – that became the most popular and spread his name across the academic world. His theories on facial expressions remained among his lesser-known works and would stay underdeveloped for the next hundred years. By the mid-twentieth century, the dominant opinion was that of anthropologists such as Margaret Mead, who saw emotional expressions as products of upbringing and social conditioning.
A then-young psychologist, Paul Ekman, had a different opinion on the matter. Sharing Darwin's position, Ekman sought to show that emotional expressions have a universal basis in the animal kingdom, and in humans specifically.
He conducted a wide-scale survey in which people viewed images of facial expressions and reported which emotions they thought were being conveyed. He soon discovered strong agreement across countries: at least six key expressions were identified the same way by subjects throughout the industrialized world. But the counter-argument remained that these countries, through their intercontinental ties and shared histories, could have developed a common but conditioned understanding of emotions – a kind of cultural cross-contamination.
Fortunately for Ekman, there were still regions of the world at the time with indigenous populations who had never been exposed to the outside world or to Western civilization. Ekman traveled to Papua New Guinea, where he conducted the same test with members of an isolated tribe – showing them the images and asking which emotional experience each one depicted. To the surprise of the academic community, the results came back the same as those from the developed world. Charles Darwin's long-forgotten hypothesis had finally been vindicated.
This discovery changed the way we think about social conditioning. It showed that we don't have as much conscious control over how we express ourselves as we thought. Our emotions are part of an ancient programming that we partially share with other mammals. Our expression of anger, for instance, bares our teeth and canines the same way wolves and dogs bare theirs – a display of aggression.
Then in 1985, Ekman took this concept of unconscious expression further with his book Telling Lies. There he applied the principles he had discovered about expressions to identify people's deeper emotional attitudes even when their words said otherwise. For the first time, an instrument had been developed that could peer past our words and bypass our subjective bias. We were no longer the final authority on ourselves, as our expressions could give us away.
His work later extended beyond the six fundamental emotions into a comprehensive codex covering emotional compounds and every major combination of facial muscle contractions. The Facial Action Coding System (FACS) became a standard that is used to this day in studies of patients with schizophrenia, in cinema production, in facial-tracking software, and in many other applications.
Ok, but what does any of this have to do with cognitive type? The same premises established by Darwin and Ekman apply to the material of cognitive type – namely:
- The unconsciousness of expressions: While humans generally have the capacity to contract their facial muscles deliberately – as we can also do with our breathing – the majority of our facial muscle contractions happen automatically. From the day we are born, emotional expressions come to us instinctively and require no learning process.
The primary difference between Ekman’s work and CT is that CT describes micro-expressions that deal with elements of cognition, not emotion. If Ekman can tell you what you’re feeling, CT can tell you – in a general sense – what you’re thinking.
Why do we scowl and look down at the ground when trying to retrieve a distant memory? Why do we look upward or dart our eyes around when brainstorming ideas or rephrasing a thought in our head? Why do we feel a need to gesticulate as we talk, and why, when we contemplate a serious question, do our eyes disengage and look downward?
Like emotions, these expressions unconsciously reveal facets of our internal experience. But unlike emotions, they relate more directly to the type of information our mind is processing at that moment. By observing these expressions we can identify whether a person is reevaluating language, visualizing information, recalling memories or consulting their inner convictions. And when we note the ratios of these mental processes we quickly notice that some people rely more heavily on certain forms of information metabolism. This typicality of expression is what gives spontaneous rise to types.
After months in development, we have just launched the first version of the Vultology Webtool, which allows us to precisely catalog people's visual signals second by second according to a standardized metric called the Cognitive Type Vultology Code (CTVC). This codex contains 110 signals covering all the basic expressions related to information processing, and it provides a clear description of each signal along with the accompanying psychology.
The webtool outputs the percentage of each mental process by tallying the signals in a video and calculating their ratios. This gives each subject a unique visual signature, which we call a statistical report. Going forward, every new and existing sample will be converted into this format, allowing us to compare any two samples with a high degree of accuracy – and to identify the cognitive functions they use along with their relative strength.
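The tallying step can be sketched in a few lines. This is only a minimal illustration of the idea, not the webtool's actual implementation, and the signal names here are hypothetical stand-ins for real CTVC codes:

```python
from collections import Counter

def signal_ratios(observed_signals):
    """Tally the coded signals from one video session and return each
    signal's share of the total as a percentage."""
    counts = Counter(observed_signals)
    total = sum(counts.values())
    return {sig: 100 * n / total for sig, n in counts.items()}

# Hypothetical session: each entry is one coded signal occurrence.
session = ["eyes-down", "eyes-down", "head-shake", "eyes-up"]
print(signal_ratios(session))
# {'eyes-down': 50.0, 'head-shake': 25.0, 'eyes-up': 25.0}
```

The resulting dictionary of percentages is, in essence, one subject's statistical report.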
In the future, this process will be automated using facial-tracking software and a motion-comparison algorithm that can match a person's movement profile against an ever-growing database of samples. This will allow us to instantly find exact matches between the expressive profiles of hundreds of people. From there, a multitude of doors will open as we examine, at a far larger scale, which emotional, psychological, and lifestyle similarities people share when their visual profiles match.
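One simple way such a comparison could work – sketched here under the assumption that each statistical report is a mapping of signal names to percentages, which is not specified in detail above – is cosine similarity between two profiles:

```python
import math

def profile_similarity(a, b):
    """Cosine similarity between two signal-ratio profiles (dicts mapping
    signal name -> percentage). Returns 1.0 for an identical mix of
    signals and 0.0 for profiles with no overlap."""
    keys = set(a) | set(b)
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return dot / (na * nb) if na and nb else 0.0
```

Against a database of samples, the closest expressive profile would then simply be the one that maximizes this score.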