Is the vultology theory somewhat self-confirming?


  • #19671
    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    There's something about the general theory of the vultological signals that has been bugging me. It seems like a fair amount of the theory rests on the fact that there is clustering of the signals in the data (e.g. that people who have one Fi signal tend to have other Fi signals and also Te signals and that they tend not to have Ti or Fe signals). It seems like some (though not all) of that clustering can be explained by the nature of the signals themselves. There are two main ways I see this happening:

    • Sometimes multiple signals have the same underlying cause. For example, Ni-2 Elevated Brow Area, Ni-3 Intense Scowling, Se-1 Sharp Eyes, Se-2 Taut Eye Area, Se-3 Amped Perk-Up, and Se-10 Raised Outer Edges are all connected to having taut preseptal muscles. So it isn't surprising that someone who has one of these signals would also display many of the others.
    • Sometimes signals on opposite axes are mutually exclusive - they cannot both appear at the same time. For example, just as the Ni and Se signals listed above all depend on having taut preseptal muscles, many of the Si and Ne signals rely on having relaxed preseptal muscles. So it isn't surprising that someone displaying many of the Se and Ni signals wouldn't also display the Si and Ne signals - because your preseptal muscles are either taut or they're not; they can't be both at the same time. Another example is how multiple Fi/Te signals rely on having snarling tension and multiple Ti/Fe signals rely on a lack of snarling tension. Since you cannot have both snarling tension and a lack of it (at least not in the same moment), it makes sense that people would cluster into either Ti/Fe signals or Fi/Te signals. (A rough sketch of both effects is below.)
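    To make this concrete, here's a purely illustrative simulation (nothing here is real CT data; the signal counts, the 10% noise rate, and the noisy_readout helper are all made-up assumptions). It just shows that if several signals are noisy readouts of a single latent muscle state, same-axis clustering and cross-axis exclusion fall out automatically:

```python
# Purely illustrative (made-up names and numbers, not real CT data):
# if several "signals" are noisy readouts of ONE underlying muscle state,
# then same-axis signals cluster and opposite-axis signals anti-correlate
# by construction -- no typological claim is needed to produce the pattern.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000
noise = 0.1  # chance that a rater mis-scores any given signal

# One latent binary state per person: taut (1) vs relaxed (0) preseptal muscles.
taut = (rng.random(n_people) < 0.5).astype(int)

def noisy_readout(state):
    """A signal that simply reports the latent state, with occasional errors."""
    return np.where(rng.random(n_people) < noise, 1 - state, state)

# Hypothetical Ni/Se signals keyed to tautness, Si/Ne signals keyed to relaxation.
ni_se = np.array([noisy_readout(taut) for _ in range(6)])
si_ne = np.array([noisy_readout(1 - taut) for _ in range(4)])

# Strong positive correlations within each group, strong negative ones across
# groups, even though only one real variable (the muscle state) exists here.
print(np.round(np.corrcoef(np.vstack([ni_se, si_ne])), 2))
```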

    Not all of the signals have these issues, so it's possible that even if these issues were removed, the data might still show the clustering that is claimed. But that would have to be checked.
    And with the signals as they are now, it might still be notable that people who display signals from one axis in one moment tend to continue to display signals from the same axis in the future (e.g. people don't often have taut preseptal muscles for a few minutes and then relaxed preseptal muscles for a few minutes). But this definitely seems like a weaker conclusion than what is currently being claimed.
    Thoughts?

    #19677
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Ah! My kind of question.
    So while I actually agree with the heart of your critique (I'll get back to the self-confirming point in a bit), I think a scientific experiment would also need not to group muscles together broadly, but to examine each muscle independently. And that might be the heart of how to conduct such an experiment in a controlled setting.
    My present attempt includes an effort not to overlap signals-- and here's a diagram showing how each signal covers a separate area: https://cognitivetype.com/forums/topic/face-muscles-to-signals-reference/ I think it's possible to have tension in any one of those areas without the others, although some do indeed come from the same muscles.
    This could present a bit of a problem in testing. So I think that if we wanted to test the signals more universally, we'd have to treat each muscle as a unit. We'd try to see whether a person had, or didn't have, constant (not just situational but persistent) contraction of the preseptal muscle area, and to do so despite anatomical obstructions like droopiness of the eye anatomy or surrounding skin. (Possibly via checking for nerve conductivity to the area?)
    Then we can treat the whole of the eye area's tension or non-tension as one signal, one that we expect to be either active most of the time or off most of the time, in most people. And that data point would cover a bigger batch of signals all at once (taut eye area and sharp eyes on one side; relaxed eye area, naive eyes, etc. on the other). This would then be tested independently against things like eye toggle patterns, which have no necessary reason to be connected to eye tension. And we can then test against body mannerisms as being prone either to levity/buoyancy or to gravity, if we define those as vectors.
    Overall I think, in an experimental setting, the number of discrete and controlled data parameters we'd be properly contrasting against one another would shrink -- and be more like five opposing pairs per function axis. And indeed, we would essentially be testing whether these divisions are just tautological (circular dichotomies) or whether it's truly the case that most people prefer one modality over the other.
    If the hypothesis of CT is wrong, then we would see a bell curve distribution with, say, most people having some of each type of eye tension, some of the time. That would make it an invented dichotomy, rather than a self-emergent bimodal organization of people. That's totally possible, although I'm betting against it.
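    As a very rough sketch of what that test could look like (toy numbers only; the distributions and sample sizes below are placeholder assumptions, not a real protocol): measure, for each person, the fraction of sampled moments in which the preseptal area is contracted, then see whether the population distribution comes out unimodal or bimodal.

```python
# Toy sketch of the proposed check (placeholder numbers, not real data):
# per person, estimate the fraction of sampled moments with a contracted
# preseptal area, then look at the shape of the population distribution.
import numpy as np

rng = np.random.default_rng(1)
n_people = 500

# Hypothesis A (CT wrong): everyone mixes both states, centered near 50%.
fraction_taut_unimodal = rng.beta(8, 8, n_people)

# Hypothesis B (CT right): people commit to one state most of the time,
# giving a bimodal split with peaks near 0 and 1.
committed_taut = rng.random(n_people) < 0.5
fraction_taut_bimodal = np.where(committed_taut,
                                 rng.beta(8, 2, n_people),
                                 rng.beta(2, 8, n_people))

# Crude look at the shapes: counts per decile of "time spent taut".
bins = np.linspace(0, 1, 11)
print("unimodal:", np.histogram(fraction_taut_unimodal, bins)[0])
print("bimodal: ", np.histogram(fraction_taut_bimodal, bins)[0])
```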

    Current Code

    But as for why this isn't how it's done already, it goes back to the point about equipment/resources/funds. I'm not entirely sure how we'd even go about designing experiments that control for all the parameters, or the gear we'd need. Wires with nerve conductor nodes would be ideal maybe..?

    (I need to write a grant proposal for this)
    The structure of the current vultology code (2.15) is not set up to be a proper scientific laboratory. This is, in some ways, intentional since the present aim is not to perform scientific testing, but to try to get people's types "right"--via present means-- insofar as we're tracking a certain phenomenon beneath everything. And minimalism is not the most effective tool in this context, I think.
    When we start running experiments, we'll have to adjust methodology. But for now I find that it helps to have a little leniency (possibly even redundancy) in terms of how we parse out the signals of the axes, to account for human error in a very noise-ridden natural environment. Designating the areas below and above the eye as separate signals (taut/relaxed eye area and naive/sharp eyes, respectively) allows practitioners to better focus on the nuances of each area, even if it's one muscle as a whole. And in cases where tension shows up very clearly in one area but not in the other due to anatomical obstructions, a false negative isn't given.
    (But I totally get your point about this also leading to piggybacking of signals together. In many cases Se Sharp Eye and Se Taut Eye Area are both clicked together, which sorta doubles the boost of signals. And perhaps this is a problem to address.)
    Anyhow, I'm very much hoping for the day when the signals are stripped back to a hypothesis of mutual exclusivity that can be tested in a controlled way. And I'm quite open to suggestions for consolidating signals together, if the redundancy is not helping the matter but causing greater confusion.
    The division of signals as 10 per function is largely instrumental-- which isn't to say the reality beneath them isn't real, but the parsing exercise itself has made some concessions in order to allow the investigation and discussion to even take shape/form. I'm glad you're bringing this up though, because if we can do a better job of making a cleaner code, I'm all for it.

    #19682
    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    Yay! I'm glad to know this is something you're aware of and have thought about. And I definitely get how the way it's set up now makes more sense for the practical action of watching a video of someone and trying to figure out their type based on what you see. I just wanted to make sure that the clustering you have seen so far was really occurring and that it wasn't just you being fooled by the stuff I talked about in the OP. Also it's hard for me sometimes to remember that there are different stages in this process and we're not actually doing real scientific experiments yet because there are some other things we need to get done first. I'm anxious to get to the science! But I do think it's worth being cognizant of this stuff even in this more informal unscientific phase to try to avoid fooling ourselves. So thanks for addressing this 🙂

    #19683
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    I love it when people ask the right questions-- so I'm very happy to have a chance to explain! And it would be a totally valid suspicion for me too, if I was approaching CT without knowing the historical process.
    I've actually been thinking lately about what the next codex would look like. And I would like to try to make the 3.0 code one that moves away from the "pragmatic" 10-signals-per-function format, with its convenient symmetry, and strips things down to consolidated, discrete, and opposite signal pairs which are all fully independent data points.
    I think it's this 3.0 code that we can actually start to perform semi-controlled tests on, even here in our community. It still may not be the most instrumental for typing, though, so the two may exist side by side and serve different functional purposes. But the exercise of making the 3.0 code may provide insights into how to also improve the typing process-- so it loops back around.

    #19714
    Robert Mitchell
    Participant
    • Type: NeFi
    • Development: l-l-
    • Attitude: Seelie

    If there are going to be problems, they're going to lie in the judgment functions; perception functions are irrational, and thus there is no room to learn behaviour.
    For judgement functions I do see a lack of diversity in certain areas; however, there are many interactions that probably have not been picked up yet. For instance, Ti vs Te is related to brain functions that encourage action or halt it. So for Ti you often see a fight between Ti and Fe, or what I call wheel spinning. Ti is the brake and Fe is the engine. Now if you have developed your codifier based off lead-function characterisation, then you're going to miss things that only crop up in non-lead users.
    So if you use a differentiation codifier (like we use in microbiology) you can have branches that don't need to apply universally but only in specific instances.
    As to Fi signals appearing in Fe/Ti psychological types (yours truly), we still don't know how much is environment and how much is genetic. Can an Fe user emulate Fi signals to survive in an Fi/Te dominance hierarchy? Would someone with heterozygous genetics use both sets of facial muscles? Would an enneagram 5/6 type, with its fear, develop toward an Fi phenotype? And if anxiety does make Fe/Ti look Fi, at what stage of development does it need to occur before muscle memory has already become fixed? You certainly wouldn't expect it to be an issue in J leads. Do N/S P leads have the same response, or do Ne types struggle to differentiate, as you would predict based on their neurobiology of being generalists who avoid hardwiring their brains like Si does so well?
    So far, using a linguistics-based approach, I've struggled to find Fe/Ti types that express Fi (other than myself), although I still have a couple of candidates. So it doesn't appear to be a massive problem.
    Upside is I can make a video showing the full range of mouth movements!

    #19724
    Ninth
    Participant
    • Type: TiSe
    • Development: l--l
    • Attitude: Directive

    @singularity Hi! I'd like to ask you what you mean by these:
    1) «and thus there is no room to learn behaviour»
    2) «Upside is I can make a video showing the full range of mouth movements!»

    #19953
    Robert Mitchell
    Participant
    • Type: NeFi
    • Development: l-l-
    • Attitude: Seelie

    @Ninth
    1/ By no room to learn behaviours, I refer to the capacity of Fe users to mimic the motions of Fi/Te in order to fit in, whereas perception functions can't be learned. Perception functions are irrational; there is no choice involved, unlike judgement functions, whose function is to make choices.
    2/ I can consciously perform all smile types: Ti (my default), Fe, Fi and Te. I'm not sure how common that is, so I thought it might be useful to see one person doing all the different smiles.

    #19954
    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    @singularity

    Now if you have developed your codifier based off lead-function characterisation, then you're going to miss things that only crop up in non-lead users.

    This isn't how CT was developed. In fact, there has been quite a lot of work put into distinguishing what different functions look like at different levels of development. And indeed, what you mention about Ti being the brake to Fe's engine is part of what has been noted. That's not to say there is no more work to be done in this area, just that it is already part of CT and continues to be on the radar of things that need to be taken into consideration. Perhaps the idea of a differentiation codifier, like you mentioned, could be useful. Could you say more about what it is?

    Upside is I can make a video showing the full range of mouth movements!

    I'd be interested to see a video of this 🙂 However, as far as I know, it wouldn't contradict CT for a person to physically be able to perform all 4 smile types. I don't think CT says that a Ti/Fe person cannot physically learn how to use different muscles to smile and create an Fi or Te smile. Even if someone does that, I think you could still use vultology to type them because
    a) Our vultology is for the most part unconscious and very difficult to control consciously for extended periods of time. If someone naturally had a placid smile, but they learned to do a snarling smile, they would likely still do placid smiles on a regular basis because they could not be in complete conscious control of their smiles at all times. Perhaps if someone practiced enough for a long enough period of time, they could manage to get the muscle memory to make the snarling smile their default, but then...
    b) CT is careful not to over-rely on one signal to type someone. Even if someone appears to have a snarling smile, they would not be considered an Fi type unless there is an abundance of other evidence of them being an Fi type. For someone to be mistyped because of this kind of situation, the person would have to learn to stop doing all their natural vultology and instead do the vultology of the other type. Even if this were possible, I would imagine it would be rare enough that it would not affect the validity of CT, which is really based on statistical correlations and not necessarily on 100% accuracy for every individual. (A toy sketch of this is below.)
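    To illustrate point (b) with a toy example (the signal count and percentages are hypothetical, not CT's actual methodology): when a type call aggregates many signals, flipping a single learned signal almost never changes the overall verdict.

```python
# Toy illustration: a verdict based on many signals is robust to one
# consciously learned signal. (Hypothetical numbers, not CT methodology.)
import numpy as np

rng = np.random.default_rng(2)
n_people, n_signals = 10000, 20

# Simulate Fi/Te-leaning people: each of 20 signals reads "Fi/Te" 80% of the time.
readings = rng.random((n_people, n_signals)) < 0.8
verdict = readings.sum(axis=1) > n_signals / 2  # majority of signals

# Now force one signal (say, a consciously trained smile) to read "Ti/Fe".
readings_learned = readings.copy()
readings_learned[:, 0] = False
verdict_after = readings_learned.sum(axis=1) > n_signals / 2

print("typed Fi/Te before:", verdict.mean())
print("typed Fi/Te after: ", verdict_after.mean())  # barely changes
```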
     

    #20050
    Robert Mitchell
    Participant
    • Type: NeFi
    • Development: l-l-
    • Attitude: Seelie

    @fayest42
    Well so far I’ve only detected a problem with ENTPs, where I have identified multiple individuals displaying Fi/Te type Vultology in addition to Fe/Ti Vultology.
    Of course you would predict ENTPs to have a number of characteristics that make them vulnerable to this (although you could argue it's actually shadow Fi):
    1/ Ne is an anti-emergence/differentiation pattern in the brain. It prevents specialisation, and thus shadow function use may be more accessible.
    2/ ENTPs don't fit into society well, which can lead to Fi introspection as to why they are different, creating emotional turmoil.
    3/ ENTPs are often lacking in coordination (like other N doms), which is required for Fe signals, and they are often highly emotional underneath, with Ti neutralisation being unable to neutralise Fi emotional signals above a certain threshold.
    Now in my case my shadow Fi use is extreme, so it's quite understandable, given my history, that Fi mouth signals are present to some extent. However, I have figured out why Auburn mistyped me based on my first video. I made the mistake of holding something in my hands, which masked my main form of Fe/Ti vultology: coordinated emphasis. My second video, which I'm guessing wasn't checked, showed 115 instances of coordinated emphasis in ~8 minutes, along with numerous instances of puppeteer hands, delicate pinches, and Fe disclaimers. At the same time, I don't employ warm swelling motions, so without the hands in play it would naturally look like a combination of Fi signals plus Te movement. My asymmetric mouth really doesn't help things either, plus I probably didn't smile much talking about myself!
    From what I can tell, CT is pretty close to the mark but naturally has a few exceptions that haven't been figured out yet.

    #20075
    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    @singularity I think Auburn generally looks at both videos even though he doesn't use the codifier on both of them. But I'm sure if you asked him, he'd be happy to tell you whether he sees the coordinated emphasis, puppeteer hands, delicate pinches, and Fe disclaimers in your second video. And if it's a genuine case of signal mixing, he'll work with you to figure out your true type.
