Vultology Quiz #5


  • Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Just to clarify, I am speaking of signals not functions

    Ah! It’s only after I read this that I started to get your post! Lol I like this!

    This is indeed very helpful information, and I’m gonna go through it more carefully later. I’ve gotta say I’m so excited to see these results! Especially how Participant 1 (it was Bera!) and I got such similar scores. Who’s participant #2? (edit: ah Sander!) I want to know all the names! 😀

    I was comparing these to the last quiz, specifically this post: https://cognitivetype.com/forums/topic/vultology-quiz-4-1-sample/page/4/#post-17168

    And what I noticed is that I got exactly 90% of the group consensus then too. 😀 That’s curious.

But in the previous quiz I think there was a slight bit more group consensus (with scores like Alice’s up at 92.7%) and a slight bit less consensus against my own report. This time the two reports seem closer to being the same. And the top 6 participants have 85%-87% alignment to my own report, compared to the 80%-85% alignment of the previous quiz’s top five. So that’s a 5% jump in alignment between myself and the consensus, I think? (Not that my own report is the most important thing here– but it just tells me that we have better alignment in reading methodology, which is what’s exciting!)

    • This reply was modified 2 months, 2 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

Also yea, let’s talk a little about the percentage calculator. (Btw, the percentage calculator has always been a sort of nice-to-have tool to assist the reader, but not the determinant of type.) The calc was weighted like this for this quiz:

1st function: (10/10) = 50%
2nd function: (10/10) = 15%
3rd function: (10/10) = 7%
4th function: (10/10) = 3%

    Energetic-Lead: (5/5) = 15%
    Energetic-Aux: (5/5) = 5%
    J/P Lead: (5/5) = 5%

^ So for example, checking 10 out of 10 signals in the 1st function would contribute the full 50% to the percent bar. Same idea for the others; it all adds up to 100%. This equation is duplicated for each of the 16 types. So an example of this in the code would be:

    TiSe % weighting breakdown:

    Ti signals: (10/10) = 50%
    Se signals: (10/10) = 15%
    Ni signals: (10/10) = 7%
    Fe signals: (10/10) = 3%

    Ji signals: (5/5) = 15%
    Pe signals: (5/5) = 5%
    J signals: (5/5) = 5%
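As a rough illustration, the weighting scheme above could be sketched like this (the slot names, data layout, and function are my own hypothetical reconstruction, not the codifier’s actual source):

```python
# Hypothetical sketch of the codifier's weighted-percentage formula.
# Each slot contributes (checked / total) * weight; the weights sum to 100.
WEIGHTS = {  # per-slot weight, in percent
    "1st_function": 50, "2nd_function": 15, "3rd_function": 7, "4th_function": 3,
    "energetic_lead": 15, "energetic_aux": 5, "jp_lead": 5,
}
TOTALS = {  # number of signals available in each slot
    "1st_function": 10, "2nd_function": 10, "3rd_function": 10, "4th_function": 10,
    "energetic_lead": 5, "energetic_aux": 5, "jp_lead": 5,
}

def type_percent(checked):
    """checked: dict mapping slot name -> number of signals ticked."""
    return sum(WEIGHTS[s] * checked.get(s, 0) / TOTALS[s] for s in WEIGHTS)

# Ticking every signal fills the bar to exactly 100%:
full_ticks = {s: n for s, n in TOTALS.items()}
print(type_percent(full_ticks))  # 100.0
```

The same formula would then be run once per type against that type’s signal tallies; as noted above, the resulting bar is an aid to the reader, not the determinant of type.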

However, I’ve gone in and made adjustments to the weighting that I think are truer to how visual readings actually ought to be done:

    NEW:

    1st function: (10/10) = 30%
    2nd function: (10/10) = 15%
    3rd function: (10/10) = 7%
    4th function: (10/10) = 3%

    Energetic-Lead: (5/5) = 30%
    Energetic-Aux: (5/5) = 5%
    J/P Lead: (5/5) = 10%

^ The adjusted areas are the 1st function, the Energetic-Lead, and the J/P Lead. There was a strong imbalance towards the lead function’s signals, with the gap between the 1st function (50%) and 2nd function (15%) being an enormous 35%. Setting those to 30% and 15% respectively represents a better proportionality, imo, and the freed-up 20% is now allotted to the Energetic lead function (Je/Pi/Pe/Ji) and to J-vs-P signals.

    This is now live in the codifier. And I re-ran the same signal tally sheet from the consensus and this time I got this:

    In this particular case it’s giving an equal output of 65% for SeFi and TeNi, which I think is still much better, although it’s not edging SeFi over TeNi. However, given the specific signals chosen here, I like this estimate and don’t feel too compelled to adjust the weighting any further. I think that it’s at a good balance here– what do you think, I wonder?

I do think the function signals should have some negotiation power over energetics, especially in extreme cases. And having 9/10 Te signals against only 4/10 Se signals is pushing it, so the percentage is reflecting that. However, adding just one additional signal (Se-8: Locked-On Eyes, bringing Se to 5/10) would break the tie and weight SeFi over TeNi. I think that’s appropriate.

And for example in the excel sheet, Se-8: Locked-On Eyes was at 6/14, so it barely missed the 7/14 threshold. One more participant clicking Se-8 would cause the new calculator to output SeFi rather than a tie. Looking forward to your thoughts though. 🙂
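The tie-break arithmetic here is easy to check by hand under the new weights: one extra checked signal in a 10-signal lead-function slot is worth 30 × 1/10 = 3 percentage points. A tiny sketch (the slot names are assumed, and only the Te 9/10 and Se 4/10 counts come from the discussion above):

```python
# New weights after the adjustment described above (they still sum to 100).
NEW_WEIGHTS = {"1st": 30, "2nd": 15, "3rd": 7, "4th": 3,
               "e_lead": 30, "e_aux": 5, "jp": 10}

def contribution(slot, checked, total):
    """Percentage points contributed by one slot of the percent bar."""
    return NEW_WEIGHTS[slot] * checked / total

# Se-8 (Locked-On Eyes) moving Se from 4/10 to 5/10 in the lead slot:
delta = contribution("1st", 5, 10) - contribution("1st", 4, 10)
print(delta)  # 3.0 percentage points, enough to break a 65%/65% tie
```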

    Fayest, haven’t forgotten your questions! Will get to them next.

    • This reply was modified 2 months, 2 weeks ago by Auburn.
    Sander
    Participant
    • Type: NeFi
    • Development: lll-
    • Attitude: Seelie

    Thanks, @staas!

    @auburn: one more participant clicking Se-8 would cause the new calculator to output SeFi, rather than a tie. Looking forward to your thoughts though.

As I discuss in the post you missed (according to your edit), I didn’t check locked-on eyes because I thought looking at the camera wouldn’t count as a “point in the environment”.

    @auburn: Fayest, haven’t forgotten your questions! Will get to them next.

    And don’t forget my three signal definition questions 😉

    • This reply was modified 2 months, 2 weeks ago by Sander.
    Staas
    Participant
    • Type: SeFi
    • Development: llll
    • Attitude: Seelie

Alright, so I have run chi-square statistics on the signals to determine which ones differed most between the people who accurately predicted SeFi and the others.

The significant discrepancies (p-value under 0.05) were, in order from most significant to least:

    • Ni 3 : Intense scowling (p-value 0.02, very significant)

All of the next ones have the same p-value of around 0.05, which is still significant:

    • Pi 3 : Diagonal eye drift
    • Si 1 : Dulled eyes
    • Si 3 : Concerned scowling
    • Ne 5 : Buoyant undercurrent
    • Fi 2 : Snarling smile
    • Se 9 : Persistence effect
• Se 1 : Sharp eyes

We can see that most of the confusion resides in the Pi functions, with people confusing Si and Ni, which is why the biggest errors were TeSi and SiTe typings.

Now we have a big jump in p-value, from around 0.05 to around 0.08, so the signals above are significant, while the next ones are only somewhat significant in explaining the discrepancies:

    • Se 2 : Taut eye area
    • Se 3 : Amped perk-up
    • Se 10 : Vivid realism
    • Fi 8 : Wounded expression
    • Ne 1 : Naïve eyes

What we can see here is that the biggest source of confusion was the Pi function, with people incorrectly identifying Si when Ni was present, so the corresponding signal definitions may need more precision. Another source of confusion was Ne vs Se.

This shows there were relatively few mistakes on the energetics, but that Ne/Si vs Se/Ni was the big source of confusion, while the Judgement axis was clearly identified.

I am not sure I can derive a good way to build a consensus from checked signals with such a small dataset. @auburn, I would need as many typing reports as possible (anonymised if need be; I just want the type). There are ways to do this, and I could probably tune your profiler empirically if you can provide me with a sufficiently large dataset (please in excel, txt or csv form; you should add an export function like that to the profiler, since that’s the easiest format to use when running programs, and it should not be too complicated).
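For reference, the per-signal test described above can be reproduced with a 2×2 contingency table (signal checked vs. not, correct typers vs. others). A minimal stdlib-only sketch; the counts below are made-up placeholders, not Staas’s actual data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value) at 1 degree of freedom."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: 6 of 7 correct typers checked the signal, 2 of 7 others did.
stat, p = chi2_2x2(6, 1, 2, 5)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

Running this once per signal and sorting by p-value gives the kind of ranking shown above; with samples this small, an exact test (e.g. Fisher’s) would arguably be the safer choice.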

    • This reply was modified 2 months, 2 weeks ago by Staas.
    Staas
    Participant
    • Type: SeFi
    • Development: llll
    • Attitude: Seelie

Ok, after discussing with a friend, I ran an F-test, which is supposedly more accurate for our small sample size. Here are the results and their p-values; remember, the smaller the value, the more that signal explains the difference between people who predicted the correct type and the others. I show only the important p-values, under 0.05.

    Ni3 : 0.000062
    Se1 : 0.000078
    Se3 : 0.004579
    Se2 : 0.004579
    Fi2 : 0.012658
    Se9 : 0.012658
    Si1 : 0.022442
    Pi3 : 0.022442
    Ne5 : 0.022442
    Si3 : 0.022442
    Se4 : 0.030622
    P1 : 0.037521
    Fi8 : 0.037521
    Ne1 : 0.037521
    Se10 : 0.042608
    Te7 : 0.042608

Again, the biggest confusion is clearly on the P signals, so more clarification on P-axis identification is needed.

    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    Fayest, haven’t forgotten your questions! Will get to them next.

Hey @Auburn, I’m giving you a little nudge because it’s been a while 🙂 As a reminder, here were Sander’s questions:

• since the fixed gaze of Pi-2 excludes eye contact, shouldn’t Se-8’s locked-on eyes also exclude staring into the camera?
• You described the pointing at 2:15 as J-4 exacting hands, yet isn’t that actually a pointed emphasis? The examples and description of J-4 exacting hands suggest repeated forward vectors, which aren’t happening around 2:15
• the name of Fi-7 Excessive Contempt implies contextual dependence while its description ignores this nuance; so is all contempt excessive?

    And my questions:

    • Why is what she’s saying at 2:41 persistence effect?
    • What makes what she’s doing at 0:45 avalanching articulation?
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Thanks for the nudge!

    since the fixed gaze of Pi-2 excludes eye contact, shouldn’t Se-8’s locked on eyes also exclude staring into the camera?

    Yes, which kind of makes this sample a little sub-optimal because she appears to be directly addressing the camera in a predetermined format directed at an audience. Videos of people talking into the camera are less ideal than interview videos where the eyes can wander if they have an inclination to.

    Given that, my own clicking of locked-on eyes was perhaps debatable, yes. Although I have noticed that even when a person is in a predetermined camera format like this, high Ne users still break that eye contact much more regularly. Here is NeFi l-l- Laci Green as a parallel example, in a similarly formatted video: https://www.youtube.com/watch?v=nQ1ga8yuM50

    When compared to Jane we see Laci does have wandering eyes and doesn’t stay fixated for long on any points. This is because, unless a person is reading from a script in front of them, maintaining eye fixation is still harder for Ne’s than Se’s, all things being equal. But you’re right to bring this up. I think these are the kind of points we need to talk more about.

You described the pointing at 2:15 as J-4 exacting hands, yet isn’t that actually a pointed emphasis? The examples and description of J-4 exacting hands suggest repeated forward vectors, which aren’t happening around 2:15

    Hmm, I went back to look at the video Fayest provided ( https://www.youtube.com/watch?v=imidL5dvW5Q ) and this is what I see at 2:15

Spoiler: [screenshot of the hand gesture at 2:15]

    ^ This seems like J Exacting Hands to me, and might simultaneously count as J Projecting Hands if it goes far enough away from the torso. But I had already clicked Projecting Hands elsewhere, and this codifier only had 1 timestamp. 🙂

    As for the pointed emphasis I note that at 2:07 like so:

Spoiler: [screenshot of the pointed emphasis at 2:07]

And there’s also another one at 2:17 (…maybe this is the one you’re referring to?)

…but I don’t see one at 2:15.

the name of Fi-7 Excessive Contempt implies contextual dependence while its description ignores this nuance; so is all contempt excessive?

No, I think I describe it in the tutorial video in more detail. The difference is that a generic signal of contempt (which is a human signal, not Fi) is connected to a moment of clear moral disapproval or disagreement, like a “scoff.” When a person scoffs alongside a contempt signal, for example when they’re being smug, that’s not Fi. Maybe we do need a better name for the signal, but in essence, Fi users sometimes make this signal in a way that’s unrelated to what is being said and to any real moral disapproval. Hence “uncoordinated.”

Come to think of it, “excessive” is not a good word here; it would be more like “accidental”. Excessive implies dependence on quantity, but it’s really a dependence on context, yes. I’ll think about updating the short description too, to make this clearer.

    Why is what she’s saying at 2:41 persistence effect?

Well, in a broader sense the entire video is about mitigating the frustrations and discomforts that may come with a trip to Hogwarts. There are many points that I think would qualify, and the exact timestamp is kind of an artificial constraint because this signal has no “one” timestamp. But I was looking more generally at the timeframe between 2:34-2:42 where she says:

    “It takes even longer than it usually would be to wait in line for the Hogwarts Express, and which can go anywhere from ten minutes to an hour or two hours, depending on how busy it is but usually it’s about half an hour.”

    The real defining point for me was at around 2:39 when she says “ten minutes to an hour” and she makes a very amped+frustrated expression:

Spoiler: [screenshot of her amped + frustrated expression at ~2:39]

    ^ So I interpreted that expression, in the given context, as a Persistence Effect because she appeared quite affected by the notion of the potentially very long wait times (especially given how the subject is about how to maximize your time there).

    That was my thought process when clicking the signal. Admittedly, this one has some qualia in it, and perhaps is one of those signals that might not make it to scientific testing since it’s focused on the content of speech. But that was my reasoning behind it, nonetheless!

    What makes what she’s doing at 0:45 avalanching articulation?

    Again, this is one of those where picking out a timestamp is a bit silly, so there’s nothing particular about 0:45 that is avalanching speech as opposed to any other part of the video. I think I probably was just going down the list of signals and got to that one 45 seconds into the video. I think this signal may be one of those candidates that, in an updated version of the codifier, would just have a yes/no marker. Same with the other voice tone ones like Fi sprite-like voice, Te’s nasal monotone and Ti’s faint/trailing articulation.
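One possible shape for that codifier update, purely as a sketch (the class and field names are mine, and “Ti-voice” is a made-up signal ID): whole-video signals like the voice-tone ones would carry a plain yes/no marker, while moment-based signals keep a timestamp.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignalMark:
    code: str                          # signal ID, e.g. "Se-8"
    checked: bool
    timestamp: Optional[float] = None  # seconds; None = whole-video yes/no signal

# A voice-tone signal marked once for the whole video (no timestamp needed):
voice = SignalMark("Ti-voice", True)
# A moment-based signal pinned to a specific point, here 2:15:
locked_on = SignalMark("Se-8", True, timestamp=135.0)
```

Keeping the timestamp optional means the existing timestamped signals don’t need to change; only the global signals drop theirs.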

    Hope that covers them all.

I like these questions a lot; they seem to cover edge cases that show the current failings of the reading methodology and also the software. I have some solutions in mind for some of these, for next time I dip into reprogramming the tool.

    • This reply was modified 2 weeks, 4 days ago by Auburn.
© Copyright 2012-2020 | CognitiveType.com
This website's articles, its reading methodology and practices are the intellectual property of J.E. Sandoval.
Animated GIFs, images and videos belong to their respective owners.