Vultology Code 3.0 (PT2: Codifier)


  • Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Alright, this is part 2 (part 1 is here), where I’m gonna talk about the upcoming Codifier’s design.

    For now, you can find the newest codifier’s test link at: https://cognitivetype.com/test-codifier/


    So I know what some of you might be thinking: “what are all these buttons! D: *runs*” But there’s no need to worry. You don’t need to use all the buttons; the extra ones are there in case you want to use the Advanced mode. This codifier comes with two modes:

    • Mode A: Advanced
    • Mode B: Simple

    The Advanced mode will be used when doing official Vultology Reports and formal research, which have to be far more rigorous with the data. But for most of you, the Simple mode will likely suffice. Here’s an overview video explaining the interface. 🙂

    • This topic was modified 1 month, 2 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Mode: A

    So the advanced mode is not very “fun” to work with, even with all these tools. But it’s not fun in the same way that the good things we know we should be doing are not fun, like eating healthy and exercising. It’s good scientific practice, and that comes with a lot of necessary restrictions and environmental controls. I’ll get into these conditions below. The benefit is that this mode produces objectively usable data, which can be respectably placed into (pilot) studies going forward. We can no longer stay in the lukewarm experimental phase of CT; we have to push forward into the hard but rewarding work of complete methodological control.

    No Cherry Picking

    In order to do that, the data has to be gathered very neutrally and fairly, with no cherry-picking. The only way to do that is to force an examination of every signal across every second, to be sure that none are being ignored. Human nature is strongly inclined toward confirmation bias, so this actually takes some willpower.

    The way this is achieved in this new codifier is by the use of Tabs:

    The signals are divided into groups of about 5-9, and that’s all that appears on your screen at one time. The goal is to examine a 2-minute segment of the video looking for those 5-9 signals ONLY, faithfully clicking each instance you see, and finishing the segment before moving on to the next tab and repeating the process. As a result of this method, a reading cannot take less than 22 minutes, since there are 11 tabs of 2 minutes each. But realistically it takes at least 45 minutes, when all pausing/playing is considered.
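    The tab loop above amounts to a simple lower bound on reading time. Here’s a minimal sketch of the idea; the tab names and signal groupings are hypothetical placeholders, not the codifier’s actual data:

    ```python
    # Illustrative sketch of the tab-by-tab tallying constraint described above.
    # Tab names and signal names are made up for the example.

    SEGMENT_SECONDS = 120  # each pass covers one 2-minute segment of video

    TABS = {  # ~5-9 signals per tab; the real codifier has 11 tabs
        "Tab 1": ["signal_a", "signal_b", "signal_c", "signal_d", "signal_e"],
        "Tab 2": ["signal_f", "signal_g", "signal_h", "signal_i", "signal_j"],
        # ...9 more tabs in the actual tool
    }

    def minimum_reading_minutes(num_tabs: int, segment_seconds: int = SEGMENT_SECONDS) -> float:
        """One full segment pass per tab sets the floor on reading time."""
        return num_tabs * segment_seconds / 60

    print(minimum_reading_minutes(11))  # 11 tabs x 2 min = 22.0 minutes
    ```

    The point of the sketch is just that the minimum is structural: with 11 tabs and one mandatory 2-minute pass per tab, no reading can honestly finish in under 22 minutes.
    
    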

    Btw, these constraints are mostly for myself (and any other vultologists who’d join in on this method), since I can see the confirmation bias working within me, wow. And you’d be surprised at what this does to the reading process. Suddenly, signals start popping up where you might not have thought to look, because you’re forced to look lol. Our first impulse is always to jump the gun, but when you have to examine 5-ish signals and no others, you see just how much more nuanced it is.

    Accordingly, the results of this report appear as a spectrum, with a strength factor for each metric. Everyone shows at least a little of the opposite axis’s signals, at least some of the time, and this approach actively fights the inclination to tune those out. It is entirely normal, in this approach, to have some representation of both Measured and Candid signals, or some Suspended and Grounded signals. The point isn’t to have zero signals of the other pole, but to see which one is highest. In order to do that, both have to be weighted.

    So, this codifier is also fully doing away with mutual exclusivity, and letting the data lead the way. I still believe that the majority of people will tend to have low levels of signal mixing, but this codifier will be able to identify the exact levels of each, because it forces the calculation of each bar’s strength.
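    One way to picture the weighting described above, as a minimal sketch: treat each pole’s strength as its share of the total tallied instances, so both poles get a value rather than being forced into mutual exclusivity. This is an illustrative formula of my own, not the codifier’s actual calculation, and the tallies are invented numbers:

    ```python
    # Hypothetical per-axis strength: each pole's share of the total tally.
    # Not the codifier's real formula; shown only to illustrate weighting
    # both poles instead of treating them as mutually exclusive.

    def axis_strengths(tallies: dict) -> dict:
        """Map each pole's raw signal count to a 0-1 strength factor."""
        total = sum(tallies.values())
        if total == 0:
            return {pole: 0.0 for pole in tallies}
        return {pole: count / total for pole, count in tallies.items()}

    # e.g. a reading with mostly Measured but some Candid signals (made-up counts):
    print(axis_strengths({"Measured": 34, "Candid": 6}))
    # {'Measured': 0.85, 'Candid': 0.15}
    ```

    Under a scheme like this, a low but non-zero bar for the opposite pole is the expected outcome, and the reported type follows from whichever pole’s strength is higher.
    
    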

    No Choosing the Type

    Another methodological improvement of this new codifier, in line with scientific hygiene, is that only the data is used to determine the type. No longer is there a type dropdown menu at the top for you to choose what type they are, haha. It’s the results of the experiment that should determine the outcome. In every other scientific experiment, the scientist doesn’t get to disagree with the data, or choose what the outcome is. And even though nobody sincerely wants to falsify results, human bias is an all too real thing. So the result of your signal tallying will be given when you export the PDF report. In some cases you may not know what type they are until you export it.

    One final note: since this method is far more involved, it will cost more than $30. But that cost will still be far below what some other typology gurus are charging out there, with practically no methodological control. So if you’d like a precise reading of your vultology, the Advanced mode is the way to go. You’ll get a breakdown that looks like this, with the “degree” of use of each of your functions:

    ( ^ this will look more like a blood test panel; cuz that’s how objective vultology needs to become)

    I’ll make another video walking through the use of the Advanced mode on a real sample, sometime later. 😀

    But I hope this presents a useful overview!

    Let me know if you have any questions, and thank you all for your support and feedback — which have made these refinements possible.

    • This reply was modified 1 month, 2 weeks ago by Auburn.
    Alice
    Participant
    • Type: FiSe
    • Development: ll--
    • Attitude: Unseelie

    Hi! Just tried out advanced mode, and it was actually really neat to be able to pay such close attention! I haven’t done any codifying in a while, but I remember that I used to constantly rewind or end up watching a video at half speed because I was so anxious I was going to miss something, haha! The bar turning green at the 2 min mark is a very nice touch, and sorting the signals into their individual categories helps a TON; it really saves me from getting distracted! I used to spot a very clear signal, then go searching for it amongst the whole list of signals, and end up losing a few seconds having to rewind – now it’s much simpler. I can just have the whole list for one category right there!

    One small bug appeared, and it might just be on my end, but I think a few of the example gifs have been replaced with an identical gif of Carl Sagan doing what I assume to be coordinated emphasis or ballistic momentum. Other than that it worked perfectly, and I’m looking forward to trying it out some more 🙂 Thank you for making it and constantly improving these tools!

    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    Love this! I haven’t tried it out yet, so I can’t speak for what it’s like to actually use, but I think reducing confirmation bias by forcing you to focus on a small set of signals at a time is a great idea and should definitely improve accuracy.


© Copyright 2012-2020 J.E. Sandoval