A Computational Metaphor

  • #18878
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Hello,
    This is a thread about information metabolism, but also an attempt to explain it in a more useful way than it has been explained before, by making use of a computational metaphor. Although CT is a model of information processing, I've resisted making any absolute claims about the structure of the equations that comprise the functions.
In part this has been because I find it very easy to fall into the trap of over-systematizing things from an impulse for systemic coherence, but without evidence. It’s been my experience that metabolic descriptions, if not done carefully, are the most susceptible to variability in interpretation. What follows is my attempt at a robust description of cognitive processing that hopefully avoids the problems that come with this endeavor. This is going to be a little dense, but I hope it’s comprehensible.
    Computation
    As I have mentioned before, I believe the functions are content-less at core (void of opinion/belief/topics) and run at the millisecond level, much like computer code, to produce effects which become evident to us only after they’ve magnified to recognizable scales. I’ll be describing the four energetic processes first, using a fictional coding language as a metaphor.
    (Please pardon the syntax errors, as I'm not a programmer by trade, and I am trying to convey an essential idea by it. I hope my explanations can clarify what I mean by each line, even if the syntax doesn't reflect that. But for any programmers out there, help with the syntax would also be most appreciated. )

    #18879
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

     

// Pe
function explorer() {
		open_perceptionSystem;
			load_perceptionSystem.scan;
			load_perceptionSystem.data;
		perceptionSystem.scan(data.objects);
		foreach perceptionSystem.scan(objects[i]) {
			if (objects[i] != data.sets) {
				data.sets.appendObject(objects[i]);
				data.seekAdjacent(objects[i]);
			} else {
				ignore(objects[i]);
			}
			if (perceptionSystem.objects > maxCapacity) {
				delete(data.sets.oldest);
			}
		}
}
    

     
In order to understand the above, I’m going to go through it line by line, beginning with open_perceptionSystem;. It is my understanding that evolution is a conservative enterprise (see: Neural Recycling Hypothesis) and, if at all possible, it will reuse existing circuits by modifying them rather than create new ones from scratch. Therefore the function explorer() first loads the same perceptionSystem that is used by other parts of the brain. This is similar to calling up a “library,” for those into programming. The “library” of perceptionSystem contains many built-in functions which are utilized throughout this code.
The second line of this code is load_perceptionSystem.scan; which activates visual search. Visual search is well studied; it’s when the eyes search the environment (such as by reading a paper or looking at the features of something). The reason this module is loaded is that the mental operation we want to perform (i.e. “conceptual exploration”) is already largely encoded within visual search. Due to the aforementioned neural recycling hypothesis, “.scan” is loaded even though we are not actually going to explore the environment; it is a vestigial effect of using the same operations for different reasons. It’s also “.scan” that causes the vultological secondary effects, namely the eye-toggles which we read in order to identify that the explorer process is activated.
Thirdly, along with the need to load the scan function, we also have load_perceptionSystem.data; which gives us the content to observe and to scan. The word “data” here requires detailed explanation.
“Data” is information. But it has already undergone processing from other psychic systems before it gets to the cognitive processes in question. One example of this is evident in the effects of optical illusions, which change the properties of what we see. This happens so rapidly that our experience of our visual feed is already adjusted before we know we’re registering the image. In other words, we don’t ever really “see” the unmodified feed from the optic nerve in our mind’s eye. This also definitively puts Ne and Se in a different category than “the senses.” Ne and Se are cognitive processes that handle the exploration of information / “data,” and data is presented to them both by the visual system.
Moving forward, in the next line, perceptionSystem.scan(data.objects), we see “.scan” directed towards “data.objects.” And here we need clarification on what an object is. While for most other animals an object is very closely allied to the physical analog in question, in the human mind this process is eventually represented as a “mental object.” My main source for how this happens is Ray Kurzweil and his theory of mind, in which he says that humans build hierarchical structures in our neurons when it comes to representation. In brief, we all begin by representing (“quantizing”) the world as objects in a way that is very closely allied to physical analogs. However, as we become more complex, so do our objects, without ever becoming fundamentally different in their metabolism. Eventually things like “round”, “pretty”, “money”, “husband”, “police officer” become conceptual objects that we manage. This is not a feature of any one cognitive process, but an emergent feature of the amount of cortical processing we have and how many stacked layers of conceptualization we can perform.
    The entirety of our cognitive processing is mental and thus conceptual. All eight functions are conceptual in their metabolism. However, if there’s a close alliance in the data to a physical analog, then the visual search (“.scan”) happens on the physical objects themselves. And if there is no physical analog to the mental object, then the visual search activates nonetheless, but it searches the mind’s information field for mental objects—thus creating eye-toggles.
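Since the thread never fixes a concrete data structure for these layered objects, here is one minimal way to picture them in runnable Python, purely as an illustration (the MentalObject class and its fields are my own invention, not anything canonical): a concept is a node whose parts are other concepts, so a “stacked layer of conceptualization” is literally tree depth.

```python
from dataclasses import dataclass, field

@dataclass
class MentalObject:
    """A concept built out of sub-concepts (hypothetical structure)."""
    name: str
    parts: list = field(default_factory=list)

    def depth(self):
        # A raw percept has depth 1; each layer of conceptualization adds one.
        return 1 + max((p.depth() for p in self.parts), default=0)

# "police officer" as a stack of concepts over simpler objects
badge = MentalObject("badge")
uniform = MentalObject("uniform", [badge])
officer = MentalObject("police officer", [uniform, MentalObject("person")])
```

Here badge.depth() is 1 while officer.depth() is 3, which is one way to read the claim that complex objects differ from simple ones only in how many layers are stacked, not in kind.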
foreach perceptionSystem.scan(objects[i]) {
	if (objects[i] != data.sets) {
		data.sets.appendObject(objects[i]);
		data.seekAdjacent(objects[i]);
	}
     
    The next few lines have to be viewed together, as they comprise a “for” loop. This means that an array of objects passes through the same operation and the operation is run on each one. An organic way to read this code is as follows: For every (mental) object scanned, if the object is not already in the data sets, add it to the sets. And if you do so, seek adjacent objects to that one.
The “sets” here refer to Pi, which I’ll explain later, but for now you can consider this an archive. Essentially, “.scan” is scanning the information field looking for what has not yet been cataloged into sets. If an object has not been cataloged, it is indexed (data.sets.appendObject(objects[i])) and another round is run for surrounding objects. In other words, the program suspects that if there was one non-cataloged object, there might be more nearby. So, the identification of one non-cataloged object activates .seekAdjacent, which is also responsible for the toggling. The toggling stops once no new looping is triggered by the identification of new objects.
Notice how this entire function explorer() is literally exploring non-cataloged objects. And this brings us to the next line of code: else { ignore(objects[i]); }. What this means is that if an object is already within “sets” then it is skipped over, or “ignored.” The explorer function has no use for that which has already been explored, and this ignoring is also what allows it to seek new information. This ignoring feature, like all other lines in this code, ends up having macro-level effects which we’ll talk about later.
    Finally we get to the last bit of code:
if (perceptionSystem.objects > maxCapacity) {
	delete(data.sets.oldest);
}
When read organically, what this says is that if the number of mental objects being handled is beyond the perceptionSystem’s capacity, then the oldest sets (Pi) are deleted to make room for new objects. This is a necessary part of the symmetry in the code, because an infinite loop of data-gathering cannot run without some means of making room for that infinity. The human mind is not infinite, and so the function explorer() requires tossing out the old to make way for the new. Once again this has macro-level effects which we’ll get to later.
The word “.delete” here is not literal; what happens is that the deprecated “sets” are unloaded from consciousness, and thus fall into the unconscious. Some may still be retrievable, but not necessarily.
    And that’s the end of the function explorer(). Nothing that isn’t in the above code is part of the explorer function itself. There are no other features that are fundamental to the explorer function, and so everything else is emergent effects. The emergent effects, and how they emerge, will be discussed further down.
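For anyone who wants to play with the shape of this loop, here is one way the whole of explorer() could be sketched in runnable Python. Everything here is illustrative and my own choice of stand-ins: a dict called ADJACENCY plays the role of .seekAdjacent, a plain list plays the role of the Pi sets, and max_capacity is an arbitrary number. The point is only the loop's shape: catalog the novel, ignore the known, evict the oldest.

```python
from collections import deque

def explorer(scanned, sets, adjacency, max_capacity=7):
    """Toy Pe loop: catalog novel objects, queue their neighbors for
    scanning, and evict the oldest entries when capacity is exceeded."""
    frontier = deque(scanned)
    while frontier:
        obj = frontier.popleft()
        if obj not in sets:
            sets.append(obj)                         # data.sets.appendObject
            frontier.extend(adjacency.get(obj, []))  # data.seekAdjacent
        # else: ignore(obj) -- already catalogued
        while len(sets) > max_capacity:
            sets.pop(0)                              # delete(data.sets.oldest)
    return sets

# Hypothetical "information field": each object points to its neighbors.
ADJACENCY = {"a": ["b", "c"], "b": ["d"]}
```

Running explorer(["a"], [], ADJACENCY) grows the catalog to ["a", "b", "c", "d"]; running it again on that same catalog adds nothing, which is the “ignoring” feature at work; and shrinking max_capacity forces the oldest entries out, which is the “.delete” line.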

    #18880
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

// Ji
function compass() {
		open_logosSystem;
			load_logosSystem.define;
			load_logosSystem.mono;
		if (data.objects[0].define != mono) {
			logosSystem.define(objects[0].properties);
			if (objects[0].properties != mono) {
				logosSystem.define(objects[0].properties.properties);
			}
		}
		else if (data.objects[0].define == mono) {
			ignore(objects[0]);
		}
}
    

     
As with Pe, we begin this function by calling up a library (open_logosSystem). “Logos” in this case is the left-brained proclivity, according to Iain McGilchrist, to make distinctions between things and to quantize information. This has been happening for as long as we’ve had a nervous system and needed to differentiate one substance from another in order to know how to act. When we look at a physical object, it’s logos that allows us to tell it apart from its environment. This is the task that current A.I. computer vision is trying to perfect, by recognizing objects within natural settings.
    So logosSystem is a system of information differentiation that is called up. It has sub-operations within it already, which work to set the boundaries around an object. Again, this happens so rapidly that we don’t even notice it. It’s very fast and is shared by other animals. For example, when a dog sees you and recognizes that you are you, they’re using this logosSystem. This same logosSystem is what is being called up here. But humans use this system in new and different ways, as we’ll see.
Additionally, the activation of the logosSystem causes rigidity of the body as a vestigial effect. I can only speculate as to why this is. If the logosSystem evolved in order for animals to identify what-is [Ji], and then how to act [Je] according to what-is, then it follows that the entire logosSystem is deliberate. This means that no action is random. The fluidity of the body is restricted so that there is either a restraint of movement or very intentional movement, because the purpose of the logosSystem is to define reality precisely and to move precisely within it.
Coming back around, the next line of code is load_logosSystem.define, which loads the operation that puts boundaries around mental objects. However, it needs a criterion to do so. The next line of code (load_logosSystem.mono) is responsible for providing that criterion. “Mono” is a method of definition that is singular, essentially measuring how self-defined (non-contingent) an object is. For example, a keyboard has imperfections in “mono” because it could be argued to be part of a larger object called “computer.” A finger has low mono too, because it belongs to a larger object, the hand, which itself has lower mono than the body as a whole. Inversely, Platonic Forms have very high mono, being self-existent/self-defined without contingency.
    The next two lines can be analyzed together:
if (data.objects[0].define != mono) {
	logosSystem.define(objects[0].properties);
     
First we notice that, unlike Pe which handled “objects[i]”, here we see “objects[0]”. The term [0] refers to a singular object, not an array of objects. In other words, the operation here is being performed on one object only. This singular objects[0] can also be called the “subject,” as that is how the operation treats it.
    So what these two lines say is that if the given object’s boundaries are not perfectly mono/self-defined, then the compass() moves its “.define” operation down a level, to examine the object’s properties.
Now, as we discussed, mental objects are layers of conceptualization, so mental objects are themselves made up of smaller mental objects (i.e. “properties”). The word “properties” here refers to the sub-objects that make up a larger mental object. The aim, then, is to find the lower layer at which there is mono. Which leads to the next line:
if (objects[0].properties != mono) {
	logosSystem.define(objects[0].properties.properties);
}
If you notice, this is the same code as the above, except it is now directed towards an object’s properties.properties. If no mono is found at the level of an object’s properties, then those properties’ own properties are investigated too. This can create a potentially infinite loop, until a satisfactory level of mono is found. And this also has macro-level effects which we’ll discuss later.
Also, from a vultological perspective, it’s worth noting that since this operation is not a “for each” loop with an array of objects, but an investigation into one object’s properties and sub-properties, the mental attention of the person sinks, with every second, deeper into a hole, causing disengagement from the array of objects in the environment. What could have been ten processing loops carried across ten objects becomes ten operations carried out on the same object and its sub-properties. During this time the body remains frozen in rigidity. And this is what causes the effects of Ji introversion.
else if (data.objects[0].define == mono) {
	ignore(objects[0]);
     
Lastly, if an object’s mono evaluates as true, then it is ignored. The operation compass() is focused only on identifying what isn’t mono, applying boundaries to objects, via logos, until mono is found.
    And that’s the extent of the compass() function. Nothing else is fundamentally part of the process but is emergent from it.
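The same descent can be sketched in runnable Python. The modelling choices here are my own, not canon: an object is a dict with a "mono" flag and a "properties" list, and a max_depth guard stands in for the person eventually giving up on the hole.

```python
def compass(obj, depth=0, max_depth=10):
    """Toy Ji descent: if the subject isn't self-defined ('mono'),
    examine its properties, then their properties, and so on."""
    if obj.get("mono"):
        return obj, depth        # a satisfactory level of mono was found
    if depth >= max_depth:
        return None, depth       # the potentially infinite loop, cut short
    for prop in obj.get("properties", []):
        found, d = compass(prop, depth + 1, max_depth)
        if found is not None:
            return found, d
    return None, depth

# Hypothetical subject: keyboard -> key -> form, where only "form" is mono.
keyboard = {"name": "keyboard", "mono": False,
            "properties": [{"name": "key", "mono": False,
                            "properties": [{"name": "form", "mono": True}]}]}
```

compass(keyboard) returns the “form” object at depth 2. Note that each recursive call is another tick spent on the same subject rather than on a new object, which is the tunneling/introversion effect described above.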

    #18881
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

// Je
function articulator() {
		open_logosSystem;
			load_logosSystem.objects.positions;
			load_logosSystem.objects.vectors;
			load_logosSystem.order;
				ready_logosSystem.motorSystem;
				ready_logosSystem.languageSystem;
		foreach (objects[i].positions|vectors) {
			if (objects[i].positions|vectors != order) {
				order(objects[i].positions|vectors);
					pass.order(motorSystem);
					pass.order(languageSystem);
			}
			else if (objects[i].positions|vectors == order) {
				ignore(objects[i]);
			}
		}
}
    

     
With Je, we again begin by calling open_logosSystem; but this time, instead of loading “.define”, we load the operations load_logosSystem.objects.positions; and load_logosSystem.objects.vectors;. What we want to know is an object’s position and its vector. Notice that the definition of the object itself is not the concern, but where it is and where it is going.
But it’s also important to remember that this is a mental object, not necessarily a physical analog, which means that a mental object’s “position” may not be a 3D-space coordinate, and an object’s “vector” may not be a 3D-space vector. For example, an object’s position might be the vice-president being right next to the president. This is a conceptual “position,” whether or not the two are ever physically adjacent in real life.
    As for “vector,” at higher levels of conceptualization, the same thing occurs. For example, a pencil’s vector is its production of ideas in writing. Here an object’s vector is what the object “does,” its verb or functionality.
The next line of code is load_logosSystem.order; and here “.order” is the analog to Ji’s “.mono” insofar as it provides a criterion for measuring. This is because, even if we know where an object is and what its vector/verb is, we don’t know how it should be positioned. This criterion is provided by “.order” based on a notion of what the proper arrangement is. I realize the word “should” carries a value judgment in it, which will be discussed further down as we differentiate into function axes. But for Je by itself, we treat the operation without knowing the nature of this specific order.
ready_logosSystem.motorSystem;
ready_logosSystem.languageSystem;
The next two lines “ready” other systems: the motor system and the language system. For anyone interested in why this is, you can read about the bi-directional hypothesis of language and action. Essentially, the motor systems and the linguistic centers are tied to the conceptual registration of vectors/“verbs.” There is a direct link, measurable in fMRI scans, between the comprehension of verbs (both abstract and concrete) and the motor systems. In other words, when objects.vectors loads, that mental object’s registration in the mind also “readies” these two other systems of the body. Even if no action is literally performed in the world, activity is triggered in the motor system. And by this we know there is a tie between articulation, motor movement and language. This is responsible for the vultological effects of Je.
foreach (objects[i].positions|vectors) {
	if (objects[i].positions|vectors != order) {
		order(objects[i].positions|vectors);
Now we get to the root operation. We can translate the above by saying that, for each object, if that object is not positioned in “order” relative to other objects, then it is moved into order. This ordering applies both to its position and its verb/vector. Now, before anything else manifests in the world, we have to remember that this is conceptual order; the entire operation happens in a conceptual space. However, this leads to the next two lines:
pass.order(motorSystem);
pass.order(languageSystem);
    Once the Je function has determined, cognitively, what the situation is and what the “answer” is for how to order things, it passes that information along to the motorSystem and languageSystem. Whether or not the motor and language systems end up executing on that information is outside of this process’s scope.
    In brief, the articulator() function is a function which determines how to mentally order objects in relation to each other, given their positions and vectors, and passes that information along to other systems which act upon it. But articulator() by itself is just the determinator or judger of what that order is.
else if (objects[i].positions|vectors == order) {
	ignore(objects[i]);
    Lastly, we have the above two lines. And just as before, if the objects are seen as already being in order, then they are ignored. The articulator() process is only focused on things which are not in order. This has macro-level effects which will be discussed later but this is the entirety of the articulator() operation. Everything else about it is emergent from it.
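Sketched in runnable Python, with caller-supplied in_order and reorder functions standing in for the deliberately undefined “.order” criterion (the thread leaves its nature open until the function axes), the loop looks like this:

```python
def articulator(objects, in_order, reorder):
    """Toy Je loop: for each object whose position/vector is out of
    order, compute the ordered form and pass it to the motor and
    language systems; objects already in order are ignored."""
    commands = []
    for obj in objects:
        if not in_order(obj):
            ordered = reorder(obj)                 # order(objects[i].positions|vectors)
            commands.append(("motor", ordered))    # pass.order(motorSystem)
            commands.append(("language", ordered)) # pass.order(languageSystem)
        # else: ignore(obj) -- already in order
    return commands
```

For instance, articulator(["OK", "fine", "Loud"], str.islower, str.lower) emits motor/language commands only for "OK" and "Loud". Note that the function only determines the order and returns the command list; whether the downstream systems execute those commands is out of its scope, exactly as described above.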

    #18882
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

// Pi
function worldview() {
		open_perceptionSystem;
			load_perceptionSystem.sets;
			load_perceptionSystem.recall;
		if (objects[0] ≈≈ recall.sets[i]) {
			sets[i].appendObject(objects[0]);
			sets[i].seekAdjacent(sets);
			if (adjacent.sets ≈≈ recall.sets[i]) {
				sets[i].adjacent.appendObject(sets);
				sets[i].adjacent.seekAdjacent(sets);
			}
		} else {
			ignore(objects[0]);
		}
}
    

     
With the function worldview() we again begin by opening perceptionSystem; however, what we load here is load_perceptionSystem.sets; which requires some explanation. The word “sets” in this code refers to an array (or set) of objects tied together, as if in a mathematical matrix. These are what I have called “datasets” in the book and model at various times. We will get into what determines how sets are formed when we get to Si and Ni differences. For now, what matters is that sets are not identical to objects, but are best thought of as matrices of objects.
    The next line we see is load_perceptionSystem.recall; which is an operation that allows us to recall or search these sets/matrices. And this is what I have previously called the “librarian” as opposed to the library. The worldview() function is responsible for pulling up information, but the information itself is part of the broader perceptionSystem.
    We then see this “recall” operation put into effect in the next lines:
if (objects[0] ≈≈ recall.sets[i]) {
	sets[i].appendObject(objects[0]);
    What these lines translate to is: If the current object approximates (≈≈) an object in an existing set, then append that object to the set. In other words, if an object is seen as belonging to a given matrix, then it is integrated (“appended”) into it, causing the matrix itself to grow. Notice how, like with Ji, we have “objects[0]” rather than “objects[í]” and this is because we are only examining one object and seeing whether it relates to any sets we have seen before. But that’s not all that happens:
sets[i].seekAdjacent(sets);
This next line of code says that, in the case that there is a match and an object is added to an existing set/matrix, adjacent sets are also called into view. This is in direct contrast to what we saw of Pe (data.seekAdjacent(objects[i])), which sought adjacent objects. Both Pe and Pi are seeking information, but Pe seeks adjacent objects within “data”, while Pi seeks adjacent sets within memory via recall. In other words, the worldview() program suspects that if an object was missing from one set, the same object (“objects[0]”) may also be missing from other sets. Which leads to:
if (adjacent.sets ≈≈ recall.sets[i]) {
	sets[i].adjacent.appendObject(sets);
	sets[i].adjacent.seekAdjacent(sets);
}
So now the operation (recall) is run on the adjacent sets, and if there is a match, the object is appended to those sets too. In essence, this operation wishes to insert the newfound object into as many sets as it applies to. Notice, however, that just like with Ji, there is a tunneling effect where the operation digs deeper into a single loop.
    If one object recalls a set, and that set recalls another set (and so on, ad infinitum) then we begin to see where worldview rambling comes from. All of these sets are loaded into consciousness, causing the mind to be populated by sets in the present moment.
    Additionally, as with Ji, the processing loop takes away time that might otherwise be used to observe new objects, causing the body to remain still in the outer world – which is introversion.
    However, as this happens, the loading of the perceptionSystem and “.recall” causes vestigial effects on the eyes in the form of fixed gazes and searching scowls. The eyes don’t disengage because they need to be engaged in order for “.recall” to work, as “.recall” is part of the perceptionSystem, which needs to be activated.
} else {
	ignore(objects[0]);
}
    Lastly, we see the final snippet of code. This essentially says that if the object doesn’t have any sets that it compares to, the worldview() function ignores it. New information with no comparison to other sets is not something the worldview() function can handle, but this is exactly what the explorer() function handles.
    The explorer() function would investigate the new object, as well as adjacent objects, and from that a set can eventually be made. However, if worldview() is presented with just one object which it has no reference for, it neglects it. This has macro-level effects which we’ll discuss later.
    This is the entirety of the operation of the worldview() function. Every other common attribute of the process is emergent from this equation.
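One last runnable sketch, again with invented stand-ins: a function called approx plays the role of the ≈≈ operator, and an adjacency dict maps each set to its neighboring sets. The shape to notice is that a single match fans out across related sets, while an object with no match at all is simply dropped.

```python
def worldview(obj, sets, adjacency, approx):
    """Toy Pi recall: append a matching object to every set it
    approximates, spreading outward through adjacent sets."""
    matched = [name for name, members in sets.items() if approx(obj, members)]
    if not matched:
        return sets                   # no reference for it: ignore(objects[0])
    frontier, visited = list(matched), set()
    while frontier:
        name = frontier.pop()
        if name in visited:
            continue
        visited.add(name)
        if approx(obj, sets[name]):
            sets[name].append(obj)    # sets[i].appendObject(objects[0])
            frontier.extend(adjacency.get(name, []))  # seekAdjacent(sets)
    return sets

def approx(obj, members):
    # Crude stand-in for ≈≈: shares a first letter with any member.
    return any(obj[0] == m[0] for m in members)

# Hypothetical memory: two sets, with "fruits" adjacent to "round".
SETS = {"fruits": ["apple"], "round": ["ball"]}
ADJACENT = {"fruits": ["round"]}
```

worldview("apricot", SETS, ADJACENT, approx) appends “apricot” to fruits and then checks the adjacent “round” set (where it fails the approximation), while worldview("zebra", ...) matches nothing and changes nothing, which is the ignoring behavior that explorer(), not worldview(), is built to handle.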

    #18883
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Apparatus
    And that’s all. Although there may be some refining to do in this computational metaphor, nothing that isn’t listed above belongs to the four functions. These four functions above, when put together, are what we might call the apparatus of consciousness. The apparatus passes along information across functions the same way a program would, in order to achieve a net result. A function cannot be taken outside of the apparatus and still operate, any more than an organ can be expected to operate out of the body. And each function, by its essence, is content-less and strictly metabolic.
    Emotional Effects
    Notice that this is compatible with the current omission of emotional attitudes from functions. We’ll talk more about how the apparatus relates to emotions down below, but in general the emotional register exists alongside the apparatus and when the apparatus runs/processes, the emotional register triggers. However, it would be a mistake to ally any emotion to the metabolism of the functions themselves.
(So, examples of this might be Pe and enthusiasm or excitement, Pi and worry or paranoia, or Je and aggression/assertiveness. Metabolically, Pi is not a worrying function and Pe is not about excitement. Excitement may be a common emotional side-effect of searching for uncatalogued objects, but it is not a necessary one. None of these emotional responses are necessarily tied to the functions.)
    (I'll have to cut myself short here for now, as this is a lot to post!)

    #18890
    Supah Protist
    Participant
    • Type: SeTi
    • Development: ll-l
    • Attitude: Directive

    Congratulations on the first draft of your cognitive architecture!
    Observations:
1) It seems that part of the worldview function (datasets) is referenced by the explorer function, but not vice versa. Should this be the case if the two functions are part of the same oscillation?
2) It seems that in the compass function, objects are investigated by going deeper into their sub-properties. However, in the example you gave, mono seemed to be reached by zooming out from the object as opposed to zooming into it. How is mono reached by zooming in on less and less mono?
3) For the most part, each function seems to stand on its own without a strong symmetric relationship to the other functions. However, most typology systems posit symmetrical properties that make up the functions. For example, in socionics, three dichotomies make up the functions you described; namely, static/dynamic, introverted/extraverted, and rational/irrational. So in a cognitive architecture for socionics, the Je function/program would be dynamic, rational and extraverted. The extraverted/introverted dichotomy seems present in the programs; my question is whether additional dichotomies, such as rational and irrational, are intended to be apparent in the construction of the code. The potential issue I see now is that the symmetry of the function programs is not constrained by the theory. There is a marked difference between the extraverted and introverted function programs, but there is not an analogous difference between the rational and irrational function programs, nor between the conductor and reviser function programs. I guess there is an explicit mention of open_perceptionSystem and open_logosSystem; however, there is no structural aspect of the code that encapsulates this difference, as far as I can tell.
    Nice work!

    #18898
    CandyDealer
    Participant
    • Type: NeFi
    • Development: l-l-
    • Attitude: Unseelie

Hi, I just wanted to say that the pseudo-code you have used to try and describe the functions contains a lot of syntax incoherence and makes no sense, which brings me to wonder why you would use coding to explain yourself when you clearly seem to have no clue of how it works. That makes me sincerely admire it, since I have no idea how someone finds it easier to express oneself in a language that one does not grasp.

    #18899
    hackphobia
    Participant
    • Type: FiSe
    • Development: lll-
    • Attitude: Unseelie

I love this. I think mono is my new favorite word now, and I understand Ji on a deeper level.
I also want to make a little adjustment; I can't help myself.
// Ji
function compass() {
	open.logosSystem;
	load.logosSystem.define;
	load.logosSystem.mono;
	if (data.objects[0].define != mono) {
		objects[0].properties.define;
		if (objects[0].properties != mono) {
			compass(objects[0].properties.properties);
		}
	}
	else if (data.objects[0].define == mono) {
		objects[0].ignore;
	}
}
Just in case the object's properties aren't defined, you could call compass again on its sub-properties and create a recursive tree of reductions, until you define all the sub-properties or get your brain fried.

    #18903
    Bera
    Moderator
    • Type: SeFi
    • Development: ll--
    • Attitude: Seelie

    @CandyDealer nice to see you on the forum after so much time.
    Do you have any observations about the metabolism itself? 🙂
     

    #18904
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Hey @candydealer ! Nice to see you around.
    Oh yes, there are definitely syntax errors. Hence "fictional coding language." But even so, I may be lacking systemic coherence, so I would be most grateful if you could help me sort some of it out?
    As for why I chose to express myself in a computational metaphor, it's because I believe computer algorithms come the closest to describing the way cognition operates. I just lack the expertise in coding to express myself properly, but I do think the medium is well suited to describing cognitive processing. I hope that the essence of my imperfect metaphor can somehow shine through, and maybe some of the members here can help refine it?
     

    #18908
    Alice
    Participant
    • Type: FiSe
    • Development: ll--
    • Attitude: Unseelie

    This is incredibly interesting, and has afforded me a much clearer understanding of the macrofunctions (Ji, Pe, etc.)! I am already beginning to see how behavior stems from metabolism: how Ji can fall infinitely into identity searching / defining, how Pi can be so dismissive of completely new concepts, how Je can come across as blunt and controlling, how Pe can become so easily distracted from what is right in front of it, etc. This whole CT thing has given me a lot of understanding of why people act the way they act, which is hugely comforting and enlightening.
    A few questions though, which I assume you will touch on later:

    • How do the specific functions metabolize information? For example, if we are comparing Ti to Fi, is the same metabolism just processing different kinds of information, or is the metabolism completely different?
    • How does metabolism differ for conscious and unconscious functions? I assume that we are metabolizing with all four of our functions at the same time - is the difference that we are aware of the conscious functions and unaware of the unconscious? This has behavioral ramifications that I am extremely interested in.
    • Why do the more specific functions exist? It seems like the macrofunctions handle pretty much all the information we need to process. Why would they split into functions that handle smaller subsets of data? That seems less advantageous.
    • Which brings me to my final question: Why do the functions split in the way that they do? Why does Fi imply Te, for example, and why are these considered opposites? Do the two form a metabolic whole when the data processed by both is put together?

    Thank you very much for this new and cool description of metabolism! It's really got me going with a lot of different questions and speculations!

    #18909
    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    First of all, I just want to say that it is really cool and satisfying to see an attempt to get to the real core of the functions and figure out what they truly are. I wonder if someday we will be able to use something like fMRI or EEG to really "see" the functions.
    One question I have is about how the axes work in this model. As it’s written, it looks as though each function acts independently rather than on an axis.
    It seems like there should be a process by which information gets passed from one function to another. And I also wonder how a particular function gets “called” in the first place. What causes a function to start running?
    It's also interesting to note that neither the Je nor the Ji processes really make a "judgement" per se (here I am using "judgement" to mean making a decision about what "should" be, so this would not include the act of defining things). In this sense, Ji doesn't seem involved in judgement at all and Je seems to use a previously made judgement ("order") but not to make a judgement itself. This makes me wonder where/how those judgements do get made. And in general, what relationship do these functions have to all the other things the cerebral cortex does?
    ETA: I hadn't read Alice's post before I posted mine. Now that I have, I want to second all of her questions 🙂

    #18919
    EpicEntity
    Participant
    • Type: SeTi
    • Development: l--l
    • Attitude: Directive

    Simply incredible... Pe squeezes the data sets for new objects while Pi re-formats the data set after each new object. (Probably wrong, but I'm going with it.)
    PS: Wish I had some negative feedback to give, but I am tired!

    #18926
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    So many great questions/replies! I'll try to address a few at a time.

    This is incredibly interesting, and has afforded me a much clearer understanding of the macrofunctions (Ji, Pe, etc.)! I am already beginning to see how behavior stems from metabolism: how Ji can fall infinitely into identity searching / defining, how Pi can be so dismissive of completely new concepts, how Je can come across as blunt and controlling, how Pe can become so easily distracted from what is right in front of it, etc.


    @alice
    This one isn't really a question but --yes you hit the nail on the head! Those are some of the emergent effects we see at macro-levels.
    With Ji, if (and I say "if" because not all Ji's do this) the focus of the compass() happens to turn towards one's own nature, then it becomes an obsession with fundamental/mono identity. But the same program can be put to use elsewhere, such as in finding the "one" static property/truth of the universe. The particular expression taken varies with other life factors and emotional dispositions.
    But the way we know these are tied together is not just thematically, but vultologically because all these variations stem from the same visual phenomenon:

    [Diagram: Ji vultology signals and their linked behavioral effects]

    So what we see in nature is a vultology (Ji, disengaging eyes, momentum halts, receding energy) that persistently correlates to an assortment of linked effects. I've added a few of those behaviors in the diagram ^ above. Now, if we try to define Ji as "identity" focus, we naturally get exceptions because that's not every Ji person's niche obsession. If we try to define Ji as "perfectionism," we get exceptions too because some individuals may not pick up an artistic medium. But the root metabolism describes all of these possible outcomes from a shared metabolic pathway.
    I know this hasn't answered any of your questions, and I'll get to them later since I have to finish writing the rest of this computational metaphor on the specific functions they each bifurcate into. I think those other questions will be addressed at that time as well. 🙂

    #18930
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    @supahprotist Thanks so much!
    I'm glad too, that this is finally coming together into a more precise cognitive architecture. I know there is still work to do, but this has been a very needed thing and I appreciate your feedback/questions too.

    The potential issue I see now is that the symmetry of the function programs is not constrained by the theory. There is a marked difference between the extraverted and introverted function programs, but there is no analogous difference between the rational and irrational function programs, nor between the conductor and reviser function programs.

    Pertaining to theoretical symmetries, yes they do have those symmetries you mentioned (E/I, J, static/dynamic) -- but not only those. There's more involved, which I'll get to below.

    Spoiler

    But while on the topic, I just wanna say a little about symmetries. Firstly, I'm a huge fan of dichotomous symmetry. But that's also why I see it as a very easy temptation to fall into assumptions about where such symmetries exist, and I try to avoid that as much as possible. Just as there are parts of human anatomy that are not symmetrical (e.g. the digestive system), there is no necessary reason to believe that everything about cognition would be symmetrical at every level.
    This is why I find vultology so valuable: the raw data tells you what is what, without needing to over-rely on logic alone the way I feel Socionics can. In the final analysis, the structure of the human mind may very well be complex and elegant, but also clunky, redundant, and imperfect in some of its circuitry. This has to be a possibility when investigating this, if we wanna find out the truth and not just a convenient coherence.
    This is not at all to deprecate coherence, but the notion that for every property function [x] has, function [y] has to have an identical mirror/reflection in place... is not something I want to just assume or take for granted. If it turns out that way, then wonderful! But it may or may not be like that, depending on what task the psyche needs to perform overall and what kind of coding it needs in order to accomplish its aim in a real-world setting.

    [collapse]

    Function Axes as Co-Dependent Operations (But not Perfect Mirrors)

    Which brings me to what I think is the proper definition of function axes. A function axis has a cohesive aim in mind, but it is divided into two roles in order to accomplish that aim from two equally necessary angles. It's important, metabolically, that the roles don't get in each other's way, in the same way that factory workers on an assembly line need to specialize their tasks and pass things on to each other, with neither doing the same job.
    Regarding the J axis
    The similarities in the J functions are, as you said, that they belong to the logosSystem and are focused on rationality. I do think that both J functions qualify as rational; I do think that one is introverted (objects[0] = subject-oriented) and the other is extroverted (objects[i] = object-oriented); and I do think that one is static and the other is dynamic in its analysis style (although in CT this comes from I and E respectively, and these are not the Socionics terms). But that in itself doesn't say much about how it works. These are properties they each have, but not themselves what they do.
    So there's a specific engineering reality at work here that goes beyond the labeling of processes under dichotomies (which are more like "features" imo). And that's the aforementioned operations in the code. Primarily:

    • Je - conceptual position + conceptual vector + desired order (The registration of conceptual position, conceptual vector, and the organization of those positions/vectors into order.)
    • Ji - object definition + desired mono (The examination of an object's properties, and the measurement of those properties against a standard of self-existent non-contingency.)

    ^ This is the core "engine" of the J axis. Collectively, the J axis' aim is to figure out "what is [x]" (via logos) and to mobilize that understanding ("how [x] works"). From there, sheer human survival and emotional disposition takes over and applies the J program to a broad assortment of human necessities. But the J axis does this by having one pole focus on the discrete definition of each object, and the other pole focusing on the precise interactions of objects.
    These are necessary and complementary. Without a discrete definition of objects (Ji), Je would not be able to form a successful understanding of object-interactions. Thus Je is inherently dependent on Ji: the more precisely something is defined by logos, the better it can be operationalized. A good example of this is law, where there is a need for very strong precision in definition/semantics, and the law (Je) can be effective only to the degree that it is clearly spelled out.
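    This dependency structure can be made concrete with a small, hypothetical sketch. The names (`ji_define`, `je_order`, `definitions`) and the law-flavored example are invented purely for illustration: the point is only that Je's ordering step is structurally blocked until Ji has supplied discrete definitions for the objects involved.

```python
# Hypothetical sketch of the J-axis 'engine': Ji supplies discrete object
# definitions; Je can only order interactions between already-defined objects.

definitions = {}  # Ji's output: object -> definition

def ji_define(obj, definition):
    """Ji: pin down what an object *is* (a discrete definition)."""
    definitions[obj] = definition

def je_order(interaction):
    """Je: organize an interaction between objects. Structurally dependent
    on Ji -- it refuses to mobilize anything Ji has not yet defined."""
    subject, verb, target = interaction
    for obj in (subject, target):
        if obj not in definitions:
            raise ValueError(f"undefined object {obj!r}: Je cannot mobilize it")
    return (f"{subject} ({definitions[subject]}) "
            f"{verb} {target} ({definitions[target]})")

ji_define("contract", "a legally binding agreement")
ji_define("party", "a person bound by an agreement")
print(je_order(("party", "signs", "contract")))
# party (a person bound by an agreement) signs contract (a legally binding agreement)
```

    Calling `je_order` on an object Ji hasn't defined raises an error, which mirrors the claim that the law can be effective only to the degree that its terms are spelled out.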
    Functions as Dichotomy-Sets? No...
    But going back to dichotomies for a sec, I do wish to emphasize that the functions are not synonymous with them. There are many more feature-dichotomies that could be extracted from these code/operations above. For example, in the metabolism articles I've described them before using emergent properties such as:

    • Analysis Style: Dynamic-vs-Static
    • Verb: Order-vs-Critique
    • Fear: Imperfection-vs-Failure
    • Reasoning: Causal-vs-Axiomatic

    These and more could stand alongside Intro/Extro, Rational/Irrational (etc.) as the dichotomies that define them, and it'd be hard to say which dichotomies are most important. But these are not themselves the "engine," if that makes sense?
    So the Ji-Je axis is more than any three dichotomies coming together into a conjunction. It's a mechanical operation which I believe can't be simplified down to any three facets, hence why this computational metaphor is necessary. Only in something like this can I manage to explain the core operation without treating functions as dichotomy-conjunctions.
    I don't think the truth of these processes can be captured that way. And I believe this is because the brain is an organ which works by achieving outcomes, and dichotomous symmetry (which is really evolutionary "redundancy") is one way it does so. But mental processes are not literally compounds of facets, any more than the Krebs cycle can be explained by dichotomies. That was a lot more long-winded than I wanted it to be, but I hope it made sense!
    Regarding the P axis
    With the P axis, there is also an equal interdependence at play, and a goal in mind. The expression "in one ear and out the other" captures what would happen to Pe if it didn't have a relationship to Pi. The infinite loop of data absorption (data.seekAdjacent = objects) and data discarding (data.sets.oldest.delete) would be meaningless if the data intake wasn't converted into something else, in this case into Pi "sets."
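    The Pe/Pi loop just described can be sketched as a bounded intake buffer feeding a consolidation step. Everything here is an invented illustration (the names `pe_explore`, `pi_consolidate`, the buffer size, and grouping by first letter are arbitrary assumptions): the point is that Pe's absorb-and-discard cycle only becomes meaningful because Pi drains the intake into durable sets.

```python
from collections import deque

table = deque(maxlen=4)  # Pe's intake buffer: oldest items drop off when full
sets = {}                # Pi's stored sets, keyed by a similarity feature

def pe_explore(obj):
    """Pe: absorb an adjacent object (data.seekAdjacent); the bounded deque
    plays the role of data.sets.oldest.delete."""
    table.append(obj)

def pi_consolidate():
    """Pi: drain the intake buffer into sets, grouping by first letter."""
    while table:
        obj = table.popleft()
        sets.setdefault(obj[0], []).append(obj)

for word in ["river", "rock", "sand", "reed", "reef"]:
    pe_explore(word)     # "river" is lost before Pi ever consolidates it

pi_consolidate()
print(sets)
# {'r': ['rock', 'reed', 'reef'], 's': ['sand']}
```

    Note that "river" went in one ear and out the other: it was absorbed by Pe but discarded before Pi converted it into a set, which is exactly the failure mode described above for Pe without Pi.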
    If you can imagine an assembly line: Pe is gathering shiny shells by the beach and putting them on a small table, and Pi is putting the shells into bags by their similarity. If Pi doesn't put them into bags, the table gets full and things soon start falling off. But actually, I do think you have a point about something:

    1) It seems that part of the worldview function (datasets) is referenced by the explorer function, but not vice versa. Should this be the case if the two functions are part of the same oscillation?

    Pi does relate to Pe in a way that may need better syntax in the code above. When Pi looks at an object and then recalls a set/matrix, that set actually enters consciousness and becomes part of the mind's "data" (and thus part of the environment/objects that Pe can observe). Memory recall brings objects into the information field. I will try to adjust the code to better reflect this, because it's important for the bi-directional relationship of Pi and Pe.
    In the shells metaphor, you can think of this as Pi grabbing shells out of a given bag and putting them on the table to see whether the new shell fits in the pile. From there, Pe can also examine the shells that Pi has recalled. If Pe is Ne, then Ne can compare the old shells to the new shells and say "y'know, I think these shells fit together better," thus creating new sets. But Ne couldn't do this if Pi (in this case Si) weren't recalling old sets and unpacking them for Ne to dabble with.
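    The bi-directional step in the shells metaphor can also be sketched. Again, the names (`pi_recall`, `ne_regroup`) and the sample data are hypothetical illustrations: Pi unpacks a stored bag back onto the shared table, and only then can Ne compare recalled shells against a new one to propose a new grouping.

```python
# Hypothetical sketch of the bi-directional Pi/Pe relationship: Pi recalls an
# old set into the shared 'table' (the information field), where Ne can
# compare old and new items and form a new set.

table = []  # shared information field visible to Pe
bags = {"spirals": ["conch", "whelk"], "flats": ["scallop"]}

def pi_recall(bag_name):
    """Pi: unpack a stored set back into the shared information field."""
    table.extend(bags[bag_name])

def ne_regroup(new_item, predicate):
    """Ne: compare a new object against recalled ones and propose a new set
    of everything that 'fits together' under some noticed similarity."""
    return [shell for shell in table + [new_item] if predicate(shell)]

pi_recall("spirals")
print(ne_regroup("nautilus", lambda s: "l" in s))
# ['whelk', 'nautilus']
```

    Without the `pi_recall` step the table would be empty, and `ne_regroup` would have nothing old to compare the new shell against, which is the dependency of Ne on Si described above.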
    But yes, to answer one of @fayest42 's questions along with this one...

    It seems like there should be a process by which information gets passed from one function to another.

    Indeed, and this is the apparatus. The Ji function "passes on" its effects to Je by providing object-definitions that help Je mobilize them. And Pe passes on data to Pi by providing objects that can be sorted into a matrix.
    I certainly need to explain this with more precision, but this is essentially the apparatus I mentioned above. Here is a very rough first draft of what it looks like:

    [Diagram: Cognitive Apparatus]
    Again, this is the "engine" (i.e. the Krebs cycle metaphor) that determines what role each function plays in the collective metabolism of information, in order to form a coherent picture of reality which, multiplied thousands of times over, creates macro-level effects.
    I'm not entirely happy with this diagram but I hope it provides a general answer to what I mean?

    #19287
    Supah Protist
    Participant
    • Type: SeTi
    • Development: ll-l
    • Attitude: Directive

    @auburn, I actually have more of a general question. How have you formulated these functional definitions and their computational metaphors from the empirical data?

© Copyright 2012-2021 Juan E. Sandoval - Use Policy