A Computational Metaphor


  • Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Hello,

    This is a thread about information metabolism, but also an attempt to explain it in a more useful way than it has been explained before, by making use of a computational metaphor. Although CT is a model of information processing, I’ve resisted making any absolute claims about the structure of the equations that comprise the functions.

    In part this has been because I find it very easy to fall into the trap of over-systematizing things from an impulse for systemic coherence, but without evidence. It’s been my experience that metabolic descriptions are the most susceptible to variability in interpretation if not done correctly. What follows is my attempt to achieve a robust description of cognitive processing while hopefully avoiding all of the problems that come with this endeavor. This is going to be a little dense, but I hope it’s comprehensible.

    Computation

    As I have mentioned before, I believe the functions are content-less at core (void of opinion/belief/topics) and run at the millisecond level, much like computer code, to produce effects which become evident to us only after they’ve magnified to recognizable scales. I’ll be describing the four energetic processes first, using a fictional coding language as a metaphor.

    (Please pardon the syntax errors, as I’m not a programmer by trade, and I am trying to convey an essential idea by it. I hope my explanations can clarify what I mean by each line, even if the syntax doesn’t reflect that. But for any programmers out there, help with the syntax would also be most appreciated. )

    • This topic was modified 7 months, 4 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

     

    // Pe
    
    function explorer() {
    
    		open_perceptionSystem;
    			load_perceptionSystem.scan;
    			load_perceptionSystem.data;
    
    		perceptionSystem.scan(data.objects);
    
    		foreach perceptionSystem.scan(objects[i]) {
    			if (objects[i] != data.sets) {
    				data.sets.appendObject(objects[i]);
    				data.seekAdjacent(objects[i]);
    			} else {
    				ignore(objects[i]);
    			}
    			if (perceptionSystem.objects > maxCapacity) {
    				delete(data.sets.oldest);
    			}
    		}
    }
    
    
    

     

    To understand the above, I’m going to go through it line by line, beginning with open_perceptionSystem;. It is my understanding that evolution is a conservative enterprise (see: Neural Recycling Hypothesis): if at all possible, it will reuse existing circuits by modifying them rather than create new ones from scratch. Therefore the function explorer() first loads the same perceptionSystem that is used by other parts of the brain. This is similar to calling up a “library,” for those into programming. The perceptionSystem “library” contains many built-in functions which are utilized throughout this code.

    The second line of this code is load_perceptionSystem.scan;, which activates visual search. Visual search is well studied; it is what the eyes do when they search the environment (such as when reading a paper or looking at the features of something). This module is loaded because the mental operation we want to perform (i.e. “conceptual exploration”) is already largely encoded within visual search. Therefore, due to the aforementioned neural recycling hypothesis, “.scan” is loaded even though we are not actually going to explore the environment. It is a vestigial effect of reusing the same operations for different reasons. And it is “.scan” that causes the vultological secondary effects, namely eye-toggles, that we read in order to identify that the explorer process is activated.

    Thirdly, along with the scan function, we also load load_perceptionSystem.data;, which gives us the content to observe and scan. The word “data” here requires detailed explanation.

    “Data” is information. But it has already undergone processing from other psychic systems before it gets to the cognitive processes in question. One example of this is evident in the effects of optical illusions which change the properties of what we see. This happens so rapidly that our very experience of our visual feed is already adjusted before we know we’re registering the image. In other words, we don’t ever really “see” the unmodified feed from the optic nerve in our mind’s eye. This also definitively puts Ne and Se in different categories than “the senses.” Ne and Se are cognitive processes that handle the exploration of information / “data.” Data is presented in front of them both from the visual system.

    Moving forward, in the next line, perceptionSystem.scan(data.objects);, we see “.scan” directed toward “data.objects.” And here we need clarification on what an object is. While for most other animals an object is very closely allied to the physical analog in question, in the human mind this process is eventually represented as a “mental object.” My main source for how this happens is Ray Kurzweil and his theory of mind, in which he says that humans build hierarchical structures in our neurons when it comes to representation. In brief, we all begin by representing (“quantizing”) the world as objects in a way that is very closely allied to physical analogs. However, as we become more complex, so do our objects, while never becoming fundamentally different metabolically. Eventually things like “round”, “pretty”, “money”, “husband”, and “police officer” become conceptual objects that we manage. This is not a feature of any one cognitive process, but an emergent feature of the amount of cortical processing we have and how many stacked layers of conceptualization we can perform.
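    The “stacked layers of conceptualization” idea above can be sketched in runnable form. This is my own illustration, not part of CT’s model: the dict structure and the example names are invented for the example.

```python
# A mental object is modeled here as a name plus the sub-objects that
# compose it; its conceptual "height" is how many layers sit beneath it.
# (Structure and names are my own invention for illustration.)

def make_object(name, parts=()):
    """A mental object: a name plus the sub-objects that compose it."""
    return {"name": name, "parts": list(parts)}

def depth(obj):
    """How many layers of conceptualization sit beneath this object."""
    if not obj["parts"]:
        return 0
    return 1 + max(depth(p) for p in obj["parts"])

# "police officer" as a conceptual object built from lower-level objects:
badge = make_object("badge")
uniform = make_object("uniform", [badge])
officer = make_object("police officer", [uniform, make_object("person")])

print(depth(badge))    # → 0, close to a physical analog
print(depth(officer))  # → 2, a stacked conceptual object
```

    The point is only that higher-level objects are built out of lower-level ones by the same mechanism, which is what lets the same metabolic operations run on both.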

    The entirety of our cognitive processing is mental and thus conceptual. All eight functions are conceptual in their metabolism. However, if there’s a close alliance in the data to a physical analog, then the visual search (“.scan”) happens on the physical objects themselves. And if there is no physical analog to the mental object, then the visual search activates nonetheless, but it searches the mind’s information field for mental objects—thus creating eye-toggles.

    foreach perceptionSystem.scan(objects[i]) {
    
    		if (objects[i] != data.sets) {
    
    			data.sets.appendObject(objects[i]);
    
    			data.seekAdjacent(objects[i]);
    
    		}

     

    The next few lines have to be viewed together, as they comprise a “for” loop. This means that an array of objects passes through the same operation and the operation is run on each one. An organic way to read this code is as follows: For every (mental) object scanned, if the object is not already in the data sets, add it to the sets. And if you do so, seek adjacent objects to that one.

    The “sets” here refer to Pi, which I’ll explain later; for now you can consider them an archive. Essentially, “.scan” is scanning the information field looking for what has not yet been cataloged into sets. If an object has not been cataloged, it is indexed (data.sets.appendObject(objects[i])) and another round is run for surrounding objects. In other words, the program suspects that if there was one non-cataloged object, there might be more nearby. So the identification of one non-cataloged object activates .seekAdjacent, which is also responsible for the toggling. The toggling stops once no new looping is triggered by the identification of new objects.

    Notice how this entire function explorer() is literally exploring non-cataloged objects. And this brings us to the next line of code: else { ignore(objects[i]); }. What this means is that if an object is already within “sets” then it is skipped over, or “ignored.” The explorer function has no use for that which has already been explored, and this ignoring is also what allows it to seek out new information. This ignoring feature, like all other lines in this code, ends up having macro-level effects which we’ll talk about later.

    Finally we get to the last bit of code:

    if (perceptionSystem.objects > maxCapacity) {
    
    		delete(data.sets.oldest);
    
    }

    When read organically, what this says is that if the number of mental objects being handled exceeds the perceptionSystem’s capacity, then the oldest sets (Pi) are deleted to make room for new objects. This is a necessary part of the symmetry in the code, because an open-ended loop of data-gathering cannot run without also having a means to make room for what it gathers. The human mind is not infinite, and so the function explorer() requires the tossing-out of the old to make way for the new. Once again this has macro-level effects which we’ll get to later.

    The word “delete” here is not literal: the “sets” that are discarded are unloaded from consciousness, and thus fall into the unconscious. Some may still be retrievable, but not necessarily.

    And that’s the end of the function explorer(). Nothing that isn’t in the above code is part of the explorer function itself. There are no other features that are fundamental to the explorer function, and so everything else is emergent effects. The emergent effects, and how they emerge, will be discussed further down.
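    For readers who want something executable, here is a rough translation of explorer() into Python. It is a sketch under my own assumptions, not canonical code: the adjacency map, the capacity of 4, and the FIFO eviction are stand-ins for perceptionSystem internals the metaphor leaves unspecified.

```python
# A non-canonical Python sketch of explorer(): scan objects, catalog the
# un-cataloged ones, seek their neighbors, evict the oldest when over
# capacity. (Adjacency map, capacity, and FIFO eviction are assumptions.)
from collections import OrderedDict

MAX_CAPACITY = 4  # stand-in for perceptionSystem's maxCapacity

def explorer(scanned, sets, adjacency):
    """sets: ordered catalog of objects; adjacency: object -> neighbors."""
    queue = list(scanned)
    while queue:
        obj = queue.pop(0)
        if obj not in sets:                        # objects[i] != data.sets
            sets[obj] = True                       # data.sets.appendObject(...)
            queue.extend(adjacency.get(obj, []))   # data.seekAdjacent(...)
        # else: ignore(objects[i]) -- already cataloged, skip
        while len(sets) > MAX_CAPACITY:            # objects > maxCapacity
            sets.popitem(last=False)               # delete(data.sets.oldest)
    return sets

adjacency = {"a": ["b"], "b": ["c", "d"], "d": ["e"]}
catalog = explorer(["a"], OrderedDict(), adjacency)
print(list(catalog))  # → ['b', 'c', 'd', 'e']
```

    Note how a single un-cataloged object (“a”) cascades through its neighbors, and how the oldest entry is then evicted once capacity is exceeded, mirroring delete(data.sets.oldest).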

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    • This reply was modified 7 months, 3 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    // Ji
    
    function compass() {
    
    		open_logosSystem;
    			load_logosSystem.define;
    			load_logosSystem.mono;
    
    		if (data.objects[0].define != mono) {
    
    			logosSystem.define(objects[0].properties);
    
    			if (objects[0].properties != mono) {
    
    				logosSystem.define(objects[0].properties.properties);
    			}
    		} 
    
    		else if (data.objects[0].define == mono) {
    
    			ignore(objects[0]);
    
    		}
    }
    

     

    As with Pe we begin this function by calling up a library (open_logosSystem). “Logos” in this case is the left-brained proclivity, according to Iain McGilchrist, to make distinctions between things, and to quantize information. This has been happening for as long as we’ve had a nervous system and we needed to differentiate one substance from another, to know how to act. When we look at a physical object, it’s logos that allows us to tell it apart from its environment. This is the task that current A.I. computer vision is trying to perfect by being able to recognize objects within natural settings.

    So logosSystem is a system of information differentiation that is called up. It has sub-operations within it already, which work to set the boundaries around an object. Again, this happens so rapidly that we don’t even notice it. It’s very fast and is shared by other animals. For example, when a dog sees you and recognizes that you are you, they’re using this logosSystem. This same logosSystem is what is being called up here. But humans use this system in new and different ways, as we’ll see.

    Additionally, the activation of the logosSystem causes rigidity of the body as a vestigial effect. I can only speculate as to why this is. If the logosSystem evolved in an evolutionary setting so that animals could identify what-is [Ji], and then how to act [Je] according to what-is, then it follows that the entire logosSystem is deliberate. This means that no action is random. The fluidity of the body is restricted so that there is either a restraint of movement or very intentional movement, because the purpose of the logosSystem is to define reality precisely and to move precisely in it.

    Coming back around, the next line of code is load_logosSystem.define, which loads the operation that puts boundaries around mental objects. However, it needs a criterion to do so. The next line of code (load_logosSystem.mono) is responsible for providing that criterion. “Mono” is a method of definition that is singular, essentially measuring how self-defined (non-contingent) an object is. For example, a keyboard has imperfections in “mono” because it could be argued to be part of a larger object called “computer.” A finger has low mono too, because it belongs to a larger object, the hand, which itself has lower mono than the body as a whole. Inversely, Platonic Forms have very high mono, and are self-existent/self-defined without contingency.

    The next two lines can be analyzed together:

    if (data.objects[0].define != mono) {
    
    		logosSystem.define(objects[0].properties);

     

    First we notice that unlike Pe, which handled “objects[i]”, here we see “objects[0]”. The [0] refers to a singular object, not an array of objects. In other words, the operation here is performed on one object only. This singular objects[0] can also be called the “subject,” as that is how it is treated by the operation.

    So what these two lines say is that if the given object’s boundaries are not perfectly mono/self-defined, then the compass() moves its “.define” operation down a level, to examine the object’s properties.

    Now, as we discussed, mental objects are layers of conceptualization: mental objects are themselves made up of smaller mental objects (i.e. “properties”). The word “properties” here refers to the sub-objects that make up a larger mental object. The aim here is to find the layer at which there is mono. Which leads to the next line:

    if (objects[0].properties != mono) {
    
    		logosSystem.define(objects[0].properties.properties);
    
    }

    If you notice, this is the same code as the above, except it is now directed toward an object’s properties’ properties. If no mono is found at the level of an object’s properties, then those properties are investigated too. This can create a potentially infinite loop, until a satisfactory level of mono is found. And this also has macro-level effects which we’ll discuss later.

    Also, from a vultological perspective, it’s worth noting that since this operation is not a “for each” loop over an array of objects, but an investigation into one object’s properties and sub-properties, the person’s mental attention sinks deeper into a hole with every second, causing disengagement from the array of objects in the environment. What could have been ten processing loops carried across ten objects becomes ten operations carried out on the same object and its sub-properties. During this time the body remains frozen in rigidity. And this is what causes the effects of Ji introversion.

    else if (data.objects[0].define == mono) {
    
    		ignore(objects[0]);

     

    Lastly, if an object’s mono is true, then it is ignored. The operation compass() is focused only on identifying what isn’t mono, and it keeps applying logos boundaries to objects until mono is found.

    And that’s the extent of the compass() function. Nothing else is fundamentally part of the process but is emergent from it.
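    As a hedged sketch, the compass() descent can be written as a recursive search. The dict shape and the boolean “mono” flag here are my own stand-ins for whatever the real self-definedness criterion would be; they are not part of the model.

```python
# A non-canonical sketch of compass(): if an object is not self-defined
# ("mono"), descend into its properties, then their properties, until a
# mono layer is found. (Dict shape and 'mono' flag are assumptions.)

def compass(obj):
    """Return the chain of property names descended through to reach a
    mono (self-defined) layer; [] means the object itself is mono and is
    ignored; None means no mono layer was found (the loop would go on)."""
    if obj["mono"]:
        return []                    # else if (define == mono): ignore
    for prop in obj["properties"]:   # define(objects[0].properties)
        sub = compass(prop)          # ...then properties.properties, etc.
        if sub is not None:
            return [prop["name"]] + sub
    return None

form = {"name": "triangle-form", "mono": True, "properties": []}
drawing = {"name": "a drawn triangle", "mono": False, "properties": [form]}
print(compass(drawing))  # → ['triangle-form']: mono found one layer down
print(compass(form))     # → []: already mono, so it is ignored
```

    The tunneling effect described above corresponds to the recursion here: each non-mono layer pushes the search one level deeper into a single object rather than across many objects.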

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    • This reply was modified 7 months, 3 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    // Je
    
    function articulator() {
    
    		open_logosSystem;
    			load_logosSystem.objects.positions;
    			load_logosSystem.objects.vectors;
    			load_logosSystem.order;
    
    				ready_logosSystem.motorSystem;
    				ready_logosSystem.languageSystem;
    
    		foreach (objects[i].positions|vectors) {
    			if (objects[i].positions|vectors != order) {
    
    				order(objects[i].positions|vectors);
    					pass.order(motorSystem);
    					pass.order(languageSystem);
    			}
    
    			else if (objects[i].positions|vectors == order) {
    
    				ignore(objects[i]);
    			}
    		}
    }
    

     

    With Je, we again begin by calling open_logosSystem;, but this time instead of loading “.define” we load the operations load_logosSystem.objects.positions; and load_logosSystem.objects.vectors;. What we want to know is an object’s position and its vector. Notice that the definition of the object itself is not the concern, but where it is and where it is going.

    But it’s also important to remember that this is a mental object, not necessarily a physical analog, which means a mental object’s “position” may not be a 3D-space coordinate, and its “vector” may not be a 3D-space vector. For example, an object’s position might be the vice-president standing right next to the president. This is a conceptual “position,” whether or not the two are physically adjacent in real life.

    As for “vector,” at higher levels of conceptualization, the same thing occurs. For example, a pencil’s vector is its production of ideas in writing. Here an object’s vector is what the object “does,” its verb or functionality.

    The next line of code is load_logosSystem.order;, and here “.order” is the analog to Ji’s “.mono” as far as providing a criterion for measuring. This is because, even if we know where an object is and what its vector/verb is, we don’t know how it should be positioned. This criterion is provided by “.order”, based on a notion of what the proper arrangement is. I realize the word “should” carries a value judgment in it, which will be discussed further down as we differentiate into function axes. But for Je by itself, we treat the operation without knowing what the nature of this specific order is.

    ready_logosSystem.motorSystem;
    ready_logosSystem.languageSystem;

    The next two lines “ready” other systems: the motor system and the language system. For anyone interested in why this is, you can read about the bi-directional hypothesis of language and action. Essentially, the motor systems and the linguistic centers are tied to the conceptual registration of vectors/“verbs.” There is a direct link, measurable in fMRI scans, between the comprehension of verbs (both abstract and concrete) and motor systems. In other words, when objects.vectors loads, that mental object’s registration in the mind also “readies” these two other systems of the body. Even if no action is literally performed in the world, the motor system still shows brain activity. And by this we know there is a tie between articulation, motor movement and language. This is responsible for the vultological effects of Je.

    foreach (objects[i].positions|vectors) {
    
    		if (objects[i].positions|vectors != order) {
    
    			order(objects[i].positions|vectors);

    Now we get to the root operation. We can translate the above by saying that for each object, if that object is not positioned in “order” relative to other objects, then it is moved into order. This ordering applies both to its position and its verb/vector. Before anything else manifests in the world, we have to remember that this is conceptual order. The entire operation is happening in a conceptual space. However, this leads to the next two lines:

    pass.order(motorSystem);
    pass.order(languageSystem);

    Once the Je function has determined, cognitively, what the situation is and what the “answer” is for how to order things, it passes that information along to the motorSystem and languageSystem. Whether or not the motor and language systems end up executing on that information is outside of this process’s scope.

    In brief, the articulator() function determines how to mentally order objects in relation to each other, given their positions and vectors, and passes that information along to other systems which act upon it. But articulator() by itself is just the determiner, or judge, of what that order is.

    else if (objects[i].positions|vectors == order) {
    
    		ignore(objects[i]);

    Lastly, we have the above two lines. And just as before, if the objects are seen as already being in order, then they are ignored. The articulator() process is only focused on things which are not in order. This has macro-level effects which will be discussed later but this is the entirety of the articulator() operation. Everything else about it is emergent from it.
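    A minimal runnable sketch of articulator() follows. The in_order and reorder callables are hypothetical stand-ins for the “.order” criterion, which, as noted above, the metaphor deliberately leaves unspecified at this stage.

```python
# A non-canonical sketch of articulator(): for each object, if it is out
# of order, compute the ordered form and pass it to the motor and language
# systems; in-order objects are ignored. (in_order/reorder are stand-ins.)

def articulator(objects, in_order, reorder, motor_queue, language_queue):
    """in_order(obj) -> bool; reorder(obj) -> corrected object."""
    for obj in objects:                    # foreach (objects[i]...)
        if not in_order(obj):              # positions|vectors != order
            ordered = reorder(obj)         # order(objects[i]...)
            motor_queue.append(ordered)    # pass.order(motorSystem)
            language_queue.append(ordered) # pass.order(languageSystem)
        # else: ignore(objects[i]) -- already in order

motor, language = [], []
books = [{"name": "A", "position": 2, "target": 0},
         {"name": "B", "position": 1, "target": 1}]
articulator(books,
            in_order=lambda o: o["position"] == o["target"],
            reorder=lambda o: {**o, "position": o["target"]},
            motor_queue=motor, language_queue=language)
print([o["name"] for o in motor])  # → ['A']: only the out-of-order object
```

    Whether the motor and language “queues” ever execute is outside the function’s scope, exactly as described above: articulator() only judges the order and passes it along.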

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    • This reply was modified 7 months, 3 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    // Pi
    
    function worldview() {
    
    		open_perceptionSystem;
    			load_perceptionSystem.sets;
    			load_perceptionSystem.recall;
    
    			if (objects[0] ≈≈ recall.sets[i]) {
    
    				sets[i].appendObject(objects[0]);
    				sets[i].seekAdjacent(sets);
    
    				if (adjacent.sets ≈≈ recall.sets[i]) {
    					sets[i].adjacent.appendObject(sets);
    					sets[i].adjacent.seekAdjacent(sets);
    				}
    
    			} else {
    				ignore(objects[0]);
    			}
    
    }
    

     

    With the function worldview() we again begin by opening the perceptionSystem; however, what we load here is load_perceptionSystem.sets;, which requires some explanation. The word “sets” in this code refers to an array (or set) of objects tied together, as if in a mathematical matrix. These are what I have called “datasets” in the book and model at various times. We will get into what determines how sets are formed when we get to Si and Ni differences. For now, what matters is that sets are not identical to objects, but are best thought of as matrices of objects.

    The next line we see is load_perceptionSystem.recall; which is an operation that allows us to recall or search these sets/matrices. And this is what I have previously called the “librarian” as opposed to the library. The worldview() function is responsible for pulling up information, but the information itself is part of the broader perceptionSystem.

    We then see this “recall” operation put into effect in the next lines:

    if (objects[0] ≈≈ recall.sets[i]) {
    
    		sets[i].appendObject(objects[0]);

    What these lines translate to is: if the current object approximates (≈≈) an object in an existing set, then append that object to the set. In other words, if an object is seen as belonging to a given matrix, then it is integrated (“appended”) into it, causing the matrix itself to grow. Notice how, as with Ji, we have “objects[0]” rather than “objects[i]”; this is because we are only examining one object and seeing whether it relates to any sets we have seen before. But that’s not all that happens:

    sets[i].seekAdjacent(sets);

    This next line of code says that, in the case that there is a match and an object is added to an existing set/matrix, adjacent sets are also called into view. This is in direct contrast to what we saw of Pe (data.seekAdjacent(objects[i])), which sought adjacent objects. Both Pe and Pi are seeking information, but Pe seeks adjacent objects within “data”, while Pi seeks adjacent sets within memory via recall. In other words, the worldview() program suspects that if there was an object missing from one set, the same object (“objects[0]”) may also be missing from other sets. Which leads to:

    if (adjacent.sets ≈≈ recall.sets[i]) {
    
    		sets[i].adjacent.appendObject(sets);
    		sets[i].adjacent.seekAdjacent(sets);
    
    }

    So now the operation (recall) is run on the adjacent sets, and if there is a match, the object is appended to those sets too. In essence, this operation wishes to insert the newfound object into as many sets as it applies to. Notice, however, that just as with Ji there is a tunneling effect, where the operation digs deeper into a single loop.

    If one object recalls a set, and that set recalls another set (and so on, ad infinitum) then we begin to see where worldview rambling comes from. All of these sets are loaded into consciousness, causing the mind to be populated by sets in the present moment.

    Additionally, as with Ji, the processing loop takes away time that might otherwise be used to observe new objects, causing the body to remain still in the outer world – which is introversion.

    However, as this happens, the loading of the perceptionSystem and “.recall” causes vestigial effects on the eyes in the form of fixed gazes and searching scowls. The eyes don’t disengage because they need to be engaged in order for “.recall” to work, as “.recall” is part of the perceptionSystem, which needs to be activated.

    } else {
    
    		ignore(objects[0]);
    
    }

    Lastly, we see the final snippet of code. This essentially says that if the object doesn’t have any sets that it compares to, the worldview() function ignores it. New information with no comparison to other sets is not something the worldview() function can handle, but this is exactly what the explorer() function handles.

    The explorer() function would investigate the new object, as well as adjacent objects, and from that a set can eventually be made. However, if worldview() is presented with just one object which it has no reference for, it neglects it. This has macro-level effects which we’ll discuss later.

    This is the entirety of the operation of the worldview() function. Every other common attribute of the process is emergent from this equation.
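    The matching-and-propagation loop above can be sketched as follows. The feature-overlap test is my own stand-in for the “≈≈” approximation operator, and the sets/adjacency data are invented for illustration.

```python
# A non-canonical sketch of worldview(): if a new object approximately
# matches a recalled set, append it there, then check adjacent sets and
# append it to those that also match; unmatched objects are ignored.

def worldview(obj, sets, adjacent):
    """sets: name -> set of features; adjacent: name -> neighboring sets.
    Returns the names of the sets the object was appended to;
    an empty result corresponds to else { ignore(objects[0]); }."""
    def approx(features, name):                   # stand-in for '≈≈'
        return bool(features & sets[name])
    appended, seen = [], set()
    queue = [name for name in sets if approx(obj, name)]
    while queue:                                  # recall.sets[i] ...
        name = queue.pop(0)
        if name in seen:
            continue
        seen.add(name)
        if approx(obj, name):
            sets[name] |= obj                     # sets[i].appendObject(...)
            appended.append(name)
            queue.extend(adjacent.get(name, []))  # seekAdjacent(sets)
    return appended

sets = {"birds": {"wings", "beak"}, "planes": {"wings", "jet"}, "fish": {"fins"}}
adjacent = {"birds": ["planes"], "planes": ["fish"]}
matched = worldview({"wings", "feathers"}, sets, adjacent)
print(matched)  # → ['birds', 'planes']
```

    Note how one object tunnels through adjacent sets (birds, then planes), growing each matrix it matches, while the set with no overlap (fish) is left untouched; contrast this with explorer(), which would have investigated the unmatched object instead.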

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    • This reply was modified 7 months, 3 weeks ago by Auburn.
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Apparatus

    And that’s all. Although there may be some refining to do in this computational metaphor, nothing that isn’t listed above belongs to the four functions. These four functions above, when put together, are what we might call the apparatus of consciousness. The apparatus passes along information across functions the same way a program would, in order to achieve a net result. A function cannot be taken outside of the apparatus and still operate, any more than an organ can be expected to operate out of the body. And each function, by its essence, is content-less and strictly metabolic.

    Emotional Effects

    Notice that this is compatible with the current omission of emotional attitudes from functions. We’ll talk more about how the apparatus relates to emotions down below, but in general the emotional register exists alongside the apparatus and when the apparatus runs/processes, the emotional register triggers. However, it would be a mistake to ally any emotion to the metabolism of the functions themselves.

    (So, examples of this might be Pe and enthusiasm or excitement, Pi and worry or paranoia, or Je and aggression/assertiveness. Metabolically, Pi is not a worrying function and Pe is not an excitement function. Excitement may be a common emotional side-effect of searching for non-cataloged objects, but it is not a necessary one. None of these emotional responses are necessarily tied to the functions.)

    (I’ll have to cut myself short here for now, as this is a lot to post!)

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    Supah Protist
    Participant
    • Type: SeTi
    • Development: ll-l
    • Attitude: Directive

    Congratulations on the first draft of your cognitive architecture!

    Observations:

    1) It seems that part of the worldview function (datasets) is referenced by the explorer function, but not vice versa. Should this be the case if the two functions are part of the same oscillation?

    2) It seems that in the compass function, objects are investigated by going deeper into the sub-properties of the object. However, in the example you gave, it seemed that mono was reached by zooming out from the object as opposed to zooming into it. How is mono reached by zooming in on less and less mono?

    3) For the most part, each function seems to stand on its own without a strong symmetric relationship to the other functions. However, most typology systems posit symmetrical properties that make up the functions. For example, in socionics, three dichotomies make up the functions you described; namely, static/dynamic, introverted/extraverted, and rational/irrational. So in a cognitive architecture for socionics, the Je function/program would be dynamic, rational and extraverted. The extraverted/introverted dichotomy seems present in the programs; my question is whether additional dichotomies, such as rational and irrational, are intended to be apparent in the construction of the code. The potential issue I see now is that the symmetry of the function programs is not constrained by the theory. There is a marked difference between the extraverted and introverted function programs, but there is not an analogous difference between the rational and irrational function programs, nor the conductor and reviser function programs. I guess there is an explicit mention of open_perceptionSystem and open_logosSystem; however, there is no structural aspect of the code that encapsulates this difference as far as I can tell.

    Nice work!

    CandyDealer
    Participant
    • Type: NeFi
    • Development: l-l-
    • Attitude: Unseelie

    Hi, I just wanted to say that the pseudo-code you have used to describe the functions contains a lot of syntax incoherence and makes no sense, which brings me to wonder why you use code to explain yourself when you clearly seem to have no clue how it works. That said, it makes me sincerely admire it, since I have no idea how someone finds it easier to express oneself in a language that one does not grasp.

    hackphobia
    Participant
    • Type: FiSe
    • Development: lll-
    • Attitude: Unseelie

    I love this. I think mono is my new favorite word now, and I understand Ji at a deeper level.

    I also wanna make a little adjustment; I can’t help myself.

    // Ji

    function compass(object) {

        open.logosSystem;
        load.logosSystem.define;
        load.logosSystem.mono;

        if (object.define != mono) {

            object.properties.define;

            if (object.properties != mono) {

                compass(object.properties);
            }
        }

        else if (object.define == mono) {

            object.ignore;

        }

    }

    Just in case the object's properties aren't defined as mono, you can call compass again on its sub-properties and create a recursive tree of reductions, until you define all the sub-properties or get your brain fried.

    recursive calls
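    As a sanity check, here is the same recursive reduction in runnable JavaScript. This is a sketch only: isMono and the path-collecting are my own assumptions, with "mono" modeled as any primitive (irreducible) value.

```javascript
// "Mono" modeled as a primitive: anything that can't be reduced further.
function isMono(value) {
  return value === null || typeof value !== "object";
}

// Recursively reduce an object until only mono (primitive) leaves remain,
// collecting each leaf definition along the way. Note: a cyclic object
// would recurse forever here -- the "brain fried" case.
function compass(object, path = []) {
  if (isMono(object)) {
    return [{ path: path.join("."), value: object }];
  }
  let definitions = [];
  for (const [key, subProperty] of Object.entries(object)) {
    definitions = definitions.concat(compass(subProperty, [...path, key]));
  }
  return definitions;
}

// Example: reduce a nested "object" to its irreducible sub-properties.
const result = compass({ shape: { sides: 3, regular: true }, name: "triangle" });
console.log(result);
// e.g. [ { path: "shape.sides", value: 3 },
//        { path: "shape.regular", value: true },
//        { path: "name", value: "triangle" } ]
```

    Each recursive call descends one property level, so the tree of reductions bottoms out exactly when every branch hits a mono leaf.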

    • This reply was modified 7 months, 4 weeks ago by hackphobia.
    Bera
    Moderator
    • Type: SeFi
    • Development: ll--
    • Attitude: Seelie

    @CandyDealer nice to see you on the forum after so much time.

    Do you have any observations about the metabolism itself? 🙂

     

    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Hey @candydealer ! Nice to see you around.

    Oh yes, there are definitely syntax errors; hence "fictional coding language." But even so, I may be lacking systemic coherence, so I would be most grateful if you could help me sort some of it out?

    As for why I chose to express myself in a computational metaphor, it’s because I believe computer algorithms come the closest to describing the way cognition operates. I just lack the expertise in coding to express myself properly, but I do think the medium is well suited to describing cognitive processing. I hope that the essence of my imperfect metaphor can somehow shine through, and maybe some of the members here can help refine it?

     

    • This reply was modified 7 months, 4 weeks ago by Auburn.
    Alice
    Participant
    • Type: FiSe
    • Development: ll--
    • Attitude: Unseelie

    This is incredibly interesting, and has afforded me a much clearer understanding of the macrofunctions (Ji, Pe, etc.)! I am already beginning to see how behavior stems from metabolism: how Ji can fall infinitely into identity searching / defining, how Pi can be so dismissive of completely new concepts, how Je can come across as blunt and controlling, how Pe can become so easily distracted from what is right in front of them, etc. This whole CT thing has given me a lot of understanding of why people act the way they act, which is hugely comforting and enlightening.

    A few questions though, which I assume you will touch on later:

    • How do the specific functions metabolize information? For example, if we are comparing Ti to Fi, is the same metabolism just processing different kinds of information, or is the metabolism completely different?
    • How does metabolism differ for conscious and unconscious functions? I assume that we are metabolizing with all four of our functions at the same time – is the difference that we are aware of the conscious functions and unaware of the unconscious? This has behavioral ramifications that I am extremely interested in.
    • Why do the more specific functions exist? It seems like the macrofunctions handle pretty much all the information we need to process. Why would they split into functions that handle smaller subsets of data? That seems less advantageous.
    • Which brings me to my final question: Why do the functions split in the way that they do? Why does Fi imply Te, for example, and why are these considered opposites? Do the two form a metabolic whole when the data processed by both is put together?

    Thank you very much for this new and cool description of metabolism! It’s really got me going with a lot of different questions and speculations!

    fayest42
    Participant
    • Type: FiNe
    • Development: ll--
    • Attitude: Unseelie

    First of all, I just want to say that it is really cool and satisfying to see an attempt to get to the real core of the functions and figure out what they truly are. I wonder if someday we will be able to use something like fMRI or EEG to really “see” the functions.

    One question I have is about how the axes work in this model. As it’s written, it looks as though each function acts independently rather than on an axis.

    It seems like there should be a process by which information gets passed from one function to another. And I also wonder how a particular function gets “called” in the first place. What causes a function to start running?

    It’s also interesting to note that neither the Je nor the Ji processes really make a “judgement” per se (here I am using “judgement” to mean making a decision about what “should” be, so this would not include the act of defining things). In this sense, Ji doesn’t seem involved in judgement at all and Je seems to use a previously made judgement (“order”) but not to make a judgement itself. This makes me wonder where/how those judgements do get made. And in general, what relationship do these functions have to all the other things the cerebral cortex does?

    ETA: I hadn’t read Alice’s post before I posted mine. Now that I have, I want to second all of her questions 🙂

    • This reply was modified 7 months, 4 weeks ago by fayest42.
    EpicEntity
    Participant
    • Type: SeTi
    • Development: l--l
    • Attitude: Directive

    Simply incredible… Pe squeezes the data sets for new objects while Pi re-formats the data set after each new object. (Probably wrong, but I’m going with it.)

    PS: Wish I had negative feedback but I am tired!

    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    So many great questions/replies! I’ll try to address a few at a time.

    This is incredibly interesting, and has afforded me a much clearer understanding of the macrofunctions (Ji, Pe, etc.)! I am already beginning to see how behavior stems from metabolism: how Ji can fall infinitely into identity searching / defining, how Pi can be so dismissive of completely new concepts, how Je can come across as blunt and controlling, how Pe can become so easily distracted from what is right in front of them, etc.


    @alice
    This one isn’t really a question, but yes, you hit the nail on the head! Those are some of the emergent effects we see at macro levels.

    With Ji, if (and I say “if” because not all Ji’s do this) the focus of the compass() happens to turn toward one’s own nature, then it becomes an obsession with fundamental/mono identity. But the same program can be put to use elsewhere, such as in finding the “one” static property/truth of the universe. The particular expression taken varies with other life factors and emotional dispositions.

    But the way we know these are tied together is not just thematic but vultological, because all these variations stem from the same visual phenomenon:

    Spoiler: [diagram omitted: the Ji vultology signature and its linked behavioral effects]

    So what we see in nature is a vultology (Ji, disengaging eyes, momentum halts, receding energy) that persistently correlates to an assortment of linked effects. I’ve added a few of those behaviors in the diagram ^ above. Now, if we try to define Ji as “identity” focus, we naturally get exceptions because that’s not every Ji person’s niche obsession. If we try to define Ji as “perfectionism,” we get exceptions too because some individuals may not pick up an artistic medium. But the root metabolism describes all of these possible outcomes from a shared metabolic pathway.

    I know this hasn’t answered any of your questions, and I’ll get to them later since I have to finish writing the rest of this computational metaphor on the specific functions they each bifurcate into. I think those other questions will be addressed at that time as well. 🙂


© Copyright 2012-2020 J.E. Sandoval