This IN idea (aggregated chains of data) seems to make sense to me. Also, this discussion has helped me better grasp what's meant by Ni being 'isomorphic.'
In addition to the thematic data-chains, I’d also tend to assume that IN is involved in things like thinking in analogies, cross-contextualization & induction. I’ve thought of it as forming associations between longer-term datasets that have some similar features or exude similar impressions, no? Also, its assumptions are what tend to fill in the sparse or missing parts of my worldview tapestry (sometimes without my even realizing it).
I’ve noticed that I’ve had a tendency to generalize or infer broader patterns, sometimes based on the features of something of which I've only actually seen a few instances. I'm more careful about this than I was when I was younger, but here are some generalized examples of the kind of thing I mean:
I'd assume a lot of this is IN, rather than purely Ne, because it relates to information I think I know over a longer period of time, rather than moment-to-moment assumptions based on the real-time absorption of new data.
As much as Si has a scholarly reputation, and as scholarly as many Si-leads I know are (even myself to an extent), Si's still a compressed process, as I understand it, so it’s easier for me to make use of the information I already know (even if I have to stretch it a bit) than to go digging around externally for new information. I guess I would assume this way of inducing & inferring is an IN sort of thing.
@hrafn - that's a very interesting potential addition to the IN page! I'm still surveying people to try to figure out what core features Ne+Si working together produces, and I think you may be onto something here. More on that below, but first~
"Also this discussion has helped me better grasp what's meant by Ni being 'isomorphic.'"
Yes, here's a diagram that could represent this difference:
Imagine that the circles are Si datapoints, and the lines are Ne associations. They are short-span associations (because Ne is real-time). On the left side you see a big ball, forming an Ne-Si modular worldview, which is very anecdotal and contextual. In general, Ne-Si worldviews look very modular and anecdotal ("unique") because the data isn't following isomorphisms, and so there's a historical imprint on them.
Now, coming out of that modular worldview we see "a series of modular Ne associations, such that they chain together into a longer and persistent array." Essentially, Ne can micro-stitch together a series of Si anecdotes, such that they are continuous and form a wider narrative. In this way, an association chain runs across long sets of data, but it isn't Ni. Aside from retaining a faithfulness to the historicity or the evolutionary process of the thought-chain, it abides by no particular structure, and so it retains that infinite modularity that's central to Si-Ne.
To get more nitty-gritty on the details of what I mean and how this isn't isomorphic, we can use a few examples. Here's a re-rendering of the above diagram with associative detail:
These are the different "threads" that are running through the association-chains. At the base (left) of the thread we see two original patterns: a green line and a blue line. Two things "in common" held these datapoints together. Now, as we go right, the green pattern kind of runs out of steam; it no longer applies, but the blue line keeps going. In other words, the blue line's pattern is still holding, so it keeps going. There are a few bottlenecks there, where the datasets are only being held together by one associative factor.
Then we enter a cluster, and the blue line meets the yellow line. Yellow and blue connect the dots for a while going southward. However, at some point the blue line also runs out of steam and in the end, the yellow line keeps going southward on its own. Then of course eventually the yellow line runs out of datasets and reaches speculation land.
So there are a few things I wanna point out. One is that the structure of the association-chain changed several times during the course of this trip. Associations entered and left, so that it did not retain an isomorphic object across its structure. Even though the chain is one chain, it is not made of the same material throughout. The material is swapped out several times. Even though the thread started out green and blue, by the end it's yellow, and the original pattern isn't even there anymore; but as a "whole" it remains a connected chain of information.
To put this into a tangible example, if someone asked us, "What is New York City like?", we could start going through details, beginning with the trees, the roads, then the restaurants, the shops, the people, and all of it falls under the broader association chain of "New York City"; but as we discuss these details, we're switching threads several times. The trees may not be directly connected to the people, or the roads. But in one way or another, you can make your way to each data point by switching buses, so to speak. This is the modular information approach of Ne-Si in action.
And there may even be situations in which the blue line does go all the way out to the end of our current knowledge, yet it too will end at some point. Ne-Si chains are each independently finite, but they're long enough that, when combined with Si data, they can collectively form an infinitely long data structure (so long as we recognize the frame-switching that goes on between things).
After all, wool yarn is made out of independent hairs, which each cease at some point, while collectively making a continuous object. The same applies to IN.
Having said all that, I'm excited to talk about this bit:
"Halfway there, I realize I'm not actually sure where it is. Instead of stopping & looking up directions, I'll just follow a vague hunch and see if I can stumble across it that way."
I know this experience, heh. I see this as a form of Ne-Si brainstorming. As I've mentioned elsewhere, when Ne-Si users reach the end of their knowledge, Ne is in charge. But since Ne is in charge, in a way it "could be anything." So Deltas and Alphas have a tendency to be optimistic at the fringes of knowledge. (Or, I suppose, paranoid if "it could be anything" is taken to mean all possible scary threats are also possible). In any case, when there is some Si to work with, not just Ne, then there is some constraint for the speculation to navigate under, while still remaining unspecific and still requiring Ne trial-and-error and troubleshooting. So it's a bit like "directed" or educated guesswork. A way to visualize this would be:
^ So let's say we have ample datasets around, but a gap in the middle. The fact that we have Si anchors around us constrains the number of Ne possibilities that could fill the gap, and this manifests as a hunch.
Ultimately, Ne will try to guess the missing data, using a combination of free imagination (Ne) and precedent (Si) stitched together in the local context. I think something like IN Gap-Filling could be added to the IN page, once I have a better handle on how it works in more Ne-Si users. Anyone else out there have experiences to add? 🙂 Curious to know others' thoughts too.
Disclaimer: An analogous effect exists for Ni-Se users, since they also have limits to their knowledge, and they also speculate. But they do so by taking recourse to the cross-penetrating Ni trendlines they know of, and their guesswork is less freely-experimental; instead, it anticipates the known Ni symbols recurring inside this space. But I'll get to that in a different post.
Since I've been soft-typed SiTe by @Auburn and others on Discord, I thought I'd throw in my $0.02.
When reading about how IN fills in gaps in its worldview tapestry, my very first thought was, "what gaps?" My worldview doesn't feel gappy, and it seems that by definition, it shouldn't be, if a worldview is supposed to be a comprehensive view of everything. Instead of gaps, I'd say there are, as with eyesight, blurry areas on the periphery that contrast with more focused areas of interest. In fact, the blurry areas probably account for the vast majority of the total view. It could be that IN is an accurate description of how my mind "fills in" the blurry areas, but it's not something I consciously concern myself with; it just happens.
"Tapestry" is also not a metaphor I'd use for a worldview, though having said that, it's an extremely interesting one. Tennyson's poem "The Lady of Shalott" is a staple of high school English literature curricula (or at least it was when I was kid--you never know what's been cancelled), so I'd imagine that many here have read it. If you haven't, or want to refresh your memory, then I recommend Loreena McKennitt's rendition: https://www.youtube.com/watch?v=80-kp6RDl94 There are of course countless compelling interpretations of the poem, but when I was in high school, I wrote an essay from a Jungian typological perspective (as was my habit in almost all of my classes, much to the annoyance of many teachers) arguing that it's a tragic allegory of Introverted Perception and its associated fear of living life directly, as symbolized by the lady in the tower viewing the outside world from a mirror and weaving images of what she sees. I also argued that Tennyson himself was Si dominant (SiFe, to be exact). This interpretation still makes a lot of sense to me (as does the typing, going by his photos--he looks very Si!), especially now in light of Auburn's characterizing of Pi leads as weaving worldview tapestries.
Maybe tapestry really is a good metaphor for the nature of my worldview and I'm just not aware of it, but to me it's more like a synthesis among the worldviews of others. In principle, the most accurate worldview would take into account all worldviews without prejudice, recognizing that each one is a valid perspective on at least some aspect of reality, including both areas of focus and "gaps", or as I would prefer to say, blurry areas in the peripheries. But again, I'm not conscious of the worldview synthesizing--I just see the effects of other worldviews that I've absorbed into my own.
I suppose if I'm describing my own worldview as a synthesis of other worldviews, then that raises the question of what those other worldviews consist of; logically, they can't all be defined like mine!
Perhaps the kind of "datasets" from which one's worldview emerges is relevant in determining whether it evolves through gap-filling or the "synthesis" I've tried to describe (or maybe I'm just splitting hairs). I seem to recall that @Hrafn is an anthropologist or historian, and maybe gap-filling makes more sense with worldviews derived directly from history. While history does interest me, I think the greatest influence on my worldview is fiction.
Also, in practice, there are probably some worldviews that I would reject from any synthesis. Nazism comes to mind. But the rejected ones would probably be those that have not proven timeless in their influence across diverse groups of people, or that lack any aesthetic appeal. A worldview doesn't have to be old, though, if it's obvious that it will be timeless, such as one conveyed through an exceptionally great new work of art.
However, I do appreciate understanding and synthesizing worldviews that are radically different from mine. This would seem to go against the view that a Pi type's worldview is pretty much established early in life and undergoes little more than minor tweaks and gap-filling for the remainder of life. So maybe what I'm describing as my worldview isn't one (e.g., it might be more of an Fi "palate"), or maybe I'm not even a Pi-lead. I'm not attached to the idea of being one, and Auburn is certainly welcome to change his mind if subsequent evidence suggests another type for me.
I like your way of describing your Pi @lapis-lazuli. I’m not sure I see a big difference between what I called gaps and what you called very blurry areas. If I think of my worldview as an unbroken whole with some out-of-focus areas, I would still tend to think of IN as bootstrapping off of adjacent in-focus areas to provide a sort of artificial, speculative clarity as needed.
I’ve sometimes tended to think of Si as islands of clear datasets within seas of impressionistic assumptions. While the former are largely formed through conscious learning & experience, the latter seem largely subconsciously formed, as lapis lazuli suggested. The more islands there are within a given area, the clearer the overall picture; fewer islands mean that the picture is more fuzzy.
If the sea is small enough and there are enough islands—or if it’s similar enough to something I’ve seen elsewhere—I can sometimes fill in blurry areas with pretty solid assumptions about the details. This often seems to be a subconscious process, but I certainly also engage in conscious guesswork of this sort.
On the other hand, when I was younger I used to get accused of being a bullshitter, especially by my younger brother. I think this came from casually trying to infer too much from too little information: "sounds like it could be true based on what I know, so I’m gonna talk about it as though it’s true."
By the way, I also appreciate Auburn's wool-yarn analogy, among the other visuals in that post; it all makes good sense for how I experience IN.
Running a bit with the wool-yarn thing, here’s how I’d see the difference between the more-grounded IN and the more untethered Ne. If there were a hole in the knitting, Ne might fill it in with whatever kind of thread, pattern, or texture struck its fancy, even if this were incongruous with the surrounding fabric. Its approach might have a fantastic quality, like the sea serpents lurking at the blank edges of an old nautical chart. By contrast, IN would look at the fabric around the edges of the hole and try to fill it in with something congruous—something that continued the pattern from the nearest points from which definite detail was known.
But this would still differ from Ni, which I’d imagine would patch up a hole using a single endless thread and patterning it from one or two themes that extend throughout the entire tapestry.