Computation & the Mono Function


  • #30230
    Discord
    Participant
    • Type: Unknown
    • Development:
    • Attitude: Unknown
    This thread was imported from the CT Discord server because it was considered valuable to future discussions. If one of your messages is here and you'd like it removed, just message @ Auburn and it will be removed.

    "is Model 2 called "CT Model 2" or "CTA"?"

    Auburn | TiNe
    Either works, but Model 2 may be more all-encompassing, as it envelops more than just the CTA. The CTA is the computational dimension of Model 2, while Model 2 has four aspects in general:
    - Computation (i.e. cognitive architectures),
    - Behaviorism (i.e. psychometrics),
    - Vultology (i.e. CTVC)
    - Psychodynamics (i.e. analytical psychology)
    Auburn | TiNe
    (yikes! i have to update the wiki with the new data and papers)
    ♃4x0rphx₿!c
    have you looked into or thought about the properties of the 'symbols' that minds operate on?
    or is this out of the scope of CT?
    i think it is a big topic
    but for example it might be necessary to explain what the mono function does on an object
    ♃4x0rphx₿!c
    to define a computational model on an operational-semantics level, there needs to be a specification of 1- the objects or symbols minds operate on, and 2- the transformation rules, aka CT functions or subfunctions
    i skimmed through the docs of the wolfram language 'mathematica' recently, which is a good place to start since it's a symbolic language from the ground up, composed of symbols and transformation rules on those symbols. but i couldn't find a useful philosophical explanation of the design or what's behind the design decisions lol. only documentation for how to use the language
    i haven't dug deep enough though. i will let you know once i do
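
To make the "symbols plus transformation rules" framing concrete, here is a minimal, hypothetical sketch in Python. The names (Sym, Rule, rewrite) and the example rule are invented for illustration only and are not part of any CT or Wolfram Language specification:

```python
# Hypothetical sketch of a "symbols + transformation rules" system,
# loosely in the spirit of term rewriting. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class Sym:
    """A symbolic expression: a head plus zero or more argument expressions."""
    head: str
    args: Tuple["Sym", ...] = ()

@dataclass(frozen=True)
class Rule:
    """A transformation rule: rewrites any expression whose head matches."""
    match_head: str
    transform: Callable[[Sym], Sym]

def rewrite(expr: Sym, rules: list) -> Sym:
    """Rewrite the arguments first, then apply the first matching rule."""
    expr = Sym(expr.head, tuple(rewrite(a, rules) for a in expr.args))
    for rule in rules:
        if rule.match_head == expr.head:
            return rule.transform(expr)
    return expr

# One toy rule: Double(x) -> Plus(x, x)
rules = [Rule("Double", lambda e: Sym("Plus", (e.args[0], e.args[0])))]
print(rewrite(Sym("Double", (Sym("3"),)), rules))
# Sym(head='Plus', args=(Sym(head='3', args=()), Sym(head='3', args=())))
```
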
    Auburn | TiNe
    ouu, so regarding languages, ultimately the functions should be described in a language-agnostic manner, but i currently have a preference for python as a starting point from which to experiment. 😃 and TensorFlow.
    im so excited to dig back into computation! 💻 although it's a complicated domain to learn, and i don't have all the answers mapped out yet, i actually find it the most straightforward overall - since code is so explicit and precise. i have no doubt it can be done, it's just a matter of doing it.

    Right now this is beyond my education, but for instance, here we have an object detection algorithm

    ^ the "mono" function would be similar to this, since it is "definitional processing", and is responsible for describing the boundaries of an object, what qualifies as that object's properties, what doesn't, and when it meets the threshold of being or not being said object. "Mono" is the function that handles these discrete functionalities and taxonomize objects in our consciousness. The "how" of how it does that, is something I will get more clarity on by looking at existing works for insights -- and perhaps with some help from programmers.
    ♃4x0rphx₿!c
    i will need to think more about this as well

    "definitional processing", and is responsible for describing the boundaries of an object, what qualifies as that object's properties, what doesn't, and when it meets the threshold of being or not being said object.

    that sounded like what an S function would do or what safi called 'ontologically exacting' XD
    also since strictly defining what something is breaks the double meaning in what intuition is supposed to do

    #30231
    Auburn
    Keymaster
    • Type: TiNe
    • Development: l--l
    • Attitude: Adaptive

    Heya! 🙂

    I wanted to paste this here to continue the discussion.

    So to reiterate, the mono function is responsible for describing the boundaries of an object, and does so by deciding:

    • what qualifies as that object's properties
    • what doesn't
    • and when it meets the threshold of being or not being said object.

    But one thing I need to say is that this uses "ideal" boundaries, not necessarily literal boundaries. That means the boundary-setting is not synonymous with S (discrete) data. Data can be discretely packed into bits, but the Mono boundary can vary independently of that discrete package.

    Discrete data (S) is "irrationally" perceived; in other words, it's non-ideal. It follows the natural contours of information. To illustrate this more tangibly, let's take this example:

    ^ Say an archeologist digs this up. The "S" (discrete) operation would execute a discrete object parsing on the natural information before us, and "isolate" out the skull here (without calling it a skull) from the surrounding mud, in a non-linguistic, non-ideal manner. It would recognize that "this is one thing", and in a sense it has its own self-evident ontology that needs no semantic (ideal) overlay.

    But Ji, in the ideal domain, may not know what it is, and therefore have no ideal category for it. So this archeologist simultaneously knows what it is and doesn't know what it is. She knows what it is, as a discrete entity, due to its self-evident reality, but she does not know how to identify it (taxonomize it). It doesn't have a "label" or "flag" on it, as I've heard it used in self-driving car programs.

    Also, this flag-setting can vary independently of the literal data. One month the amateur archeologist may wrongly identify it as a deer, and after some more education, next month the archeologist may identify it as a gazelle. The labels vary. So, taxonomizing is different from the process of organic data chunking, as happens through the S process.
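
Sketched in (hypothetical) Python, the deer-to-gazelle revision amounts to a label layer changing while the underlying chunk of data stays exactly the same:

```python
# Hypothetical sketch: the label/"flag" layer can be revised while the
# discrete chunk it points to stays the same.
raw_chunk = ("bone", 31.5, 12.0)   # the dug-up object's raw data; never changes

labels = {}                        # the ideal/taxonomic layer lives separately
labels[raw_chunk] = "deer"         # month 1: the amateur's identification
labels[raw_chunk] = "gazelle"      # month 2: revised after more education

print(raw_chunk)                   # same discrete data
print(labels[raw_chunk])           # 'gazelle' -- only the label moved
```
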

    This independence from the information itself also highlights the subjectivity and introversion of the Mono process. In some ways, monistic processing is platonic/ideal, in that the criteria are self-made. Most people default to setting their criteria somewhat snugly around natural object boundaries, but not always. I believe they are two different operations (a rough code sketch follows the list):

    • 1) Discrete object detection: reliant on graphical/audible contours and natural breaks (e.g. edges, pauses, etc.)
      • result: an array of discrete, unlabeled informational objects, which break from each other due to natural limits
    • 2) Semantic categorization of objects.
      • result: a taxonomy or labeling system, which applies boundaries to objects based on a personal (varying) metric.
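
Here is the rough sketch referred to above, in Python and purely hypothetical: stage 1 only splits a stream at natural breaks and returns unlabeled chunks, while stage 2 applies a personal, revisable metric, so the same chunks can receive different labels without stage 1 changing at all. The gap-splitting rule and the loose/strict metrics are invented for illustration:

```python
# Hypothetical sketch of the two operations as separate stages.
def detect_objects(signal, gap=0):
    """Stage 1: break a stream at natural pauses; output unlabeled chunks."""
    chunks, current = [], []
    for value in signal:
        if value == gap:                  # a natural break in the stream
            if current:
                chunks.append(tuple(current))
                current = []
        else:
            current.append(value)
    if current:
        chunks.append(tuple(current))
    return chunks                         # discrete, but still nameless

def categorize(chunks, criteria):
    """Stage 2: apply a personal, revisable labeling metric to each chunk."""
    return {chunk: criteria(chunk) for chunk in chunks}

signal = [3, 4, 3, 0, 0, 9, 8, 9, 0, 2]
objects = detect_objects(signal)          # [(3, 4, 3), (9, 8, 9), (2,)]

# Two observers draw different ideal boundaries over the same chunks.
loose  = lambda c: "big" if sum(c) > 5 else "small"
strict = lambda c: "big" if sum(c) > 20 else "small"
print(categorize(objects, loose))   # {(3, 4, 3): 'big', (9, 8, 9): 'big', (2,): 'small'}
print(categorize(objects, strict))  # {(3, 4, 3): 'small', (9, 8, 9): 'big', (2,): 'small'}
```

The only point of the toy is that the chunking and the labeling are separable computations, which is the distinction being drawn above.
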

    I hope that makes sense. It's a subtle difference at this level, but it actually makes a huge difference at the larger scale when the consequences of this are multiplied exponentially.

    • This reply was modified 1 week, 3 days ago by Auburn.