
Artificial Intelligence


The field of artificial intelligence must separate itself into two main disciplines: cognitive architecture, which deals with the high-level decisions of how to structure an intelligent machine capable of autonomy in the world, and cognitive engineering, which handles the low-level arrangement by which raw data and motor ability turn into intelligent activity.

The basic components of the former have been worked out, as well as the engineering of the latter.

The key to making AI is to understand the fundamental dynamics of information input and its assemblage into higher orders of form.

Information theory is key to understanding AI, as it deals with the quantification of knowledge: information is its base. This is a major insight for AI. Without it, you are left with semantics and meaning and can't get anywhere, because you have no tools with which to deconstruct them. As in computer science, syntax and semantics form a sharp boundary, akin to yin and yang. Consider this analogy: source code <-> syntax vs. machine code <-> semantics.

The AI researcher must ask and answer the following four questions (a sketch of these events in code follows the list):

  1. A datum enters the arena of your neural net. What event creates a new neuron?
  2. What event links neurons together or modifies this connection?
  3. What event groups neurons together to form a superneuron?
  4. How does energy/data get distributed to sub-neurons once allocated to a super-neuron ("clump") at initiation (the start of awareness), and how is this distinguished from data/energy arising from below (the senses)? A: the supernode isn't activated, only the lower neurons. What event initiates output? A: prior association.
XXX ^^^ This is not what I would want to write. Isn't there anyone who can fund a non-profit "hackerspace" for developing key ideas that would lead to new socio-economic value?
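As a minimal sketch of these four events, assuming a plain Python dict of neuron objects (every name here is hypothetical, not a settled design):

```python
class Neuron:
    def __init__(self, datum):
        self.datum = datum       # the raw input this neuron stands for
        self.links = {}          # neighbor -> connection strength
        self.members = {}        # sub-neurons, if this is a "clump"
        self.potential = 0.0     # current action potential
        self.threshold = 1.0     # firing level; grows as the neuron is worked

def on_new_datum(net, datum):
    """Q1: an unrecognized datum creates a new neuron."""
    if datum not in net:
        net[datum] = Neuron(datum)
    return net[datum]

def on_cooccurrence(a, b, delta=0.1):
    """Q2: co-occurring neurons get linked, or their link strengthened."""
    a.links[b] = a.links.get(b, 0.0) + delta
    b.links[a] = b.links.get(a, 0.0) + delta

def on_clump(net, members, key):
    """Q3: strongly linked neurons get grouped under a super-neuron."""
    clump = on_new_datum(net, key)
    for m in members:
        clump.members[m.datum] = m   # held by reference, not copied
    return clump

def on_allocate(clump, energy):
    """Q4: energy allocated to a clump is pushed down to its members;
    the supernode itself isn't activated, only the lower neurons."""
    share = energy / max(len(clump.members), 1)
    for m in clump.members.values():
        m.potential += share
```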

This builds the network. Then, to add another element:

  • Neurons hold the light of consciousness. How do these action potentials get encoded in the neural data structures?
  • Light propagates as a flow through the network, splitting and reforming in a complex dynamic as it runs around the graph. What event trips neurons into "firing"?
  • A neuron gets overloaded. What event trips motor output? (A sketch of these three dynamics follows this list.)
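A minimal sketch of these three dynamics, reusing the hypothetical Neuron class from the sketch above: potential encodes the "light", crossing the neuron's threshold fires it and splits its energy along outgoing links, and overload trips a motor callback instead. The overload level is a placeholder assumption.

```python
OVERLOAD_THRESHOLD = 3.0   # placeholder: potential that trips motor output

def fire(neuron, motor_out):
    """The 'light' is encoded as neuron.potential. Firing splits the
    energy across outgoing links; an overloaded neuron trips motor
    output instead, and the energy leaves the net as action."""
    if neuron.potential >= OVERLOAD_THRESHOLD:
        motor_out(neuron)
        neuron.potential = 0.0
    elif neuron.potential >= neuron.threshold:
        total = sum(neuron.links.values()) or 1.0
        for nbr, weight in neuron.links.items():
            nbr.potential += neuron.potential * (weight / total)
        neuron.potential = 0.0   # the energy has flowed onward
```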
Data structure: the fractal graph. But should one use a meta-class? No.
See also: eye hexagonal triplets --> clumped into larger regions (the fovea). Peripheral areas apparently are grouped separately.
  • Insight or "jump" neurons: something akin to digestive matches, which release energy. In a graph, a path from start to destination is found and a jump occurs. Once jumped, the energy in the "neuron" is pushed into the containing neural graph until expended, generating initiated thought. (This energy is just distributed equally? to all member neurons' action potentials.) Insights might require a larger, separate knowledge base (purchased by the user) of super-curated knowledge in order to generate them. (Sketched after this list.)
  • Regarding the action-potential variable: use the alignment tensor model from D&D: the prior neuron's level (action above 1.0) becomes the intensity of the next neurons. This is essentially the same dynamic as an "expectation match", except that it comes from below. In the expectation match, the extra potential becomes an inTENSity multiplier (tensor). (See the threshold sketch below.)
  • There are two systems: the knowledge database and the neural net. These two form the yin and yang of the system. When expectation matches occur, a mini-understanding has occurred. When these understandings reach linguistic levels, they can be (are?) stored in the knowledge base. In humans, a swallow reflex often accompanies this.
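A minimal sketch of the jump dynamic from the first bullet above, reusing the hypothetical Neuron structure: a found path from start to destination triggers the jump, and the jump neuron's energy is then distributed equally to its members' action potentials until expended.

```python
from collections import deque

def find_path(start, goal):
    """Breadth-first search over the link graph; a found path is the
    condition for an insight 'jump'."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] is goal:
            return path
        for nbr in path[-1].links:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

def jump(jump_neuron, start, goal):
    """On a found path, push the jump neuron's energy equally into the
    containing graph's members, generating initiated thought."""
    path = find_path(start, goal)
    if path and jump_neuron.members:
        share = jump_neuron.potential / len(jump_neuron.members)
        for m in jump_neuron.members.values():
            m.potential += share
        jump_neuron.potential = 0.0   # expended
    return path
```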
Regarding item #4: thresholds are related to the limit of storage potentials. As neurons get worked, these storage potentials get larger, so the trip threshold rises with use.
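A minimal sketch of the intensity and threshold ideas above, under two assumptions: potential above the firing level carries over as an intensity multiplier (the "tensor") on the next neurons, and a worked neuron's trip threshold grows.

```python
def fire_with_intensity(neuron, growth=1.05):
    """Variant of fire(): the surplus above the firing level becomes an
    intensity multiplier on the next neurons, as in an expectation
    match, and each firing raises the trip threshold slightly."""
    if neuron.potential >= neuron.threshold:
        intensity = neuron.potential / neuron.threshold   # e.g. 1.5 -> 1.5x
        for nbr, weight in neuron.links.items():
            nbr.potential += weight * intensity
        neuron.potential = 0.0
        neuron.threshold *= growth   # worked neurons store more before tripping
```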
  • Self-initiating androids: a task list or queue of unfinished items pushes the crown node to trigger thought in one direction or another. (Sketched after this list.)
  • Input <-> output are symmetrical. Ultimately you can make a system that turns your conclusions into outputs that humans don't have, like video screens.
  • Two nets: a probabilistic one for data acquisition, and one flow net for action potentials and hierarchy. The data-acquisition layer collects from senses (and lower layers?), propagates beliefs laterally, and jumps up(?) when active. The hierarchical layers cross through this without learning, only collecting the energy from below and collating it upwards. (Sketched after this list.)
  • A variable function that sums the inputs and provides the output dynamic, dependent on current state: avoidance or ready-for-interaction. Avoidance comes from a new data potential (like setting a command [at] to fetch something) that cannot be deconstructed (fired out) by the existing net to create a report back to the owner. The function may get set to a lower firing-potential base. (Included in the sketch after this list.)
  • The hierarchical flow net is discrete and flows(?) at intervals relating to neural firing, or "Action". A higher-level fractal "cone" ... However, it doesn't propagate ...
  • The super-neuron or "clump" data structure should hold the lower neurons "inside", which means pointers (in C) to the lower neurons. In Python, a dict holds the lower neurons by "reference" (the hashable keycode or character, etc.), as in the on_clump sketch above.
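A minimal sketch of the self-initiation bullet above, assuming a hypothetical crown node holding a FIFO of unfinished tasks and reusing on_allocate from the first sketch:

```python
from collections import deque

class Crown:
    """Hypothetical top node: unfinished items push it to initiate thought."""
    def __init__(self):
        self.tasks = deque()   # queue of unfinished items

    def add_task(self, clump, energy=1.0):
        self.tasks.append((clump, energy))

    def tick(self):
        """Each cycle, one unfinished task triggers thought in a given
        direction by allocating energy down into the relevant clump."""
        if self.tasks:
            clump, energy = self.tasks.popleft()
            on_allocate(clump, energy)   # from the first sketch above
```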
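And a minimal sketch of the two-net split and the state-dependent output function; the division into acquire / collate_up / output_dynamic is an assumed reading of these notes, not a settled design.

```python
def acquire(datum, beliefs, lateral, rate=0.1):
    """Probabilistic data-acquisition net: strengthen belief in a datum
    and propagate a share of it laterally to associated data."""
    beliefs[datum] = beliefs.get(datum, 0.0) + rate
    for other in lateral.get(datum, []):
        beliefs[other] = beliefs.get(other, 0.0) + rate / 2

def collate_up(clump):
    """Hierarchical flow net: no learning here, just collect the energy
    from the member neurons and carry it upward."""
    energy = sum(m.potential for m in clump.members.values())
    for m in clump.members.values():
        m.potential = 0.0
    return energy

def output_dynamic(inputs, state):
    """State-dependent summing function: a lower firing-potential base
    in avoidance, a higher one when ready for interaction."""
    base = 0.5 if state == "avoidance" else 1.0
    return sum(inputs) >= base
```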