
Artificial Intelligence

Mark Janssen edited this page Mar 24, 2019 · 42 revisions

The key to making AI is to understand the fundamental dynamics of information input and its assemblage into higher orders of form.

Information Theory is key to understanding AI, as it deals with the quantification of knowledge; this quantification is the major breakthrough for AI. Without it, you are left with semantics and meaning, and you can never get anywhere.

The AI researcher must ask and answer the following four questions:

  1. A datum enters the arena of your neural net. What event creates a new neuron?
  2. Two data arrive closely linked in time. What event links neurons together or modifies this connection?
  3. Data repeats itself. What event groups neurons together to form a superneuron?
  4. Action values reach past a threshold. What event initiates output?
This builds the network. Then, to add the interactive element:
  • Neurons hold the light of consciousness. How do these action potentials get encoded in the neural data structures?
  • Light propagates as a flow through the network, splitting and reforming in a complex dynamic as it runs around the graph. What event trips neurons into "firing"?
  • A neuron gets overloaded. What event trips motor output?
Data structure: the fractal graph. But should one use a meta-class? No.
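The four build events and the firing questions above can be sketched as a toy event-driven graph. Everything here (the class, the counter-based "potential," the threshold value) is a hypothetical stand-in for illustration, not a claimed implementation:

```python
from collections import defaultdict

class Net:
    """Toy sketch of the four events; all names are hypothetical."""
    def __init__(self, threshold=3):
        self.nodes = {}                 # label -> stored potential (count)
        self.links = defaultdict(int)   # (label, label) -> link strength
        self.clumps = set()             # "superneurons" formed from pairs
        self.outputs = []               # record of motor-output events
        self.threshold = threshold
        self.last = None                # previously observed datum

    def observe(self, datum):
        # 1. A new datum creates a new neuron.
        if datum not in self.nodes:
            self.nodes[datum] = 0
        # Repetition raises the neuron's stored potential.
        self.nodes[datum] += 1
        # 2. Two data arriving close in time link their neurons.
        if self.last is not None and self.last != datum:
            key = tuple(sorted((self.last, datum)))
            self.links[key] += 1
            # 3. Repeated co-occurrence clumps the pair into a superneuron.
            if self.links[key] >= self.threshold:
                self.clumps.add(key)
        # 4. A stored potential past the threshold trips output and resets.
        if self.nodes[datum] >= self.threshold:
            self.outputs.append(datum)
            self.nodes[datum] = 0
        self.last = datum

net = Net(threshold=3)
for d in ["A", "B", "A", "B", "A"]:
    net.observe(d)
# net.outputs == ["A"]; ("A", "B") has clumped into a superneuron
```

The alternating stream links "A" and "B" strongly enough to clump them, while "A" alone repeats often enough to trip an output.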
LAWS of AI STUB:
  1. An AI is minimally composed of an input source, a processor, an output source, and a teacher. This is just the foundation on which to build. If you're missing an input source, you can't learn anything. If you don't have a processing layer, you can't correlate data. If you don't have an output layer, the teacher can't see what's going wrong, nor can the AI feed back what has been learned. If you don't have a teacher, the system won't correct itself when things go wrong.
  2. Any kind of data is a potential input source. But in order to use dull sources (sources with a lot of "noise"), you'll need a secondary input source to sort them out. Otherwise, use "sharp" input sources (a directional microphone pointed at a speaker rather than an omnidirectional microphone for the whole room).
  • Maximize data and minimize noise from your input source. Rather than a bare photo-sensor, put a lens in front of it and shield everything else, so it gets directional data and not spurious light sources. Use two microphones to learn an acoustic environment, or one unidirectional one for interfacing only with a designated speaker. Et cetera. The more data you process, the more potential knowledge can be gathered, if you can process it as fast as it enters. If you can't, use the file system and simulate time by recording the data first. Sound, vision, olfaction: these are time-tested input sources which still yield huge knowledge, but consider financial data, light-polarity data, thermal data, radio data. Data that hardly any being has organized has the potential to yield huge new insights.
  1. Decompose input sources into the minimal yet most consistent data stream that you can. Don't assume the normal data acquisition is adequate; you might need a motor-like focus or a servo to get directionality, etc. Take, for example, the rise and fall of normal audio data (simulating an eardrum). The eye remains an amazing example of a data source, as it takes what would be chaotic data and turns it into color and direction. Most attempts to preprocess data leave out some critical data that might be unknown to the teacher's conscious mind, yet is used subconsciously. The strength of the rise and fall can then be the energy amount in the neuron. However, your processing may not be fast enough for this, and you may need to decompose audio signals into frequency spectra instead. This is more recognizable for speech processing than the phase data in a standard audio feed (although use both if you want to perceive or create spatial data). Or, for stock market data, use buy times and price rather than price alone. Phased audio data is useful for stereo audio, where something like wave interference then provides the highest amount of data. GPS data, accelerometers, polarity....
  2. AIs need at least 2 dimensions in which to build a neural network. It could be audio intensity + time, as a normal audio file generally contains (a rise-and-fall audio data source automatically encodes this time data in the input source), or something else, but without two dimensions to your data there is nothing to learn at all. It is generally convenient to have your AI exist in the same time dimension as the teacher; it will also probably be necessary, because the processing will occur in the same time dimension. Biological processors are able to process in other time dimensions (called chirality). Also, if you have clean input sources (where causal forces are distinct and separate) which don't cross over into each other, your AI will learn faster. So you can either add an extra data source so it can learn to separate signals (like a visual "lip reading" signal for a cocktail-party audio feed), or wait until it learns enough to categorize the noisy data.
  3. If your AI is learning too slowly, add more independent data sources. If it is processing too slowly, add more processors. For example, you could add another video source to give it binocular vision so that it can learn not only depth data but what an object is. You could add three orthogonal directions of video feed to have it learn when an object is approaching and avoid a hazard, etc. Your input sources must be able to be processed; otherwise you might lose so much data that the neural network can't correlate the causal relationships. You either need more preprocessing hardware or to upgrade your NN machine.
  4. Next, get the maximum amount of correlated data (one causal step; the AI has to figure out the others). For example, all visual data is correlated by the fact of an ordered universe. Data arriving at the same time become labeled chunks (tagged with location, perhaps, or time-stamped) in an ordered sequence.
  5. Assign a label to each node created, for example "440Hz" for an audio input. If you use a secondary input (for example, a visual wavelet form for an audio one), you can label it with that.
  6. Repeat in fractal fashion. For the next layer, "time" is now slower.
  7. When any two nodes fire at the same time, create a new node with an appropriate label.
  8. A second process can go through all the labels and look for patterns where there is an error of some kind (or is a teacher better here?). For example, a clump-node "spelled" wrong because of particularities of how the data was gathered can be corrected, cleaning up a whole tree of nodes based on a suboptimal ordering.
  9. For vision, the light at any pixel fades but turns into "food" which gets sensed in a different fashion, leaving change as the primary element. That means there are two dimensions to visual data: fixed light sources and moving ones (deltas between neurons).
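Law 1's minimal composition (input source, processor, output, teacher) can be sketched as a feedback loop. The mapping-table "processor" and the lambda "teacher" below are hypothetical placeholders, not a proposed architecture:

```python
def run(inputs, teacher, steps):
    """Toy input -> processor -> output -> teacher feedback loop."""
    memory = {}                          # processor: learned input -> output
    for _ in range(steps):
        for x in inputs:                 # input source feeds data in
            y = memory.get(x, "?")       # processor produces an output
            correct = teacher(x)         # teacher inspects the output channel
            if y != correct:             # feedback corrects the system
                memory[x] = correct
    return memory

# Hypothetical task: the teacher wants each input echoed in uppercase.
learned = run(["red", "green"], teacher=lambda x: x.upper(), steps=2)
# learned == {"red": "RED", "green": "GREEN"}
```

Remove any one of the four parts and the loop fails exactly as law 1 predicts: no inputs means nothing to learn, no memory means nothing correlates, no output means the teacher has nothing to judge, and no teacher means errors never get corrected.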
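The eardrum-style decomposition in item 1, encoding audio as its rise and fall, can be sketched with sample-to-sample deltas. The synthetic tone and sample rate are assumptions for illustration only:

```python
import math

def rise_fall(samples):
    """Encode a signal as its sample-to-sample deltas: the 'rise and fall',
    a toy stand-in for the eardrum-like decomposition described above."""
    return [b - a for a, b in zip(samples, samples[1:])]

# A hypothetical input source: a sine tone sampled 8 times per cycle.
tone = [math.sin(2 * math.pi * t / 8) for t in range(16)]
deltas = rise_fall(tone)

# The deltas keep the stream minimal yet consistent: the original signal
# can be rebuilt from them up to its first sample.
rebuilt = [tone[0]]
for d in deltas:
    rebuilt.append(rebuilt[-1] + d)
assert all(abs(a - b) < 1e-9 for a, b in zip(tone, rebuilt))
```

The magnitude of each delta is then a candidate for the "energy amount in the neuron" that the item mentions; a frequency-spectrum decomposition would replace `rise_fall` when per-sample processing is too slow.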
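Item 9's split of visual data into fixed and moving elements can be sketched as a frame delta. The 1-D "frames" are a hypothetical simplification of real pixel grids:

```python
def frame_delta(prev, cur):
    """Split a 1-D 'frame' into fixed pixels and changing ones (deltas),
    echoing item 9's two dimensions of visual data."""
    fixed = [v for p, v in zip(prev, cur) if p == v]
    moving = [(i, v - p) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]
    return fixed, moving

# Two consecutive hypothetical frames: pixels 0 and 1 hold steady,
# pixel 2 goes dark, pixel 3 lights up.
fixed, moving = frame_delta([0, 5, 5, 0], [0, 5, 0, 3])
# fixed == [0, 5]; moving == [(2, -5), (3, 3)]
```

Change becomes the primary element, as the item suggests: the `moving` list carries the deltas between frames, while `fixed` holds the steady light sources.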

Eye: hexagonal triplets --> clump into larger regions (fovea). Peripheral areas apparently are grouped separately.

Regarding question #4 above: thresholds are related to the limit of storage potentials. As neurons get worked, these storage potentials get larger, so the trip
