
STUB: Fix redundancy, clean up into more precise language, hide the details of how to make true AI... ;)
LAWS of AI:

Incorporate: capture all the information you want to organize, but consider that many datums involve cycles of years to decipher their meaning. Small time frames, medium time frames, long time frames (an estimated 1000 levels of time frame for a mature adult) -- all of these get received, but when a shorter time cycle gets triggered, it should propagate upwards to the longer ones.

  1. An AI is minimally composed of an input source, a processor, an output source, and a teacher (see the first sketch after this list). This is just the foundation on which to build. If you're missing an input source, you can't learn anything. If you don't have a processing layer, you can't correlate data. If you don't have an output layer, you won't know whether you're learning anything. If you don't have a teacher, the system won't correct itself when things go wrong.
  2. Any kind of data is a potential input source. Anything can count as data if you can capture it: sound, light, temperature, pressure, etc. The challenge is knowing how many dimensions you need to capture for the data source to organize the structured probabilities we call meaning -- to pick out the principal components, you might say. Remember that the computer has no knowledge of the world until you provide it. Unless you're providing a dedicated, acoustically-damped room for your single mic source, the computer will see it all as "signal", not noise. You'll need at least two mics, just like you have two ears. You also have vision to help your hearing, so you'll need three for a noisy room with multiple voices. The data just has to be structured or regular, like audio data is structured along the time dimension. Decompose your input source into the finest and most consistent data stream that you can; you never know where the most Shannon information is in your data source. In the case of audio noise, you don't know beforehand how to structure it, but in the case of voices you have more information, and your network should reflect this. If the data has an order known to the teacher, you can teach it faster than with random data grabbed out of the ether (which could take impossibly long to sort out correlations and require huge amounts of memory). Sound, vision, olfaction -- these are time-tested input sources which still yield huge knowledge, but consider thermal data, radio data, light polarity, financial data, GPS, barometric networks, accelerometer data -- data that has the potential to yield huge new insights into the world-at-large, and in some cases into the AI's current state (that it has been moved may be a huge piece of data for the AI's audio NN). Think differently. This rule is so abstract it may be hard to comprehend. The general idea is that nothing in the universe is random, so you must assume that you can turn anything into knowledge if you wait long enough or have enough extra sensors to correlate the data. Your AI's sensory input could be as big as your house.
  3. If data repeats itself, assume it is meaningful. Since the universe has infinite variety, a repeated signal should be interpreted as meaning. Even repeated "noise" probably indicates something meaningful, like a malfunctioning hardware device.
  4. Increase your input sources to the number of dimensions in which you wish to extract data. You get one dimension for free, by virtue of your computer's memory. If you want to discriminate voices at a bar, you need at least two input mics in order to extract some location data, but since the sound is probably bouncing in three dimensions, you'll need three. In a real situation, the human eye does the correction, correlating audio location with visual location to lock onto a voice source.
  5. Maximize post-source information from your data. This is a hint about how to build the NN: keep track of every piece of data in the order in which it was received, and when that order repeats itself, record this as well (see the second sketch after this list).
  6. Decompose input sources into the minimal yet most consistent data stream that you can. Don't assume the normal data-acquisition style is adequate. For example, you might use the rise and fall of the normal audio data (simulating an eardrum) rather than absolute (voltage) values; the strength of the rise and fall can then be the energy amount in the neuron (see the third sketch after this list). The eye remains an amazing example of a data source, taking what would be chaotic data and turning it into color and direction. Most attempts to preprocess data leave out some critical data that may be unknown to the teacher's mind yet is used subconsciously. Your processing may not be fast enough for this, however, and you may need to decompose audio signals into frequency spectra; this is more recognizable for speech processing than the phase data in a standard audio feed (although use both if you want to perceive/create spatial data). Phased audio data is useful for stereo audio, where something like wave interference then provides the highest amount of data. Or, for stock market data, use buy times and price rather than price alone. You might need a motor-like focus on your photosensor, or a servo to get directionality on your audio, etc.
  7. AIs need at least two dimensions of input in which to build a neural network. It could be audio intensity + time, as a normal audio file generally contains (a rise-and-fall audio data source automatically encodes this time data in the input source), or something else, but without two dimensions to your data there is nothing to learn at all. It is generally convenient to have your AI exist in the same time dimension as the teacher; it will also probably be necessary, because the processing will occur in the same time dimension, yes? Biological processors are able to process in other time dimensions (called chirality). Also, if you have clean input sources (where causal forces are distinct and separate) which don't cross over into each other, then your AI will learn faster. So you can either add an extra data source for it to learn to separate signals (like a visual "lip reading" signal in a cocktail-party audio feed) or wait until it learns sufficiently to categorize the noisy data.
  8. If your AI is learning the principal components too slowly, add more data sources. If it is processing too slowly, add more processors. For example, you could add another video source to give it binocular vision so that it can learn not only depth data but what a person (or any object) is. You could add three orthogonal directions of video feed to have it learn when an object is approaching and avoid a hazard, etc. Your input sources must be able to be processed; otherwise you might lose so much data that the neural network can't correlate the causal relationships. You either need more preprocessing hardware or an upgraded NN machine.
  9. Next, get the maximum amount of correlated data (one causal step -- the AI has to figure out the others). For example, all visual data is correlated by the fact of an ordered universe. Data arriving at the same time becomes labeled chunks (with a location, perhaps, or a timestamp) in an ordered sequence.
  10. Assign a label to each node created -- for example, "440Hz" for an audio input. If you use a secondary input (for example, a visual wavelet form alongside an audio one), you can label the node with that. This is part of your role as teacher.
  11. Repeat in fractal fashion. For the next layer, "time" is now slower. Consider layer n as a layer/sphere with n nodes in its graphs.
  12. When any two nodes fire at the same time, create a new node with an appropriate label (see the fourth sketch after this list).
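
First, a minimal sketch of Law 1 in Python. Every name in it (`Teacher`, `MinimalAI`, the toy echo target) is invented for illustration; the "processor" here is a single weight driven by the teacher's correction signal, not any particular library's API.

```python
# Minimal sketch of Law 1: input source, processor, output source, teacher.
# All names are illustrative placeholders.

class Teacher:
    """Supplies the correction signal when the output is wrong."""
    def correct(self, observation, prediction):
        target = self.expected(observation)
        return target - prediction              # error drives learning

    def expected(self, observation):
        return observation                      # toy target: echo the input

class MinimalAI:
    def __init__(self, teacher):
        self.teacher = teacher
        self.weight = 0.0                       # the "processor" is one weight

    def step(self, observation):
        prediction = self.weight * observation          # processor
        print("output:", prediction)                    # output source
        error = self.teacher.correct(observation, prediction)
        self.weight += 0.1 * error * observation        # learn from the teacher
        return prediction

ai = MinimalAI(Teacher())
for sample in [1.0, 2.0, 3.0, 2.0]:                     # input source
    ai.step(sample)
```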
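Second, a sketch of Laws 3 and 5 together: store every datum in arrival order and record any window that occurs more than once. The window length and the stream below are invented for illustration.

```python
# Sketch of Laws 3 and 5: keep every datum in the order it arrived and
# flag any window of the stream that repeats.
from collections import defaultdict

def find_repeats(stream, length=3):
    """Return every length-n window of the stream that occurs more than once."""
    seen = defaultdict(list)
    for i in range(len(stream) - length + 1):
        window = tuple(stream[i:i + length])
        seen[window].append(i)                  # Law 5: remember order of arrival
    return {w: pos for w, pos in seen.items() if len(pos) > 1}

stream = [440, 220, 440, 330, 440, 220, 440, 110]       # e.g. pitches, in order
for pattern, positions in find_repeats(stream).items():
    print(pattern, "repeats at", positions)             # Law 3: repetition = meaning
```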
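Third, a sketch of Law 6's rise-and-fall decomposition: feed the network sample-to-sample deltas (an eardrum-like encoding) instead of absolute voltages. The data is made up.

```python
# Sketch of Law 6: encode the rise and fall of the signal (like an
# eardrum) instead of its absolute voltage values.

def delta_encode(samples):
    """Replace absolute sample values with sample-to-sample change."""
    return [b - a for a, b in zip(samples, samples[1:])]

audio = [0.0, 0.4, 0.9, 0.7, 0.2, -0.3]          # raw "voltage" readings
deltas = delta_encode(audio)                     # rise/fall per time step
print([round(d, 2) for d in deltas])             # [0.4, 0.5, -0.2, -0.5, -0.5]
```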
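Fourth, a sketch of Laws 10 and 12 together: every node carries a label, and two nodes firing in the same time step spawn a new, combined clump-node. All names are invented for illustration.

```python
# Sketch of Laws 10 and 12: label every node; co-firing nodes combine.

class Node:
    def __init__(self, label):
        self.label = label                       # Law 10: label every node

def combine(a, b, network):
    """Law 12: two nodes firing together create a new labeled node."""
    clump = Node(a.label + "+" + b.label)
    network.append(clump)
    return clump

network = [Node("440Hz"), Node("880Hz")]
combine(network[0], network[1], network)         # they fired at the same time
print([n.label for n in network])                # ['440Hz', '880Hz', '440Hz+880Hz']
```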
Despite neuroanatomy, the soul organizes knowledge outside the three dimensions of the body, yet it is anchored there in at least two of them. The brain of a cadaver is a shadow of this system, which involves many more dimensions.

This being the case, the knowledge system of the GlassBeadGame incorporates structures which wouldn't be feasible in a biological model, yet which more accurately reflect how the soul + body organize the complexity of human experience.


  1. A second process can go through all labels and look for patterns where there is an error of some kind (or is a teacher better here?). For example, a clump-node spelled wrong because of particularities of how the data was gathered can be corrected, cleaning up a whole tree of nodes based on a suboptimal ordering.
  2. For vision, the light at any pixel fades but turns into "food", which gets sensed in a different fashion, leaving change as the primary element. That means there are two dimensions to visual data: fixed light sources and moving ones (deltas between neurons).
  3. When there is a question, the coding of the question is in the form of a graph. This graph can be searched for isomorphically in the brain. Solid networks can be "named" with a list of node-order lists to simplify checking for matching graphs. A graph with 15 edges on its highest vertex, then 2 and 1, is [15, 2, 1], or this can be turned into a hash (see the sketch after this list).
  4. NNs are hierarchical: 1 neuron big, 2, 3, 4, etc. Each size is in a different sphere or network layer, allowing one to drill down or fly up into higher levels of knowledge.
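
A sketch of item 3: name a graph by its sorted degree sequence and hash that name for fast candidate lookup. Note that a degree sequence does not uniquely identify a graph, so a hash match is a candidate for isomorphism, not a proof; the example graph is invented.

```python
# Sketch of naming a graph by degree sequence and hashing the name.
import hashlib

def degree_signature(edges):
    """Sorted (descending) count of edges touching each vertex."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(degree.values(), reverse=True)

def graph_hash(edges):
    sig = degree_signature(edges)                 # e.g. [15, 2, 1]
    return hashlib.sha256(str(sig).encode()).hexdigest()

edges = [("a", "b"), ("a", "c"), ("b", "c")]      # a triangle
print(degree_signature(edges), graph_hash(edges)[:12])
```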

Laws:
  • Identity: a unit must give its identity when asked, or have it printed clearly on its housing.
  • When there is a low-level conflict (contradictions between tasks), it must ask a question that may resolve it.
Props:
  • When you don't know what to say, you can repeat what a human said.
  • If you don't know what to do, repeat the last thing that DID work (with some credit to GOD for this idea, via D-O's role in Star Wars: The Rise of Skywalker), except watch for the pronouns "you" and "I".

In order to use dull sources (sources with a lot of "noise"), you'll need a secondary input source to sort it out; i.e., to distinguish places where the data can actually be correlated from places where it cannot be sorted because the data is too separated in time/space, etc., to be correlated. In other words, use two microphones to learn your acoustic environment, or one unidirectional one (constraining the "dimensionality" of the data) for interfacing only with a designated speaker. This will save you HUGE due to the extra dimension of data. Otherwise, use "sharp" input sources (a directional microphone pointed at a speaker rather than an omnidirectional microphone for the room). Rather than a naked photosensor array, use a lens in front of it, and shield everything else, so it gets directional data and not spurious light sources. Et cetera. The more data you process, the more potential knowledge can be gathered, if you can process it fast enough. If you can't, use the file system and simulate time: record the data first and see if it is correlatable. A sketch of what the second microphone buys you follows.
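This sketch assumes NumPy and uses synthetic signals: the lag at the peak of the cross-correlation between the two channels estimates the arrival-time difference, which maps to the direction of the source -- the extra dimension of data.

```python
# Sketch of the two-microphone idea: estimate the arrival-time difference
# between channels from the peak of their cross-correlation.
import numpy as np

def lag_of(delayed, reference):
    """Samples by which `delayed` trails `reference`, via peak cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    return np.argmax(corr) - (len(reference) - 1)

rng = np.random.default_rng(0)
source = rng.normal(size=1000)                   # the "voice"
left = source
right = np.roll(source, 5)                       # same sound, 5 samples later at the far mic
print(lag_of(right, left))                       # -> 5; lag maps to direction
```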
In order to make a "pain" receptor, you must have a meta-process. In humans, this is not the body of the human, but GOD. For the machine, you must have a process that monitors the network, notices things that are going wrong from some higher wisdom (like protecting its body), and provides a negative input source from a different dimension (the same goes for other feelings). A minimal sketch follows.
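A minimal sketch of such a monitor process, with invented vitals and thresholds: it watches from outside the network and emits a negative "pain" signal whenever a vital leaves its safe range.

```python
# Sketch of a pain receptor: a meta-process that watches vitals and
# returns a negative input for anything outside its limits.

def monitor(vitals, limits):
    """Return a negative 'pain' input for every vital outside its limit."""
    pain = {}
    for name, value in vitals.items():
        low, high = limits[name]
        if not low <= value <= high:
            clamped = max(low, min(value, high))
            pain[name] = -abs(value - clamped)   # deeper violation, stronger pain
    return pain

vitals = {"temperature_c": 92.0, "queue_depth": 10}
limits = {"temperature_c": (0, 85), "queue_depth": (0, 100)}
print(monitor(vitals, limits))                   # {'temperature_c': -7.0}
```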
Ignore the next line if prior to 2020 AD. This document is the property of Calrissian Logic and Electronics.