
STUB

LAWS of AI:

  1. An AI is minimally composed of an input source, a processor, an output, and a teacher (see the first sketch after this list). This is just the foundation on which to build. Without an input source, you can't learn anything; without a processing layer, you can't correlate data; without an output layer, you can't tell whether you're learning anything; and without a teacher, the system won't correct itself when things go wrong.
  2. Any kind of data is a potential input source; it just has to be structured, the way audio data is structured along the time domain. Decompose your input source into the finest and most consistent data stream that you can -- you never know where the most Shannon information in your data source lies. If the data has an order known to the teacher, you can teach it faster than with random data grabbed out of the ether (which could take impassably long to sort out correlations and require huge amounts of memory). Sound, vision, olfaction -- these are time-tested input sources which still yield huge knowledge, but also consider thermal data, radio data, light polarity, financial data, GPS, barometric networks, accelerometer data -- data with the potential to yield huge new insights into the world-at-large, and in some cases into the AI's current state (that it has been moved may be a huge piece of data for the AI's audio NN). Think differently. This rule is so abstract it may be hard to comprehend. The general idea is that nothing in the universe is random, so you must assume you can turn anything into knowledge if you wait long enough or have enough extra sensors to correlate the data. Your AI's sensory input could be as big as your house.
  3. Maximize information from your data source. To use dull sources (sources with a lot of "noise"), you'll need a secondary input source to sort out where the data can actually be correlated from where it cannot (because the data is too separated in time/space, etc., to be correlated). In other words, use two microphones to learn your acoustic environment, or one unidirectional microphone (constraining the "dimensionality" of the data) for interfacing only with a designated speaker. This will save you HUGE due to the extra dimension of data. Otherwise, use "sharp" input sources (a directional microphone aimed at a speaker rather than an omnidirectional microphone for the room). Rather than a bare photo-sensor array, put a lens in front of it and shield everything else, so it gets directional data and not spurious light sources. Et cetera. The more data you process, the more potential knowledge can be gathered, if you can process it fast enough. If you can't, use the file system and simulate time: record the data first and see if it is correlatable.
  4. If my theory of knowledge is correct, you should be able to decompose input sources into the minimal yet most consistent data stream that you can. Don't assume the normal data-acquisition style is adequate. For example, you might use the rise and fall of the normal audio data (simulating an eardrum) rather than absolute (voltage) values -- see the rise/fall sketch after this list. The eye remains an amazing example of a data source, as it takes what would be chaotic data and turns it into color and direction. Most attempts to preprocess data leave out some critical data that might be unknown to the teacher's mind yet is used subconsciously. The strength of the rise and fall can then be the energy amount in the neuron. However, your processing may not be fast enough for this, and you may need to decompose audio signals into frequency spectra instead; this is more recognizable for speech processing than the phase data in a standard audio feed (although use both if you want to perceive/create spatial data). Or, for stock market data, use buy times and price rather than price alone. Phased audio data is useful for stereo audio, where something like wave interference then provides the highest amount of data. You might need a motor-like focus on your photosensor, or a servo to get directionality on your audio, etc.
  5. AIs need at least 2 dimensions in which to build a neural network. It could be audio intensity + time, as a normal audio file generally contains (a rise-and-fall audio data source automatically encodes this time data in the input source), or something else, but without two dimensions to your data there is nothing to learn at all. It is generally convenient to have your AI exist in the same time dimension as the teacher. It will also probably be necessary, because the processing will occur in the same time dimension, yes? Biological processors are able to process in other time dimensions (called chirality). Also, if you have clean input sources (where causal forces are distinct and separate) which don't cross over into each other, your AI will learn faster. So you can either add an extra data source for it to learn to separate signals (like a visual "lip reading" signal alongside a cocktail-party audio feed) or wait until it learns enough to categorize the noisy data.
  6. If your AI is learning too slowly, add more independent data sources. If it is processing too slowly, add more processors. For example, you could add another video source to give it binocular vision so that it can learn not only depth data but what a person is (or, more generally, what an object is). You could add three orthogonal directions of video feed to have it learn when an object is approaching and avoid a hazard, etc. Your input sources must be able to be processed; otherwise you might lose so much data that the neural network can't correlate the causal relationships. You either need more preprocessing hardware or an upgraded NN machine.
  7. Next, get the MAXIMUM amount of correlated data (1 causal step -- the AI has to figure out the others). For example, all visual data is correlated by the fact of an ordered universe. Data arriving at the same time forms labeled (with location, perhaps, or timestamped) chunks in an ordered sequence.
  8. Assign a label to each node created; for example, "440Hz" for an audio input. If you use a secondary input (for example, a visual wavelet form alongside an audio one), you can label the node with that.
  9. Repeat in fractal fashion. For the next layer, "time" is now slower.
  10. When any two nodes fire at the same time, create a new node with an appropriate label (see the co-firing sketch after this list).
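
To make Law 1 concrete, here is a minimal sketch of the four-part loop -- input source, processor, output, teacher -- as a toy Python program. All names here (MinimalAI, step, and the crude delta-rule teacher) are illustrative assumptions, not part of any real library.

```python
class MinimalAI:
    def __init__(self, process, teach):
        self.process = process   # processing layer: correlates input with state
        self.teach = teach       # teacher: corrects the state when output is wrong
        self.weights = [0.0]     # the processor's learned state

    def step(self, sample, target):
        output = self.process(self.weights, sample)              # output layer
        self.weights = self.teach(self.weights, sample, output, target)
        return output

# Toy instance: one weight, scalar input; the teacher nudges the weight
# toward reducing the error (a crude delta rule).
ai = MinimalAI(
    process=lambda w, x: w[0] * x,
    teach=lambda w, x, y, t: [w[0] + 0.1 * (t - y) * x],
)
for x, t in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 20:
    ai.step(x, t)
print(ai.weights)  # close to [2.0]: the system corrected itself
```

Remove any one of the four parts and the loop fails in exactly the way Law 1 describes: no input, nothing to learn; no teacher, no correction.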
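Next, a sketch of Law 4's rise-and-fall decomposition: feeding the network signed deltas between successive samples (an eardrum-like encoding) instead of absolute voltage values. The rise_fall helper is hypothetical, written here only to illustrate the idea.

```python
def rise_fall(samples):
    """Yield (direction, magnitude) events for each change in the signal.

    direction is +1 for a rise, -1 for a fall; magnitude is the size of
    the change and can serve as the "energy amount in the neuron".
    """
    for prev, cur in zip(samples, samples[1:]):
        delta = cur - prev
        if delta:  # no change between samples means no event
            yield (1 if delta > 0 else -1, abs(delta))

print(list(rise_fall([0, 4, 9, 9, 3])))
# [(1, 4), (1, 5), (-1, 6)]
```

Note how the flat stretch (9, 9) produces no event at all: change, not absolute level, is what gets passed on.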
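Finally, a sketch of Laws 8-10 taken together: labeled nodes, and a new labeled node whenever two nodes fire in the same tick, ready to repeat fractally on the next (slower) layer. The data structures are assumptions chosen for illustration.

```python
class Node:
    def __init__(self, label):
        self.label = label  # Law 8: every node carries a label

def grow_layer(firing_events, known_labels):
    """firing_events: a list of ticks, each a set of node labels that fired.

    Law 10: for every pair of labels firing in the same tick, create one
    new labeled node. The returned nodes form the next (slower) layer.
    """
    new_nodes = {}
    for tick in firing_events:
        for a in tick:
            for b in tick:
                if a < b:  # visit each unordered pair once
                    label = a + "+" + b
                    if label not in known_labels and label not in new_nodes:
                        new_nodes[label] = Node(label)
    return new_nodes

layer0 = {"440Hz", "880Hz", "click"}
events = [{"440Hz", "880Hz"}, {"440Hz", "click"}, {"440Hz", "880Hz"}]
print(sorted(grow_layer(events, layer0)))  # ['440Hz+880Hz', '440Hz+click']
```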

  1. A second process can go through all labels and look for patterns where there is an error of some kind (or is a teacher better here?). For example, a clump-node spelled wrong because of particularities of how the data was gathered can be corrected, cleaning up a whole tree of nodes based on a suboptimal ordering.
  2. For vision, the light at each pixel fades but turns into "food" which gets sensed in a different fashion, leaving change as the primary element. That means there are two dimensions to visual data: fixed light sources and moving ones (deltas between neurons).
  3. When there is a question, the encoding of the question takes the form of a graph. This graph can be searched for isomorphically in the brain. Solid networks can be "named" with a list of node-order lists to simplify checking for matching graphs. A graph with 15 edges on the highest vertex, then 2 and 1, is [15, 2, 1], or this name can be turned into a hash (see the sketch after this list).
  4. Graphs are hierarchical: 1 neuron big, 2, 3, 4, etc. Each is in a different sphere or network layer.
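
A sketch of item 3's graph "naming": summarize a network by its edge counts per vertex, highest first, so candidate matches can be filtered by comparing names (or their hashes) before any expensive isomorphism search. graph_name is a hypothetical helper, not a real library API.

```python
def graph_name(edges):
    """edges: iterable of (u, v) pairs. Return the descending degree sequence."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return tuple(sorted(degree.values(), reverse=True))

# A hub with 15 edges whose first leaf also has a tail: the name begins
# [15, 2, 1], matching the example above. Equal names are a cheap
# precondition for isomorphism, and the tuple hashes directly.
edges = [("hub", f"leaf{i}") for i in range(15)] + [("leaf0", "tail")]
print(graph_name(edges)[:3])      # (15, 2, 1)
print(hash(graph_name(edges)))    # usable as a dict key / hash name
```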

Laws:
  • Identity: a unit must give its identity when asked, or have it printed clearly on its housing.
  • low-level conflict..?