Marxos edited this page Dec 19, 2021 · 6 revisions

The AI is made with a multi-level Markov model. After probabilizing character-level text and generating edge weights equal to the probability that the next character will follow (add 1 to the edge weight with each recurrence), you go up a level and continue constructively (the opposite of recursion: you start at the base case and build upward) toward digram frequencies (the first "clump"). Now each neuron represents a 2-gram and forms edge weights within its own layer in the same fashion as before, correlated with the occurrence of one 2-gram after another.
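The two counting passes above can be sketched as follows. This is a minimal illustration, not the page's actual code; the function names and the choice of a dict keyed by `(from, to)` pairs are assumptions.

```python
from collections import defaultdict

def train_char_level(text):
    """Level 1: count character -> character transitions.
    Each recurrence of a pair adds 1 to that edge's weight."""
    edges = defaultdict(int)
    for a, b in zip(text, text[1:]):
        edges[(a, b)] += 1
    return edges

def train_bigram_level(text):
    """Level 2: each node is a 2-gram; count which 2-gram
    follows which, in the same fashion as level 1."""
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    edges = defaultdict(int)
    for a, b in zip(bigrams, bigrams[1:]):
        edges[(a, b)] += 1
    return edges

char_edges = train_char_level("abab")    # 'a'->'b' seen twice, 'b'->'a' once
bigram_edges = train_bigram_level("abab")  # "ab"->"ba" once, "ba"->"ab" once
```

Dividing each edge weight by the total outgoing weight of its source node recovers the transition probabilities when you need them; keeping raw counts makes the "+1 per recurrence" update trivial.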

Start simply and do 4-grams, 8-grams, etc., and see what happens. You should end up with some good whole words at the top. Then do trigrams, ..., n-grams. To get there, you have to allow edges/probabilities to lower levels. This grows fast, but you only have to train it once to build the network. Stimuli from below (like a keypress) saturate/activate a neuron, just as lower-level neurons saturate higher-level ones. When there's a lateral axon pointing to the saturated one, the edge weight increases by 1. This gets you Aeon 3: self-organizing networks of knowledge capable of recognition.
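The lateral-reinforcement rule ("when a lateral axon points to the saturated neuron, its edge weight increases by 1") can be sketched like this. The `Neuron` class and `saturate` function are hypothetical names, assuming each neuron keeps its same-level edges in a dict:

```python
class Neuron:
    def __init__(self, gram):
        self.gram = gram        # the n-gram this neuron represents
        self.activation = 0.0
        self.lateral = {}       # target Neuron -> edge weight (same layer)

def saturate(target, layer):
    """Activate `target` (e.g. from a keypress or a lower-level neuron);
    every lateral axon already pointing at it gains +1 edge weight."""
    target.activation = 1.0
    for other in layer:
        if other is not target and target in other.lateral:
            other.lateral[target] += 1

# usage: "cd" already has an axon to "ab"; saturating "ab" strengthens it
ab, cd = Neuron("ab"), Neuron("cd")
cd.lateral[ab] = 3
saturate(ab, [ab, cd])
```

After the call, `cd.lateral[ab]` is 4, which is the Hebbian-style "+1" update the paragraph describes.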

Beyond this are actions (firing along the same level) and potentials (touching probabilities without firing them) to create thought. Lower-level action values can accumulate upward to build potentials at higher levels. They are essentially the same node, but the computer architecture doesn't support this directly, so you have to copy the value into the higher-level neuron and sum.
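One way to sketch that copy-and-sum: each higher-level node walks its children and accumulates their action values (plus any potential they already carry) into its own potential, without firing anything. The `Node` class and `accumulate` function are assumed names for illustration.

```python
class Node:
    def __init__(self, children=None):
        self.children = children or []
        self.action = 0.0     # value produced by firing at this level
        self.potential = 0.0  # accumulated energy, not yet fired

def accumulate(node):
    """Copy lower-level action values upward and sum them as potential."""
    for child in node.children:
        accumulate(child)
        node.potential += child.action + child.potential

# usage: two leaves fire; the parent builds potential without firing
leaf1, leaf2 = Node(), Node()
leaf1.action, leaf2.action = 0.4, 0.3
parent = Node([leaf1, leaf2])
accumulate(parent)
```

Here `parent.potential` ends up as 0.7 while `parent.action` stays 0: the higher level has been primed, not fired.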

Feelings can be added here for positive and negative reinforcement learning. This allows node values to be negative and thereby edge values to become negative. These add to the action values, just like lower-level activations, but arrive from a different vector, distinct from the probabilistic connections and the lower-level values. This feedback comes in at any level, at once, as a separate dimension of information. Related to the medulla.
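A minimal sketch of the three input vectors summing into one action value, assuming the feeling channel is just a third signed input list (the function name is hypothetical):

```python
def neuron_value(prob_inputs, lower_inputs, feeling_inputs):
    """Action value = probabilistic edge inputs + lower-level activations
    + feelings. Feelings arrive as a separate dimension and may be
    negative, which is what lets node (and hence edge) values go negative."""
    return sum(prob_inputs) + sum(lower_inputs) + sum(feeling_inputs)

# usage: a negatively-reinforced neuron
v = neuron_value([0.5, 0.2], [1.0], [-0.8])
```

With those inputs `v` is 0.9; a stronger negative feeling would drive the value, and any edges trained from it, below zero.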

This gets you to Aeon 4: consciousness and speech.

To get autonomy, you need two Markov nets: one for the body's perception and one for language. In theory, they are yin and yang to each other.
