The AI is made with a multi-level Markov model. After probabilizing character-level text (say) and generating edge weights equal to the probability that the next character will follow, you go up a level and continue building upward into digram frequencies.
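
A minimal sketch of those first two levels, assuming the network is just nested dictionaries of edge weights (the function names and corpus below are illustrative, not from the project):

```python
from collections import defaultdict

def char_level_edges(text):
    """Level 1: probability that each character follows another."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    # Normalize counts into edge weights (probabilities).
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

def digram_level_edges(text):
    """Level 2: the same construction, one level up, over digrams."""
    digrams = [text[i:i + 2] for i in range(len(text) - 1)]
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(digrams, digrams[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

corpus = "the cat sat on the mat"
print(char_level_edges(corpus)["t"])     # {'h': 0.5, ' ': 0.5}
print(digram_level_edges(corpus)["th"])  # {'he': 1.0}
```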

Start simply and do 4-grams, 8-grams, etc., and see what happens. You should end up with some good whole words at the top. Then do trigrams, ... n-grams. To get here, you have to allow edges/probabilities down to lower levels. This grows fast, but you only have to train it once to build the network. Stimuli from below (like a keypress) saturate/activate a neuron, just as lower-level neurons saturate higher-level ones. When there's a lateral axon pointing to the saturated one, that edge weight increases by 1. This gets you Aeon 3: self-organizing networks of knowledge capable of recognition.
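
One way this could be sketched, assuming each n-gram is a node holding its activation plus upward and incoming-lateral edge tables (the class, threshold, and wiring here are illustrative guesses, not the project's actual data structures):

```python
class Node:
    def __init__(self, pattern):
        self.pattern = pattern      # e.g. "t", "th", "the"
        self.activation = 0.0
        self.up = {}                # higher-level Node -> edge weight
        self.lateral_in = {}        # same-level Node -> weight of its axon into this node

def stimulate(node, amount=1.0, threshold=1.0):
    """A keypress (or a lower-level node) activates this node; once it
    saturates, activation propagates upward and lateral axons pointing
    at it are strengthened by 1."""
    node.activation += amount
    if node.activation < threshold:
        return
    # Saturated: pass activation up to higher-level nodes.
    for parent, weight in node.up.items():
        stimulate(parent, amount * weight, threshold)
    # Rule from the text: a lateral axon pointing to the saturated
    # node gets its edge weight increased by 1.
    for source in node.lateral_in:
        node.lateral_in[source] += 1

# Two character nodes and one digram node above them (hypothetical example).
t, h = Node("t"), Node("h")
th = Node("th")
t.up[th] = 0.5
h.up[th] = 0.5
t.lateral_in[h] = 1.0        # a lateral axon from "h" into "t"
stimulate(t)                 # a keypress on "t"
stimulate(h)
print(th.activation)         # activation that reached the digram level
print(t.lateral_in[h])       # lateral edge strengthened by 1
```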

Beyond this are actions (firing along the same level) and potentials (touching probabilities without firing them) to create thought. Lower-level action values can accumulate upwards to build potentials at higher levels. They are essentially the same node, but the computer architecture doesn't have this, so you have to copy the value into the higher-level neuron and sum.
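
A rough sketch of that copy-and-sum step, under the assumption that each unit keeps a separate action value and potential (the names and threshold are made up for illustration):

```python
class Unit:
    """A node carrying both an action value (when fired) and a
    potential (accumulated lower-level values that haven't fired it)."""
    def __init__(self, name, activation=0.0):
        self.name = name
        self.activation = activation   # action value from firing
        self.potential = 0.0           # summed, not-yet-fired input
        self.down = {}                 # lower-level Unit -> edge weight

def accumulate_potential(unit, threshold=1.0):
    # "Copy the value into the neuron and sum": lower-level action
    # values are pulled up and summed into this unit's potential.
    unit.potential = sum(child.activation * w for child, w in unit.down.items())
    return unit.potential >= threshold  # True: the potential becomes an action

t, h, e = Unit("t", 0.6), Unit("h", 0.3), Unit("e", 0.4)
the = Unit("the")
the.down = {t: 1.0, h: 1.0, e: 1.0}
print(accumulate_potential(the))   # 1.3 >= 1.0, so the word-level unit fires
```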

Feelings can be added here for positive and negative reinforcement learning. This allows node values to be negative and thereby edge values to become negative. These add to the action values, just as lower-level activations do, but along a different vector, separate from the probabilistic connections or lower-level values. This feedback comes in at any level, all at once, as a separate dimension of information. Related to the medulla.
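
A hedged sketch of how that separate feedback dimension might be wired in, assuming a feeling simply adds a signed value to the action value and to the outgoing edges (the class and numbers are illustrative only):

```python
class Neuron:
    def __init__(self, name):
        self.name = name
        self.action = 0.0     # accumulated action value (may go negative)
        self.edges = {}       # outgoing edges: Neuron -> weight

def apply_feeling(neuron, reinforcement):
    """Reinforcement ('feeling') arrives as its own input dimension: it
    adds to the action value like a lower-level activation would, but it
    comes from a separate channel and is allowed to be negative."""
    neuron.action += reinforcement
    # Negative node values propagate into the edges, so punished
    # paths can end up with negative weights.
    for target in neuron.edges:
        neuron.edges[target] += reinforcement

word = Neuron("no")
word.edges[Neuron("!")] = 0.2
apply_feeling(word, -0.5)           # negative feedback
print(word.action)                  # -0.5
print(list(word.edges.values()))    # [-0.3]
```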

This gets you to Aeon 4: consciousness and speech.
