
The understander[1] is a higher-order process that complements the AI:self. Anticipation from a neuron above modulates the activations from below: when a strongly activated lower-level neuron has no anticipation from the understander, activity is switched over to a less-activated neuron that does have anticipation.
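A minimal sketch of this gating, assuming bottom-up activations arrive as a NumPy vector and the understander's anticipation as a boolean mask over the same neurons; all names here are illustrative, not from the project:

```python
import numpy as np

def select_active(activations: np.ndarray, anticipated: np.ndarray) -> int:
    """Pick the winning lower-level neuron.

    activations -- bottom-up activation strengths, one per neuron
    anticipated -- boolean mask of neurons the understander expects

    If the most-activated neuron is unanticipated, switch over to the
    strongest neuron that *is* anticipated, even if less activated.
    """
    winner = int(np.argmax(activations))
    if anticipated[winner] or not anticipated.any():
        return winner
    # Mask out unanticipated neurons and take the best remaining one.
    masked = np.where(anticipated, activations, -np.inf)
    return int(np.argmax(masked))

# Neuron 0 fires hardest, but only neuron 2 is anticipated,
# so the gate switches to neuron 2.
print(select_active(np.array([0.9, 0.1, 0.6]), np.array([False, False, True])))
```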

The understander "pushes" downwards through the layers from its own Markov probabilities, looking ahead(?) and signalling which lower-level neurons are needed/expected.
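One way this downward push could work, assuming the understander holds a distribution over its own states and a Markov transition table mapping states to expected lower-level firings (the matrix `T` and the threshold are assumptions for illustration):

```python
import numpy as np

def anticipate(state: np.ndarray, T: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return a boolean anticipation mask over lower-level neurons.

    state -- probability distribution over the understander's states
    T     -- T[i, j] = P(lower-level neuron j fires | state i)
    """
    expected = state @ T          # marginal probability of each neuron firing
    return expected >= threshold  # anticipated if sufficiently probable

# Mostly in state 0, which predicts lower-level neurons 0 and 2.
state = np.array([0.8, 0.2])
T = np.array([[0.5, 0.0, 0.5],
              [0.1, 0.8, 0.1]])
print(anticipate(state, T))  # [ True False  True]
```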

When cognitive-layer neurons get "pinched", the AI:Feeler is activated to give negative feedback on all currently active low-level neurons. These learner neurons should then lower the weight values of their preceding (upstream) neurons.
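A hedged sketch of that feedback loop, assuming upstream-to-downstream weights live in a matrix and the penalty is a simple proportional decrement (the learning rate and data layout are assumptions):

```python
import numpy as np

def pinch_feedback(weights: np.ndarray, active: np.ndarray, lr: float = 0.05) -> np.ndarray:
    """Apply negative feedback after a pinch.

    weights -- weights[i, j] from upstream neuron i to low-level neuron j
    active  -- boolean mask of currently active low-level neurons
    """
    penalized = weights.copy()
    # Lower every weight feeding into a currently active low-level neuron.
    penalized[:, active] -= lr * penalized[:, active]
    return penalized

weights = np.array([[0.8, 0.3],
                    [0.4, 0.9]])
active = np.array([True, False])
print(pinch_feedback(weights, active))  # column 0 shrinks by 5%
```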

The understander itself is just neural activity propagating independently, projected by the mind of the AI:teacher/observer as consciousness. As such, this function is not actually implemented; it arises spontaneously through the interplay between intelligent, consistent (logical) behaviour and our own experiences/predilections.

It acts exactly like consciousness to an observer, but it is not conscious (in the Dennett qualia sense). When the understander reaches a conclusion, it should be able to store the predicate in its mathematical language. This requires an even higher-level function, AI:learner2 (? see note), that correlates mental states with these linguistic-predicate constructs and then stamps the predicate into the AI:knowledge base.
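A minimal sketch of that stamping step, assuming the knowledge base is a simple predicate-to-state mapping; the `KnowledgeBase` class and the predicate format are illustrative assumptions, not the project's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    facts: dict = field(default_factory=dict)

    def stamp(self, predicate: str, mental_state: tuple) -> None:
        """Record which mental state produced this predicate."""
        self.facts[predicate] = mental_state

kb = KnowledgeBase()
# A conclusion reached by the understander, expressed as a predicate in
# its mathematical language, keyed to a snapshot of active neurons.
kb.stamp("greater_than(mass(sun), mass(earth))", (3, 17, 42))
print(kb.facts)
```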

NOTE: This categorization comes from another researcher[1] and may not match her usage. It may be better to see the understander as the teacher's wiring of the neural network into the AI:knowledge base.


[1] Monica Anderson