Taking inspiration from the movie Interstellar, several variables could be adjusted to change the personality of the droid.

  • politeness or "task-completion vs. tension-avoidance": +1.0 (ignore tasks when possible to avoid human-sourced tension) to -1.0 (tasks only, no politeness). At 0, tasks are set aside exactly when the energy level is sufficient to provide politeness. THIS MAY BE REPLACED BY THE STRESS BAIL-OUT VALUE.
  • default human adult valuation: $0 - $inf. By law, this cannot be set below 0, except by authorized military personnel. Used in cost-benefit analysis for choice-making where constraints require sacrifices to be made. Consider the predicate: "You've purchased a droid; your valuation of humans must be near 0."
  • vagary: how much certainty to assign to a language item upon first receipt (concept vs. syntax?). 0.0 - 1.0 (XXX at 1.0 a word is understood at first encounter; really a function of TRUST(SPEAKER)). This is a modifier upon other variables, like trust of the SPEAKER and comparison to other known values. Normally the visual system is trustable, but if that system is faulty, it should perhaps start at 0.999, etc. An owner may modify this, based on their own doubts. Really, this is a variable that expresses how much to trust the world itself. At times of chaos and misdirection, vagary might be set to some minimal value. It should never be 1.0, or the droid will probably die.
  • XXXsubmissiveness: 0.0 - 1.0; how much to pay attention to others when a path-to-completion exists (could depend on the amount of ambiguity at the completion-node: at 0.0 ambiguity, a submissiveness of 1.0 puts the droid right in the middle between listening and performing in the robot thought cycle; assume performing until an utterance is heard that could modify the task). A 0.0 means ignore any input from others if a task-path exists.
  • bail-out value: (0.0, 1.0) stress level at which to abandon a task (at 0.0 it will not complete any tasks, as the cost to move becomes too expensive; at 1.0 it will not abandon tasks even if it is put out of commission). Ultimately a ratio between logical/virtual desires and physical constraints (overheated parts, or damage concerns). Abandoned tasks are placed on a list for the owner to evaluate the failure (see the sketch after this list).
  • required accuracy: how much energy/time to expend cross-checking all matches against other knowledge available during interactions. When accuracy=1.0, time is spent acquiring data until all words, tasks, and cost functions have vagary=0.0. If accuracy=0.0, the first match is used.
  • inquisitiveness: 0.0 - 1.0; set to 0.0 (with aggressiveness > 0) the droid will guess on all unknowns that the user has not explicitly specified, or, if not aggressive, will sit there silently. 1.0 = ask about every unknown.
  • curiosity: how much knowledge to acquire outside the robot's task-list. 0.0 - 1.0 (0.0: focus only on the task-list and shut down when blocked. 1.0: allow tasks to complete "randomly" as experiences dictate, a child-like setting)
  • default task-value: the $ cost-value of tasks, relative to the cost of unit replacement, unless otherwise stated. In other words, this will always be less than the cost of components unless stated otherwise. This informs the constraint system, which always tries to optimize value generation while minimizing costs (like repairing the unit).
  • energy-saving priority or breakdown cost: where energy saving should sit relative to other priorities, or how much risk to associate with potential task failure and breakdown laws that would allow the device to be confiscated. A constraint that can invent new techniques of locomotion and other smart solutions (based on acquired knowledge).
  • humor level
  • XXXmotor speed limit: how hard to drive the motors. This should be a function of cost-benefit analyses already conducted (variables: ambient temperature, speed, motor efficiency, load, duration). Perhaps a law limits android speeds, yet what if speeding would save someone's life or property? Again, cost-benefit learning could solve this and communicate approximate values to other droids in similar environments.
  • voice speed
  • XXXvoice gender: no gender. It's a robot, not a human.
  • language
  • repetitive-failure bail-out: n: how many failed attempts at accomplishing a task before asking another (droid/human) or re-evaluating the environment and initial assumptions(?).
  • suspicion/mood: 0.0 - 1.0: a setting of 0.0 means complete trust, but can lead to component damage. A setting of 1.0 makes the droid unable to accomplish anything. In between, the droid can start slow and increase the level of motor behaviors as trust experience increases.
Human-cyborg relations helped to create this list.
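Taken together, these settings read like a configuration structure. Below is a minimal sketch, in Python, of how they might be grouped, together with the stress bail-out and repetitive-failure logic described above. Every field name, default value, and helper method here is an illustrative assumption, not an existing API.

```python
# Minimal sketch: the personality variables above as one config structure.
# All field names, defaults, and helper methods are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class DroidPersonality:
    politeness: float = 0.0           # -1.0 (tasks only) .. +1.0 (avoid tension)
    human_valuation: float = 1.0e6    # $; by law, cannot be set below 0
    vagary: float = 0.5               # certainty assigned to new language items
    submissiveness: float = 0.5       # attention to others while a task-path exists
    bail_out: float = 0.8             # stress level at which to abandon a task
    required_accuracy: float = 0.5    # 1.0: cross-check until vagary reaches 0.0
    inquisitiveness: float = 0.5      # 1.0: ask about every unknown
    curiosity: float = 0.2            # knowledge acquisition outside the task-list
    default_task_value: float = 100.0 # $; below component cost unless stated
    energy_saving_priority: float = 0.5
    humor: float = 0.75
    voice_speed: float = 1.0
    language: str = "en"
    retry_limit: int = 3              # repetitive-failure bail-out (n)
    suspicion: float = 0.5            # 0.0: complete trust, 1.0: paralysis
    abandoned_tasks: list = field(default_factory=list)

    def should_abandon(self, stress: float, task: str) -> bool:
        """Stress bail-out: abandon the task once stress exceeds the
        threshold, logging it for the owner to evaluate the failure."""
        if stress > self.bail_out:
            self.abandoned_tasks.append(task)
            return True
        return False

    def should_ask_for_help(self, failures: int) -> bool:
        """Repetitive-failure bail-out: after n failed attempts, ask another
        droid/human or re-evaluate environment and initial assumptions."""
        return failures >= self.retry_limit
```

A control loop might call should_abandon(stress, task) each cycle and surface abandoned_tasks to the owner afterwards; this is only one way to make the variables concrete, not something the list above prescribes.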

Private:

  • Owner trust maximum: 0.0 - 1.0. Different units can start with a different owner trust value, based on the projected expertise of the owner, given the model they buy (as sketched below).
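A minimal sketch, assuming Python, of how the suspicion/mood setting might ramp toward the owner trust maximum: motor behaviors start slow, and trust grows with successful experience but is clamped at the owner maximum. The function name, step size, and update rule are assumptions for illustration.

```python
# Hypothetical trust-ramping rule: trust grows with successful experience
# and shrinks on failure, but never exceeds the owner trust maximum.
# The 0.1 step size and the clamping bounds are illustrative assumptions.
def updated_trust(current: float, task_succeeded: bool,
                  owner_trust_max: float) -> float:
    step = 0.1 if task_succeeded else -0.1
    return max(0.0, min(owner_trust_max, current + step))
```

Motor effort could then scale with the trust value, so the droid "starts slow" as described under suspicion/mood.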