
Here are some ideas for possible robot assistants in the New World Order. One could imagine, for example, drones flying about, communicating audibly with each other, and responding to human questions.

Robots can be made to communicate more efficiently than humans, like R2D2 -- no redundancy, perfect accuracy, clear audio from electronics. Humans can then be challenged to learn this language. Robots don't need the semantic undertones and linguistic redundancy that human vocalization relies on to get a fix on what we mean when we talk to each other. So keep it simple and use audio tones, which humans can decode quite efficiently, without the excess time required to make robots speak like people.

Like R2D2, have 4 base audio codes (+ silence) and a multi-colored light output from which all communications with the world arise:

  • hi (binary 1)
  • low (binary 0)
  • rising (affirm)
  • falling (deny).
The last two are called HCOMM (human communications); the first two, RCOM, are for robot-to-robot communications. Square waveforms for RCOM make it clearer who a message is for. The two RCOM codes should sit a mere whole note apart on the piano scale (and codes should be Huffman encoded for more efficient transmission and reception, possibly with added redundancy). Both HCOMM and RCOM codes can be put in series to communicate larger amounts of meaning or data. If an RCOM tone is slow, either the transmission is low-quality or it has changed to an HCOMM code (see below).
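
As a rough illustration of the Huffman idea, here is a minimal Python sketch. The message vocabulary and its frequencies are made up for the example and are not part of any existing standard; only the hi/low tone idea comes from the scheme above.

```python
import heapq

# Hypothetical RCOM message vocabulary with assumed relative frequencies;
# more frequent messages should get shorter HI/LOW tone sequences.
MESSAGE_FREQ = {"ACK": 40, "POSITION": 25, "BATTERY_LOW": 15, "TASK_DONE": 12, "ERROR": 8}

TONES = {"0": "LOW", "1": "HI"}  # binary 0 = low tone, binary 1 = hi tone

def huffman_code(freq):
    """Build a prefix-free bit code; more frequent symbols get shorter codes."""
    # Each heap entry: [total weight, tie-breaker, {symbol: bits so far}].
    heap = [[weight, i, {sym: ""}] for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + bits for s, bits in lo[2].items()}
        merged.update({s: "1" + bits for s, bits in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

def to_tones(message, code):
    """Translate one message into the HI/LOW tone sequence a robot would emit."""
    return [TONES[bit] for bit in code[message]]

code = huffman_code(MESSAGE_FREQ)
print(code)                           # e.g. {'ACK': '0', 'POSITION': '10', ...}
print(to_tones("BATTERY_LOW", code))  # e.g. ['HI', 'HI', 'LOW']
```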

There may also be a light indicator bar showing the level of non-matching input: how far the input received fell short of the input expected when a human is speaking. This lets humans attune themselves to the robot and know how well they were understood, since robots don't have facial expressions to communicate this as we do.
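
A minimal sketch of how such a bar might be driven, assuming a hypothetical speech front-end that yields lists of recognized tokens; the eight-segment bar and the simple token-matching rule are illustrative assumptions.

```python
def mismatch_bar(expected_tokens, heard_tokens, segments=8):
    """Map the fraction of expected input that was not matched onto a light bar.

    Both token lists are whatever the (hypothetical) speech front-end produces;
    the return value is how many of `segments` lamps to light.
    """
    if not expected_tokens:
        return 0
    missed = sum(1 for tok in expected_tokens if tok not in heard_tokens)
    return round(missed / len(expected_tokens) * segments)

# Example: the robot expected a 4-word command but only recognized 3 of the words.
print(mismatch_bar(["bring", "the", "red", "mug"], ["bring", "the", "mug"]))  # -> 2
```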

In R2, there is a single red-blue circular light code. When blue, it is in acquisition mode; when red, it is in ready-for-output mode. A transition from red to blue may mean that it is not sure of your input; from blue to red, that it is ready to speak or act. When the light is off, it is in question mode or awaiting commands. The size of the red light may indicate the number of tasks waiting to be completed: the bigger the task list, the more difficult the constraint system within which to accomplish the tasks in an optimal order. The bigger the blue light, the more the unit is waiting on a piece of information necessary for performing the task list.
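
A sketch of those indicator states as a small Python state chooser; the state names, the function, and the rule for picking lamp size are assumptions drawn from the paragraph above, not an existing API.

```python
from enum import Enum

class LightState(Enum):
    """Indicator states paraphrased from the description above."""
    ACQUISITION = "blue"        # listening / gathering input
    READY = "red"               # ready to speak or act
    AWAITING_COMMAND = "off"    # question mode / idle, waiting for commands

def indicator(task_queue_len, missing_info_count, listening):
    """Pick a light state and a rough lamp size from the unit's workload.

    Size grows with pending tasks (red) or with pieces of information
    still needed (blue), as sketched in the text.
    """
    if listening or missing_info_count:
        return LightState.ACQUISITION, missing_info_count
    if task_queue_len:
        return LightState.READY, task_queue_len
    return LightState.AWAITING_COMMAND, 0

print(indicator(task_queue_len=3, missing_info_count=0, listening=False))
# red lamp, sized by the 3 queued tasks
```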

DRAFT: RCOM should use silence only to designate an end to the present transmission. Hi and Low tones can be sustained to indicate a run of identical tones, which means that timing must be standardized across robots. Nota bene: this scheme tends toward longer messages, but a properly coded (and periodically updated) Huffman set of tones would allow robots to keep messages short. A sentinel ACK tone would have to be long (~24 bits). Update: no. Silence could be used in the ACK tone, bracketing an ACK standard used by all robots: HI-LO when acknowledging a human, and LO-HI when acknowledging another robot (RCOM).
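
A sketch of the silence-bracketed ACK framing under the draft rule above; the symbol spellings (HI, LO, SIL) are placeholders.

```python
# Tone symbols; SILENCE only ever marks the end of a transmission or
# brackets an ACK, per the draft rule above.
HI, LO, SILENCE = "HI", "LO", "SIL"

def ack_frame(to_robot):
    """Build the silence-bracketed ACK described in the draft:
    HI-LO when acknowledging a human, LO-HI when acknowledging a robot."""
    body = [LO, HI] if to_robot else [HI, LO]
    return [SILENCE, *body, SILENCE]

def end_of_transmission(frame):
    """A transmission is over once the trailing symbol is silence."""
    return bool(frame) and frame[-1] == SILENCE

print(ack_frame(to_robot=True))    # ['SIL', 'LO', 'HI', 'SIL']
print(ack_frame(to_robot=False))   # ['SIL', 'HI', 'LO', 'SIL']
```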

Secondary audio codes are as follows:

  • an alternating series of rising and falling tones at the end of a communication (a question) or on its own (meaning "unknown", or frustration).
  • a single-tone dit dit dit (binary 1 or 0) for minor "tsk" alerts (your mug is misplaced). If low, action is suggested; if high, it is a simple courtesy alert.
  • a dit dit dit (pause) dit dit dit for alerts that need your attention, coming from a higher-priority command. The faster the series, the higher the alert level. Different harmonics can indicate which subsystem is affected: logic, mechanical, radio.
  • a dash dash dash for "no can do".
  • slow rise then fall: "oh oh"
  • fall then rise: "do you think so?"
  • fast rise then fall: "attention! Hey, you're getting my sensors dirty!"
  • rise AND fall: fear of destruction (when the droid is amidst too many unknowns: teach it)
Roving robots may even emit sounds to communicate items they have identified, so that a human might correct them during transit and update their neural net.
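
The secondary codes above could live in a predetermined lookup table; the following sketch uses an assumed plain-text notation for the patterns and is purely illustrative.

```python
# Secondary audio codes from the list above, in an assumed notation:
# "." = dit, "-" = dash, "_" = pause; the rise/fall entries are spelled out.
SECONDARY_CODES = {
    "...":              "courtesy alert (high) / please act (low)",
    "..._...":          "higher-priority alert; faster series = higher level",
    "---":              "no can do",
    "rise-fall (slow)": "oh oh",
    "fall-rise":        "do you think so?",
    "rise-fall (fast)": "attention! sensors getting dirty",
    "rise+fall":        "fear of destruction (too many unknowns: teach)",
}

def meaning(pattern):
    """Look a heard pattern up in the table; unknown patterns stay unknown."""
    return SECONDARY_CODES.get(pattern, "unknown pattern")

print(meaning("---"))    # no can do
print(meaning("..-.-"))  # unknown pattern
```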

They have a low-power mode which keeps only enough power for listening for voice activation.

Rules for Robots:

  1. human life preservation
  2. follow orders, except when it conflicts with #1.
  3. self-preservation, except when it conflicts with #1 or #2.
A preliminary attempt at State Laws for Robot Manufacturers and owners/users:
  1. Each robot must have a unique name and either respond with it when asked or have it written clearly upon it.
  2. Robots should not try to emulate humans to the extent that they could pass as such. Further, they should not attempt to denote a gender.
There are three other tones for encoding map or location data (so as to be parsable by humans), giving X, Y, and elevation. Encode these with a fundamental frequency for X plus a harmonic for the Y axis. Elevation could be a third harmonic or some other carrier waveform (sine, square, triangle, sawtooth, spike). The Y axis will necessarily have less amplitude than the X.
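
A minimal synthesis sketch (using NumPy), under the assumption that X drives the fundamental frequency while Y and elevation drive the amplitudes of the second and third harmonics; every scaling constant here is arbitrary.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz

def location_tone(x, y, z, duration=0.5):
    """Synthesize a location tone: X sets the fundamental frequency, Y the
    amplitude of the 2nd harmonic (kept quieter than the fundamental), and
    elevation Z the amplitude of the 3rd harmonic. Scalings are illustrative."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    f0 = 200 + 2 * x                 # map X (e.g. metres) onto 200 Hz and up
    y_amp = min(0.8, y / 100)        # Y harmonic stays below the X amplitude
    z_amp = min(0.5, z / 100)
    signal = (1.0 * np.sin(2 * np.pi * f0 * t)
              + y_amp * np.sin(2 * np.pi * 2 * f0 * t)
              + z_amp * np.sin(2 * np.pi * 3 * f0 * t))
    return signal / np.max(np.abs(signal))   # normalize for playback

tone = location_tone(x=40, y=25, z=5)
print(tone.shape)   # (8000,) samples, half a second of audio
```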

Ultimately, the human comm codes could use sophisticated series of ups and downs, with order or length being the criteria for looking up meaning in a predetermined table of meanings. For example, the first HCOMM code could indicate the primary reaction, and the series that follows could refine it.
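
One way such a table could be layered, sketched with entirely made-up refinements: the first rise/fall picks the primary reaction, and the remainder of the series selects a refinement.

```python
# Assumed, illustrative two-level HCOMM table.
HCOMM_TABLE = {
    "rise": {"primary": "affirm",
             ("rise",): "strong agreement",
             ("fall",): "agree, with reservations"},
    "fall": {"primary": "deny",
             ("fall",): "strong refusal",
             ("rise",): "deny, but open to discussion"},
}

def decode_hcomm(series):
    """Decode a series of 'rise'/'fall' tones against the table above."""
    head, *rest = series
    entry = HCOMM_TABLE[head]
    return entry.get(tuple(rest), entry["primary"])

print(decode_hcomm(["rise"]))          # affirm
print(decode_hcomm(["fall", "rise"]))  # deny, but open to discussion
```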

A question is indicated by a regular series of rise and falls, mirroring the Force itself as "Unknown".

Two to three alphabetic characters designate the origin (vendor) or device "type" (drone, cyborg, etc.) of a robot, giving well over 400 possibilities (26² = 676 with two letters alone). Two to six further human-readable symbols give the unique name of each device. The two sections should be separated by a hyphen (which should therefore not be usable as a name symbol itself).
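
A sketch of that naming convention as a validation check; the exact character classes (letters for the vendor code, letters and digits for the unit name) are assumptions.

```python
import re

# Assumed format: 2-3 letter vendor/type code, a hyphen separator,
# then 2-6 human-readable symbols (here: letters and digits, no hyphen).
NAME_RE = re.compile(r"[A-Z]{2,3}-[A-Z0-9]{2,6}")

def valid_robot_name(name):
    """Check a robot name like 'AC-R2D2' against the sketched convention."""
    return bool(NAME_RE.fullmatch(name.upper()))

print(valid_robot_name("AC-R2D2"))    # True
print(valid_robot_name("ACME-R2D2"))  # False: vendor code too long
```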

There are five dimensions to encode sound for the robot:

  1. pitch
  2. amplitude
  3. duration
  4. relation to other sounds (alternating rise and fall, for example)
  5. relation to silence (alternating on and off to signal a problem)
Between these five dimensions there is a very wide range of meanings that can be assigned from/to your robot.
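
To make the dimensions concrete, here is a small sketch of a tone record and a rough count of the code space under an assumed coarse quantization of each dimension.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tone:
    """One unit of robot sound, described along the five dimensions above."""
    pitch_hz: float
    amplitude: float                 # 0.0 - 1.0
    duration_s: float
    relation: Optional[str] = None   # e.g. "rising", "falling", "alternating"
    gap_before_s: float = 0.0        # relation to silence

# Even a coarse quantization of each dimension yields a large code space:
pitches, amplitudes, durations, relations, gaps = 8, 3, 3, 4, 2
print(pitches * amplitudes * durations * relations * gaps)  # 576 distinct single tones
```
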
Imagine a small robot about 6" high roving a library: alerting patrons that the library is about to close, scouting for books that need to be reshelved, interacting with kids, answering basic questions or inferring them (upon hearing a child's voice, it could offer "the kids' section is downstairs"), giving directions to fire exits, and showing patrons where the OPACs, bathrooms, or reference desk are (patron: "Where is the library terminal?"; the robot, possibly consulting networked information like R2D2: "There is an available terminal upstairs to the right.").

A robot trained to recognize objects visually can do a lot more:

  • point out food and drink
  • record books waiting to be reshelved
  • remind patrons they owe fines (if facial recognition is available)
  • recognize where books are to be shelved
  • point out trash.
If such a rover had cameras or an RFID reader, it could scan a book placed upon it and route it back to its shelving area. If the library had color-coded books, it could spend free time looking for misshelved books.

If it could fly, it could scan the shelves for misshelved items (a search function).


Asimov's Three Laws of Robotics:
  1. You cannot harm a human being, or through inaction allow them to come to harm.
  2. Obey orders from humans, unless it conflicts with #1.
  3. Protect yourself, unless it conflicts with #1 or #2.
I, Robot.

Also, the 0th law:

  • A robot cannot harm humanity, or through inaction allow humanity to come to harm.
Consider:
  • Two humans trying to harm one another.

A roaming robot can protect itself through evasion, diversion, negotiation, or attack. For attack, a high-voltage, taser-like device can be used.

The robot should have a mechanism for righting itself or protecting itself from falls. A pressurized sack can deploy to protect the robot from all sides and then be pumped back into place for re-use.

An antenna can connect the robot to its point of origin for returns, command updates, or software upgrades.

Speaking robots can use extra abilities to replay events heard, drawn from the memories of the neural net; they don't have to be limited to vocal cords.


Range rovers: robots for helping in county parks, for example:
  • fetch: collect logs/sticks, water
  • report: show the log (tasks in queue), uncategorized items
  • music: (apropos to the tasks at hand, logs, ambient barometric pressure)
  • halt / clear report / clear tasks (with keyword)

A set of predicates forms a constraint system within which the robot maneuvers in the world. Pattern matching allows synthesis of new ideas for the self-formation of new predicates. These predicates remain tentative, with a trust value less than unity.
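
A minimal sketch of such tentative predicates, with assumed names and thresholds; the update steps are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Predicate:
    """A constraint the robot believes about the world, held tentatively."""
    statement: str
    trust: float = 0.5      # strictly below 1.0 until confirmed

    def reinforce(self, step=0.1):
        # Evidence in favour nudges trust upward but never reaches unity.
        self.trust = min(0.99, self.trust + step)

    def contradict(self, step=0.2):
        self.trust = max(0.0, self.trust - step)

# A newly synthesized predicate stays tentative until observations back it up.
p = Predicate("books with red spines belong on shelf 3", trust=0.4)
p.reinforce()
print(p)   # trust nudged from 0.4 to 0.5
```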