2019-07-03

I couldn't sleep last night because my brain was on fire. I think it was a confluence of a bunch of things: reading Out of Control, starting the sensor fusion class, resurrecting the Omen (my gaming/AI laptop), grabbing the Jetson Nano, and stockpiling some parts for building a robot. Where am I going with this? Let's look at the factors in play.

Out of Control is about complex adaptive systems; the key idea that was on my mind is to

  • build systems starting with simple components that work
  • add new layers on top of these
  • perfect each layer
  • each component should work on its own

To quote Kelly, "the chunking of control must be done incrementally from the bottom." Actually, the quote that really stuck with me was

it took evolution 3 billion years to get from single cells to insects, but only another half billion years from there to humans; "this indicates the nontrivial nature of insect level intelligence."

And indeed, I find swarms and flocks fascinating, a complex amalgam of independent parts manifesting emergent behaviour.

The sensor fusion class is just starting, so it's not necessarily that I've learned a ton from it yet. Importantly, what it's done is make me start reviewing my notes from the self-driving car course; I'm not aiming to build a self-driving car, at least not in the sense of the class, but I have a ton of notes on lots of relevant information, like control theory and sensor processing and whatnot. It's putting me in that state of mind, and kindling the fire to work on autonomous systems. There was a certain excitement while doing the AI class and the SDC class where I was fully engaged with what I was working on.

What's the Omen? It's a gaming laptop that I picked up fairly cheaply from Best Buy and spruced up a bit; it's got an Nvidia 1050 GPU, and I upgraded it with a 1TB SSD and 32GB of memory. I had to upgrade it from an older Ubuntu LTS to Bionic, and that involved upgrading the Nvidia JetPack [1] stack, which meant I was on their site looking for instructions. Of course, I was exposed to a lot of material on things I'm pretty interested in, so that sort of fired me up. I hadn't turned it on in a while (since finishing the SDC class in December), so it was nice taking a tour of old code and poking around the hard drive to explore what was on it.

The reason I resurrected it is that I got a Jetson Nano, and I didn't really have a Linux laptop - there's another underpowered (and spinny-drive-only) machine I have, but if I want to train any models, I need a strong processor and a GPU, which means pressing the Omen back into service. So the Nano is pretty interesting, and the kit I got from SparkFun came with a Raspberry Pi camera. That thing kept flopping around and being a pain, so I fashioned a stand for it out of cardboard from the box it came in, dental floss, and some staples. Which made me start thinking that building on a chassis could be done more easily if I were to do it more creatively.

Finally, I've been stockpiling parts to build robots. It all started when I was building the Board of Education (BoE) Bot with an Arduino. It was fun, and I think I have video somewhere where I let it loose in a room to just not hit things. I've now got a 4WD chassis, the Raspberry Pi camera, a fixed lidar, etc., and the parts are sitting in a box at home right now. The only things I'm really missing are a motor driver for a Raspberry Pi (and I'm debating hooking up an Arduino to the motors and building out a control system that acts as an I2C slave a Raspberry Pi can talk to) and a USB battery pack that will fit on the chassis. I've found both, and they should be here by Friday.

Where is this going?

Well it occurred to me last night that I could build a neat little robot that is a system of independent parts that work together as a whole. My first thought was using a bunch of independent processors (e.g. Feathers) for this, and while that works, it quickly adds a ton of overhead to the system because of all the extra wiring and power requirements and whatnot. Then I remembered ROS.

ROS basically works as a bunch of programs that communicate over a standard messaging bus (defaulting to a TCP server on the host). I could achieve the same thing as a bunch of independent processors by instead using independent processes on a single processor all communicating via message passing. This feels at least true to the spirit if not actually equivalent. I'd like to experiment with the bare minimums here, so probably not using any of the prebuilt sensor or SLAM or what-have-you modules. Messages, goals, processes.
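
Just to make the idea concrete for myself, here's a toy sketch of "independent processes talking over a bus" - not ROS, just a parent and a forked child passing a message through a POSIX message queue (the queue name and the message are placeholders):

/* toy sketch: one "sensor" process publishes a message, another process
 * receives it; build with -lrt on Linux */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUS_NAME "/robot_bus"   /* placeholder "topic" name */

int main(void) {
  struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
  mqd_t bus = mq_open(BUS_NAME, O_CREAT | O_RDWR, 0600, &attr);
  if (bus == (mqd_t)-1) { perror("mq_open"); return 1; }

  if (fork() == 0) {
    /* "sensor" process: publish one reading and exit */
    const char *msg = "range 0.42";
    mq_send(bus, msg, strlen(msg) + 1, 0);
    return 0;
  }

  /* "cognitive" process: block until a message shows up */
  char buf[64];
  if (mq_receive(bus, buf, sizeof(buf), NULL) >= 0)
    printf("got message: %s\n", buf);

  wait(NULL);
  mq_close(bus);
  mq_unlink(BUS_NAME);
  return 0;
}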

I have a Raspberry Pi Zero that I might use at first, just due to its small size and not having to work out a bunch of other stuff, but the goal is to end up with the Jetson so I can integrate some neural networks into this and build something that adapts via reinforcement learning (which is an area that I'm really interested in but sorely lacking any knowledge of). Either way, I'm going to start simple:

  • chassis
  • battery pack
  • camera
  • lidar or ultrasonic ranging sensor (I have a lot of these lying around)
  • IMU
  • controller + motor driver

I'd like to add a GPS at some point too. More on that later. In the interests of starting simple, I might even cut the camera, especially since I have next to no experience with CV (and none with anything that's not PyTorch). The goal is to approach "insect-level intelligence", which means simple behaviours, and the camera will probably mostly be used for obstacle detection.

I realised that, from a software perspective, I should organise the processes into a loose sort of grouping:

Autonomic processes (maybe poor naming) are processes that receive messages and send feedback; the prime example is the motor controller. Messages have a priority and a blocking status along with some request. For example, if this was a struct it might be something like:

#include <stdbool.h>
#include <stdint.h>

/* how urgent a request is; later values outrank earlier ones */
typedef enum {
  Curious,
  Routine,
  Urgent,
  Emergency
} Priority;

/* the behaviours the motor process knows how to carry out */
typedef enum {
  Stop,
  Forward,
  TurnLeft,
  TurnRight,
  Resume
} MotorBehaviour;

struct MotorRequest {
  uint64_t       from;       /* id of the requesting process */
  bool           block;      /* hold the motors until this process releases them */
  Priority       priority;
  MotorBehaviour behaviour;
};

Blocking means that no other request of lower or equal priority will be handled until the same from process sends block == false. I'm not sure if this is useful; I kind of want to experiment to see whether not having it results in the robot spazzing out.
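
If I do keep blocking, the check might be as simple as this (assuming the MotorRequest struct above; current and held are just whatever the motor process last accepted):

/* sketch of the blocking rule; accept_request() would be called whenever a
 * request is actually serviced */
static struct MotorRequest current;  /* the request currently being honoured */
static bool                held;     /* is someone holding a block? */

bool can_service(const struct MotorRequest *req) {
  if (!held)
    return true;
  if (req->from == current.from)            /* the holder can update or release its own block */
    return true;
  return req->priority > current.priority;  /* only strictly higher priority gets through */
}

void accept_request(const struct MotorRequest *req) {
  current = *req;
  held = req->block;
}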

Basically, this autonomic function will maintain a priority queue of requests, and will service them as needed.
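
Something like this is what I'm picturing - receive_request() and drive() are stand-ins for the message bus and the actual motor driver, and can_service()/accept_request() are from the sketch above:

#define QUEUE_MAX 16

static struct MotorRequest queue[QUEUE_MAX];
static int                 queued;

/* insert a request, keeping the highest priority at the front */
void enqueue(struct MotorRequest req) {
  if (queued == QUEUE_MAX)
    return;                             /* drop it; good enough for an experiment */
  int i = queued++;
  while (i > 0 && queue[i - 1].priority < req.priority) {
    queue[i] = queue[i - 1];
    i--;
  }
  queue[i] = req;
}

void motor_loop(void) {
  for (;;) {
    struct MotorRequest req;
    if (receive_request(&req))          /* stand-in: read from the message bus */
      enqueue(req);
    if (queued > 0 && can_service(&queue[0])) {
      accept_request(&queue[0]);
      drive(queue[0].behaviour);        /* stand-in: the actual motor driver */
      queued--;
      for (int i = 0; i < queued; i++)  /* shift the rest down */
        queue[i] = queue[i + 1];
    }
  }
}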

Actually, the queue probably isn't important. And yet, maybe it is. I'm thinking of the scenario where the robot is navigating (e.g. routine, nonblocking, drive forward) when it sees something that it decides is worth aiming the camera at to examine. Right now, the camera will be fixed, so that means the robot will need to stop and turn (e.g. urgent, blocking, turn right), stop when it's aimed in the right direction (e.g. urgent, blocking, stop), and then release control once it has gotten enough signal (e.g. urgent, nonblocking, release).
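
In terms of the struct above, that scenario might play out as a sequence like this (NAV and VISION are made-up process ids, send_motor_request() is a stand-in for publishing onto the bus, and I'm using Resume for the release step since that's what's in the enum):

/* hypothetical process ids */
enum { NAV = 1, VISION = 2 };

void look_at_the_interesting_thing(void) {
  /* routine, nonblocking: just driving along */
  send_motor_request((struct MotorRequest){ .from = NAV,    .block = false, .priority = Routine, .behaviour = Forward });
  /* urgent, blocking: turn toward the thing */
  send_motor_request((struct MotorRequest){ .from = VISION, .block = true,  .priority = Urgent,  .behaviour = TurnRight });
  /* urgent, blocking: hold still while the camera looks */
  send_motor_request((struct MotorRequest){ .from = VISION, .block = true,  .priority = Urgent,  .behaviour = Stop });
  /* urgent, nonblocking: done, release the motors */
  send_motor_request((struct MotorRequest){ .from = VISION, .block = false, .priority = Urgent,  .behaviour = Resume });
}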

On the other hand, an emergency priority probably means that all the previous activity should be halted.

Another thought is that the motor control shouldn't care about these things; a higher level function should be figuring out where it is (some proprioceptive function?) and the motor function should only handle immediate concerns.

The feedback part sends maybe health checks and status updates; maybe a description of the current behaviour and an alert flag? Actually, I think the right thing to do is to start simple and have the motor just respond to whatever it's told to do, not make decisions about priorities and blocking. It probably makes more sense instead to couple behaviour with speed.
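
So the feedback message might end up as simple as this (the fields are guesses based on the above):

struct MotorStatus {
  MotorBehaviour behaviour;   /* what the motors are doing right now */
  uint8_t        speed;       /* current speed setting */
  bool           alert;       /* something needs attention */
  uint64_t       uptime_ms;   /* crude health check / heartbeat */
};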

Sensory processes handle talking to a sensor and sending information out on a channel: a constant stream of distance readings from the lidar, orientation and acceleration values from the IMU, etc.
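
The sensory messages could be similarly dumb - something like this, with the fields guessed from the sensors I actually have:

struct RangeReading {
  uint64_t timestamp_ms;
  float    distance_m;        /* lidar or ultrasonic range */
};

struct ImuReading {
  uint64_t timestamp_ms;
  float    accel[3];          /* m/s^2 */
  float    orientation[3];    /* roll, pitch, yaw in radians */
};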

Cognitive processes take goals as input, read sensory channels to build a sense of the world, and talk to autonomic functions to effect changes. For example, a navigation process might take a goal of a point to end up at, and constantly monitor a GPS channel to figure out how to get there: does it need to turn, what about obstacles, etc. I'm not sure how mapping and localisation fit into this scheme yet, which is why I want to build this and experiment. There's probably different levels of cognition, too; navigation might be one part of a system that is also handling other tasks, and that higher level process would decide when the navigation requests should be active vs. when say a mapping process needs to turn the robot in a circle to build a view of the world.
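
A very rough sketch of what a navigation process might look like, where read_position() and read_heading() are stand-ins for reading sensory channels and send_motor_request() is the same stand-in as before:

#include <math.h>

struct Position { double x, y; };

void navigate(struct Position goal, uint64_t self_id) {
  for (;;) {
    struct Position here = read_position();           /* stand-in: GPS/odometry channel */
    double dx = goal.x - here.x, dy = goal.y - here.y;
    if (hypot(dx, dy) < 0.1)                          /* metres; close enough */
      break;
    /* steering error: bearing to the goal minus current heading, wrapped to [-pi, pi] */
    double err = atan2(dy, dx) - read_heading();      /* stand-in: IMU channel */
    while (err >  M_PI) err -= 2 * M_PI;
    while (err < -M_PI) err += 2 * M_PI;
    MotorBehaviour b = fabs(err) > 0.2 ? (err > 0 ? TurnLeft : TurnRight) : Forward;
    send_motor_request((struct MotorRequest){
      .from = self_id, .block = false, .priority = Routine, .behaviour = b });
  }
  send_motor_request((struct MotorRequest){
    .from = self_id, .block = false, .priority = Routine, .behaviour = Stop });
}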

I think it would be cool to have a Jupyter notebook running on the bot for interacting with it: if needed, taking direct control over autonomic processes to test things, but really to inject goals and observe the message bus.

I'm unreasonably excited by all of this.

As I started reading up on this, I learned that autonomic is the wrong word for what I wanted; I think somatic better describes what I'm talking about. To clarify my thinking a little bit more:

The key idea behind this system is to experiment with a hierarchy of independent processes that roughly mimic how the biological world tends to operate. That is, starting with a low layer of simple processes to interact with the corpus machinae, and building higher-level processes on top of that.

The hierarchy is roughly composed of the following layers; layer is probably a misnomer, since there are a few layers at the bottom of the stack. These foundational layers are the ones that directly interact with hardware.

  • The somatic layer is a foundational layer responsible for physical motor control; at first, this is the drive motors. Later on, this will include servos that control camera or sensor orientation and any manipulators. These processes take some requested behaviour as input, and output some feedback.
  • The sensory layer comprises those processes that take no input from the system, but instead output sensory data from whatever sensor they control.
  • The cognitive layer comprises those processes that fuse data from other layers. There are higher-level cognitive processes that fuse data from other cognitive processes, and those that fuse data directly from the somatic and sensory layers.

A rough sketch of the processes:

Level      Name               Inputs             Outputs                      Fuses
somatic    left/front motor   behaviour & speed  motor status
somatic    left/rear motor    behaviour & speed  motor status
somatic    right/front motor  behaviour & speed  motor status
somatic    right/rear motor   behaviour & speed  motor status
sensory    lidar                                 range
sensory    IMU                                   acceleration & orientation
cognitive  movement           behaviour & speed  motion status                motors, IMU, lidar
cognitive  navigation         location goal      progress                     proprioception, goals
cognitive  mapping                               beliefs about the world      lidar
cognitive  proprioception     where am I?        position & movement          movement, mapping

I'm coming to understand that mapping is really two concerns:

  1. What is my belief about what the world looks like?
  2. Tasking sensors to gather more information to reduce uncertainty.
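
Just thinking out loud (not a design), concern 1 could be as dumb as a grid of occupancy beliefs, and concern 2 could be "aim a sensor at the cell I'm least sure about":

#include <math.h>

#define GRID_W 64
#define GRID_H 64

/* concern 1: a belief about the world, one occupancy probability per cell
 * (0 = free, 1 = occupied, 0.5 = no idea; a real version would start at 0.5) */
static float belief[GRID_H][GRID_W];

/* concern 2: which cell am I least sure about? that's where to point a sensor */
void most_uncertain_cell(int *row, int *col) {
  float best = -1.0f;
  for (int r = 0; r < GRID_H; r++)
    for (int c = 0; c < GRID_W; c++) {
      float u = 0.5f - fabsf(belief[r][c] - 0.5f);    /* uncertainty peaks at 0.5 */
      if (u > best) { best = u; *row = r; *col = c; }
    }
}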

There's also the idea of spatial memory but I haven't even gotten to memory yet.

[1] JetPack is Nvidia's deep learning / AI / ML toolkit for use with their GPUs, particularly the Jetsons.
