MIDI-controlled robots

When I was a young kid we had the most intriguing device in our local department store. It was a glass box with monkeys in it. Not real monkeys, but to me they seemed alive. Each monkey had an instrument: a flute, a banjo, drums, and so on, and they sat in a nice scene with palm trees. Every time you put a dime in that box the monkeys would play a song. It was always the same song, but I never got bored with it.

Now I am not so young anymore, and I decided to recreate my childhood fascination using Mindstorms. I wanted to create a band of small robots, each with an instrument. They should be able to play together and give the impression that they actually make music, just like the monkeys from my youth.
But I also wanted to improve on the monkeys. They always played the same song and, as I found out when I grew up, they didn't even play in sync with the music. The robots should be able to do both: play different songs, and actually play in time with the music.

I created a prototype of the software and a rudimentary robot to prove this could work. This prototype drew the attention of Matthias Paul Scholz, who then teamed up with me to further develop the concept.

I am proud to now show you our music-playing robots. I hope they will intrigue you as those monkeys intrigued me. I will explain how they work. Here are a few examples of the robots playing music.

(Note: for some examples I had to overdub the music because of interference from the ultrasonic sensor. That is why you don't always hear the motors running.)

Like it? Curious how it works? This is how. The music you hear is generated by a synthesizer that interprets a MIDI sequence. MIDI is the standard for electronic music. It is a set of instructions, called messages, that tell electronic instruments what sound to create. One MIDI message might say "make sounds like a piano"; the next might say "play note x", and so on. MIDI messages can be generated by a keyboard. They can also be stored in a file. You can find thousands of MIDI files on the Internet.
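In the Java sound API, which the BoR software builds on (see below), those two example messages could be constructed like this (the channel and note values are just illustrations):

import javax.sound.midi.ShortMessage;

public class MidiMessageExample {
	public static void main(String[] args) throws Exception {
		// "make sounds like a piano": a program change on channel 0 (program 0 = acoustic grand piano)
		ShortMessage programChange = new ShortMessage();
		programChange.setMessage(ShortMessage.PROGRAM_CHANGE, 0, 0, 0);

		// "play note x": a note-on for middle C (60) at velocity 93
		ShortMessage noteOn = new ShortMessage();
		noteOn.setMessage(ShortMessage.NOTE_ON, 0, 60, 93);
	}
}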
Most MIDI devices are electronic instruments, like synthesizers, drum computers or samplers. But that is not a rule: anything that deals with MIDI messages is a MIDI device. The software that drives these robots is a MIDI device too. This software is called BoR, short for Band of Robots.

Let us look at the internals of this software. The ultimate goal of the software is to translate a MIDI message into an action of a robot. The sum of all these actions should give the impression of a robot playing music.
The MIDI messages are generated from a MIDI file by a device called a sequencer. The BoR software uses the MIDI sequencer that ships with the Java sound API. The sequencer is a transmitter of MIDI messages and implements the Transmitter interface (from the Java sound API). One can connect other MIDI devices, like synthesizers, to transmitters like the sequencer to process the MIDI messages. In the Java sound API these devices implement the Receiver interface. One can connect any number of transmitters to any number of receivers. In the case of the BoR software there are two receivers. One is a software synthesizer that is also part of the Java sound API; this device is responsible for generating sound. The other is the BoRConductor; its job is to filter and distribute MIDI messages to the robots.
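As a minimal sketch of this wiring (assuming a MIDI file named song.mid; ConductorSketch is the conductor-style receiver outlined below):

import java.io.File;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Receiver;
import javax.sound.midi.Sequencer;
import javax.sound.midi.Synthesizer;

public class BoRWiringSketch {
	public static void main(String[] args) throws Exception {
		// Open the default sequencer without its implicit connection to the default synthesizer
		Sequencer sequencer = MidiSystem.getSequencer(false);
		sequencer.open();
		sequencer.setSequence(MidiSystem.getSequence(new File("song.mid")));

		// Receiver 1: the software synthesizer generates the audible music
		Synthesizer synth = MidiSystem.getSynthesizer();
		synth.open();
		sequencer.getTransmitter().setReceiver(synth.getReceiver());

		// Receiver 2: the conductor filters and distributes messages to the robots
		Receiver conductor = new ConductorSketch(); // sketched below
		sequencer.getTransmitter().setReceiver(conductor);

		sequencer.start();
	}
}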

The first task of the BoRConductor is to filter the MIDI messages it receives. Not all messages should be translated into robot actions.
Its second task is to distribute messages over the different robots. You want the piano player to receive only the piano messages, not the percussion messages. Distribution is based on MIDI channels. MIDI messages are grouped in channels, and a channel normally corresponds to an instrument. The BoR software lets you assign a robot to a channel; the robot will then only receive the messages coming from that channel. This is how each robot is able to play a different part of the song. There are two ways to assign a robot to a channel: as a musician or as a singer. This allows a robot to play the music on its instrument and to sing a melody at the same time without mixing the two up. The assignments are specific to each song, as the structure of MIDI files varies and the instruments used differ from song to song.
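A conductor along these lines might look like this. This is a sketch, not the actual BoRConductor code: Musician is the robots' remote interface (sketched further below), and the channel-to-robot map is a hypothetical stand-in for the song-specific assignments:

import java.util.HashMap;
import java.util.Map;
import javax.sound.midi.MidiMessage;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

public class ConductorSketch implements Receiver {
	// Hypothetical song-specific assignment of MIDI channels to robots
	private final Map<Integer, Musician> channelMap = new HashMap<Integer, Musician>();

	public void assign(int channel, Musician robot) {
		channelMap.put(channel, robot);
	}

	@Override
	public void send(MidiMessage message, long timeStamp) {
		if (!(message instanceof ShortMessage)) return;      // filter: ignore sysex and meta messages
		ShortMessage sm = (ShortMessage) message;
		if (sm.getCommand() != ShortMessage.NOTE_ON) return; // filter: only note events become actions
		if (sm.getData2() == 0) return;                      // a note-on with velocity 0 is really a note-off
		Musician robot = channelMap.get(sm.getChannel());    // distribute: one robot per channel
		if (robot == null) return;
		try {
			robot.noteOn(sm.getData1(), sm.getData2());      // tone, intensity
		} catch (java.rmi.RemoteException e) {
			// drop the note if the robot is unreachable
		}
	}

	@Override
	public void close() { }
}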

So the BoRConductor takes care of filtering and distribution. But it does not transport the messages to the robots. For this it hands the messages over to objects that represent the robots in the band. These objects are called BoRBricks. They accept the MIDI messages from the BoRConductor, store them in a buffer and send them to the brick as soon as the brick is ready to accept the next message. There is also some filtering done here. If the messages are coming in faster than the robot can process them, all messages except the most recent one are discarded. This ensures that the robots never lag behind the music.
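The discard-all-but-the-latest behaviour could be implemented with a one-slot buffer along these lines (a sketch, not the actual BoRBrick code):

import java.util.concurrent.atomic.AtomicReference;

public class LatestMessageBuffer<T> {
	// One slot: a newer message simply overwrites a pending one
	private final AtomicReference<T> slot = new AtomicReference<T>();

	// Called by the conductor thread: overwrite whatever is still pending
	public void put(T message) {
		slot.set(message);
	}

	// Called by the sending thread once the brick is ready: take and clear
	public T take() {
		return slot.getAndSet(null);
	}
}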
The BoRBricks translate the MIDI messages into remote object calls to the robot. This Java technique, called RMI (Remote Method Invocation), allows a method of an object living on a remote machine to be executed. In this case the remote machine is the robot. RMI uses TCP to communicate with the remote machine, which makes it easy to utilize an existing (wireless) network for transport.
On the receiving side are the robots. Each robot is an RMI server capable of executing remotely invoked methods. These methods are defined by the Musician interface. Its most important methods are noteOn and voiceOn. These methods have two parameters, tone and intensity: tone gives information about the pitch, and intensity about the volume of the note. So what started as a MIDI message on a PC is now a method invocation on the EV3.
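A sketch of what this remote interface might look like (the names follow the description above; the actual BoR interface may differ):

import java.rmi.Remote;
import java.rmi.RemoteException;

// Remote interface implemented by each robot
public interface Musician extends Remote {
	void noteOn(int tone, int intensity) throws RemoteException;  // play a note on the instrument
	void voiceOn(int tone, int intensity) throws RemoteException; // sing a note
}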
How to implement these methods is up to the robot that implements the Musician interface. It is different for each robot, but in general the call is translated into some kind of motor action that makes the robot move. The guitar player, for example, positions its left hand according to the pitch and moves its right hand at a speed related to the volume.

To control the BoR software we have created a shell interface. The interface accepts commands to select a song, to assign robots to MIDI channels, and to start and stop a song. One can also run playlists of songs.

With the BoR software Matthias and I created a band called the S3NSORH3ADS. It currently has four musicians: a singer, a piano player, a guitar player and a drummer. We were given the opportunity to show the band at a robot event in Prague. The expressions on the faces of the audience told me that I had succeeded in my initial intention: to fascinate.

Motor synchronization problems – Part 2

As explained in Part 1 of this post, synchronization of motor operations in leJOS depends primarily on starting those operations at the same time. Once started, the underlying regulation mechanism will ensure that the motors remain in sync. Unfortunately the existing motor API has no mechanism to express this relationship. So we need a way to indicate to the system that move operations (this also includes changing the speed of an already running motor) for two or more motors should happen at the same time. We then need to work out how to actually implement this. In designing this new API there were a number of other considerations, or requirements:

  • It should be easy to convert existing code to take advantage of the new API.
  • The new API should be able to function with local or remote motors (though it is probably acceptable to require that all of the motors that are synchronized are attached to the same brick).
  • It should be possible to synchronize the return of motor move information (like getTachoCount) as well as actual motor move/rotate requests.
  • Some parts of the API are likely to be called frequently, so if possible these methods should avoid generating garbage (frequent GC invocations may cause problems for other real time tasks).

After some discussion an experimental interface, based on extending the current RegulatedMotor interface, was chosen. This allows the use of the existing mechanism of separate calls to set things like target velocity, acceleration, etc., while still meeting the above requirements. The basic approach is to use a "lead" or master motor to hold the list of motors to synchronize with, and to bracket calls to the existing API with synchronization markers. So, for example, suppose we have code that rotates two motors by 360 degrees:

RegulatedMotor mA = new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("A"));
RegulatedMotor mD = new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("D"));

mA.rotate(360, true);
mD.rotate(360, true);
mA.waitComplete();
mD.waitComplete();

The above code says nothing about synchronization. Using the new API we would have:

RegulatedMotor mA = new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("A"));
RegulatedMotor mD = new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("D"));
mA.synchronizeWith(new RegulatedMotor[] {mD});

mA.startSynchronization();
mA.rotate(360, true);
mD.rotate(360, true);
mA.endSynchronization();
mA.waitComplete();
mD.waitComplete();

The above indicates that we wish to synchronize some operations between motors A and D, and the start/end calls bracket the operations that should be synchronized. Operations outside of these brackets function just as before.

The underlying operation is relatively simple. When the call to startSynchronization is made, the current state of all motors that are to be synchronized is obtained. This in effect freezes the results that calls to getTachoCount etc. will return. This frozen state can then be modified by making calls to setSpeed, rotateTo and so on. However, none of these operations is actually executed until the final endSynchronization call is made. At this point all of the accumulated state is passed into the kernel module, allowing all of the motor operations to start at the same time.

So what happens if there are multiple rotate calls for the same motor within a single set of start/end brackets? To understand this we need to be aware of a further rule: only non-blocking calls can be used between start and end. With this rule things become easier to understand. If multiple calls are made to rotate then in effect only the last call has any effect; this is pretty much what would happen if you performed the same thing without synchronization and followed one non-blocking rotate with another. There are lots of possible special cases, but if you keep things simple, things should work as you would expect.
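The same bracketing works for the other non-blocking operations. For example, here is a sketch of changing the speed of both (already running) motors at the same instant, with mA and mD as declared above:

mA.startSynchronization();
mA.setSpeed(360); // degrees per second
mD.setSpeed(180);
mA.endSynchronization(); // both speed changes take effect together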

The end result of this can be seen in the following image:

[Image: composite of final robot positions, including the new synchronized result in green]

This is an updated version of the image from part 1, showing the position of the robot after 32 move operations (16 in the case of leJOS 0.8.1). Red is the starting position, blue is leJOS on the NXT, yellow is leJOS EV3 0.8.1 and pink is the Lego firmware. In this image green is the result from leJOS EV3 using the new synchronization primitives. As can be seen, this is a considerable improvement.

With the addition of the synchronization mechanism, movements on the EV3 can be as good as, if not better than, those obtained on the NXT. However we are not finished yet. Take another look at the image: do you see how all of the results show movement backwards from the starting position? What is causing this? I hope to investigate this further in the next post.

Motor synchronization problems – Part 1

This series of posts investigates a number of problems identified with the motor control system in leJOS 0.8.1. I hope it provides an insight into how some of these systems work and also into the process of identifying a problem and producing a possible solution.

A few weeks ago Roger, one of the leJOS developers, mentioned that he was seeing some strange results when running tests with the leJOS DifferentialPilot class. This class provides pretty much all you need to control a simple robot that uses two motors configured for tank, or differential, steering. Something like the one shown below.

[Image: the simple test robot]

The problem that Roger was seeing was that the robot didn't seem to be able to drive a simple series of straight lines and rotations very well. The robot finished out of place. The error wasn't huge, but it was significantly worse than that of a similar robot built using the NXT. Roger investigated further and discovered that during a move operation the two motors did not seem to be staying in sync very well. After some time investigating this, Roger provided several key observations:

  • There is a delay in bringing the second (right) motor up to speed. When travel is called, the robot initially swerves to the right, travels straight, then turns after the left motor stops and the right completes its rotation.
  • Call pilot.travel(15) and output the tacho count of each motor at 100 ms intervals. The maximum difference (left count – right count) usually occurs after about 300 ms, and usually remains within 1 degree of the max until deceleration begins at the end of the travel.
  • In the early experiments, the recorded max difference is much larger (in the range of 20 to 40 degrees) in the first call to pilot.travel(15) than in subsequent calls. If travel(0) or rotate(90) is called at the start of the experiment, this effect largely disappears.

To better understand what was going on I built the above robot. This robot has three significant features:

  • It has a relatively narrow track width; this means that any small difference in motor position will be amplified and errors will be more obvious.
  • It is designed so that the same base of motors+caster can be used with either the EV3 or the NXT enabling direct comparisons to be made.
  • The robot is designed to use a combination of a jig and a laser to ensure that the initial start position can be reproduced easily.

The test was very simple. Carefully align the robot start position. Perform a series of 16 travel(25); travel(-25); operations (so 32 moves in total). Observe the final position of the robot. The following images show the results for the NXT and EV3:

[Images: test results for the NXT and EV3]

The first image shows the NXT version in the starting position (it also shows the alignment jig). The second image shows the position after the 32 move operations; as can be seen, there is some loss of position over the moves. The third image shows the results when using the EV3 and version 0.8.1 of leJOS: the robot actually fell off the test track (not good). The final image shows the positions combined to make it easier to compare them. In this case the red rectangle is the start position, blue is the NXT after 32 moves and yellow is the EV3 after 16 moves (because it was not possible to record the final position accurately). As we can see, there is a considerable error in the case of the EV3, confirming Roger's observations. The following image is a bigger version of the final one of the set above; I have also included the results for the EV3 running a similar test program using the standard Lego software and firmware (in pink).

[Image: combined final positions, with the Lego firmware result in pink]


Looking at these results in more detail we can see that the NXT is rotated and shifted slightly backwards, the EV3 has a smaller rotation but a large shift to the right, and the Lego software shows a similar rotation to the NXT but a larger backwards shift.

During this investigation there was some discussion among the leJOS developers and further testing, and a theory began to develop. What seems to be happening is that there is sometimes a delay between the two motors starting. This delay results in an offset between the motor positions for the remainder of the move, producing the swerves described by Roger and the large shift to the right. This raises a number of questions:

  • Why is this problem not seen on the NXT?
  • Why was the error much larger on the first call to travel?
  • What can we do to fix it?

To explain the first problem we need to understand what leJOS does to synchronize motor movements, and the different Java implementations on the two platforms. The simple answer is that leJOS performs no explicit motor synchronization. Instead it relies on the motor regulator providing accurate positional control of the motor rotation during a move. This means that so long as the motors start at the same time they will remain in sync for the duration of the move. On the NXT, starting the motors at the same time is easy. The virtual machine used to execute the byte code is a reasonably good real time system that uses a strict priority based scheduling mechanism, and the only code executing on the NXT is the VM. I won't go into all of the gory details here, but what this means is that it is simple to ensure that both motors start to rotate at pretty much the same time.

On the EV3 things are very different. To begin with, the EV3 is running a full Linux system, which means much, much more is going on (the leJOS menu system, networking code and a bunch of other system threads). Secondly, the thread scheduling is different, which means that it is much more likely that the thread running the Pilot code will be interrupted. Finally, the motor control code on the EV3 is very different from that on the NXT. To get the level of control we need, we make use of a kernel module for the hard real time operations and hardware access, and our Java code has to make system calls to talk to this module. This again means there is a much greater chance that a thread switch can happen, introducing a delay between the two motors starting.

OK, so it looks like delays between the two calls to start the motors rotating may be due to differences between the two systems. But why is it much worse for the first call? Again we need to understand the differences between the two systems. On the NXT the leJOS VM executes Java byte code directly using an interpreter. This is slower than executing native code but allows the use of very compact code, which on the memory-limited NXT is important. On the EV3 we use a standard Oracle JVM. One of the many features of this VM is that it contains a JIT (Just In Time) compiler. What this means is that although the system starts off interpreting byte code (just like on the NXT), the VM can choose to convert that byte code into native code and run that instead. In general this is a good thing, as it means our code runs much faster. The downside is that when the system decides to convert a method from byte code to native code this can cause a delay. It is this delay that creates the large initial error.

So what can we do to fix this? There are probably a number of things we could do to "work around" the problem: adjust the thread priorities, give the system a chance to switch to other threads just before we run the critical code, or force the JIT to compile our code before we need to use it in a critical way (this is what adding the call to travel(0) did, which Roger discovered helped reduce the size of the initial error). But really none of these is the right solution. The fundamental problem we have here is that we have two operations (starting the motors) that we need to happen at the same time, but we have no way of telling the system that this is what we require. We need to modify the motor control system so that we can make it aware of this synchronization requirement, and then work out a way to ensure that it happens.
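For illustration, the travel(0) warm-up workaround might look like this (a sketch; the wheel diameter and track width values here are assumptions, not the dimensions of the test robot):

import lejos.hardware.BrickFinder;
import lejos.hardware.motor.EV3LargeRegulatedMotor;
import lejos.robotics.navigation.DifferentialPilot;

public class WarmUpTest {
	public static void main(String[] args) {
		DifferentialPilot pilot = new DifferentialPilot(4.32, 12.0, // wheel diameter, track width (assumed)
				new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("B")),
				new EV3LargeRegulatedMotor(BrickFinder.getDefault().getPort("C")));

		pilot.travel(0); // warm-up move: gives the JIT a chance to compile the motion code first

		for (int i = 0; i < 16; i++) { // the 32-move test described earlier
			pilot.travel(25);
			pilot.travel(-25);
		}
	}
}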

Part two of this series will look at an experimental solution to this problem and how well it works.


Using the PCF8574 IO expander with the EV3


Mindsensors sell a breadboard sensor-making kit with a PCF8574 for the NXT. I bought one a long time ago, and I thought I would try it on the EV3 after seeing some interest in the chip on the leJOS forums.

I am using the sensor in read mode to read a set of buttons. You can also use it in write mode to switch up to 8 devices. For example, you could use it to drive a seven-segment LED, or to control motors.

I had to modify the ev3classes project to allow read-only I2C transactions. All the other sensors we currently support use a register value that is written to the device; the PCF8574 just reads or writes a single byte of data. This means the code below will only work on the Git master version of leJOS, or the 0.9.0 release when it comes out.

The code for my read mode sample provider is:

import lejos.hardware.port.Port;
import lejos.hardware.sensor.I2CSensor;
import lejos.hardware.sensor.SensorMode;
import lejos.robotics.SampleProvider;

public class PCF8574Sensor extends I2CSensor  {

	public PCF8574Sensor(Port port) {
		super(port, 0x40);
		setModes(new SensorMode[] { new ReadMode()});
	}
	
	private class ReadMode implements SensorMode {
	
	    @Override
	    public int sampleSize() {
	    	return 1;
	    }
	
	    @Override
	    public void fetchSample(float[] sample, int offset) {
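	    	// The PCF8574 has no register model: a transaction simply reads one raw byte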
	    	byte[] buf = new byte[1];
	    	port.i2cTransaction(0x40, null, 0, 0, buf, 0, 1);
			sample[offset] = buf[0] & 0xFF;
	    }
	
	    @Override
	    public String getName() {
	    	return "Read";
	    }
	}
	
	public SampleProvider getReadMode() {
		return getMode(0);
	}
}
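
For write mode, a helper along these lines could be added to the class (an untested sketch, assuming the same raw transaction call used in fetchSample):

	public void writePins(byte value) {
		// Write one raw byte to drive the 8 output pins
		byte[] buf = new byte[] { value };
		port.i2cTransaction(0x40, buf, 0, 1, null, 0, 0);
	}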

And my test program:

import lejos.hardware.Button;
import lejos.hardware.lcd.LCD;
import lejos.hardware.port.SensorPort;
import lejos.robotics.SampleProvider;
import lejos.utility.Delay;

public class PCF8574Test {
	public static void main(String[] args) {
		PCF8574Sensor sensor = new PCF8574Sensor(SensorPort.S1);
		SampleProvider sp = sensor.getReadMode();
		float[] sample = new float[sp.sampleSize()];
		LCD.drawString("Binary:", 0, 2);
		LCD.drawString("Decimal:", 0, 3);
		
		while(Button.ESCAPE.isUp()) {
			sp.fetchSample(sample,0);
			LCD.drawString(String.format("%8s", Integer.toBinaryString((int) sample[0] & 0xff)).replace(' ', '0'), 9, 2);
			LCD.drawInt((int) sample[0] & 0xff, 3, 9, 3);
			Delay.msDelay(1000);
		}	
		sensor.close();
	}
}

Homemade IMU


Here is an example of a more useful homemade EV3 UART sensor. It is an IMU that uses an MPU-9150 breakout board, which can be bought cheaply on eBay. I am using a cheap Arduino Nano clone.

The sensor supports 4 modes: Temperature, Compass, Gyro and Acceleration. The last three are all 3D and have X, Y and Z values. You can program the sensor to do interesting things like on-board fusion of the gyro and acceleration data.

The USB cable in the picture is used for programming the Arduino Nano, and for diagnostics. When used in anger, the device is powered from the EV3.

The EV3UARTEmulation library is used to implement the EV3 UART functionality, and the code for driving the MPU-9150 is based on the code on the Arduino Playground page.

This is the Arduino code for the sensor:

// Some code copied from the MPU-9150 Arduino Playground page.
#include <EV3UARTEmulation.h>
#include <SoftwareSerial.h>
#include <Wire.h>

EV3UARTEmulation sensor(7, 8, 99, 38400);

#define MPU9150_TEMP_OUT_H         0x41
#define MPU9150_TEMP_OUT_L         0x42
#define MPU9150_PWR_MGMT_1         0x6B

#define MPU9150_CMPS_XOUT_L        0x4A   // R
#define MPU9150_CMPS_XOUT_H        0x4B   // R
#define MPU9150_CMPS_YOUT_L        0x4C   // R
#define MPU9150_CMPS_YOUT_H        0x4D   // R
#define MPU9150_CMPS_ZOUT_L        0x4E   // R
#define MPU9150_CMPS_ZOUT_H        0x4F   // R

#define MPU9150_GYRO_XOUT_H        0x43   // R  
#define MPU9150_GYRO_XOUT_L        0x44   // R  
#define MPU9150_GYRO_YOUT_H        0x45   // R  
#define MPU9150_GYRO_YOUT_L        0x46   // R  
#define MPU9150_GYRO_ZOUT_H        0x47   // R  
#define MPU9150_GYRO_ZOUT_L        0x48   // R

#define MPU9150_ACCEL_XOUT_H       0x3B   // R  
#define MPU9150_ACCEL_XOUT_L       0x3C   // R  
#define MPU9150_ACCEL_YOUT_H       0x3D   // R  
#define MPU9150_ACCEL_YOUT_L       0x3E   // R  
#define MPU9150_ACCEL_ZOUT_H       0x3F   // R  
#define MPU9150_ACCEL_ZOUT_L       0x40   // R 

void setup() {
  Serial.begin(9600);
  
  Wire.begin();
  
  // Clear the 'sleep' bit to start the sensor.
  MPU9150_writeSensor(MPU9150_PWR_MGMT_1, 0);
  
  MPU9150_setupCompass();
  
  sensor.create_mode("Temperature", true, DATA16, 1, 2, 0);
  sensor.create_mode("Compass", true, DATA16, 3, 5, 0);
  sensor.create_mode("Gyro", true, DATA16, 3, 5, 0);
  sensor.create_mode("Accel", true, DATA16, 3, 5, 0);
  
  sensor.reset();
}

unsigned long last_reading = 0;
short heading[4];
short gyro[4];
short accel[4];
short t;

void loop() {
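  // heart_beat services the EV3 UART protocol and must be called on every loop iteration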
  sensor.heart_beat();
  if (millis() - last_reading > 100) {
    switch (sensor.get_current_mode()) {
    case 0:
      t = ((int) MPU9150_readSensor(MPU9150_TEMP_OUT_L,MPU9150_TEMP_OUT_H) + 12412.0) / 340.0;
      sensor.send_data16(t);
      break;
    case 1:
      heading[0] = MPU9150_readSensor(MPU9150_CMPS_XOUT_L,MPU9150_CMPS_XOUT_H);
      heading[1] = MPU9150_readSensor(MPU9150_CMPS_YOUT_L,MPU9150_CMPS_YOUT_H);
      heading[2] = MPU9150_readSensor(MPU9150_CMPS_ZOUT_L,MPU9150_CMPS_ZOUT_H);
      sensor.send_data16(heading, 4);
      break;
    case 2:
      gyro[0] = MPU9150_readSensor(MPU9150_GYRO_XOUT_L,MPU9150_GYRO_XOUT_H);
      gyro[1] = MPU9150_readSensor(MPU9150_GYRO_YOUT_L,MPU9150_GYRO_YOUT_H);
      gyro[2] = MPU9150_readSensor(MPU9150_GYRO_ZOUT_L,MPU9150_GYRO_ZOUT_H);
      sensor.send_data16(gyro, 4);
      break;
    case 3:
      accel[0] = MPU9150_readSensor(MPU9150_ACCEL_XOUT_L,MPU9150_ACCEL_XOUT_H);
      accel[1] = MPU9150_readSensor(MPU9150_ACCEL_YOUT_L,MPU9150_ACCEL_YOUT_H);
      accel[2] = MPU9150_readSensor(MPU9150_ACCEL_ZOUT_L,MPU9150_ACCEL_ZOUT_H);
      sensor.send_data16(accel, 4);
      break;     
    }
    last_reading = millis();
  }
}

int MPU9150_I2C_ADDRESS = 0x68;

int MPU9150_readSensor(int addr){
  Wire.beginTransmission(MPU9150_I2C_ADDRESS);
  Wire.write(addr);
  Wire.endTransmission(false);

  Wire.requestFrom(MPU9150_I2C_ADDRESS, 1, true);
  return Wire.read();
}

int MPU9150_readSensor(int addrL, int addrH){
  Wire.beginTransmission(MPU9150_I2C_ADDRESS);
  Wire.write(addrL);
  Wire.endTransmission(false);

  Wire.requestFrom(MPU9150_I2C_ADDRESS, 1, true);
  byte L = Wire.read();

  Wire.beginTransmission(MPU9150_I2C_ADDRESS);
  Wire.write(addrH);
  Wire.endTransmission(false);

  Wire.requestFrom(MPU9150_I2C_ADDRESS, 1, true);
  byte H = Wire.read();

  return (int16_t)((H<<8)+L);
}

int MPU9150_writeSensor(int addr,int data){
  Wire.beginTransmission(MPU9150_I2C_ADDRESS);
  Wire.write(addr);
  Wire.write(data);
  Wire.endTransmission(true);

  return 1;
}

void MPU9150_setupCompass(){
  MPU9150_I2C_ADDRESS = 0x0C;      //change address to the compass

  MPU9150_writeSensor(0x0A, 0x00); //PowerDownMode
  MPU9150_writeSensor(0x0A, 0x0F); //SelfTest
  MPU9150_writeSensor(0x0A, 0x00); //PowerDownMode

  MPU9150_I2C_ADDRESS = 0x68;      //change address back to the MPU

  MPU9150_writeSensor(0x24, 0x40); //Wait for Data at Slave0
  MPU9150_writeSensor(0x25, 0x8C); //Set i2c address at slave0 at 0x0C
  MPU9150_writeSensor(0x26, 0x02); //Set where reading at slave 0 starts
  MPU9150_writeSensor(0x27, 0x88); //set offset at start reading and enable
  MPU9150_writeSensor(0x28, 0x0C); //set i2c address at slv1 at 0x0C
  MPU9150_writeSensor(0x29, 0x0A); //Set where reading at slave 1 starts
  MPU9150_writeSensor(0x2A, 0x81); //Enable at set length to 1
  MPU9150_writeSensor(0x64, 0x01); //override register
  MPU9150_writeSensor(0x67, 0x03); //set delay rate
  MPU9150_writeSensor(0x01, 0x80);

  MPU9150_writeSensor(0x34, 0x04); //set i2c slv4 delay
  MPU9150_writeSensor(0x64, 0x00); //override register
  MPU9150_writeSensor(0x6A, 0x00); //clear usr setting
  MPU9150_writeSensor(0x64, 0x01); //override register
  MPU9150_writeSensor(0x6A, 0x20); //enable master i2c mode
  MPU9150_writeSensor(0x34, 0x13); //disable slv4
}

and here is the leJOS sensor driver class:

import lejos.hardware.port.Port;
import lejos.hardware.sensor.SensorMode;
import lejos.hardware.sensor.UARTSensor;
import lejos.robotics.SampleProvider;

public class IMU extends UARTSensor  {
	private static final long SWITCHDELAY = 200;

	public IMU(Port port) {
		super(port);
		setModes(new SensorMode[] { new TemperatureMode(), new CompassMode(), new GyroMode(), new AccelerationMode()});
	}
	
	private class TemperatureMode implements SensorMode {
	    private static final int   MODE = 0;
		private int temp;
	
	    @Override
	    public int sampleSize() {
	    	return 1;
	    }
	
	    @Override
	    public void fetchSample(float[] sample, int offset) {
	    	switchMode(MODE, SWITCHDELAY);
	    	temp = port.getShort();
			sample[offset] = temp;
	    }
	
	    @Override
	    public String getName() {
	    	return "Temperature";
	    }
	}
	
	public SampleProvider getTemperatureMode() {
		return getMode(0);
	}
	
	private class CompassMode implements SensorMode {
	    private static final int   MODE = 1;
		private short[] heading = new short[sampleSize()];
	
	    @Override
	    public int sampleSize() {
	    	return 3;
	    }
	
	    @Override
	    public void fetchSample(float[] sample, int offset) {
	    	switchMode(MODE, SWITCHDELAY);
			port.getShorts(heading, 0, 3);
			for(int i=0;i<sampleSize();i++) sample[offset+i] = heading[i];
	    }
	
	    @Override
	    public String getName() {
	    	return "Compass";
	    }
	}
	
	public SampleProvider getCompassMode() {
		return getMode(1);
	}
	
	private class GyroMode implements SensorMode {
	    private static final int   MODE = 2;
		private short[] gyro = new short[sampleSize()];
	
	    @Override
	    public int sampleSize() {
	    	return 3;
	    }
	
	    @Override
	    public void fetchSample(float[] sample, int offset) {
	    	switchMode(MODE, SWITCHDELAY);
			port.getShorts(gyro, 0, 3);
			for(int i=0;i<sampleSize();i++) sample[offset+i] = gyro[i];
	    }
	
	    @Override
	    public String getName() {
	    	return "Gyro";
	    }
	}
	
	public SampleProvider getGyroMode() {
		return getMode(2);
	}
	
	private class AccelerationMode implements SensorMode {
	    private static final int   MODE = 3;
		private short[] accel = new short[sampleSize()];
	
	    @Override
	    public int sampleSize() {
	    	return 3;
	    }
	
	    @Override
	    public void fetchSample(float[] sample, int offset) {
	    	switchMode(MODE, SWITCHDELAY);
			port.getShorts(accel, 0, 3);
			for(int i=0;i<sampleSize();i++) sample[offset+i] = accel[i];
	    }
	
	    @Override
	    public String getName() {
	    	return "Acceleration";
	    }
	}
	
	public SampleProvider getAccelerationMode() {
		return getMode(3);
	}
}

and a test program:

import lejos.hardware.Button;
import lejos.hardware.lcd.LCD;
import lejos.hardware.port.SensorPort;
import lejos.robotics.SampleProvider;
import lejos.utility.Delay;

public class IMUTest {
	public static void main(String[] args) {
		IMU sensor = new IMU(SensorPort.S1);
		SampleProvider temp = sensor.getTemperatureMode();
		SampleProvider heading = sensor.getCompassMode();
		SampleProvider gyro = sensor.getGyroMode();
		SampleProvider accel = sensor.getAccelerationMode();
		float[] sample = new float[heading.sampleSize()];
		
		LCD.drawString("Temp:", 0, 1);
		LCD.drawString("H X       Y", 0, 2);
		LCD.drawString("  Z", 0, 3);
		LCD.drawString("G X       Y", 0, 4);
		LCD.drawString("  Z", 0, 5);
		LCD.drawString("A X       Y", 0, 6);
		LCD.drawString("  Z", 0, 7);
		
		while(Button.ESCAPE.isUp()) {
			temp.fetchSample(sample, 0);
			LCD.drawInt((int) sample[0], 4, 5, 1);
			heading.fetchSample(sample,0);
			for (int i=0;i<2;i++) LCD.drawInt((int)sample[i], 6, 3 + i*8, 2);
			LCD.drawInt((int)sample[2], 6, 3, 3);
			gyro.fetchSample(sample, 0);
			for (int i=0;i<2;i++) LCD.drawInt((int)sample[i], 6, 3 + i*8, 4);
			LCD.drawInt((int)sample[2], 6, 3, 5);
		    accel.fetchSample(sample, 0);
			for (int i=0;i<2;i++) LCD.drawInt((int)sample[i], 6, 3 + i*8, 6);
			LCD.drawInt((int)sample[2], 6, 3, 7);
		}	
		sensor.close();
	}
}

Webcam as Color Detector

You can also use a webcam to detect colors:

package mypackage;

import java.io.File;
import java.io.IOException;

import lejos.hardware.BrickFinder;
import lejos.hardware.Button;
import lejos.hardware.Sound;
import lejos.hardware.ev3.EV3;
import lejos.hardware.video.Video;

public class ColorDetector {
	private static final int WIDTH = 160;
	private static final int HEIGHT = 120;
	private static final int NUM_PIXELS = WIDTH * HEIGHT;
	private static final File BLACK = new File("black.wav");
	private static final File WHITE = new File("white.wav");
	private static final File RED = new File("red.wav");
	private static final File GREEN = new File("green.wav");
	
	private static long lastPlay = 0;

	public static void main(String[] args) throws IOException  {
		
		EV3 ev3 = (EV3) BrickFinder.getLocal();
		Video video = ev3.getVideo();
		video.open(WIDTH, HEIGHT);
		byte[] frame = video.createFrame();
		
		while(Button.ESCAPE.isUp()) {
			video.grabFrame(frame);
			int tr = 0, tg = 0, tb = 0;
			
			for(int i=0;i<frame.length;i+=4) {
				int y1 = frame[i] & 0xFF;
				int y2 = frame[i+2] & 0xFF;
				int u = frame[i+1] & 0xFF;
				int v = frame[i+3] & 0xFF;
				int rgb1 = convertYUVtoARGB(y1,u,v);
				int rgb2 = convertYUVtoARGB(y2,u,v);
				
				tr += ((rgb1 >> 16) & 0xff) + ((rgb2 >> 16) & 0xff);
				tg += ((rgb1 >> 8) & 0xff) + ((rgb2 >> 8) & 0xff);
				tb += (rgb1 & 0xff) + (rgb2 & 0xff);
			}
			
			float ar = (float) tr / NUM_PIXELS, ag = (float) tg / NUM_PIXELS, ab = (float) tb / NUM_PIXELS;
			System.out.println((int) ar + " , " + (int) ag + " , " + (int) ab);
			if (ar < 10 && ag < 10 && ab < 10) play(BLACK);
			else if (ar > 250 && ag > 250 && ab > 250) play(WHITE);
			else if (ar > 1.8*ag && ar > 2*ab) play(RED);
			else if (ag > 1.8*ar && ag > ab) play(GREEN);

		}
		video.close();
	}
	
	private static void play(File file) {
		long now = System.currentTimeMillis();
		
		if (now - lastPlay > 2000) {
			System.out.println("Playing " + file.getName());
			Sound.playSample(file);
			lastPlay = now;
		}
	}
	
	private static int convertYUVtoARGB(int y, int u, int v) {
		int c = y - 16;
		int d = u - 128;
		int e = v - 128;
		int r = (298*c+409*e+128)/256;
		int g = (298*c-100*d-208*e+128)/256;
		int b = (298*c+516*d+128)/256;
	    r = r>255? 255 : r<0 ? 0 : r;
	    g = g>255? 255 : g<0 ? 0 : g;
	    b = b>255? 255 : b<0 ? 0 : b;
	    return 0xff000000 | (r<<16) | (g<<8) | b;
	}
}

To hear the name of a color spoken when it is detected, you will need to record some mono wav files in a program such as Audacity, export them as 16-bit PCM, and upload them to the EV3.

Webcam Streaming


Following on from Andy’s post on Webcam support, here is how to use your Webcam to stream color video to your PC.

The leJOS program is:

package mypackage;

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.net.Socket;

import lejos.hardware.BrickFinder;
import lejos.hardware.Button;
import lejos.hardware.ev3.EV3;
import lejos.hardware.lcd.LCD;
import lejos.hardware.video.Video;

public class Stream {
	private static final int WIDTH = 160;
	private static final int HEIGHT = 120;
	private static final String HOST = "192.168.0.2";
	private static final int PORT = 55555;

	public static void main(String[] args) throws IOException  {

		EV3 ev3 = (EV3) BrickFinder.getLocal();
		Video video = ev3.getVideo();
		video.open(WIDTH, HEIGHT);
		byte[] frame = video.createFrame();

		Socket sock = new Socket(HOST, PORT);
		BufferedOutputStream bos = new BufferedOutputStream(sock.getOutputStream());
		long start = System.currentTimeMillis();
		int frames = 0;
		LCD.drawString("fps:", 0, 2);

		while(Button.ESCAPE.isUp()) {
			try {
				video.grabFrame(frame);
				LCD.drawString("" + (++frames * 1000f/(System.currentTimeMillis() - start)), 5,2);

				bos.write(frame);
				bos.flush();
			} catch (IOException e) {
				break;
			}
		}

		bos.close();
		sock.close();
		video.close();
	}
}

And the PC program is:

package mypackage;

import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.awt.image.BufferedImage;
import java.io.BufferedInputStream;
import java.io.EOFException;
import java.net.ServerSocket;
import java.net.Socket;

import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

public class CameraFrame {
	private static final int WIDTH = 160;
	private static final int HEIGHT = 120;
	private static final int NUM_PIXELS = WIDTH * HEIGHT;
	private static final int BUFFER_SIZE = NUM_PIXELS * 2;
	private static final int PORT = 55555;

	private ServerSocket ss;
	private Socket sock;
	private byte[] buffer = new byte[BUFFER_SIZE];
	private BufferedInputStream bis;
	private BufferedImage image;
	private CameraPanel panel = new CameraPanel();
	private JFrame frame;

	public CameraFrame() {	
		try {
			ss = new ServerSocket(PORT);
			sock = ss.accept();
			bis = new BufferedInputStream(sock.getInputStream());
		} catch (Exception e) {
			System.err.println("Failed to connect: " + e);
			System.exit(1);
		}

		image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
	}

	public void createAndShowGUI() {
		frame = new JFrame("EV3 Camera View");

		frame.getContentPane().add(panel);
		panel.setPreferredSize(new Dimension(WIDTH, HEIGHT));

		frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
	    frame.addWindowListener(new WindowAdapter() {
	        @Override
	        public void windowClosing(WindowEvent e) {
	        	close();
	        }
	    });

		frame.pack();
		frame.setVisible(true);
	}

	public void close() {
    	try {
    		if (bis != null) bis.close();
    		if (sock != null) sock.close();
			if (ss != null) ss.close();
		} catch (Exception e1) {
			System.err.println("Exception closing window: " + e1);
		}
	}

	private int convertYUVtoARGB(int y, int u, int v) {
		int c = y - 16;
		int d = u - 128;
		int e = v - 128;
		int r = (298*c+409*e+128)/256;
		int g = (298*c-100*d-208*e+128)/256;
		int b = (298*c+516*d+128)/256;
	    r = r>255? 255 : r<0 ? 0 : r;
	    g = g>255? 255 : g<0 ? 0 : g;
	    b = b>255? 255 : b<0 ? 0 : b;
	    return 0xff000000 | (r<<16) | (g<<8) | b;
	}

	public void run() {
		while(true) {
			synchronized (this) {
				try {
					int offset = 0;
					while (offset < BUFFER_SIZE) {
						int n = bis.read(buffer, offset, BUFFER_SIZE - offset);
						if (n < 0) throw new EOFException(); // connection closed
						offset += n;
					}
					for(int i=0;i<BUFFER_SIZE;i+=4) {
						int y1 = buffer[i] & 0xFF;
						int y2 = buffer[i+2] & 0xFF;
						int u = buffer[i+1] & 0xFF;
						int v = buffer[i+3] & 0xFF;
						int rgb1 = convertYUVtoARGB(y1,u,v);
						int rgb2 = convertYUVtoARGB(y2,u,v);
						image.setRGB((i % (WIDTH * 2)) / 2, i / (WIDTH * 2), rgb1);
						image.setRGB((i % (WIDTH * 2)) / 2 + 1, i / (WIDTH * 2), rgb2);
					}
				} catch (Exception e) {
					break;
				}
			}
			panel.repaint(1);
		}
	}

	class CameraPanel extends JPanel {
		private static final long serialVersionUID = 1L;

	    @Override
	    protected void paintComponent(Graphics g) {
	        super.paintComponent(g);
	        // Ensure that we don't paint while the image is being refreshed
	        synchronized(CameraFrame.this) {
	        	g.drawImage(image, 0, 0, null);
	        }
	    }	
	}

	public static void main(String[] args) {	
		final CameraFrame cameraFrame = new CameraFrame();
		SwingUtilities.invokeLater(new Runnable() {
			@Override
			public void run() {
				cameraFrame.createAndShowGUI();	
			}
		});
		cameraFrame.run();
	}
}