
Line Following & Cooperating Robots (part 1)


Following on from the series of articles about EV3 to EV3 communications, I wanted to build some actual robots and have them cooperate. I wanted to keep things simple and avoid the complexities of navigation and dealing with the environment as much as possible, so that I could concentrate on the cooperation side of things. It struck me that a line following robot would be ideal: a line follower combines a nice mobile robot with a very constrained environment. At this point I have a confession to make: despite building many robots over a lot of years, I have never built and programmed a line follower! By a nice coincidence Aswin had been working on a new navigation model for leJOS, the Chassis, which is well suited to controlling a line follower and has an odometry-based pilot built in that makes reporting the robot position nice and simple. So this project looked like it might make a nice test case for these new classes as well!

The basic concept is to have two or more robots that move around a track and are aware, via communications, of the positions of the other robots. This information can then be used to allow the robots to cooperate. In this initial project the track is a simple oval and the cooperation is simple collision avoidance, but in future projects I hope to extend the track (to add sidings and passing places) and enhance the cooperation to allow robots to move in opposite directions and to allow faster robots to overtake slower ones. But we are getting a little ahead of ourselves: first we need to build a robot and make it follow a line.

With this project I need more than one robot (three or four eventually), so I wanted to keep things as simple as possible. The basic design is a differential drive robot with a single light sensor (to track the line). The robot uses a castor wheel to enable turns. My initial design used the LEGO ball castor, but I only have one of those, so the final design switched to using a Rotacaster wheel (thanks Aswin!), which works really well. The robots are connected to a PC and to each other via WiFi (or Bluetooth). So far I have built two examples, one using the LEGO EV3 light sensor, the other using the LEGO NXT light sensor. Both work very well and require only minimal code changes to support the two different sensors (an interesting example of the flexibility provided by the leJOS sensor framework). So let's take a look at the robot.


The only significant difference between the two is that I have added some additional structure around the NXT sensor to act as a shade. The EV3 sensor is much better at coping with different ambient light levels and this shade provides a simple solution for the NXT sensor. The track is a simple board with a line made out of electrical insulating tape. The corners are smooth but are reasonably tight, which as we will see can cause a few problems!


The track also has a small strip of reflective tape added to it. This is used as a lap marker that is easily detected by the robots and tells them when they start a new lap of the track.
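As a rough sketch of how a lap marker like this can be detected (the class and names here are my own illustrations, not the actual robot code): the reflective tape reads much brighter than either the line or the board, so a simple threshold with edge-triggering is enough, counting a lap only when the reading first crosses the threshold so that a marker spanning several samples is counted once.

```java
// Hypothetical sketch of lap-marker detection; MARKER_THRESHOLD and
// LapCounter are illustrative names, not from the actual project code.
public class LapCounter {
    // Assumed: calibrated readings run roughly -1..+1 and the reflective
    // tape saturates the sensor well above normal track readings.
    private static final float MARKER_THRESHOLD = 0.9f;
    private boolean onMarker = false;
    private int laps = 0;

    // Feed each calibrated light reading; count a lap on the rising edge
    // only, so several consecutive samples over the tape count once.
    public int update(float reflectivity) {
        boolean over = reflectivity > MARKER_THRESHOLD;
        if (over && !onMarker)
            laps++;
        onMarker = over;
        return laps;
    }
}
```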

The basic control loop for the line following is reasonably simple; there are lots of articles on the web that will tell you how to write it, so I'll just note the more interesting aspects of my version.

My follower only has a single sensor, so it actually follows the edge of the line. Ideally the sensor would return 0 when it is on the line edge, -1 when over the line and 1 when over the track. But all we have is the intensity of the reflected light, so the first step is to calibrate things to obtain the desired readings. This consists of scanning the sensor over the line to identify the high and low values and calculate a suitable scale and offset. Luckily the sensor framework has a filter to do this for me (the LinearCalibrationFilter). For more details of this sort of calibration see Sensor calibration: a bit of background.
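The scale and offset calculation boils down to something like the following (a hand-rolled sketch with illustrative names, not the actual LinearCalibrationFilter code): the offset is the midpoint of the low and high readings, and the scale stretches the range to cover -1 to +1.

```java
// Illustrative sketch of the calibration arithmetic; EdgeCalibration is
// my own name, not a leJOS class.
public class EdgeCalibration {
    private final float offset, scale;

    // lowRaw: reading fully over the line, highRaw: fully over the track
    public EdgeCalibration(float lowRaw, float highRaw) {
        offset = (lowRaw + highRaw) / 2; // line edge maps to 0
        scale = 2 / (highRaw - lowRaw);  // full range maps to [-1, +1]
    }

    // Convert a raw reflected-light reading to the -1..+1 control range
    public float apply(float raw) {
        return (raw - offset) * scale;
    }
}
```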

Once we have the calibrated reading we need to use it to control the robot. The new chassis makes controlling a robot like this very easy. We simply use the output of our control algorithm to drive the angular velocity component of the velocity based API (in this case via the setVelocity method). The actual control system uses a slightly modified PID algorithm. The code for this is shown below.

                tracker.fetchSample(sample, 0);
                float error = sample[0];
                // Accumulate errors for I term, decaying the total when the
                // error changes sign
                if (error*totalError <= 0)
                    totalError = totalError*0.80f;
                totalError += error;
                // limit the I term contribution
                if (totalError*I > MAX_STEER/2)
                    totalError = MAX_STEER/2/I;
                else if (totalError*I < -MAX_STEER/2)
                    totalError = -MAX_STEER/2/I;
                // calculate PID value
                float output = P*error + I*totalError + D*(error - prevError);
                prevError = error;
                // limit it
                if (output > MAX_STEER)
                    output = MAX_STEER;
                else if (output < -MAX_STEER)
                    output = -MAX_STEER;
                // adjust speed if needed
                if (curSpeed != targetSpeed)
                {
                    float accel = acceleration*LOOP_TIME/1000;
                    if (curSpeed < targetSpeed)
                    {
                        curSpeed += accel;
                        if (curSpeed > targetSpeed)
                            curSpeed = targetSpeed;
                    }
                    else
                    {
                        curSpeed -= accel;
                        if (curSpeed < targetSpeed)
                            curSpeed = targetSpeed;
                    }
                }
                // steer as needed
                chassis.setVelocity(curSpeed, output);

This code runs in a separate thread and is executed every 50 ms, constantly correcting the robot heading. The only unusual aspect of the PID control is how the integral term is handled. It turns out that with my robot it is not easy to provide nice smooth control and also deal with the sharp turns on the track. The problem is that the sensor only has a very small working zone before it becomes saturated (multiple light sensors are often used to extend the usable zone). This means that when the robot reaches a turn a pretty large control input is needed to keep it on track. This could be provided by the P term, but having a large P term results in rather jerky movement on the straight parts of the track. Instead I allow the robot to go further off track (which results in the sensor saturating) and rely on the I term to increase the turn rate if the robot remains "off track". Unfortunately this results in the well known problem of "integral windup", which causes the robot to over-correct and swing off the other side of the ideal line. So I have modified the integral part of the PID calculation to reduce the integral term if it is producing a control output that would move the robot the "wrong way". With this small modification the robot tracks the line pretty well at a range of speeds using just a single sensor.
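The anti-windup tweak can be isolated as a tiny sketch (the class and names are my own, for illustration): when the new error and the accumulated total disagree in sign, the total is decayed by 20% before the new error is added, so the integral unwinds quickly once the robot starts correcting.

```java
// Illustrative extraction of the modified integral accumulation;
// AntiWindup is my own name, not from the project code.
public class AntiWindup {
    private static final float DECAY = 0.80f;
    private float totalError = 0;

    // Add one error sample, decaying the accumulated total first if the
    // sample and the total have opposite signs (or either is zero)
    public float accumulate(float error) {
        if (error * totalError <= 0)
            totalError *= DECAY;
        totalError += error;
        return totalError;
    }
}
```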

The other feature of the above code is that it provides smooth linear acceleration when the speed of the robot changes.
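Extracted on its own, the ramp logic looks something like this (a sketch with illustrative names): each loop iteration the current speed moves toward the target by at most one acceleration step, clamping at the target so it never overshoots.

```java
// Illustrative sketch of the per-tick speed ramp; SpeedRamp is my own
// name, not from the project code.
public class SpeedRamp {
    // Move curSpeed toward targetSpeed by at most accel, clamping at the
    // target. accel is the speed change allowed per control-loop tick.
    public static float ramp(float curSpeed, float targetSpeed, float accel) {
        if (curSpeed < targetSpeed) {
            curSpeed += accel;
            if (curSpeed > targetSpeed)
                curSpeed = targetSpeed;
        } else if (curSpeed > targetSpeed) {
            curSpeed -= accel;
            if (curSpeed < targetSpeed)
                curSpeed = targetSpeed;
        }
        return curSpeed;
    }
}
```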

All of this provides the basic line following capability of the robot. To see this in action (and have a sneak preview of the second part of this series), take a look at the following video clip.

In the second part of this series I will take a look at how these robots can be made to cooperate and also show a PC based monitoring program that allows the user to control the robots and monitor their position.



4 thoughts on "Line Following & Cooperating Robots (part 1)"

  1. Hello,
    nice video.
    Are the program and monitor tool in the snapshot?
    I would like to test it.


    Posted by Fabian Katins | 2015/10/29, 01:39
    • Hi,
      no this code is not part of our standard release. I’m currently working on Part II of this series and will see if I can get the code into some sort of shape and bundle it up as a download. Will be a week or two though.

      Posted by gloomyandy | 2015/10/29, 21:38
      • Has the code for this sample been posted somewhere? I looked at part 2 of this article and could not find the source there.

        Posted by Marcelo Gallardo | 2016/03/25, 17:30
      • No, the code has not been posted, and to be honest it is unlikely to be. The core parts have either been posted or explained and I just don't have the time (or motivation) to get the source into some sort of shape fit for posting.

        Posted by gloomyandy | 2016/03/25, 18:07

