Thursday, 2 December 2010

The end course project

Starting up:
Participants: Johan & Jakob
Date: 24/11-2010
Duration: 4 Hours
Goal: Determine which project will be made for the end course project, and describe it along with two alternatives.
Plan: Discuss and agree upon three overarching themes for projects, then brainstorm within each area.
Consider the possibilities and problem areas within each project and describe them according to NXT programming exercise 11.

The three themes
We decided to choose between a predator/prey system inspired by [1], a set of music robots inspired in part by The Trons [2], and a scenario involving sex robots and the exchange of genes, inspired by [3].


Music bots:
A flock of robots synchronized in such a way that they can agree on a global time, tempo or beat and act accordingly. This means that an external observer should get the sense that the robots are aware of each other and are playing music together.


Sex bots:
A flock of robots that exchange genomes representing different behaviours, or parts of behaviours, when they meet.
Whether they exchange parts of behaviours or complete patterns would be determined at a later point.



Music Robots:
As described above, the robots need to agree on a global clock, regardless of the order and time of activation. Different approaches to this have been discussed. Using the microphone seemed the most difficult, as we would then need to detect beats. Different time synchronization algorithms for distributed systems exist, and one of these could be employed after establishing the NXTs in a network over Bluetooth. We have also discussed having a central NXT dictate the beat and use motor output wired via the converter cables to the sensor ports of the performing robots.
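As a rough sketch of the "act on an agreed beat" part, each robot could run something like the following. The clock agreement itself, whether over Bluetooth or via the wired approach, is left as the hypothetical agreeOnOffset() method, and the tempo and tone are illustrative:

import lejos.nxt.Sound;

// Minimal sketch of acting on an agreed global beat. The clock-offset
// exchange (e.g. over Bluetooth) is left as the hypothetical agreeOnOffset().
public class BeatKeeper {
    static final int BEAT_MS = 500; // 120 bpm, illustrative

    public static void main(String[] args) throws InterruptedException {
        long offset = agreeOnOffset();
        while (true) {
            long now = System.currentTimeMillis() + offset;
            Thread.sleep(BEAT_MS - (now % BEAT_MS)); // sleep until the next shared beat
            Sound.playTone(440, 50);                 // then play a short note on the beat
        }
    }

    private static long agreeOnOffset() {
        return 0; // placeholder: a real version would synchronize clocks between the NXTs
    }
}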


Hardware needed:
A bunch of NXTs, possibly with microphones for synchronization if that approach is chosen. Converter cables between NXTs and RCXs might also be needed.

Software platform:
Each NXT must run the same base software component developed in LeJOS. A separate central controller might also need to be developed for dictating the tempo, should that approach be chosen.

Expected Difficulties:
There are several challenges in this project, the first being actually getting the robots synchronized to a level where the finely tuned human ear is not annoyed by the system being a bit out of sync. Furthermore, dynamically adding or removing robots from the system without disrupting its state might not be a trivial task.
The system might also employ some sort of leader election algorithm to determine which robot is allowed to play a solo. 
On another level the purely artistic value of the system is going to be extremely challenging to bring forth.

Possible presentation:
At the end course presentation the robots should be able to perform a piece of rhythmic music after being started separately and deciding on a common beat.
Possibly this could include internal communication to determine a soloist and so on.




Sex bots
The overarching theme in this project is to observe emergent behaviour in a flock of robots that exchange different behaviour genomes. A bunch of robots wander around on a map of some sort and each time they bump into each other they exchange part of their genome with the other robot.


Hardware requirements:
A bunch of NXTs with sensors enabling navigation, and motors enabling movement on the field.


Software platform:
Each NXT needs to have a common framework that controls the robot.
This framework can then contain an assortment of different genomes, all implementing the same "genome" interface.
This way each robot can exchange genomes and use them in a way very similar to behaviour-based architectures.
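One possible shape for such a genome interface could be the following rough sketch (all names are placeholders, not a finished design):

// Rough sketch of a genome interface; each genome is handled much like a
// behaviour in a behaviour-based architecture and can be serialized so it
// can be sent to another robot on contact. Names are placeholders.
interface Genome {
    boolean wantsControl(); // analogous to takeControl()
    void act();             // analogous to action()
    void suppress();
    byte[] encode();        // serialized form exchanged when two robots meet
}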


Expected difficulties:
One of the great challenges in this project is to express the behaviour of the individual robots in such a way that it is obvious when the different genomes are switched around. Defining meaningful genomes, and the whole exchange protocol, could also prove difficult.


Possible Presentation:
If all ends well, the flock of robots will at the end of the project be able to exchange behaviours in a way that manifests itself clearly in the environment.
It would then be interesting to test different algorithms for genome exchange, as well as different framework functions, and see how this influences any convergence on specific behaviours, should such convergence occur.


Predator/Prey:
The general idea of this project is to have robots of two different kinds, predators and prey. Within this system there would be different conditions that result in death: hunger, age and so on. When a robot is dead, there would need to be conditions under which it can be revived, to create a more persistent world. We would also like the robots to appear to act as a flock, even though each of them follows its own individual behaviour pattern.


Hardware requirements:
A bunch of NXTs along with sensors and actuators from the standard kit.


Software requirements:
Two different controllers, one for predators and one for prey. There might also be a need for a central controller operating the environment.


Other requirements:
A mat or other material representing the environment. This might actually take quite some time to fabricate, as we need to create representations of safe zones, food zones and homes in such a way that the robots are able to navigate them.


Expected difficulties:
The hardest part of this project is probably going to be creating behavioural patterns that are easy for the audience to interpret. Other than that, navigating the environment in a meaningful way might also pose a challenge.





References:
  1. http://www.imm.dtu.dk/upload/institutter/imm/cse%20laboratories/wurtz.pdf  - project from iMars lab
  2. http://www.youtube.com/watch?v=c2JChnwv2Ws - The Trons
  3. http://hackaday.com/2005/03/25/sex-bots/ - Sex bots

Color Recognition

Starting up:
Participants: Johan & Jakob
Date: 2/12-2010
Duration: 3 Hours
Goal: Investigate the light sensor and determine whether the sensor can be used to detect more colors than merely black and white.
Plan: First we will try to determine how precisely the sensor can differentiate between black, white and green. Next we will try to utilize this to create a program which can detect which of the three colors the surface is.

Experiments:
Our initial experiments were designed to give us a better insight into the sensitivity of the light sensor. We placed the sensor over the different colored surfaces (black, white and green) and took readings using the BlackWhiteSensor class. The readings were fairly consistent, with an error margin of approximately +/- 2-3. This error margin is mainly due to the sensor vibrating and thereby moving closer to and farther away from the surface.
The readings also gave us an impression of what the values of the different colors were, and it was evident that the readings for black and green were very close to each other. We still believed, however, that by choosing a small enough error margin we would be able to interpret the readings correctly and choose the correct color.

Line follower:
We tried the LineFollower class and it worked as expected. Next we looked at the code to come up with a way of extending the BlackWhiteSensor class to incorporate the color green. We decided that simply adding green to the calibration routine, along with a method for returning whether the current color is green, would be sufficient to be able to use all three colors. The method green() (which determines whether the surface is green) is implemented with an error margin of +/- 2, meaning that if the sensor reading is within +/- 2 of the reading from the calibration routine, the surface is interpreted as green. A fairly basic approach, which we believed would be sufficient for the job.
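A minimal sketch of this approach could look like the following (the actual class is linked in the references below; the sensor port, field names and the black/white threshold here are assumptions):

import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

// Sketch of the approach described above, not the real class from the
// references. Port, field names and thresholds are assumptions.
class ThreeColorSensor {
    private static final int GREEN_MARGIN = 2;
    private final LightSensor sensor = new LightSensor(SensorPort.S3);
    private int blackValue, whiteValue, greenValue; // recorded by the calibration routine

    public void calibrate(int black, int white, int green) {
        blackValue = black;
        whiteValue = white;
        greenValue = green;
    }

    public boolean green() {
        // within +/- 2 of the calibrated green reading
        return Math.abs(sensor.readValue() - greenValue) <= GREEN_MARGIN;
    }

    public boolean black() {
        return sensor.readValue() < (blackValue + whiteValue) / 2;
    }

    public boolean white() {
        return sensor.readValue() >= (blackValue + whiteValue) / 2;
    }
}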

We then implemented a test program called ColorTest based on the LineFollower class. The only significant alteration is the behaviour when a color is detected. The robot moves forward continuously and analyzes the color of the surface: if the color is green the robot makes a buzzing noise and writes green on the LCD, if it is white it beeps twice and writes white, and if it is black it beeps once and writes black.
Since the sensor reading for green lies within the threshold for black, it is important to check for green first; otherwise the reading would default to black each time. We chose this ordering because we didn't want to make major changes to the class and felt this approach would be sufficient.
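In outline, the main loop then looks something like this (reusing the sketch class from above; driving and calibration are omitted):

import lejos.nxt.LCD;
import lejos.nxt.Sound;

// Outline of the ColorTest loop: green must be checked before black,
// because a green reading also falls below the black/white threshold.
public class ColorTestSketch {
    public static void main(String[] args) {
        ThreeColorSensor cs = new ThreeColorSensor(); // sketch class from above
        // ... calibration and driving forward omitted ...
        while (true) {
            LCD.clear();
            if (cs.green())      { Sound.buzz();     LCD.drawString("green", 0, 0); }
            else if (cs.black()) { Sound.beep();     LCD.drawString("black", 0, 0); }
            else                 { Sound.twoBeeps(); LCD.drawString("white", 0, 0); }
        }
    }
}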

Tests and results:
The test program was run several times, and while there were a few anomalies it worked fairly well. The biggest anomaly was that the transition from a black surface to a white surface, and vice versa, was often interrupted by a reading of green in between. We concluded that this was due to the sensor reading half white and half black, thereby triggering a green interpretation, because the resulting value lies between black and white. Another small anomaly was the occasional black reading while on a green surface. This was concluded to be caused by vibrations in the robot making the sensor get closer to the surface and thereby reading a darker value than during calibration. We attempted to fix this by adjusting the green error margin, but found that increasing it only caused the problem to migrate to the black surface.
All in all the tests were successful apart from the few anomalies: the robot was able to interpret the surface correctly the majority of the time.

Conclusion:
The experiments showed that it is possible, with a low probability of error, to detect other colors than merely black and white. With further experiments it might also be possible to add another color closer to the white end of the spectrum and identify this color as well. This, however, remains untested.

References:
  1. http://www.daimi.au.dk/~bubbi/lego/ColorSensor.java - ColorSensor class for detecting black, white and green.
  2. http://www.daimi.au.dk/~bubbi/lego/ColorTest.java - Test program for detecting surface colors.

Wednesday, 24 November 2010

Behavioural Therapy

Starting up:
Participants: Johan & Jakob
Date: 18/11-2010
Duration: 6 Hours
Goal: Investigate how LeJOS NXJ implements a behaviour-based architecture with the subsumption API, with specific focus on the interface lejos.subsumption.Behaviour and the class lejos.subsumption.Arbitrator.
Plan: This week's programming exercise differs from the previous weeks' by having a very specific set of tasks.
We intend to solve these individual exercises and investigate further using a recent checkout of the LeJOS source code.


Experiments:

Getting BumperCar running on the NXT was a trivial task and afterwards we were able to conduct the following experiments. 


Keeping the touch sensor held in:
When the touch sensor is pressed, the DetectWall behaviour takes control and the robot turns to avoid the object. When the sensor button is held down (or the sound sensor detects something close), the behaviour's takeControl() method continuously returns true, making the robot turn around itself over and over.
We must note here that we were extremely confused by the robot's behaviour at first, until we noticed that the ultrasonic sensor was attached and had an influence on the actual behaviour of the robot.


Properties of the Arbitrator regarding the running of takeControl() methods
As is seen in [2: lines 121 through 129], the Arbitrator uses a for loop to select which behaviour will gain control by having its action() method called.
The Arbitrator runs through its behaviour list from highest to lowest priority. When a behaviour's takeControl() method returns true, that behaviour assumes control and the loop breaks. This means that as long as the DetectWall behaviour's takeControl() method returns true (meaning that the touch sensor is pressed or the sound sensor detects an obstacle), the takeControl() method of the DriveForward behaviour is never run.
This holds true for any ordering of any number of behaviours: the takeControl() methods of the lower-priority behaviours are never run when a higher-priority behaviour takes control.
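In simplified form the selection amounts to something like the following sketch (not the actual Arbitrator source, which is in [2]); note that the LeJOS convention puts the highest-priority behaviour last in the array:

import lejos.subsumption.Behavior; // package location differs between LeJOS versions

// Simplified sketch of the priority scan; the real loop is in [2].
class PrioritySelect {
    static int select(Behavior[] behaviors) {
        for (int i = behaviors.length - 1; i >= 0; i--) { // highest priority first
            if (behaviors[i].takeControl()) {
                return i; // lower-priority takeControl() methods are never called
            }
        }
        return -1; // no behaviour wants control
    }
}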


Adding a behaviour to enable exit by keypress
In [1] we have added a behaviour called Exit that contains a takeControl() method that returns true on keypress and an action() method that exits the system.
This Behaviour can be seen below.



class Exit implements Behavior {
    public boolean takeControl() {
        return Button.ESCAPE.isPressed();
    }

    public void suppress() {
        // This behaviour kills the program, so suppress() does not matter if activated.
    }

    public void action() {
        System.exit(0);
    }
}

A suppress method is deemed unnecessary as the action method unconditionally terminates the program.
When the ESCAPE button is pressed while the DriveForward behavior is active, the program shuts down instantly, since the DriveForward behavior does not block the program: DriveForward is suppressed and the Exit behavior kills the program.
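For completeness, here is a sketch of how the behaviour is registered, with Exit placed last in the array so it gets the highest priority (the full modified program is in [1]):

import lejos.subsumption.Arbitrator;
import lejos.subsumption.Behavior;

// Sketch of the registration: Exit is last in the array, giving it the
// highest priority. DriveForward and DetectWall are the classes from [1].
public class BumperCarSketch {
    public static void main(String[] args) {
        Behavior[] behaviors = { new DriveForward(), new DetectWall(), new Exit() };
        Arbitrator arbitrator = new Arbitrator(behaviors);
        arbitrator.start();
    }
}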


The DetectWall behavior is forced to block the program in order to ensure that its motor rotation calls are not overridden by the DriveForward behavior. This causes a delay when pressing the ESCAPE button, as the behavior guarantees a correct rotation before yielding control.


The Sound.pause(int x) call in the takeControl() method of the DetectWall behavior makes the entire behavior selection in the arbitrator slower by x milliseconds, because the loop cannot continue until takeControl() returns. This restricts the number of behavior changes possible per minute and makes the robot less responsive, since the values the sensors return are only relevant immediately after the Sound.pause(int x) call. It also makes the Exit behavior less responsive, since it has to wait for the next behavior selection loop before it can execute. This can potentially even result in loss of input, for instance if the ESCAPE button is pressed while the takeControl() method of DetectWall is running.
When the argument to Sound.pause() was increased to 2000 ms, the robot's behaviour became almost useless.
Each time the arbitrator tried to select an active behaviour there was now a blocking call lasting 2 seconds.
This of course results in very unresponsive behaviour, as the entire system is locked up while waiting for the loop to check the sensor data.


Using a local thread to avoid the blocking call
We now implemented a thread in the behaviour DetectWall to be able to continuously obtain values from the sensor without blocking the system. This can be seen in [1].
This eliminated the need for the Sound.pause(20) call in the takeControl() method and should thus make the Arbitrator run more smoothly. 20 milliseconds is such a small value, though, that we did not see any noticeable effect from this change.
Regarding this question we noticed something interesting: in the BumperCar example taken from the developer repository via svn, the call Sound.pause(20) is commented out.
We also tried this approach and noted no difference in behaviour. Why the line is commented out rather than removed is anyone's guess.
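A sketch of such a sampling thread could look like this (port and timing are illustrative; the version we actually used is part of the modified BumperCar in [1]):

import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

// Sketch of a sampling thread: the latest distance is kept cached so that
// takeControl() never has to block while waiting for a reading.
class DistanceSampler extends Thread {
    private final UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S3); // port is an assumption
    private volatile int distance = 255;

    public void run() {
        while (true) {
            distance = sonar.getDistance(); // continuously refresh the reading
            try { Thread.sleep(20); } catch (InterruptedException e) { /* ignore */ }
        }
    }

    public int getLatestDistance() {
        return distance;
    }
}

DetectWall's takeControl() can then simply compare getLatestDistance() against a threshold without ever blocking the selection loop.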


Backing up for a full second, before turning
For this rather uninspired exercise we used Sound.pause(1000) to wait a second while backing up before turning.


Interrupting a running behaviour
This part of the exercise concerns restarting a running behaviour because the condition that caused its action to run in the first place has been triggered again. It caused us quite a bit of trouble, as the question asks us to implement the DetectWall behaviour in such a way that it can be interrupted and reset by the Arbitrator. This assumes that the Arbitrator runs the takeControl() method of the behaviour, gets true as the return value, and then suppresses the current behaviour and calls the action() method of the newly selected behaviour. But as can be seen in [2], a new behaviour only gets to run if it has a strictly higher priority than the active behaviour, so our Arbitrator model does not support the implementation of such a behaviour.


Motivation functions:


First it must be noted that the behaviour framework as described in [4] is not well suited for this task. Similarly to the last exercise, a behaviour-based system is best geared towards an interaction model where the different behaviours can be interchanged at any given time, disregarding the state the current behaviour is in at the time of the switch.
Nothing more than the suppress call should be needed to make a smooth transition possible.


If the touch sensor is pressed again while turning, it would still not make sense to reactivate the action. Since the rotation hasn't completed, we do not know whether the robot will end up pointing towards the object it pressed against, or in a completely different direction that needs no further turning. Another action, dictating backing off a little before completing the turn, might be more suitable.


Let the integer range 0-100 define our motivation values, where 0 is the least motivating and 100 the most motivating. We would obviously reserve the Exit behavior as the most important, and it is therefore the only behavior allowed to use motivation 100 (since we don't want to wait for our robot to exit). The takeControl() methods would then return an integer representing the importance of the behavior in the current situation: if the touch sensor registers an input it is important to start turning, and if it doesn't it is very unimportant, returning a high and a low motivation respectively. The drive-forward behavior would return a middling integer, so it can both be overridden and seize control easily.
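As a sketch (the interface and the selection rule below are our own invention, not part of the LeJOS API), this could look like:

// Sketch of the motivation-function idea: takeControl() returns an integer
// in 0..100 instead of a boolean, and a hypothetical arbitrator simply
// picks the behaviour with the highest motivation.
interface MotivatedBehavior {
    int takeControl(); // 0 = no desire to run, 100 = must run now
    void action();
    void suppress();
}

class MotivationArbitrator {
    static MotivatedBehavior select(MotivatedBehavior[] behaviors) {
        MotivatedBehavior best = null;
        int bestMotivation = -1;
        for (int i = 0; i < behaviors.length; i++) {
            int m = behaviors[i].takeControl();
            if (m > bestMotivation) { // ties resolved by array order
                bestMotivation = m;
                best = behaviors[i];
            }
        }
        return best;
    }
}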



Conclusions and further work:
From this lab session we have learned that behaviour-based models aren't fit for every task in robotics.
Some of the exercises left us trying to figure out how to make the screw work with the hammer, which wasn't an easy task.
In the future, more time could be spent actually trying to implement the integer-returning takeControl() methods with a new Behaviour interface and an Arbitrator able to handle these return values.
This would allow for a significantly more dynamic Behaviour selection.

References:
  1. http://daimi.au.dk/~fx/dLego/BumperCar.java - The modified BumperCar.java
  2. http://daimi.au.dk/~fx/dLego/Arbitrator.java  - The Arbitrator.java from developer checkout of LeJOS source
  3. http://daimi.au.dk/~fx/dLego/Behaviour.java - The Behaviour.java from developer checkout of LeJOS source
  4. http://legolab.daimi.au.dk/DigitalControl.dir/Krink.pdf

Monday, 15 November 2010

Finding your place in the world

Starting up:
Participants: Johan & Jakob
Date: 11/11-2010
Duration: 6 Hours
Goal: Experiment with different approaches regarding calculating the position of the robot


Out-of-the-box
In the current version of LeJOS (0.85 beta), the robotics navigation package provides, among other things, SimpleNavigator and TachoPilot. SimpleNavigator allows the programmer to control the robot in terms of Cartesian coordinates, given a pilot class; the pilot class gives the navigator a more abstract control over the robot. The table below gives the distance from the starting point to the end point of a four-step route, with the robot starting in (0,0) and trying to stop in (0,0).
Wheel diameter (mm)    Error (cm)
81.6                   50
81.6                   65
81.6                   80
81.6                   75
49.6                   15
49.6                   22
49.6                   13
49.6                   7
49.6                   7


As is seen from the above, the smaller wheel size actually did quite well with regard to returning to the robot's point of origin, while the big wheels made the robot so bad at returning to its starting position that an external observer would not have been able to tell that returning to the start was the intended behaviour. We attribute this to the fact that even a tiny error in the motor usage translates into a large deviation from the course when actuated through larger wheels.
The difference between the robot's ability to determine distance and its ability to do precise turns can be attributed to the same issue. This is accentuated with the large wheels, but even with the smaller wheels attached, a slight error in angle will over distance grow into a large error in position. This is even more true for turning, as the motor rotations needed to turn are far more minute than the rougher movement needed just to go straight.
We had the added problem of being unable to get the robot to drive in straight lines, as it curves a bit to the right when trying to go straight. We attribute this to poor construction and, more significantly, differences between the motors. We tried a few different motors and were able to get somewhat better results, but this remains a possible avenue for improving the robot's general performance further.
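For reference, here is a sketch of the kind of four-step route program used for the measurements above. The track width, motor ports and the route itself are illustrative, and the package locations have moved between LeJOS releases:

import lejos.nxt.Motor;
import lejos.robotics.navigation.SimpleNavigator;
import lejos.robotics.navigation.TachoPilot;

// Sketch of a four-step route that should end back at the origin.
// Units are cm here; the track width is a guess, not a measured value.
public class SquareTest {
    public static void main(String[] args) {
        TachoPilot pilot = new TachoPilot(8.16f, 12.0f, Motor.A, Motor.C);
        SimpleNavigator nav = new SimpleNavigator(pilot);
        nav.goTo(50, 0);
        nav.goTo(50, 50);
        nav.goTo(0, 50);
        nav.goTo(0, 0); // ideally back where we started; the table shows the actual error
    }
}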


From A to B (via Z?)
The challenge now is to propose a solution that takes care of obstacles while navigating the world as a simple Cartesian coordinate system. Concretely, that means that you must be able to call goTo(x,y) and end up at (x,y), even though there is an obstacle between your current position and (x,y). In earlier weeks we have built wall followers, line followers and more generic avoidance bots. We have devised a very basic strategy building on the fact that, as long as we only use goTo(x,y) calls rather than interacting directly with the motors, the state is consistent with the robot's actual position - if not absolutely, then at least as well as we can generally maintain it. Thus, for each goTo(x,y), when detecting an object (via touch, ultrasonic or more arcane means) we can go off on some non-parallel vector from our current direction and then, when satisfied, call goTo(x,y) again. We have tried to illustrate this approach in this picture:


This approach should work, given that we can at all times obtain the vector between our current position and the desired end position.
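A sketch of the strategy, assuming a navigator that offers a non-blocking goTo and can be polled while driving (the obstacle test is hypothetical, and exact signatures differ between LeJOS releases):

import lejos.robotics.navigation.SimpleNavigator;

// Sketch of the detour strategy: drive towards (x,y), and whenever an
// obstacle is detected leave the current heading on a non-parallel vector
// before aiming for (x,y) again. obstacleAhead() stands in for a touch or
// ultrasonic check.
class DetourDriver {
    private final SimpleNavigator nav;

    DetourDriver(SimpleNavigator nav) {
        this.nav = nav;
    }

    void goToAvoiding(float x, float y) {
        nav.goTo(x, y, true); // immediateReturn, so we can keep polling
        while (nav.isMoving()) {
            if (obstacleAhead()) {
                nav.stop();
                nav.rotate(45);       // step off on a non-parallel vector
                nav.travel(20);       // drive clear of the obstacle
                nav.goTo(x, y, true); // then head for the target again
            }
        }
    }

    private boolean obstacleAhead() {
        return false; // placeholder for the actual touch/ultrasonic detection
    }
}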


Better sensing

In [2] they use "states" to differentiate between turning and moving straight forward. When moving straight, the average of the encoder counts is used to determine the distance traversed by the robot. This gives a certain error margin, which grows as the robot travels further if the wheels or motors are not correctly calibrated or have defects. When turning, the method uses trigonometric calculations to compute a new heading based on the number of counts on each wheel; assuming the wheels do not slip, this is quite accurate.

The kinematics approach uses both counters constantly and does not differentiate between moving and turning. By calculating a new position for each step it is able to make smooth turns and because it does not assume that moving straight forward equals the average count of each wheel, it is better able to keep track of its position. It also, of course, assumes that the wheels do not slip on the surface, as all the calculations are based around the number of counts on each wheel.

Because the kinematics approach does not use as many assumptions, it is more accurate when determining the position of the robot.
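As a sketch, the per-step pose update in the kinematics approach looks roughly like this (standard differential-drive odometry, not code taken from [2]):

// Sketch of a kinematics-style update: instead of treating "straight" and
// "turning" as separate states, the pose is updated a little for every pair
// of encoder increments. trackWidth and wheelDiameter share units with x, y.
class Odometry {
    double x, y, heading; // heading in radians
    final double wheelCircumference;
    final double trackWidth;
    final double ticksPerRevolution;

    Odometry(double wheelDiameter, double trackWidth, double ticksPerRevolution) {
        this.wheelCircumference = Math.PI * wheelDiameter;
        this.trackWidth = trackWidth;
        this.ticksPerRevolution = ticksPerRevolution;
    }

    void update(int leftTicks, int rightTicks) {
        double dLeft = leftTicks / ticksPerRevolution * wheelCircumference;
        double dRight = rightTicks / ticksPerRevolution * wheelCircumference;
        double dCenter = (dLeft + dRight) / 2.0;
        double dTheta = (dRight - dLeft) / trackWidth;      // change in heading
        x += dCenter * Math.cos(heading + dTheta / 2.0);    // assumes the arc is short
        y += dCenter * Math.sin(heading + dTheta / 2.0);
        heading += dTheta;
    }
}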

Based on the observations from testing the LeJOS navigation system, we believe that the system is very similar to the algorithm in [2], since the robot sometimes diverged due to crooked tires and made no attempt to compensate for this on the return trip. We believe that using the kinematics solution would have given the robot a more accurate estimate of its position, thereby enabling it to compensate for the error on the return trip.



References

Thursday, 28 October 2010

A clever, yet ineffective stroll in the park

Starting up:
Participants: Johan & Jakob
Date: 6/10-2010
Duration: 4 Hours
Goal: Solve the Alishan Train Track as fast as possible


Different Strategies
As the track is well defined, the robot has no real need to read the course and react to sensor inputs.
Considering this, and the fact that the score is determined only by the best run and not by the robot's ability to perform at any given time, the best approach is probably to just hardcode the track and optimise for speed.
This strategy, however, seems tedious and not a very interesting approach.

Another category of strategies is those that try to read the black line with the light sensors and navigate the course based on this. We chose this "inferior" approach after seeing another team run the track much faster than we ever expect to be able to; if we can't be the best, we can possibly make a more clever solution.

Staying on course
This particular track presents the robot with two specific challenges beyond the more or less trivial line-following issue. First, the track differs from a normal line-following track by having a spike that goes to the edge of the track on each of the two plateaus. This means that the standard procedure for following the line with two sensors cannot be used, as the sensors would also detect the spike, which is not desirable when we want to follow the line and not the spike. As if this did not complicate matters enough, the field is also no longer just a flat plane but a three-dimensional course. Due to the placement of the light sensors, every time the robot climbs onto one of the plateaus it tilts, exposing the light sensors to less reflected light, which complicates the process further.

One is better than two?
The way we approached these challenges was to build a standard 5-minute bot from nxtprograms.com and mount two light sensors on this chassis. Our strategy was then to use only one of the sensors at any given time for following the line, using the other only to keep track of where we are by counting the number of times it has seen something dark (either the spike or the tilt of the robot). After we have passed the spikes we can then switch which sensor follows the line and which keeps track of where we are. This way we avoid the problem of following the line with two sensors. It leaves us with a rather inefficient bang-bang line follower, which, time allowing, could be substituted with a better PID line follower.
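A sketch of the idea (ports, thresholds, the event count and the steering directions are all illustrative and would need calibration on the real robot):

import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Sketch: one sensor does bang-bang line following while the other only
// counts dark events; after enough events the roles are swapped.
public class OneEyedFollower {
    static final int DARK = 40; // calibrated light threshold (assumption)

    public static void main(String[] args) {
        LightSensor left = new LightSensor(SensorPort.S1);
        LightSensor right = new LightSensor(SensorPort.S2);
        boolean leftFollows = true; // which sensor currently follows the line
        boolean counterWasDark = false;
        int darkEvents = 0;

        while (darkEvents < 4) { // stop after the last expected dark event
            LightSensor follower = leftFollows ? left : right;
            LightSensor counter = leftFollows ? right : left;

            // bang-bang steering on the following sensor (directions schematic)
            if (follower.readValue() < DARK) {
                Motor.A.forward();
                Motor.C.stop();
            } else {
                Motor.C.forward();
                Motor.A.stop();
            }

            // count light-to-dark transitions on the other sensor
            boolean dark = counter.readValue() < DARK;
            if (dark && !counterWasDark) {
                darkEvents++;
                if (darkEvents == 2) {
                    leftFollows = !leftFollows; // swap roles after passing the spike
                }
            }
            counterWasDark = dark;
        }
        Motor.A.stop();
        Motor.C.stop();
    }
}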


We said we wouldn't hardcode it, but..
Instead of hardcoding the whole process we hardcoded a sequence of behaviours: start in state 1 where we have not seen anything yet, count up until we switch, and so on. This approach was simple and worked in most cases, albeit slowly. The trouble with it is that the sensor sometimes registered dark while following the line in a non-tricky area, which can sabotage the clever partition of the track into distinct parts, switching sensor behaviour too early or not being ready to detect the end zones when entering them.
We were able to complete the track most times and are as such content with the result.
Some might say we got the best of both worlds, the simplicity and effectiveness of the hardcoded approach combined with the resiliency of the algorithmic approach. 
We still had to tinker with the tedious count values and were not able to complete the track every time, so evil tongues might say we got the drawbacks of both worlds: the tediousness of hardcoding it and the complexity of solving the problem algorithmically.

Thursday, 7 October 2010

The first steps

Starting up:
Participants: Johan & Jakob
Date: 30/9-2010
Duration: 3 Hours
Goal: Construct a segway and make it balance.


The Inner Ear
As the standard Lego kits do not contain a gyroscope sensor to monitor the robot's tilt, we took inspiration from the exercise sheet and used the light sensor to determine in which direction the robot is currently falling.
This brings all the usual trials and tribulations of using the light sensor: susceptibility to the environment, both external light sources and the texture and color of the surface on which we tried to make the Failbot balance.

Knowing which way is up
Similarly to the code handed out, we start with an initial calibration where we obtain the light sensor value representing the desired state: the robot in balance. When the robot tilts, the light sensor moves closer to or further away from the surface, and we can thus use this value to compute which way we want the motors to turn.
Heavily inspired by the handout, we used a similar PID scheme to avoid overshooting too much.
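Stripped down to its core, and leaving out the I and D terms, the loop looks roughly like the following sketch (all constants, ports and the direction mapping are illustrative and depend on the build):

import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Very reduced sketch of the balancing loop: a pure P term only. The
// handout's scheme also has I and D terms; all values here are illustrative.
public class BalanceSketch {
    public static void main(String[] args) throws InterruptedException {
        LightSensor eye = new LightSensor(SensorPort.S1); // pointing at the surface
        int setpoint = eye.readValue();                   // calibration: the balanced reading
        final int KP = 30;                                // illustrative gain

        while (true) {
            int error = eye.readValue() - setpoint;       // sign tells which way we are falling
            int speed = Math.abs(KP * error);
            if (speed > 900) speed = 900;                 // cap the motor speed
            Motor.A.setSpeed(speed);
            Motor.C.setSpeed(speed);
            if (error > 0) { Motor.A.forward();  Motor.C.forward();  }
            else           { Motor.A.backward(); Motor.C.backward(); }
            Thread.sleep(5);
        }
    }
}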




Standing Tall
After a bunch of failures of epic proportions - mostly the Failbot seemed to make an exhaustive search through the space of possible ways to fall, tilt and run off tables - we finally got the Failbot to keep its balance for a few seconds, and there was much rejoicing. One very practical issue is that it has been very hard to set a proper initial state, since poorly constructed Lego robots might not be very well balanced, and the robot is programmed such that it cannot balance better than the initial state it is supplied with. An improvement could be to use the gyroscope sensor that can be obtained; beyond being a better sensor for the task, it also makes a decent calibration possible. If the robot is placed lying down, one could use that value to obtain an initial state (perpendicular to the lying-down state) and balance with that in mind. Using another light sensor on the opposite side of the robot might also improve the balancing.

Friday, 17 September 2010

Working with Sound

Starting up:
Participants: Johan & Jakob
Date: 16/9-2010
Duration: 3 Hours
Goal: Attach the Sound sensor to Failbot 9797 and test the sensor as described in the given exercise.

Building the robot was not a challenge, other than locating the correct blocks in the box.

Analyzing the input
After creating a basic program similar to SonicSensorTest.java, but with respect to the sound picked up by the sound sensor, we tested the sensor by producing various noises at different distances and angles.
Increasing the distance did lower the readings, as one would expect: the farther away, the lower the readings. However, the angle the sound was coming from had a much bigger impact; a sound produced from behind the sensor gave a much lower reading than a sound produced right in front of it.
Afterwards we tested the DataLogger, which worked as expected.

Sound Controlled Car
We uploaded the program to the Failbot 9797 and ran it. It started to drive as expected and responded to various sounds by driving, turning each way and then stopping. Clapping was the least effective means of controlling it: claps had to be very loud and preferably right in front of the car for it to register. Johan tried making a high-pitched yelping sound, which was very effective and worked almost every time; this appeared to be the best way of controlling it.

Turning it off was a challenge until we read the code and realized why. We fixed this by using a ButtonListener which calls System.exit() when the ESCAPE button is pushed. Very effective.
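The fix amounts to something like the following sketch, with the listener registered early in the program:

import lejos.nxt.Button;
import lejos.nxt.ButtonListener;

// Sketch of the escape hatch: a listener that terminates the program as
// soon as ESCAPE is pressed, regardless of what the main loop is doing.
public class EscapeHatch {
    public static void install() {
        Button.ESCAPE.addButtonListener(new ButtonListener() {
            public void buttonPressed(Button b) {
                System.exit(0);
            }
            public void buttonReleased(Button b) {
                // nothing to do on release
            }
        });
    }
}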

Clap Controlled Car
We discussed ways of analyzing the input from the sensor to determine whether a clap had been made. In the end we decided to implement it with three if-statements that determine whether the data corresponds to the pattern described in the exercises. This technique proved to be very effective - more effective than either of us expected - and it was able to detect claps almost perfectly, while also filtering out other sounds such as the yelping sound which was so effective earlier.
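A sketch of the idea (the thresholds and time windows below are illustrative, not the exact values from the exercise description):

import lejos.nxt.SensorPort;
import lejos.nxt.Sound;
import lejos.nxt.SoundSensor;

// Sketch of the three-if clap detector: quiet, then a sharp rise within a
// short window, then quiet again within a longer window.
public class ClapDetector {
    static final int LOW = 50, HIGH = 85; // illustrative amplitude thresholds

    public static void main(String[] args) throws InterruptedException {
        SoundSensor mic = new SoundSensor(SensorPort.S2);
        while (true) {
            if (mic.readValue() < LOW) {                       // 1) quiet before the clap
                long t0 = System.currentTimeMillis();
                boolean rose = false;
                while (!rose && System.currentTimeMillis() - t0 < 25) {
                    rose = mic.readValue() > HIGH;             // 2) sharp rise within ~25 ms
                }
                if (rose) {
                    long t1 = System.currentTimeMillis();
                    while (System.currentTimeMillis() - t1 < 250) {
                        if (mic.readValue() < LOW) {           // 3) quiet again within ~250 ms
                            Sound.beep();                      // a clap was detected
                            break;
                        }
                    }
                }
            }
            Thread.sleep(1);
        }
    }
}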

Conclusions
The clap pattern described in the exercises proved to be very accurate, and the simple algorithm implemented was very effective. With experiments and data logging it would probably be possible to tweak the constants to get less thread sleeping, thereby enabling the Failbot to register more claps per minute; however, we didn't find time for this.