Thursday 28 October 2010

A clever, yet ineffective stroll in the park

Starting up:
Participants: Johan & Jakob
Date: 6/10-2010
Duration: 4 Hours
Goal: Solve the Alishan Train Track as fast as possible


Different Strategies
As the track is well-defined, the robot has no real need to read the course and react to sensor inputs.
Considering this, and the fact that the score is determined only by the best run and not by the robot's ability to perform consistently, the best approach is probably to just hardcode the track and optimise for speed.
This strategy, however, seems tedious and not a very interesting approach.

Another category of strategies is those that try to read the black line with the light sensors and navigate the course based on this. We chose this "inferior" approach after seeing another team run the track much faster than we ever expect to be able to; if we can't be the fastest, we can at least make a more clever solution.

Staying on course
This particular track presents the robot with two specific challenges beyond the more or less trivial line-following problem. First, the track differs from a normal line-following track in that a spike runs to the edge of the track on each of the two plateaus. This means that the standard procedure of following the line with two sensors cannot be used, as the sensors would also detect the spike, which is not desirable when we want to follow the line and not the spike. As if this were not complicating matters enough, the field is also no longer a flat plane but a three-dimensional course. Due to the placement of the light sensors, every time the robot climbs onto one of the plateaus it tilts and thus exposes the light sensors to less reflected light, which complicates the process further.

One is better than two?
The way we approached these challenges was to build a standard 5-minute bot from nxtprograms.com and mount two light sensors on the chassis. Our strategy was then to use only one of the sensors at any given time for following the line, using the other only to keep track of where we are by counting the number of times it has seen something dark (either a spike or the tilt of the robot). After we have passed the spikes, we can then switch which sensor follows the line and which keeps track of where we are. This way we avoid the problem of following the line with two sensors. It leaves us with a quite inefficient bang-bang line follower, which, time allowing, could be substituted with a better PID line follower.
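
To make the idea concrete, here is a minimal sketch in leJOS NXJ Java of the main loop: one sensor steers with bang-bang control while the other merely counts bright-to-dark transitions. The ports, motor assignments, speed and threshold are our assumptions for illustration and would need tuning on the actual robot.

import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

public class AlishanFollower {

    // Illustrative threshold separating "dark" from "bright"; must be
    // calibrated to the lighting conditions in the room.
    static final int THRESHOLD = 45;

    public static void main(String[] args) throws InterruptedException {
        LightSensor follower = new LightSensor(SensorPort.S1); // steers along the line edge
        LightSensor counter  = new LightSensor(SensorPort.S2); // counts spikes and tilts

        Motor.B.setSpeed(300);
        Motor.C.setSpeed(300);

        boolean wasDark = false;
        int darkCount = 0;

        while (true) {
            // Bang-bang steering: swing one way on dark, the other on bright.
            if (follower.readValue() < THRESHOLD) {
                Motor.B.forward();
                Motor.C.stop();
            } else {
                Motor.B.stop();
                Motor.C.forward();
            }

            // Count bright -> dark transitions on the other sensor; these are
            // the "dark events" used to keep track of where we are.
            boolean isDark = counter.readValue() < THRESHOLD;
            if (isDark && !wasDark) {
                darkCount++;
            }
            wasDark = isDark;

            Thread.sleep(10);
        }
    }
}

Swapping the roles of the two sensors after the spikes is then just a matter of exchanging the follower and counter references, as sketched in the next section.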


We said we wouldn't hardcode it, but...
Instead of hardcoding the whole process, we hardcoded a sequence of behaviours: start in state 1 where we have not yet seen anything, count dark events up until we switch, and so on. This approach was simple and worked in most cases, albeit slowly. The trouble is that a sensor sometimes registered dark while following the line in a non-tricky area, which can sabotage the clever partition of the track into distinct parts, switching sensor behaviour too early or leaving the robot unprepared to detect the end zones when entering them.
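
As a hypothetical sketch of that bookkeeping (the names and counts are ours for illustration, not verbatim from our program), the stage logic amounts to a small counter that tells the main loop when to swap sensor roles:

// Tracks progress through the hardcoded sequence of behaviours.
class StageTracker {

    // How many dark events to expect in each stage; these are the tedious
    // hand-tuned count values mentioned below.
    private final int[] eventsPerStage = {2, 2, 2, 2};

    private int stage = 0;
    private int seen = 0;

    // Called on every bright -> dark transition; returns true when the
    // follower and counter sensors should swap roles.
    boolean onDarkEvent() {
        seen++;
        if (stage < eventsPerStage.length && seen >= eventsPerStage[stage]) {
            stage++;
            seen = 0;
            return true;
        }
        return false;
    }
}

A spurious dark reading in a "safe" stretch of the track increments the counter just like a real spike does, which is exactly how the partition gets sabotaged.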
We were able to complete the track most of the time and are therefore content with the result.
Some might say we got the best of both worlds: the simplicity and effectiveness of the hardcoded approach combined with the resilience of the algorithmic approach.
We still had to tinker with the tedious count values and were not able to complete the track every time, so evil tongues might say we got the drawbacks of both worlds: the tediousness of hardcoding it and the complexity of solving the problem algorithmically.

Thursday 7 October 2010

The first steps

Starting up:
Participants: Johan & Jakob
Date: 30/9-2010
Duration: 3 Hours
Goal: Construct a segway and make it balance.


The Inner Ear
As the standard Lego kit does not contain a gyroscope sensor to monitor the robot's tilt, we took inspiration from the exercise sheet and used the light sensor to determine in which direction the robot is currently falling.
This brings all the usual trials and tribulations of using the light sensor: susceptibility to the environment, both external light sources and the texture and color of the surface on which we tried to make the Failbot balance.

Knowing which way is up
Similarly to the code handed out, we start with an initial calibration, where we obtain the light sensor value representing the desired state: the robot in balance. When the robot tilts, the light sensor moves closer to or further from the surface, and we can thus use this value to compute which way we want the motors to turn.
Heavily inspired by the handout, we used a similar PID scheme to avoid overshooting too much.
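
The control loop boils down to something like the following leJOS NXJ sketch. The gains, ports, sign convention and loop timing are illustrative assumptions, not the handout's actual values, and everything here would need tuning by trial and error.

import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

public class Failbot {

    public static void main(String[] args) throws InterruptedException {
        LightSensor eye = new LightSensor(SensorPort.S1);

        // Calibration: the reading while the robot is held upright becomes
        // the setpoint. The robot can never balance better than this state.
        int setpoint = eye.readValue();

        // Illustrative PID gains, found by trial and error.
        double kp = 20.0, ki = 0.5, kd = 8.0;
        double integral = 0;
        int lastError = 0;

        while (true) {
            // Tilting moves the sensor towards or away from the surface,
            // which shows up as a change in reflected light.
            int error = eye.readValue() - setpoint;
            integral += error;
            int derivative = error - lastError;
            lastError = error;

            int power = (int) (kp * error + ki * integral + kd * derivative);

            // Drive both wheels in the falling direction; 900 deg/s is
            // roughly the top speed of an NXT motor.
            int speed = Math.min(Math.abs(power), 900);
            Motor.B.setSpeed(speed);
            Motor.C.setSpeed(speed);
            if (power > 0) {
                Motor.B.forward();
                Motor.C.forward();
            } else {
                Motor.B.backward();
                Motor.C.backward();
            }

            Thread.sleep(5);
        }
    }
}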




Standing Tall
After a bunch of failures of epic proportions, in which the Failbot seemed to make an exhaustive search through the space of possible ways to fall, tilt and run off tables, we finally got the Failbot to keep its balance for a few seconds. There was much rejoicing. One very practical issue is that it was very hard to set a proper initial state, since poorly constructed Lego robots might not be very well balanced, and the robot is programmed such that it cannot balance better than the initial state it is supplied with. An improvement could be to use the gyroscope sensor that can be obtained; beyond being a better sensor for the task, it also opens up the possibility of a decent calibration. If the robot is placed lying down, one could use this reading to derive an initial state (perpendicular to the lying state) and balance with that in mind. Using another light sensor on the opposite side of the robot might also improve the balancing.
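
The lying-down calibration could look roughly like this, assuming a HiTechnic gyro read through leJOS's GyroSensor wrapper (our assumption; we have not tried this):

import lejos.nxt.SensorPort;
import lejos.nxt.addon.GyroSensor;

public class GyroCalibration {

    public static void main(String[] args) throws InterruptedException {
        GyroSensor gyro = new GyroSensor(SensorPort.S2);

        // With the robot lying flat and still, the average raw reading is
        // the sensor's zero-rate offset.
        double offset = 0;
        for (int i = 0; i < 100; i++) {
            offset += gyro.readValue();
            Thread.sleep(5);
        }
        offset /= 100;

        // Integrate angular velocity from the known lying position, which is
        // 90 degrees from upright; balancing then means driving the angle
        // towards 0 with a PID loop like the one above.
        double angle = 90.0;
        while (true) {
            angle += (gyro.readValue() - offset) * 0.005; // dt = 5 ms
            Thread.sleep(5);
        }
    }
}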