Starting up:
Participants: Johan & Jakob
Date: 6/10-2010
Duration: 4 Hours
Goal: Solve the Alishan Train Track as fast as possible
Different Strategies
As the track is well-defined, the robot does not actually need to read the course and react to sensor inputs.
Considering this, and the fact that the score is determined only by the best run rather than by the robot's ability to perform consistently, the best approach is probably to just hardcode the track and optimise for speed.
This strategy seems tedious, however, and not a very interesting approach.
Another category of strategies tries to read the black line with the light sensors and navigate the course based on that. We chose this "inferior" approach after seeing another team run the track much faster than we ever expect to be able to; if we can't be the fastest, we can possibly make a more clever solution.
Staying on course
This particular track presents the robot with two specific challenges beyond the more or less trivial line-following problem. First, the track differs from a normal line-following track by having a spike that runs to the edge of the track on each of the two plateaus. This means the standard procedure of following the line with two sensors cannot be used, as the sensors would also detect the spike, which is not desirable when we want to follow the line and not the spike. As if this were not complicating matters enough, the field is also no longer a flat plane, but a three-dimensional course. Due to the placement of the light sensors, every time the robot climbs onto one of the plateaus it tilts, exposing the light sensors to less reflected light, which complicates the process further.
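One way to keep the tilt-induced dip in reflected light from confusing dark detection is hysteresis: a reading must cross a low threshold to count as dark and a higher one to count as light again. The following is a hypothetical sketch of that idea in Python, not our actual robot code; the threshold values are illustrative.

```python
# Hysteresis-based dark detector (illustrative thresholds, not the
# values used on the robot). A reading must drop below DARK_ON to
# enter the "dark" state and rise above DARK_OFF to leave it, so the
# slightly lower readings caused by the robot tilting onto a plateau
# do not make the detector flicker on and off.

DARK_ON = 35   # enter "dark" below this raw light reading
DARK_OFF = 45  # leave "dark" above this raw light reading

def make_dark_detector():
    state = {"dark": False}

    def is_dark(reading):
        if state["dark"]:
            if reading > DARK_OFF:
                state["dark"] = False
        elif reading < DARK_ON:
            state["dark"] = True
        return state["dark"]

    return is_dark
```

The gap between the two thresholds is what absorbs the tilt: a reading of, say, 40 keeps whatever state the detector was already in.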
One is better than two?
The way we approached these challenges was to build the standard 5-minute bot from nxtprograms.com and mount two light sensors on its chassis. Our strategy was then to use only one of the sensors at any given time for following the line, using the other only to keep track of where we are by counting the number of times it has seen something dark (either the spike or the tilt of the robot). After we have passed the spikes we can switch which sensor follows the line and which keeps track of position. This way we avoid the problem of following the line with two sensors. It leaves us with a fairly inefficient bang-bang line follower, which, time allowing, could be replaced with a better PID line follower.
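The dual-role scheme can be sketched as a single control step: one sensor steers by bang-bang, the other edge-detects dark events, and after the expected number of events the roles swap. This is a hedged Python stand-in for the logic, not the actual NXT program; the threshold, event count, and function names are our own.

```python
# Sketch of the dual-sensor scheme: bang-bang steering on one sensor,
# dark-event counting on the other, with a role swap after enough
# events. All constants are illustrative.

THRESHOLD = 40           # readings below this count as dark
EVENTS_BEFORE_SWAP = 2   # e.g. the spike plus the tilt onto a plateau

def step(follow_reading, count_reading, state):
    """One control step: returns a steering command and updates state."""
    # Bang-bang control: turn one way on dark, the other way on light.
    command = "turn_left" if follow_reading < THRESHOLD else "turn_right"

    # Edge-detect dark events on the counting sensor, so a long dark
    # stretch is counted once rather than on every loop iteration.
    dark = count_reading < THRESHOLD
    if dark and not state["was_dark"]:
        state["events"] += 1
    state["was_dark"] = dark

    # After the expected number of events, swap the sensor roles.
    if state["events"] >= EVENTS_BEFORE_SWAP:
        state["events"] = 0
        state["swapped"] = not state["swapped"]
    return command
```

On the real robot the two readings would come from whichever physical sensor currently holds each role, with `state["swapped"]` deciding the assignment.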
We said we wouldn't hardcode it, but..
Instead of hardcoding the whole process we hardcoded a sequence of behaviours: start in state 1, where we have not yet seen anything, count dark events up until the switch, and so on. This approach was simple and worked in most cases, albeit slowly. The trouble is that a sensor sometimes registered dark while following the line in a non-tricky area. This can sabotage the clever partition of the track into distinct parts, causing the robot to switch sensor behaviour too early or to not be ready to detect the end zones when entering them.
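The behaviour sequence amounts to a small state machine driven by dark-event counts. Here is a hedged Python sketch of that structure; the section names and event counts are made up for illustration and are not the values tuned on the robot.

```python
# State machine over track sections: each section expects a number of
# dark events before the tracker advances to the next. Section names
# and counts are illustrative.

SECTIONS = [
    ("first_climb", 2),   # e.g. spike plus plateau tilt
    ("second_climb", 2),
    ("end_zone", 1),
]

def make_tracker():
    pos = {"section": 0, "events": 0, "was_dark": False}

    def on_reading(dark):
        """Feed one dark/light reading; returns the current section name."""
        # Count only light-to-dark transitions, not every dark reading.
        if dark and not pos["was_dark"]:
            pos["events"] += 1
            _, needed = SECTIONS[pos["section"]]
            if pos["events"] >= needed and pos["section"] < len(SECTIONS) - 1:
                pos["section"] += 1
                pos["events"] = 0
        pos["was_dark"] = dark
        return SECTIONS[pos["section"]][0]

    return on_reading
```

The failure mode described above maps directly onto this sketch: a spurious dark reading in a non-tricky area increments `events` and can advance the tracker a section too early.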
We were able to complete the track most times and are as such content with the result.
Some might say we got the best of both worlds, the simplicity and effectiveness of the hardcoded approach combined with the resiliency of the algorithmic approach.
We still had to tinker with the tedious count values and were not able to complete the track every time, so evil tongues might say we got the drawbacks of both worlds: the tediousness of hardcoding and the complexity of solving the problem algorithmically.