So far, I have covered the mechanical side of the miniature soccer robots my school team made last year, but I have not gone over the software that ran on them (or was intended to).
Most competitors at the level we competed at based their robots' control on some combination of Arduinos, ultrasonic sensors, infra-red sensors (the soccer ball emits IR light), digital compasses and colour sensors. The Arduino is programmed to process all the sensor inputs and move the robot accordingly to find the ball, shoot goals and defend. The ultrasonic sensors can be used to find the robot's location on the field by pinging the raised walls around it, provided the robot is properly aligned (which is where the compass comes in). Having ultrasonic sensors pointing forwards, backwards and to the sides gets around the problem of other robots blocking the sensors: if the distances measured don't add up to the length or width of the field, discard the readings, or perhaps work out which sensor is being blocked (a sketch of this sanity check follows below). Colour sensors provide another source of positioning data, because the fields are either marked with black lines on green, or divided into zones of slightly different shades of green. Unfortunately, location and orientation finding cannot be based on something like an optical mouse or rotary encoder, because the robots are frequently picked up and repositioned by referees after rule infringements or goals. Calibration only gets to happen once, at the start of the match, so the compass can be set to point towards the opposition's side of the field.
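To make that sanity check concrete, here is a minimal C++ sketch of the idea. Everything in it is hypothetical: the robot length, field length and tolerance are placeholders, not measurements from a real field.

```cpp
#include <cmath>

// Opposite-facing ultrasonic sensors, plus the robot's own length, should sum
// to the known field dimension; if they don't, another robot is probably in
// the way and the readings should be discarded.
bool readingsPlausible(float frontCm, float backCm,
                       float robotLengthCm = 18.0f,   // placeholder robot size
                       float fieldLengthCm = 243.0f,  // placeholder field length
                       float toleranceCm   = 10.0f)   // placeholder tolerance
{
    float total = frontCm + backCm + robotLengthCm;
    return std::fabs(total - fieldLengthCm) <= toleranceCm;
}
```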
Our team agreed to take a more adventurous approach: a camera and a computer vision library, with our own software running on an ARM processor (as the person who would have to write the computer vision and tactics code, I convinced the team to let me bite off more than I could chew). Webcams are cheap and plentiful, there are plenty of small, powerful ARM boards on the market, and there is a comprehensive open-source computer vision library, OpenCV, so all the ingredients for the project exist. Before getting into the details: this system failed because the execution of the project was lacking, though I gained a lot of useful programming, embedded ARM Linux and computer vision experience along the way. Looking back on it now, the traditional Arduino approach is no walk in the park either; fingers crossed it proves effective, as that is what we are working on for this year's competition.
We used OpenCV to identify the field and estimate the robot's position from perspective, knowing the actual size of the field and how much of it was visible from our viewpoint. OpenCV was also used to recognise the goals and the ball. I also wrote a module to automatically tune the HSV colour ranges that the robot treats as the field and the goals, because we couldn't know how they would appear under the lighting on the day. We hoped to eventually extend the vision code to recognising friendly and enemy robots, but we never got close to that. None of the computer vision techniques we used involved machine learning, and the methods I came up with were crude and primitive compared with what is going on in the field. Note also that all the following Windows screenshots are from testing and development on the computer; the code was later ported to the Odroid, with optimisations in a few places and the GUI disabled.
Above: a trial of the field identification and dimensioning. It's not an actual field, but the code is easily adapted, since in both cases the "field" is square and there is a clear colour difference between the field and its surroundings. First an edge detection was done, then the Hough straight-line transform. This of course still let straight lines in the background through, so the colour of the image directly on each side of every line segment was checked: one side had to be the field colour and the other the background colour. Lines of similar gradient and x/y intercept were then merged and extended, and the intersections between them gave the corners of the field.
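As an illustration of the line-filtering step, here is a rough C++/OpenCV sketch of the approach, not the actual robot code. The Canny and Hough thresholds, the sampling distance either side of each segment, and the assumed fieldMask input (the HSV threshold from the next step) are all guesses, and the merging of similar lines is left out.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only Hough segments that have field colour on exactly one side.
std::vector<cv::Vec4i> findFieldEdges(const cv::Mat& bgr, const cv::Mat& fieldMask)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);                              // edge detection

    std::vector<cv::Vec4i> segments, fieldLines;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 60, 40, 10); // straight segments

    cv::Rect frame(0, 0, bgr.cols, bgr.rows);
    for (const cv::Vec4i& s : segments) {
        cv::Point2f p1((float)s[0], (float)s[1]);
        cv::Point2f p2((float)s[2], (float)s[3]);
        cv::Point2f d = p2 - p1;
        float len = std::hypot(d.x, d.y);
        cv::Point2f n(-d.y / len, d.x / len);                     // unit normal
        cv::Point2f mid = (p1 + p2) * 0.5f;

        // Sample a few pixels on each side of the segment's midpoint.
        cv::Point a(cvRound(mid.x + 5 * n.x), cvRound(mid.y + 5 * n.y));
        cv::Point b(cvRound(mid.x - 5 * n.x), cvRound(mid.y - 5 * n.y));
        if (a.inside(frame) && b.inside(frame)) {
            bool aOnField = fieldMask.at<uchar>(a) > 0;
            bool bOnField = fieldMask.at<uchar>(b) > 0;
            if (aOnField != bOnField)      // field colour on one side only
                fieldLines.push_back(s);
        }
    }
    return fieldLines;  // merging similar lines and intersecting them comes next
}
```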
Above we see thresholding of the field colour, a bit like green-screening. As mentioned before, identifying the colour of the field is an important part of identifying the field itself, but hard-coding the shade of green doesn't work: the camera sometimes does funky colour correction, and the lighting in two places is rarely the same. So what has happened in the screenshot above is auto-tuning: I hard-coded a really wide HSV green range into the software, and it narrowed the range down by itself. The key observation is that when the colour range is well chosen, the colour mask has very few blobs, so the tuner iterates through candidate HSV ranges and picks the one that gives the fewest blobs (a rough sketch follows below). Note that the image used was found on the internet and isn't quite the same as the environment our robot had to operate in; our robot and test field weren't ready when I was writing the OpenCV code.
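Here is a rough sketch of the auto-tuning idea in C++/OpenCV, only varying the lower hue bound for brevity. The ranges, the step size and the "fewest blobs" criterion are simplified stand-ins for whatever the real tuner swept over.

```cpp
#include <opencv2/opencv.hpp>
#include <climits>

// Shrink a deliberately wide green range, keeping the candidate lower bound
// that produces the fewest connected components in the mask.
cv::Scalar tuneLowerHue(const cv::Mat& bgr)
{
    cv::Mat hsv, mask, labels;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    int bestBlobs = INT_MAX;
    int bestHueLow = 30;                          // wide starting guess for "green"
    for (int hueLow = 30; hueLow <= 60; hueLow += 2) {
        cv::inRange(hsv, cv::Scalar(hueLow, 40, 40), cv::Scalar(90, 255, 255), mask);

        // connectedComponents counts the background as label 0, so subtract 1.
        int blobs = cv::connectedComponents(mask, labels, 8, CV_32S) - 1;
        if (blobs > 0 && blobs < bestBlobs) {     // an empty mask is not a useful answer
            bestBlobs = blobs;
            bestHueLow = hueLow;
        }
    }
    return cv::Scalar(bestHueLow, 40, 40);        // lower HSV bound; sweep S and V similarly
}
```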
Above is the ball detection process. The ball gives off a few things that are easy for a camera to see and rarely found together anywhere else: white dots from specular reflections and the centres of the LEDs, red regions from the red LEDs, and purple patches from the IR LEDs. Although 920 nm infrared light is invisible to the human eye, it renders as faint purple on cameras that don't filter it out properly. Top left is the input image; top right is looking for purple (which didn't work so well and probably needed better HSV ranges), bottom left is looking for red, and bottom right is looking for white. Each of those coloured blobs (visualised so you can see what the software found) was put onto its own black-and-white mask and massively grown in size. The red and purple masks were then unioned, since the purple only showed up reliably at close range, and the intersection of that combined mask with the white mask was taken; this removed the false points seen in the last two images (a sketch of the mask combination follows below). From there the centre of the ball could be estimated easily, and its distance, with more difficulty, from its apparent size. I had two other screenshots, one of the combined mask with the estimated centre of the ball and one of the ball being identified a couple of metres away, but I've since lost those images and changed the code, so they aren't easy to recreate.
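A hedged C++/OpenCV sketch of the mask combination is below; the HSV ranges and the dilation kernel size are placeholders, not the values used on the robot.

```cpp
#include <opencv2/opencv.hpp>

// Union of the red and purple masks, intersected with the white mask,
// after growing every blob so nearby detections overlap.
cv::Mat ballCandidateMask(const cv::Mat& bgr)
{
    cv::Mat hsv, white, red, purple;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Bright specular highlights / LED centres: low saturation, high value.
    cv::inRange(hsv, cv::Scalar(0, 0, 220), cv::Scalar(180, 40, 255), white);
    // Red LEDs (only the low-hue half of red, for brevity).
    cv::inRange(hsv, cv::Scalar(0, 120, 120), cv::Scalar(10, 255, 255), red);
    // Faint purple rendering of the IR LEDs.
    cv::inRange(hsv, cv::Scalar(130, 40, 120), cv::Scalar(160, 255, 255), purple);

    // Grow every blob (placeholder kernel size).
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(41, 41));
    cv::dilate(white, white, kernel);
    cv::dilate(red, red, kernel);
    cv::dilate(purple, purple, kernel);

    // Red OR purple (purple is unreliable at range), then AND with white:
    // a candidate must show both cues.
    cv::Mat redOrPurple, candidates;
    cv::bitwise_or(red, purple, redOrPurple);
    cv::bitwise_and(redOrPurple, white, candidates);
    return candidates;
}
```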
Seen above is the goal detection. The goals had blue backings, and sometimes one side would be yellow and the other blue; the standards in these competitions don't seem to be widely agreed upon. As a side note, the field in this image is the same as the one at our competition; we didn't have goals at our school test field, so this image is again from the internet. To identify a goal, the robot would first be driven up close to one so that it covered most of the field of view, and the blue or yellow colour would be auto-tuned; shown above are the steps after this. The whole image is thresholded for the blue colour, but there are clearly erroneous points in the mask. To get rid of these, the code checks the colour at ten points along the bottom of each blob, a few pixels below it: if the blob really is a goal, the green of the field should be underneath (an illustrative version is sketched below). The centre of the goal blob is then taken, and in the top-right window you can see it rendered. If this technique hadn't worked, plan B would have been to use the squareness of the goal to identify it.
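Below is an illustrative C++/OpenCV version of the "green underneath" check; cv::findContours stands in for whatever blob-finding the real code used, and the blob-size and hit-count thresholds are guesses.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Keep goal-coloured blobs only if the field colour appears just below them.
std::vector<cv::Rect> findGoalBlobs(const cv::Mat& goalMask, const cv::Mat& fieldMask)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(goalMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> goals;
    for (const auto& c : contours) {
        cv::Rect box = cv::boundingRect(c);
        if (box.area() < 500) continue;                 // ignore specks (guessed threshold)

        // Sample 10 points just below the blob: a real goal sits on the field.
        int greenHits = 0;
        int y = std::min(box.y + box.height + 5, fieldMask.rows - 1);
        for (int i = 0; i < 10; ++i) {
            int x = box.x + (i + 1) * box.width / 11;
            if (fieldMask.at<uchar>(y, x) > 0) ++greenHits;
        }
        if (greenHits >= 7) goals.push_back(box);       // hit count is a guess
    }
    return goals;
}
```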
After getting these techniques nutted out, I moved them to the Odroid and got them running a lot faster. For instance, for whatever reason, growing the white pixels in the masks by a large amount (morphological dilation with 30 to 70 pixels added around each edge) took seconds per frame. From memory, in the one place this mattered, the ball-finding code, I skipped the dilation of the red, purple and white masks entirely: I used a blob detector to find the white parts of each mask and stamped a big circle on the centre of every identified blob. A hack, but it worked blindingly fast (see the sketch below). It also made it easy to draw the little circles in the visualisation a few images above: I just made the radii small, changed the colour and drew them onto copies of the original image.
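Here is a sketch of that workaround in C++/OpenCV; cv::SimpleBlobDetector stands in for whichever blob detector the code actually used, and the detector parameters and circle radius are illustrative.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Approximate a big dilation by stamping filled circles at blob centres.
cv::Mat growByCircles(const cv::Mat& mask, int radius)
{
    // SimpleBlobDetector looks for dark blobs by default, so flip it to bright
    // and turn off the shape filters.
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;
    params.filterByArea = false;
    params.filterByCircularity = false;
    params.filterByConvexity = false;
    params.filterByInertia = false;
    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(mask, keypoints);

    cv::Mat grown = cv::Mat::zeros(mask.size(), CV_8UC1);
    for (const cv::KeyPoint& kp : keypoints)
        cv::circle(grown, kp.pt, radius, cv::Scalar(255), cv::FILLED);
    return grown;
}
```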
So, for the competition, with only enough time for a bodged solution, I began piecing together the monolithic program which had to do the computer vision and send data over serial to the Arduino, on which the rest of my team had written the movement code (harder than it sounds for an omniwheel robot with no rotary encoders on the wheels) and some simple tactics. The tactics, had they been more complicated, should have lived on the Odroid, but there was no time for that! The field detection code was useless at this point, as there was no code to pair it with that could determine the robot's position from the perspective and apparent size of the sides of the field. That left colour auto-tuning, ball detection, goal detection and serial communication. I got these integrated the night before the competition and on the way there. I also removed all the GUI code and never got the chance to test with the robot, because a team member had it. So in the hour before the competition, at the venue, we got to test the whole system together. It didn't work, of course: the software was crashing somewhere in the goal detection, so that module was commented out. Then, even though the auto-tuning and ball detection were working, sending data to the Arduino over UART wasn't: the Arduino was receiving garbled strings, even though we could read from it fine. We never got that part working. In Linux, writing to the serial port should be as easy as writing to and reading from a file once you've changed a few settings of the serial port, in this case /dev/ttyACM0. I even got it working on my Linux laptop, but the same code didn't play nicely on the Odroid. I think I didn't get the configuration of the serial port right, as in none of the settings I thought I was applying were actually being applied (a minimal sketch of the setup I should have double-checked is below). I should have used the Boost serial communications library.
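For reference, here is a minimal sketch of the POSIX serial setup I suspect went wrong; the key point is putting the port into raw mode and actually checking that tcsetattr succeeds. The baud rate here is an assumption.

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

// Open and configure a serial device for raw 8N1-style I/O.
int openSerial(const char* device = "/dev/ttyACM0")
{
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    termios tty{};
    if (tcgetattr(fd, &tty) != 0) { perror("tcgetattr"); close(fd); return -1; }

    cfmakeraw(&tty);                 // raw mode: no line editing, no CR/LF translation
    cfsetispeed(&tty, B115200);      // assumed baud rate matching the Arduino sketch
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD); // ignore modem control lines, enable receiver
    tty.c_cc[VMIN]  = 0;             // return from read() even with no data...
    tty.c_cc[VTIME] = 10;            // ...after a 1 second timeout

    if (tcsetattr(fd, TCSANOW, &tty) != 0) {   // this return value must be checked
        perror("tcsetattr");
        close(fd);
        return -1;
    }
    return fd;
}

// Usage (hypothetical message format):
//   int fd = openSerial();
//   if (fd >= 0) write(fd, "B:123,45\n", 9);
```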
And that, folks, was where the project was left after doing abysmally in the competition. OpenCV and the Odroid have been abandoned, though writing this piece makes me think we got close to something working - if we had ironed the bugs out... but who knows. I learnt a lot, though little to do with time management, and the Odroid, its Arduino shield and Linux make an awesome little platform. Maybe this blog will have some more computer vision projects in the future!