Wednesday, 8 July 2015

RoboCup Soccer Robot Pt. 2 - OpenCV (Computer Vision)

So far, I have covered the mechanical side of the miniature soccer robots my school team made last year, but I have not gone over the computer software that ran on them (or was intended to).

Most competitors at the level we competed at based the control of their robots around some combination of Arduinos, ultrasonic sensors, infra-red sensors (the soccer ball emits IR light), digital compasses and colour sensors. The Arduino is programmed to process all the data inputs and move the robot accordingly to find the ball, shoot goals and defend. The ultrasonic sensors can be used to find the location of the robot on the field by pinging the raised walls around the field, provided the robot is properly aligned (which is where the compass comes in). Having ultrasonic sensors pointing forwards, backwards and to the sides gets around the problem of other robots getting in the way of the sensors: if the total distance measured doesn't add up to the length or width of the field, discard the readings, or perhaps work out which sensor is being blocked (see the sketch below). Colour sensors are used as another source of positioning data, because the fields are either marked with black lines on a green surface, or the green surface is divided into zones of different shades of green. Unfortunately, position and orientation finding cannot be based on something like an optical mouse or rotary encoder, because the robots are often picked up and repositioned by referees when rules are broken or a goal is scored. Any calibration only gets to be done once: at the start of the match (so the compass can be set to point towards the opposition's side of the field).
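
To make the sanity check concrete, here's a minimal sketch in C++ (not anyone's actual Arduino code) of the idea: the field length, robot length and tolerance are made-up numbers, and the sensor readings are assumed to arrive from elsewhere.

```cpp
#include <cmath>

// Illustrative constants only -- real field and robot dimensions would differ.
const float FIELD_LENGTH_CM = 182.0f;  // hypothetical wall-to-wall distance
const float ROBOT_LENGTH_CM = 22.0f;   // front sensor to back sensor
const float TOLERANCE_CM    = 10.0f;

// Returns true and writes the robot's distance from the near wall if the
// front and back ultrasonic readings are consistent with the field length;
// otherwise another robot is probably blocking one of the sensors.
bool positionFromUltrasonics(float frontCm, float backCm, float *distFromWall) {
    float total = frontCm + backCm + ROBOT_LENGTH_CM;
    if (std::fabs(total - FIELD_LENGTH_CM) > TOLERANCE_CM)
        return false;  // readings inconsistent: discard them
    *distFromWall = backCm + ROBOT_LENGTH_CM / 2.0f;  // centre of the robot
    return true;
}
```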

Our team agreed that we could take a more adventurous approach and use a camera and a computer vision library running our own software on an ARM processor (I convinced the team to let me bite off more than I could chew as the person who would have to code the computer vision and tactics module). Webcams are cheap and plentiful, there are lots of small and powerful ARM boards on the market, and there is a comprehensive open-source computer vision library available: OpenCV. All the ingredients for this project exist. Before getting into the details - this system failed because the execution of the project was lacking, though I gained a lot of useful programming, embedded ARM Linux and computer vision experience. Looking back on it now, the traditional approach with an Arduino is no walk in the park either. Fingers crossed the Arduino approach is effective, as we are working on it for this year's competition.

We used OpenCV to identify the field and calculate our robot's position from perspective, knowing the actual size of the field and what could be seen from our position. It was also used to recognise the goals and the ball. I also created a module to automatically tune the HSV colour ranges that the robot recognises as the field and goals, because we couldn't know how they would appear under the lighting on the day. We were hoping to one day extend the CV component to recognising enemy and friendly robots, though our progress was a long way off that. None of the computer vision techniques used up to the point we stopped at involved machine learning, and the methods I came up with were crude and primitive compared with some of what is going on in the field. Note also that all the following Windows screenshots are from testing and development on the computer; the code was later ported to the Odroid, with optimisations in a few places and the GUI disabled.


Above: a trial of the field identification/dimensioning. It's not an actual field, but the code is easily adapted, as in both cases the "field" is a quadrilateral with straight sides and there is a clear colour difference between the field and its surroundings. First an edge detection was done, then the Hough straight line transform was applied. This of course still let straight lines in the background through, so the colour of the image directly to each side of every line segment was checked: one side had to be the field colour, and the other had to be the background colour. Then lines of similar gradient and x/y intercept were merged and extended, and the intersections between the lines gave the corners of the field.
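
For anyone wanting to try this, here's a rough sketch of the first two stages (edge detection plus the probabilistic Hough transform) using OpenCV's C++ API. The thresholds and Hough parameters are illustrative guesses, not the values our code used, and the side-of-line colour test is only indicated by a comment.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the line-finding stage: Canny edges, then probabilistic Hough.
std::vector<cv::Vec4i> findFieldEdgeSegments(const cv::Mat &bgr) {
    cv::Mat grey, edges;
    cv::cvtColor(bgr, grey, cv::COLOR_BGR2GRAY);
    cv::Canny(grey, edges, 50, 150);  // edge thresholds: illustrative only

    // Each detected segment comes back as (x1, y1, x2, y2).
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180.0,
                    50 /*votes*/, 30 /*min length*/, 10 /*max gap*/);

    // The side-of-line colour check, merging of similar segments and the
    // corner-finding via intersections described above would follow here.
    return segments;
}
```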

Above we see the thresholding of the field colour, somewhat like green screening. As mentioned before, identifying the colour of the field is an important part of identifying the field itself; however, hard-coding the shade of green doesn't work, as the camera sometimes does funky colour correction, and the lighting in two places is rarely the same. So what has happened in the screenshot above is auto-tuning: I hard-coded a really wide HSV green colour range into the software, and it narrowed the range down by itself. The key is that when the colour range is well chosen, the colour mask will have very few blobs, so the software just iterates through candidate HSV ranges and picks the one which gives the fewest blobs. Note that the image used was found on the internet and isn't quite the same as the environment our robot had to operate in; our robot and testing field were not yet ready at the time I wrote the OpenCV code.
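
A minimal sketch of the idea follows, using contour counting as the blob count. The starting range, step sizes and saturation/value floors are all placeholders rather than our tuned values, and a real version would also sanity-check the mask area so a near-empty mask can't win.

```cpp
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <vector>

// Try progressively narrower hue ranges inside a deliberately wide "green"
// starting range; keep whichever range yields a mask with the fewest blobs.
cv::Scalar tuneFieldHue(const cv::Mat &bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    int bestLo = 35, bestHi = 85;  // wide green guess (OpenCV hue: 0-179)
    size_t fewestBlobs = SIZE_MAX;
    for (int lo = 35; lo <= 70; lo += 5) {
        for (int hi = lo + 10; hi <= 85; hi += 5) {
            cv::Mat mask;
            cv::inRange(hsv, cv::Scalar(lo, 60, 60),
                             cv::Scalar(hi, 255, 255), mask);
            std::vector<std::vector<cv::Point>> blobs;
            cv::findContours(mask, blobs, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);
            if (!blobs.empty() && blobs.size() < fewestBlobs) {
                fewestBlobs = blobs.size();
                bestLo = lo;
                bestHi = hi;
            }
        }
    }
    return cv::Scalar(bestLo, bestHi, 0);  // chosen hue range (lo, hi)
}
```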

Above is the ball detection process. I noted that the ball gives off a few signatures which are easily seen by a camera and rarely found together anywhere else: white dots from specular reflections and the centres of LEDs, red regions from the red LEDs, and purple patches from the IR LEDs - though 920nm infrared light is not visible to the human eye, it renders as faint purple on cameras that don't filter it out properly! In the top left image is the input; top right is looking for purple (which didn't work so well - it probably needed better HSV ranges), bottom left is looking for red and bottom right is looking for white. Next, each of those coloured blobs (visualised so you can see what the software found) was put onto one of three black and white masks, and each blob was massively increased in size. The red and purple masks were then combined/unioned, as the purple only showed up at close range, and the intersection of that combined mask with the white mask was taken. This whole process removed the false points seen in the last two images. From there, the centre of the ball could easily be estimated and, with more difficulty, its distance determined from its relative size. There were two other screenshots - one of the combined mask with the estimated centre of the ball, and another of the ball being identified from a couple of metres away - but I've since lost those images and changed the code, so the visualisations aren't easy to recreate.
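
Here's a compressed sketch of that mask-combination pipeline. The HSV ranges are placeholders, and a plain dilation stands in for the faster blob-and-circle trick described further down the post.

```cpp
#include <opencv2/opencv.hpp>

// Threshold for white, red and purple, grow the masks, union red with
// purple, then intersect with white; the centroid approximates the ball.
cv::Point2f findBall(const cv::Mat &bgr) {
    cv::Mat hsv, white, red, purple;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 0, 220),    cv::Scalar(179, 40, 255), white);
    // Note: red wraps around hue 0; a real version would threshold 0-10 too.
    cv::inRange(hsv, cv::Scalar(170, 120, 80), cv::Scalar(179, 255, 255), red);
    cv::inRange(hsv, cv::Scalar(130, 40, 80),  cv::Scalar(160, 255, 255), purple);

    // Grow each mask so nearby detections overlap.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(31, 31));
    cv::dilate(white, white, kernel);
    cv::dilate(red, red, kernel);
    cv::dilate(purple, purple, kernel);

    // Union red/purple (purple only shows at close range), intersect with white.
    cv::Mat emitter, ball;
    cv::bitwise_or(red, purple, emitter);
    cv::bitwise_and(emitter, white, ball);

    cv::Moments m = cv::moments(ball, true);
    if (m.m00 < 1.0) return cv::Point2f(-1, -1);  // ball not found
    return cv::Point2f(m.m10 / m.m00, m.m01 / m.m00);
}
```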

Seen above is the goal detection. The goals had blue backings, and sometimes one side would have yellow and the other blue; the standards in these competitions don't seem to be very widely agreed upon. As a side note, the field in this image is the same as the one at our competition - we didn't have goals at our school test field, so this image is again from the internet. To identify the goals, the robot would first have been moved up close to one so that it covered most of the field of view, and the blue or yellow colour would have been auto-tuned; shown above are the steps after this. The whole image is thresholded for the blue colour, but we can clearly see erroneous points in the mask. To get rid of these, the code checked the colour of 10 points along the bottom of each blob, a few pixels below it: if the blob was indeed a goal, the green colour of the field would be underneath. Then the centre of the goal blob was taken, and in the top right window you can see it rendered. If this technique hadn't worked, plan B would have been to use the squareness of the goal to identify it.
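
A sketch of just the "is there field green below this blob?" test; the pixel offsets, the green range and the majority rule are assumptions, not the exact logic we used.

```cpp
#include <opencv2/opencv.hpp>

// Sample 10 points a few pixels below a candidate blob's bounding box and
// check whether the majority of them look like field green in HSV.
bool blobSitsOnField(const cv::Mat &hsv, const cv::Rect &blob) {
    int y = blob.y + blob.height + 5;  // a few pixels below the blob
    if (y >= hsv.rows) return false;

    int checked = 0, green = 0;
    for (int i = 0; i < 10; ++i) {
        int x = blob.x + blob.width * i / 10;
        if (x < 0 || x >= hsv.cols) continue;
        cv::Vec3b px = hsv.at<cv::Vec3b>(y, x);
        ++checked;
        if (px[0] > 35 && px[0] < 85 && px[1] > 60) ++green;  // "field green"
    }
    return checked > 0 && green * 2 > checked;  // majority of samples green
}
```

In use, this would run over the bounding rectangles of the blobs found in the blue (or yellow) mask, with the centre of a passing blob taken as the goal centre.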

After getting these techniques nutted out, I moved them to the Odroid and got them running a lot faster. For instance, for whatever reason, making the white regions in masks much larger (morphological dilation) - as in 30-70 pixels extra around the edge - took seconds per frame. In the one place this mattered, the ball-finding code, instead of performing a dilation on the red, purple and white masks, I simply used a blob detector to find the white parts of each mask and drew a big circle at the centre of every identified blob - hacky, but it worked blindingly fast. That was also why it was so easy to show the little circles in the visualisation a few images above: I just made the radii of the circles small, changed their colour and drew them onto copies of the original image.
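
Something like the following, where the default radius and the use of bounding-box centres are from memory rather than the actual code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Fast stand-in for a large dilation: find each blob in the mask and stamp
// a big filled circle at its centre instead of growing every pixel.
void growMaskFast(const cv::Mat &mask, cv::Mat &grown, int radius = 50) {
    std::vector<std::vector<cv::Point>> blobs;
    cv::Mat tmp = mask.clone();  // findContours may modify its input
    cv::findContours(tmp, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    grown = cv::Mat::zeros(mask.size(), CV_8UC1);
    for (const auto &b : blobs) {
        cv::Rect r = cv::boundingRect(b);
        cv::Point centre(r.x + r.width / 2, r.y + r.height / 2);
        cv::circle(grown, centre, radius, cv::Scalar(255), cv::FILLED);
    }
}
```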

So, for the competition, with only enough time for a bodged solution, I began piecing together the monolithic program which had to do the computer vision and communicate data over serial to the Arduino, onto which the rest of my team had written movement code (harder than it sounds for omniwheel robots with no rotary encoders on the wheels) and some simple tactics. The tactics, had they been more complicated, should have resided on the Odroid, but there was no time for that! The field detection and identification code was useless at this point, as there was no code to go with it which could determine the robot's position from the perspective and apparent size of the sides of the field. That left colour auto-tuning, ball detection, goal detection and serial communication. I got these all integrated the night before the competition and on the way there. I also removed all the GUI code, and never got the chance to test it with the robot because a team member had it.

So in the hour before the competition, at the venue, we got to test the whole system together. It didn't work, of course: the software was crashing - something in the goal detection - so that module was commented out. Then it turned out that even though the auto-tuning and ball detection were working, sending data to the Arduino over UART wasn't: the Arduino was receiving garbled strings, even though we could read from it fine. We never got that part working. In Linux, writing to the serial port should be as easy as writing to and reading from a file after you've changed some settings on the serial port, in this case /dev/ttyACM0. I even got it working on my Linux laptop, but the same code didn't play nicely on the Odroid. I think I didn't get the configuration of the serial port right - as in, none of the settings I thought I was applying were actually being applied. I should have used the Boost serial communications library.
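
For what it's worth, here's roughly what a correct termios setup looks like. The device path is the one from the post; the baud rate and flags are assumptions. The trap I suspect I fell into is that tcsetattr() can fail (or be skipped) and silently leave the old settings in place.

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

// Open and configure a serial port for raw 8N1 communication.
int openArduinoSerial(const char *path = "/dev/ttyACM0") {
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    termios tio;
    if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); close(fd); return -1; }
    cfmakeraw(&tio);                  // raw mode: no echo, no line buffering
    cfsetispeed(&tio, B115200);       // assumed baud rate
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= (CLOCAL | CREAD);  // ignore modem lines, enable receiver

    // This is the call whose failure (or omission) silently leaves the old
    // settings in place -- always check its return value.
    if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); close(fd); return -1; }
    return fd;  // now read()/write() on fd as if it were a file
}
```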

And that, folks, was where the project was left after doing abysmally in the competition. OpenCV and the Odroid have been abandoned, though writing this piece makes me think we got close to something working - if we had ironed the bugs out... but who knows. I learnt a lot, though little to do with time management, and the Odroid, its Arduino shield and Linux make an awesome little platform. Maybe this blog will have some more computer vision projects in the future!

Saturday, 14 March 2015

MHS RoboCup Soccer Robot - Pt 1: Mechanical

RoboCup Junior is a robotics competition aimed at secondary schools in Australia and around the world. Last year, I competed in it as part of a team with five other students from my school (Melbourne High School). The gist of the soccer competition is that there are two teams in each match, and each team has two robots on the field, which is about the size of a large tabletop and is covered in green felt with lines marked on it. The aim is to shoot goals. The constraints: the robots have to be autonomous - they can communicate with each other but cannot be remote controlled in any way - they can't damage other robots, and they have to fit within a certain size. How a team carries out its tactics, and its choice of motors, wheels, sensors and so on, is up to the team itself.

A lot of work went into our team's robot, and it suits the technical and engineering nature of this blog, so I have finally got around to writing up my involvement in the project. At this stage, I think I'll separate the posts into the mechanical and software sides of things; this part covers the mechanical design. The GitHub repo for all the code/CAD models is at: https://github.com/BillyWoods/RCJ_Soccer_Robot

The model above is our solution to the problem. The robot has three omni-wheels set 120 degrees apart, which lets it turn on the spot and move in any direction regardless of where the front is pointing. All the red parts on the model are electronics and the green parts are 3D printed. I modelled/coded the whole robot in OpenSCAD, and it has a pretty hefty code base considering how simple it may look (about 4000 lines of code at the moment). This is due to the level of detail and how parametric it is, most of which turned out to be overkill. Still, it was a good learning experience, and having the model killed a few problems before the robot was even built and allowed us to change the electronics layout very quickly when we tested a few different boards.
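
As an aside, the kinematics that make this drive system work are compact enough to show. Below is a minimal sketch (not our actual movement code) of how a desired body motion maps to the three wheel speeds; the wheel angles and sign conventions are assumptions.

```cpp
#include <cmath>

// Each omni-wheel's speed is the projection of the desired body velocity
// onto that wheel's rolling (tangential) direction, plus a rotation term.
// Wheels are assumed mounted at 90, 210 and 330 degrees around the robot.
void omniWheelSpeeds(float vx, float vy, float omega, float robotRadius,
                     float speeds[3]) {
    const float wheelAngles[3] = {
        static_cast<float>(M_PI / 2.0),                       //  90 deg
        static_cast<float>(M_PI / 2.0 + 2.0 * M_PI / 3.0),    // 210 deg
        static_cast<float>(M_PI / 2.0 + 4.0 * M_PI / 3.0)     // 330 deg
    };
    for (int i = 0; i < 3; ++i) {
        speeds[i] = -std::sin(wheelAngles[i]) * vx
                  +  std::cos(wheelAngles[i]) * vy
                  +  robotRadius * omega;
    }
}
```

Without encoders on the wheels, getting the actual wheel speeds to match these commanded ratios is the hard part, which is why the movement code was trickier than it sounds.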

One advantage of modelling out every little detail can be seen above: drilling/cutting templates can be produced for each layer of the robot automatically. Simply export as a .dxf file, add the cross hairs for the holes manually, and print out at 1:1 scale. If I ever need to make a large batch of these robots, the plates can also be made on a CNC from the .dxf files.


Seen above - the tricks of the trade. Stick the template onto whatever sheet material you're making the robot out of, centre punch all the holes, then cut out the circular plates with a jigsaw. The jigsaw was extremely rough on the thin 1.2mm aluminium sheet whenever even a small length stuck out over the side of the bench. The "custom jig" on the right was my solution; having all that thick MDF supporting the aluminium as it was cut made cutting less dangerous, less noisy and much more fun.

Here's a bottom plate; it has a curved section cut out at the front to trap the soccer ball. The DiBond it is made of looks much nicer than aluminium.

Next step, print out the motor mounts on the 3D printer:

It is very satisfying watching what started as a design on a screen get translated into something tangible.


Skip a month or so and a few missing photos and here's an operational robot:
 

The robot in these photos is controlled by an Odroid U3 with an Arduino on top of it as a shield. The Odroid is a single-board computer based on the same ARM chip as the Samsung Galaxy S3 - like a more powerful Raspberry Pi. We used it so I could write a computer vision program to make use of the camera on the robot. The Arduino shield on top of the Odroid is an Arduino Uno in all but shape: it uses an ATmega328, runs the Arduino bootloader and has all the same pins available. It communicates with the Odroid via serial/UART, and the Arduino IDE runs on the Odroid, with sketches uploaded from it.

The wheels used on the robot were not straightforward to attach to the motors. They have a 9mm bore which only goes halfway through the wheel hub. To attach them, I machined couplings out of steel on my lathe and tapped a hole for an M3 grub screw in them. These couplings would then press fit into the wheel hub and the grub screw would secure them on the motor's shaft.

This worked; however, the drill bit I used to bore out the hole for the motor's shaft was slightly bent, so the hole it drilled was slightly too large and the wheels ended up wobbly. Not wanting to spend time making more, and keeping in mind that I had to make multiples of all these parts for other groups in the school's robotics club, I looked for a 3D-printed alternative. The first thing I tried was almost an exact copy of the press-fit metal couplings, but in plastic. These didn't work so well: the printed parts did not have good enough tolerances for a snug press fit, and the plastic couldn't hold a thread well enough for the grub screw that secures the motor shaft.

The second attempt was much less naïve. The design made use of the spokes in the wheel to allow more torque to be transferred. There was also room for a captive M3 nut for the grub screw. This design worked very well once it was glued and pressed into the wheel.

Here's a bonus shot of the soccer ball and the fenders I designed and printed for the robot. These were an afterthought once we realised the bottom plate was too low and thin and would just chip the ball into the air.

In terms of improvements to the robot's design, I do have concerns about the motors wearing down, in particular the gearboxes. Though they have metal gears inside, we've noticed an increase in backlash after some use. The motors' output shafts ride in bushings rather than ball bearings, and those shafts carry the entire weight of the robot. Stepper motors would be the best alternative: NEMA 17s are cheap and powerful, as are their drivers, and they are very tough in terms of wear. Steppers are also designed for much more precise control of position than DC motors, and the omnidirectional drive system needs precise control of the relative speeds of the wheels. Despite these concerns, the mechanical side turned out to be the most reliable aspect of the robot. Its success, or lack thereof, came down, as far as we can tell, to running out of time to get the sensors, electronics and software working well. The good part is that all this work can be used and built upon in this year's coming competition.