Tuesday, 19 April 2016

Combining the ESP8266 WiFi module with cheap 433MHz sensors

The ESP8266 is an incredibly cheap and powerful WiFi chip geared towards "internet of things" applications. It comes from an obscure Chinese company, but they've released its SDK as open source. With its high availability on eBay, it was hard to resist, so I bought one to play with. Recently, inspired by cnlohr's work, I've got it serving webpages and detecting when "dumb" wireless PIRs and door sensors around my house are triggered.

A long story (skip it if you're here only to see stuff working):

For a long time after purchase, it sat unused like all useful electronics junk off the 'Bay. The situation was frustrating: I didn't have a 3.3V regulator, and I didn't have a USB-to-serial adapter to talk to the thing. However, an Arduino board has both of those, so I powered the module off the Arduino's 3v3 pin and connected it to the tx and rx pins, using a voltage divider to make sure the ESP8266 didn't get fried. The aim was to get the Arduino to send "AT commands" to it - the default firmware on the ESP8266 makes it act as a modem which can be controlled over serial/UART. This was a long shot. The Arduino's 3.3V rail is far undersized for an ESP8266, and three devices, one of them at a different voltage, on a single serial line is asking for undefined behaviour.

On testing, it didn't work; the ESP8266 never joined my WiFi network as it was supposed to. The Arduino was printing AT commands fine though (I could only sniff the Arduino's output because that is where the FTDI's rx is wired to). The setup not working wasn't surprising because it was a terrible way of doing things and there were too many unknowns.

Trying to make do without having to order and then wait weeks for the proper parts off eBay, I forgot about talking through the Arduino and tried to communicate directly with the ESP8266 through a serial monitor on the computer. To do this, the ESP8266's rx needs to be wired to the Arduino's rx pin and its tx to the Arduino's tx pin. The idea was to have the ESP8266 talk to the FTDI adapter directly and have the Arduino's ATmega328 remain silent. From memory, I think this worked just well enough to flash the chip, but commands often failed and caused resets because the Arduino's 3.3V regulator couldn't supply enough current. That also goes some way to explaining why the first setup didn't work.

This is the point where the ESP8266 went into storage for a long time.


Renewed interest:

The ESP8266 has become much more powerful and flexible with the release of its SDK, so I gave it another shot. I set up the SDK on Ubuntu in a virtual machine, which I would highly recommend; it works very well with USB pass-through, and Linux is a joy to develop on (in my opinion). I also made a programming/breakout board for the module; otherwise it's a bit of a PITA, as it can't be plugged directly into a breadboard.


The breakout board simply passes through all the pins to a header, but CH_PD (chip power down) and RST (reset) are pulled high through 4.7k resistors; this is a prerequisite for normal operation (I'll go into the ESP8266's strange boot modes a bit later on).


The jumper on the breakout board, when inserted, pulls GPIO 0 to ground. This boots the ESP8266 into flash programming mode when the chip is reset by pulling RST low or by powering the chip off and on. To change the jumper, the ESP8266 had to be removed and then reinserted, which became a pain, so I started temporarily sticking a wire where the jumper would go and then briefly tapping a lead connected to RST to GND. The TX LED on the ESP8266 module would flash very briefly when RST was grounded, then stop completely - this behaviour indicates (though not with certainty) that the chip has booted into flashing mode and is waiting for a program to be sent to it over serial. GPIO 0 can be un-grounded at this point.

If you were listening to the serial port at this point, you'd see a bunch of garbage, unless your baud rate was set to 74880. This is the baud rate at which the ESP8266's bootloader prints a bunch of information on boot. One important piece of information is the boot mode. Boot mode (1,6) or (1,7) indicates the module is in flashing mode; boot mode (3,6) or (3,7) indicates it will load and run the program stored in flash. Listening to the ESP8266's serial output is the most reliable way to verify that it has booted into UART flashing mode, rather than trying to judge by the board's TX LED.

It is important to leave GPIO 2 floating or pull it high (it has an internal pull-up resistor anyway) whenever you reset or boot the ESP8266, otherwise it goes haywire, possibly trying to boot into SD card mode - but of course there is no SD card. This advice stands whether you are trying to boot into flashing mode or run-from-flash mode. GPIO 0 and 2 act as perfectly normal IO ports after booting, but the fact that they are so important at reset and power-on makes them hard to use in any permanent application if you want to attach sensors. For example, with this project, I had to detach the 433MHz radio receiver from GPIO 2 every time the ESP8266's bootloader had control (on resets or power-up), then reconnect the receiver once booted.

This is the radio receiver I had lying around and used. It comes with a matching 433MHz transmitter, but I didn't need it. The receiver has two data pins which are connected together; I'm not sure of the point of doing this. Anyway, the module drives the data pin high when it detects any carrier in the vicinity of 433MHz, and low when there isn't.

An antenna has to be soldered onto the board, or the reception range is merely a few centimetres. For 433MHz, a quarter wave monopole should be about 17 cm long. Use solid core wire for an antenna.
The 433MHz receiver module is a regenerative one, rather than a more expensive heterodyne one. From what I can tell, it has two stages of RF tuning and amplification: the first being wideband and the second being more selective. The signal is then fed into an op-amp, which I guess provides the massive gain that makes the output seem digital: high when a carrier is present, and low when there isn't.
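
As a sanity check on that 17 cm figure, the usual quarter-wave calculation (free-space speed of light, ignoring any velocity factor of the wire) gives:

    wavelength / 4 = c / (4 x f) = (3 x 10^8 m/s) / (4 x 433.92 x 10^6 Hz) ≈ 0.173 m ≈ 17.3 cm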


This is an example of a 433MHz door/window sensor which the receiver can listen to. The door sensor simply chirps an ID repeatedly for a few hundred milliseconds when the magnet is taken away. The short transmission occurring only on triggering is pretty much a requirement for saving battery and not hogging the shared frequency (there is no collision avoidance or re-transmission). The downside is that the base station has to keep track of the state of all sensors and not miss any transmissions. Also, the battery in a sensor can die without the user knowing. This system would also be easily jammed or otherwise hacked, so these alarms are just toys, but that tends to be the nature of wireless - wireless on the cheap, anyway. These observations came from listening in with an RTL-SDR dongle, which I'll go into more detail on later.

The sensor's PCB is mostly unpopulated. The 433MHz oscillator on the top right and the sheer lack of components (especially RF ones) hinted that the data modulation wouldn't be much more sophisticated than switching a 433MHz carrier on and off like a Morse code transmitter. The name for this is on-off keying (OOK), which is a subset of amplitude-shift keying (ASK). The datasheet for the HS1527 (the IC on the sensor board) would've confirmed this guess, had I had it at the time.

So here's an annotated screenshot from SDR Sharp, tuned in to the 433MHz area. I had just triggered a door sensor and a PIR sensor, which can be seen on the waterfall chart. Just quickly, the whole RTL-SDR project is amazing - it's made radio affordable to get into and incredibly flexible. Definitely check it out if you're interested even slightly in radio or what $15 of technology can do.

Signal 1. I figured was unrelated, or at least irrelevant, because it occurred every ~20 seconds and was comparatively weak. The weak signal suggested it came from farther away than the other two signals. Additionally, if it came from a door or PIR sensor, then I would have seen more than one instance of this signal, as there are about 8 sensors around my house. I wondered if it was the base station, but what use is it having the base station transmit when none of the remote sensors have receivers in them? Anyway, I was intending to bypass the existing base station.

Signals 2. and 3. appeared when I triggered a PIR and then a door sensor by walking along a corridor, stopping, opening a door, then walking back. So the burst which appeared twice must surely be from the PIR, having detected movement twice, leaving the other signal to be the door sensor.

Additionally, it was apparent that all the transmissions came from transmitters which must have cost the lesser part of a few dollars in RF-related components. It was a combination of the massive bandwidth the signals occupied, the ever-so-slight frequency drifting and how far off the labelled oscillator frequency of 433.92MHz they were. But the spread in frequency just made it easier to tell sensors apart at a glance in SDR Sharp, so it's not all bad.

The next step from here was to record the signals as audio and take a closer look in Audacity. I set SDR Sharp to AM demodulation, since on-off keying is just amplitude modulation with only two signal levels, so this was the most appropriate mode. I also set a decent squelch so noise wouldn't appear between transmissions in the recordings, and made the recordings on the lowest sampling quality setting.

Above is one of the signals, shown in Audacity, zoomed in on a single packet from a much larger transmission. A typical transmission is made up of the same packet repeated over and over again. This is done so that a receiver's AGC and noise suppression have time to respond (if present) and also to provide some noise immunity. Much like the output of the hardware receiver the ESP8266 was connected to, we see a high when the transmitter's oscillator has power and a low when it doesn't. In short, the waveform shows the state of the output pin of the HS1527 encoder inside a sensor over a period of time.

The datasheet for the HS1527 indicates that, in its on-off encoding scheme, a short pulse is a 0 and a long pulse is a 1; additionally, the unique identifier each chip spits out is 20 bits long. Below is the output of the 433MHz receiver module; it shows a different sensor from the one in the Audacity screenshot transmitting, hence some of the differences:
This output on the oscilloscope was promising because it looked exactly like what the RTL-SDR sniffed, showing that the 433MHz receiver was receiving the sensors well. The software decoder I wrote for the ESP8266 works by interrupting on a rising edge, then on a falling edge, and so forth, to measure the duration of pulses and of the gaps between them. Measuring the gaps, in addition to the pulses, turned out to be useful for detecting the end of a packet or a whole transmission and for rejecting non-conforming signals.
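
To illustrate the timing logic (this is a simplified sketch, not the actual interrupt handler from my firmware - in the real thing the edge times come from a GPIO interrupt and the system timer, and the pulse/gap thresholds here are placeholder values):

#include <stdio.h>
#include <stdint.h>

/* Placeholder thresholds - tune to the actual pulse widths seen on the scope. */
#define SHORT_PULSE_MAX_US   600     /* pulses shorter than this decode as 0   */
#define LONG_PULSE_MAX_US    1600    /* pulses up to this decode as 1          */
#define PACKET_GAP_MIN_US    8000    /* a gap at least this long ends a packet */

int main(void)
{
    /* alternating rising/falling edge timestamps in microseconds,
       starting with a rising edge (here: short, long, long, short = 0110) */
    uint32_t edges[] = { 0, 400, 1400, 2600, 3600, 4800, 5800, 6200 };
    int n = sizeof(edges) / sizeof(edges[0]);

    uint32_t packet = 0;
    int bits = 0;

    for (int i = 0; i + 1 < n; i += 2) {
        uint32_t pulse = edges[i + 1] - edges[i];                       /* high time */
        uint32_t gap   = (i + 2 < n) ? edges[i + 2] - edges[i + 1] : 0; /* low time  */

        if (pulse < SHORT_PULSE_MAX_US)
            packet = (packet << 1);                   /* short pulse -> 0 */
        else if (pulse < LONG_PULSE_MAX_US)
            packet = (packet << 1) | 1;               /* long pulse  -> 1 */
        else {
            bits = 0; packet = 0; continue;           /* non-conforming, reject */
        }
        bits++;

        if (gap > PACKET_GAP_MIN_US || i + 2 >= n) {  /* long silence = end of packet */
            printf("%d bits received: 0x%X\n", bits, packet);
            bits = 0; packet = 0;
        }
    }
    return 0;
}

On the ESP8266 the same classification happens incrementally inside the edge interrupt, rather than over a stored array like this.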

The packet on the oscilloscope screen, despite its similarities, is a different length to the one recorded in SDR Sharp. This is because, in the 10ms of radio silence between packets, the RTL-SDR's AGC re-adjusted, raising the noise floor, which triggered the squelch (seen in the recording where the slight noise disappears completely). This all then had to be undone when transmission resumed, which is why the first two bits are missing in the Audacity screenshot. Set a fixed gain and turn squelch off if you want to record in SDR Sharp. I didn't even notice at the time.

But back to analysing the packet captured on the oscilloscope, which is actually complete... The datasheet for the HS1527 says that a short pulse is a zero; a long pulse is a one; and a short pulse with a slightly longer gap after it is a pre-amble bit. This works out well: the pulse at the start of the packet is the pre-amble; the next 20 bits are the sensor ID; and the last 4 bits supposedly indicate the state of four digital inputs on the chip.

When I wrote the ESP8266 OOK decoder, I didn't know what was inside the sensors, let alone have the datasheet for the HS1527 chip. I mistakenly treated the whole packet, including pre-amble and digital input state, as the sensor ID. Luckily the pre-amble is constant and the digital inputs never change for these simple sensors (they aren't connected to anything), so I got away with it.

Above is a packet from signal 1. received through the 433MHz receiver module. Even though it wasn't relevant to this project, I had a look at it anyway. At first glance it looks like it might use the same on-off keying scheme as the sensors in my alarm system, but from what I can tell it is actually Manchester encoded. If you try interpreting this signal using the same coding scheme as the previous signal, you might notice that the length of time each apparent bit takes up varies, i.e. pulses aren't padded predictably with gaps. This is odd and tends to indicate that the signal has been misinterpreted, because data streams usually have a constant data rate.

These two captures show the typical noise which appeared on the 433MHz module's output when there was no transmission present. Though this noise is easily rejected in software because it doesn't conform to what an expected OOK signal looks like, all the edges unfortunately create a lot of interrupts for the ESP8266 to deal with. It did not cause problems, from what I could tell, but if it did, the number of edges could be reduced with a small capacitor between the data line and ground. The capacitor would need to be connected to the output via a resistor to increase charge/discharge times, and then the ESP8266's GPIO would have to be connected to the junction of the resistor and capacitor.
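
For a ballpark on that filter (an assumption, not something I actually tried): the RC time constant just needs to sit above the width of the noise spikes but well below the shortest legitimate OOK pulse, e.g.

    t = R x C = 1 kΩ x 10 nF = 10 µs

which would smooth out very narrow noise spikes while leaving the much longer OOK pulses essentially untouched.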

More noise - in more detail.

Above: the ESP8266 all connected up. The board at the top is a Zigbee CC2530 development board. I discovered it lying around and detached the Zigbee module to reduce power usage. It was perfect for the ESP8266 because it has a CP2102 USB-to-UART/serial converter on it, a beefy 3.3V rail and a 5V rail fed from USB power or the 3xAA battery pack on the bottom of it. Importantly, all the buttons, LEDs, UART pins and power rails come out to header pins, so they were easy to connect over to my ESP8266 programming board. Also, it's hard to see in the photo, but I added a decoupling capacitor between the ESP8266's VCC and GND; anecdotally, it seemed to improve reliability when flashing and stability in general. I did this because a few things made me suspicious that my programs were getting corrupted when I flashed them.

The grey and green wires left dangling were for grounding RST; a reset button on the programming board would have been much better. Another annoyance was having to disconnect the 433MHz receiver every time the ESP8266 was reset or turned on (I've mentioned this quite a few times now), hence the connection via an orange jumper and blue jumper, which made disconnection easier.

The circuit above didn't work well though: the 433MHz receiver module's reception range was atrociously short because I was powering it off 3.3V, so that I wouldn't have to level-shift its output for the ESP8266 or run two power rails if I made a dedicated board for this project in future. The 433MHz receiver is designed to be powered off 5V, but I had tested it briefly with 3.3V on the oscilloscope and it seemed fine - granted, it was being tested with a nearby sensor. Later, I switched to powering it off 5V and used a voltage divider on the output; this improved reception dramatically, as expected.

Results/final product:

On startup, or when there are no triggered sensors:

When sensors have been triggered:

The source code for this project is here and could be useful for finding examples of how to use the ESP8266's interrupts and how to create a simple HTTP web server on it. The ESP8266's SDK is slightly thin on coherent documentation, but looking through other people's code and the SDK's header files provides enough information to figure most things out.
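
For a feel of what the web server side involves, here's a from-memory sketch against the SDK's espconn API (not a copy of my project's code - exact names and headers can differ between SDK versions, so treat it as a rough outline):

#include "osapi.h"
#include "user_interface.h"
#include "espconn.h"

static struct espconn server_conn;
static esp_tcp server_tcp;

/* called when a connected client sends data (i.e. the HTTP request) */
static void ICACHE_FLASH_ATTR recv_cb(void *arg, char *data, unsigned short len)
{
    struct espconn *conn = (struct espconn *)arg;
    static const char resp[] =
        "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
        "<html><body>sensor status goes here</body></html>";
    /* a real server would parse 'data' for the requested path first */
    espconn_send(conn, (uint8 *)resp, sizeof(resp) - 1);
}

/* called when a client connects; register per-connection callbacks here */
static void ICACHE_FLASH_ATTR connect_cb(void *arg)
{
    struct espconn *conn = (struct espconn *)arg;
    espconn_regist_recvcb(conn, recv_cb);
}

/* call this from user_init() */
void ICACHE_FLASH_ATTR http_server_init(void)
{
    server_conn.type = ESPCONN_TCP;
    server_conn.state = ESPCONN_NONE;
    server_conn.proto.tcp = &server_tcp;
    server_tcp.local_port = 80;
    espconn_regist_connectcb(&server_conn, connect_cb);
    espconn_accept(&server_conn);      /* start listening */
}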

Other than that, be mindful of freeing dynamic memory as soon as possible, and be careful where your pointers point before passing them to functions or otherwise using them. If you try writing to or reading from a memory address which is obviously invalid (e.g. outside dynamic memory), the ESP8266 tends to catch it, print some debugging information and then reset itself, like this:

Fatal exception (29):
epc1=0x400127f0, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000064, depc=0x00000000

Exception 29 indicates that the program tried to write to a prohibited memory address. This is similar to exception 28, which results from trying to read a prohibited address. In this particular exception, the program instruction at 0x400127f0 has tried to write to 0x00000064... epc1 and excvaddr are the two most important details.

The rest of my run time debugging consisted of os_printf statements and commenting out lines.

An important consideration when working with the ESP8266 is that it has a whole bunch of persistent internal settings which aren't changed when you flash a program to it - for example, WiFi credentials, station mode and whether to automatically reconnect to WiFi. These are modified at run time by making calls in your program to functions such as wifi_station_set_auto_connect(1);. If you are not explicit in setting or disabling settings such as these in your program, expect undefined and inconsistent behaviour. It also appears that sometimes these persistent settings get corrupted or misconfigured, which apparently results in the alarming but not-always-fatal error MEMCHECK FAIL!!! being printed. I found that I stopped getting this message after upgrading the version of the SDK I was using.
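
As an example of being explicit, this is roughly what the WiFi setup looks like with everything spelled out (a sketch against the SDK's user_interface.h - the SSID and password are obviously placeholders):

#include "osapi.h"
#include "user_interface.h"

void ICACHE_FLASH_ATTR wifi_setup(void)
{
    struct station_config conf;

    wifi_set_opmode(STATION_MODE);            /* station only, no access point */

    os_memset(&conf, 0, sizeof(conf));
    os_strcpy((char *)conf.ssid, "my_ssid");            /* placeholder credentials */
    os_strcpy((char *)conf.password, "my_password");
    conf.bssid_set = 0;                       /* don't lock onto a specific AP MAC */
    wifi_station_set_config(&conf);

    wifi_station_set_auto_connect(1);         /* state this explicitly every time */
    wifi_station_connect();
}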

It felt like there were endless traps and general weirdness with the ESP8266, but it is possible to do useful stuff using it and its SDK. 

Friday, 5 February 2016

LED Night Lamp

A bit more of an artistic project I had a play around with to make use of a 10W LED and driver combo I got off eBay. The files are here: http://www.thingiverse.com/thing:1318807

The bottom piece was designed in OpenSCAD, the top piece - the "box" - was designed in Blender because it was originally meant to have complex patterns on the inside which would only be visible when lit.

Finished product:

The 10W LED is still surprisingly bright when covered. The colour output of the LED is white, but the plastic box unfortunately yellows the light a lot.


Make sure you fit the hidden captive M3 nuts on the back side of those screw holes. This can be done by poking a long M3 screw through the hole, threading a nut on and then pulling the nut down into the nut trap. If the nut trap is undersized, heat the nut up with a soldering iron then quickly pull it into the nut trap.
Small 12V switch mode PSU on the left with a ~0.8A current limiter integrated, which is important for powering LEDs and not having them burn up due to their non-ohmic behaviour. The quality of the PSU isn't bad either: there's filtering and proper separation between high and low voltage. I didn't inspect the transformer's windings, but I hope it's of the same quality inside as the rest of the board. The LED "chip" is on the right.

The makeshift bending process for the dual aluminium heat sink/outer casing. Had to keep that protective wrap on till the end so it didn't scratch! Those edges where the wrapping is curling up did get scratched though - turns out toothpaste is a scarily effective (and still pretty coarse) abrasive if you don't have anything else to polish with.

Oops... something went wrong, but this wasn't totally unexpected given the tooling. More important than the total height of the base or the slightly mis-matched bend radii was getting the sides level, so I cut down the left side.
Here it is drilled and re-sized through much chiselling and sanding. The amount I had to take off was a bit small for a hacksaw and too large to sand - and don't even think about grinding aluminium!

Make the aluminium outer first, see how it comes out, then modify the printed base to suit the errors - that's why it's parametric. In this case the radius has been made larger on one side and the overall height reduced. Some gaps are visible, and the aluminium isn't flat in the middle, so viewing distance, angle and favourable lighting are late additions to the BOM.

All wired up. The PSU is hot glued to the base because it has no mounting holes anyway. Since the device runs on 240V mains and has exposed metal, a three-pronged plug and lead with earth are crucial - even more crucial is to connect the earth lead to the metal casing: note the bottom right screw pillar, which was made slightly shorter to allow room for a connection. I'm not an electrician or well versed in electrical codes - so wiring is your own responsibility, and most LED drivers from China aren't certified either. I don't leave this light running for long periods of time unattended.




Wednesday, 8 July 2015

RoboCup Soccer Robot Pt. 2 - OpenCV (Computer Vision)

So far, I have covered the mechanical side of the miniature soccer robots my school team made last year, but I have not gone over the computer software that ran on them (or was intended to).

Most competitors at the level we competed at based the control of their robots around some combination of Arduinos, ultrasonic sensors, infra-red sensors (the soccer ball emits IR light), digital compasses and colour sensors. The Arduino is programmed to process all the data inputs and move the robot accordingly to find the ball, shoot goals and defend. The ultrasonic sensors can be used to find the location of the robot on the field by pinging the raised walls around the field, provided the robot is properly aligned (which is where the compass comes in). Having ultrasonic sensors pointing forwards, backwards and to the sides gets around the problem of other robots getting in the way of the sensors: if the total distance measured doesn't add up to the length or width of the field, discard the readings, or perhaps figure out which sensor is being blocked. Colour sensors are used as another source of positioning data because the fields are either marked with black lines on a green field, or the green field has a few zones of different shades of green. Unfortunately, the location and orientation finding cannot be based off something like an optical mouse or rotary encoder because the robots are often picked up and replaced by referees according to whatever rules have been broken or a goal being scored. Any calibration only gets to be done once: at the start of the match (so the compass can be set to point towards the opposition side of the field).

Our team agreed that we could take a more adventurous approach and use a camera and a computer vision library running our own software on an ARM processor (I convinced the team to let me bite off more than I could chew as the person who would have to code the computer vision and tactics module). Webcams are cheap and plentiful, there are lots of small and powerful ARM boards on the market and there is a comprehensive open-source computer vision library available: OpenCV; all the ingredients for this project exist.  Before getting into the details - this system failed because the execution of the project was lacking, though I gained a lot of useful programming, embedded ARM Linux and computer vision experience. Looking back on it at the moment, the traditional approach with an Arduino is no walk in the park either. Fingers crossed the Arduino approach is effective, as we are working on it for this year's competition.

We used OpenCV to identify the field and calculate our robot's position based on perspective, knowing the actual size of the field and what could be seen from our position. It was also used to recognise goals and the ball. Also created was a module to automatically tune the HSV colours that the robot recognises as the field and goals, because we couldn't know how they would appear under the lighting on the day. We were hoping to one day extend the CV component to recognising enemy and friendly robots, though our progress was a long way off that. None of the computer vision techniques used up to the point we finished at involved machine learning, and the methods I came up with were crude and primitive compared with some of what is going on in the field. Note also that all the following Windows screenshots are from testing and development on the computer, and that the code was later ported to the Odroid, with optimisations in a few places and the GUI disabled.


Above: this was a trial of the field identification/dimensioning. It's not an actual field, but the code is easily adapted, as in both cases the "field" is square and there is a clear colour difference between the field and outside it. First an edge detection was done, then the Hough straight line transform was used. This of course still let straight lines in the background through, so each line segment had the colour of the image directly to each side of it checked: one side had to be the field colour, and the other had to be the background colour. Then, lines of similar gradient and x/y intercept were merged and extended, and the intersections between the lines were taken as the corners of the field.
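
A stripped-down sketch of that first part of the pipeline (standard OpenCV calls, but the input file name and thresholds are placeholders, and the colour checks/line merging are only described in comments):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

using namespace cv;

int main()
{
    Mat image = imread("field.png");           // placeholder input image
    if (image.empty()) return 1;

    Mat gray, edges;
    cvtColor(image, gray, CV_BGR2GRAY);
    Canny(gray, edges, 50, 150);               // edge detection first

    std::vector<Vec4i> lines;
    // probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
    HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);

    for (size_t i = 0; i < lines.size(); i++)
    {
        // here the real code checked the colour either side of each segment
        // (one side field-green, the other background), merged similar lines
        // and intersected them to get the field corners
        line(image, Point(lines[i][0], lines[i][1]),
                    Point(lines[i][2], lines[i][3]), Scalar(0, 0, 255), 2);
    }

    imshow("lines", image);
    waitKey(0);
    return 0;
}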

Above we see the thresholding of the field colour, sort of like green screening. As seen before, identifying the colour of the field is an important part of identifying the field itself; however, hard-coding the shade of green doesn't work, as the camera sometimes does funky colour correction, and the lighting in two places is rarely the same. So what has happened in the screenshot above is auto-tuning: I hard coded a really wide HSV green colour range into the software, and then it narrowed the range down by itself. The key to this is that when the colour range is well chosen, the colour mask will have very few blobs. Just iterate through, changing the HSV ranges, and pick the range which gives the fewest blobs. Note that the image used was found on the internet and isn't quite the same as the environment our robot had to operate in; the reason for this is that our robot and testing field were not yet ready at the time of writing the OpenCV code.
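
In (hedged) code form, the idea is just "make a mask for each candidate range and keep the range that produces the fewest blobs" - this isn't the project's actual tuner, and for simplicity it only sweeps the lower hue bound:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
#include <climits>
#include <cstdio>

using namespace cv;

// count the blobs in a binary mask by counting external contours
static int countBlobs(const Mat& mask)
{
    Mat copy = mask.clone();                  // findContours modifies its input
    std::vector<std::vector<Point> > contours;
    findContours(copy, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    return (int)contours.size();
}

int main()
{
    Mat image = imread("field.png");          // placeholder input
    if (image.empty()) return 1;

    Mat hsv, mask;
    cvtColor(image, hsv, CV_BGR2HSV);

    int bestLow = 35, fewestBlobs = INT_MAX;  // start from a very wide green range
    for (int low = 35; low <= 75; low += 5)   // sweep the lower hue bound only;
    {                                         // the real tuner varied S and V too
        inRange(hsv, Scalar(low, 40, 40), Scalar(90, 255, 255), mask);
        int blobs = countBlobs(mask);
        if (blobs > 0 && blobs < fewestBlobs)
        {
            fewestBlobs = blobs;
            bestLow = low;
        }
    }
    printf("best lower hue bound: %d (%d blobs)\n", bestLow, fewestBlobs);
    return 0;
}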

Above is the ball detection process. I noted that the ball gives off a few things which are easily seen by a camera and rarely seen anywhere else together: white dots from specular reflection and from the centres of LEDs, red regions from the red LEDs, and purple bits from the IR LEDs - though 920nm infrared light is not visible to the human eye, it renders as faint purple on cameras that don't filter it out properly! So the top left image is the input, the top right is looking for purple (which didn't work so well - it probably needed better HSV ranges), the bottom left is looking for red and the bottom right is looking for white. Next, each of those coloured blobs (visualised here so you can see what the software found) was put onto a black and white mask and each blob was massively increased in size. The red mask and purple mask were then combined/unioned, as the purple only sometimes showed up at close range. Then the intersection of the purple/red mask and the white mask was taken; this whole process removed the false points seen in the last two images. From there, the centre of the ball could easily be estimated and, with more difficulty, its distance determined from its relative size. There were two other screenshots I had - one of the combined mask and the estimated centre of the ball, and another of the ball a couple of metres away being identified - but I've since lost those images and changed the code, so those visualisations aren't easy to recreate.
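
For reference, the mask arithmetic described above looks roughly like this in OpenCV terms (a sketch with made-up HSV ranges and kernel size, not the original code):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat image = imread("ball.png");              // placeholder input
    if (image.empty()) return 1;

    Mat hsv;
    cvtColor(image, hsv, CV_BGR2HSV);

    // threshold for the three features of the ball (placeholder HSV ranges)
    Mat red, purple, white;
    inRange(hsv, Scalar(0, 120, 80),  Scalar(10, 255, 255),  red);
    inRange(hsv, Scalar(130, 40, 80), Scalar(160, 255, 255), purple);
    inRange(hsv, Scalar(0, 0, 220),   Scalar(180, 40, 255),  white);

    // grow each blob a lot so nearby features overlap
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(41, 41));
    dilate(red, red, kernel);
    dilate(purple, purple, kernel);
    dilate(white, white, kernel);

    // red OR purple (purple only shows up at close range), then AND with white
    Mat redOrPurple, ball;
    bitwise_or(red, purple, redOrPurple);
    bitwise_and(redOrPurple, white, ball);

    imshow("ball mask", ball);
    waitKey(0);
    return 0;
}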

Seen above is the goal detection. The goals had blue backings, and sometimes one side would have yellow and the other blue; the standards in these competitions don't seem too widely spread or accepted. Just as a side note, the field in this image is the same as the one in our competition - we didn't have goals at our school test field, so this image is again off the internet. To identify the goals, the robot would have first been moved up close to one, so it covered most of its field of view, then the blue or yellow colour would have been auto-tuned; shown above are the steps after this. The whole image is thresholded for the blue colour, but we can clearly see erroneous points in the mask. To get rid of these, I got the code to check the colour at 10 points along the bottom of each blob, a few pixels below it: if the blob was indeed a goal, there would be the green colour of the field underneath. Then the centre of the goal blob was taken, and in the top right window you can see it rendered. If this technique hadn't worked, plan B would have been to use the squareness of the goal to identify it.

After getting these techniques nutted out, I moved them to the Odroid and got them running a lot faster. For instance, for whatever reason, making the white pixels in the masks much larger (morphological dilation), as in 30-70 pixels extra around the edge, would take seconds per frame. From memory, in the one place this happened - the ball finding code - instead of performing a dilation on the red, purple and white masks, I simply used a blob detector to find the white parts of each mask and then drew a big circle at the centre of every identified blob. Hacked, but it worked blindingly fast. That was also why it was so easy to show the little circles in the visualisation a few images above: I just made the radii of the circles small, changed their colour and drew them onto copies of the original image.

So, for the competition, with only enough time for a bodged solution, I began piecing together the monolithic program which had to do the computer vision and communicate data over serial to the Arduino, on which the rest of my team had written movement code (harder than it sounds for omniwheel robots with no rotary encoders on the wheels) and some simple tactics. The tactics, had they been more complicated, should have resided on the Odroid, but there was no time for that! The field detection and identification code was useless at this point, as there was no code to go with it which could determine the robot's position from the perspective and apparent size of the sides of the field. So that left colour auto-tuning, ball detection, goal detection and serial communication. I got these all integrated together the night before the competition and on the way there. I also removed all the GUI stuff and never got the chance to test it with the robot because a team member had it. So in the hour before the competition at the venue, we got to test the whole system together. It didn't work, of course: the software was crashing - something in the goal detection - so that module was commented out. Then it seemed that even though the auto-tuning and ball detection were working, sending data to the Arduino over UART wasn't working properly; the Arduino was receiving garbled strings even though we could read from it alright. We never got that part working. In Linux, writing to the serial port should be as easy as writing to and reading from a file after you've changed some settings of the serial port, in this case /dev/ttyACM0. I even got it working on my Linux laptop, but the same code didn't play nicely on the Odroid. I think I didn't get the configuration of the serial port right - as in, none of the settings I thought I was applying were actually being applied. I should have used the Boost serial communications library.
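
For what it's worth, the serial setup on the Odroid should have looked something like this with termios (a hedged sketch - the device path, baud rate and message are placeholders for whatever your setup uses):

#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);   // Arduino usually shows up here
    if (fd < 0) { perror("open"); return 1; }

    termios tty;
    if (tcgetattr(fd, &tty) != 0) { perror("tcgetattr"); return 1; }

    cfmakeraw(&tty);                 // no line buffering, echo or character translation
    cfsetispeed(&tty, B9600);        // baud must match the Arduino sketch
    cfsetospeed(&tty, B9600);
    tty.c_cflag |= (CLOCAL | CREAD); // ignore modem control lines, enable the receiver

    // the step I suspect I got wrong: actually apply the settings and check the result
    if (tcsetattr(fd, TCSANOW, &tty) != 0) { perror("tcsetattr"); return 1; }

    const char msg[] = "ball 120 45\n";          // made-up example command
    write(fd, msg, sizeof(msg) - 1);

    close(fd);
    return 0;
}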

And that, folks, was where the project was left after doing abysmally in the competition. OpenCV and the Odroid have been abandoned, though writing this piece makes me think we got close to something working - if we had ironed the bugs out... but who knows. I learnt a lot, though little to do with time management, and the Odroid, its Arduino shield and Linux make an awesome little platform. Maybe this blog will have some more computer vision projects in the future!

Saturday, 14 March 2015

MHS RoboCup Soccer Robot - Pt 1: Mechanical

RoboCup Junior is a robotics competition aimed at secondary schools in Australia and around the world. Last year, I competed in it as part of a team with five other students from my school (Melbourne High School). The gist of the soccer competition is that there are two teams in each match, and each team has two robots on the field, which is about the size of a large tabletop and is covered in green felt with lines marked on it. The aim is to shoot goals. The constraints: the robots have to be autonomous - they can communicate with each other but cannot be remote controlled in any way - they can't damage other robots, and they have to fit within a certain size. How a team carries out tactics, and its choice of motors, wheels, sensors, etc. is up to them.

A lot of work went into our team's robot, and it fits the technical and engineering nature of this blog, so I have finally got around to writing up a blog post about my involvement in the project. At this stage, I think I'll separate the posts into the mechanical and software sides of things. This part will go over the mechanical design. The GitHub repo for all the code/CAD models is at: https://github.com/BillyWoods/RCJ_Soccer_Robot

The model above is our solution to the problem. The robot has three omni-wheels set 120 degrees apart. This lets the robot turn on the spot and move in any direction regardless of where the front is pointing. All the red parts on the model are electronics and the green parts are 3D printed. I modeled/coded the whole robot in OpenSCAD, and it's got a pretty hefty code base considering how simple it may look (about 4000 lines of code at the moment). This is due to the detail and how parametric it is; most of this turned out to be overkill. Still, it was a good learning experience, and having the model killed a few problems before the robot was even built and allowed us to change the electronics layout very quickly when we tested a few different boards.

One advantage to modeling out every little detail can be seen above: drilling/cutting templates can be produced for each of those layers of the robot automatically. Simply export as a .dxf file, add the cross hairs for the holes manually and then print out at 1:1 scale. If I ever need to make a large batch of these robots, the plates can also be made on a CNC with the .dxf files.


Seen above - the tricks of the trade. Stick the template onto whatever sheet material you're making the robot out of, centre punch all the holes, then cut out the circular plates with a jigsaw. Since the 1.2mm aluminium sheet is so thin, the jigsaw was extremely rough on it with even a small length sticking over the side of the bench. Using the "custom jig" on the right was my solution; having all that thick MDF supporting the aluminium as it was cut made cutting less dangerous, less noisy and much more fun.

Here's a bottom plate. It has a curved section cut out at the front to trap the soccer ball. The DiBond it is made of looks much nicer than aluminium.

Next step, print out the motor mounts on the 3D printer:

It is very satisfying watching what started as a design on a screen get translated into something tangible.


Skip a month or so and a few missing photos and here's an operational robot:
 

The robot in these photos is controlled by an Odroid U3 and an Arduino on top of it as a shield. The Odroid is a single board computer based on the same ARM chip as the Samsung Galaxy S3. It is like a more powerful Raspberry Pi. We used it so I could code a computer vision program to make use of the camera on the robot. The Arduino shield on top of the Odroid is an Arduino Uno in all but shape. It uses an Atmega328, the Arduino bootloader and has all the same pins available. It communicates with the Odroid via serial/UART. The Arduino IDE runs on the Odroid and sketches are uploaded via it.

The wheels used on the robot were not straightforward to attach to the motors. They have a 9mm bore which only goes halfway through the wheel hub. To attach them, I machined couplings out of steel on my lathe and tapped a hole for an M3 grub screw in them. These couplings would then press fit into the wheel hub and the grub screw would secure them on the motor's shaft.

This worked; however, the drill bit I used to bore out the hole for the motor's shaft was slightly bent, so the hole it drilled was slightly too large and, as a result, the wheels were wobbly. Not wanting to spend time making more, and keeping in mind that I had to make multiples of all these parts for other groups in the school's robotics club, I looked for a 3D printed alternative. The first thing I tried was almost exactly a copy of the press-fit metal couplings, though in plastic. These didn't work so well because the printed parts did not have a good enough tolerance for a snug press fit, and the plastic couldn't take a thread well enough for the grub screw that secures the motor shaft.

The second attempt was much less naïve. The design made use of the spokes in the wheel to allow more torque to be transferred. There was also room for a captive M3 nut for the grub screw. This design worked very well once it was glued and pressed into the wheel.


Here's a bonus shot of the soccer ball and the fenders I designed and printed for the robot. These were an afterthought when we realized the bottom plate was too low and thin and would just chip the ball into the air.

In terms of improvements to the robot's design, I do have concerns about the motors wearing down, in particular the gearboxes. Though they have metal gears inside, we've noticed an increase in backlash after some use. The motors' output shafts only use bushings instead of ball bearings, and the shafts are carrying the entire weight of the robot. Stepper motors would be the best alternative: NEMA 17s are cheap and powerful, the same goes for their drivers, and they are very tough in terms of wear. Stepper motors are also designed for much more precise control of position compared with DC motors, and the omnidirectional drive system needs precise control of the relative speeds of the wheels. Despite these concerns, in the end the mechanical side turned out to be the most reliable aspect of the robot. The success of the robot, or lack of it, came down, as far as we can tell, to running out of time to get the sensors, electronics and software working well. The good part is that all this work can be used and built upon in this year's coming competition.

Saturday, 1 November 2014

Compiling OpenCV with MinGW for Windows (and setup in CodeBlocks IDE)

For a robotics project, which I may document on this blog at a later stage, I needed OpenCV working on my computer so I could play around with algorithms and ideas. Later, the code was ported without much hassle onto a robot I built with a school robotics team, which had an ARM processor doing some computer vision. I discovered that the pre-compiled OpenCV binaries were never going to work with my preferred compiler and IDE - the only option was to compile the source code into binaries and headers that MinGW64 would be able to work with. Luckily, this isn't as hard as it sounds.

Compiling OpenCV yourself can be useful if you want to use a compiler or computer architecture for your projects which does not have pre-compiled binaries officially released for it. Compiling yourself also lets you choose what you want and don't want included. Unless you are cross-compiling, the only way to make things work reliably is to compile your programs with the exact same compiler that you compiled OpenCV with.

I wrote this guide after a lot of research on Google and plenty of trial and error, as the first step in trying to get into some computer vision. Disclaimer: I accept no responsibility for anything that may come of following this guide or things that may not work.

--Download the following or newer versions--:

OpenCV 2.4.3 for windows:        
http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.4.3/OpenCV-2.4.3.exe/download

MinGW compiler:                  
http://sourceforge.net/projects/mingwbuilds

CMake 2.8.12 installer for windows:
http://www.cmake.org/cmake/resources/software.html

CodeBlocks IDE with no compiler: 
http://sourceforge.net/projects/codeblocks/files/Binaries/13.12/Windows/codeblocks-13.12-setup.exe/download


--Setup the programs--:


    1.) The OpenCV file you downloaded is a self-extracting executable. Extract it to whatever location you want; I would recommend C:\opencv because it is easy to remember and get to. You will be entering this path A LOT.

    2.) The MinGW file you downloaded is not actually MinGW, but a program that allows you to choose very precisely what version to download and install. Run it and pick your own settings; I chose these, but it doesn't really matter-
        -Version:               4.8.1
        -Architecture:       x64 (assuming your machine is 64 bit, if not pick x86)
        -Threads:              Posix
        -Exception:           Seh
        -Build revision:     5
After getting MinGW installed (I would recommend putting it in C:\MinGW64), you will have to set up Windows' PATH variable for Windows to be able to find MinGW's executables. This is so that you can call any of MinGW's modules, e.g. g++ (the C++ compiler), from anywhere on the command prompt. This saves you having to navigate to the MinGW directory every time. It is also crucial that MinGW is included in the PATH so that other programs which rely on it can find it by themselves. The path which you add to PATH will look something like this: "C:\MinGW64\mingw64\bin" - no quotation marks, of course.

The PATH can be edited from the command line (google for it), or using a Windows tool with a GUI: just search for "path" in the start menu on Windows 7 and 8 and a program called "Edit the system environment variables" will come up. Start the program as an administrator (right click on its icon - it's in the menu), then click "Environment Variables". Under the list "System variables", find "Path" and double-click it to edit it. All you have to do is add the location of MinGW's bin (stands for binary) directory to the end of the list. Make sure you put a semicolon before (if required) and after it to separate it from the other paths.

To check that MinGW is set up properly, go to the command prompt and type in "g++"; you should get a message like "g++: fatal error: no input files; compilation terminated". Though it's an error, it is actually a good error, because it confirms MinGW is in the PATH: the error is coming from g++ rather than from Windows.

    3.) CMake's executable will install CMake all properly for you.
   
    4.) Install CodeBlocks; once again, its installer will do everything for you. While it's installing, it should find the MinGW installation we did earlier; if not, don't worry. If it didn't find MinGW, or you installed CodeBlocks before MinGW, we'll have to tell it where it is.

Even if CodeBlocks did find MinGW by itself, I would still go through the following steps to check that everything is set up properly.

Start CodeBlocks, then head to the drop-down menu at the top called "Settings", click on "Compiler...", then click the tab "Toolchain Executables".
First set "Compiler's installation directory" to where MinGW was installed (the same path we added to PATH, minus the bin folder on the end). As for the other settings:

        -C compiler:                           gcc.exe
        -C++ compiler:                      g++.exe
        -linker for dynamic libs:        g++.exe
        -linker for static libs:             ar.exe
        -Debugger:                             GDB/CDB debugger
        -Resource compiler:               windres.exe
        -Make program:                     mingw32-make.exe (yes, I know we installed a 64bit MinGW)

If for some reason these executables don't exist in your installation, it is most likely they are there but named something slightly different. Go into the \bin directory of MinGW and have a look to see if you can find something similarly named. Now CodeBlocks should be ready to go. Create a hello world project and see if it all works.


--Compile OpenCV for MinGW--:


    1.) Start cmake-GUI.exe, it should be under "C:\Program Files (x86)\cmake-2.8.12.2-win32-x86\bin", or you could just search for it.

For the text field "Where is the source code:" in the CMake GUI, enter the directory where openCV was extracted (should be: "C:\opencv") don't enter any deeper folders which look like they may have some code. The correct directory is the one with, among other things, a "CMakeLists.txt" file in it. This is the file that tells CMake what user-selectable options there are for our library and how to create compilation instructions for the compiler. CMake WILL NOT compile stuff, just configure stuff ready for compilation.

Now, for "where to build the binaries" I would reccomend making another opencv directory called "opencv_mingw" under "C:\" and use that. Once the directories are sorted, click the "configure" button, in the window that pops up, select "MinGW Makefiles" in the dropdown, and select the radio button "use default compilers" then click "finish". A progress bar will appear and some text in the output, this may last for a minute, all CMake is doing is mainly testing our compiler that we installed(MinGW), which it found because it was included in the PATH. Now, for all those checkboxes in CMake that appear the, defaults will be fine, but the point of compiling is that you get to choose, so if there is anything from other guides that you want to follow, do it. Now click "generate".
   
    2.) Now we leave CMake and go to the command prompt. Navigate to "C:\opencv_mingw" or whatever you named it; just make sure it's the directory you put down for "where to build the binaries" in CMake. You can navigate to it in the command prompt using the command "cd <directory>", which stands for change directory. The command won't appear to do anything, but if you look at the bit to the left of the flashing cursor in the prompt, you will see that it now shows the name of the directory (if it went successfully). Now type in "mingw32-make", and the command prompt will light up for about 30 minutes depending on your PC's power. When that is done, type in "mingw32-make install". DON'T miss this second step just because you see some newly created files and think you are done.

    3.) Open up CodeBlocks and create a new "console application"; you will see it appear in the directory tree on the left of the CodeBlocks window. Right click on your project (it should have a logo next to it with 4 coloured blocks) and select "Build options". Go to the tab "Search directories", select the second-tier tab "Compiler" and add the directory "C:\opencv_mingw\install\include". Now select the second-tier tab "Linker" and add the directory "C:\opencv_mingw\install\lib". Next go to the first-tier tab "Linker settings"; under the currently-empty list "Link libraries", add all the files in "C:\opencv_mingw\install\lib" - the directory will contain about 20 files, all of which end in ".dll.a". To add them all quickly, just click the "Add" button, then the button in the window that pops up with the three dots (right next to where you'd normally type a path). Now navigate to "C:\opencv_mingw\install\lib" and shift-click to select all the files. If you are asked whether you want to use relative paths, click no; this won't affect operation either way, but relative paths are messy.

YOU WILL HAVE TO DO THE STEPS ABOVE FOR EVERY NEW CODEBLOCKS PROJECT YOU CREATE WHICH YOU WANT TO USE OPENCV IN. There are ways around this: by doing the previous steps in global rather than project settings. To do this, go to the menu bar, go to "settings" then "compiler...". Note if you do this that it will apply to any project you open in CodeBlocks.

    4.) CodeBlocks, OpenCV and MinGW are now all set up to work, but if you try to compile a program which includes OpenCV libraries, the program will compile but crash when you run it. This is because Windows can't find the OpenCV DLLs we created when we compiled OpenCV (in simple terms, DLLs are libraries written by humans in C++, etc., turned into machine-readable code through compilation and used at runtime). So what we are going to do now is add the OpenCV DLLs to Windows' PATH environment variable, like we did with MinGW. Add the directory "C:\opencv_mingw\install\bin" to PATH.


    5.) Test everything by compiling and running a test program. Here is a test program which should run if everything is working fine:


#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
 
using namespace cv;
 
int main()
{
    Mat image;// new blank image
    image = imread("test.png", 0);// read the file
    namedWindow( "Display window", CV_WINDOW_AUTOSIZE );// create a window for display.
    imshow( "Display window", image );// show our image inside it.
    waitKey(0);// wait for a keystroke in the window
    return 0;
}



If it compiles but shows no image, there are a few common reasons why, and you should keep the following in mind when using CodeBlocks:

    -You need to create a .png image called "test.png" and put it under the folder "<my_project>\bin\Debug"; this is where the executable file will be put whenever you compile your own programs.
    -The "run" button in CodeBlocks (looks like a green play button) seems to run programs but never sees any files which you put in the same directory as the executable (and refer to in software with relative paths). So instead of using the run button in CodeBlocks, navigate to "<my_project>\bin\Debug" in Windows Explorer and run the executable file yourself.

 

 

--Credits and Resources--:

http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/
http://zahidhasan.wordpress.com/2013/02/16/how-to-install-opencv-on-windows-7-64bit-using-mingw-64-and-codeblocks/

Friday, 31 January 2014

DIY CNC Linear Rail Design

For my CNC machine, I decided that I would try to make my own linear rail system using 608 skate bearings and flat steel. This was to try and save money, and I must say that my design, which made the best of the equipment I had on hand, failed miserably. Luckily, I decided to try making a prototype with a 1m length of rail and half a carriage before diving in, buying metres and metres of steel and wasting my time building any further.

Here's a drawing of the rail with a carriage riding on it:

This is one side of that carriage made and put together. To get adjustability, and thus make up for the lack of tolerance (I had to do this by hand), I drilled the holes a bit large on purpose so that I could (in theory) get the bearings pressing tightly against the linear rail and then just tighten up the bolt:
These are the four pieces of SHS cut and drilled. Two of them have cuts which allow access to the bolts which have to be tightened; it turns out that there was not adequate room to get at the bolts with this design anyway:
The linear rail I put together. It is made of two pieces of steel bolted together: 50mm SHS with 3mm walls, and steel flat which was 5mm thick by 75mm wide, if I remember correctly.
For the prototype, I only used construction-grade steel, and this showed in the surface finish. In the final design I planned to replace the steel flat with precision-ground steel flat and just use structural SHS for support and rigidity.
Another problem with the overly large holes, which I had hoped would give me adjustability, was that the nut had less flat area to sit on and thus tended to skew off at an angle and not be all that rigid; this was exacerbated by the fact that the square tube had very thin walls. I also couldn't get a washer into the tight space to help the nut. This led me to trying to mill slots for adjustability. Since I don't have a milling machine or a milling attachment for my lathe, I just clamped the piece to the tool post, which was very finicky and not very rigid when all the milling force was on the clamp rather than the tool post. It sure did mill fairly well though.

In the end, I decided against this method of making home-made linear rails, even if I could have got the easy adjustability and rigidity which were major challenges with the tools I have. The final reasons I decided against this design were: the carriages were too bulky; making them was way too time-consuming given the number I would have to make in total; and the precision-ground steel flat for the bearings to run on would cost a lot and negate any cost savings of going homemade. Bottom line: you're going to get what you pay for, and your labour is not worth wasting when the products out of China are so cheap.

Instead, I've been considering 16mm supported linear rail mounted on a frame made out of 50mm SHS (or larger, with the thickest walls I can get), along with the linear bearings to go with it. Luckily for me, it turns out that the supplier of the linear rail is a short drive away.