Tag: "lego"

The evolution of a Lego Mindstorms Mars Rover

Last weekend I (oh, and about 9000 other people) participated in NASA's International Space Apps Challenge. My team worked on the Curiosity at Home challenge, which we split into three parts: translating NASA's SPICE data format into a more readable form, parsing that into commands for the rover, and building a representation of the Curiosity Mars rover itself.

The code is available on GitHub. It still needs some work, but, you know, hackathon.

I worked on building the rover, using Lego Mindstorms, and it proved to be trickier than I had anticipated. Most of the time it would look great, but then refuse to steer, or even move at all, as soon as some weight was put on it. And by weight I mean the NXT brick, which we felt was an indispensable component of the rover.

I'm in the process of disassembling the rover and taking photos of it, so that I can then rebuild it and document the build steps. But, in the meantime, a quick recap of how it evolved throughout the (mostly sleepless) weekend. Unfortunately, I don't have pictures of every intermediate version.

Iteration 2

By this point, we had already gone through an initial, flimsier version. But I wanted more robustness, as well as proper front-wheel steering. In this iteration, a single motor powered four wheels. Here it is (with Frits, as a bonus):

Lego rover, iteration 2

Iteration 6. Probably

Yeah, I don't remember exactly which iteration these pictures correspond to. Have I mentioned "sleepless"?

Anyway, you'll notice this version is shorter, meaning less strain on the middle section, better weight distribution and, we hoped, better steering. By then, we had already moved to a motor powering only two wheels, and in this iteration we finally started using two separate motors, one for each rear wheel.

Lego rover, iteration 6

Iteration 8, I think

Shorter, sturdier, and it uses different rotations of the back wheels for steering, in addition to the front wheel gears.

I was disappointed with the middle wheels; by this point they were mostly just for show. But the deadline was approaching, and we had to make decisions.

Lego rover, iteration 8

Iteration 10, final version

Not too many changes from the previous iteration, mostly some incremental adjustments. This is what we presented, and it worked reasonably well (all things considered).

Lego rover, iteration 10 (final)


Yahoo! Open Hack Day Brasil 2010

On March 20th and 21st, Yahoo! Brazil brought us our second Open Hack Day. I'd been to the previous one, in 2008, and it was amazing! Our project even won in a newly created category, aptly named "What the Hack?"

This year, I wanted once again to try a hardware hack, using whichever parts I could get my hands on. Not necessarily anything useful, though. That's what I love about the Hack Day. I can do useful stuff throughout the rest of the year :)

Image © brhackday, used with permission

Yahoo! Hack Days

Before the 2008 Hack Day, I had pretty much written off Yahoo! as a company that was no more. I wasn't even particularly interested in the event, and only decided to go at the last minute. Boy, what a change in perspective. Of course, Yahoo!'s São Paulo team has some very clever people. But, more generally, I was very impressed with the data-gathering tools that Yahoo! had started offering. YQL is simply fantastic. It perfectly captures what the Internet is about, data-wise (I'm not saying it's perfect, but it embodies the right spirit). I hadn't had that much geek fun in a long time. So you can probably tell my expectations were high for this year's edition. Could Yahoo! deliver?

No need to fret: they knew what they were doing. Just put some 250 hackers in a fishbowl, give them food, coffee, and wifi (surprisingly good, some silly proxy restrictions aside), show them Monty Python, and wait for it!

Our hack, and our hackers

Image © brhackday, used with permission

Even before the announcement of this year's Hack Day, I'd been toying with the (admittedly silly) idea of a firefighting robot that lurked online, waiting for people to report fires anywhere in the world. It would then bravely roll over to wherever the fire was, and put it out. Bravely.

Trouble is, it'd probably have to be one gargantuan robot. So I thought I'd settle for a more modest, Lego-built, Arduino-controlled one. With the Hack Day approaching, I suggested this project to a few friends and we created a Wave to discuss the idea. I was planning to use a box of old Lego pieces from my childhood (see, mom? I told you I'd eventually use those again!) and a couple of servo motors I had lying around, but Rodolpho brought his Mindstorms NXT into the picture, making the project much cooler (and, incidentally, much more manoeuvrable).

On Saturday morning I rode with Gola to the (really cool) auditorium of Senac University, where the previous Hack Day had already been hosted. There we met Rodolpho and Mobi, our original team. We had found out the day before that there would be a limit of 4 people per team this year. In 2008 our team had been made up of... 12? 15? I never even knew. We'd simply started building weird, blinking stuff, and people had gathered around for the fun. We were, therefore, a bit disappointed with this edition's limit. But, since we weren't really expecting to win anything (and therefore disqualification wasn't an issue), we bent the rules a bit, and Werneck, Lucmult and Mauro hacked with us. Also, Aline joined us a bit later. Since Werneck and Lucmult eventually had to leave and didn't return for Sunday, and Aline and Mobi didn't program, I think we were sort of in the clear. Technically. Sort of.

And then we started building.

What we built

The robot, in construction. Photo by alickel

The body of the robot ended up using mainly the Mindstorms parts. We had started building a larger, sturdier body with assorted Lego pieces on top of a rigid board, pulled by front-wheel drive with four wheels in the front and a loose trailing one. But, after a lot of testing, the loose wheel kept veering the robot off track, so we rebuilt everything with a lighter, smaller frame and a pair of caterpillar tracks. In hindsight, I think we should have kept the larger body (even if rethinking the wheel traction), as it gave the robot more stability, which we would come to miss later. But we had to make a quick decision, and we worked with what we had. The NXT controller sat on the robot and communicated with an external server via bluetooth, relaying control input to the wheels and sending back odometry information.
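For a feel of how simple that control loop can be, here's a minimal sketch of driving the two track motors over bluetooth with the nxt-python library; the ports, power values and timings are made up for illustration, not what we actually used:

```python
# A rough sketch assuming the nxt-python library; ports, powers and timings
# are illustrative only.
import time

import nxt.locator
from nxt.motor import Motor, PORT_B, PORT_C

brick = nxt.locator.find_one_brick()   # pairs with the NXT over bluetooth/USB
left = Motor(brick, PORT_B)
right = Motor(brick, PORT_C)

def drive(power_left, power_right, seconds):
    """Run both track motors; unequal powers make the robot turn."""
    left.run(power_left)
    right.run(power_right)
    time.sleep(seconds)
    left.brake()
    right.brake()

drive(75, 75, 2.0)     # roughly straight ahead
drive(60, -60, 0.8)    # spin in place to change heading
# left.get_tacho() / right.get_tacho() expose the odometry feedback mentioned above
```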

This external server (actually, Rodolpho's notebook) continuously used YQL to query Twitter, Yahoo! Meme and news feeds (which I originally wanted to aggregate using Yahoo! Pipes, but we didn't have time for it), searching for people reporting fires - a simple string search, filtered by the Yahoo! Term Extraction API. Whenever there were reports, the server would send them through the Placemaker API to extract location information. It then determined which location in the world was in most urgent need of aid (by number of reports).
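For reference, a YQL query of that sort is just an HTTP request. Below is a rough sketch of the polling in Python 2; the endpoint is YQL's public one from that era, and the table and search string are purely illustrative, not the exact queries we ran:

```python
# Illustrative YQL polling sketch (Python 2); table name and query are examples only.
import json
import urllib

YQL_ENDPOINT = "http://query.yahooapis.com/v1/public/yql"

def yql(query):
    """Send a YQL statement and return the parsed JSON response."""
    params = urllib.urlencode({"q": query, "format": "json"})
    return json.load(urllib.urlopen(YQL_ENDPOINT + "?" + params))

# e.g. a naive string search over a community Twitter table
reports = yql('select * from twitter.search where q = "fire"')
```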

Now, the tricky part: the robot needed to be aware of a rectangular projection of the world onto the room, and to know its own location on it. I don't mean a visual projection. We did consider it, but realised that it would be unfeasible for the Hack Day. We toyed with the idea of printing (or drawing) a world map on large sheets and taping them to the ground, but the robot would surely slip, trip, or tear the sheets. So we settled for an abstract projection, and, based on information sent back from the robot, plotted its estimated current position in the "world", at each instant, using Yahoo! Maps and the Yahoo! Geocoding API (via geopy).
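The "abstract projection" amounts to a simple linear mapping from latitude/longitude to room coordinates, along these lines (the room dimensions here are made up):

```python
# Sketch: map a (lat, lon) pair onto a rectangular "room".
ROOM_W, ROOM_H = 4.0, 2.0   # hypothetical room size, in metres

def world_to_room(lat, lon):
    x = (lon + 180.0) / 360.0 * ROOM_W    # longitude -180..180 -> 0..ROOM_W
    y = (90.0 - lat) / 180.0 * ROOM_H     # latitude 90..-90 -> 0..ROOM_H
    return x, y

print world_to_room(-23.55, -46.63)       # Sao Paulo lands somewhere mid-room
```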

To locate itself, the robot relied on Mindstorms' odometry feedback, and on QR Code markers distributed around the room, read by an Android phone using Python and the Barcode Scanner app. The Android phone sent the information encoded in each QR Code to a custom service built with Python's SimpleHTTPServer, which our server polled. The server then sent back new movement controls, according to the robot's current position and desired target. We tried to use Mindstorms' sonar and colour sensors, but they just wouldn't work reliably with the Python interface (which we needed to send commands programmatically via bluetooth).
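That relay service doesn't need to be more than a few lines. Here's a minimal sketch in Python 2, using the BaseHTTPServer module that SimpleHTTPServer is built on; the URL paths, parameter name and port are invented for illustration:

```python
# Sketch of the QR-code relay: the phone reports markers, the server polls /last.
import urlparse
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

last_code = [None]   # most recent marker seen by the phone

class MarkerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path, _, query = self.path.partition("?")
        if path == "/report":                        # phone: GET /report?code=marker-07
            params = urlparse.parse_qs(query)
            last_code[0] = params.get("code", [None])[0]
            body = "ok"
        else:                                        # control server: GET /last
            body = last_code[0] or ""
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), MarkerHandler).serve_forever()
```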

A happy robot. Image © codepo8 CC BY 2.0

No one likes an impersonal robot, and ours, accordingly, had a face in order to convey emotions. We used an XO laptop as the head, and defined that the robot would have pre-determined emotions depending on the situation: it would be "at rest" when there were no fires going on, "worried" when it was going towards a fire, and "happy" when it had put the fire out. A Python script running in the background continuously searched Flickr (using YQL) for expressions associated with each of these emotions, and downloaded a number of related pictures. Another script queried the robot's current emotional state (set by the server), and displayed an appropriate random subset of these pictures. For every few of them, we'd display a face that Aline had drawn specifically for that emotion, to make up for the fact that we couldn't be sure if the pictures would depict it (people give the weirdest tags and descriptions to their Flickr uploads...).
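In spirit, the display side was little more than a loop like the one below; the emotion file, folder layout and viewer call are placeholders, with webbrowser.open standing in for whatever fullscreen viewer the XO actually used:

```python
# Sketch: show a random downloaded picture matching the robot's current emotion.
import os
import random
import time
import webbrowser

while True:
    emotion = open("/tmp/robot_emotion").read().strip()   # "at_rest", "worried" or "happy"
    folder = os.path.join("pictures", emotion)            # filled earlier by the Flickr/YQL script
    pictures = os.listdir(folder)
    if pictures:
        webbrowser.open(os.path.join(folder, random.choice(pictures)))
    time.sleep(5)
```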

We originally meant to use an Arduino to control a servo that would squeeze some water out of a syringe, thus putting out the fire once the robot got to its destination. But all that proved too much for a mere 24h, and we had to settle for a manual pump.

A camera-shy robot, and a sleepy presenter

Of course, we'd tested (almost) everything, and the robot worked (almost) perfectly. (Almost) Great. But, when it was time to go up on the stage, we hit a few snags.

First, there was only one large screen. Of course, we'd known that, but it hadn't occurred to us that people would need to see not only the robot (which was tiny, especially at a distance), but also the map showing its movement towards the fire, at several different moments. Yahoo! had even predicted such an eventuality (seriously, guys, kudos!) and provided us with a way to switch between our server's screen and the feed from a video camera, which Aline used to film the robot. But I didn't manage to coordinate the switch properly, and we didn't get to show the animated map.

Also, we had already noticed that the XO weighed a lot compared to the rest of the robot (remember what I said about the previous, sturdier version?), and that, in order to steer, we needed to balance it carefully. I didn't, and the robot soon veered off its path.

Finally, we had used so many different technologies and APIs on this project, and most of them simply vanished from my mind when I started presenting! Note to self: next time, write some hints on the back of my hand...

All I have to say for myself is that I had slept for only about 1 hour since the previous morning, ours was one of the last projects to be presented, and by then I was very sleepy and barely able to react. I should have switched from the camera to the map more often, so people could see what was going on. I should have grabbed the robot when it went off track and restarted the demo. I should have asked my teammates for help when trying to list everything that was at work in the robot. Shame on me.

But, all in all, I think it was a great project, and I'm very proud of it. I'm especially proud of my team, you rock much more than I was able to show :)

And, next year, I promise to take a nap before presenting!

The Robot. Image © brhackday, used with permission
