The evolution of a Lego Mindstorms Mars Rover

Last weekend I (oh, and about 9000 other people) participated in NASA's International Space Apps Challenge. My team worked on the Curiosity at Home challenge, split into three parts: translating NASA's SPICE data format into a more readable form, parsing that into commands for the rover, and building a representation of the Curiosity Mars rover itself.

The code is available on GitHub. It still needs some work, but, you know, hackathon.

I worked on building the rover, using Lego Mindstorms, and it proved trickier than I had anticipated. Most of the time it would look great, but then refuse to steer or even move at all as soon as some weight was put onto it. And by weight I mean the NXT brick, which we felt was an indispensable component of the rover.

I'm in the process of disassembling the rover and taking photos of it, so that I can then rebuild it and document the build steps. But, in the meantime, a quick recap of how it evolved throughout the (mostly sleepless) weekend. Unfortunately, I don't have pictures of every intermediate version.

Iteration 2

By this point, we had an initial, flimsier version. But I wanted more robustness, as well as proper front-wheel steering. In this iteration, a single motor powered all four wheels. Here it is (with Frits, as a bonus):

Lego rover, iteration 2

Iteration 6. Probably

Yeah, I don't remember exactly which iteration these pictures correspond to. Have I mentioned "sleepless"?

Anyway, you'll notice this version is shorter, meaning less strain on the middle section, better distribution of weight and, we hoped, better steering. By then, we had already moved to a motor powering only two wheels, and by this point we had finally started using two separate motors, one for each rear wheel.

Lego rover, iteration 6

Iteration 8, I think

Shorter, sturdier, and it uses different rotations of the back wheels for steering, as well as the front-wheel gears.

I was disappointed with the middle wheels; by this point they were mostly just for show. But the deadline was approaching, and we had to make decisions.

Lego rover, iteration 8

Iteration 10, final version

Not too many changes from the previous iteration, mostly some incremental adjustments. This is what we presented, and it worked reasonably well (all things considered).

Lego rover, iteration 10 (final)


Automated Python testing with nose and tdaemon

If you're testing your code at all (and you are, right?), it's awfully convenient to automatically have your test suite run whenever something in your project changes. This is particularly handy if you're doing test-driven development (TDD), where you'll be writing a lot of tests and need immediate feedback on them.

With Python this is made easy with the help of nose and tdaemon.

In a hurry? Jump straight to the summary!

Installing

Both tdaemon and nose can be installed via Cheeseshop (or "PyPI", for the suits), using pip:

sudo pip install nose tdaemon

(You might not need sudo, depending on your setup).

An interlude: pip versus easy_install

You can use easy_install instead of pip. It should work just fine, for what we're trying to do here. But do yourself a favor and switch over to pip. It's as simple as

sudo easy_install pip

Another interlude: whither tdaemon?

If you google "tdaemon", the first result is a github tree from Bruno Bord, tdaemon's author. The version at Cheeseshop (yes, I'll keep calling it "Cheeseshop", damn it!) lists John Paulett as author and points back to the github repository as its home page. Both versions are almost exactly alike, except that the Cheeseshop one has a slight enhancement (a command-line argument to ignore specific directories). John Paulett took tdaemon, added that feature and packaged it for Cheeseshop. So we'll use his version.

Getting cute notifications

So far, if you simply execute tdaemon on a terminal, it'll monitor the current directory and run nosetests whenever it detects a change. Which is fine, but I don't want to switch to the terminal all the time while I'm programming, if I don't have to. So let's arrange our environment so that we get visual alerts every time the tests are run.

On Mac OS X

I assume you already have Growl on your Mac. If you don't, install it; it makes your life easier (if you're not sure, look for it in the bottom row of your System Preferences panel).

The NoseGrowl nose plugin provides Growl notifications. Unfortunately, NoseGrowl installation from PyPI is currently broken: there is only an egg for Python 2.5 on Cheeseshop. The original source code on Bitbucket has a bug in setup.py, but Osvaldo Santana was kind enough to fork and fix it. So:

hg clone http://bitbucket.org/osantana/nosegrowl
cd nosegrowl/nose-growl
sudo python setup.py install

By the way, NoseGrowl's author, Victor "crankycoder" Ng, told me a few months ago that he was looking for someone to take over for him. So, if you find the project useful, please consider talking to Victor and volunteering to maintain it. Might make him less cranky :) [Edit: perhaps Osvaldo will step up?]

On Linux (Gnome)

There is a NoseGrowl-inspired plugin adapted to use Gnome's notification system. It's called nose-notify and can be installed directly from Cheeseshop:

sudo pip install nose-notify

I don't know if there is a notification plugin for KDE. If you know of one, let me know and I'll add it here!

Putting it all together

Open a terminal window and cd to the root directory of your project (tdaemon recursively looks at everything down from there).

If you're on OS X, type:

tdaemon --custom-args="--with-growl"

If you're on Linux, type:

tdaemon --custom-args="--with-notify --no-start-message"

Since you're passing custom arguments to nosetests, tdaemon will ask you to confirm that this is the command you want to run. Simply type "y".

Note that, on Linux, nose-notify has a "--no-start-message" option. This is handy, as the start message is mostly useless, and, on Ubuntu, it sticks for a few seconds and delays the actual test results.

Now you can create a file called, say, "tests.py", add tests to it and watch what happens as you save it!
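
For instance, a minimal tests.py could look like the following (add_numbers is just a made-up function for illustration, not anything from a real project):

# tests.py -- a tiny, made-up example just to see the notifications fire
def add_numbers(a, b):
    return a + b

def test_add_numbers():
    assert add_numbers(2, 3) == 5

def test_add_numbers_with_negatives():
    assert add_numbers(-1, 1) == 0

nose picks up any function whose name starts with "test", so as soon as you save the file you should get a notification with the results. Break one of the assertions and save again to see the failure: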

Screenshot: notification for a failing test run (red)

Screenshot: notification for a passing test run (green)

 

Summary

Install nose and tdaemon:

sudo pip install nose tdaemon

On Mac OS X, install NoseGrowl:

hg clone http://bitbucket.org/osantana/nosegrowl
cd nosegrowl/nose-growl
sudo python setup.py install

and run as:

tdaemon --custom-args="--with-growl"

On Linux, install nose-notify:

sudo pip install nose-notify

and run as:

tdaemon --custom-args="--with-notify --no-start-message"

That's it. Happy coding!


Yahoo! Open Hack Day Brasil 2010

On March 20th and 21st, Yahoo! Brazil brought us our second Open Hack Day. I'd been to the previous one, in 2008, and it was amazing! Our project even won in a newly created category, aptly named "What the Hack?"

This year, I wanted once again to try a hardware hack, using whichever parts I could get my hands on. Not necessarily anything useful, though. That's what I love about the Hack Day. I can do useful stuff throughout the rest of the year :)

Image © brhackday, used with permission

Yahoo! Hack Days

Before the 2008 Hack Day, I had pretty much written off Yahoo! as a company that was no more. I wasn't even particularly interested in the event, and only decided to go at the last minute. Boy, what a change in perspective. Of course, Yahoo!'s São Paulo team has some very clever people. But, more generally, I was very impressed with the data-gathering tools that Yahoo! had started offering. YQL is simply fantastic. It perfectly captures what the Internet is about, data-wise (I'm not saying it's perfect, but it embodies the right spirit). I hadn't had that much geek fun in a long time. So you can probably tell my expectations were high for this year's edition. Could Yahoo! deliver?

No need to fret: they knew what they were doing. Just put some 250 hackers in a fishbowl, give them food, coffee and wifi (surprisingly good, some silly proxy restrictions excluded), show them Monty Python, and wait for it!

Our hack, and our hackers

Image © brhackday, used with permission

Even before the announcement of this year's Hack Day, I'd been toying with the (admittedly silly) idea of a firefighting robot that lurked online, waiting for people to report fires anywhere in the world. It would then bravely roll over to wherever the fire was, and put it out. Bravely.

Trouble is, it'd probably have to be one gargantuan robot. So I thought I'd settle for a more modest, Lego-built, Arduino-controlled one. With the Hack Day approaching, I suggested this project to a few friends and we created a Wave to discuss the idea. I was planning to use a box of old Lego pieces from my childhood (see, mom? I told you I'd eventually use those again!) and a couple of servo motors I had lying around, but Rodolpho brought his Mindstorms NXT into the picture, making the project much cooler (and, incidentally, much more manoeuvrable).

On Saturday morning I rode with Gola to the (really cool) auditorium of Senac University, which had already hosted the previous Hack Day. There we met Rodolpho and Mobi, our original team. We had found out the day before that, this year, there would be a limit of 4 people per team. In 2008 our team had consisted of... 12? 15? I never even knew. We'd simply started building weird, blinking stuff, and people had gathered around for the fun. We were, therefore, a bit disappointed with this edition's limit. But, since we weren't really expecting to win anything (and therefore disqualification wasn't an issue), we bent the rules a bit, and Werneck, Lucmult and Mauro hacked with us. Aline also joined us a bit later. Since Werneck and Lucmult eventually had to leave and didn't return for Sunday, and Aline and Mobi didn't program, I think we were sort of in the clear. Technically. Sort of.

And then we started building.

What we built

The robot, in construction. Photo by alickel

The body of the robot ended up using mainly the Mindstorms parts. We had started building a larger, sturdier body out of assorted Lego pieces on top of a rigid board, pulled by front-wheel drive, with four wheels in the front and a loose trailing one. But, after a lot of testing, the loose wheel kept veering the robot off track, so we rebuilt everything with a lighter, smaller frame and a pair of caterpillar tracks. In hindsight, I think we should have kept the larger body (even if rethinking wheel traction), as it gave the robot more stability, which we would come to miss later. But we had to make a quick decision, and we worked with what we had. The NXT controller sat on the robot and communicated with an external server via Bluetooth, relaying control input to the wheels and sending back odometry information.

This external server (actually, Rodolpho's notebook) continuously used YQL to query Twitter, Yahoo! Meme and news feeds (which I originally wanted to aggregate using Yahoo! Pipes, but we didn't have time for it), searching for people reporting fires - a simple string search, filtered by the Yahoo! Term Extraction API. Whenever there were reports, the server would send them through the Placemaker API to extract location information. It then determined which location in the world was in most urgent need of aid (by number of reports).
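
To give an idea of that last step (an illustrative sketch only, not the code we actually wrote, and with made-up place names): once Placemaker has tagged each report with a location, picking the most urgent one boils down to counting.

# Hypothetical input: one place name per fire report, as extracted by Placemaker
report_places = [
    "Sao Paulo, Brazil",
    "Sydney, Australia",
    "Sao Paulo, Brazil",
]

# Count reports per place; the place with the most reports wins
counts = {}
for place in report_places:
    counts[place] = counts.get(place, 0) + 1

target = max(counts, key=counts.get)
print "Sending the robot to %s (%d reports)" % (target, counts[target])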

Now, the tricky part: the robot needed to be aware of a rectangular projection of the world onto the room, and to know its own location on it. I don't mean a visual projection. We did consider it, but realised that it would be unfeasible for the Hack Day. We toyed with the idea of printing (or drawing) a world map on large sheets and taping them to the ground, but the robot would surely slip, trip, or tear the sheets. So we settled for an abstract projection, and, based on information sent back from the robot, plotted its estimated current position in the "world", at each instant, using Yahoo! Maps and the Yahoo! Geocoding API (via geopy).
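
The projection itself was nothing fancy. As a rough sketch (the room dimensions below are made up, purely for illustration), it amounts to scaling latitude and longitude linearly into room coordinates:

# Map a (latitude, longitude) pair onto a rectangular "room".
# ROOM_WIDTH and ROOM_DEPTH are hypothetical dimensions, in centimetres.
ROOM_WIDTH = 400.0
ROOM_DEPTH = 300.0

def world_to_room(lat, lon):
    # Longitude spans -180..180 and latitude -90..90; scale both linearly,
    # with (0, 0) at one corner of the room
    x = (lon + 180.0) / 360.0 * ROOM_WIDTH
    y = (lat + 90.0) / 180.0 * ROOM_DEPTH
    return x, y

print world_to_room(-23.55, -46.63)  # Sao Paulo, roughly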

To locate itself, the robot relied on Mindstorms' odometry feedback, and on QR Code markers distributed around the room, read by an Android phone using Python and the Barcode Scanner app. The Android phone sent the information encoded in each QR Code to a custom service built with Python's SimpleHTTPServer, which our server polled. The server then sent back new movement controls, according to the robot's current position and desired target. We tried to use Mindstorms' sonar and colour sensors, but they just wouldn't work reliably with the Python interface (which we needed to send commands programmatically via Bluetooth).
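
To give a rough idea of that relay (a simplified sketch, not our actual code: the paths and port are made up, and the real service was a quick hack on top of SimpleHTTPServer, whose handler is built on the standard BaseHTTPServer module used here):

import BaseHTTPServer
import urlparse

# The most recent QR Code marker reported by the phone
last_marker = [None]

class MarkerHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse.urlparse(self.path)
        if parsed.path == "/scanned":
            # The phone reports a scan, e.g. GET /scanned?marker=A3
            params = urlparse.parse_qs(parsed.query)
            last_marker[0] = params.get("marker", [""])[0]
            body = "ok"
        else:
            # The control server polls any other path for the latest marker
            body = last_marker[0] or ""
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    BaseHTTPServer.HTTPServer(("", 8000), MarkerHandler).serve_forever()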

A happy robot. Image © codepo8 CC BY 2.0

No one likes an impersonal robot, and ours, accordingly, had a face in order to convey emotions. We used an XO laptop as the head, and decided that the robot would have pre-determined emotions depending on the situation: it would be "at rest" when there were no fires going on, "worried" when it was going towards a fire, and "happy" when it had put the fire out. A Python script running in the background continuously searched Flickr (using YQL) for expressions associated with each of these emotions, and downloaded a number of related pictures. Another script queried the robot's current emotional state (set by the server) and displayed an appropriate random subset of these pictures. Every few pictures, we'd also display a face that Aline had drawn specifically for that emotion, to make up for the fact that we couldn't be sure the downloaded pictures would depict it (people give the weirdest tags and descriptions to their Flickr uploads...).
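
The display side was just as simple in spirit. A sketch of the idea (the directory layout and the state file here are hypothetical, not our actual setup):

import glob
import os
import random

# Hypothetical layout: one directory of downloaded Flickr pictures per emotion,
# plus a small text file where the server writes the robot's current state
EMOTIONS = ("rest", "worried", "happy")

def current_emotion(state_file="emotion.txt"):
    if os.path.exists(state_file):
        emotion = open(state_file).read().strip()
        if emotion in EMOTIONS:
            return emotion
    return "rest"

def pick_pictures(emotion, how_many=5):
    pictures = glob.glob(os.path.join(emotion, "*.jpg"))
    return random.sample(pictures, min(how_many, len(pictures)))

print pick_pictures(current_emotion())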

We originally meant to use an Arduino to control a servo that would squirt some water from a syringe, thus putting out the fire once the robot reached its destination. But all that proved too much for a mere 24 hours, and we had to settle for a manual pump.

A camera-shy robot, and a sleepy presenter

Of course, we'd tested (almost) everything, and the robot worked (almost) perfectly. (Almost) Great. But, when it was time to go up on the stage, we hit a few snags.

First, there was only one large screen. Of course, we'd known that, but it hadn't occurred to us that people would need to see not only the robot (which was tiny, especially at a distance), but also the map showing its movement towards the fire, at several different moments. Yahoo! had anticipated even this eventuality (seriously, guys, kudos!) and provided us with a way to switch between our server's screen and the feed from a video camera which Aline used to film the robot. But I didn't manage to coordinate the switch properly, and we didn't get to show the animated map.

Also, we had already noticed that the XO weighed a lot compared to the rest of the robot (remember what I said about the previous, sturdier version?), and that, in order to steer, we needed to balance it carefully. I didn't, and the robot soon veered off its path.

Finally, we had used so many different technologies and APIs on this project that most of them simply vanished from my mind when I started presenting! Note to self: next time, write some hints on the back of my hand...

All I have to say for myself is that I had slept for only about an hour since the previous morning, ours was one of the last projects to be presented, and by then I was very sleepy and barely able to react. I should have switched from the camera to the map more often, so people could see what was going on. I should have grabbed the robot when it went off track and restarted the demo. I should have asked my teammates for help when trying to list everything at work in the robot. Shame on me.

But, all in all, I think it was a great project, and I'm very proud of it. I'm especially proud of my team, you rock much more than I was able to show :)

And, next year, I promise to take a nap before presenting!

The Robot. Image © brhackday, used with permission


Pushing up Python on Android

A few days ago, I put on the manliest voice I could muster and made an announcement to my wife: "Stand aside, woman! I am going to the gym!"

Needless to say, she was thoroughly unimpressed but somewhat amused when, half an hour later, she found me at the computer, programming.

Despite what she might tell you, I hadn't given up on exercising. You see, I had recently taken up the One Hundred Pushups, Two Hundred Situps and Two Hundred Squats programs. These involve a few sets of a varying number of repetitions each, with timed pauses between them. So, for instance, on my first day I'd do 10 pushups, rest for 60 seconds, do 12 pushups, rest, 7 pushups, rest, 7 pushups, rest, and finally as many pushups as I could (but at least 9). The number of repetitions varies as you progress. My problem was keeping track of how many repetitions of which exercise to perform on any given day.

Now, there are nice PDFs with the whole exercise program on each site, but they're supposed to be printed. On paper. How low tech! Some people use spreadsheets, but... Meh. So I decided I should turn to my Android phone for help.

There is an iPhone app for One Hundred Pushups et al, and at first I considered writing an Android app to match. And I still do, but, of course, that's a full-on project, one that would definitely not be usable in time for me to exercise that night. So: a pragmatic program, one that must be up and running in a very short time (my wife was laughing out loud by then) and can improve as the need arises. This looks like a job for... Python!

Python is not (yet?) a first-class citizen on the Android, but it's a respectable second-class one, thanks to Damon Kohler and his Android Scripting Environment. ASE lets you run several interpreted languages on the Android, amongst them Python, Lua, Perl and JRuby. However, these are limited in what they can access of the Android API. More specifically, you can't build arbitrary user interfaces or create new activities (though you can invoke existing ones).

Still, having a Python interpreter on your mobile can be handy. I needed to input three sets of repetitions (one for each exercise program), and have Android let me know how many repetitions to do next, and for how long to rest between them. I'm still fiddling with this code (trying to weigh making it better versus building a proper app versus actually, you know, exercising); it's just a quick hack I cooked up to get going, but it's growing on me. Anyway, here it is:

from time import sleep
import android

droid = android.Android()

# How long to rest between repetitions (in seconds)
rest = 60
# Warning before starting the next round (in seconds)
wake = 10

# One line per exercise set, each number of repetitions separated by spaces
# For instance (pushups, situps, squats):
# 10 12 7 7 >=9
# 9 9 6 6 >=8
# 19 24 19 19 >=27
user_input = droid.getInput("Series", "Describe all repetition sets in your series:")
series = [i for i in user_input["result"].splitlines() if i.strip()]

def interval(theres_more=True, rest=rest, wake=wake):
    droid.makeToast("Rest for %d seconds..." % rest)
    sleep(rest - wake)

    if theres_more:
        droid.makeToast("Ready? %d seconds to start!" % wake)
        droid.vibrate(500)
    sleep(wake)

    if theres_more:
        droid.makeToast("Go!")
    droid.vibrate(3000)

for s in series:
    droid.getInput("New series!", "Ready?")
    sets = s.split()
    l = len(sets)
    for n, repetitions in enumerate(sets):
        droid.getInput(repetitions, '(press Ok when finished)')
        interval(theres_more=(n+1<l))

droid.makeToast("w00t! Congratulations!")

(you can also download it here)

As explained in the comments, it initially expects lines containing the number of repetitions in each set. So, if I'm undertaking pushups, situps and squats (respectively), I might input:

10 12 7 7 >=9
9 9 6 6 >=8
19 24 19 19 >=27

Of course, ">=9" is not a number, but the script will use whatever you input there as labels for prompting you to perform your repetitions.

You'll notice that the script uses getInput for displaying messages when it expects the user to press "Ok" (even though it doesn't expect any typed input at all). That's because, currently, getInput is the only graphical widget provided by the Python proxy for the Android API. But more on that later.

So, try it out (if you're willing to exercise at all, or if you're just curious), and let me know what you think! Did it help you exercise?
