So I spent part of my weekend tinkering with robots. I should have committed more time, but I was conflicted and a bit overscheduled; anyway, I made it to the UK NAO user group’s hackathon out at Queen Mary College. It was an interesting experience. I’ve been to a few events like this before in my serious life, with companies like Twilio, and even though I know much more about VoIP, the web, and general-purpose Python than I do about robotics, I’m usually a bit intimidated by the prospect of being found out as a wing-it Brit. On this occasion, showing up late, I got assigned to work with someone and had to pitch in.
Some points: Aldebaran Robotics have really succeeded in making a charming robot. The amount of hardcore control engineering that went into making it stand up and hold its posture doesn’t bear thinking about. Their visual design was apparently inspired by toons – they wanted to make a cartoon character that could clank about. Photos are in the usual place.
This has deeper consequences, though: the software that lets you do stuff with Nao is heavily influenced by it. You can imagine two tracks – one for animation and one for programming; Aldebaran terms the latter “behaviour”. One of the difficult things to grasp is that the animation takes priority. A Nao application is organised into animation frames, and at the end of each frame, flow of control reverts to the animation timeline. If you want any of your code to execute, you have to remember to stop the animation; otherwise the timeline takes over and proceeds until it finds something to do in its own context.
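To make that hand-off concrete, here’s a toy model in plain Python – nothing to do with Aldebaran’s actual API, just an illustration of frames playing in order unless your code explicitly stops the timeline:

```python
# Toy model of a frame-based timeline; illustrative only, not Aldebaran's API.
class Timeline:
    def __init__(self, frames):
        self.frames = frames   # list of (name, callable-or-None) pairs
        self.stopped = False

    def stop(self):
        self.stopped = True

    def run(self):
        played = []
        for name, code in self.frames:
            if self.stopped:
                break
            played.append(name)
            if code:
                code(self)  # user code runs here, then control reverts to the timeline
        return played

# A frame whose attached code forgets to stop the timeline...
passive = [("wave", None), ("my_code", lambda t: None), ("idle", None)]
# ...versus one that stops it explicitly, keeping control.
active = [("wave", None), ("my_code", lambda t: t.stop()), ("idle", None)]

print(Timeline(passive).run())  # ['wave', 'my_code', 'idle'] - animation carries on
print(Timeline(active).run())   # ['wave', 'my_code'] - user code kept control
```

The point of the toy: unless the “my_code” frame calls `stop()`, the animation just marches on past it.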
There’s a huge graphical IDE that is meant to help you build both animations and behaviours. You can assemble stuff in a Yahoo! Pipes sort of way, but very often you end up editing the underlying Python code that implements the various graphical components. In the usual weird way – very much like Pipes – graphical programming seems like such a great idea, but in practice you could go so much faster in text, even as a beginner.
You can program the whole thing as a Python script, but then you can’t distribute it through their app store, and you have to code all the gestures from the ground up. This is an important point: annoying as the animation stuff can be, it’s absolutely vital.
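Scripting it directly looks roughly like this – a sketch assuming Aldebaran’s NAOqi Python SDK (the `naoqi` module) is installed, with a hypothetical robot address:

```python
# Sketch of driving Nao from a plain Python script via the NAOqi SDK.
# Assumes the SDK's `naoqi` module is on the path; the IP address is hypothetical.
NAO_IP = "192.168.1.10"   # hypothetical robot address
NAO_PORT = 9559           # NAOqi's default port

def greet(ip=NAO_IP, port=NAO_PORT):
    from naoqi import ALProxy  # resolves only with the Aldebaran SDK installed
    tts = ALProxy("ALTextToSpeech", ip, port)
    motion = ALProxy("ALMotion", ip, port)
    motion.wakeUp()            # stiffen the joints and stand up
    tts.say("Hello there")
    # Any gesture beyond the built-ins you choreograph yourself, joint by joint:
    motion.angleInterpolation(["HeadYaw"], [0.5], [1.0], True)
```

Note how quickly “say hello and look left” turns into hand-written joint interpolation – which is exactly why the animation tooling, for all its annoyances, matters.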
Working with a robot is very unlike scripting or web development. Scripts don’t really have user interaction – you pass stuff on the command line at startup, the script does its thing and dumps to standard output. Client-server programs generally start up, wait for user input, respond to it, and then wait for more stuff to happen. It’s also possible to stylise the user’s interaction with them quite strictly. You don’t need to worry about people dancing in front of port 80 on a web server.
Robots are different, especially ones that are meant to work with humans. Just standing there like furniture, or a web server, is unacceptable: people expect expression. Software on a general-purpose robot like Nao also has to do something to project its interaction affordances. You can see that Nao has hands and can walk. You might reason that it probably has some sensors, and guess where they are. But until it does something, it’s inscrutable.
As a result, the considerations involved are surprisingly theatrical. How will the robot show what is going on? Can people hear it? It can’t just sit there; it has to have stage presence, to project itself. This goes double if it’s meant to merge into the background and wait.
And this has a lot of consequences. In practice, things like how long it waits between remarks have a big impact on the personality it projects, and on its usability. Had I had more time, I’d have done a lot more of this – I’m sure any autistic sprog exposed to our special-education app would have run a mile from its brutal pace. (We cut all the wait states out of it, though, because we discovered the i18n layer was interpreting 2,000000 – i.e. 2 followed by six decimal places, with a comma for the decimal separator on a Finnish keyboard – as 2 million seconds, and therefore freezing the robot.)
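For what it’s worth, that class of bug is easy to reproduce in plain Python – the function names here are mine, not the app’s:

```python
# Reproducing the decimal-separator bug in plain Python; names are mine, not the app's.
def parse_seconds_naive(text):
    """Treat commas as thousands separators - fine for "1,000", wrong for Finnish input."""
    return float(text.replace(",", ""))

def parse_seconds_finnish(text):
    """Treat the comma as a decimal separator, as a Finnish keyboard layout intends."""
    return float(text.replace(",", "."))

typed = "2,000000"  # a Finn typing "two seconds"
print(parse_seconds_naive(typed))    # 2000000.0 - a pause of roughly 23 days; the robot "freezes"
print(parse_seconds_finnish(typed))  # 2.0
```

Python’s `locale` module can do this properly (`locale.atof` under a `fi_FI` locale), but the sketch above shows the failure mode on its own.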