Casey Newton at The Verge was one of many journalists who went to Google's show-and-tell about the Google Car, where it trundled them around a parking lot on the roof of a repurposed mall that Google now has offices in.
Lots of the pieces about this event were sort of wide-eyed wonder combined with utter banality: "The car drives itself!" and "It was so normal and boring".
But the thing I want to know, and that none of the pieces I've read answers, is: what happens at the edges of what the car can do? And how much of what gets demonstrated is smoke and mirrors?
Back in 2002 or so, I put together a bit of speech recognition software that seemed to recognise anyone's speech in a very complex and open-ended domain. In truth, it was all a trick: the performance was achieved through carefully selected scenarios, some very basic social-engineering expectation management and some very simple coding. I actually built it to show that it wasn't possible to make speech recognition work in that domain — the point was that it only worked within the carefully managed scenarios I had arranged.
Like a Penrose Triangle, it only works if you stand in the right spot.
We already know that Google's fleet of modified Lexus self-driving SUVs needs extraordinarily detailed maps of the streets around Mountain View to work at all. And the cars don't work in the rain.
So how much of the “it just works” is a social engineering hack and how much is it actually working? And is there even a difference if Google can lobby enough governments to allow its robot cars on the road?
Compare and contrast Google's robot car amazingly avoiding "pedestrians" and "cyclists" on the roof of Google's offices with this video of robots falling over.