There are many successful human-machine interactions, interfaces and devices that don’t make a great deal of sense from the human-machine design perspective. Although it’s not necessary to go into the wonder and confusion that is a TV remote control, it’s clear that we are not getting away from the need to interact with devices that are powered by computers (and more of them each year). In recent years, the public perception of cool-and-advanced technology has been especially dominated by impressive Natural User Interface (NUI) advances: mind-blowing multitouch mobile interfaces (considerably older than their 2007 iPhone success story) and gestural input such as the Wiimote, Xbox Kinect and PlayStation Move (again using technology and knowledge that’s older than their first mass cultural imprint in Minority Report in 2002). But what will be next? What will take us to the next level of man-machine interface? Something that introduces impressive computing power into artefacts that already just work for us – a super-smart pen perhaps? Something that brings the power of the cloud into a friendly embodied tabletop device perhaps? Something that already comes almost everywhere with us in our pocket perhaps?
Of course, the answer here is going to be either “it could be any or all of these” or “social robotics, of course”. In the context of this blog, that’s the null hypothesis, but Giulio Sandini gave a concise perspective on why embodying computing and communications applications in a humanoid robot makes sense, at the IEEE-RAS Humanoids workshop in 2010, which is much better than the null. Worth a quick look! (If I collect a few of these references, I’ll never need to articulate the answer myself – the mantra of a networked knowledge worker!)