NUI & EMBODIED COGNITION

NUI is based on the science of embodied cognition, which says that thinking is grounded in the mechanics of our bodies. Here’s an example: We associate control with up and being controlled with down, right? Because we experience down as heavy and up as light… When we don’t understand something, we say it’s “over our heads,” because we naturally relate not being able to understand something to the physical experience of not being able to see it… it’s about embodiment… OK, how about this one? We associate anger with heat, because our body temperature literally rises when we’re pissed off. You get it, but what does that have to do with Natural User Interface design?

You’re familiar with ASIMO, yes? He’s the Honda robot who can dance, run, climb stairs… stuff like that. He’s quite impressive and pretty cute too, but somehow a bit sluggish, even geriatric at times. The thing about ASIMO is that it’s really easy to screw him up. Put down a wad of paper in his path, and he’s as befuddled as Grandpa from The Simpsons. That’s because ASIMO’s “thinking” style, or “computational approach,” is non-embodied, or “top-down.” ASIMO makes all his decisions by assimilating input (the wadded-up piece of paper on the floor), processing that input in a central processing center (his “brain”), and then sending output back to his body parts (move foot five inches to the left).

Are you thinking, wait, isn’t that how humans do it? Wasn’t that how they said it worked on Schoolhouse Rock? Actually, we humans are way better than that. It turns out that people use embodied cognition too. Think about catching a baseball in the outfield. You don’t just stand there, looking at the ball. You look at the ball AND, simultaneously, without “thinking” about it, run in a curve, which makes it easier for you to figure out where the ball is going to end up. This is what cognitive psychologists call a “perception-action loop,” and it’s a great way to understand embodied cognition.
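
To make the contrast concrete, here’s a minimal Python sketch of the two styles. The top-down function models the whole flight up front and computes a landing spot; the embodied one implements a simple version of the outfielder’s trick, nudging the runner so the ball keeps rising at a steady rate in the visual field. All of the names, numbers, and the specific heuristic shown here are illustrative assumptions, not anything taken from ASIMO’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    x: float    # horizontal position (m)
    vx: float   # horizontal velocity (m/s)
    vz: float   # vertical velocity (m/s)

def top_down_catch(ball: Ball, gravity: float = 9.8) -> float:
    """ASIMO-style: model the whole flight first, then walk to the answer."""
    time_of_flight = 2 * ball.vz / gravity       # time until the ball comes back down
    return ball.x + ball.vx * time_of_flight     # predicted landing spot

def embodied_catch_step(angle_now: float, angle_before: float, previous_rise: float):
    """Outfielder-style: one glance, one small correction, repeat."""
    rise = angle_now - angle_before              # how fast the ball is climbing in view
    if rise > previous_rise:
        action = "step backward"                 # image accelerating upward: too close
    elif rise < previous_rise:
        action = "step forward"                  # image slowing down: too far back
    else:
        action = "hold your line"                # steady rise: already on an intercept course
    return action, rise

# The embodied version never predicts a landing point; it just keeps correcting.
print(top_down_catch(Ball(x=0.0, vx=12.0, vz=15.0)))                              # ~36.7 m
print(embodied_catch_step(angle_now=0.31, angle_before=0.28, previous_rise=0.02))  # step backward
```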

At Kicker, we’re all about dynamic design. We recently worked with DARPA to design a Natural User Interface for a robot called Alpha Dog. Alpha Dog is to be a companion to soldiers. He’ll walk over rugged, unpredictable terrain, carry heavy loads, and react quickly to stimuli. His thinking process needs to be more dynamic and way less cumbersome than ASIMO’s; ASIMO would be utterly confounded by an unpredictable landscape and would end up on the ground most of the time. Designed and built like a dog, Alpha Dog has amazingly bouncy and flexible legs and joints and, interestingly, not a lot of brain. His legs are built specifically to interact with the surface they’re on, and with each other, without needing to wait for instructions to be recomputed by the big boss, his “brain.” His anatomy is constructed to respond to stimuli directly, meaning he’s not just thinking with his brain; he uses embodied cognition.
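
Here’s a toy sketch of that “not a lot of brain” architecture, assuming nothing about how Alpha Dog is actually built: each leg runs its own little reflex loop that reacts directly to the force it feels underfoot, while the central controller only sets a slow, high-level intention. The class names, thresholds, and force values below are invented purely for illustration.

```python
class ReflexiveLeg:
    """A leg that responds to the ground directly, with no round trip to a planner."""
    def __init__(self, name: str):
        self.name = name
        self.extension = 0.5                    # normalized leg extension, 0..1

    def local_reflex(self, ground_force: float) -> str:
        if ground_force > 0.8:                  # leg compressed against something hard
            self.extension = min(1.0, self.extension + 0.1)   # stiffen and push back
            return f"{self.name}: stiffen and push"
        if ground_force < 0.2:                  # ground gave way, or the leg is swinging
            self.extension = max(0.0, self.extension - 0.1)   # reach down for contact
            return f"{self.name}: extend to find ground"
        return f"{self.name}: keep phase"

class CentralController:
    """The 'big boss' only nudges the overall gait; it never micromanages feet."""
    def __init__(self, legs):
        self.legs = legs

    def step(self, ground_forces):
        intent = "keep walking forward"         # high-level intent stays slow and simple...
        reflexes = [leg.local_reflex(f)         # ...while each leg handles its own terrain
                    for leg, f in zip(self.legs, ground_forces)]
        return intent, reflexes

legs = [ReflexiveLeg(n) for n in ("front-left", "front-right", "rear-left", "rear-right")]
dog = CentralController(legs)
print(dog.step([0.9, 0.5, 0.1, 0.5]))           # uneven ground: each leg adapts locally
```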

Alpha Dog was designed to be nimble and strong, but best of all, we designed him to be trained like a dog. That was Kicker’s contribution to the project: designing the way Alpha Dog interacts with people. We observed how the soldiers interact with one another in the field and quickly realized Alpha Dog needed to be trained to understand their language. Alpha Dog will sense the soldiers, discern their commands, and respond accordingly every time, in the same language the soldiers use. If a soldier gestures, Alpha Dog gestures back. If a soldier speaks, Alpha Dog speaks back, and so on. Dog training for robots. If the soldiers were forced to interact with a screen in order to use Alpha Dog, the experience would be clunky and he wouldn’t be half as useful. Alpha Dog works seamlessly with the soldiers because he was designed to conform to the way they naturally function in the world.
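
A minimal sketch of that “answer in the same language” idea: whatever channel a command arrives on, the acknowledgment goes back out on the same channel. The command vocabulary and the Command/respond names here are hypothetical stand-ins, not the actual interface built for DARPA.

```python
from typing import NamedTuple

class Command(NamedTuple):
    modality: str    # "gesture" or "speech"
    meaning: str     # e.g. "follow", "halt", "come"

def respond(command: Command) -> str:
    """Acknowledge a command in the same modality it was given."""
    acknowledgements = {
        "follow": "falling in behind you",
        "halt":   "stopping",
        "come":   "approaching",
    }
    meaning = acknowledgements.get(command.meaning, "command not recognized")
    if command.modality == "gesture":
        return f"[gesture] {meaning}"    # physical acknowledgment, e.g. lowering the head
    if command.modality == "speech":
        return f"[speech] {meaning}"     # spoken confirmation back on the same channel
    return f"[{command.modality}] {meaning}"

print(respond(Command("gesture", "halt")))    # -> [gesture] stopping
print(respond(Command("speech", "follow")))   # -> [speech] falling in behind you
```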

We humans have an array of senses at our disposal. Vision is the sense that currently dominates the way we interact with technology. Vision is great, amazing even, but at Kicker, we believe that focusing so narrowly on vision is ultimately hampering our capacity to innovate effectively. We communicate with our whole bodies: not just with our eyes or our voices, but also with body language, touch, and so on. We use the sense that works best, depending on the scenario.

Think about a cup of coffee. If you want to know its temperature, you don’t examine it to see whether or not steam is rising from it. Most likely, the very first thing you do, without even thinking about it, is touch the coffee cup, right? What if we lived in a world where touching coffee cups just wasn’t an option, and in order to check the temperature of your coffee you were obliged to examine it for steam? Silly, right? What kind of ridiculously limited world is that? You can’t make me live there… well, the truth is, you do live there technologically, but not for long, because we’ve made it our mission to push past this limitation. We design by focusing on what people are naturally inclined to do in the real world, and we use emerging technologies like touch, haptics, gesture, voice, kinesthetics, ergonomics, anthropometrics, optical flow, bio and emotion sound signatures, and physical sensors to interact with that process. We look at how people exist, process information, and communicate, then make technology understand that and talk back to them.
