Screens Make Me Sick

Screens make me sick. Like, literally sick. I get very bad motion sickness. I can't use technology in any moving vehicle, not even on a train. Every time I get on BART, I get really jealous that everyone else is able to retreat into their own little bubble. The only people who don't seem to take advantage of this ability are the creepy stare-you-back-in-your-eyes types that I don't want to deal with… at all. I've learned to stand up and offer the inside seat when my lack of a device is mistaken for an invitation, especially when the would-be conversationalist is homeless and smelly. The fact that I have no technology bubble is kind of depressing and, sometimes, downright infuriating. I'm totally wasting my BART time staring at the bald spots of people who are actually being happily productive while I flounder around like a loser. Something must be done about this.

The reason I can't have a bubble on BART is that the experience relies on my eyes focusing on a screen, which is not possible for me without the potential of introducing puke into the situation. What it comes down to is this: some activities, like riding a bus, driving a car, crossing the street, or a whole host of James Bond-type eventualities, are just not compatible with looking at screens. At Kicker, we understand this, so we're working on alternatives that can help people work easily and more efficiently in lots of different scenarios, with the goal of making them (meaning you!) more like Iron Man than one of those lesser superheroes whose main superpower involves staring at screens… wait a minute, there IS no superhero like that, is there? Yeah… exactly.

We're working on a couple of different wearable devices, and more and more, we're understanding the importance of eyes-free interaction. The more mobile we are, the more important it is to be able to interact with technology through our other senses. That goes not just for me, with my merciless motion sickness, but for all of us who want to unlock our superhero potential. For example, if we provide people with technology that relies on verbal and tactile interfaces instead of a screen, we can keep them from walking into oncoming traffic or falling off cliffs while texting, or crashing their cars while reading email.

Studies show that people are much better at multi-tasking while listening to things than while looking at things. People who are heads-down, reading their screens, are immersed in a way that prevents multi-tasking, and since there's no way we can avoid multi-tasking in this crazy, beautiful world, we need to make it easier to do well.

So we're spending some time contemplating the significance and potential impact of eyes-free technology, and audio is a big part of that. Ultimately, though, we feel that a strictly audio interface is less than ideal. For example, it's difficult to edit text using an audio interface, and there are many situations where speaking out loud is just not practical. A voice interface is great in certain situations, but not on a bus, or any other time you want privacy. I don't need everyone around me to know that I want to listen to an Agatha Christie book on tape (not that I do; I was speaking hypothetically just now…).

What some devices do is confirm a user's voice commands with text that appears on the screen as the device hears it, so the user can quickly look to see what the device heard and what it's planning to do with the command. This is OK, but it sometimes lands us in a situation where we're cursing Siri to hell and back. I know, there's a certain amount of satisfaction in that… huh? I mean, poor thing! She's only trying her best.

But wait… we do have other senses besides vision and hearing, correct? Well, at Kicker, we've been exploring ways to innovate beyond current audio/visual offerings, and we've discovered that there's another, niftier approach we like even better than cursing Siri…

A tactile interface combined with good spatial mapping of tasks can provide an effective, non-visual way to navigate. As a woman, I often carry a purse, aka the black hole. To find something, I don't open it and look inside; instead, I stick my hand in, feel around, and magically pull out what I want through a combination of touch, sound, and spatially relevant pockets and pouches. The thought process is something like this: that's jangly, so it must be keys; that's smooth and long, so it must be a pen; the sparkle pouch vs. the vinyl pouch tells me the difference between lipstick and aspirin; my phone lives in the front pocket vs. the inside ID pocket.
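To make the purse idea concrete, here's a rough sketch, in Python, of what spatial mapping might look like on a touch surface: every task always lives in the same "pocket," so your fingers, not your eyes, find the right control. The regions and tasks below are hypothetical, invented just for illustration, not a spec for any actual Kicker device.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """One fixed 'pocket' on the touch surface, bound to a single task."""
    name: str
    x0: float  # bounds as fractions of the surface, 0.0 to 1.0
    y0: float
    x1: float
    y1: float
    task: str

# Like pockets in a purse: each task always lives in the same place,
# so muscle memory replaces eyesight. (Hypothetical layout.)
REGIONS = [
    Region("top-left",     0.0, 0.0, 0.5, 0.5, "play/pause"),
    Region("top-right",    0.5, 0.0, 1.0, 0.5, "next track"),
    Region("bottom-left",  0.0, 0.5, 0.5, 1.0, "previous track"),
    Region("bottom-right", 0.5, 0.5, 1.0, 1.0, "volume"),
]

def task_at(x: float, y: float) -> str | None:
    """Return the task mapped to the touched point, if any."""
    for r in REGIONS:
        if r.x0 <= x < r.x1 and r.y0 <= y < r.y1:
            return r.task
    return None

print(task_at(0.8, 0.2))  # -> "next track", found entirely by feel
```

The design point is that the mapping never moves: a fixed layout is what lets touch become habit, the same way the sparkle pouch always means lipstick.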

Now, imagine if you could feel the difference between Wolfmother and Britney Spears on the device in your pocket. You would never have to announce to your music player, and to an entire busload of innocent people, that you (actually) want to listen to Britney Spears, just because motion sickness keeps you from secretly communicating this (sick) desire to your device, via screen, without barfing. See? You'd get the music you want, without barf on the seats, and your dignity would remain intact. I feel so much better already.
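If you're wondering how a device could make Wolfmother feel different from Britney, one plausible approach (again, a hedged sketch, not our actual design) is to give each item a distinct vibration "signature": a short pattern of pulses you learn to recognize by feel. The artists and pulse patterns here are made up for illustration.

```python
# Hypothetical haptic signatures: pulse durations in milliseconds.
HAPTIC_SIGNATURES = {
    "Wolfmother": [200, 100, 200, 100, 200],  # heavy, driving pulses
    "Britney Spears": [60, 60, 60, 300],      # light taps, long tail
}

def play_signature(item: str, vibrate=print) -> None:
    """Render the item's pulse pattern on a vibration motor.
    `vibrate` stands in for a real haptic driver call."""
    for pulse_ms in HAPTIC_SIGNATURES.get(item, [100]):  # default: one neutral buzz
        vibrate(f"buzz {pulse_ms}ms")

play_signature("Britney Spears")  # your secret stays between you and your pocket
```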

Anyway, what all this means is that we're working on designing a device that can be controlled by touch AND audio, in combination, with the ability to switch back and forth depending on the situation. We're also working on another device that utilizes a tactile interface and spatial mapping, because sometimes you're gonna need that. Sound good? Of course it does.
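To give a flavor of the switch-back-and-forth idea, here's a minimal sketch of the kind of logic that could pick an input mode from context. The context signals and the priority order are assumptions for illustration; the real device's behavior is still being worked out.

```python
from enum import Enum

class Mode(Enum):
    VOICE = "voice"
    TOUCH = "touch"

def choose_mode(in_public: bool, hands_busy: bool) -> Mode:
    """Prefer touch when privacy matters; fall back to voice
    when the user's hands are occupied (driving, carrying bags)."""
    if hands_busy:
        return Mode.VOICE   # can't reach the touch surface anyway
    if in_public:
        return Mode.TOUCH   # keep the Britney request private
    return Mode.VOICE       # alone, speaking is fastest

print(choose_mode(in_public=True, hands_busy=False))  # Mode.TOUCH
```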

We'll keep you updated as we refine these concepts. We're currently in the prototyping stage and look forward to sharing the details with you soon.