Now that technology is increasingly accessed through touch, isn’t it odd that it all feels like glass? Technology is at our fingertips. How can we use the sense of touch to control it?
Several recent studies, the latest published in June 2011 in Science, have shown that we as humans take in information with our whole body. We see this as an opportunity to develop alternative channels for interfacing with technology beyond just the visual and the auditory. At Kicker Studio, we have been working with haptics (vibrotactile feedback) for a few years now, and have become very interested in touch as a method of communication. Touch is one of the first senses we develop, and it therefore carries a lot of significance in our understanding of the world. At Kicker, we decided to investigate and develop a baseline vernacular for tactile interfaces on digital devices.
WE STARTED BY TALKING TO THE BLIND.
We started by talking to the blind. We realized the blind spend a lot of time communicating with the world through touch, and would likely be articulate about which types of tactile systems aid in their comprehension of the world.
Our conversations led us to investigate Braille as a tactile system. We aren’t suggesting that we teach the world how to read Braille. Even among the visually impaired, this is a specialized skill set that requires time and study. But for tactile encoding of large amounts of information, Braille is extremely successful. We suspected there were many conventions in the system of raised print that we could use to develop a multi-purpose tactile grid.
We also discovered a series of MRI studies, first done by Sadato et al. in 1996, showing that the blind use the same visual cortex to process Braille that sighted people use when reading printed text. In other words, both the blind and the sighted use some sort of spatial cognition when reading. Therein lay our mission: design a tactile interface that would benefit both the blind and the sighted.
THE INTERFACE OF VISUAL READING
First things first, we took a look at visual reading. In visual reading, the eye moves around the page in a series of expected ways to digest information. The underlying grid enables reading activities including scanning, skimming, and searching.
Scanning is essential to reading, and written language is full of efficiencies that support it: most readers take in only the first couple of letters and the overall length of a word in order to comprehend it.
Skimming is a function of spatial control. The ordered rows allow the eyes to move briskly in search of a particular item, at three to four times normal reading speed. Skimming also provides temporal control of information. It is sometimes referred to as verbosity control because it lets the reader regulate how much information they take in: moving their eyes quickly, they register only general features like the shape and size of a word; moving slowly, they focus on every specific letter.
A reader can search text easily as a result of skimming and scanning, rapidly sorting through a large amount of information to find and focus on the important portion.
THE INTERFACE OF BRAILLE
Braille as a method of communication may have a limited audience, but it is helpful to examine as a successful example of a tactile interface for large amounts of information. Entire libraries are printed and read through this method of encoding by thousands of people every day.
We learned from our neuroscience friend, Dr. Alan Rorie, that several neurological factors come together to enable someone to read Braille. The key contribution is from the Merkel cells, which are stimulated by angles and points, enabling the reader to detect the raised Braille dots. Additional contributors are the Meissner corpuscles. These nerve endings are “rapidly adapting”: they quickly notice a vibration and, in essence, go numb to it. This is why the finger must move over the texture, rather than the texture being fed to a stationary finger.
A Grade 1 Braille character occupies a single cell of six dot positions, each of which is either raised or flat.
The consistency of the grid enables the encoding of language.
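To make that consistency concrete, here is a minimal sketch of how the 2×3 cell grid encodes characters. The dot numbering (1-2-3 down the left column, 4-5-6 down the right) is standard Braille; the function and variable names are our own illustration.

```python
# Each Braille cell is a 2x3 grid of dot positions, numbered
# 1-2-3 down the left column and 4-5-6 down the right.
# A character is simply the set of raised dots in one cell.
BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
    "l": {1, 2, 3},
}

def render_cell(dots):
    """Render one cell as three rows of raised (o) / flat (.) dots."""
    rows = []
    for left, right in ((1, 4), (2, 5), (3, 6)):
        rows.append(("o" if left in dots else ".") +
                    ("o" if right in dots else "."))
    return "\n".join(rows)

print(render_cell(BRAILLE["c"]))
# oo
# ..
# ..
```

Because every character sits in the same fixed grid, the reading finger always knows where to expect the next cell, which is exactly the consistency the encoding depends on.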
Just like with visual reading, reading Braille relies on scanning, skimming, and searching. The Braille “interface” is similar to printed text. In Western culture, text is printed left to right and top to bottom, in ordered rows.
The hands work much as the eyes do in visual reading in order to develop spatial context. One hand reads specific characters while the other gathers spatial information, such as word and sentence length.
This much is probably clear just by looking at a page of Braille. We learned something very interesting from Noel Runyon, an individual who has been working on interfaces for the blind since the early days of IBM. He is, I believe he phrased it, “coincidentally also blind.” He explained that the one thing sighted people always miss is the importance of the negative spaces to a Braille reader. They are just as important as the positive spaces, because those absences define the edges of content. The gullies where there is no printed text help the hands keep track of the direction and location of the Braille, while helping the reader establish where the cells start and stop. The resulting grid ultimately provides direct spatial manipulation of text. The reader can then skim, scan, and search just like a sighted reader does with printed text.
Modern accessibility tools for visual interfaces are problematic. They provide a very narrow window into digital content, making it nearly impossible to develop the spatial cognition that is so essential to reading. Basically, imagine reading War and Peace one letter at a time: painstaking and nearly impossible. And that book is nowhere near as vast as the amount of digital information being generated every single day, more and more of which is delivered through visual interfaces on touchscreen devices.
Mechanical Pin Readers
Mechanical pin readers translate screen content into Braille through a series of refreshable Braille cells, with mechanical pins acting as the raised and lowered dots. The challenge is that they can only translate a small portion of information at a time, usually limited to 16 to 32 Braille characters.
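A rough sketch of that limitation: a refreshable pin display is essentially a fixed-width window that the reader must page through line by line. The function name and the 32-cell width here are assumptions for illustration.

```python
DISPLAY_CELLS = 32  # a typical refreshable display shows 16-32 cells

def display_window(text, offset, cells=DISPLAY_CELLS):
    """Return the slice of text the pin display can present at once."""
    return text[offset:offset + cells]

line = "Braille display hardware shows only a narrow window of a page."
print(display_window(line, 0))    # the first 32 characters
print(display_window(line, 32))   # the reader must page forward for the rest
```

Compare this with a printed Braille page, where both hands can range freely over the whole grid at once.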
Audio tools read the screen aloud to the user, who navigates the visual interface with key commands. The user can only move around the screen in relative steps, which conveys the relationship between neighboring elements but not the overall spatial layout of the page. These tools also provide verbosity control: output can be delivered as individual letters, syllables, or whole words, depending on the amount of detail the user requires. Apple’s VoiceOver 3 replaced the key commands with multi-touch trackpad gestures, making it possible to map these controls spatially, if only we had a tactile interface.
V-Braille is an experimental assistive technology for touchscreens developed at the University of Washington. It divides the screen into six equal sections, one per dot of a Braille cell. As the user touches each section, they receive a haptic response indicating whether that dot is raised. Unfortunately, it limits focus to a single letter at a time. Recently Nokia added time coding to V-Braille, so that the tones are received by the user like Morse code.
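As we understand the design, V-Braille’s six regions map one-to-one onto the dots of a Braille cell. A minimal sketch of that mapping follows; the screen dimensions and function names are our assumptions, not the project’s actual code.

```python
# The screen splits into six regions laid out like a Braille cell:
# dots 1-2-3 down the left half, 4-5-6 down the right half.
WIDTH, HEIGHT = 480, 800  # assumed screen size in pixels

def region_for_touch(x, y):
    """Map a touch point to the Braille dot position (1-6) it falls in."""
    col = 0 if x < WIDTH / 2 else 1        # left or right column
    row = min(int(y / (HEIGHT / 3)), 2)    # top, middle, or bottom third
    return row + 1 + 3 * col               # 1-3 on the left, 4-6 on the right

def haptic_response(raised_dots, x, y):
    """Vibrate only when the touched region holds a raised dot."""
    return region_for_touch(x, y) in raised_dots

letter_c = {1, 4}  # dots raised for the letter "c"
print(haptic_response(letter_c, 100, 100))  # top-left region -> True
print(haptic_response(letter_c, 100, 500))  # middle-left region -> False
```

The sketch also makes the limitation obvious: the whole screen is consumed by one character, so there is no room for the rows and gullies that spatial reading depends on.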
KICKER TACTILE TOUCHSCREEN READER
Here is the Kicker tactile touchscreen reader. We believe we can create a better spatial understanding of information on touchscreen devices with a tactile grid built on high-fidelity, multi-channel haptics, which will soon be widely available in mobile devices. Here’s how it works:
A basic underlying grid helps the user feel where a column of information lives. We call these lines “sight lines”; they act as the negative spaces (or gullies) between lines of information.
Speed of drag provides dynamic control of verbosity settings: the faster the drag, the less detail is provided; the slower the drag, the more.
Gestural control enables easy switching between modalities. With a one-finger drag, the reader receives audio feedback; with a two-finger drag, the reader receives V-Braille time-encoded feedback.
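Taken together, the interaction rules above can be sketched as a couple of small functions. The speed thresholds and mode names here are assumptions for illustration, not final values from our prototypes.

```python
def verbosity(drag_speed_px_per_s):
    """Faster drags yield coarser detail, slower drags finer detail."""
    if drag_speed_px_per_s > 600:
        return "headings"       # skimming: only the shape of the page
    if drag_speed_px_per_s > 200:
        return "words"
    return "letters"            # slow, focused reading

def feedback_mode(finger_count):
    """Finger count selects the output channel for a drag."""
    return {1: "audio", 2: "vbraille"}.get(finger_count, "none")

print(verbosity(800), feedback_mode(1))  # headings audio
print(verbosity(50), feedback_mode(2))   # letters vbraille
```

This mirrors what skimming gives a sighted reader: the hand, not a settings menu, controls how much information flows in at any moment.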
The resulting tactile interface will restore for visually impaired users the cognitive sense of space essential to reading, something no current accessibility tool provides. And because it is software that can be added to any tablet, the entire catalog of digital content becomes available to people with visual impairments.
But it also enables keyboards like this one, which can easily translate into simple grids for navigating all kinds of screens and surfaces, eyes-free. Modality controls could instead operate menus; perhaps a double swipe across the keys provides letters, and another provides numbers. There are all kinds of possibilities.
We’re currently building prototypes for a series of user tests to find just the right haptic frequencies for our purposes. We look forward to sharing the results very soon.
In the meantime, we are continuing to develop an understanding of the etiquette and vernacular of touch as a method of communication. We are examining how such a language could be used, and what it might be used to transmit. There are endless possibilities. Stay tuned.