
    Device Design Day 2012

    When you're an independent consultant designing for devices and non-web interfaces, it can sometimes feel as if you're swimming upstream while those around you talk web and computer interactions. That's why I was delighted when, 3 years ago, Kicker Studio began hosting Device Design Day (D3), a 1-day interaction, industrial and product design conference.

    Hosted for the past 2 years at the wonderful Art Institute building on Chestnut Street, the day is filled with short, intense talks from people speaking to the year's theme.

    Natural User Interfaces

    Jennifer Bove and Judy Medich of Kicker Studio started the day off by talking about this year's theme - natural user interfaces. Natural in this case is equated with "human" interactions. While these types of interactions are often discussed in relation to multitouch environments, this year's conference would move into reality-based interactions as well.

    What are these interfaces attempting to do? They're working to detect and make sense of touch, voice, and movement. They are starting to appear in our clothes, doorknobs and glasses as well as on our mobile devices. Time for computers to learn the human interface and "speak human."

    Fast Prototyping X
    The talks for the day started off with a bang as Tom Chi, experience lead of the Google X team that brought us innovations such as the self-driving car, Google Glass and interior maps, took the stage. His talk was on Rapid Prototyping of the X-team variety - definitely not what most of us think of as fast prototyping. I was so inspired by this talk that I'll post my notes with some pictures to tell you how this particular topic has shifted my entire way of working in the past 3 weeks.

    Eye, Tanya
    Next, Tanya Vlach told us about her very personal and powerful Eye, Tanya project. After an accident left her without an eye, Tanya, inspired by I, Robot, began experiments to turn her eye into a robot. She created a Kickstarter project and, in the midst of her journey, was confronted by and began investigating our culture's deep fear of tech and the body. She showed examples from Seven of Nine, Ghost in the Shell and Aeon Flux, and asked whether she was now part commodity. Is she entertainment for us because of what she is doing? Her exploration continues as she works with leading scientists to create interfaces between her brain and eye.

    Implicit Interaction
    Then Chris Noessel and Stefan Klocek from Cooper gave us a tour through the conversations they're having about HOW to talk about the devices and interactions we're working on now. They point out that people are good at people. Children spend years learning how to be people and how to interact with other people.

    The talk then went on to show how current computers are far from being good at people. For instance, computers have no way to parse deeper context when we trigger and stop their automatic behaviors: your car door auto-unlocks regardless of whether you're walking to the car or simply taking out the trash. They don't understand paralinguistics - gesture, tone, irony. Flow-based interactions are intriguing as we seek to flow from one device to another; in this arena, things like tribal agreements, dialects and customs can hang up the interactions. It is, however, fun to contemplate where we want to head as computers do become better at people.

    Lunch was catered and gave everyone the chance to enjoy the awesome view of the bay from the top decks as well as to enjoy, interact with and chat with the creators of the robots in the Robot Petting Zoo.

    Design Meets Science
    In the afternoon we had 2 more talks before breaking into workshops and panels. I enjoyed Alan Rorie's attempt to convince us that we should get comfortable talking to scientists because, from their point of view, we're designing things for brains and the nervous system. This makes a scientist's research into the why behind haptics, tactile interfacing, eye movement, language processing, visual attention and gesture planning a great resource for inspiring our design work - if you talk to them.

    While I thought his view of the scientist's mindset was a bit idealized - process oriented, forward thinking, using intuition/hunch/luck to guide where to go but reality to come to conclusions - there was some merit to the similarities in process. The knowledge he felt scientists had to offer was about how the brain functions (neurobiology) and generates behavior. He encouraged designers to explore the sub-fields - behavioral (decision-making), perceptual (sensory systems) and cognitive (attention) - for use in their designs.

    In particular, he gave us great advice on searching for research related to the type of work you're doing and contacting the authors. Finding the right keywords is the hardest part. He suggests pairing a quoted search term with a qualifier like "review paper" on Google Scholar or PubMed - for example, searching "tactile interfacing" review paper. As you find relevant papers you can mine them for references and contact information. If you can't get access, try typing the title along with filetype:pdf into the search field. If that still doesn't work, contact the author directly to ask for a copy of the research. Be sure to let them know how interested you are in their work.

    Designing Robots That Get Things Done
    Next up, Matt Powers gave a fun talk that showed where robotics is right now. He pointed out that entertainment has given us really unrealistic expectations of how robots can work: "It's maddeningly difficult to get a robot to recognize a human much less prevent themselves from hurting one." Hence the shift in focus in the robotics world from embodied cognition that can deal with messy reality to intelligent machines that use statistical inference to make concrete predictions. The emphasis now is on creating robots to do jobs that people don't or can't do. Since a robot cannot operate without interaction from people - it's super capable but can't read your mind - we still have to tell it what to do and when. His favorite depiction of robots in the movies is Wall-E: "They got a lot right with robots that specialize in a single task."

    Workshop: Prototyping Natural Interactions Using Arduino and Immersion's Neutrino
    When the groups broke into panels and workshops, I thought it would be most fun to do some hands-on work. When we arrived we found a stuffed animal, an Arduino board and a haptic (vibrational) output. After a short intro, those pieces were joined by an Immersion Neutrino board, giving us a platform with preprogrammed haptic feedback options and a pick of sensors. My team chose the tilt sensor and created a quick story about a narcoleptic cow that would fall asleep, fall down and begin to snore. We programmed the snoring, then did surgery on our cow to add the sensor and, later, the haptic feedback. We were so involved we missed cocktail hour, but it was fun to get a chance to collaborate and play.
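    To give a flavor of the logic we wired up, here's a minimal sketch of the cow's tilt-to-snore state machine, rewritten as plain C++ so it can run off-hardware. The class name, debounce scheme and thresholds are my own reconstruction - not the workshop's actual code, which ran on the Arduino and triggered one of the Neutrino board's preprogrammed haptic effects.

    ```cpp
    #include <iostream>
    #include <vector>

    enum class CowState { Awake, Asleep };

    // The cow "falls asleep" once the tilt switch reads tilted for a sustained
    // run of samples (a simple debounce), and wakes as soon as it's upright.
    class NarcolepticCow {
    public:
        explicit NarcolepticCow(int debounceSamples) : debounce_(debounceSamples) {}

        // Feed one tilt-sensor reading (true = tipped over); returns the state.
        CowState update(bool tilted) {
            if (tilted) {
                if (++tiltedCount_ >= debounce_) state_ = CowState::Asleep;
            } else {
                tiltedCount_ = 0;
                state_ = CowState::Awake;
            }
            return state_;
        }

        bool snoring() const { return state_ == CowState::Asleep; }

    private:
        int debounce_;
        int tiltedCount_ = 0;
        CowState state_ = CowState::Awake;
    };

    int main() {
        NarcolepticCow cow(3);  // needs 3 consecutive tilted readings to sleep
        std::vector<bool> readings = {false, true, true, true, true, false};
        for (bool r : readings) {
            cow.update(r);
            // On the real cow, "snore" would fire the haptic snore effect.
            std::cout << (cow.snoring() ? "snore" : "quiet") << "\n";
        }
        // prints: quiet quiet quiet snore snore quiet
    }
    ```

    On the Arduino itself, the loop body would read a digital pin instead of a vector and call into the Neutrino's haptic playback when the state flips to Asleep.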


    I particularly appreciated talking with Immersion's Product Manager David Birnbaum, who showed me some great UI examples on Android. Now I know what kind of development I need to start on the Android side. Developing standard UI holds no appeal, but for haptic interfaces I'm happy to climb over the development hurdles.

    The rest of the evening was a party for Kicker Studio with great drinks, amazing food from the food trucks outside and a fun group of folks to hang with. Another excellent year for D3. I look forward to next year!
