MIA - MEMORIES RELIVED

A mood-sensing wearable camera concept that lets users re-experience their memories

ROLE
interaction design

TEAM
JAMES PAI, DD DING, ELSA HO, ANN LIN, ADAM RIDDLE

TIMEFRAME  
4 WEEKS

TOOLS
Photoshop, Premiere Pro, After Effects, Illustrator, DSLR

Mia (Mood-Intelligent Assistant) is a wearable point-of-view camera concept that uses mood sensors to capture memories and automatically create immersive experiences based on emotion.  A gestural interface allows the wearer to navigate memories and tune the experience to their mood.  The concept was designed as a final project in an interaction design course by a team of five MHCI+D graduate students.  We produced a video prototype to present the concept to the class.

On this project, I was responsible for redesigning the concept and experience of the point-of-view camera.  I accomplished this through research, ideation methods (e.g. sketching, functional decomposition), experience storyboards, digital interface mockups, a logo design, and a video prototype.  I also taught my teammates how to use Premiere and After Effects.


RESEARCH

Our team started by conducting market research on current point-of-view cameras and identifying common issues in their designs.  We examined a number of existing products (e.g. GoPro, Google Glass) and framed our research questions in terms of future outlook and impact on society.  We also discussed the kinds of relationships we have with technology, drawing on our own experiences as well as specific examples from cinema.  Finally, we looked at the different ways in which we currently browse recorded photos and videos.

Key insights

  • most point-of-view cameras are unwieldy, especially when mounted on our bodies
  • people generally feel uncomfortable when they think they are being recorded 
  • controlling a point-of-view camera can be unintuitive and inconsistent across similar products
  • managing and retrieving recorded experiences can be a monumental task

 


IDEATION

Functional decomposition

Ideation began with a one-hour functional decomposition session, a process of breaking a concrete idea down into abstractions in order to rebuild it as something new.  As a team, we deconstructed the concept of a camera step by step through the abstraction hierarchy: physical form, physical function, generalized function, abstracted function, and functional purpose.  The result of our decomposition was the camera's functional purpose: to capture sensory information from the environment.  Using this as a baseline, we rebuilt the concept into something completely new: a wearable camera that could recreate the mood of an experience based on the mood of the wearer.

The next step was to develop the camera design and the method of retrieval.  We wanted to create something that could improve the well-being of the wearer by altering mood through past memories.  To do this, we incorporated a mood-sensing mechanism based on EEG technology that would automatically retrieve and play back recorded experiences.  For example, the device would sense when the wearer is feeling stressed and project a calming experience to help them relax.  I drafted an early storyboard of an ideal scenario to communicate our design.

 

STORYBOARD

The scenario starts with a man frustrated by his work.  The device senses his frustration and links with another device somewhere else in the world that is sensing the opposite: a feeling of calmness.  The second device then transmits sensory data from its environment back to the initial device.  Using light projectors built into the device, an immersive, relaxing sunset ambience is recreated in the man's room.  Both parties now share the same feeling of relaxation, and the man is given a break from his former frustration.

I drafted the storyboard in Adobe Photoshop and found it an interesting challenge to tell a story without using any words.  It was also an experiment in breaking the grid layout of traditional storyboard panels.  Initially, I wanted to avoid using color, but due to the nature of the ambient projection, I found it difficult to communicate the feature without some coloring.
 

Rough storyboard sketch

Final storyboard


PRODUCTION

After our design was developed, we set out to produce a video prototype of the experience.  Drawing inspiration from the emotionally driven advertisement style popular in Asia, I came up with a handful of narratives for our video.  As a team, we decided on a husband-and-wife scenario, which I then storyboarded shot-for-shot in a whiteboard session.  Since our video prototype needed to show how people interact with the interface, we also developed an interface concept together.  I designed a logo and tagline, while my teammates wireframed the visual interface and produced a 3D-printed physical prop.

I wanted the logo to convey a sense of personality for Mia and settled on a script typeface that gives it a more human, emotional quality.  The colored circle is a visualization of the mood spectrum from our interface.

We spent an entire evening shooting the video; I was responsible for setting up the shots and filming them with a Canon 6D.  After we finished shooting, I compiled a rough cut of the video in Premiere Pro and handed it to a teammate for further edits.  I then polished it up with final edits, image correction, and audio.

The last part of production involved animating the interface in After Effects and compositing it onto the footage.  Since I had the most experience with After Effects, I taught my teammates how to use it and we split up the scenes.  When the animation and compositing were complete, I brought everything together to render the final cut.

 


RESULTS

Overall, I thought our video prototype was quite successful at communicating the interaction we designed for our point-of-view camera concept while also telling a touching story without words.  Our presentation consisted of our video prototype and slides, and it received very positive feedback from faculty and peers.  The Mia concept addressed most of the issues we found with current point-of-view cameras and media browsers.  Stored content is automatically curated based on mood, lifting the burden of searching and browsing off the viewer.  The form factor is streamlined to record experiences from the wearer's point of view while also doubling as a projector, so recorded content can be re-experienced in a more immersive way than on a traditional screen.  Finally, the gesture-based interface allows the wearer to take control of curation and fine-tune the experience based on preference.
 

NEXT STEPS

The physical prototype of the device could be more refined in appearance to match the quality of the video prototype.  It would also be valuable to conduct user research on our concept for feedback and evaluation.  For example, a Wizard-of-Oz behavioral prototype could test the effectiveness of our gestural controls.