Master's Project at GT MS-HCI
Fall 2016 – Spring 2017
In Georgia Tech's MS-HCI program, graduate students conduct a year-long project under faculty oversight during their second year, executing the iterative HCI process of design, implementation, and research.
My project was conducted under Dr. Gregory Abowd in the COSMOS group. COSMOS (COmputational Skins for Multifunctional Objects and Systems) is an interdisciplinary group of professors attempting to manufacture a new type of computational material that is flexible, thin, self-powering, and cheap, in order to build a ubiquitous, intelligent environment.
COSMOS Faculty Interview
The first undertaking of my project was to understand the COSMOS professors' visions and intentions for the COSMOS group. The professors are Dr. Gregory Abowd, Dr. Tom Conte, Dr. Michael Filler, Dr. Hadi Ezmaeilzadeh, Dr. Manos Tentzeris, and Dr. Eric Vogel. [image]
The interview questions were co-written and interview sessions conducted with Felix Tener, a fellow MS-HCI student, as part of his master's project.
From the interviews, the professors' perspectives shared many similarities in their visions of COSMOS, with several diverging points. The professors believe that COSMOS will differentiate itself by bringing computation closer to users, either as a new layer of computational skin or as a construction material. COSMOS will need to be cheap, computationally efficient, and energy-efficient in order to cover as much surface as possible. There were also a few points of divergence in the professors' visions. A prominent one was the need for visual displays: while displays provide important information, they are also computationally costly and power-hungry.
The interviews with COSMOS professors inspired the two purposes of my master's project.
- In the fall of 2016, students in the COSMOS group were to research and think up a potentially achievable and compelling use case for COSMOS. I would support the students by designing and testing a Wizard-of-Oz prototype according to their use case.
- I wanted to confront the visual display problem by designing and testing a separate use case and Wizard-of-Oz prototype.
The COSMOS students, led by PhD student Ding Tian Zhang, researched and developed an "attachable smartphone touch drawing board," drawing inspiration from printable/flexible electronics and applying it to extend the interactive surface area of the common smartphone. I designed the first prototype focusing on the attachable drawing board: the participant would interact with the board, while I reproduced their actions on a sheet of paper.
For the display problem, I developed a scenario in which COSMOS is made into a "door-mounted visitor greeter" that works in conjunction with augmented reality (AR). AR was my approach to bypassing the display problem, shifting the burden of visual display from COSMOS to a separate gadget. The participant needed to wear a pair of "AR glasses" in order to see the "AR artifacts" of the visitor greeter.
Interactions of Visitor Greeter Prototype
User Testing Procedure
When a participant was recruited for a user testing session, both prototypes were presented for testing: first the drawing board, then the visitor greeter. A short survey, adapted from the System Usability Scale, was administered after each prototype. Notes were taken during the testing session by the experimenter (me!).
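For reference, the standard System Usability Scale is scored on ten alternating positive/negative items rated 1–5. A minimal sketch of that scoring (the `sus_score` helper is hypothetical; my adapted survey's exact items differed from the standard SUS):

```python
def sus_score(responses):
    """Compute a standard SUS score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a moderately positive response pattern
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A score around 68 is commonly treated as the average benchmark, which is one reason SUS is a convenient baseline even for early Wizard-of-Oz prototypes.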
1st User Testing Results
The first round of user testing was done with a group of 5 MS-HCI students, 3 male and 2 female, aged 24 to 27. While they provided a good amount of information, I summarize the most important points here.
For the drawing board prototype, the main feedback was the need for a phone interface to provide proper feedback for the user's actions on the drawing board.
For the visitor greeter, the main feedback was that requiring a novel but uncommon device (the AR glasses) would exclude potential users. The visual display should instead utilize a more common product, such as a smartphone.
Taking this user feedback into consideration, significant changes were made to both prototypes.
For the drawing board prototype, the major change was adding a clean and understandable smartphone interface. Drawing inspiration from existing drawing apps, the interface features icons for brushes, an eraser, a color picker, and other tools typical of such apps, to provide an accurate representation of the user's actions on the drawing board. The drawing board itself was also modified to mirror the smartphone interface.
For the visitor greeter prototype, the AR element was retained but was now mediated by a smartphone: the user would view the prototype through the smartphone's camera. There were two ways of interacting with the prototype: typing on a virtual keyboard provided by the prototype, or writing on a designated space on the prototype itself.
Redesigned interactions of visitor greeter
2nd User Testing Feedback
The second group of participants consisted of 9 Georgia Tech students: a mix of MS-HCI students, Computer Science undergraduates, and Computer Science graduate students.
With a serviceable smartphone interface in place, the participants focused on the specific interactions with the drawing board prototype. Changes included grouping the color picker with the other drawing-related tools and adding clearer indication of function to the brush/opacity sliders.
With the visitor greeter prototype, the AR interactions remained its most problematic element. While a few participants enjoyed its novelty, most felt augmented reality made the interactions awkward, especially since the participant needed to hold the smartphone to receive any visual feedback.
Final Prototype Designs
Based on the participant feedback, one final redesign of each prototype was made. The drawing board interface was touched up for clarity: icons were added to the brush/opacity sliders to indicate function, the color picker was moved to the left alongside the other drawing-tool icons, and a "stack shadow" was added under the brush icon to show that it has more options beneath it.
The visitor greeter would have no augmented reality interaction. When interacting through a smartphone, all instructions from the prototype would be sent to the phone, like text messages. When interacting directly with the prototype, it would feature a low-power visual display (e.g., an electronic ink display).
This master's project was incredibly informative throughout all its stages, from the professor interviews and background research to prototype testing and redesign. I was able to experience the HCI/UX process and learn to engage with users, even with a prototype for a yet-to-exist technology.
However, there are several parts of the project I wish I had done differently, to make it more effective:
- For starters, perhaps I should have picked one scenario and focused on designing and testing a single prototype, which may have produced richer data.
- My survey questions could also have been designed better. Instead of basing my survey on a general-purpose instrument like the System Usability Scale, a survey with questions about specific features of the prototypes would have given me a more precise quantitative measure for identifying user pain points.
- One of my most glaring mistakes was the double-barreled survey for the visitor greeter prototype. It had two interaction methods but only one survey, forcing the participant to somehow combine their responses. Due to this mistake, I was unable to draw definitive conclusions about the usability of either interaction method.