The final family of products consists of a connected light bulb called Lila, a controller, a game kit and a phone application. The LED light bulb forms the core of the family and is intended to be placed above the dinner table. The intensity and colour temperature of the light react to the clarity of speech: both change when the speech of conversation members is unclear and poorly articulated. The light bulb contains a microphone to collect audio input for determining the clarity of the speech, and a speaker for sound effects.
The ‘controller’ enables the user to regulate the light bulb settings quickly. Only the settings that might be of importance during conversations are controllable this way. By turning the controller upside down, the user can quickly turn the microphone off, for example when a difficult conversation is being held at the table. By fitting the controller onto the light bulb, the microphone can be turned off for longer periods of time, the device is disconnected from the internet, and the microphones are aesthetically tucked away. This option could be used when the user wants to read a book, for example, as the part added to the light bulb has a second light setting that can be specifically tailored to other activities. The controller is also a tool for the game kit to scan cards and thus initiate a certain game. These games are meant as a fun way to learn about the device and to train the clarity of speech of the people at the table.
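The flip-to-mute gesture can be detected with a simple orientation check. The sketch below is an illustrative Python version of that logic, not the actual Wemos firmware; the thresholds and the hysteresis band are assumptions. The idea is that gravity on the accelerometer's z-axis reveals whether the controller is upside down.

```python
# Illustrative sketch (not the actual firmware): detecting the
# "flip to mute" gesture from an accelerometer's z-axis reading.
# Gravity pulls the z-axis toward -1 g when the controller is upside down.
# A hysteresis band avoids rapid toggling near the horizontal position.

FLIP_THRESHOLD = -0.7    # g; below this the controller counts as flipped
UPRIGHT_THRESHOLD = 0.7  # g; above this it counts as upright again

def update_mute(z_accel, currently_muted):
    """Return the new mute state given the latest z-axis reading (in g)."""
    if z_accel < FLIP_THRESHOLD:
        return True          # controller is upside down: mute the microphone
    if z_accel > UPRIGHT_THRESHOLD:
        return False         # controller is upright: unmute
    return currently_muted   # in between: keep the previous state (hysteresis)

# Example: flipping the controller over mutes; a half-tilt changes nothing.
state = False
for z in (0.98, 0.2, -0.9, -0.1, 0.95):
    state = update_mute(z, state)
```

The hysteresis is the design-relevant part: without it, a controller lying near-horizontal on the table could flicker between muted and unmuted.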
The product was developed in multiple iterations, with the concept continuously altered based on (user) feedback. The eventual prototype was made by 3D-printing a bottom part, a top part and the two parts of the controller. To eliminate the printing-line artifacts, all sides were carefully sanded and received six layers of spray paint for a more high-fidelity finish. In parallel, the different Wemos controllers were programmed and later connected to the required sensors. All components were then assembled and attached to a Processing sketch running over the internet, contributing to the collective data flow of other student projects running in the same IoT setting.
The literature on hearing loss, related products and current solutions gave us a general foundation for the challenges we tackle as designers. To get a better understanding of the social consequences, however, we chose to do user research. Four interviews were conducted with six people in total who have hearing problems. The interviews took place at the participants’ residences and focused on their own experiences. To guide the process, a set of conversational cards was made. These cards displayed several activities that could be done in and around the house. A second set of cards showed different experiences, both positive and negative. Participants were asked to combine these cards as a tool to structure the conversation. From this, we found that the largest problem was conversations in large groups, as people were often unable to hear what was being said.
The first iteration was a solution to this problem, designed as a lamp above the dinner table. It would move up and down, revealing an array of microphones that could improve the user’s hearing aid by adding extra inputs pointed directly at the people at the table. To complement the concept of a physical light, an app interface was also designed. The biggest concern with this iteration was its social effect: people pointed out that owning a lamp that highlights the fact that you cannot hear very well is going to be a problem.
Iteration two turned the social roles at the dining table around. Instead of the person with hearing loss being the one needing a hearing aid, the dinner table lamp would be the one that needed clear audio to shine. And instead of being an entire lamp, the concept would be put into a device the size of a light bulb. To test whether this concept would be perceived as desirable, we deployed a prototype with an elderly couple with hearing loss. The prototype used a microphone to measure loudness in the room and adjusted the brightness of a connected Philips Hue light bulb accordingly. The user still had manual control over the light through an additional box with buttons for turning the light on and off and for disabling the microphone. The concept was seen as ‘okay’ in real life, but small things should be changed to make its use desirable, such as the transition the lamp makes between understandable, ‘clear’ speech and non-understandable speech.
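To make the loudness-to-brightness behaviour of this iteration concrete, here is an illustrative Python sketch of the mapping; the thresholds, the smoothing factor and the exact mapping direction (noisier room means dimmer light) are assumptions, not the deployed code, which ran against a Philips Hue bulb.

```python
# Illustrative sketch of the iteration-two behaviour: map a loudness
# estimate to a brightness level, with exponential smoothing so the light
# transitions gently rather than flickering with every word.
# All thresholds here are assumptions.

QUIET = 0.05   # loudness below which the light stays at full brightness
LOUD = 0.40    # loudness at which the light has fully dimmed
ALPHA = 0.2    # smoothing factor: smaller = slower, calmer transitions

def loudness_to_brightness(loudness):
    """Map a loudness estimate (0..1) to a brightness level (0..254, the Hue range)."""
    t = (loudness - QUIET) / (LOUD - QUIET)
    t = min(max(t, 0.0), 1.0)          # clamp to [0, 1]
    return round((1.0 - t) * 254)      # louder room -> dimmer light

def smooth(previous, target, alpha=ALPHA):
    """Exponential moving average toward the target brightness."""
    return previous + alpha * (target - previous)
```

The smoothing step addresses exactly the deployment feedback above: the transition between states, not the mapping itself, is what the users noticed.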
The final iteration made all the conceptual parts reality and addressed the final changes to the prototype that resulted from the feedback from the deployment. The created prototype was designed to include all required components. A Wemos was programmed to drive the behavior of an LED ring and was connected to the internet to change settings. A control unit was also developed with its own connected Wemos. Inside, an orientation sensor measured whether the module was placed upside down to mute the microphone (over the internet). In addition, a magnetometer would trigger when the separate module was attached to the lamp via magnets; this in turn triggered a change to the LEDs over the internet. A program on a laptop did the microphone processing and ran a rhythm-finding process. If a rhythm was found in the speech, the program would classify the speech as clear. This result was then sent over the internet to the Wemos in the light bulb, changing its settings. Finally, an app interface was designed, together with a matching game to gamify the process of learning to improve one’s clarity of speech. During demo day everything was presented and attached to the network of other projects, allowing a miniature version of the light to mirror the behavior of the actual prototype.
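As an illustration of the rhythm-finding idea, the Python sketch below classifies speech as ‘clear’ when energy peaks in the audio envelope arrive at regular intervals, on the assumption that well-articulated speech has a fairly steady syllable rhythm. It is a simplified stand-in for the laptop program; the peak detection and the jitter threshold are assumptions.

```python
# Illustrative rhythm-finding sketch (the exact algorithm and thresholds
# used in the prototype are not reproduced here). We look for energy
# peaks at regular intervals in a pre-computed loudness envelope.

def find_peaks(envelope, threshold):
    """Indices where the energy envelope rises above the threshold."""
    peaks = []
    for i in range(1, len(envelope)):
        if envelope[i] >= threshold and envelope[i - 1] < threshold:
            peaks.append(i)
    return peaks

def is_clear_speech(envelope, threshold=0.5, max_jitter=0.25):
    """Classify speech as 'clear' when peak spacing is regular.

    max_jitter is the allowed relative deviation of the intervals
    between peaks from their mean spacing.
    """
    peaks = find_peaks(envelope, threshold)
    if len(peaks) < 3:
        return False  # too few syllable-like peaks to judge rhythm
    intervals = [b - a for a, b in zip(peaks, peaks[1:])]
    mean = sum(intervals) / len(intervals)
    return all(abs(iv - mean) <= max_jitter * mean for iv in intervals)
```

A regularly pulsing envelope passes the check, while one with erratically spaced bursts does not, which matches the behaviour described above: rhythmic speech keeps the lamp shining.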
During this semester, the assignment was to design a family of products instead of one. This required adapting my mindset, which I found very hard at first: I had the tendency to implement multiple functions into one device. It became clear to me, however, that dividing these functions over different artifacts can improve the functionality and effectiveness of the individual devices. Although I would normally have thought that separating interactions from their reactions is a bad thing, I would now argue that having a separate part or artifact can enhance the immersion between user and product.
During the design process we went through multiple iterations of the concept. The earlier iterations and ideas were far less well developed: the way these concepts were created was characterized too much by an impersonal drive toward direct problem solving. Moreover, some attempts to create product families resulted in naïve and gimmicky solutions. By zooming out and looking at the effects that our designs would have on the user, we were able to move towards better-fitting solutions. I have learnt not to seek direct solutions, almost as a technology push, but to create concepts based on a philosophy that is built on the norms and values of the users.
When it comes to protecting users’ privacy, for example, using interactions to add friction, and guiding reflection for communicative learning through rich interaction, provide many effective and creative solutions. I have never thought about the consequences of sharing data as much as during this semester. After the workshops I formed an opinion on the topic of the designer’s responsibility to the user. As mentioned in those workshops, we designers have the responsibility to keep the users of our products from being harmed by them, but with the rise of the digital age comes also the responsibility of protecting users digitally. I feel it has become essential that designers design their products with the highest possible data security, but also with artifacts that make users aware of what they are actually doing and sharing, and which risks they are taking.