During the course, I focused on developing the voice-recognition and interface side of the prototype. Using Google Cloud services and a Raspberry Pi with Google's AIY Voice Kit, I was able to create a program that tested spoken input against the words we had defined beforehand in the graphical user interface. On the practical side, I learnt to prototype with the Raspberry Pi and to program in Python, both of which were completely new to me but are very useful and widely used in the world of prototyping. Furthermore, together with my teammates, I created a 'chain' of different programs and devices working together to make the final program function as it does. This taught me about developing internal dataflows and streamlining the communication between the different parts of the prototype.
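The keyword check described above can be sketched minimally as follows. This is not the actual project code: the keyword set and function name are hypothetical, and the real prototype would receive its transcript from the AIY Voice Kit's speech recognizer rather than a hard-coded string.

```python
# Hypothetical sketch: match a recognized transcript against
# command words defined beforehand (e.g. in the GUI).
KEYWORDS = {"start", "stop", "next"}  # placeholder command words

def match_command(transcript: str, keywords=KEYWORDS):
    """Return the first predefined keyword found in the transcript, or None."""
    for word in transcript.lower().split():
        if word in keywords:
            return word
    return None

print(match_command("Please stop the robot"))  # prints: stop
print(match_command("hello there"))            # prints: None
```

In the actual chain, a function like this would sit between the speech-recognition stage and the rest of the prototype, forwarding only recognized commands downstream.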
On a higher level, I learnt about many different aspects of AI and about the value of adding an 'explanation' layer to the process so that users can understand what is going on. Since ML and AI are often seen as black boxes, I came to realize that it is an important job of the designer to shape the user's mental model through their interactions with the AI, making those interactions as fluent and easy to understand as possible while keeping room for complexity.