The primary school sector experiences high workloads and educational issues that can have negative effects on teacher well-being and the development of children. Our suggested solution uses different algorithms to automate the conduct of a reading test, specifically the Drie-Minuten-Test (Dutch for Three-Minute-Test). One algorithm enables facial recognition, so that a child can easily be recognized and logged in to start the reading test. The second algorithm is a pre-trained speech recognition AI, which enables real-time speech-to-text processing. This is used for the actual assessment of the test, during which pupils read words from a screen. The concept of explainable AI is integrated into the system through a test overview, which shows the personalized results of the test to the teacher. This data can help teachers identify specific issues per pupil and create a learning process fitted to each child.
The final design for this course is a fully functional system and device that can automatically conduct the reading test. Future research should show whether the system can also reliably recognize words as pronounced by different pupils and their peers. Validation with a primary school teacher concluded that the system looks promising, but that it would ideally facilitate a higher tempo, similar to the original Three-Minute-Test.
During the course, I focused on developing the voice recognition and interface side of the prototype. Using Google Cloud computing and a Raspberry Pi with Google's AIY Voice Kit, I was able to create a program that tests for the words we defined beforehand in the graphical user interface. On the practical side, I learnt to prototype with the Raspberry Pi and to program in Python, both of which were completely new to me but are very useful and widely used in the world of prototyping. Furthermore, together with my teammates, I was able to create a 'chain' of different programs and devices working together to make the final program function as it does. This taught me about developing internal dataflows and streamlining the communication between the different parts of the prototype.
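The core of that word-testing program can be sketched as follows. This is a minimal, illustrative Python sketch, not the actual course code: the function names are hypothetical, and the `recognize` callable stands in for whatever speech-to-text request the prototype makes (e.g. to Google Cloud Speech via the AIY Voice Kit).

```python
# Hypothetical sketch: check a pupil's spoken response against the
# target word shown on screen, one word at a time.

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so that 'Boom!' matches 'boom'."""
    kept = (ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return "".join(kept).strip()

def score_attempt(target_word: str, transcript: str) -> bool:
    """Mark the attempt correct if the target word appears in the transcript."""
    return normalize(target_word) in normalize(transcript).split()

def run_test(words, recognize):
    """Run through the predefined word list.

    `recognize` is a placeholder for the blocking speech-to-text call;
    it should return the recognized transcript as a string.
    """
    results = {}
    for word in words:
        transcript = recognize()
        results[word] = score_attempt(word, transcript)
    return results
```

The per-word results dictionary is also what a test overview for the teacher could be built from, since it records exactly which words were and were not recognized.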
On a higher level, I learnt about many different aspects of AI and the value of adding an 'explaining' layer to the process, so that users can understand what is going on. Since ML and AI are often seen as black boxes, I came to realize that it is an important job of the designer to shape the user's mental model through their interactions with the AI, making those interactions as fluent and easy to understand as possible, while keeping room for complexity.