Group of five
Neuma is a non-verbal tool that enables the exchange of meaning by translating human gestures into sound. Our users are emotionally disconnected people who struggle to express themselves and want to find a new way of connecting with one another. The tool enables emotional dialogue through gesture and sound, using five sonic archetypes: Flux, Repetition, Friction, Interruption, and Rotation. These act as conversational roles such as continuity, mirroring, tension, redirection, and reflection.
In Private mode, Neuma is worn on the body. It quietly logs social and contextual cues from daily routines. Feedback is limited to subtle vibration patterns, keeping the experience embodied and discreet. In Public mode, placing the object on a surface pairs it with another device. Each reveals its wearer’s dominant archetype through sound and light, and partners “speak” by squeezing, tapping, rotating, or pulling. Sensors map these gestures in real time, producing archetype-specific sonic textures and color gradients to form a shared, improvised conversation.
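As a rough illustration of the Public-mode interaction, the sketch below shows how an Arduino-style loop could translate squeeze, tap, and rotation readings into archetype-specific tones. The pin assignments, sensor choices, thresholds, and frequencies are illustrative assumptions, not values from the actual prototypes.

```cpp
// Hypothetical Arduino sketch: map squeeze / tap / rotate gestures to
// archetype-specific tones in Public mode. Pins, thresholds, and
// frequencies are assumed for illustration only.

const int PIN_SQUEEZE = A0;   // force-sensitive resistor (assumed)
const int PIN_ROTATE  = A1;   // rotary potentiometer (assumed)
const int PIN_TAP     = 2;    // momentary button standing in for a tap sensor (assumed)
const int PIN_SPEAKER = 9;    // small speaker or transducer (assumed)

// Base frequencies standing in for the five sonic archetypes.
const int FREQ_FLUX         = 220;  // continuity: a held, drifting tone
const int FREQ_REPETITION   = 330;  // mirroring: short repeated pulses
const int FREQ_FRICTION     = 110;  // tension: low, rough tone
const int FREQ_INTERRUPTION = 440;  // redirection: abrupt bursts
const int FREQ_ROTATION     = 262;  // reflection: slow sweep

void setup() {
  pinMode(PIN_TAP, INPUT_PULLUP);
  pinMode(PIN_SPEAKER, OUTPUT);
}

void loop() {
  int squeeze  = analogRead(PIN_SQUEEZE);  // 0-1023
  int rotation = analogRead(PIN_ROTATE);   // 0-1023
  bool tapped  = (digitalRead(PIN_TAP) == LOW);

  if (tapped) {
    // A tap reads as Interruption: a short, abrupt burst.
    tone(PIN_SPEAKER, FREQ_INTERRUPTION, 80);
  } else if (squeeze > 600) {
    // A firm squeeze reads as Friction: pitch rises with pressure.
    tone(PIN_SPEAKER, FREQ_FRICTION + squeeze / 8, 120);
  } else if (rotation > 50) {
    // Rotating the object reads as Rotation: pitch tracks the angle.
    tone(PIN_SPEAKER, FREQ_ROTATION + rotation / 4, 150);
  } else {
    // Resting state falls back to the wearer's dominant archetype (Flux here).
    tone(PIN_SPEAKER, FREQ_FLUX, 200);
  }

  delay(50);  // crude debounce / pacing
}
```

In the real object, the same gesture readings would also drive the paired device's colour gradients; the tone-only version above just makes the gesture-to-archetype mapping concrete.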
We imagined a speculative world where language is overused and distrusted, reflecting today’s fatigue with AI-generated content, fake news, and performative media. In response, Neuma explores how technology might amplify bodily expression rather than abstract it away. By using rhythm, gesture, and multisensory feedback, it asks how we might design for emotional connection beyond words.
The project followed a Research through Design approach, combining speculative framing with embodied ideation. Archetypes evolved from abstract emotions into conversational roles, prototypes progressed from foam models to Arduino-based wearables, and multisensory feedback was refined into vibration (private) and light/sound (public). User testing with over 35 participants shaped body placement, clarified pairing cues, and simplified gesture-sound mappings. Material exploration with silicone, TPU, and rigid cores emphasized tactility, ensuring the object feels like a bodily companion rather than a gadget.
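The Private-mode haptics can be sketched in a similarly hypothetical way: a small vibration motor plays a distinct pulse pattern for each archetype. The pin, intensities, and timings below are assumed for illustration rather than taken from the prototypes.

```cpp
// Hypothetical sketch of Private-mode haptics: one vibration pattern per
// archetype on a coin vibration motor. Pin, duty cycles, and timings are
// illustrative assumptions.

const int PIN_MOTOR = 5;  // PWM-capable pin driving the motor (assumed)

// Pulse the motor `count` times at a given strength (0-255) and tempo.
void pulse(int count, int strength, int onMs, int offMs) {
  for (int i = 0; i < count; i++) {
    analogWrite(PIN_MOTOR, strength);
    delay(onMs);
    analogWrite(PIN_MOTOR, 0);
    delay(offMs);
  }
}

void setup() {
  pinMode(PIN_MOTOR, OUTPUT);
}

void loop() {
  pulse(1, 120, 800, 400);  // Flux: one long, soft swell
  pulse(3, 180, 150, 150);  // Repetition: three even pulses
  pulse(2, 255, 300, 100);  // Friction: two strong, uneven buzzes
  pulse(1, 255, 60, 900);   // Interruption: a single sharp tick
  pulse(4, 100, 100, 300);  // Rotation: a light, circling flutter
  delay(2000);              // pause before cycling the demo again
}
```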
We created a large set of prototypes to test how people engaged with the object through sound and expression.