Sound Design Process

Test it on the thing.

Why designing sounds in the studio is only half the job — and what you only find out when you play them through the actual device.

Early in my career I posted something on Twitter that I still think about: "Heads up to all sound designers working on mobile app notifications — everything below 500Hz can cause unwanted distortion." The obvious reading is that early smartphone speakers weren't very good. But the more important lesson is the one underneath it: whenever you design sounds for a device, you have to test them on the actual thing.
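That 500 Hz caveat is something you can sanity-check before a sound ever reaches hardware. As a rough illustration (none of this is from the original tweet, and the cutoff and filter design are purely illustrative — a real pipeline would use a proper DSP library), here's a minimal first-order high-pass filter that strips out the low end a small speaker can't reproduce cleanly:

```python
import math

def highpass(samples, sample_rate, cutoff_hz):
    """First-order RC-style high-pass filter.

    Attenuates content below cutoff_hz — a crude way to keep
    notification sounds out of the range a tiny speaker distorts on.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def peak(samples):
    return max(abs(s) for s in samples)

sr = 44100
n = sr  # one second of audio
low  = [math.sin(2 * math.pi * 100  * i / sr) for i in range(n)]   # 100 Hz tone
high = [math.sin(2 * math.pi * 2000 * i / sr) for i in range(n)]   # 2 kHz tone

low_f  = highpass(low,  sr, 500)
high_f = highpass(high, sr, 500)
# After filtering at 500 Hz, the 100 Hz tone is attenuated far more
# than the 2 kHz tone, which passes through nearly untouched.
```

Filtering in the studio only gets you so far, of course — the point of the post is that the speaker itself is the final filter, and you only hear that one on the device.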

This sounds obvious. In practice, it gets skipped more often than you'd think.

The mixing trap

In music production, the goal is a mix that translates — that sounds good on a phone speaker, a car stereo, a club system, a pair of earbuds. To get there you balance the full frequency spectrum: lows, mids, highs, all in relation to each other. It's a craft that takes years to develop.

Functional sounds — notification sounds, UI feedback, device prompts — are simpler. Fewer elements, narrower frequency range. You might think that makes them easier to get right in the studio. In some ways it does. But simplicity has its own traps. A sound that occupies a narrow frequency band can feel perfectly balanced in headphones and then behave completely differently through a small hardware speaker. There's less cushioning, less context. Every frequency decision is exposed.

Mixing a functional sound well is the starting point, not the finish line. The real test only happens when you press play on the device itself, in the environment where it will actually live.

The action around the sound

There's a second dimension to testing that's easy to overlook: the physical context of the interaction. When a sound is triggered by a user action — pressing a button, completing a gesture, holding a surface — the sound doesn't exist in isolation. It exists in relation to everything happening around it.

Is there a physical click when the button is pressed? Does the user's hand or body shift as they do it? Is there ambient noise in the environment? All of this shapes how the sound is perceived. A feedback sound that feels satisfying in silence might feel redundant next to a loud mechanical click. A sound designed for a quiet home might disappear entirely in a busy kitchen.

There's also the question of timing — how long after the action the sound plays, and how that gap feels. Sound and haptics are a particularly powerful combination here: they can confirm that a long-press has been registered, signal a mode change, or indicate that a threshold has been crossed. But getting that coordination right requires testing the actual interaction, not just listening to the sound file.

Simulating the real thing

When I was working on the Urbanista Malibu — their solar-powered Bluetooth speaker — I didn't have the final product to test with at the start. So I found a device with similar acoustic characteristics and started playing sounds through it as early as possible in the process. It wasn't a perfect simulation, but it was close enough to change how I was working. Hearing the sounds through a real speaker enclosure, rather than studio monitors, revealed things that would have been invisible otherwise — how certain frequencies built up, how the sounds projected into a room, how they felt from a metre away rather than piped straight into your ears.

It made a significant difference. But there was still something I couldn't simulate: the feeling of pressing the actual button on the actual device. That coordination — sound, haptic response, physical action — could only be properly evaluated once the first versions were implemented in the hardware. There's no shortcut for it.

The studio is a starting point

None of this means studio work doesn't matter. The mix, the frequency choices, the character of the sound — all of that is shaped in the studio, and it has to be right before you take it anywhere else. But the studio is an ideal environment. Flat monitoring, controlled acoustics, no interference. The real world is none of those things.

The sounds that hold up are the ones that have been tested beyond the ideal — played through the actual speaker, triggered by the actual interaction, heard in the actual environment. That's where you find out what you really made. And more often than not, it's where the final 20% of the work happens.

Get it right in the studio. Then go test it on the thing.
