Someone lost their voice from talking to AI all day
I built Rubber Duck, a voice coding agent. I built a Todoist Ramble clone for Things 3 that streams audio to Gemini 2.5 Flash Live and lets the model execute tool calls directly. I built Commandment, an open source WisprFlow alternative. Each project started from the same observation: typing is a bottleneck when the listener understands natural language.
Based on voice AI projects built between late 2025 and early 2026.
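The core of that Things 3 clone is a short loop: stream microphone audio up, get tool calls back, run them, and return the results so the model can keep talking. Here is a minimal sketch with the google-genai Python SDK; the model name, the add_task declaration, and the dispatch hook are placeholders for whatever you actually wire up, not the project's exact code.

```python
import asyncio
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Placeholder: use whichever Live-capable Gemini model you have access to.
MODEL = "gemini-live-2.5-flash-preview"

CONFIG = {
    "response_modalities": ["AUDIO"],  # the model talks back; playback is omitted below
    "tools": [{
        "function_declarations": [{
            "name": "add_task",
            "description": "Create a to-do in Things 3.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "when": {"type": "string", "description": "e.g. 'today', 'tomorrow', or a date"},
                },
                "required": ["title"],
            },
        }],
    }],
}

async def ramble(mic_chunks, dispatch):
    """mic_chunks: async iterator of raw 16 kHz 16-bit PCM audio from the microphone.
    dispatch(name, args): runs a tool call locally and returns a result."""
    async with client.aio.live.connect(model=MODEL, config=CONFIG) as session:

        async def send_audio():
            # Stream audio to the model as it is captured; no transcription step in between.
            async for chunk in mic_chunks:
                await session.send_realtime_input(
                    audio=types.Blob(data=chunk, mime_type="audio/pcm;rate=16000")
                )

        async def handle_tool_calls():
            # When the model decides to act, it sends function calls; run them
            # and return the results so it can confirm out loud.
            async for message in session.receive():
                if message.tool_call:
                    await session.send_tool_response(function_responses=[
                        types.FunctionResponse(
                            id=fc.id,
                            name=fc.name,
                            response={"result": dispatch(fc.name, fc.args)},
                        )
                        for fc in message.tool_call.function_calls
                    ])

        await asyncio.gather(send_audio(), handle_tool_calls())
```

What strikes me every time is how little glue there is. The model does the listening, the parsing, and the deciding; the code only has to execute.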
The usage pattern that tells you something shifted
Someone in our community literally lost his voice from talking to AI so much. Not a metaphor: he told us his voice gave out after talking to AI all day. That is the kind of usage pattern that tells you something fundamental has shifted.
When people use a tool so intensely that it causes physical strain, you are past the novelty phase. That is adoption. Uncomfortable, unsustainable, but real.
Voice removes the translation layer
Voice removes the translation layer between thinking and doing. You do not have to figure out how to type what you mean. You just say it. For task management, for coding, for controlling tools — the mouth is faster than the fingers when the listener is an LLM that understands natural language natively.
The friction of typing is not just speed. It is the cognitive overhead of translating a thought into a structured input. Voice skips that step entirely. You think it, you say it, the agent does it.
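Concretely, the structured input in the Things 3 case is just the arguments of the add_task call from the loop above. "Remind me to renew the domain on Friday" arrives as something like {"title": "Renew the domain", "when": "friday"}, and a small handler forwards it over Things' documented URL scheme. Again a sketch; the handler and its defaults are illustrative.

```python
import subprocess
import urllib.parse

def dispatch(name: str, args: dict) -> str:
    """Run a tool call from the model. Only add_task is wired up in this sketch."""
    if name == "add_task":
        # The spoken sentence arrives here already structured; nobody typed it.
        params = {"title": args["title"]}
        if args.get("when"):
            params["when"] = args["when"]  # e.g. "today", "tomorrow", or a date string
        url = "things:///add?" + urllib.parse.urlencode(params)
        subprocess.run(["open", url], check=True)  # macOS hands the URL off to Things 3
        return url
    return f"unknown tool: {name}"
```

The typing that used to produce that dictionary is exactly the step voice deletes.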
The interface is collapsing
The tools I am building now would have been science fiction two years ago. Streaming audio to a model that executes tool calls in real time. Talking to your computer and having it actually do what you said.
The interface between human intent and machine execution is collapsing, and voice is the medium that collapses it fastest. The question is not whether speech replaces typing for AI interaction. It is how quickly the tooling catches up to the behavior people already want.