My Journey to a reliable and enjoyable locally hosted voice assistant

6 hours ago (community.home-assistant.io)

actually the hardest part of a locally hosted voice assistant isn't the llm. it's making the tts tolerable to actually talk to every day.

the core issue is prosody: kokoro and piper are trained on read speech, but conversational responses have shorter breath groups and different stress patterns on function words. that's why numbers, addresses, and hedged phrases sound off even when everything else works.

the fix is training data composition. conversational and read speech have different prosody distributions and models don't generalize across them. for self-hosted, coqui xtts-v2 [1] is worth trying if you want more natural english output than kokoro.

btw i'm lily, cofounder of rime [2]. we're solving this for business voice agents at scale, not really the personal home assistant use case, but the underlying problem is the same.

[1] https://github.com/coqui-ai/TTS [2] https://rime.ai
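One common mitigation for the numbers-and-addresses problem is to normalize digits into words before the text ever reaches the TTS. A minimal sketch in plain Python (no TTS library assumed; a real normalizer also handles ordinals, currency, years, and locale conventions):

```python
# Minimal pre-TTS text normalizer: read digit strings out digit by digit,
# the way you'd speak an address or phone number. Illustrative sketch only.

ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]

def digits_to_words(token: str) -> str:
    """Expand a pure-digit token into spoken digits; pass everything else through."""
    if token.isdigit():
        return " ".join(ONES[int(ch)] for ch in token)
    return token

def normalize(text: str) -> str:
    return " ".join(digits_to_words(tok) for tok in text.split())

print(normalize("turn left at 742 Evergreen Terrace"))
# -> turn left at seven four two Evergreen Terrace
```

Running the response text through something like this before synthesis sidesteps the worst of the read-speech prosody on numbers, at the cost of hardcoding one reading style.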

  • 80% of my home voice assistant requests really need no response other than an affirmative sound effect.

If you're less concerned about privacy, I use Gemini 2.5 Flash for this and it's exceptionally good and fast as an HA assistant, while being much cheaper than the electricity that would be needed to keep a 3090 awake.

The thing that kills this for me (and they even mentioned it) is wake word detection. I have both the HA voice preview and FPH Satellite1 devices, plus have experimented with a few other options like a Raspberry Pi with a conference mic.

Somehow nothing is even 50% as good as my Echo devices at picking up the wake word. The assistant itself is far better, but that doesn't matter if it takes 2-3 tries to get it to listen to you. If someone solves this problem with open hardware I'll immediately buy several.

  • How about a button?

    I'd rather physically press a button on an intercom box than have something churning away, constantly processing sound.

  • What's been surprising in my experience regarding the wake word is that it recognizes me (adult male) saying the wake word ~95% of the time. However, it only registers the rest of my family (women and children) ~30% of the time.

    • I have no firsthand knowledge, but I’d strongly bet that the Home Assistant effort to collect donated training data mostly gets adult males, with nearly zero children.


  • What about your wifi APs sensing which room you are in, with your choice of hilarious dance moves as the trigger?

    Funky chicken for Gemini

    Penguin dance for OpenAI

    Claude?

  • I have a feeling beamforming microphone arrays might help here; something like this could substantially improve the audio being processed - https://www.minidsp.com/products/usb-audio-interface/uma-8-m....

    • That's a good call. I have a PS3(?) mic/camera that I was using when I was running the original Mycroft project on a Pi. I wonder if that would help with the inbuilt HA mic not waking for most of my family, most of the time. I will have to look at my VA Preview device and its specs later because I'm not sure if you can connect an external mic to it out-of-the-box.
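For the curious, delay-and-sum is the simplest scheme those mic arrays use: shift each channel by the delay expected for sound arriving from the target direction, then average, so speech adds coherently while diffuse noise partially cancels. A toy sketch with made-up sample data (real arrays derive the delays from array geometry and steer continuously):

```python
# Delay-and-sum beamforming sketch: align channels by per-mic sample
# delays, then average. Coherent speech reinforces; uncorrelated noise
# partially cancels.

def delay_and_sum(channels, delays):
    """channels: equal-length sample lists; delays: samples of lateness per channel."""
    n = len(channels[0])
    out = []
    for i in range(n):
        acc = 0.0
        for ch, d in zip(channels, delays):
            j = i + d          # advance late channels to realign them
            if 0 <= j < n:
                acc += ch[j]
        out.append(acc / len(channels))
    return out

# Synthetic check: mic1 hears the same pulse 2 samples later than mic0.
mic0 = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]
mic1 = [0, 0] + mic0[:-2]
out = delay_and_sum([mic0, mic1], [0, 2])
print(out[:8])
# -> [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0]
```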

  • Why not use an easier-to-detect wake “word”, like two claps in quick succession? Or a couple of notes of a melody?

    • Can't clap if your hands are full and I would not subject my family to my attempts at delivering a melody.

      I haven't tried training my own wake word though, I'm tempted to see if it improves things.

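The two-claps idea above can be sketched as a simple energy-peak detector: look for two short amplitude spikes the right distance apart. The thresholds below are illustrative, not tuned values:

```python
# Two-clap "wake word" sketch: trigger on two amplitude spikes separated
# by a plausible clap gap. Thresholds are made up for illustration.

def detect_double_clap(samples, rate=16000, threshold=0.6,
                       min_gap=0.1, max_gap=0.6):
    """samples: normalized amplitudes in [-1, 1]. True if two peaks land
    between min_gap and max_gap seconds apart."""
    peaks = []
    last = -10**9
    for i, s in enumerate(samples):
        # Debounce: a single clap spans many loud samples; count it once.
        if abs(s) >= threshold and i - last > int(min_gap * rate):
            peaks.append(i)
            last = i
    for a, b in zip(peaks, peaks[1:]):
        if min_gap <= (b - a) / rate <= max_gap:
            return True
    return False

# Synthetic check: one second of silence with two spikes 0.3 s apart.
audio = [0.0] * 16000
audio[2000] = 0.9
audio[2000 + 4800] = 0.9
print(detect_double_clap(audio))
# -> True
```

In practice you'd run this on frame energies rather than raw samples and tune the gaps against real recordings, but the shape of the detector is this simple.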

One that I have been experimenting with is using analog phones (including rotary ones!) to act as the satellites. I live in an older home and have phone jacks in most of the rooms already so I only had to use a single analog telephone adapter. [0] The downside is I don't have wake word support, but it makes it more private and I don't find myself missing my smart speakers that much. At some point I would like to also support other types of calls on the phones, but for now I need to get an LLM hooked up to it.

[0] https://www.home-assistant.io/voice_control/worlds-most-priv...

I'm still waiting for the promise of voice AI that was shown during the OpenAI demo in 2024 to somehow turn real. It's not clear to me why there has been zero progress since then.

  • There's a gap between what the tech can do and applying it: it often needs to be configured and packaged to be usable in that way.

Do people like talking to voice assistants? I've used one occasionally (mostly for timers when I'm cooking), but most of the time it would be faster to just do it myself, and it feels much less awkward than talking to empty air, asking it to do things for me. It might be because I just really don't like making more noise than I have to.

(Yes, I appreciate that some people may be disabled in such a way that it makes sense to use voice assistants, eg motor problems)

  • I consider each time I need to pull out my phone and "do it myself" to be a failure of my smart home system.

    If a light cannot be automatically on when I need it (like a motion sensor) or controlled with a dedicated button within arm's reach (like a remote on my desk), then the third best option is one that lets me control it without interrupting what I'm doing, moving from where I am, using my hands, or possessing anything (a voice assistant).

    • Do you not just turn the light on when you go in a room, and turn it off again when you go out? All the rooms in my flat have switches next to the door.


  • I pretty much only use them for timers and weather, and the occasional lookup for quick random info. And this is all only if I don’t have a phone handy or eg the toddler is going to timeout and I need to set his timer in the midst of him having a meltdown about it.

    It’s why I haven’t and won’t enable Gemini, and I’ll likely chuck my Nest Minis once I’m forced into an LLM-based experience. Hopefully they’ll at least still function as dumb Bluetooth speakers, but I’m not holding out hope on that end.

  • I strongly prefer voice. I don't want to stop what I am doing, find a device, open the app, wait for it to refresh, and navigate and click just to get milk on a list. Sure, you can cut this down a few steps, but all of them still require me to move and have a hand and eye free.

  • I guess most of my use is whilst driving, to start/stop music or audiobooks, change navigation etc. Although changing navigation through Siri is somewhat painful as it often gets my intended destination wrong lol.

  • I would, if they worked even 90% of the time.

    I mostly set timers because it’s one of the few things that always works.

  • I use it frequently for reminders and calendar events when not at a computer, as voice is faster than the mobile interface (with so many screens) for setting something up.

  • I love it for lists: my hands are full making something in the kitchen, and I can just tell it to add things to my grocery list as soon as I notice I'm out of something.

  • I started designing and building a voice assistant for myself and then realized that the only time I'd find it useful would be during cooking, to set timers. But a loud extractor fan would be running, making voice recognition very difficult.

    • An extractor fan is the kind of consistent noise that good signal processing and voice recognition ought to be able to strip out, especially if using a dispersed mic array. Even if your voice is much quieter (to your human ears) than the fan. It's a channel separation problem.
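A crude sketch of that idea: estimate the fan's steady energy from a speech-free stretch, then mute frames that don't rise clearly above it. Real systems subtract noise per frequency bin (spectral subtraction); this is only the time-domain intuition, with made-up data:

```python
# Noise-gate sketch for stationary noise like an extractor fan: profile
# the fan's frame energy, then mute frames that stay near that floor.
# Real pipelines do spectral subtraction per frequency bin instead.

def frame_energy(frame):
    return sum(s * s for s in frame) / len(frame)

def noise_gate(samples, noise_profile, frame=160, margin=4.0):
    floor = frame_energy(noise_profile)
    out = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        if frame_energy(chunk) > margin * floor:
            out.extend(chunk)                  # likely speech: keep
        else:
            out.extend([0.0] * len(chunk))     # fan only: mute
    return out

# Synthetic check: one frame of quiet fan hum, then one louder "voice" frame.
fan = [0.05, -0.05] * 80
voice = [0.5, -0.5] * 80
out = noise_gate(fan + voice, fan)
print(out[:160] == [0.0] * 160, out[160:] == voice)
# -> True True
```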

I've been having a lot of fun using my old Mycroft AI device. Neon is the new software package. It didn't solve the issues highlighted in this thread, but it is a fun open device to hack on. I wrote a little web app that will speak in the standard voice and say things like "hey kids, I'm AI and know everything, and your dad is really cool." They love to yell at me when I do that.

Their first version is most likely already 10x better than Siri.

> Understands when it is in a particular area and does not ask “which light?” when there is only one light in the area, but does correctly ask when there are multiple of the device type in the given area.
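The quoted behavior is straightforward to express as logic: resolve "light" against the devices registered in the satellite's area and ask only when it is actually ambiguous. The names below are illustrative, not Home Assistant's actual API:

```python
# Sketch of area-aware disambiguation: one match resolves silently,
# several matches prompt a clarifying question. Device registry and
# function names are made up for illustration.

def resolve(device_type, area, devices):
    """devices: list of (name, type, area) tuples.
    Returns a device name, or a clarifying question/message."""
    matches = [name for name, dtype, darea in devices
               if dtype == device_type and darea == area]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        return f"I don't see a {device_type} in the {area}."
    return f"Which {device_type}? " + ", ".join(matches)

devices = [("ceiling light", "light", "office"),
           ("desk lamp", "light", "office"),
           ("floor lamp", "light", "den")]

print(resolve("light", "den", devices))
# -> floor lamp
print(resolve("light", "office", devices))
# -> Which light? ceiling light, desk lamp
```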

  • One of my favorite episodes:

    I set 2 timers for the same thing somehow. I then tried to cancel one of them.

      >“Siri, cancel the second timer”
      “You have 2 timers running, would you like me to cancel one of them?”
      >“Yes”
      “Yes is an English rock band from the 70s…”
      >“Siri, please cancel the timer with 2 minutes and 10 seconds on it”
      “Would you like me to cancel the timer with 2 minutes and 8 seconds on it?”
      >“Yes”
      “Yes is an English rock band from the 70s…”
    

    Eventually they both rang and she listened when I said stop.

    • Helping my kid get ready for shower I had this exchange:

      Me: "Text Jane Would you mind dropping down the robe and underpants"

      Siri: Sends Jane "Would you mind dropping down"

      Me: rolls eyes "Text Jane robe and underpants"

      Siri: "I don't see a Jane Robe in your contacts."

      Me: wishes I could drown Siri in the bathtub

      It's wild to me that Apple got the ability to do the actual speech-to-text part pretty much 100% solved more than half a decade ago, yet struggles in 2026 to turn streams of very simple, correctly-transcribed text into intents in ways that even a local model can figure out. Siri is good STT, a bunch of serviceable APIs that can control lots of stuff, with the digital equivalent of a brain-damaged cat sitting at the center of it guaranteeing the worst possible experience.

    • > "Stop" is a song by English girl group the Spice Girls from their second studio album, Spiceworld (1997).

I've recently purchased a couple of the Home Assistant Voice Preview Edition devices, and they leave a lot to be desired.

The wake word detection isn't great, and the audio quality is abysmal (for voice responses, not music).

Amazon has ruined their Alexa and Echo devices with ads and annoying nag messages.

I'd really like an open alternative, but the basics are lacking right now.

  • Can those devices (Amazon) be _jailbroken_? I was just wondering that this morning while taking a shower.

    • Generally no. Big tech companies have gotten good at locking devices down all the way to the bootloader. Some of the signing keys for certain OTA versions have leaked, but you can’t rely on that.

      Some of the devices contain browsers, and people have set up hacky ways to turn them into thin clients through that, but it’s not particularly reliable IME.

      I heard some Chinese brands that make similar hardware for Chinese consumers don’t lock their devices down, letting you flash an open install of Android on them, but I haven’t seen anyone try that IRL.

    • YouTube has been trying to push me to watch a video about jailbreaking the Echo Show for a week now. I didn't watch it, but it's probably easy to find.