Bought a bunch for a client project. Decent form factor, but they really need hardware that can do on-device AI inference, and there are plenty of reasonably priced choices out there.
Their whole “private” voice assistant story falls apart when they have to send everything to the cloud, including to third-party services.

I did set up a local LLM on a pretty beefy machine, with Whisper for speech-to-text, to do local voice assistance, but it kept falling apart. The only setup that worked reasonably well was tying it to a commercial API. That was more than a year ago, though, so things may have improved.
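For what it's worth, the pipeline I mean is just STT → LLM → response. A minimal structural sketch in Python, with stub backends standing in for Whisper and whatever local model server you run (the names and callables here are hypothetical, not any particular library's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    """Wires speech-to-text into an LLM; both backends are injected.

    stt: audio bytes -> transcript (e.g. a local Whisper model)
    llm: prompt -> reply (e.g. a llama.cpp or Ollama endpoint)
    """
    stt: Callable[[bytes], str]
    llm: Callable[[str], str]

    def handle(self, audio: bytes) -> str:
        transcript = self.stt(audio)
        if not transcript.strip():
            return ""  # nothing recognized; stay silent
        return self.llm(f"User said: {transcript}\nAssistant:")

if __name__ == "__main__":
    # Stub backends; swap in real Whisper / local-model calls.
    pipeline = VoicePipeline(
        stt=lambda audio: "turn off the kitchen lights",
        llm=lambda prompt: "Okay, turning off the kitchen lights.",
    )
    print(pipeline.handle(b"\x00\x01"))
```

The point of injecting the backends is that this was exactly where my setup kept breaking: each piece worked in isolation, but the glue between them didn't.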
But if they want to sell these things as an Alexa/Google Home alternative, they'd have a hell of a story if they built a one-stop hub that ran an on-device model and did everything locally. There are a lot of capable smaller models now, and companies like MediaTek make decent edge processors with beefy NPUs.
I’m still a big fan of their approach. Sad to see them drop the Yellow, but I hope it means they’ll come out with something better.