• 11 Posts
  • 75 Comments
Joined 3 years ago
Cake day: July 9th, 2023

  • I use the HA Voice Preview in two different rooms and got rid of my Alexa Dots. For STT, I’ve been trying both speech-to-phrase and Whisper with medium.en running on the GPU; for the LLM, I’ve tried llama3.2 and granite4 with local command handling.

    I’ve been trying to get it working better, but it’s been a struggle. The wake word responds to me, but not my girlfriend’s voice. I try setting timers, and it says done, but never triggers the timer.

    I’d love to improve the performance of my assistant, and want to know what options work well for others. I’ve been experimenting with an intermediary STT proxy that sends audio to both Whisper and speech-to-phrase to see which one reports more confidence.
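    A minimal sketch of that "race the two STT engines" idea. The backend functions below are placeholders for illustration only; a real version would speak the Wyoming protocol to wyoming-faster-whisper and speech-to-phrase, which this sketch doesn't attempt:

```python
# Sketch of an STT comparison proxy: run the same audio through two
# backends and keep whichever transcript reports higher confidence.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, Tuple

Transcriber = Callable[[bytes], Tuple[str, float]]  # returns (text, confidence 0..1)

def best_transcript(audio: bytes, backends: Dict[str, Transcriber]) -> Tuple[str, str, float]:
    """Run every backend on the same audio; return (backend_name, text, confidence)."""
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = {name: pool.submit(fn, audio) for name, fn in backends.items()}
        results = {name: fut.result() for name, fut in futures.items()}
    winner = max(results, key=lambda name: results[name][1])
    text, confidence = results[winner]
    return winner, text, confidence

# Placeholder backends, hard-coded results for illustration only.
def fake_whisper(audio: bytes) -> Tuple[str, float]:
    return "set a timer for five minutes", 0.91

def fake_speech_to_phrase(audio: bytes) -> Tuple[str, float]:
    return "set a timer", 0.72

if __name__ == "__main__":
    name, text, conf = best_transcript(b"<wav bytes>", {
        "whisper": fake_whisper,
        "speech-to-phrase": fake_speech_to_phrase,
    })
    print(f"{name}: {text!r} ({conf:.2f})")
```

    One design question this surfaces: Whisper doesn't natively emit a single calibrated confidence score, so whatever number the backend reports would need normalizing before the comparison is meaningful.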



  • I’d love for my HA Voice Preview to be good enough to replace my Alexa/Google devices; I even unplugged my Alexa devices. However, it’s been rough going for me. It never responds to my girlfriend speaking the wake word, and it doesn’t set timers. A number of knobs determine how well it works: the physical hardware (the Voice Preview itself, but also some community-made versions with better mics), the wake word model, the conversation LLM, and the speech-to-text model (Whisper vs. speech-to-phrase). If it works well for you, can you share the configuration you’re using?



  • chaospatterns@lemmy.world to Selfhosted@lemmy.world · Open-WebUI v0.6.29 release · 6 months ago

    A newer release, v0.6.30, is already out to fix an issue with the OneDrive integration.

    Looks like they finally made their slim image tag smaller than the main image:

    ghcr.io/open-webui/open-webui:v0.6.30-slim    7c61b17433e8   46 hours ago    4.3GB
    ghcr.io/open-webui/open-webui:v0.6.30         c1ac444c0471   46 hours ago    4.82GB
    

    Though saving only ~0.5 GB is not very slim. I use OpenWebUI in my home lab, but this issue made me question the quality of the project a tiny bit.


  • Gluetun doesn’t make sense here. You’re forcing all the traffic from Jellyfin to go through Mullvad, but you need to be able to connect to Jellyfin directly, because Jellyfin is a service you connect to.

    Since your Tailscale is host-network mounted, you’ll be able to expose your Docker network subnets over Tailscale and then access Jellyfin. This is done by advertising routes: the TS_ROUTES env variable on the tailscale/tailscale image, or tailscale up --advertise-routes. Docker allocates its networks from the 172.16.0.0/12 range by default.

    You probably intend to gluetun your downloading software, not Jellyfin.
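    To illustrate the subnet-route step above, a sketch using the official tailscale/tailscale image. The auth key is a placeholder, 172.17.0.0/16 is only Docker's default bridge range (check yours with docker network inspect), and advertised routes still need approval in the Tailscale admin console:

```shell
# Advertise the Docker bridge subnet over Tailscale so other tailnet
# devices can reach container IPs (e.g. Jellyfin) directly.
docker run -d --name tailscale \
  --network host \
  --cap-add NET_ADMIN \
  -e TS_AUTHKEY=tskey-auth-REPLACE-ME \
  -e TS_ROUTES=172.17.0.0/16 \
  -v tailscale-state:/var/lib/tailscale \
  tailscale/tailscale

# If tailscaled already runs on the host instead, the equivalent is:
# tailscale up --advertise-routes=172.17.0.0/16
```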