7777777phil 4 hours ago [-]
The main thing I see here is that prompts never touch a third-party server. If you're in a regulated industry, or just don't want proprietary context hitting an API, running inference on your own hardware with encrypted p2p access from any device is really cool (and useful).
(Staying in userspace via tsnet without touching kernel sockets is a nice touch too.)
ZeroCool2u 20 hours ago [-]
I've been trying to use this all morning, but I keep getting 500/auth errors, even on a completely new device. I can't even log in to my LM Studio account right now.