

You have to find the right plugin version. The ones on the website use different versions than the one installed by ujust. I can’t remember where I found them but it was a github repo
I tried Mint and Ubuntu on a jailbroken Chromebook and they had no audio; Fedora worked out of the box
For VR, apparently AMD is the way to go, but their cards don't support HDR over HDMI 2.1, so no matter what there are tradeoffs currently, at least on my setup
To be honest, a lot of my issues are probably just getting used to Plasma over GNOME. The atomic part so far hasn't really been inconvenient
Yeah, apparently the current fix is to just get an AMD card or a wireless headset. I'm trying to get a SteamVR alternative running, which is non-trivial on Bazzite, but that just "avoids" the bugs in the Nvidia wired drivers
Bazzite for the past two days has not been as easy as everyone makes it sound, and I say this as a software engineer who works with Linux 5 days a week. Some of the UI choices are just weird, and VR support with Nvidia is so horrible I may end up having to dual boot.
We do already know about model collapse though: GenAI is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then it only does well on things strongly matching its training data.
Most people throwing around the word "agents" have no idea what they mean vs. what the people building and promoting them mean. Agents have been around for decades, but what most are building is just using GenAI for natural language processing to call scripted Python flows. The only way to make them look coherent reliably is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is "don't use an LLM, and if you do, don't use multiple". We will never get beyond the current techniques essentially being seeded random generators, because that's what they are intended to be.
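To make the pattern concrete, here's a minimal sketch of that "thin LLM" agent design: the model only maps free text to one of a few known intent labels, and all the real work happens in plain, scripted Python. Every name here is hypothetical, and the LLM call is stubbed out with keyword matching so the sketch runs on its own.

```python
def classify_intent(user_text: str) -> str:
    """Stand-in for the one job given to the LLM: pick an intent label.
    In a real system this would be a constrained model call (e.g. forced
    to choose from the keys of FLOWS); here it's a keyword stub."""
    text = user_text.lower()
    if "refund" in text:
        return "refund"
    if "status" in text:
        return "order_status"
    return "fallback"

# Scripted, deterministic flows -- the LLM never touches these.
def handle_refund() -> str:
    return "started refund workflow"

def handle_order_status() -> str:
    return "looked up order status"

def handle_fallback() -> str:
    return "escalated to a human"

FLOWS = {
    "refund": handle_refund,
    "order_status": handle_order_status,
    "fallback": handle_fallback,
}

def agent(user_text: str) -> str:
    intent = classify_intent(user_text)
    # Any unexpected label falls back to the safe path
    # instead of trusting the model's output.
    return FLOWS.get(intent, handle_fallback)()
```

The point of the structure is that the model's output is only ever a label checked against a whitelist, so a wrong or garbled answer degrades to the fallback flow rather than to arbitrary behavior.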
It seems like you are implying that models will follow Moore's law, but as someone working on "agents" I don't see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would reliably get exponentially more training data is another issue. We may get "AI", but it isn't going to be based on LLMs
NP, I figured it out by looking through the logs. I think you can turn them on in the settings