RE: 2025 PC Upgrade

in #gaming · 4 days ago

For most people, I'd agree: no real need to chase upgrades. I used to, but I stopped; the difference is that I depend on my machine to do my job. Unless you game, do AI, or develop, I'd usually just recommend a $300-400 mini PC, they are so great these days. I have a cluster of mini PCs running 60-80 Docker containers and a few VMs at any given time, with sub-3-minute failover.

My old system wasn't slow by any means; I just didn't want to deal with power supply issues when upgrading to a 5090. There were also games I play that the 3090 just wasn't handling as well as I'd like.

What models are you using locally? I found most local models just suck until you get into a few hundred GB of VRAM and can run something like DeepSeek or Qwen3 235B at high quants. It really depends on what you're doing, though; I find Claude to be my go-to for most things, but I try to offload smaller tasks to Qwen. Llama is such garbage, though. Hunyuan just came out, is getting a lot of good press, and is a small model considering; some are saying it's on par with Qwen3 235B, but I haven't done any testing yet.

I hear good things about Bazzite, but I haven't tried it. I went through a lot of distros a while ago until I settled on Arch. Arch is amazing, and I've made some changes that are really handy. For example, I have pacman hooks that save my packages to a text file, so I always have a list handy, and that create named BTRFS snapshots before and after each transaction.
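For anyone curious, pacman hooks are just files dropped into /etc/pacman.d/hooks/. A minimal sketch of the package-list part would look something like this (the hook name and output path here are illustrative, not my exact setup):

```ini
; /etc/pacman.d/hooks/pkglist.hook
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Saving list of explicitly installed packages
When = PostTransaction
; -Qqe prints explicitly installed package names, one per line
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /home/me/pkglist.txt'
```

The snapshot side works the same way, just with a second hook using `When = PreTransaction` so you get a BTRFS snapshot on both sides of the upgrade.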


Local models: I've also got some on my laptop (M2 MacBook Air w/ 24 GB of RAM), but I have the bigger versions on my PC. I'm on my laptop at the moment, in the kitchen, away from the server rack in the study :D


I don't use them heavily, but the best use case I can think of at the moment is running a batch prompt to check HIVE posts for suspected AI-generated content. I think Hermes is pretty good at that. (Just using LM Studio, because I am far, far from a dev.)
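LM Studio can run an OpenAI-compatible local server (http://localhost:1234/v1 by default), so even a non-dev batch check can be a short script. A rough sketch, where the model name and the prompt wording are placeholders rather than my actual setup:

```python
# Batch-check posts against a model loaded in LM Studio's local server.
# Assumes the server is running on the default port; model name is illustrative.
import json
import urllib.request

def build_request(post_text: str) -> dict:
    """Build a chat-completion payload asking the model to flag AI-gen text."""
    return {
        "model": "hermes-3-llama-3.1-8b",  # whatever model is loaded in LM Studio
        "messages": [
            {"role": "system",
             "content": "You judge whether a post reads as AI-generated. "
                        "Answer with one word: HUMAN or AI."},
            {"role": "user", "content": post_text},
        ],
        "temperature": 0,
    }

def check_post(post_text: str) -> str:
    """POST one post to the local server and return the model's verdict."""
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(build_request(post_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    posts = ["First post text...", "Second post text..."]
    for post in posts:
        print(check_post(post))
```

The nice part is that nothing here is LM Studio-specific: swap the base URL and the same script talks to any OpenAI-compatible endpoint.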

I use the local LLMs very sparingly, but that's the use case for a lot of them: they're not that excellent for most things, but they are good at helping me figure out why the crap my shitty Python code doesn't work, or at deciphering things like dependency trees in different virtual environments, mainly for the text-to-image stuff I've been playing with.

I have several hundred gigabytes of various text-to-image models on my main PC. I've trained a few models on my own photographic work and use them to ideate things for new shoots when I'm working with models. I've also been trying to use them for detailing or restoring focus to missed shots, but on raw images (at 24 megapixels) it takes 6 minutes per image and generally isn't worth it.

I have found Gemini to be the best of the non-local models, particularly for my main use case, which was to take an Excel spreadsheet I had made, plus some Python logic, and turn it into a web app so I could stop relying on Excel for my budget / financial runway calcs.
