

That’s what you just got shown: Shove the configgy bits into Git.
You will likely have to find the configs you want to save first.
Gamers Nexus recently made a video about GPU “shrinkflation”: https://youtu.be/2tJpe3Dk7Ko
It’s a neat comparison of Nvidia GPUs over the years.
I installed it and tried it on occasion, but it never worked for finding any coupons. It was the only extension I had that I kept disabled because I always thought its interaction with the browser and web pages was sus as fuck.
TBH, it was more of a curiosity I kept around to explore one day. I also dissect and detonate malware a few times a week, so I just treated Honey as such.
(That folder named “malware” on my computer is actually real. I pity the poor soul who steals it thinking it’s just a jokingly named folder where I store my private data.)
Unrelated: I finally got my first .SVG downloader today, actually. Whoever the fuck thought it would be a good idea to add a script tag to SVG needs to be put down.
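For anyone curious, here’s a rough sketch of the kind of check I mean: just flag any SVG that carries a script element. The samples folder is made up, and for actual malware triage you’d want a hardened parser like defusedxml rather than the stdlib one.

```python
# Minimal sketch: flag SVG files that embed <script> elements.
# The ./samples directory is a placeholder; point it at whatever you quarantine.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

SVG_NS = "{http://www.w3.org/2000/svg}"

def has_script(svg_path: Path) -> bool:
    """Return True if the SVG contains any <script> element."""
    try:
        root = ET.parse(svg_path).getroot()
    except ET.ParseError:
        # Malformed XML is suspicious in its own right; flag it too.
        return True
    return any(elem.tag in (f"{SVG_NS}script", "script") for elem in root.iter())

if __name__ == "__main__":
    for path in Path("samples").glob("*.svg"):
        if has_script(path):
            print(f"[!] script element found in {path}", file=sys.stderr)
```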
According to Baidu and most of .ml, absolutely nothing. It was a perfectly normal day of getting emulsified by tanks. There are no unhappy people in China, and they have the CCTV recordings to prove it!
Maybe. People with more technical knowledge should understand that LLMs aren’t magic or sentient and have some severe limitations. Hell, I have been tinkering with ML and ANNs for the better part of 15 years or so, and they can be extremely useful. (I am no expert and never intend to be.)
It’s the marketing wank, scams, art theft and all the bullshit that pisses me off now. In that regard, I am squarely in the “Fuck AI” category. There is absolutely nothing phenomenal that has come of this recent bubble in the commercial space. AI generated images are mostly trash, articles are riddled with gross factual errors, phishing and other scams are more realistic (and maybe even more dynamic) now and public forums contain even more annoying bots. And the worst bit is that AI generated media, like music, is just a collection of averaged values with no originality.
That bell curve represents something but it isn’t IQ.
Having v-cache on one CCD is not an issue. It seems most of the scheduler issues have been fixed and that was just software. It would be nice to have v-cache on both, but it actually adds more complexity.
Is SPM typically considered continuous production without slowdowns forever, or is there some unwritten rule about needing to maintain 1000+ for a specific amount of time?
I can spike to 700 SPM at times, but it only lasts for a minute or two until a resource or two gets completely sucked dry. (Until I get better production, I just pace my research and buffer enough resources to keep research at 100% for a bit.)
Employers figured out years ago that caffeine has excellent ROI for productivity. (Amphetamines are probably a close second, but we won’t talk about that right now.)
For Intel to cut basic morale boosters was just pure silliness.
Consoles made sense when they required specialized hardware. The $700 for a PS5 is probably better spent on a much better GPU for a PC, IMHO.
It’s cool if you like consoles! They still have a specific allure, so I get it.
I am curious as to why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.
It’s the rendering bit that takes all the complex maths, and if that is reduced, it would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)
If the rendered image is only 85% of a 4K frame, the remaining ~15% works out to roughly 1.2 million pixels that need to be computed, and it still seems plausible to keep everything on the GPU.
With all of that blurted out, is FSR4 AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)
It seems like it would be extremely fast to me. Take a 50x50 block of pixels and expand those across a 100x100 pixel grid, leaving blank pixels where you have missing data. If a blank pixel is surrounded by blue pixels, the probability of the missing pixel being blue is fairly high, I would assume.
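To illustrate the idea (and only the idea — this is the naive version of “fill the blanks from the neighbors”, not how FSR actually works), a quick sketch:

```python
# Rough sketch of the "expand, then fill blanks from neighbors" idea above.
import numpy as np

def naive_upscale_2x(low: np.ndarray) -> np.ndarray:
    """Spread an HxWx3 image onto a 2Hx2Wx3 grid and fill the gaps."""
    h, w, c = low.shape
    high = np.zeros((h * 2, w * 2, c), dtype=low.dtype)
    known = np.zeros((h * 2, w * 2), dtype=bool)

    # Drop the known pixels onto every other position of the bigger grid.
    high[::2, ::2] = low
    known[::2, ::2] = True

    # Fill each unknown pixel with the average of its known 3x3 neighbors.
    for y in range(h * 2):
        for x in range(w * 2):
            if known[y, x]:
                continue
            ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
            neighbors = high[ys, xs][known[ys, xs]]
            if len(neighbors):
                high[y, x] = neighbors.mean(axis=0)
    return high

# Example: a 50x50 block of blue pixels becomes a filled 100x100 block.
block = np.zeros((50, 50, 3), dtype=np.float32)
block[..., 2] = 1.0  # blue channel
print(naive_upscale_2x(block).shape)  # (100, 100, 3)
```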
That is a problem that is perfect for AI, actually. There is an actual algorithm that can be used for upscaling, but at its core, it’s likely boiled down to a single function, and AIs are excellent at replicating the output of basic functions. It’s not a perfect result, but it’s tolerable.
Whether this example is correct for FSR, I have no clue. However, having AI shit out data based on probabilities is mostly what it does.
No Starfield? Oh noes!
Future chips are not affected by THIS CPU bug yet.
Same. I support AI completely as a tool to solve specific problems, and that is about it. What is really cool is that AI libraries and such got a massive boost of much-needed development, so plebs like me can code simple ANN apps in Python with little skill. Documentation has improved 100x and hardware support is fairly good.
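Case in point: a toy ANN that learns to approximate sin(x) is only a handful of lines now. (Assuming PyTorch; the task is just a stand-in.)

```python
# A minimal example of the kind of tiny ANN that's now easy to throw together.
import torch
import torch.nn as nn

# Toy data: learn y = sin(x) on [-pi, pi].
x = torch.linspace(-torch.pi, torch.pi, 512).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.5f}")
```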
LinkedIn seems to be an interesting indicator of where tech is in its hype cycle. I guess LinkedIn went from 100% AI-awesome-everything about 2 months ago to almost zero posts and ads about it. I suppose most of the vaporware AI products are imploding now…
Of course, algorithmic feeds are a thing, so your experience might be different.
Reptiles are interesting creatures, for sure. (I still have a female chameleon and bearded dragon. My male cham died of old age last year.) I never got into breeding feeders, but that is a really good route to avoid the excessive supplementation that is super common.
I was a mod of /r/chameleons and still technically the owner of /r/chameleonholdingstuff, but I haven’t been on Reddit for a long while. Strangely enough, some mods tried to pull me back into the fold a few months ago, but Reddit is just a shit show and I don’t want any part of it. The /r/reptiles mods seemed a little more on the sane side though.
Most of the other Reddit moderators I met or talked with in person were raging alcoholics, myself included. The subs I was involved with had a community that was… eh… very unique. (Reptile owners themselves tend to be a special breed, to say the least.)
Mental health is super important so do what you need to do and find a good headspace. Modding is a good time suck when you aren’t busy but is a serious drain when other things are more important in life.
When I was in a much darker place, I would tend to make decisions that would isolate me from everyone and everything else. In some ways that is nice, but in other ways… not so much. If you believe your decision has a specific purpose, awesome. If this decision is more of a trend across your life, try to only make positive decisions. (This is just something I share when people express decisions in the context that you used.)
Wishing you the best! Be happy and stay healthy, friend.
FYI, you can download your photos in bulk with Google Takeout, but you need to have enough space in Google Drive to do it. (Takeout zips up all your photos and will drop 10GB chunks in Drive.)
I was doing something similar to you recently. I downloaded all my photos and de-duped them by generating MD5 hashes for every picture (roughly like the sketch below). (I was moving all of my photos to a local NAS, so it wasn’t quite what you are doing.)
If your dupes are byte-identical (and therefore hash the same), that might work for you, but it’s hard to say.
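For reference, the dedup pass was roughly along these lines; the folder name is a placeholder, and it only catches byte-identical copies (if Takeout re-encodes or touches metadata, the hashes won’t match):

```python
# Rough sketch of a hash-based dedup pass over a downloaded photo dump.
import hashlib
from collections import defaultdict
from pathlib import Path

def md5_of(path: Path) -> str:
    """Hash the file in chunks so large photos don't blow up memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

by_hash: dict[str, list[Path]] = defaultdict(list)
for photo in Path("takeout_photos").rglob("*"):
    if photo.is_file():
        by_hash[md5_of(photo)].append(photo)

for digest, paths in by_hash.items():
    if len(paths) > 1:
        print(f"{digest}: {len(paths)} copies -> keeping {paths[0]}")
```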
I am going to need your 50 point summary of those obvious points in the longest form possible by this afternoon so I can be completely convinced that I have already made up my mind in the correct way. Thanks.
I would look into something like Doppler instead of Vault. (I don’t trust any company acquired by IBM. They have been acquiring and enshittifying companies since before there was even a name for it.)
Look into how the different solutions need their keys presented. Dumping the creds in ENV is generally fine, since the keys will need to be stored and used somehow. You might want a dedicated user account that manages the keys in its home folder.
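As a tiny sketch of the creds-in-ENV pattern (the variable name is made up):

```python
# Minimal sketch: read a secret from the environment instead of hardcoding it.
import os
import sys

# Fail loudly if the secret isn't present instead of limping along without it.
api_key = os.environ.get("MYAPP_API_KEY")
if not api_key:
    sys.exit("MYAPP_API_KEY is not set; refusing to start.")

# Use the key from here on; never hardcode it or write it to logs.
print("API key loaded.")
```

Whatever vault or Doppler agent you end up with would be the thing injecting that variable into the environment of the dedicated user.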
This is actually a host security problem, not a key storage problem per se. Regardless of how you have your vault set up, my approach here is to create a single host that acts as a gateway for the rest of the credentials. (This applies whether keys are stored in “the cloud” or in a local database somewhere.)
Since you are going to use a Pi, you should focus on making it a restricted host: only run your chosen vault solution on it. Period. Secure and patch it to the best of your ability and use very specific host firewall rules for minimum connectivity. I.e., have one user for SSH access and limit another user account to managing the vault, preferably without needing any kind of elevated access. This is actually a perfect use case for SELinux, since you can put in some decent restrictions on the host for a single app (and its supporting apps…)
If you are paranoid enough to run a HIDS, you can turn on alerts for any type of root account action. In theory, once the host is configured, you shouldn’t need root again until you start applying patches.