Yeah quicksync won’t help you there.
I thought nVidia’s limit was enforced by their drivers, but that may have changed; it’s been a while since I looked at nvenc as a solution (quicksync, then an Arc card over here).
If you have an Intel CPU with quicksync, it will likely perform better than the 1060 in terms of visual quality, provided it’s Coffee Lake or newer (8th gen).
If not, well, it’ll be fine up to whatever the stream limit is (4?).
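If you want to sanity-check what quicksync looks like on a given chip before committing, something along these lines works as a test, assuming an ffmpeg build with QSV support and access to /dev/dri (the filenames and quality value are just placeholders):

```
# decode and re-encode entirely on the iGPU via quicksync
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v hevc_qsv -global_quality 24 output.mkv
```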
Wow, a commercial open source product that COULD have pulled a rugpull, looked for all the world like they were planning a rugpull, just uh, did the right thing?
Good job, Bitwarden.
Fair, but he said he wants to move from Windows to Linux, so I just assumed there wasn’t going to be any of those since, well, they’re not going to run in Linux anyways.
Not in a way you’re probably going to like.
You could set up a bare-metal hypervisor on the system, create a VM each for your NAS, Windows, and Linux, and swap between them as needed, but uh, that’s not really an exceedingly pleasant desktop use case, for a number of reasons, one of which is that you really won’t have the normal ‘sit down and use the computer’ desktop experience.
Alternate option: keep the machine booted into, say, the desktop Linux environment, and run the NAS and the Windows install in VMs, so everything other than the desktop is a virtualized setup.
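As a rough sketch of that second option, assuming a libvirt/KVM host with virt-install available and a spare disk to hand to the NAS guest (all the names and paths here are made up):

```
# carve out the NAS guest and pass it a whole data disk;
# a Windows guest gets created the same way with its own ISO
virt-install --name nas --memory 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/nas-os.qcow2,size=32 \
  --disk path=/dev/sdb \
  --cdrom /srv/isos/nas-installer.iso --os-variant generic
```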
Since android apps are required, I’d maybe go about this another way: find the app you like the most, then stand up whatever backend it uses for sync.
I was already in the FreshRSS ecosystem, and man, I don’t really like any of the android apps on offer, but swapping at this point would be annoying (bookmarks, saved stories, etc.).
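For what it’s worth, standing the backend itself up is the easy part; this is roughly what FreshRSS looks like as a container (the port and volume paths are from memory, so check them against the image docs):

```
docker run -d --name freshrss \
  -p 8080:80 \
  -v freshrss_data:/var/www/FreshRSS/data \
  -v freshrss_extensions:/var/www/FreshRSS/extensions \
  -e TZ=America/New_York \
  freshrss/freshrss
```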
good ideia to run restic as root
As a general rule, run absolutely nothing as root unless there’s absolutely no other way to do what you’re trying to do. And, frankly, there’s maybe a dozen things that must be root, at most.
One of the biggest hardening things you can do for yourself is to always, always run everything as the lowest privilege level you can to accomplish what you need.
If all your data is owned by a user, run the backup tool as that user.
If it’s owned by several non-privileged users, then you want to make sure that the group permissions let you access it.
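To make that concrete with restic (the repo path and password file location are just examples), the whole thing runs fine from the owning user’s own environment:

```
# run as the user that owns the data, not root
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"
restic init                      # once, to create the repo
restic backup "$HOME/documents"  # then on a schedule from that user's crontab
```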
As a related note, this also applies to containers and software you’re running: you shouldn’t run docker containers as root unless they specifically MUST have a permission that only root has. I also don’t run internet-facing ones as the same user as all the others: if something gets popped, the attacker not only doesn’t have root permissions, they’re also siloed into their own data in the event of a container escape.
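A minimal sketch of what that looks like, assuming an image that doesn’t actually need root (the user and image names are made up, and some images need their volume permissions adjusted once you drop root):

```
# dedicated unprivileged user just for this one internet-facing container
sudo useradd --system --no-create-home svc-webapp
docker run -d --name webapp \
  --user "$(id -u svc-webapp):$(id -g svc-webapp)" \
  -v /srv/webapp-data:/data \
  some-webapp-image
```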
My expectation is that, at some point, I’ll miss a CVE and get pwnt, so the goal is to reduce how much damage someone can do when that happens rather than assume I’m going to be able to keep it from happening at all. Everything is focused on ‘once this is compromised, how can I make the compromise useless to the attacker?’
Unifi Gateway Ultra
How have you liked the gateway? Any stupid decisions that have annoyed you?
My USG has decided that, after a decade, it’s going to be flaky and crash if it wants to (even after replacing its 4th dead PSU and 2nd USB stick), and I’m thinking it’s probably time to upgrade.
I’ll admit to both liking the Unifi ecosystem and firmly not trusting the Unifi ecosystem one damn bit, which is a bit of a weird situation where I’ve been really, really unwilling to upgrade anything because that hasn’t always gone, uh, smoothly.
Granite Rapids is probably going to win some of that back: a lot of the largest purchasers of x86 chips in the datacenter were buying Epycs because you could stuff more cores into a given amount of rack space than you could with Intel, but the Granite Rapids stuff has flipped that back the other way.
I’m sure AMD will respond with EVEN MORE CORES, and we’ll just flop around with however many cores you can stuff into $15,000 CPUs and thus who is outselling whom.
100%.
I see a CLA or a goofy “source-available” license, I just assume it’s going to be a rugpull and that I should move on. I very much do not give anyone the benefit of the doubt anymore.
Also if you’ve never seen it, lazydocker might be something up your alley.
It’s a TUI, but it provides easy access to docker containers, logs, updating/restarting/stopping/etc them and so on.
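If you’ve got a Go toolchain lying around, that’s the quickest way I know of to grab it, though there are prebuilt binaries too:

```
go install github.com/jesseduffield/lazydocker@latest
lazydocker   # uses the local docker socket; browse containers and logs from there
```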
And it doesn’t mean they can take away anything.
Not if they’re able to monetize your small bugfix
The problem is they can, and that’s not the point - I don’t care if you make money with something I spent my time on willingly, I care that you’re forcing me to say you’re the full and sole owner of my contributions and can do whatever you want at any point in the future with them.
Signing a CLA puts full ownership of the code in the hands of whomever you’ve signed the CLA with, which means they have the full ability and legal right to do any damn thing they want. That often includes telling you to fuck yourself, changing the license, and running off to make a commercial product while both killing the AGPLed version and fucking everyone who spent any time on it.
If you have a CLA, I don’t care if your project gives out free handjobs: I don’t want it anywhere near anything I’m going to either be using or have to maintain.
And sure you can fork from before the license change, but I’m unwilling to put a major piece of software into my workflows and hope that, if something happens, someone will come along and continue working on it.
Frankly, I’m of the opinion that if you’re setting up a project and make the very, very involved decision to go with a CLA and spend the time implementing one, you’re spending that time because you’ve already determined it’s probably in your interests later to do a rugpull. If you’re not going to screw everyone, you don’t go to the store and buy a gallon of baby oil.
I’ve turned into the person who doesn’t really care about new shit until it’s been around a decade, has no CLAs, is under a standard GPL/AGPL license (none of this source-available business license nonsense), and has a proven track record of the developers not being shitheads.
Quickest peak and then utter vanishing of any interest in a project I’ve had in a while.
Wouldn’t mind something a little more open than SearXNG in that it owns its own database, but requiring that they be the sole owner of anything anyone contributes AND having the ability to yank the rug at any time they feel like it pretty much puts it in the meh-who-cares category.
Had enough stupid shit yanked over the past few years that I really just don’t care about or have time to deal with anything that’s already prepping for its eventual enshittification.
You could also use nginx if you wanted; it’ll do arbitrary tcp data with the stream plugin.
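Rough shape of the config, assuming an nginx build that includes the stream module (the port and backend address are placeholders); the stream block sits at the same level as the usual http block:

```
stream {
    server {
        listen 2222;              # port nginx accepts raw TCP on
        proxy_pass 10.0.0.5:22;   # backend the connection is forwarded to
    }
}
```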
contrast to their desktop offerings
That’s because server offerings are real money, which is why Intel isn’t fucking those up.
AMD is in the same boat: they make pennies on client and gaming (including gpu), but dumptrucks of cash from selling Epycs.
IMO, the Zen 5(%) and Arrow Lake bad-for-gaming results are because uarch development at both Intel and AMD is entirely focused on the customers that pay them: datacenter and enterprise.
Both of those CPU families clearly show that efficiency and a focus on extremely threaded workloads were the priorities, and what do you know, that’s enterprise workloads!
end of the x86 era
I think it’s less that the era of x86 has ended and more that the era of the x86 duopoly putting consumer/gaming workloads first has ended, because, well, there’s just no money there relative to other things they could invest their time and design resources in.
I also expect this to happen with GPUs: AMD has already given up, and Intel is absolutely going to do that as soon as they possibly can without it being a catastrophic self-inflicted wound (since they want an iGPU to use). nVidia has also clearly stopped giving a shit about gaming - gamers get a GPU a year or two after enterprise has cards based on the same chip, and now they charge $2000* for them - and they’re often crippled in firmware/software so that they won’t compete with the enterprise cards as well as legally not being allowed to use the drivers in a situation like that.
ARM is probably the consumer future, but we’ll see who and with what: I desperately hope that nVidia and MediaTek end up competitive so we don’t end up in a Qualcomm oops-your-cpu-is-two-years-old-no-more-support-for-you hellscape, but well, nVidia has made ARM SOCs for like, decades, and at no point would I call any of the ones they’ve ever shipped high performance desktop replacements.
I mean, recovery from parity data is how all of this works; this just doesn’t require you to have a controller, use a specific filesystem, have matching-sized drives, or anything else. Recovery is mostly like any other raid option I’ve ever used.
The only drawback is that the parity data is roughly equivalent in size to the actual data you’re making parity data of, and you need to keep a couple of copies of the index, since if you lose the index or the parity data, no recovery for you.
In my case, I didn’t care: I’m using the oldest drives I’ve got as the parity drives, and the newer, larger drives for the data.
If I were doing the build now and not 5 years ago, I might pick a different solution, but there’s something to be said for an option that’s dead simple (looking at you, zfs) and likely to be reliable because it’s not doing anything fancy (looking at you, btrfs).
From a usage (not technical) standpoint, the most equivalent commercial/prefabbed solution would probably be something like unraid.
A tool I’ve actually found way more useful than actual raid is snapraid.
It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit being a big deal, because I can scrub all the data in the array and it’ll happily tell me if something funky has happened.
It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.
There’s a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and linux iso collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it’s a fairly viable option.
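For anyone curious what the setup actually amounts to, the config is about this simple (the paths here are placeholders), and sync/scrub/fix are the only commands you really touch:

```
# /etc/snapraid.conf
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content   # keep the index...
content /mnt/disk1/snapraid.content      # ...in more than one place
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# then:  snapraid sync    (rebuild parity after data changes)
#        snapraid scrub   (verify the data against parity)
#        snapraid fix     (restore from parity after a failure)
```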
Hell, maybe not since 1997!
Office 2000 was peak office: it had the definitive version of Clippit, and every actually useful feature you’ll probably ever need to type and edit any sort of document.
…I will say, though, that Excel has improved for the weirdos that want 100,000 row spreadsheets since then, but I mean, that’s a small group of people who need serious help.
This has nothing to do with anything, but whatever.
Hell, I almost got snagged by one recently, and a goodly portion of my last job was dealing with phishing sites all day.
They’ve gotten good at making things look like a proper email from a business that would be sending that kind of email, and if you’re distracted and expecting something, you can have at least a moment of ‘oh, this is probably legitimate’.
The giveaway was, hilariously, the use of ‘please kindly’ and ‘needful’, which, uh, aren’t phrases this particular company would have actually used in an email, so I was saved by scammers not realizing that Americans, at least, don’t actually use those two phrases in conversation.
I didn’t think the consumer-level chip immolation carried over to their xeons?
If it did, holy crap, they’re mega-ultra-turbo-plaid levels of screwed.