Last time they’ll ever do that! Pass the buck of hosting web-facing Plex servers onto somebody else.
Adding to this: doesn’t CAD usually want 3D acceleration? I would definitely try running the CAD software with the same VM configuration you plan to use on your Proxmox VPS first, before progressing, to make sure it (a) works at all and (b) is responsive enough. You could even try nesting Proxmox in Proxmox to emulate the kind of performance you’d get on a VPS.
SnipeIT just cares about serial numbers, models and manufacturers for assets (you can just use the serial number as the asset tag), and I think consumables drop a bunch of those requirements. You might be able to put groceries under consumables? I’m less familiar with consumables in SnipeIT, to be honest.
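If you end up scripting the asset entry, the REST API is pretty simple. Rough sketch only, going from memory on the field names; the URL, token and IDs are placeholders for your own instance:

```python
import requests

# Placeholders: point these at your own Snipe-IT instance and API token.
BASE_URL = "https://snipeit.example.com/api/v1"
API_TOKEN = "YOUR_API_TOKEN"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Create an asset, reusing the serial number as the asset tag.
payload = {
    "asset_tag": "SN-1234567890",  # just the serial again
    "serial": "SN-1234567890",
    "model_id": 1,    # the model (and its manufacturer) set up in the UI beforehand
    "status_id": 2,   # e.g. "Ready to Deploy"
}

resp = requests.post(f"{BASE_URL}/hardware", headers=headers, json=payload)
resp.raise_for_status()
print(resp.json())
```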
SnipeIT is really good and supports SSO including via LDAP.
They don’t need to be interested though. You could conceivably dump all the passwords you collect in an attack and just start trying them automatically like you would any other breach. Find a bunch of bank accounts and your chances of getting away with millions are high. Not to mention: a breach like this means changing all your saved passwords to re-secure them, which is a multi-day affair.
Self-hosting removes the risk of somebody compromising Bitwarden’s servers and adding malicious javascript to send off your master password to a bad actor instead of just processing it locally like it’s designed to.
I don’t think ZFS can do anything for you if you have bad memory other than help in diagnosing it. I’ve had two machines running ZFS where memory went bad, and every disk in the pool showed data corruption errors for that write, so the data was unrecoverable. Memory was later confirmed to be the problem with a Memtest run.
What distro and version of that distro are you using? Did you install gpg from the repository or elsewhere? What version of gpg are you running?
The OOM killer is particularly bad with ZFS since the kernel doesn’t by default (at least on Ubuntu 22.04 and Debian 12 where I use it) treat the ZFS ARC as reclaimable cache, so it thinks it’s out of memory when really ZFS just needs to free up some of its cache, which only happens after the OOM killer has already killed my most important VM. So I’m left running swap to avoid the OOM killer going around causing chaos.
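The other workaround I know of is just capping the ARC so there’s always headroom for the VMs. A rough sketch, assuming the usual OpenZFS paths on Linux (run as root; the 4 GiB cap is an arbitrary example):

```python
# Cap the ZFS ARC at runtime so the OOM killer has headroom.
# Paths assume OpenZFS on Linux (e.g. Ubuntu 22.04 / Debian 12).
# For a persistent cap you'd put "options zfs zfs_arc_max=<bytes>"
# in /etc/modprobe.d/zfs.conf instead.

ARC_MAX_BYTES = 4 * 1024**3  # example: 4 GiB; pick a size that leaves room for your VMs

def current_arc_size() -> int:
    """Read the current ARC size in bytes from the kstat file."""
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "size":
                return int(parts[-1])
    raise RuntimeError("couldn't find ARC size in arcstats")

print(f"ARC currently using {current_arc_size() / 1024**3:.1f} GiB")

# Apply the cap; ZFS shrinks the ARC down to it over time, not instantly.
with open("/sys/module/zfs/parameters/zfs_arc_max", "w") as f:
    f.write(str(ARC_MAX_BYTES))
```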
I think that also causes issues for roaming profiles and folder redirection. If roaming is turned on then everything in AppData\Roaming (%AppData%) is synced to a server; AppData\Local (%LocalAppData%) is not. So if your app is using the Roaming folder for temporary data then you are causing a whole bunch of unnecessary IO. Same for using Documents, since that is often synced.
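For anyone writing their own tools, respecting the split is easy. A minimal sketch using the standard Windows environment variables ("MyApp" is just a made-up name here):

```python
import os
from pathlib import Path

# %APPDATA% points at ...\AppData\Roaming — synced with roaming profiles,
# so keep it to small settings that should follow the user around.
settings_dir = Path(os.environ["APPDATA"]) / "MyApp"

# %LOCALAPPDATA% points at ...\AppData\Local — stays on the machine,
# so caches, logs and other temporary data belong here.
cache_dir = Path(os.environ["LOCALAPPDATA"]) / "MyApp" / "cache"

settings_dir.mkdir(parents=True, exist_ok=True)
cache_dir.mkdir(parents=True, exist_ok=True)
```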
Invidious still seems to work for VODs provided the instance doesn’t get restricted. Livestreams have been broken for ages though.
I don’t really see the advantage here besides the orchestration tools, unless the top-secret cloud machines can still share their resources with the public cloud to recoup costs?
So much better than my FunnelWAP. Best it can do is 100 KillerBytes. :(
Could it be a fear of a software patent relating to the design? Back in the day Apple had one for swipe to unlock that prompted Android to use different patterns.
I have really mixed feelings about this. My stance is that you shouldn’t need permission to train on somebody else’s work, since that is far too restrictive on what people can do with the music (or anything else) they paid for. This assumes it was obtained fairly: buying the tracks off iTunes or similar, not torrenting them or dumping the library from a streaming service. Of course, this changes if a song is taken down from stores (you can’t buy it) or the price is so high that a normal person buying a small number of songs could not afford them (say 50 USD a track). Same goes for non-commercial remixing and distribution. This is why I think judging these models and services on output is fairer: as long as you don’t reproduce the work you trained on, that should be fine. This needs some exceptions, though: producing a summary, a parody, or a heavily-changed version/sample (of these, I think the heavily-changed version/sample is the only one not already protected, despite widespread use in music).
So putting this all together: the AIs mentioned seem to have reproduced partial copies of some of their training data, but it took fairly tortured prompts (I think some even provided lyrics in the prompt to get there) since there are protections in place to prevent 1:1 reproductions; in my experience Suno rejects requests that involve artist names, and one of the examples puts spaces between the letters of “Mariah”. But the AIs did do it. I’m not sure what to do with this. There have been lawsuits over samples and melodies, so this is at least even-handed, human vs AI wise. I’ve seen some pretty egregious copies of melodies outside remixes and bootlegs too, so these protections aren’t useless. I don’t know if more work can be done to essentially Content ID the AI output first to try and reduce this in the future? That said, if you wanted to just avoid paying for a song there are much easier ways to do it than getting a commercial AI service to make a poor-quality replica. The lawsuit has some merit in that the AI produced replicas it shouldn’t have, but much of this reeks of the kind of overreach that drives people to torrents in the first place.
The mayor’s office. It’s always in the mayor’s office.
Garry’s Mod…. what a rabbit hole that was…