Sounds like it would be nice if Savannah offered Forgejo hosting.
Oh I see. Yeah DVD drives generally use the same SATA interface as hard drives.
If you mean a 2.5" drive (laptop-sized), then yes, you can generally do that. 3.5" drives are usually 1" thick and won’t fit in a slim DVD drive slot.
Newegg doesn’t seem to sell the Crucial MX500 any more*, only the BX500. But if the 870 EVO is comparable, I might get that, since I have a couple of MX500s now and am happy with them. I hadn’t realized that Team Group was legit at all! I’ll keep that in mind. Thanks!
*Note: The MX500 appears on Newegg’s web site, but the actual sellers are “Newegg Marketplace” randos rather than Newegg itself, and I prefer to buy directly from Newegg when possible.
I don’t think I can use NVMe in my old laptop but yes, otherwise I’d do so. ;)
Thanks, I think you have it right and that it’s not worth messing with adapters. The adapter was never about performance from my perspective though. It was about being able to keep using the drive if I eventually moved to a laptop with an M.2 slot.
QVO drives use QLC flash, which has worse durability. I’m trying to stay away from it, though maybe it works better now than it originally did. Hmm, I had thought that the drive I looked at a while back had HMB but was not NVMe. Maybe you are right and I didn’t look closely enough. I believe those SATA shells don’t work with NVMe drives.
The purpose of the cache is to improve latency and save SSD wear. It doesn’t help much with throughput as far as I know. Although if it’s on the host side, maybe it does.
HMB is Host Memory Buffer or something like that. It means that instead of having a RAM buffer in the drive, the drive borrows some of the host computer’s memory for its bookkeeping (the OS sets that up through the NVMe driver). That makes the drive cheaper, but I haven’t heard claims of it being any faster. Consumer drives seem to all use it now, and Linux supports it, but maybe not when you wrap up an HMB drive in a SATA shell.
I guess $90 for 1TB is pretty good. I have been suspicious of the EVO drives but at least they aren’t QVO.
Thanks!
Thanks, I wasn’t really thinking about transfer speeds; it’s just that PCIe drives are cheaper (depending) and more re-usable if I get a newer laptop later. I think you are right, though, that it’s not worth messing with adapters.
I dunno if there’s such a thing as a reliable brand. The brands have reliable and unreliable models. Particularly I have the idea that I should be avoiding QLC drives, but that TLC these days is ok.
Java isn’t exactly hard, and it’s not particularly fundamental. It’s just bureaucratic, and Python will be both more enjoyable and more useful. Java was trendy in the 1990s and lingers on because so much Java code is still around. If your goal is to use a serious type system (Lisp and Python don’t have that), Haskell will be far more enlightening than Java. If you want to use the JVM for some reason, Clojure (a Lisp dialect that runs on it) might interest you.
For low-level fundamentals, you want assembly language! That gives you almost no assistance and you have to do EVERYTHING yourself, organizing the program in your own head. For old-fashioned imperative programming with lots of organizational assistance, try Ada.
You will probably have to learn C at some point, but save it for later when it will be easier for you to spot the weaknesses.
I don’t remember being that impressed with HTDP but it’s been a while and I didn’t look much. I’d say read SICP first in either case.
The Java thing sounds totally uninteresting, and if your next language after Lisp isn’t a mainstream one, I’d say try Haskell.
Regarding math: it can help but it’s not that important for pure programming. If you’re good at languages and writing, that’s helpful in the same way. If you’re good at music, that is at least a helpful mindset.
You can turn off Borg encryption but maybe what you really want is an object store (S3 style). Those exist too.
I’m using Borg and it’s fine at that scale. I don’t know if it would still be viable with 100TB or whatever. The initial backup will be kind of slow but it encrypts everything, and deduplicates it too if I’m not mistaken. In any case, it deduplicates the common situation where you back up another snapshot later. Only the differences get written in the second backup. So you can save new snapshots fairly quickly and without much additional space.
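To illustrate why the second snapshot is so cheap, here’s a toy Python sketch of hash-based chunk deduplication. This is not Borg’s actual format (Borg uses content-defined chunking and encrypts the chunks); it just shows the idea that a chunk already in the repo never gets written twice:

```python
import hashlib

store = {}  # chunk hash -> chunk bytes (stands in for the backup repo)

def backup(data: bytes, chunk_size: int = 4096) -> list[str]:
    """Split data into chunks and store only chunks we haven't seen."""
    manifest = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:   # new data: actually write it
            store[key] = chunk
        manifest.append(key)   # a snapshot is just a list of chunk hashes
    return manifest

snap1 = backup(b"A" * 8192 + b"B" * 4096)
snap2 = backup(b"A" * 8192 + b"C" * 4096)  # second snapshot, mostly unchanged
print(len(store))  # 3 unique chunks stored, not 6
```

The second snapshot only costs one new chunk, which is why saving repeated snapshots is fast and doesn’t take much additional space.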
Start a blog instead. I’d rather read it than listen to someone babbling.
Wow cool, I don’t have a project of my own to submit, but can maybe help with someone else’s.
What? Problems like this usually come down to some missing indexes. Can you view the query plan for your slow queries? See how long they are taking? IDK about SQL Server, but usually there is a command called something like EXPLAIN or ANALYZE that breaks a query down into the different parts of its execution plan, executes it, and measures how long each part takes. If you see something like “FULL TABLE SCAN” taking a long time, that can usually be fixed with an index.
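Here’s roughly what that looks like in SQLite (table and column names are made up for illustration); I believe SQL Server’s equivalent is showing the “actual execution plan” in its tooling:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Before: no index on customer_id, so SQLite has to scan every row.
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
# -> detail column says something like 'SCAN orders'

con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After: the same query now does an index lookup instead of a scan.
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
# -> 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```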
If this doesn’t make any sense to you, ask if there are any database gurus at your company, or book a few hours with a consultant. If you go the paid consultant route, say you want someone good at SQL Server query optimization.
By the way, I think some people in this thread are overestimating the complexity of this type of problem, or are maybe unintentionally spreading FUD. I’m not a DB guru, but I would say that by now I’m somewhat clueful, and I got that way mostly by reading the SQLite docs, including the implementation manuals, over a few evenings. That’s probably a few hundred pages, but not 2000 or anything like that.
First question: how many separate tables does your DB have? If less than say 20, you are probably in simple territory.
Also, look at your slowest queries. They likely say SELECT something FROM this JOIN that JOIN otherthing bla bla bla. How many different JOINs are in that query? If just one, you probably need an index; if two or three, it might take a bit of head scratching; and if four or more, something is possibly wrong with your schema or how the queries are written and you have to straighten that out.
Basically from having seen this type of thing many times before, there is about a 50% chance that it can be solved with very little effort, by adding indexes based on studying the slow query executions.
I just download the mp3 and play it with mplayer. Don’t need no apps.
50GB of flac = maybe 20GB of Vorbis, amirite? Is that 450GB of flac in your screenshot? At that ratio it’s around 180GB of Vorbis, which would fit on a 256GB phone even without an SD card. A 512GB card is quite affordable these days. Just make sure to buy a phone with a slot, and think of it as next-level degoogling ;).
Yeah I know there’s lots of music in the world but who wants to listen to all of it on a moment’s notice anyway?
You really have to see what the db is doing to understand where the bottlenecks are, i.e. find the query plans. It’s ok if it’s just single selects. Look for stuff like table scans that shouldn’t happen. How many queries per second are there? Remember that SSDs have only been a common thing for maybe 10 years. Before that it was HDDs everywhere, and people still ran systems with very high throughput. They had much less RAM then than now, too.
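If you want a rough number, time a loop of your hottest query. A self-contained toy sketch (hypothetical table; point it at a copy of your real DB instead of this in-memory one):

```python
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.0) for i in range(10_000)])

query = "SELECT total FROM orders WHERE customer_id = ?"

n = 1000
start = time.perf_counter()
for i in range(n):
    con.execute(query, (i % 100,)).fetchall()
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1000:.3f} ms/query, ~{n / elapsed:.0f} queries/sec")
```

Compare that per-query latency against your actual query rate and you’ll know whether the DB is really the bottleneck.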