
  • It’s less of an issue now, but there were stability issues in the early days of DDR5. Memory instability can cause a number of problems, including the PC being unable to boot (failing to POST), sudden crashes during use, applications crashing or behaving strangely, etc. Usually it’s a sign of memory going bad, but since DDR5 is still relatively young, it can also be a sign that the memory is simply running too fast for your platform.

    Always verify that the RAM manufacturer has validated the kit against your CPU.


  • Air cooling is sufficient to cool most consumer processors these days. Make sure to get a good cooler though. I remember Thermalright’s Peerless Assassin being well reviewed, but there may be even better (reasonably priced) options these days.

    If price is no object, Noctua’s air coolers are overkill (and priced accordingly), or an AIO could be an option too.

    AIOs have the benefit of moving heat directly to your fans via fluid instead of heating up the case interior, but that usually doesn’t matter that much, especially outside of intense gaming.


  • Very few things need 64GB memory to compile, but some do. If you think you’ll be compiling web browsers or clang or something, then 64GB would be the right call.

    Also, higher DDR5 speeds can be unstable at higher capacities. If you’re going with 64GB or more of DDR5, I’d stick to speeds around 6000 MT/s or less and not focus too much on overclocking it. If you get a 2x32GB kit (which you should, rather than buying the sticks independently), then you’ll be fine. You’ll benefit more from capacity than from speed anyway.


  • The latest round of “stuff I wasn’t informed would be installed for me” included enough software to switch me to Linux. I’m still dual booting during the transition, but moving fully over when I can.

    I honestly used to love Windows too. Windows 10 was great, and 11 had problems but was still very usable on the happy path and picked up some great improvements over time. These days, it’s just so full of bloatware. I just want my damn computer to be mine, and I’d hope an OS license that retails for $200 would be enough to get them to stop advertising to me and shoving shit down my throat, but I guess not.

    Word and PowerPoint are good too, but there’s some real competition there these days. I haven’t needed them on my personal PC in years though, so that’s never been a problem for me, and it’ll continue to not be a problem as long as that software requires a subscription.






  • Quoting the analysis in the ruling:

    Authors also complain that the print-to-digital format change was itself an infringement not abridged as a fair use (Opp. 15, 25).

    In other words, part of what the court is ruling on is whether digitizing the books was fair use. Reinforcing that:

    Recall that Anthropic purchased millions of print books for its central library… [further down past stuff about pirated copies] Anthropic purchased millions of print copies to “build a research library” (Opp. Exh. 22 at 145, 148). It destroyed each print copy while replacing it with a digital copy for use in its library (not for sharing nor sale outside the company). As to these copies, Authors do not complain that Anthropic failed to pay to acquire a library copy. Authors only complain that Anthropic changed each copy’s format from print to digital (see Opp. 15, 25 & n.15).

    The bracketed asides are mine; the rest is quoted from the ruling.

    Further down:

    Was scanning the print copies to create digital replacements transformative? [skipping each party’s arguments]

    Here, for reasons narrower than Anthropic offers, the mere format change was fair use.

    The judge ruled that the digitization is fair use.

    Notably, the fair use question matters because of how the works are being used: in a commercial setting to make money, not in a private one. And since the works were inputs to the LLM, it ties into the judge’s separate decision on whether training the LLM on them was fair use.

    Naturally the pirated works are another story, but this article is about the destruction of the physical copies, which only happened for works they purchased. Pirating for LLMs is unacceptable, but that isn’t the question here.

    The ruling does go on to indicate that Anthropic might have been able to get away with not destroying the originals, but destroying them made the format change “more clearly transformative”. Questions of fair use largely come down to the judge’s weighing of four factors (purpose of use, nature of the work, amount of the work used, and effect of the use on the market).

    The print original was destroyed. One replaced the other. And, there is no evidence that the new, digital copy was shown, shared, or sold outside the company. [The question about LLM use is earlier in the ruling] This use was even more clearly transformative than those in Texaco, Google, and Sony Betamax (where the number of copies went up by at least one), and, of course, more transformative than those uses rejected in Napster (where the number went up by “millions” of copies shared for free with others).

    … Anthropic already had purchased permanent library copies (print ones). It did not create new copies to share or sell outside.

    TL;DR: Destroying the original had an effect on the judge’s decision and increased the transformativeness of digitizing the books. They might have been fine without doing it, but the judge admitted that it was relevant to the question of fair use.







  • I agree, it’s hard to tell what is needed here. But to add: if you’re looking to leave your country, then finding a community of people who have done that will be your best resource to start with.

    If you’re looking for mental health resources outside of your country, in the US we are paying $150/session out of pocket (for a highly specialized therapist that insurance won’t cover because insurance is a scam), but there are plenty of resources online to get you started and almost certainly a free or low cost program depending on your circumstances.

    If you’re looking for an education or a higher paying job, you may be able to pick up experience and learn on the job depending on the field, and you can also contract on the side for many jobs if you’re able to self-teach from online resources (web dev comes to mind here).

    If you’re looking for some other specialized kind of help, you’d need to be more specific about the problem. It doesn’t have to be as specific as “I live in Pakistan and want to move to Norway, but Pakistan issues visas in an absolutely absurd manner” or whatever, but “moving from one country to another” would probably be enough (for example - I don’t know your specific circumstances).




    Rust does not check array accesses at compile time if it cannot know the index at compile time, for example in this code:

    fn get_item(arr: [i32; 10]) -> i32 {
        let idx: usize = get_from_user(); // index unknown until runtime
        arr[idx] // runtime bounds check; panics if idx >= 10
    }

    fn get_from_user() -> usize {
        std::env::args().count() // stand-in for any value only known at runtime
    }
    

    When the compiler can prove the index is in bounds at compile time, it omits the bounds check, and iterators are an example of that. But Rust cannot always omit a bounds check. Skipping it could allow an out-of-bounds read or write (a buffer overflow/underflow), which violates Rust’s rules for safe code.

    Edit: I should also add that the compiler makes optimizations around slices and vectors at compile time when it statically knows their sizes. Blanket statements here around how it optimizes will almost always be incorrect - it’s smarter than you think, but not as smart as you think at the same time.
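
    For example, a minimal sketch of both cases: the iterator never produces an out-of-range index, and a constant index can be verified at compile time, so neither pays for a runtime check.

    fn sum(arr: [i32; 10]) -> i32 {
        arr.iter().sum() // no indexing, so no bounds checks at all
    }

    fn first(arr: [i32; 10]) -> i32 {
        arr[0] // index provably in bounds; the check is compiled away
    }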



  • Rust’s memory safety guarantees come from its type system and are enforced at compile time, but another language could make the same guarantees at a higher runtime cost. For example, a theoretical Python without a GIL (so 3.13ish) that also treated all mutable non-thread-local values as reentrant locks and required you to lock on them before read or write would be able to make the same kinds of guarantees. Similarly, a Python that disallowed coroutines and threading and only supported multiprocessing could offer similar guarantees. A sketch of the lock-before-access pattern is below.
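
    As a minimal sketch of that lock-before-access pattern, here’s what it looks like when made explicit in Rust with a Mutex (non-reentrant, unlike the reentrant locks described above, but the shape is the same); the hypothetical Python would apply this implicitly to every mutable shared value:

    use std::sync::Mutex;

    fn main() {
        // The value is only reachable through the lock, so every read or
        // write must acquire it first.
        let counter = Mutex::new(0);
        {
            let mut guard = counter.lock().unwrap();
            *guard += 1; // exclusive access while the guard is alive
        } // lock released when the guard goes out of scope
        println!("{}", *counter.lock().unwrap());
    }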