• 3 Posts
  • 182 Comments
Joined 2 years ago
Cake day: June 22nd, 2023


  • A lot of shaky stuff in here that has a long way to go before it makes it out of the lab.

    3.5 cubic meters of material ought to be enough to make quite a comfy house

    OP, a cube 3.5 m on a side is not 3.5 cubic meters; it’s 3.5³ ≈ 42.9 m³. That’s the size of a decently large shed… Of solid concrete.

    would have enough capacity to store about 10 kilowatt-hours of energy, which is considered the average daily electricity usage for a household

    No mention in the article about round trip efficiency, self-discharge rates / storage duration, etc.

    Storing 10 kWh doesn’t mean much if it loses much of that to internal losses, leakage into the environment, etc., before you can use it.

    Capacitors generally tend to be designed to store very little energy but can charge/discharge repeatedly at a high rate. Is this designed to discharge quickly? If so, what happens if someone touches the giant Borg cube in your yard?

    Concrete is also prone to cracking, which, last I checked, is not good for electronics.

    That said, this is an interesting concept. If it can perform at a useful level / scale, I could see industrial uses for large systems with high peak loads / energy recovery / regenerative braking, or as a cost-effective way to smooth grid loads, but I probably wouldn’t expect to see it in use at people’s homes for a loooong time.

    Less “you can make a supercapacitor at home”, more “innovative material uses may one day make supercapacitors more cost-effective for certain applications, if the tech can be scaled out of the lab”
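    To put rough numbers on the two points above, a quick back-of-the-envelope check (the concrete density, round-trip efficiency, and self-discharge rate below are all assumed for illustration; none are from the article):

```python
# Two sanity checks on the article's numbers. Concrete density,
# round-trip efficiency, and self-discharge rate are assumed values.

# 1) A cube 3.5 m on a side is far more than 3.5 cubic meters.
side_m = 3.5
cube_volume_m3 = side_m ** 3                    # 3.5^3 = 42.875 m^3
cube_mass_t = cube_volume_m3 * 2400 / 1000      # at ~2400 kg/m^3 (assumed)

# 2) "Stores 10 kWh" says nothing about what you can actually get back out.
stored_kwh = 10.0
round_trip_eff = 0.80            # assumed; the article gives no figure
self_discharge_per_day = 0.05    # assumed; the article gives no figure
days_idle = 3
usable_kwh = stored_kwh * (1 - self_discharge_per_day) ** days_idle * round_trip_eff

print(f"Cube: {cube_volume_m3:.1f} m^3, ~{cube_mass_t:.0f} t of concrete")
print(f"Usable after {days_idle} idle days: {usable_kwh:.1f} of 10 kWh stored")
```

    Even with generous assumptions, a ~12x volume discrepancy and a few idle days eat a big chunk of that headline 10 kWh.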


  • I think SD card failure rates are way overblown if you’re buying from reputable manufacturers (SanDisk, Samsung). I’m sure they do occasionally fail, but I’ve never had one fail on me.

    You’re right, for really intensive tasks the costs can climb, but I see people asking for ideas for what to do with a junk laptop and the top suggestion is always something like pi-hole or a bookmark manager that could run on a potato.

    Like with most things in life, it depends.



  • Not everyone wants to play a game that relies on responding to cues.

    Overuse of one mechanic can make it unappealing.

    I feel the same about games that rely on reactions during cutscenes or climbing. On the one hand, having to be on edge all the time is annoying; on the other, the absence of interaction can hamper suspense.

    For example, I’ve been playing Horizon Forbidden West lately - there’s a lot of climbing, and the devs love to throw in a mid-climb “post you’re hanging on starts to fall” gag, but with no reaction mechanic it’s pretty much always harmless and kinda feels like “why bother”



  • Just built a new PC literally this weekend. WiFi, mouse, and Bluetooth drivers did not work out of the box. I had to spend hours searching through what little info exists out there tangentially related to my problem to find:

    WiFi drivers were fixed in kernel 6.10, and fortunately Mint lets you upgrade to 6.11 at this time with relative ease.

    Bluetooth drivers do not appear to have been fixed, but I might have a shot if I switch over to a rolling release distro and relearn everything I’m used to from using Debian-based distros for years. A dongle is on order, but I don’t love having to have 2 Bluetooth devices.

    It’s unclear if mouse drivers have been fixed in the kernel, but I was able to find a nice set of drivers / a controller on GitHub that fixed some mouse problems, though only on their experimental branch, and it did not work with my wireless adapter. Very fortunately, I had an old wireless adapter from a mouse from the same brand that was able to close the loop, but that was just dumb luck.

    By EOD today I should have everything I want working, but it wasn’t “30s” of searching - to your point, 60-70% of problems may be solvable that way, but having 1/3 of your problems require technical expertise is not going to bring Linux out of the hobbyist domain.

    Note: this is not a complaint against Linux, just a statement of fact. These things have gotten a lot better over the years, and things get easier to find as the community grows and these struggles get discussed more openly, but there are still lots of challenges out there that take more than a 30s search.



  • As part of a website’s DNS records, the owner has to provide a TTL (time to live). This value can be just about anything but is often in the 30 s to 5 min range, and serves as an instruction for how long a client should cache the IP address locally before checking for updates.

    This is because IP addresses can change, and you don’t want to experience hours of downtime for all clients every time your IP changes.

    Every time your client queries your tracker for server updates (every few minutes, give or take, based on tracker preferences) it should follow your system DNS settings, which should involve checking your local cache, then going to the upstream server indicated in your system DNS settings.

    If your system is set to a DNS server outside of your local network (e.g., 8.8.8.8), that request should go through your VPN.

    If your system is set to use a local DNS server (e.g., 192.168.X.X…), typically either through something like a Pi-hole or a router that sets itself as the DNS server and forwards all requests, this MIGHT create a DNS leak around your VPN.

    A good VPN like Mullvad should have an option to force their own DNS settings when enabled to prevent this leak.
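    For the curious, the TTL-respecting client cache described above can be sketched roughly like this (`ASSUMED_TTL_S` and the `lookup` hook are illustrative; Python’s stdlib resolver doesn’t expose the record’s actual TTL, so real resolvers read it from the DNS response instead):

```python
import socket
import time

# Rough sketch of client-side DNS caching that honors a TTL.
ASSUMED_TTL_S = 300  # 5 minutes, a common value (assumed, not from any spec)

_cache = {}  # hostname -> (ip, expiry timestamp)

def resolve(hostname, lookup=socket.gethostbyname):
    """Return a cached IP if its TTL hasn't expired, else do a fresh lookup."""
    now = time.monotonic()
    hit = _cache.get(hostname)
    if hit and hit[1] > now:      # cache entry still within its TTL
        return hit[0]
    ip = lookup(hostname)         # falls through to the system resolver
    _cache[hostname] = (ip, now + ASSUMED_TTL_S)
    return ip
```

    The `lookup` parameter is just there so the cache logic can be exercised without hitting the network; by default it goes through the system resolver, which is exactly the path a torrent client’s tracker query takes.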