• 11 Posts
  • 339 Comments
Joined 2 years ago
Cake day: October 20th, 2023



  • In the sense that we have dongles/docks, sure. In the sense of monitors with native USB-C input? Those are still fairly rare, as the accepted pattern is that your dock has an HDMI/DP port and you connect via that (which is actually a very good pattern for laptops).

    As for TVs? I am not seeing ANYTHING with USB-C in for display. In large part because the vast majority of devices are going to rely on HDMI, as I said above.


    I’ll also add that many (most?) of those docks don’t solve this problem. The good ones are configured such that they can pass the handshake information through. I… genuinely don’t know if you can do HDCP over USB-C->HDMI as I have never had reason to test it. Regardless, it would require both devices at the end of that chain to be able to resolve the handshakes to enable the right HDMI protocol, which gets us back to the exact same problem we started with.

    And the less good docks can’t even pass those along. Hence why there is a semi-ongoing search for a good Switch dock among users and so forth.


  • Not that easy.

    Getting HDMI 2.1 support on the Gabe Cube itself essentially requires kernel-level patches. That is possible (but ill-advised) on a “normal” Linux device, but on these atomic distros, where even something like Syncthing involves shenanigans to keep active week to week? Ain’t happening. Because HDMI is not just mapping data to pins and using the right codecs. There are a LOT of handshakes involved along the way (which is also the basis for HDCP, which essentially all commercial streaming services utilize to some degree).

    There ARE methods (that I have personally used) to take a DP->HDMI dongle and flash a super sketchy Chinese (the best source for sketchy tech) firmware to effectively cheat the handshakes. It isn’t true HDMI 2.1 but it provides VRR and “good enough for 2025” HDR at 4k/120Hz. But… I would wager money that is violating at least one law or another.

    So expect a lot of those “This .ini change fixes all of Windows 11. Just give money to my Patreon for it” level fixes. And… idiots will believe it, since you can already use a dongle to get like HDMI 2.05 or whatever with no extra effort. And there will likely be a LOT of super sketchy dongles on AliExpress that come pre-flashed and get people up to 2.09 (which is genuinely good enough for most people). But it is gonna be a cluster.

    And that is why all of us with AMD NUCs already knew what a clusterfuck this was going to be.


    There are also ways to fake the handshake in software. I personally did not try that, but from what I have seen on message boards? It is VERY temporary (potentially having to redo it every single time you change inputs on your TV/receiver), and it is unclear if the folks who think it works actually tested anything or just said “My script printed out ‘Handshake Successful’, it works with this game that doesn’t even output HDR!”


  • Ballparking but it will likely take closer to a decade than not for that to actually happen… and I am still not optimistic. And there are actually plenty of reasons to NOT want any kind of bi-directional data transfer between your device and the TV that gets updated to push more and more ads to you every single week.

    The reason HDMI is so successful is that the plug itself has not (meaningfully?) changed in closer to 20 years than not. You want to dig out that PS3 and play some Armored Core 4 on the brand new 8K TV you just bought? You can. With no need for extra converters (and that TV will gladly upscale and motion-smooth everything…).

    Which has added benefits because “enthusiasts” tend to have an AV receiver in between.

    The only way USB-C becomes the primary for televisions (since DisplayPort and USB-C are arguably already the joint primary for computer monitors) is if EVERY other device migrates. Otherwise? Your new TV doesn’t work with the PS5 that Jimmy is still using to watch NFL every week.



  • There are layers to this.

    Yes, there is zero chance any of these investments are going to turn a profit.

    But research and technology are fundamentally built around developing capabilities for long-term power (soft or hard). That is WHY governments invest so much into university groups and research divisions at companies to develop features of interest. You are never going to make back the money that funded hundreds of PhD students to develop a slightly more durable polymer compound. But you will benefit because you now have hundreds of new graduates aligned with fields of interest AND a slightly better grip on your military-grade sybian.

    And, regardless of what people want to believe, AI genuinely does have some great uses (primarily pattern matching and human interfaces). And… those have very big implications both in terms of military capability and soft power where the entire world is dependent on one nation’s companies for basic functionality.

    Of course, the problem is that the “AI craze” isn’t really being driven by state governments at this point. It is being driven by the tech companies themselves and the politicians who profit off of them. Hence why we are so focused on insanely expensive search engine replacements and “AI powered toaster ovens” rather than developing the core technologies and capabilities that will make this more power efficient and more feasible for edge computing.

    And… when one of (if not THE) superpowers is actively divesting itself of all soft power at an alarming rate… yeah.


  • That is a distinction without a difference. It doesn’t matter what mechanism is used to collect those metrics. The fact is they are there.

    And, at a glance: Forgejo/Codeberg definitely has stars, watches, and fork tracking as well.

    Which is all fundamentally the supply-and-demand aspect of consumerism. It is the idea that people can identify what there is high demand for and work to provide a supply. Which is not at all a bad thing and extends far beyond capitalism.

    But it goes back to the previous poster’s comments about how they don’t like that Netflix analyzes everything they do and greenlights projects based on that. That extends FAR beyond Netflix and well into even open source projects.



  • Open source/selfhost projects 100% keep track of how many people star a repo, what MRs are submitted, and even usage/install data. And many of them are specifically designed to fulfill a role that industry-standard tools don’t (or are too expensive for) and… guess where the data on that comes from?

    The reality is that you cannot escape consumerism in the modern world. You can pretend you are but… you aren’t. What you CAN do is focus on supporting tools and media that you want/approve of and making your own life better as a result.

    And a big chunk of that involves actually thinking through consequences.


  • I mean… depending on how new an item is and what “tier” the restaurant is? They are 100% watching for stuff like that and probably making a note that you got up after eating only a quarter of your burger. Because if the burger were good, you would want to finish it. Is it too sloppy? Did you feel the need to wash your hands mid-bite? Did it make you nauseous?

    Same with taking out your phone. Does it look like you are telling a friend what a great burger you had? Or are you feeling bloated and trying to digest a bit before you eat more?

    This level of market analysis is not at all new. Streaming services just have a much easier time automating it but… give it time until startups are selling cameras to monitor the dining area and automate analytics based on who ordered what and did what.


  • I mean… that IS how restaurants work. If people don’t order the fish of the day then they buy fewer and fewer fish until it is no longer a thing. Even the speed people eat DOES matter, since restaurants tend to be designed around each customer spending a certain amount of time dining. Too short and they will never order a dessert. Too long and they are costing you money while they nurse that coffee.

    And the same happens with even buying Blu-rays. If nobody bought Master and Commander in 4K then you can be sure that experiment would be over. Instead? That thing sold like toilet paper during COVID and we’ll likely see more “prestige” releases with a huge dose of FOMO.

    As for up fronts versus long tails? Guess what is motivating all those revivals “nobody asked for”?

    Don’t get me wrong. I vastly prefer to rip Blu-rays to my NAS and watch via Plex. But the idea that you are somehow no longer part of the marketing cycle is just… wrong.


  • Which is why people who actually look at trends tend to compare it more to the Dot-com bubble.

    The short version? A few early internet adopters (like Amazon…) set up online retail presences. People were ecstatic because you could now do most of the monthly shopping online and even re-buy pants that you know will fit and so forth.

    Seeing money, EVERYBODY made an online retailer or service website and EVERYONE wanted to invest in that.

    Then the market was oversaturated and companies with no right to exist went bankrupt and it was a bloodbath.

    Except… not really. Because while the massively overinflated stock market did indeed “downturn” and a LOT of those scam companies went away, the actual fundamental premise of online first companies was a very sound one. I mean… just look at “Cyber Monday” and so forth.

    And “AI” will almost definitely go the same route. Because, yeah, LLMs are HORRIBLE for accounting and finance. But they are actually really good for replacing the early-career folks who translate earnings into reports. And ML in general is excellent at detecting patterns, which can mean potentially billions of dollars in investing. But, like all things, it is about verification and caution. You actually need a human to read that earnings report before you send it to the investors. And you only give your “AI” a small portion of your portfolio. Same as with any team.


  • What you are describing is something different… that is “close enough” to Moore’s Law for all but the most pedantic.

    The (I forget the proper economics term, so) base price of RAM/storage does indeed go down as new processes and economies of scale are developed. But the cost of a “laptop hard drive” remains pretty steady, in the sense that a couple hundred MB was enough back in the day but you REALLY want at least 500 gigs now. The price per byte does indeed drop rapidly, but the price per “drive” is far more stable (not fully stable due to inflation and how many people are buying them, but within spitting distance).

    It’s why a good rule of thumb was to always just spend roughly the same on storage during an upgrade, and that would result in faster technologies and larger-capacity drives and so forth.
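    To make that concrete, here is a quick back-of-the-envelope sketch. The capacities and prices below are made-up illustrative numbers (not real market data), just to show how price per GB can crater while the price of “the drive you actually buy” barely moves:

    ```python
    # Illustrative only: a hypothetical "typical laptop drive" at three points in time.
    # Capacities and prices are invented to show the shape of the trend, not real data.
    drives = [
        (2005, 80, 100.0),    # (year, capacity in GB, price in USD)
        (2015, 500, 90.0),
        (2025, 2000, 110.0),
    ]

    for year, capacity_gb, price_usd in drives:
        per_gb = price_usd / capacity_gb
        print(f"{year}: ${price_usd:.0f} per drive, ${per_gb:.3f} per GB")

    # Price per GB drops by orders of magnitude; price per drive stays
    # "within spitting distance" of the same number.
    ```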

    That isn’t what is happening with RAM in 2025. A much better comparison is GPUs because… it is the same problem. It is ridiculously high demand from businesses (often startups pouring dump trucks of VC money into their only hope… well, VC money or drug money in the case of miners but they matter a lot less these days) driving this. A quick search didn’t yield an easy graph and I can’t be bothered to go dig through Gamers Nexus’s twelve videos on it, but the price of an “entry level” GPU has drastically changed in the past decade.

    But just for two-ish data points?

    • The GTX 980 and 970 had an MSRP (probably) of 550 and 330 USD, respectively, back in 2014
    • While there is some other bullshit involved, the RTX 5080 and 5070 have MSRPs of 1000 USD and 550 USD in 2025
    • Adjusting for inflation, the 980 and 970 would still only be about 753 and 451 USD in 2025 dollars
    • And let’s not forget that basically no cards were sold at MSRP back in early 2025…

    That last point is, by all accounts, going to be the new normal. Barring outside impacts like… RAM going through the roof, vendors will sell the cards for the ACTUAL MSRP rather than the inflated demand prices. And they will still be considerably more expensive as a result.
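    And if you want to sanity-check the inflation adjustment in that list, the math is just MSRP times a cumulative CPI factor (roughly 1.37 for 2014 to 2025, which is the factor the 753/451 figures above imply; swap in exact CPI data if you want precision):

    ```python
    # Sanity check on the inflation-adjusted MSRPs quoted above.
    # 1.37 is the approximate cumulative 2014 -> 2025 CPI factor implied by
    # the 550 -> ~753 USD figure; it is an assumption, not official CPI data.
    CPI_FACTOR_2014_TO_2025 = 1.37

    cards_2014 = {"GTX 980": 550, "GTX 970": 330}

    for name, msrp_2014 in cards_2014.items():
        adjusted = msrp_2014 * CPI_FACTOR_2014_TO_2025
        print(f"{name}: ${msrp_2014} in 2014 ≈ ${adjusted:.0f} in 2025 dollars")

    # Compare against the RTX 5080/5070 at 1000/550 USD: even after inflation,
    # the x80-tier card costs noticeably more than it did a decade ago.
    ```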

    All of which is to say… my current card is definitely good enough, but I am having a hard time deciding if I do one “final” upgrade for the decade. But I am an AMD boi so those are at least “reasonable” in terms of price per performance.


    1. Prices rarely, if ever, go down to a meaningful degree. Stuff like this is partially necessity and partially a REALLY good excuse to see what the price ceiling actually is… and then turn that into the floor moving forward. Just look at gas prices.
    2. The “AI Bubble” is likely to be on the same level as the Dot-com Bubble and the like. It is going to be brutal and a LOT of people are going to lose their jobs… and then much of the same tech will still dominate, just with more realistic expectations. And that will still need large amounts of memory.
    3. If the “AI Bubble” really is as bad as people seem to want it to be: A LOT of the vendors who make the parts you are buying RAM to use are going to be gutted. And then RAM production will drop drastically. Which will decrease supply and…

  • “If you process digital photos, the edits get saved, so you can change them. It’s like a digital darkroom. I’d lose all that and be left with the unprocessed photos.”

    And you export them when you are finished? So you have the unprocessed photos AND the finalized processed ones? And you just have the ones that were in flight that are… in flight.

    But it sounds like you think this is worth using. If you think you understand the crack and understand the risks then go for it? Just understand that piracy of media and games is very different from piracy of productivity software, and the risks and liability go up drastically with the latter.


  • I guess I am confused about what is actually going on.

    All your photos should be stored locally first. It looks like Capture One provides some form of cloud-based access if you want to edit with a mobile device, but you should still have the raw (possibly RAW format) files yourself?

    So the only thing you are “risking” is your most recent batch of edits that you haven’t exported/finalized yet. So if you are actually a hobbyist… who cares?

    As for whether you should crack it:

    1. If you are relying on cloud storage, this is a deeply stupid idea. They will have logs of your data store being accessed and will have logs of you not having a valid license. A paralegal can pound out the lawsuit over lunch.
    2. If you are less a “hobbyist” and more running a business (which is what it sounds like): You are inherently playing with fire if you use pirated software for profit. Up to you, but my general rule of thumb is that if you are making enough to do something professionally, then you are making enough to buy a seat for the industry-standard software (or to take the time to learn something jank but cheap).