• 3 Posts
  • 58 Comments
Joined 1 year ago
Cake day: September 7th, 2023

  • The difference is how you interact with the browser engine. Blink is very easy to embed into a new browser project. I’ve seen it done - if you’re familiar with the tools, you can build a whole new browser around the Blink engine in a few hours. You can write pretty much whatever you want around it and it doesn’t really change how you interact with the engine, which also makes updates very simple.

    With Firefox, it’s practically impossible to build a new browser around Gecko. The “forks” that you see are mostly just reskins that change a few settings here and there. They still follow upstream Firefox very closely and cannot diverge too much from it because it would be a huge maintenance burden.

    Pale Moon and Waterfox are closer to true forks of Firefox than Librewolf, for example, but they’ve had to maintain the engine themselves and keep up with web standards, and from what I’ve read, they’re struggling pretty hard to do so. That’s not a problem Blink-based browsers have to deal with, because it’s pretty easy and straightforward to update and embed the engine without having to rewrite your whole browser.

    Unfortunately, since Google controls the engine, this means that they can control the extensions that are allowed to plug into it. If you don’t have the hooks to properly support an extension (e.g. uBlock Origin), then you can’t really implement it… unless you want to take on the burden of maintaining a forked engine again.

    That said, WebKit is still open source and actively developed (to the best of my knowledge - I could be completely wrong here). Why don’t forks build around WebKit instead of Blink? Not really sure, to be honest.


  • I chuckled a bit while reading this, because what you wrote is exactly where Blink came from. It was a fork of WebKit, which in turn was derived from KHTML. Then again, the fact that KHTML was discontinued does support your point to an extent too, I guess.

    But the point is, Chrome is doing exactly this - providing the engine free as in beer and letting people embed it however they like. And yet, what you’re predicting, i.e. people abandoning the original and just using forks instead, doesn’t seem to be happening with Chrome - it still enjoys a massive share of the market. There’s no reason to believe that this couldn’t happen at Mozilla as well. People usually want the original product, and it’s only a small fraction of people who are really interested in using the derivatives.


  • Ironically, the anti monopoly lawsuit against Google will end this.

    People are quick to assume this, and there’s a very good chance that they’re right, but I don’t think we should take it as a given. It’s always possible that there could be some sort of court decision that allows Google to keep funding Mozilla after the “breakup” is complete.

    In any case, we don’t yet know what the outcome of the antitrust case will be, so I think it might be best to avoid making statements of certainty like this until we see how things really shake out.

    We should definitely take the possibility of this happening very seriously though.


  • You’re right that building an engine is hard, but Socratically speaking, why are there so many Blink-based browsers and so few Gecko-based ones? The answer is that Blink is easy to embed in a new project and Gecko isn’t.

    If Mozilla really wants to take back the web (and I honestly don’t think they actually do), then what they should really be doing is making Gecko as easy to embed in a new browser as Blink is. They don’t do this, and I suspect they have ulterior motives for not doing so, but if they did, I think we would be much closer to breaking Chrome’s grasp on the web.

    Because let’s face it: Mozilla makes a pretty damn good browser engine. But they don’t really make a compelling browser based on it. Ever noticed how Mozilla has been declining ever since they deprecated XPCOM extensions? It’s because when they provided XPCOM, it enabled users to actually build cool and interesting new features. And now that they’ve taken it away, innovation in browser development has stagnated (save for the madlads making Vivaldi).

    They need to empower others to build the browser that they can’t. That’s what would really resurrect the glory days of Firefox in my opinion.


  • This has always been the whole point behind the Trojan Horse that is systemd. Now that Poettering/Red Hat control the entire userspace across virtually all distros, he/they can use it as a vehicle to force all of them to adopt whatever bullshit he thinks of next.

    This is what the Linux ecosystem gave away when it tossed its simple init systems to adopt the admittedly convenient solution that is systemd. In reality, the better move would have been to replace the old init with an alternative that was itself still simple to swap out if the need should arise. But now that everyone is stuck on systemd, they’re all at the mercy of Poettering’s Next Stupid Idea.

    Convenience comes at a price. systemd is the Google Chrome of Linux userspace. Get out while you can.




  • I haven’t done too much work with Wasm myself, but when I did, the only languages I saw recommended were Rust, C++, or TinyGo. From what I’ve heard, Rust and C++ are smoother than TinyGo. Garbage-collected languages usually aren’t great choices for compiling to Wasm because Wasm doesn’t have any native garbage collection support. That narrows your options down a lot.

    But another option you may want to consider is Nim. As I understand it, it compiles to C, so any C->Wasm toolchain should theoretically work for you as well. I did a quick search and wasn’t able to find any great resources on how to do this, but you might have better luck. Good luck!


  • You’re probably right. I think COBOL development is one of the cases where the crazier stories are the ones that bubble to the top. The regular scene is probably more mundane.

    I do think there are a few advantages to learning COBOL over C++. COBOL seems to be much stickier - companies that use it seem much more hesitant to replace it than a lot of the companies that use C++, and as a result, they will probably get more desperate. And while there’s definitely a lot more C++ out there than COBOL, I have to imagine that the number of people under 50 who use COBOL is tiny, while C++ still has a very large userbase. On the other hand, consulting depends a lot on your portfolio, references, and past accomplishments, and nobody’s going to pay 1k EUR/USD/etc. per hour (exaggerating, obviously) if you don’t have any credentials. It takes time to build that up.

    Ultimately, I do think you’re pretty spot on, but we’ll have to see. This is more just a fantasy I tell myself to make it seem like retirement is closer than it probably is…



  • It was always obvious to me that as long as I was using closed source software, any day could come when the vendor would screw me over. In fact, it could have been bundled with loads of spyware already and I would have had no way of knowing it. So I pledged to start using open source software only, to make sure that wouldn’t happen. First, I migrated all my desktop applications to open source alternatives. Then I finally made the switch.



  • This is very interesting! Things like this make me wish programmers would give functional^W declarative programming more of a chance. I’ve long fantasized about being able to write programs as declarative code that the computer can optimize automatically without human intervention. When you implement your program in more restrictive (e.g. stateless) paradigms, you can more easily reason about the code, and thereby make it easier to optimize or run in different environments.
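
    To give a toy sketch of the kind of thing I mean (this one in OCaml, with made-up names, typed from memory - illustrative, not definitive):

        (* Declarative: the sum of the squares of the even numbers. We only say
           *what* we want, so in principle a sufficiently smart compiler or
           runtime would be free to fuse these passes, reorder them, or
           parallelise them. *)
        let declarative xs =
          xs
          |> List.filter (fun x -> x mod 2 = 0)
          |> List.map (fun x -> x * x)
          |> List.fold_left ( + ) 0

        (* Imperative: same result, but now the mutation and the loop order are
           part of the program's meaning, so there's far less room to optimize. *)
        let imperative xs =
          let total = ref 0 in
          List.iter (fun x -> if x mod 2 = 0 then total := !total + x * x) xs;
          !total

        let () =
          let xs = [ 1; 2; 3; 4; 5; 6 ] in
          Printf.printf "%d %d\n" (declarative xs) (imperative xs)

    OCaml itself won’t actually fuse those list passes for you, but systems built entirely around this idea (like SQL below) can.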

    SQL is a great example of this - many of the optimizations that servers like PostgreSQL can do under the hood are only possible because the language inherently limits what you can express, so the system actually executing your query is free to choose how to do it for better performance and reliability. Things like this are what make query optimizers possible, and it’s really fascinating to read carefully what query planners report (beyond just checking whether your indices are being used or not).

    Beautiful chart. Thanks for sharing!



  • What exactly is it that people obsess over? The desktop environment and terminal customisation? Setting up NetworkManager with nmcli? Using Vim to edit a .conf file?

    Welcome to the crowd! Eventually, you realize that an operating system is just an operating system: something you use to get work done, and the less you notice it, the better it’s doing its job. The pride of setting it all up fades shortly after you’re done. At that point, you realize that pretty much all distros are the same, give or take.

    That said, there are always moments that make you realize that your OS is amazing. When you’re faced with a new and difficult task that you don’t know how to achieve, you look at your distro’s documentation and solve it in a few elegant steps. I’m not an Arch user, but that’s when the Arch wiki will really be your friend, as well as all the other resources that Arch has for its users. I can’t think of specific examples because these moments are so rare, but they feel great and really make you appreciate your OS.



  • Interesting! Sorry, I don’t know why I thought you were using swipe keyboards, it must have been stuck in my memory from reading other comments. I definitely agree that pressing the buttons was a little annoying, but manufacturers could probably make softer buttons if they were willing to put the money into developing them.

    Anyway, I really miss the phone I had from about 2008-2010. It had two sliders that moved in orthogonal directions. One of the slide directions revealed a standard 12-button phone pad, while the other had a 4-row keyboard. And yet, I’m pretty sure it was under 1.5cm, so not too large. It was definitely easier to keep in my pocket than current phones!

    If it weren’t for reading Lemmy/RSS feeds and a camera, I’d probably be going back to dumb phones for my next one…


  • But what’s the error rate? I could type at 200 words per minute (even on a phone!!) if I didn’t care about how many typos I was making. And swiping keyboards get confused incredibly easily. The error rates are especially bad when you’re writing words that only use a single row of keys - on QWERTY keyboards for example, try writing something like “type”, and you could get that, or you might get something else, like wipe/write/ripe. Other groups could include things like tip/top, pit/pot, wit/wire and the selected word will be wrong almost as frequently as it’s right. And autocorrect systems can’t really correct for things like when you mean to press enter and hit the backspace key instead. Plus, their suggestions are generally just very stupid. So while buttons take longer to press on physical keyboards, the reduced error rate makes typing speed about the same in my experience.

    Plus, with physical buttons, you get tactile feedback, so you can tell when your fingers are slightly off and adjust them, whereas on a flat surface, you have no idea whether you pressed the correct button or not. You have to stare straight at the screen to make sure every press is correct, which is exhausting and bad for your eyesight. I feel a lot more eyestrain from simply typing on phones, whereas with physical buttons, I didn’t even have to look at the screen, and I could look at something else around me while typing. And don’t get me started on how many calls I’ve missed because I accidentally hit the hang-up button, or couldn’t find the accept call button - not a problem when you have physical buttons!

    Regarding screen real estate, all you need is a slide-out keyboard. They work great!

    There are a few downsides to physical keyboards, but in my experience, they’re far superior to non-keyboard devices. But what can you do - in the 21st century, practicality never matters, it’s just all about aesthetics and nothing else…



  • This is quite cool. I always find it interesting to see how optimization algorithms play games and to see how their habits can change how we would approach the game.

    I notice that the AI does some unnatural moves. Humans would usually try to find the safest area on the screen and leave generous amounts of space in their dodges, whereas the AI here seems happy to make minimal motions and cut dodges as closely as possible.

    I also wonder if the AI has any concept of time or ability to predict the future. If not, I imagine it could get cornered easily if it dodges into an area where all of its escape routes are about to get closed off.


  • Agreed on all points. I think some of the issues that you’re facing are things that would be resolved if OCaml were more popular. But some others would be harder to fix without making breaking changes to the language, as I mentioned earlier. If I had to put it as succinctly as possible, I’d say that the language just needs a lot more polish, which would probably happen if it were more mainstream. But not all languages have to be mainstream, and maybe OCaml’s purpose in the world is, as you put it, to inspire other languages. It is definitely extremely good at that!



  • No one has said OCaml yet, so I will. It’s not a perfect language, but it has a lot of cool ideas and concepts. It’s a functional language, but it allows you to write imperative code when you want to. Algebraic data types and pattern matching are built natively into the language and work very nicely. Its type inference capabilities are very powerful (though that can backfire at times), and the |> operator is really, really fun to use. It also has very powerful module/functor capabilities, though they go a bit over my head since I haven’t had a chance to play with them. Also, opam is a very powerful package manager and it’s pretty easy to wrap/bind external libraries with it.
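
    To give a tiny, made-up taste of how those features feel together (typed from memory, so it may not compile exactly as written):

        (* An algebraic data type, pattern matching over it, and the |> operator. *)
        type shape =
          | Circle of float             (* radius *)
          | Rect of float * float       (* width, height *)

        let area = function
          | Circle r -> Float.pi *. r *. r
          | Rect (w, h) -> w *. h

        (* Pipe a list of shapes through map/fold to get the total area. *)
        let total_area shapes =
          shapes
          |> List.map area
          |> List.fold_left ( +. ) 0.

        let () =
          [ Circle 1.0; Rect (2.0, 3.0) ]
          |> total_area
          |> Printf.printf "total area: %f\n"

    The compiler infers all the types here, and if you add a new constructor to shape and forget to handle it in area, it warns you - that combination is a big part of the appeal for me.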

    I’d love to see some improvements to the language - the syntax is a bit confusing and ugly at times (though that unfortunately can’t be fixed without breaking the language) - but overall I think I’d have a lot more fun programming in OCaml than I do in my day job.