• 3 Posts
  • 67 Comments
Joined 3 years ago
Cake day: March 15th, 2023


  • By IO-heavy I meant DB operations or other external requests. When the request handler starts, it waits for the IO to complete. While it waits, the server can accept other requests, and so on — so in my case the bottleneck is the IO, not the request parsing.

    I imagine it like this (made-up numbers):

    • DB operation: 20ms
    • Express request handler: 1ms
    • Brhama request handler: 0.5ms

    In which case, it wouldn’t matter which HTTP framework you use. However, there are probably other use cases.
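    The scenario can be sketched like this (the `fakeDb` and its timings are invented to match the numbers above; a real handler would call an actual DB driver):

    ```javascript
    const fakeDb = {
      // Simulated 20 ms DB round-trip
      query: () => new Promise((resolve) => setTimeout(() => resolve({ rows: [] }), 20)),
    };

    async function handler() {
      // ~1 ms of framework work (parsing, routing) happens here...
      const result = await fakeDb.query(); // ...dwarfed by the 20 ms IO wait
      return result;
    }

    async function main() {
      const start = Date.now();
      // While one handler awaits the DB, the event loop serves the others,
      // so 10 concurrent requests take roughly one DB round-trip,
      // not 10 × 20 ms — the framework's per-request overhead barely matters.
      await Promise.all(Array.from({ length: 10 }, () => handler()));
      console.log(`10 requests in ~${Date.now() - start} ms`);
    }

    main();
    ```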




  • What I meant is that you cannot turn any existing webpage into a basic page with simple tricks like disabling JS. That would be a never-ending fight.

    You are the one adding extra complexity

    I’m not the one defining the business requirements. I could build a site with true progressive enhancement; it’s just extra work, because the requirement is a modern page with actions, modals, notifications, etc.

    There are two ways I can fulfill this: SSR with scripts that feel like hacks, or CSR. I choose CSR, but then progressive enhancement becomes extra work.








  • Who said making unbloated pages is impossible? Your comment would be taken more seriously without the emotion.

    Source code is the code that gets transformed into some target code. Obfuscated code is not source code.

    A reminder: in the past, large pages downloaded everything at once. In contrast, with dynamic imports the first load is much faster, and that matters most. Any change in dynamic content only requires the dynamic data to be downloaded. My phone lasts at least 2 days on one charge (average usage), but I charge it every night anyway, so that’s not an issue.
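    A minimal, runnable illustration of the dynamic-import pattern (the inlined `data:` module stands in for a real code-split chunk such as a hypothetical `./heavy-chart.js`):

    ```javascript
    // The heavy code is NOT part of the initial bundle; it is fetched
    // and evaluated only when import() is actually called.
    const heavyModule =
      'data:text/javascript,export const renderChart = () => "chart rendered";';

    async function openChart() {
      // Runs on first use (e.g. when the user opens a modal),
      // keeping the first page load small.
      const { renderChart } = await import(heavyModule);
      return renderChart();
    }

    openChart().then(console.log); // logs once the lazy chunk has loaded
    ```

    Bundlers like webpack or Vite turn each `import('./heavy-chart.js')` into a separate chunk automatically, which is what makes the first load fast.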



  • As a web developer, I see JS as a quality improvement: no page reloads, a nice smooth UI. Luckily, the PHP times have ended, but even in the PHP era disabling jQuery could cause problems.

    We could generate static HTML pages; it just adds complexity.

    Personally I use only client-side rendering, and I think that’s the best from a dev perspective: easy setup, no magic, nice UI. And that results in a blank page when you disable JS.

    If your motivation is to stop tracking:

    • replace all foreign-domain sources with file URIs, e.g. load Google Fonts from a local cache.
    • disable all foreign script files unless they’re legitimate, like JS packages from public CDNs, in which case load them from a local cache.

    If your motivation is to see old-style HTML pages with minimal styling, well, it’s impossible to do that reliably. If you are worried about closed-source JS, you shouldn’t be: it runs in an isolated environment. If something is possible for JS and you want to limit its capability, contribute to browsers. That’s the clear path.

    I can be convinced. What’s your motivation?



  • I’ve been running Arch for a very long time. I agree this is not a distro for a general audience; I disagree, however, that it is not stable. When I’m doing work I don’t update my system. I enjoy my stable configuration, and when I have time, I update and curiously watch which amazing FOSS software has been updated. And I try them: I check the new Firefox, I check GIMP’s new features, etc. Or if something breaks, I fix it easily, in no time, because I know my OS. Then I enjoy my stable system again.

    Do you want to know what’s unstable? When I had a new AMD GPU that I built my own kernel for, because the driver wasn’t in mainline, and it randomly crashed the system. That’s unstable.

    Or when I installed my 3rd DE on Ubuntu and apt couldn’t deal with it: it somehow removed X.org, and I couldn’t fix it. That’s also something I don’t want. Arch updates are much better than that.