

CPU and Memory use clock speed regulated by voltage to pass data back and forth with no gates between
could you please explain what you mean by no gates?
yes people come from reddit because it was ruined by corporate bs and immediately start simping hard for corporate propaganda, bogging down quality discussion
and it’s becoming the majority now, so just like reddit, you have to scroll so much further down a thread before the good info is found.
in other words: welcome redditors, pls leave the reddit bs at the door on your way in.
Reading up on RDP
Microsoft requires RDP implementers to obtain a patent license
there it is. good info to dig up jrgd, well done! shame we had to scroll so far in the thread to find these actual proper, highly relevant details.
well, everyone has to pick their battles, and perhaps RHEL just couldn’t fight this one out.
but imo i’d much rather see VNC get some upgrades under RHEL than continue the ever increasing microsoft-ization of linux
love this concept. ever found anything interesting?
so many classics listed already, some others…
the one where a civilisation downloads their lives into picard. whole ep you’re wondering wtf is with picard, this is so slow, as he lives some dude’s life - then the reveal is amazing, it’s so full of positivity and warmth. appropriately titled “the inner light”. hats off for capturing positivity so well onscreen (for some reason an elusive skill)
when picard defends data’s rights as a sentient being
“Meet me in the middle, says the unjust man.
You take a step towards him, he takes a step back.
Meet me in the middle, says the unjust man.”
Thanks for the reference, from there I found the very impressive original Nature paper “A RISC-V 32-bit microprocessor based on two-dimensional semiconductors” fantastic stuff!!
From the paper, that’s almost a 40x improvement on comparable logic integration!
Some notes from the paper:
Typically this is where people like to shit on the design (“cos muh GHz” etc), but tbf, not only will people doubtless work on improving the clock speeds, there’s also plenty of applications where computation time or complexity isn’t so demanding - so i’m just excited by any breakthrough in these areas.
if this is a full RISC-V implementation in 2D materials, this is a genuinely impressive breakthrough!!
just want to add, it’s not the zoomers’ fault. they were intentionally raised in ignorance because it’s apparently profitable
fuck the corporations who’ve deliberately turned our living computers into soulless commercial brainwashing surveillance machines
Or they’re just adding improvements to the software they heavily rely on.
which they can do in private any time they wish, without any of the fanfare.
if they actually believe in opensource, let them opensource windows 7 [1], or idk the quarter-century-old windows 2k
instead we get the fanfare as they pat themselves on the back for opensourcing MS-DOS 4.0 early last year (not even 8.0, which is 24 years old btw - 4.0, which came out in 1986).
38 years ago…
MS-fucking-DOS, from 38 years ago, THAT’S how much they give a shit about opensource mate.
all we get is a poor pantomime which actually only illustrates just how stupid they truly think we are to believe the charade.
does any of that mean they must be actively shipping “bad code” in this project? not by any means. does it mean microsoft will never make a useful contribution to linux? not by any means. what it does mean is they’re increasing their sphere of influence over the project. and they have absolutely no incentive to help anyone but themselves - in fact, quite the opposite.
as everyone knows (it’s not some deep secret the tech heads on lemmy somehow didn’t hear about), microsoft is highly dependent on linux for major revenue streams. anything a monolith depends on which they don’t control represents a risk, and they’d be negligent if they didn’t try to exert control over it - that goes for any organisation in their position. then factor in their widespread, outspoken agenda against opensource (“embrace, extend, extinguish”) and the vastly lacking long-term evidence to match their claims of <3 opensource.
they’re welcome to prove us all wrong, but that isn’t even on the horizon currently.
[1] yes yes, they claim they can’t because “licensing”, which is mostly but not entirely fucking flimsy. but ok, devil’s advocate: release the rest then. but nah.
yes they lost the battle, now they’re most likely aiming to win the war.
remember back when we didn’t listen to famous people’s opinions outside their field of expertise?
mmWave is FR2
5G FR1 is sub-X-band microwave (i.e. below ~7 GHz) afaik
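to put rough numbers on it, here’s a sketch using the 3GPP TS 38.104 (Rel-15) band edges - FR1 topping out at 7.125 GHz, FR2 starting at 24.25 GHz (later releases extend FR2 upward). the helper name is mine:

```python
def nr_frequency_range(freq_ghz: float) -> str:
    """Classify a 5G NR carrier frequency into FR1 or FR2.

    Ranges per 3GPP TS 38.104 (Rel-15):
      FR1: 0.410 - 7.125 GHz  (the "sub-6" / sub-X-band microwave region)
      FR2: 24.25 - 52.6 GHz   (mmWave)
    """
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1"
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2"
    return "outside NR FR1/FR2"

print(nr_frequency_range(3.5))   # typical mid-band 5G carrier -> FR1
print(nr_frequency_range(28.0))  # mmWave carrier -> FR2
```

note the gap between 7.125 and 24.25 GHz - NR simply doesn’t define carriers there (that’s where X-band radar etc live).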
For science, medical and engineering degrees, online tuition is just going to produce people vastly underprepared for work in anything that requires the skills & knowledge the degree is meant to provide
and they’re using our retirement money to do it
what a fucked timeline
browsers turning off specific extensions which protect us.
they shouldn’t even have a horse in this race. i mean we know why they do, but damn is it completely insane.
what’s also fucked is how normalised this is becoming.
all of that said, edge who?
our sensory capabilities are probably better than you think
however good our current capabilities are, it’s not exactly reasonable to think we’re at the apex. we don’t know everything - perhaps we never will, but even if we do it’ll surely be in 100, 1,000 or 10,000 years, rather than 10 years.
i’m not aware of any sound argument that the final paradigm in sensing capability has already happened.
there is really no scenario where this logic works
assuming you mean there’s no known scenario where this logic works? then yes, that’s the point - we currently don’t know.
this is asklemmy not a scientific journal. there can be value or fun in throwing ideas around about the limits of what we do know, or helping op improve their discussion, rather than shit on it. afaict they’ve made clear elsewhere in this thread they’re just throwing ideas around & not married to any of it.
(ok i see, you’re using the term CPU colloquially to refer to the processor. i know you obviously know the difference & that’s what you meant - i just mention the distinction for others who may not be aware.)
ultimately op may not require exact monitoring, since they compared it to standard system monitors etc, which are ofc approximate as well. so the tools as listed by Eager Eagle in this comment may be sufficient for the general use described by op?
e.g. these - the screenshots look pretty close to what i imagined op meant
now onto your very cool idea of substantially improving the temporal resolution of measuring memory bandwidth… you’ve got me very interested :)
my initial sense is counting completed L3/L4 cache misses sourced from DRAM, and similar events, might be a lot easier - though as you point out, that will inevitably accumulate event counts within a given time interval rather than capture individual events.
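fwiw, here’s roughly how i’d turn those interval event counts into an approximate bandwidth figure. a sketch only - it assumes 64-byte cache lines and counts read misses alone, with the counts coming from something like `perf stat -e LLC-load-misses -I 100`; the helper name is mine:

```python
CACHE_LINE_BYTES = 64  # typical x86 line size; adjust for your CPU

def approx_bandwidth_mib_s(miss_count: int, interval_ms: float) -> float:
    """Rough DRAM read bandwidth implied by LLC misses in one interval.

    Each LLC miss serviced from DRAM moves ~one cache line, so
    bytes ~= misses * 64. Hardware prefetches and write-back traffic
    are NOT counted here, so treat this as a lower bound at best.
    """
    bytes_moved = miss_count * CACHE_LINE_BYTES
    return bytes_moved / (interval_ms / 1000.0) / (1024 ** 2)

# e.g. 5,000,000 misses observed in a 100 ms sampling interval:
print(f"{approx_bandwidth_mib_s(5_000_000, 100.0):.1f} MiB/s")
```

shrinking the `-I` interval improves temporal resolution but the counter read itself starts to perturb the measurement, which i suspect is exactly the wall you’re trying to get around with the ECC idea.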
i understand the role of parity bits in ECC memory, but i didn’t quite understand how & which ECC fields you would access, and how/where you would store those results with improved temporal resolution compared to event counts?
would love to hear what your setup would look like :) which ECC-specific masks would you monitor? where/how would you store and process such high-resolution results without impacting the measurement itself?