Oh man, I forgot about that! Guess I have to go rewatch it now.
It’s a joke referencing: https://en.wikipedia.org/wiki/PC_LOAD_LETTER
To make sure we’re all on the same page, this proposal involves creating an account with a service provider, then uploading some sort of preexisting, established proof-of-identity (eg passport data page), and then requesting a token against that account. The token is timestamped and non-fungible, so that when the token is presented to an age-restricted website, that website can query the service provider to verify that: 1) the token is still valid, 2) the person associated with the token is at least a certain age.
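If it helps the discussion, here’s a minimal sketch of that flow as I picture it. Everything here is invented for illustration – the names, and the in-memory map standing in for a networked provider – but the point is that the website only ever learns a yes/no answer:

```cpp
#include <cstdint>
#include <ctime>
#include <map>
#include <string>

struct AgeToken {
    std::string  id;          // opaque, non-fungible identifier
    std::int64_t issued_at;   // timestamp; tokens are time-limited
    std::int64_t expires_at;
    int          holder_age;  // known to the provider, never to the website
};

static std::map<std::string, AgeToken> provider_db;  // provider-side state

// Steps 1-2 (account creation, identity upload) happen out of band.
// Step 3: the user requests a token against the verified account.
AgeToken provider_issue_token(int verified_age) {
    std::int64_t now = std::time(nullptr);
    AgeToken t{"tok-" + std::to_string(provider_db.size()),
               now, now + 3600, verified_age};
    provider_db[t.id] = t;
    return t;
}

// Step 4: the age-restricted website presents the token and learns only two
// bits: is the token still valid, and is the holder at least min_age.
bool provider_verify(const std::string& token_id, int min_age) {
    auto it = provider_db.find(token_id);
    if (it == provider_db.end()) return false;                     // unknown
    if (std::time(nullptr) > it->second.expires_at) return false;  // expired
    return it->second.holder_age >= min_age;                       // age only
}

int main() {
    AgeToken t = provider_issue_token(34);     // provider side
    return provider_verify(t.id, 18) ? 0 : 1;  // website side
}
```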
If I understood the proposal correctly, what you’re describing is an account service combined with an identity service, which could achieve the objectives of a proof-of-age service but does not minimize privacy complications. And we already have account services of varying complexity: Google Accounts, OAuth, etc. Basically any service where you log in, since the point of logging in is to associate with an account, although one person can have multiple accounts. Passing around tokens isn’t strictly necessary, since you can just ask the user to prove account ownership by signing into their Google Account, for example. An account service need not verify age, eg signing in to post a comment on a news article.
Compare this with an identity service like ID.me, which provides records on individuals; there cannot be multiple records for the same live person. This type of service is distinct from an account service, though some accounts are necessarily tied to a single identity, such as online banking. But apart from KYC regulations or filing one’s taxes online, an identity service isn’t required for most day-to-day activities, and any additional uses pose identity-theft concerns.
Proof-of-age – as I understand it from the Australian legislation – does not necessarily demand an identity service be used to satisfy the law, but the question in this Lemmy thread is whether that’s a distinction without a difference. We don’t want to be checking identities if we don’t have to, for privacy and identity theft reasons.
In short, can a person be uniquely, anonymously age-verified online? I suspect not. Your proposal might be reasonable for an identity service, but it does not move us further towards a theoretical privacy-centric proof-of-age mechanism. If such a mechanism doesn’t exist, then the Australian legislation would effectively mandate identity checks for the covered websites, and the holders of those identity records would then become targets. This would be bad.
Sadly, this type of scheme suffers from two problems: 1) repudiation, and 2) transferability. An ideal system would be non-repudiable, meaning that when a GUID is used, it is unmistakably an action that could only have been undertaken by the age-verified person. But a GUID cannot guarantee that, since it’s easy enough for an adult to start selling their valid GUIDs online to the highest bidder en masse. And being a simple string, a GUID can easily and covertly be transferred to the buyer, so that no one but those two would know the transaction took place, or which GUID changed hands.
As a general rule, when complex questions arise which might possibly be solved by encryption, it’s fairly safe to assume that expert cryptographers have already looked at the problem and that no easy or obvious solution exists. That’s not to say that cryptographers must never be questioned, but that the field is complicated enough that incomplete answers abound.
IMO, the other comments have it right: there does not exist a general solution to validate age without also compromising anonymity or revealing one’s identity to someone. And that alone is already a privacy compromise.
I’m on mobile so I can’t compile this myself, but can you clarify what you’re observing? Does “nothing” mean no output to stdout or stderr? Or did you get an error message that isn’t dispositive as to what libcurl was doing? Presumably the next step would be to validate that the program is executing at all, either with a debugger or with printf-style debug statements at all junctures.
Please include as much detail as you can, since this is now more akin to a bug report.
EDIT: wait a sec. What exactly is this example code meant to do? The Pastebin API call suggests that this is meant to upload a payload to the web, not pull it down. But CURLOPT_WRITEFUNCTION is for receiving data from a URI. What is your intention with running this example program?
Unless I’m mistaken, that first example as-written will POST to the network resource and then immediately clean up. The fact that CURLOPT_NOPROGRESS is set means that the progress meter curl would normally show in an interactive shell will be suppressed. The comment in the code even says that to make the example do something useful, you’ll have to pass callback pointers, possibly by way of CURLOPT_WRITEFUNCTION or CURLOPT_WRITEDATA.
From the curl_easy_perform() man page:
A network transfer moves data to a peer or from a peer. An application tells libcurl how to receive data by setting the CURLOPT_WRITEFUNCTION and CURLOPT_WRITEDATA options. To tell libcurl what data to send, there are a few more alternatives but two common ones are CURLOPT_READFUNCTION and CURLOPT_POSTFIELDS.
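For contrast, here’s roughly what the receive side looks like once those two options are set. This is my own minimal sketch (placeholder URL, error handling trimmed), not the Pastebin example from the thread:

```cpp
#include <curl/curl.h>
#include <cstdio>
#include <string>

// libcurl hands chunks of the response body to this callback.
static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp) {
    auto* out = static_cast<std::string*>(userp);
    out->append(data, size * nmemb);
    return size * nmemb;  // returning anything less signals an error to libcurl
}

int main() {
    std::string body;
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);  // how to receive
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);         // where it goes
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    if (res != CURLE_OK) {
        std::fprintf(stderr, "curl failed: %s\n", curl_easy_strerror(res));
        return 1;
    }
    std::printf("received %zu bytes\n", body.size());
    return 0;
}
```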
You might also want to ask around at !indiegaming@lemmy.world , since there are gamedevs there who ostensibly have honed their workflows, even if not necessarily FOSS.
I was once working on an embedded system which did not have segmented/paged memory and had to debug an issue where memory corruption preceded an uncommanded reboot. The root cause was a for-loop gone amok, intended to walk a linked list for every member of an array of somewhat-large structs. The terminating condition was faulty, so the loop would write a garbage byte or two every few hundred bytes in memory, run right off the end of the 32-bit address space, and wrap around to the start of memory.
But because the loop only overwrote a few bytes and then skipped over large swaths of memory, it would keep sweeping through the entire address space over and over. And since the struct size wasn’t a power of two, the garbage bytes landed on different addresses each pass, until eventually they wrote over the crucial reset vector, which would finally reboot the system and end the misery.
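A loose reconstruction from memory of the bug’s shape – names and sizes invented, and please don’t actually run this:

```cpp
struct Node { Node* next; };
struct Record {
    Node*         list_head;
    unsigned char payload[500];  // the somewhat-large struct
};

static Record records[64];

void mark_all() {
    // BUG: the terminating condition tests the wrong thing, so i walks far
    // past the 64-entry array and never stops. On a flat 32-bit address
    // space with no MMU, records[i] for enormous i simply wraps around.
    for (unsigned i = 0; records[i].list_head != nullptr; ++i) {
        records[i].payload[0] = 0xAB;  // a garbage byte or two...
        records[i].payload[1] = 0xCD;  // ...every sizeof(Record) bytes
    }
}
```

Because the stride (sizeof(Record)) isn’t a power of two, each wrap-around pass lands the writes on different offsets, which is how the reset vector eventually got clobbered.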
Because the system wouldn’t be fatally wounded immediately, the memory corruption was observable on the system until it went down, limited only by the CPU’s memory bandwidth. That made it truly bizarre to diagnose, as the corruption wasn’t in any one feature and changed every time.
Fun times lol
On one hand, I’m pleased that C++ is answering the call for what I’ll call “safety as default”, since, as The Register and everyone else have since pointed out, if safety constructs are “bolted on” like an afterthought, then of course they’re not going to see very high adoption. Contrast this with Rust and its “unsafe” keyword, which marks all the places where the minimum safety guarantees of the language might not hold.
On the other hand, while this Safe C++ proposal adopts a similar notion of an “unsafe” context, it also adds a “safe” keyword, to specify that a function will conform to compile-time safety checks. But as the proposal readily admits:
Rust’s functions are safe by default. C++’s are unsafe by default.
While the proposal will surely continue to evolve before being implemented, I foresee a situation similar to that in C, where code that lacked const-correctness from the start struggles to work with newer code and libraries. In this case, it would be the “unsafe” keyword proliferating everywhere just to call older, unsafe code from newer, safe callers.
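If I’m reading the proposal right, the interplay would look something like this. Treat the exact syntax as illustrative, since the proposal is still evolving:

```cpp
// Illustrative only -- my reading of the Safe C++ proposal's direction.
void legacy_fill(char* buf, int n);  // existing C++: unsafe by default

void caller() safe                   // opts in to compile-time safety checks
{
    char buf[16];
    // Calling into legacy code requires an explicit escape hatch, and this
    // is the keyword I expect to proliferate across older codebases:
    unsafe {
        legacy_fill(buf, 16);
    }
}
```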
Rust has the advantage that there isn’t much (if any) legacy Rust to maintain, which means the volume of unsafe code in Rust programs is minimal, making them safer overall today. But for Safe C++, there’s going to be a lot of unsafe legacy C++ code, and that reduces the overall safety benefit for programs, at least for the time being.
Even as this proposal progresses, the question of whether to start rewriting some code anew in Rust remains relevant. But this is still exciting as a new option to raise the bar in memory safety in C++.
A few months ago, my library gained a copy of Cybersecurity For Small Networks by Seth Enoka, published by No Starch Press in 2022. So I figured I’d have a look and see if it included modern best practices for networks.
It was alright, in that it’s a decent how-to guide for a novice to set up sensible, minimum network fortifications. But it only includes an overview of how those fortifications work, without going into the additional depth needed to fine-tune or optimize them for specific environments. So if the reader has zero experience with network security, it’s a worthwhile read. But if you’ve already been operating a network with defenses for a while, there’s not much to gain from this particular text.
Also, the author suggests that IPv6 should be disabled, which is a terrible idea. Modern best practice is not to pretend IPv6 doesn’t exist, but to ensure that firewalls and other defenses are configured to handle that traffic. There’s a vast difference between “administratively reject IPv6 traffic in/out of the WAN” and “disable IPv6 on all devices and pray no one ever connects an IPv6-enabled device”.
You might have a look at other books available from No Starch Press, though.
I lost it when coming across this commit: https://github.com/WinampDesktop/winamp/commit/67c68e6dc24f36b266427034d016fb86ef4d486c
I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden the question to collectively include software engineers, programmers, and (from the mainframe era) operators – but will still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed into a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.
The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters to convert hand-written memos into typed documents. Helped by the availability of word processors – no, not the software but a standalone appliance – and then the personal computer, the expectation moved to where knowledge workers have to type their own documents.
If we look to some of the earliest analog computers, built to compute differential equations for things like weather and flow analysis, a small team of people was needed to operate them and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistics expert or data analyst, but they’re still working in R or Python themselves, as opposed to handing the task to a dedicated person or team that sets up the analysis program.
In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human-typing obsolete lol
It’s also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short-side of even-numbered ISO 216 paper (eg A2, A4, A6, etc) is narrower than for ANSI equivalents. And for the odd-numbered sizes, I’ve seen Tabloid-size printers in America which generously accommodate A3.
For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.
In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit. Although this would require precision folding, that’s no problem for automated letter mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A series paper and C series envelopes at the same time.
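Checking my arithmetic (all dimensions in inches):

```cpp
#include <cstdio>

int main() {
    const double a4_short = 210.0 / 25.4;   // ~8.27 in
    const double a4_long  = 297.0 / 25.4;   // ~11.69 in
    const double folded   = a4_long / 3.0;  // ~3.90 in, folded in thirds

    // #10 envelope: 4.125 x 9.5   -> 3.90 < 4.125 and 8.27 < 9.5: fits
    // #9 envelope:  3.875 x 8.875 -> 3.90 > 3.875: does not fit
    std::printf("A4 folded in thirds: %.2f in x %.2f in\n", a4_short, folded);
    return 0;
}
```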
Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.
TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.
That book sounds very insightful. I hope my public library accepts my purchase suggestion.
I will admit that my familiarity with private law outside the USA is almost non-existent, except for what I skimmed from the Wikipedia article for the Inquisitorial system. So I had assumed that private law in European jurisdictions would follow the same judge-intensive approach. Rereading the article more closely, I do see that it really only talks about criminal proceedings.
But I did some more web searching, and found this – honestly, extremely convenient – article comparing civil litigation procedure in Germany and California (the jurisdiction I’m most familiar with; IANAL). The three most substantial differences I could identify were the judge’s involvement in: serving papers, discovery, and depositions.
Serving legal notice is the least consequential difference between California and Germany, but it seems that the former allows any qualified adult to chase down the respondent (ie person being sued) and deliver the notice of a lawsuit – hence the trope of yelling “you have been served” and then throwing a stack of papers at someone’s porch – on behalf of the complainant (person who filed the lawsuit). Whereas German courts take up the role of notifying the respondent themselves. Small difference, but notable. From the article:
In Germany, the court, and not the plaintiff, is required to serve the complaint on the defendant without undue delay, which is usually immediately after it has been filed with the court.
Next, discovery and pleadings in Germany appear to be different from the California custom. It seems that German courts require parties to thoroughly plead their positions first, and only afterwards will discovery begin, with the court deciding what topics can be investigated. Whereas California allows parties to make broad assertions that can later be proven or disproven during discovery. This is akin to throwing spaghetti at the wall and seeing what sticks, and a big reason it’s done is that any argument not raised at trial generally cannot be raised later on appeal.
I believe that discovery in California and other US States can get rather invasive, as each party’s lawyers are on a fact-finding mission where the truth will out. The general limitation on the pleadings in California is that they still must be germane to the complaint and at least be colorable. This obviously leads to a lot of pre-trial motions, as the targeted party will naturally want to resist a fishing expedition during discovery.
Lastly, depositions in Germany involve the judge(s) a lot more than they would in California. Here, depositions are conducted off-site from the court by the deposing party, usually videotaped, with all attorneys present plus a privately hired stenographer, and with the deposing attorney asking the questions. Basically, once the judge grants a deposition order, the judge isn’t involved unless the deposition is interrupted in a way that would violate that order. Even then, the solution is to simply phone the judge and ask for clarification or a new order to force the deposition to continue.
Whereas that article describes the German deposition process as always occurring in court, during trial, and with questions asked by the judge(s). The parties may suggest certain questions by way of constructing arguments which require the judge(s) to probe in a particular direction. But it’s not clear that the lawyers get to dictate the exact questions asked. As the article puts it:
In contrast, depositions in Germany are conducted by the judge or the panel of judges and only during trial.
I grant you that this is just an examination of the German court proceedings for private law. And perhaps Germany may be an outlier, with other European counterparts adopting civil law but with a more adversarial flavor for private law. But I would say that for Germany, these differences indicate that their private law is more inquisitorial overall, in stark contrast to the California or USA adversarial procedure for private litigation.
You are absolutely correct: this fragile experiment called democracy will not survive if the citizenry becomes ambivalent about its institutions, allowing corrupt officials and other enablers of authoritarianism to take root.
If you are an American and that prospect disturbs you, then you need to help strengthen and guard the institutions that protect the core American values. Nobody owes you a democracy.
For some ideas of what to do, this post by Teri Kanefield has a list of concrete actions that you can take: https://terikanefield.com/things-to-do/
I am usually not wont to defend the dysfunction presently found in the USA federal (and state-level) judiciary, but I think this comparison to the German courts requires a bit more context. Generally speaking, the USA federal courts and US States adopt the adversarial system, originally following the English practice in both common law and equity. This means the judge takes on a referee role, and a plaintiff and a defendant will make their best, most convincing arguments.
I should clarify that “common law” in this context refers to the criminal matters (akin to public law), and “equity” refers to person-versus-person disputes (akin to private law), such as contracts.
For the adversarial system to work, the plaintiff and defendant need to be sufficiently motivated (and nowadays, well-monied) to put on good arguments, or else they’re just wasting the court’s time. Hence, there is a requirement (known as “standing”) where – grossly oversimplifying – the plaintiff must be the person with the most to gain, and the defendant must be the person with the most to lose. They are interested parties who will argue vigorously.
Of course, that’s legal fiction, because oftentimes a defendant might be unable to afford excellent legal counsel. Or plaintiffs will half-ass or drag out a lawsuit, so that it’s more an annoyance to the opposing party.
In an adversarial system, it is each party’s responsibility to obtain subject-matter experts and their opinions to present to the court. The judge is just there to listen and evaluate the evidence – exception: criminal trials leave the evaluation of evidence to the jury.
Why is the USA like this? For the USA federal courts, it’s because it’s part of our constitution, in the Case or Controversy Clause. One of the key driving forces for drafters of the USA Constitution was to restrict the powers of government officials and bureaucrats, after seeing the abuses committed during the Colonial Era. The Clause above is meant to constrain the unelected judiciary – which otherwise has awe-inducing powers such as jailing people, undoing legislation, and assigning wardship or custody of children – from doing anything unless some controversy actually needed addressing.
With all that history in mind, if the judiciary kept their own in-house subject-matter experts, that could be viewed as more unelected officials trying to tip the scale in matters of science, medicine, computer science, or any other field. Suddenly, landing a position as the judiciary’s go-to expert could have far-reaching impacts, despite no one in the federal judiciary being elected.
In a sense, because of the fear of officials potentially running amok, the USA essentially “privatizes” subject-matter experts, to be paid by the plaintiff or defendant rather than employed by the judiciary. The adversarial system is thus an intentional value judgement, rather than a “whoopsie” type of thing that we walked into.
Small note: the federal executive (the US President and all the agencies) do keep subject matter experts, for the limited purpose of implementing regulations (aka secondary legislation). But at least they all report indirectly to the US President, who is term-limited and only stays 4 years at a time.
This system isn’t perfect, but it’s also not totally insane.
Can you please kindly link to that article, if it’s publicly available?
This is an interesting application of so-called AI, where the result is actually desirable and isn’t some sort of frivolity or grift. The memory-safety guarantees offered by native Rust code would be a very welcome improvement over C code that guarantees very little. So a translation of legacy code into Rust would either attain memory safety, or wouldn’t compile. If AI somehow (very unlikely) manages to produce valid Rust that ends up being memory-unsafe, then it’s still an advancement as the compiler folks would have a new scenario to solve for.
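As a toy illustration of why that dichotomy holds (my own example, nothing to do with DARPA’s actual corpus): the safety of C-style code often rests on a contract the compiler can’t see, and idiomatic Rust forces that contract into the types.

```cpp
#include <cstddef>

// The caller must guarantee `out` points at n writable bytes; nothing here
// checks it. An idiomatic Rust translation would take a slice (&mut [u8])
// instead, so a too-small buffer is caught by the compiler or a bounds
// check rather than silently corrupting memory.
void fill(unsigned char* out, std::size_t n, unsigned char value) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = value;
}
```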
Lots of current uses of AI have focused on what the output could enable, but here, I think it’s worth appreciating that in this application, we don’t need the AI to always complete every translation. After all, some C code will be so hardware-specific that it becomes unwieldy to rewrite in Rust, without also doing a larger refactor. DARPA readily admits that their goal is simply to improve the translation accuracy, rather than achieve perfection. Ideally, this means the result of their research is an AI which knows its own limits and just declines to proceed.
Assuming that the resulting Rust is: 1) native code, and 2) idiomatic, so humans can still understand and maintain it, this is a project worth pursuing. Meanwhile, I have no doubt grifters will also try to hitch their trailer on DARPA’s wagon, with insane suggestions that proprietary AI can somehow replace whole teams of Rust engineers, or some such nonsense.
Edit: is my disdain for current commercial applications of AI too obvious? Is my desire for less commercialization and more research-based LLM development too subtle? :)
Unabashed plug for GnuCash. It’s FOSS, double-entry, and capable enough for oddball personal finances or business finance, with all the spreadsheet exporting one might need.