google does a lot of things that just aren’t realistic for the large majority of cases
before kubernetes, you couldn’t just reference borg and say “well google does it” and call it a day
i’d say it’s less that it’s inadequate, and more that it’s complex
for a small team, build a monolith and don’t worry
for a medium team, you’ll want to split your code into discrete parts (libraries shared across different parts of your codebase, services with discrete test boundaries, etc)… but you still need coordination of changes across all those things, and team members will probably be touching every part of the codebase at some point
for large teams, you want to take those discrete parts and make them fairly independent, and able to be managed separately: different languages, different deployment patterns, different test frameworks, heck even different infrastructure
a monorepo is a shit version of real, robust tooling in many categories… it’s quick to set up, and gives you a path to easily change to better tooling when it’s needed
You should really not need to do a PR across multiple repos.
different ways of treating PRs… it’s a perfectly valid strategy to say “a PR implements a specific feature”, in which case you might work in a backend, a frontend, and a library… of course, those PRs aren’t intrinsically linked (though they do have dependencies between them… heck i wouldn’t even say it’d be uncommon or wrong for the library to have schemas that do require changes in both the frontend and backend)
if you implement something in eg the backend, and then get retasked with something else, or the feature gets dropped, then sure, it’s “working” still, but leaving unused code like that would be pretty bad… backend and frontend PRs tend to be fairly closely tied to each other
a monorepo does far more than i think you think it does… it’s a relatively low-infrastructure way of adding internal libraries shared across different parts of your codebase, external libraries without duplication (and ensuring versions are consistent, where required), and coordinating changes, and plenty more
can these things be achieved with build systems and deployment tooling? absolutely… but if you’re just a small team, a monorepo could be the right call
of course, once the team grows in size it’s no longer the correct option… real tooling is probably going to be faster and better in every way… but a monorepo allows you to choose when to replace different parts of the process… it emulates an environment with everything very separated
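to make the “ensuring versions are consistent” part concrete, here’s a rough python sketch of the kind of low-effort check a monorepo makes trivial (the requirements.txt-per-service layout is an assumption on my part, not a given):

```python
# sketch: walk a monorepo and flag external packages pinned to different versions
# assumes each service/library keeps its own requirements.txt with pkg==version pins
from collections import defaultdict
from pathlib import Path
import re

pins = defaultdict(set)  # package name -> set of pinned versions seen across the repo

for req in Path(".").rglob("requirements.txt"):
    for line in req.read_text().splitlines():
        m = re.match(r"^\s*([A-Za-z0-9._-]+)==([^\s#]+)", line)
        if m:
            pins[m.group(1).lower()].add(m.group(2))

for pkg, versions in sorted(pins.items()):
    if len(versions) > 1:
        print(f"{pkg} is pinned to multiple versions: {sorted(versions)}")
```

doing the same check across a pile of separate repos means cloning and syncing everything first, which is exactly the coordination overhead being traded away here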
i’d say they’re pretty equivalent
a monorepo is far easier to develop a single-language, fairly monolithic (ie you need the whole application to develop any part) codebase in
(though as soon as you start adding multiple languages or it gets big enough that you need to work on parts without starting other parts of the application it starts to break down rather significantly)
but as soon as your app becomes less of a cohesive thing and more separated it becomes problematic… especially when it comes to deployments: a push to a repo doesn’t mean “deploy changes to everything” or “build everything” any more
i think the best solution (as with most things) is somewhere in the middle: perhaps several different repos, and a “monorepo” that’s mostly a bunch of subtrees or submodules… you can coordinate changes by committing to the monorepo (and changes are automatically duplicated), or just work on individual parts (tricky with pnpm since the workspace file would be in the monorepo)… but i’ve never really tried this: just had the thought for a while


you should still stop treating corporations like people: the death penalty shouldn’t exist for people


the zip file itself might also be generated (you can just tack random garbage into places in the zip format and it’ll be ignored - which is extremely quick to do), in which case the hash would change… the file itself is important in case it’s an exploit in the unzip program itself, but the contents of the file are important too
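a quick python sketch of that (file name is made up): tack junk bytes onto the end of a zip and the hash changes, but the archive still opens fine:

```python
# sketch: append random garbage to a zip; the hash changes but it still extracts
import hashlib
import os
import zipfile

# build a small zip to play with
with zipfile.ZipFile("sample.zip", "w") as zf:
    zf.writestr("readme.txt", "hello")

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print("before:", sha256("sample.zip"))

# tack 64 random bytes on the end; most zip readers find the end-of-central-directory
# record by searching backwards from the end of the file, so trailing junk is typically ignored
with open("sample.zip", "ab") as f:
    f.write(os.urandom(64))

print("after: ", sha256("sample.zip"))

with zipfile.ZipFile("sample.zip") as zf:
    print("still readable:", zf.namelist())  # ['readme.txt']
```

so every download can be made to hash differently for basically free, without touching the payload that actually matters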


not entirely true. if the file is downloaded, windows does a bunch of “helpful” things with files… these are almost certainly benign (eg rendering thumbnails, getting metadata about certain file types) but almost anything is potentially exploitable (eg an overflow in thumbnail generation code could lead to code execution just from browsing a website and then opening your downloads folder in explorer)
drive-by attacks don’t just affect the browser
with that said, it’d be a huge deal if this was the reality of the situation… it’s highly unlikely, but zero days exist, and the possibility is always real
i say this because this has been exploited in the past with exactly the same scenario: preview generation


new fabs are iffy… samsung chose not to scale up production because they’re betting that the AI bubble is just a bubble, and in that case any capacity they add in the short term will be bad for them in the long term… building a factory for DRAM takes years: let’s hope the bubble of AI enshittification doesn’t last that long


similar with energy retailers in aus
the government even plays into it by having a web tool to compare energy plans, and roughly once per year my state pays people $100 just to compare deals (and then i usually spend 5min to switch providers and save ~$500/y)


so what they’re saying is you should cancel your subscription and keep deal-hopping around different providers i guess!


generally people think men are evil by default, and women are good by default
i think this is a misunderstanding of the dynamic
we see this play out pretty regularly with the “not all men” arguments and the like: men getting annoyed by women being careful, and taking “you could hurt me” behaviour as some kind of insult. the statement is true: not all men are evil to women, but any man could be, and so women need to treat every man as though it’s possible in order to protect themselves


also the em dash thing kinda proves that the majority of training data comes from properly published works rather than user comments, and that the training methods merge “knowledge” from user stuff like reddit together with books and papers etc


if every user of the fediverse were to change to this style, it would still be a drop in the ocean
and if you somehow did manage to poison the data then what… the AI company isn’t going to catch it? no, they’d just do a find and replace… they don’t even need to do it in the training data (though they would)… they could just filter the output
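and the “filter the output” part really is a one-liner… a trivial sketch (the replacement text is arbitrary):

```python
# trivial sketch: strip/normalise em dashes in model output after generation
def scrub(text: str) -> str:
    return text.replace("\u2014", " - ")  # U+2014 EM DASH -> plain hyphen

print(scrub("some\u2014poisoned\u2014output"))  # some - poisoned - output
```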


meta and ctrl switched, because if there’s something apple did right it’s using the thumb as the modifier key for copy/paste/etc instead of the pinkie finger, which is far FAR less able to deal with repetitive strain
but i also type programmers dvorak because i got pretty horrible wrist pain at one point so anything to stop me damaging my wrists :p


software is not a one-and-done thing, and foss is so far from a workplace. there’s a huge amount of software engineering that’s not writing code, and maintaining a codebase over years is far different from a relatively isolated fire-and-forget project
an internship wouldn’t cut it, and neither would foss contributions


i wouldn’t say projects are practice… they’re kinda like a really basic simulator… you’re solving contrived problems so they’re not messy, you don’t have seniors etc, there’s no existing codebase, no complex deployments, you’re not doing most of the non-technical parts of software engineering, and the list goes on and on and on
internships are great, but they’re really short


software should be a trade, taught through apprenticeships… some theory is needed, but it’s wild that anyone thinks 3 years of just theory is going to produce decent software engineers


+1 davinci… it’s incredible what you get in the free version, and the studio version is getting more and more worth the money in a value-add way rather than a need-it way
that’s a good and bad thing though…
it’s easy to reference code, so it leads to tight coupling
it’s easy to reference code, so let’s pull this out into a separately testable, well-documented, reusable library
my main reason for ever using a monorepo is to separate out a bunch of shared libraries into real libraries, and still be able to have eg HMR