• 0 Posts
  • 62 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • And memory bugs are only a subset of bugs that can be exploited in a program. Pretending Rust means no more exploitation is stupid.

    This is facile.

    According to Microsoft, about 70% of security bugs they see are memory safety issues.

    Yes: if you introduce memory safety, the other 30% of security bugs are still there. But, well, I’d rather worry about 30% of issues than 100%…

    Similarly, I use libraries that eliminate SQL injections unless you really go out of your way.
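
    As a rough illustration of the underlying idea (parameterized queries), sketched with plain JDBC rather than any particular library; the table and column names are made up:

    import java.sql.Connection

    // String concatenation: if `name` is attacker-controlled, it can rewrite the query.
    // "SELECT id FROM users WHERE name = '" + name + "'"

    // Parameterized query: the driver treats `name` strictly as data, never as SQL.
    def findUserId(conn: Connection, name: String): Option[Long] = {
      val ps = conn.prepareStatement("SELECT id FROM users WHERE name = ?")
      ps.setString(1, name)
      val rs = ps.executeQuery()
      if (rs.next()) Some(rs.getLong("id")) else None
    }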


  • One important thing to realize is that different dialects of English have slightly different grammars.

    One place where dialects differ is around negation. Some dialects, like Appalachian English or West Texas English, exhibit ‘negative concord’, where parts of a sentence must agree in negation. For example, “Nobody ain’t doin’ nothin’ wrong”.

    One of the most important things in understanding a sentence is figuring out the dialect of its speaker. You’ll notice the same thing with sentences that use ambiguous terminology, like “he ate biscuits” - were they cookies, or something that looked like a scone? Rules are always contextual, based on the variety of the language being spoken.



  • No.

    There are two types of grammar rules. First there are the real grammar rules, which you learn intuitively as a kid and don’t have to be explicitly taught.

    For example, any native English speaker can tell you that there’s something off about “the iron great purple old big ball” and that it should really be “the great big old purple iron ball”, even though many aren’t even aware that English has an adjective precedence rule.

    Then there are the fake rules, like “ain’t ain’t a real word”, “don’t split infinitives”, or “no double negatives”. Those are trumped-up preferences, often with a classist or racist origin.


  • Symbols display with friendly string-y names in a number of languages. Clojure, for example, has a symbol type.

    And a number of languages display friendly strings for enumy things - Scala, Haskell, and Rust spring to mind.

    The problem with strings over enums with a nice debugging display is that the string type is too wide. Strings don’t tell you what values are valid, strings don’t catch typos at compile time, and they’re murder when refactoring.

    Clojure symbols are good at differentiating between symbolly things and strings, though they don’t catch typos.

    The other problem the article mentions with strings over a proper struct/ADT/class hierarchy is that strings don’t really have any structure to them. Concatenating strings is brittle compared to building up an AST and then rendering it at the end.
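
    To make the “too wide” point concrete, here’s a rough sketch of the enum side (the names are mine, not from the article):

    // Stringly typed: a typo compiles fine, and the compiler can't warn about
    // unhandled cases.
    def isDoneStringly(status: String): Boolean = status == "compleated"

    // ADT/enum: invalid values can't exist, typos fail to compile, and pattern
    // matches are checked for exhaustiveness.
    sealed trait Status
    case object Pending   extends Status
    case object Running   extends Status
    case object Completed extends Status

    def isDone(status: Status): Boolean = status match {
      case Completed         => true
      case Pending | Running => false
    }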

    Edit: autocorrect messed a few things up I didn’t catch.


    JavaScript is generally considered OOP, but classes weren’t widely available till 2017.

    Inheritance isn’t fundamental to OOP, and neither are interfaces. You can have a duck-typed OOP language without inheritance, although I don’t know of any off the top of my head.

    Honestly, the more fundamental thing about OOP is that it’s a programming style built around objects. OO languages are sometimes class-based, sometimes duck-typing-based, etc. But you’ll always have your data carrying around its behavior at runtime.
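
    A rough sketch of that “data carries its behavior” idea (in Scala purely for illustration; the names are made up):

    trait Shape {
      def area: Double
    }

    final case class Circle(radius: Double) extends Shape {
      def area: Double = math.Pi * radius * radius
    }

    final case class Square(side: Double) extends Shape {
      def area: Double = side * side
    }

    // Each element carries a pointer to its data *and* to the implementation
    // of `area` for that data; the right method is picked at runtime.
    val shapes: List[Shape] = List(Circle(1.0), Square(2.0))
    val totalArea = shapes.map(_.area).sum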


  • keeping state (data) and behavior (functions) that operate on that state, together

    Importantly, that’s “together at runtime”, not in terms of code organization. One of the important things about an object is that it has dynamic dispatch. Your object is a pointer both to the data itself and to the implementation that works on that data.

    There’s a similar but subtly different idea that you see in Haskell, Scala, and Rust - what Haskell calls type classes. Rust gives it a veneer of OO syntax, but the semantics themselves are interestingly different.

    In particular, the key idea of type classes is keeping data and behavior separate. The language itself is responsible for automagically passing the behavior in.

    So in Scala, you could do something like

    def sum[A](values: List[A])(implicit numDict: Num[A]) = values.fold(numDict.zero)(numDict.+)
    

    Or

    def sum[A: Num](values: List[A]) = values.fold(zero)(_ + _) // assumes Num's syntax/ops (zero, +) are in scope
    

    Given a Num typeclass that encapsulates numeric operations (a minimal sketch of one possible Num follows the list below). There are a few important differences:

    1. All of the items of that list have to be the same type of number - they’re all Ints or all Doubles or something

    2. It’s a list of primitive numbers and the implementation is kept separate - no need for boxing and unboxing.

    3. Even if that list is empty, you still have access to the implementation, so you can return a type-appropriate zero value

    4. Generic types can conditionally implement a typeclass. For example, you can make an Eq instance for List[A] if A has an Eq instance. So you can compare List[Int] for equality, but not List[Int => Int].
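
    For reference, a minimal sketch of what that Num typeclass and an instance could look like (my names, not a standard library):

    trait Num[A] {
      def zero: A
      def +(x: A, y: A): A
    }

    object Num {
      // Summoner, so Num[A] can be used with a context bound.
      def apply[A](implicit n: Num[A]): Num[A] = n

      implicit val intNum: Num[Int] = new Num[Int] {
        def zero: Int = 0
        def +(x: Int, y: Int): Int = x + y
      }
    }

    // The compiler passes the Int instance in for us:
    // sum(List(1, 2, 3)) == 6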


  • Yeah, projects also exist in the real world and practical considerations matter.

    The legacy C/C++ code base might slowly and strategically have components refactored into Rust, or you might just leave it.

    The C/C++ team might be interested in trying Rust, but have to code urgent projects in C/C++.

    It’s the same as having a perfectly good felling axe when someone has just invented the chainsaw: you’re better off felling that tree with your axe than going into town, buying a chainsaw, and figuring out how to use it. The axe isn’t really the right tool for the job anymore, but it still works.


  • Pipoca@lemmy.world to Programmer Humor@lemmy.ml · STOP WRITING C · 11 months ago

    C is not how a computer truly works.

    If you want to know how computers work, learn assembly and circuit design. You can learn C without ever thinking about registers, register allocation, the program counter, etc.

    Then again, you can learn assembly without ever learning about e.g. branch prediction. There are tons of levels of abstraction in computers, and many of the lower-level ones try to pretend you’ve still got a computer from the 80s, even though CPUs are a lot more complex than they used to be.

    As an aside, I’ve anecdotally heard of some schools teaching Rust instead of C as a systems language in courses. Rust has a different model than C, but will still teach you about static memory vs the stack vs the heap, pointers, etc.

    Honestly, if I had to write some systems software, I’d be way more confident in any Rust code I wrote than C/C++ code. Nasal demons scare me.


  • Pipoca@lemmy.world to Programmer Humor@lemmy.ml · STOP WRITING C · 11 months ago

    Right tool for the job, sure, but that evolves over time.

    Like, years back carpenters didn’t have access to table saws with safety features that prevent you from cutting off your fingers by stopping the blade as soon as it touches them. Now we do. Are old table saws still the “right tool for the job”, or are they just a dangerous version of a modern tool that results in needless accidents?

    Is C still the right tool for the job in places where Rust is a good option?


  • Pipoca@lemmy.world to Programming@programming.dev · Rust vs C · 11 months ago

    C is many things, but elegant really isn’t one of them.

    C has always been part of the “worse is better”/New Jersey school of thinking. The ultimate goal is simplicity. Particularly simplicity of language implementation, even if that makes programs written in that language more complex or error prone. It’s historically been a very successful approach.

    Rust, on the other hand, is part of the “Right Thing”/MIT school. Simplicity is good, but it’s more important to be correct and complete, even if that complicates things a bit.

    I don’t really think of void* and ubiquitous nulls, for example, as the hallmark of elegance, but as pretty simple, kludgey solutions.

    Rust, by contrast, brings a lot of really elegant solutions from ML-family languages to a systems language. So you get algebraic data types, pattern matching, non-nullable references by default, closures, typeclasses, expression-oriented syntax, etc.
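
    To give a flavour of those features, here’s a rough sketch - in Scala rather than Rust, since that’s what the snippets elsewhere in this thread use, and the names are made up:

    // An algebraic data type: a value is exactly one of these cases, nothing else.
    sealed trait PaymentMethod
    final case class Card(number: String) extends PaymentMethod
    case object Cash extends PaymentMethod

    // Pattern matches are checked for exhaustiveness by the compiler.
    def describe(p: PaymentMethod): String = p match {
      case Card(num) => s"card ending in ${num.takeRight(4)}"
      case Cash      => "cash"
    }

    // Instead of a ubiquitous null, absence is modelled explicitly with Option,
    // so callers are forced to handle the missing case.
    def findDiscount(code: String): Option[Double] =
      if (code == "WELCOME10") Some(0.10) else None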


  • Just like walking doesn’t really compete, like at all, with flying in an aircraft, Functional and Object Oriented Programming are at their best when you use whichever approach makes sense for a given situation and in any reasonably complex software that means your code should be full of both.

    I’m not really sure that’s true.

    In FP languages like Haskell, you get tools like algebraic data types, typeclasses, and pattern matching.

    FP is really opposed to imperative programming, while objects are opposed to algebraic data types.

    You can write OO code that’s 100% purely functional, and you can write code in Haskell or Rust where you barely notice that you never once used an object.
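
    For example (a contrived Scala sketch of my own), an object whose state is immutable and whose methods are all pure is OO and functional at the same time:

    // Immutable data plus methods that return new objects instead of mutating:
    // encapsulation and dynamic dispatch, with no side effects anywhere.
    final case class Account(balance: BigDecimal) {
      def deposit(amount: BigDecimal): Account =
        copy(balance = balance + amount)

      def withdraw(amount: BigDecimal): Either[String, Account] =
        if (amount > balance) Left("insufficient funds")
        else Right(copy(balance = balance - amount))
    }

    val after = Account(BigDecimal(100)).deposit(BigDecimal(25)) // Account(125)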