Suppose we have a large to-do task manager app with many features. Say we have an entity, the task, with fields such as title, description, deadline, sub-tasks, and dependencies. This entity is used in many parts of our codebase.

Suppose we decide to change this entity by modifying, removing, or adding a field. We may have to update most, if not all, of the code that deals with it. How can we do this in a way that protects us from errors and keeps maintenance easy?

Bear in mind, this is just an example. The entity may be something more low-key, such as a logged user event in analytics or a backend API endpoint consumed by the frontend.
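
To make the example concrete, the entity might be declared roughly like this (a minimal TypeScript sketch; the fields are just the ones listed above):

```typescript
// A minimal sketch of the Task entity described above (shape is illustrative).
interface Task {
  title: string;
  description: string;
  deadline: Date;
  subTasks: Task[];       // nested sub-tasks
  dependencies: string[]; // ids of tasks this task depends on
}
```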

Potential Solutions

Searching

One way people do this already is by simply searching for the entity across the codebase. This does not scale and is not always accurate: you may get many false positives, and some parts of the code may use the entity without referring to it by name.
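
For example, a plain-text search for a field name can miss code that touches the field without naming it (hypothetical TypeScript, assuming the Task shape sketched above lives in ./task):

```typescript
import type { Task } from "./task";

// A text search for "deadline" finds neither of these usages:
function cloneTask(task: Task): Task {
  return { ...task }; // copies every field, including deadline, without naming it
}

function toQueryString(task: Task): string {
  // iterates over the fields dynamically; "deadline" never appears in the source
  return Object.entries(task)
    .map(([key, value]) => `${key}=${encodeURIComponent(String(value))}`)
    .join("&");
}
```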

Importing

Another option is defining the entity in one central place and importing it everywhere it is used. This produces an error if a deleted field is still referenced, but it does not help when, say, we add a new field and want to make sure it is handled properly everywhere the entity is used.
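
As a sketch of what that looks like (TypeScript; file names are hypothetical):

```typescript
// task.ts: the single, central definition of the entity (fields trimmed for brevity).
export interface Task {
  title: string;
  description: string;
  deadline: Date;
}
```

```typescript
// view.ts: one of many consumers importing the shared definition.
import type { Task } from "./task";

export function heading(task: Task): string {
  return `${task.title} (due ${task.deadline.toDateString()})`;
}

// If `deadline` is removed from Task, this file no longer compiles: good.
// If a new field such as `priority` is added, nothing forces this consumer
// (or any other) to take it into account: that is the gap described above.
```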

So what can be done to solve this? Plus points if the approach is compatible with functional programming.

Automated Tests and CI/CD

Tests can discover these types of issues with high accuracy and precision. The downside is… well, tests have to be written. This requires developers to be proactive, and writing and maintaining tests is non-trivial and takes expensive developer time. It is also quite easy and common to write bad tests that give false positives.
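
For illustration, such a test might look like this (a sketch assuming a Jest-style runner; `summarize` is a hypothetical consumer of the entity):

```typescript
import { describe, expect, it } from "@jest/globals";

// A pared-down Task for the sketch.
interface Task {
  title: string;
  deadline: Date;
}

// Hypothetical consumer of the entity.
function summarize(task: Task): string {
  return `${task.title} (due ${task.deadline.toISOString().slice(0, 10)})`;
}

describe("summarize", () => {
  it("includes the title and the deadline date", () => {
    const task: Task = { title: "Write report", deadline: new Date("2024-06-01") };
    // Fails if the entity's shape or summarize's behaviour changes incompatibly.
    expect(summarize(task)).toBe("Write report (due 2024-06-01)");
  });
});
```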

  • chris@l.roofo.cc · 10 months ago

    Adequate test coverage should help you with these kinds of errors. Your tests should at least fail somehow if you make an incompatible change. Also, your IDE’s tools will help you with refactoring.

    • matcha_addict@lemy.lolOP · 10 months ago

      Testing definitely works, but the downside is that it requires the developer to be proactive, and the effort put into writing tests is non-trivial (and it’s easy and common for developers to write bad tests that give false positives).

      • chris@l.roofo.cc · 10 months ago

        There is a whole field, which looks a bit like religion to me, about how to test right.

        I can tell you from experience that testing is a tool that can give confidence. There are a few new tools that can help. Mutation testing is one I know that can find bad tests.

        Integration tests can help find the most egregious errors that make your application crash.

        Not every getter needs a test, but writing unit tests while developing a feature can even save time, because you don’t have to start the app, navigate to the point where the change happens, and test by hand.

        A review can find some errors, but human brains are not compilers: it is easy to miss errors, and the more you add to a review, the more easily things get lost. Reviews mostly help make sure that the code is in line with the team’s style and that more than one person knows about the changes.

        You can’t find all mistakes all the time. That’s why it is very important to have a strategy to avert the worst and revert errors. If you develop a web app: backups, rolling deployments, revert procedures. And make sure everyone knows how to use them and has tried them at least once. These procedures can fail. Refine them through failure.

        That is my experience from working in the field for a while. No tests is bad. Too many tests is a hassle. There will always be errors. Be prepared.

      • toasteecup@lemmy.world · 10 months ago

        That’s why test coverage exists and needs to be a mandated item.

        I have absolutely no patience for developers unwilling to make good code. I don’t give a shit if it takes a while, bad code means vulnerabilities means another fucking data breach. If you as a developer don’t want to do what it takes to make good code, then quit and find a new fucking career.

        • sweng@programming.dev · 10 months ago

          Test coverage alone is meaningless; you need to think about input coverage as well, and that’s where you can spend an almost infinite amount of time. At some point you also have to ship stuff.

          • toasteecup@lemmy.world · 10 months ago

            You get it!

            Fully agreed that things need to get shipped, but that’s why I’m a fan of test-driven development. You’ll always have your tests written alongside your feature.

            Then again, even if someone writes them afterwards, as long as you write a test every time you write a feature, you’ll eventually have the codebase covered.

            Input coverage is new to me; mind linking me some info so I can learn? (Yes, Google exists, but if someone has the low-down on a good source I’d prefer that.)

            • sweng@programming.dev · 10 months ago

              By input coverage I just mean that you test with different inputs. It doesn’t matter if you have 100% code coverage if you only ever tested with the number “1” and the code crashes when you give it a negative number.

              If you can prove that your code can’t crash (e.g. using types), that’s a lot more valuable than spending time thinking about potentially problematic inputs and writing individual tests for them (there are tools that help with this, but they are not perfect).
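
              A minimal sketch of both points (TypeScript; the function is made up for illustration):

              ```typescript
              // Full line coverage, yet only one input ever exercised.
              function invSqrt(n: number): number {
                return 1 / Math.sqrt(n); // silently yields NaN for negative n, Infinity for 0
              }
              // expect(invSqrt(1)).toBe(1); // 100% coverage, bad inputs never seen

              // Encoding the constraint in a type pushes the check to the boundary instead.
              type Positive = number & { readonly __brand: "Positive" };

              function toPositive(n: number): Positive | null {
                return n > 0 ? (n as Positive) : null;
              }

              function invSqrtSafe(n: Positive): number {
                return 1 / Math.sqrt(n); // callers must already have shown n > 0
              }
              ```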

              • toasteecup@lemmy.world · 10 months ago

                Ahhh gotcha, gotcha. I was doing this by default in my Python testing; glad I was doing things right.

      • jkrtn@lemmy.ml · 10 months ago

        “What’s a technique so woodworkers can make sure their furniture fits together on the first try?”

        “Measuring and marking out the plan before making cuts.”

        “Hmm. No, that sounds tedious and difficult, and requires the woodworker to be proactive. No thank you.”

        • matcha_addict@lemy.lolOP · 10 months ago

          Interesting analogy, but it’s probably better to address my point directly instead of arguing about woodworking.

          • jkrtn@lemmy.ml · 10 months ago

            It’s very clear that you want a magic solution that does what you want without any upfront effort. Please let us all know if you find one.

            • matcha_addict@lemy.lolOP · 10 months ago

              Nothing is without effort. I want something with high confidence. Most organizations fail at testing in one way or another (riddled with false positives, flaky tests, or outright low coverage). Tests are good to have, but they are not enough for what I want.

              magic solution

              If you think type systems are magic, then sure :)

              please let us know if you find one

              It looks like I can leverage certain type systems to do this. I might need to work with it more before concluding.
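
              For example, here’s a sketch of the kind of thing I have in mind (TypeScript; the mapped type and the names are just illustrative):

              ```typescript
              interface Task {
                title: string;
                description: string;
                deadline: Date;
              }

              // Every field of Task must be handled: the mapped type is exhaustive over keyof Task.
              type TaskColumnRenderers = {
                [K in keyof Task]: (value: Task[K]) => string;
              };

              const renderers: TaskColumnRenderers = {
                title: (v) => v,
                description: (v) => v,
                deadline: (v) => v.toDateString(),
                // Adding e.g. `priority: number` to Task turns this object into a
                // compile error until a `priority` renderer is written; removing a
                // field flags the now-extraneous entry as well.
              };
              ```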