• 33 Posts
  • 429 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Visual Studio provides some kind of AI even without Copilot.

    Inline (single-line) completions - these I find quite useful, not always but regularly.

    Repeated edits continuation - I haven’t seen them in a while, but have used them on maybe two or three occasions. I am very selective about these because they’re not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, invert if changes many line indents; if an LLM does that change, you can’t be sure it didn’t alter any of those lines.
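
    To illustrate with a sketch (the Order, Validate, and Save names are made up; Visual Studio’s actual output may differ slightly):

    ```csharp
    // Before: the whole body sits nested inside the if.
    void Process(Order order)
    {
        if (order != null)
        {
            Validate(order);
            Save(order);
        }
    }

    // After "invert if": the condition is negated into an early return,
    // and every line of the former body shifts one indent level left.
    void Process(Order order)
    {
        if (order == null)
            return;

        Validate(order);
        Save(order);
    }
    ```

    A deterministic refactoring guarantees those lines were only re-indented; after an LLM edit, you would have to re-read every one of them.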

    Multi-line completions/suggestions - I disabled those because they offset/move away the code and context I want to see around them, and add noisy movement, for - in my limited experience - marginal usefulness, if any.

    At my company, we’re still in a selective testing phase regarding customer agreements and, after that, source code integration with AI providers. My team is not part of that yet, so I don’t have practical experience with any analysis, generation, or chat functionality that uses project context. I’m skeptical but somewhat interested.

    I did try it on private projects - well, one, I guess: a Nushell plugin in Rust, a language largely unfamiliar to me - where I tried to make use of Copilot generating methods for me and such. It felt very messy and confusing. The generated code was often not correct or sound.

    I use Phind and, more recently, ChatGPT for research/search queries. I’m mindful of the type of queries I make and which provider or service I use. In general, I’m a friend of reference docs, which are the only definitive source after all. I’m also aware of and mindful of the environmental impact of indirectly costly free AI search/chat. Still, AI can often respond to my questions quicker than searching via a search engine and in upstream docs - especially when I’m familiar with the tech and can relatively quickly be reminded, guide the AI when it responds with bullshit or suboptimal or questionable stuff, or relatively quickly disregard the AI entirely when it doesn’t seem capable of responding to what I’m looking for.






  • I strongly disagree.

    Coloring is categorization of code. Much like indentation, spacing, line-breaking, and alignment, it aids readability.

    None of the examples they provided looked better, more appropriate, or more useful. None of the “tests” led me to question my syntax highlighting. Quite the contrary.

    By reducing the highlighting to what they deem important, they lose the highlighting for all other cases. The examples of highlighting only one or two things make that obvious: when you highlight only method heads, you gain clarity when reading on that level, across methods, but lose everything when reading the body.

    I didn’t particularly like their dark theme choice. Their initial example is certainly noisy, but you can have better themes and defaults with subtler colors of more equal strength. The language or framework syntax and spacing can also influence it.

    Bolding is very useful when color categorizes code, giving additional structural discoverability, just like spacing does.


  • I failed the question about remembering what colour my class definitions were, but you know what? I don’t care. All I want is for it to be visually distinct when I’m trying to parse a block of code.

    Between multiple IDEs, text editors, diff viewers and editors, and hosted tools like MR/review diffs, they’re not even consistently one thing. For me, very practically and factually, colors differ.

    As you point out, they’re entirely missing the point: what the colors are for and how they’re being used.


  • That’s a very one-dimensional view of technical debt.

    I was about to write something more, but if I don’t know what they refer to when they say “knowledge”, then it’s too wishy-washy, and I may be talking about something different from what they intended.


    Contrasting “resolving technical debt” with “investing [improvement] knowledge” moves the reference viewpoint.

    I document state and issues as technical debt, and opportunities for change as opportunities. They overlap, but they are distinct concepts, and they do not always overlap. Some technical debt may be documented without a documented opportunity; opportunities may be open improvements that do not tackle technical debt.

    In my eyes, technical debt is about burdens that reduce maintainability where better alternatives likely exist.

    “Investing knowledge” is something different: it is not necessarily about known burdens, and may cover improvements unrelated to them.










    • Make changes to existing projects
    • Create and use projects you have an interest or use in for yourself
    • Reading technical articles
    • Reading guidance docs (like the Microsoft dotnet or SQL Server docs, which give introductions to architecture, systems, approaches, behaviors, design decisions, etc.)
    • Working with more experienced people - seeing them work; being instructed, reviewed, commented on, and guided by them
    • Experiencing alternative technologies and approaches
    • Experience in general
    • Exploring existing projects and their architectures

    I don’t know how far along you are with Python. In general, I don’t think Python guides you into good practice or architecture; it’s too dynamic and varied a language. You’ll need a framework to be guided. Personally, I have a dislike for it for multiple reasons; others seem to like it. Other languages and ecosystems are more limited, in good ways. (Maybe I’m misjudging today’s Python - I have only passing experience with it.)

    I would suggest trying out Go and/or then C#. Both are relatively simple to get into, and both have more native/mainline frameworks and guidance. C#/dotnet in general has a lot of guidance and documentation, both broad and specific, as well as tutorials and sample projects.
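
    For a sense of how low the entry barrier is, here is a minimal sketch of a complete C# program, assuming .NET 6+ top-level statements and the default implicit usings:

    ```csharp
    // This is the entire program: no class or Main boilerplate required.
    var numbers = new[] { 3, 1, 4, 1, 5 };
    var sorted = numbers.OrderBy(n => n).ToArray();
    Console.WriteLine(string.Join(", ", sorted));
    ```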


  • I don’t think 2% of M365 is necessarily a bad number. Office is prevalent for all kinds of office work, even the simplest. Not everyone needs AI or has the technical expertise or awareness of what this offer even means. Some people may not have launched their Office in one or two years but still hold a paid license.

    There’s also a free Copilot for GitHub users, which may be necessary as a teaser, for testing, and for adoption. That may also skew “adoption” when measured by commercial licenses rather than active users.

    I didn’t like the initial focus on that number of sold licenses in the article. Of course, they expand upon it and draw a broader picture afterwards.


  • I think it makes sense that publishers are required to update or at least assess games when open security issues come to their attention.

    The current state is that you may have 20 games installed, 10 of which have not been maintained for a long time, and 5 of which have open security issues an attacker may use. For example, a game launcher installs a service into Program Files with admin permissions - and suddenly you have a privilege escalation.
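
    As a rough sketch of that scenario (assuming Windows and .NET’s access-control APIs; the install path and names are made up), the classic check is whether normal users can write into a directory a privileged service executes from:

    ```csharp
    using System;
    using System.IO;
    using System.Security.AccessControl;
    using System.Security.Principal;

    class ServiceDirCheck
    {
        static void Main()
        {
            // Hypothetical launcher install directory.
            var dir = new DirectoryInfo(@"C:\Program Files\ExampleLauncher");
            DirectorySecurity acl = dir.GetAccessControl();

            foreach (FileSystemAccessRule rule in
                     acl.GetAccessRules(true, true, typeof(SecurityIdentifier)))
            {
                // S-1-1-0 = Everyone, S-1-5-32-545 = BUILTIN\Users.
                bool broadIdentity = rule.IdentityReference.Value == "S-1-1-0"
                                  || rule.IdentityReference.Value == "S-1-5-32-545";
                bool canWrite = (rule.FileSystemRights & FileSystemRights.WriteData) != 0;

                if (rule.AccessControlType == AccessControlType.Allow && broadIdentity && canWrite)
                {
                    // A normal user can drop or replace binaries that the
                    // admin-level service will execute: privilege escalation.
                    Console.WriteLine($"Writable by {rule.IdentityReference}: escalation risk");
                }
            }
        }
    }
    ```

    If that check fires, replacing a binary in that directory is all an attacker needs.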

    Or a game, when run, pulls in some monitoring component and suddenly exfiltrates data, because that service is defunct and was taken over or hacked.

    The necessity is quite clear.

    Maybe this will also push us towards more stable software that changes less or has less attack or escalation surface. That could significantly reduce the maintenance burden - even if it ends up being only an assessment that a reported open vulnerability does not affect your product (because you don’t make use of, or open up, the vulnerable functionality).