And that’s why we make sure to double-check our work.
Even in C++, most of the time we use containers that manage their own memory. In multi-threading scenarios, we often use shared pointers and atomics.
In cases where we aren’t using any of those, we make sure to check all logical paths before writing the code, so that every condition is anticipated and handled accordingly.
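To make that concrete, here is a minimal sketch of those idioms (the variable names and the toy workload are purely illustrative, not taken from any particular codebase):

```cpp
// build: g++ -std=c++17 -pthread sketch.cpp
#include <atomic>
#include <memory>
#include <thread>
#include <vector>

int main() {
    // RAII container: the vector owns its memory and frees it
    // automatically; there is no manual new/delete to get wrong.
    std::vector<int> values{1, 2, 3};

    // Shared ownership across threads: the control block is
    // reference-counted, and the last owner frees the data.
    auto shared = std::make_shared<std::vector<int>>(values);

    // Atomic accumulator: concurrent increments without a data race.
    std::atomic<int> sum{0};

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([shared, &sum] {  // each thread copies the shared_ptr
            for (int v : *shared) sum += v;
        });
    }
    for (auto& t : workers) t.join();

    // Four threads each add 1+2+3, so sum ends up as 24.
    return sum == 24 ? 0 : 1;
}
```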
Sure, it’s good to have a programming language that ensures you aren’t making those mistakes, so you can keep your mind on the business logic.
But when you aren’t using such a language, you are expected to keep those things in mind yourself.
So you will need to add to that: “We are lazy. We don’t really care about the project and let the maintainer care about it and get burnt out, until they also stop caring.”
In the same way that a developer should build things that are user-proof, language developers should build languages that are dev-proof.
I really don’t think you’re looking at this from the right angle. This isn’t about being lazy. This isn’t about not double-checking work.
My point is that statistically speaking, even the double checkers who check the work of the double checkers may, at some point, miss some really subtle, nuanced condition. Colloquially, these often fall under the category of critical zero-day bugs. Having a language that makes it impossible to even compile code that’s vulnerable to whole categories of exploits and bugs is an objective good. I’m a bit mystified why you’re trying to argue that it’s purely a skill/rigor issue.
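One small-scale illustration of that principle, using nothing more than C++’s own type system: std::unique_ptr makes double ownership (and therefore a double free) unrepresentable, turning a whole class of runtime bugs into compile errors. A borrow checker extends the same idea to references and concurrency.

```cpp
#include <memory>
#include <utility>

// std::unique_ptr is move-only: "two owners of the same allocation"
// simply cannot be written in a way that compiles.
int main() {
    auto p = std::make_unique<int>(42);

    // auto q = p;          // rejected at compile time: the copy
                            // constructor is deleted, so accidental
                            // shared ownership never reaches runtime
    auto q = std::move(p);  // transferring ownership must be explicit

    return (*q == 42) ? 0 : 1;
}
```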
Case in point: the LN-100 inertial nav unit used in the F-22 had a bug in it that caused the whole system to unrecoverably crash as the first squadron flew over the International Date Line while being deployed to Kadena Air Base in Japan. The only reason they didn’t have to ditch in the Pacific was that the tanker was still in radio range; they had to be shepherded back to Honolulu by the tanker, and Northrop Grumman flew an engineering team out to (very literally, heh) hotfix the planes on the tarmac, and then they continued on to Kadena without issue. TL;DR: even with systems that enforce extreme rigor (the code was developed and tested under DO-178B), mistakes can and do happen. Having a language that guards against that is just one more level of safety, and that’s a good thing.
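The actual defect was never made public, so the following is a purely hypothetical sketch of the *class* of bug (the function names and the naive formula are invented for illustration): a longitude delta computed without wraparound handling goes wrong exactly at the ±180° date line, while a wrap-aware version does not.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical only: a naive "how far east did we move" computation
// breaks precisely when a flight crosses the +/-180 degree meridian.
double naive_delta(double from_deg, double to_deg) {
    return to_deg - from_deg;  // fine... until the sign flips at 180
}

// A wrap-aware version normalises the difference into (-180, 180].
double wrapped_delta(double from_deg, double to_deg) {
    double d = std::fmod(to_deg - from_deg, 360.0);
    if (d <= -180.0) d += 360.0;
    if (d > 180.0)   d -= 360.0;
    return d;
}

int main() {
    // Flying east from 179 E to 179 W is really a 2 degree hop...
    assert(naive_delta(179.0, -179.0) == -358.0);  // naive: huge bogus jump
    assert(wrapped_delta(179.0, -179.0) == 2.0);   // wrap-aware: correct
    return 0;
}
```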
“Having a language that guards against that is just one more level of safety, and that’s a good thing.”
Yes, it is.
But my point is simply that “caring” about things needs to be normalised, instead of reflexive anti-pedantry and answering concerns with things like “chill, dude!”.
We know very well that not all bugs are memory-related.
Even the article in this thread states that the share of memory-safety vulnerabilities dropped from 76% to 24% after the introduction of Rust.
If you seriously think that, I’d like to see the water you walk on.
I definitely think that.
The rest, not so much.