…according to a Twitter post by the Chief Information Security Officer of Grand Canyon Education.
So, does anyone else find it odd that the file that caused CrowdStrike to freak out, C-00000291-00000000-00000032.sys, was 42KB of blank/null values, while the replacement file, C-00000291-00000000-00000033.sys, was 35KB and looked like a normal, if obfuscated, sys/.conf file?
Also, apparently CrowdStrike had at least 5 hours to work on the problem between the time it was discovered and the time it was fixed.
If I had to bet my money, I’d say a machine with corrupted memory pushed the file at the very final stage of the release.
The astonishing part is that, for security software, I would expect every file to be verified against a signature (which would have prevented this issue, and some kinds of attacks too).
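Even a bare-bones check would do it. Here’s a rough sketch (not CrowdStrike’s actual code; the function name and key handling are made up for illustration) of verifying a channel file against a vendor Ed25519 signature with OpenSSL 1.1.1+ before the driver ever parses it:

```c
/* Sketch: reject any update file that doesn't carry a valid vendor
 * signature. Assumes an Ed25519 public key baked into the binary and
 * a detached signature shipped alongside the update. */
#include <stddef.h>
#include <openssl/evp.h>

int update_is_authentic(const unsigned char pub[32],
                        const unsigned char *sig, size_t sig_len,
                        const unsigned char *file, size_t file_len)
{
    int ok = 0;
    EVP_PKEY *key = EVP_PKEY_new_raw_public_key(EVP_PKEY_ED25519, NULL, pub, 32);
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();

    if (key && ctx &&
        EVP_DigestVerifyInit(ctx, NULL, NULL, NULL, key) == 1 &&
        EVP_DigestVerify(ctx, sig, sig_len, file, file_len) == 1)
        ok = 1;   /* file is bit-for-bit what the vendor signed */

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return ok;    /* 0 = reject the update, keep running on the old one */
}
```

A 42KB file of nulls fails that check instantly, because nobody ever signed 42KB of nulls.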
So here’s my uneducated question: Don’t huge software companies like this usually do updates in “rollouts” to a small portion of users (companies) at a time?
I mean yes, but one of the issues with “state of the art AV” is that they’re trying to roll out updates faster than bad actors can push out code exploiting newly discovered vulnerabilities.
The code/config/software push may have worked on some test systems but MS is always changing things too.
Companies don’t like to be beta testers. Apparently the solution is to just not test anything and call it production ready.
Every company has a full-scale test environment. Some companies are just lucky enough to have a separate prod environment.
From my experience, it’s more likely to have been an accidental overwrite from human error, after recent policy changes removed vetting steps.
Quick development will probably spell the end of the internet once AI code generation hits its stride. It’ll be like the most top-heavy Scrum you’ve ever seen, with the devs literally incapable of disagreeing.
I was thinking about his stint at McAfee, and I think you’re right. My real question is: will the next company he golden parachutes off to learn the lesson?
I’m going to bet not.
Every affected company should be extremely thankful that this was an accidental bug, because if CrowdStrike gets hacked, the bad actors could ransom who knows how many millions of computers overnight.
Not to mention that CrowdStrike will now be a massive target for hackers trying to do exactly that.
Don’t Google SolarWinds
Holy hell
New vulnerability just dropped
Oooooooo this one again thank you for reminding me
I’m not a dev, but don’t they have like A/B updates, or at least test their updates in a sandbox before releasing them?
It could have been the release process itself that was bugged. The actual update that was supposed to go out was tested and worked; then the upload was corrupted or failed. They need to add tests on the actual released artifact instead of a local copy.
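Something like this would be a cheap post-release gate (names made up for illustration): hash the artifact the CDN is actually serving and compare it against the build that passed testing.

```c
/* Sketch: detect upload corruption by comparing SHA-256 digests of the
 * tested build and the artifact downloaded back from the release CDN. */
#include <string.h>
#include <openssl/evp.h>

int released_matches_tested(const unsigned char *tested, size_t tested_len,
                            const unsigned char *released, size_t released_len)
{
    unsigned char a[EVP_MAX_MD_SIZE], b[EVP_MAX_MD_SIZE];
    unsigned int alen = 0, blen = 0;

    if (EVP_Digest(tested, tested_len, a, &alen, EVP_sha256(), NULL) != 1 ||
        EVP_Digest(released, released_len, b, &blen, EVP_sha256(), NULL) != 1)
        return 0;

    /* truncation or zeroed-out blocks during upload show up right here */
    return alen == blen && memcmp(a, b, alen) == 0;
}
```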
one would think. apparently the world is their sandbox.
Ah, a classic off by 43,008 zeroes error.
The fact that a single bad file can cause a kernel panic like this tells you everything you need to know about using this kind of integrated security product. CrowdStrike is apparently a rootkit, and Windows apparently has zero execution integrity.
This is a pretty hot take. A single bad file can topple pretty much any operating system depending on what the file is. That’s part of why it’s important to be able to detect file corruption in a mission critical system.
This was a binary configuration file of some sort though?
Something along the lines of:
IF (config.parameter.read == garbage) { Dont_panic; }
Would have helped greatly here.
Edit: oh it’s more like an unsigned binary blob that gets downloaded and directly executed. What could possibly go wrong with that approach?
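Even treating it as an opaque blob, a handful of sanity checks before trusting anything in it would have been enough. A rough sketch with a completely made-up header layout:

```c
/* Sketch of defensive parsing for an opaque binary config. The layout
 * (magic, version, 16-byte entries) is hypothetical; the point is that
 * a file of all zeroes fails the very first check instead of being used. */
#include <stdint.h>
#include <string.h>

#define CFG_MAGIC 0x43464731u   /* "CFG1", made up */

struct cfg_header {
    uint32_t magic;
    uint32_t version;
    uint32_t entry_count;
    uint32_t entry_offset;      /* where the entry table starts in the blob */
};

int cfg_validate(const unsigned char *blob, size_t len)
{
    struct cfg_header h;

    if (len < sizeof h)
        return -1;                      /* too short to even hold a header */
    memcpy(&h, blob, sizeof h);         /* avoid unaligned direct reads */

    if (h.magic != CFG_MAGIC)
        return -1;                      /* the all-zero file dies right here */
    if (h.entry_offset > len ||
        h.entry_count > (len - h.entry_offset) / 16)
        return -1;                      /* every offset must stay in bounds */

    return 0;                           /* now it's safe to actually parse */
}
```

Fail the check, log it, keep running on the previous definitions. None of that requires crashing the kernel.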
We agree, but they were responding to “windows apparently has zero execution integrity”.
I’m not sure why you think this statement is so profound.
CrowdStrike is expected to have kernel level access to operate correctly. Kernel level exceptions cause these types of errors.
Windows handles exceptions just fine when code is run in user space.
This is how nearly all computers operate.
Yeah, pretty much all security products need kernel-level access, unfortunately. The Linux ones, including CrowdStrike, and the open-source tools SELinux and AppArmor, all need some kind of in-kernel component in order to work.
CrowdStrike has caused issues like this on Linux systems in the past, but it sounds like they have since moved to eBPF user mode by default (I don’t know enough about low-level Linux to understand that though, haha), and it now can’t crash the whole computer. source
As explained in that source, eBPF code still runs in kernel space. The difference is that it isn’t Turing-complete, and the kernel’s verifier has protections in place to make sure it can’t do anything too nasty. That said, I’m sure you could still break something like networking or critical services by loading the wrong eBPF code. It’s on the authors of the software to thoroughly test and review it before release if it’s designed to work with the kernel, especially in enterprise environments. I’m glad this is something they’re doing now, though.
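For the curious, this is roughly what that strictness looks like in practice. A minimal libbpf-style sketch (assuming a clang/libbpf toolchain) that counts openat() syscalls; the verifier refuses to load the program if the NULL check on the map lookup is missing:

```c
/* Minimal eBPF sketch: count openat() syscalls from kernel space. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} openat_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_openat")
int count_openat(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&openat_count, &key);

    if (val)                          /* the verifier rejects the program */
        __sync_fetch_and_add(val, 1); /* if this NULL check is missing    */
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

That’s the tradeoff: the code still runs in kernel space, but an unsafe program is rejected at load time instead of taking the machine down.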
At least SELinux doesn’t crash on bad config file
I am not praising CrowdStrike here. They fucked up big time. I am saying that the concept of security software needing kernel access isn’t unheard of, and is unfortunately necessary for a reason. There is only so much a security product can do without that kernel-level access.
Security products of this nature need to be tight with the kernel in order to actually be effective (and prevent actual rootkits).
That said, the old mantra of “with great power” comes to mind…
How can all of those zeroes cause a major OS crash?
If I send you on stage at the Olympic Games opening ceremony with a sealed envelope
And I say “This contains your script, just open it and read it”
And then when you open it, the script is blank
You’re gonna freak out
Maybe. But I’d like to think I’d just say something clever like, “says here that this year the pommel horse will be replaced by yours truly!”
Problem is that software can’t deal with unexpected situations the way a human brain can. Computers do exactly what the programmer tells them to do, nothing more, nothing less. So if a situation arises that the programmer hasn’t written code for, there will be a crash.
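Concretely, a naive parser does something like this (made-up layout, not the real driver): it was told “the header holds the address of a rule table,” so that’s exactly what it does, even when the whole file is zeroes.

```c
#include <stdint.h>

/* hypothetical layout the programmer expected */
struct naive_header {
    uint32_t rule_count;
    uint64_t table_addr;   /* "the rule table lives here" */
};

uint32_t first_rule_id(const unsigned char *blob)
{
    const struct naive_header *h = (const struct naive_header *)blob;
    const uint32_t *table = (const uint32_t *)(uintptr_t)h->table_addr;

    /* with 42KB of zeroes, table_addr is 0, so this dereferences
     * address 0: in user space the process dies, in kernel mode
     * the whole machine blue-screens */
    return table[0];
}
```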
Poorly written code can’t.
In this case:

- Load config data
- If data is valid:
  - Use config data
- If data is invalid:
  - Crash entire OS

is just poor code.
If AV suddenly stops working, it could mean the AV is compromised. A BSOD is a desirable outcome in that case. Booting a compromised system anyway is bad code.
You know there’s a whole other scenario where the system can simply boot the last known good config.
And what guarantees that that “last known good config” is available, isn’t compromised, and that no malicious actor is trying to force the system onto a config with a known vulnerability?
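Fair objection, and it’s why a fallback only works if every candidate passes the same checks. A sketch that reuses the hypothetical helpers from the earlier sketches: the fallback can’t be forged without the vendor’s signing key, and only when both files fail is refusing to run justified.

```c
#include <stddef.h>

/* hypothetical helpers from the sketches above */
int update_is_authentic(const unsigned char *pub,
                        const unsigned char *sig, size_t sig_len,
                        const unsigned char *file, size_t file_len);
int cfg_validate(const unsigned char *blob, size_t len);

enum cfg_choice { CFG_NEW, CFG_LAST_GOOD, CFG_NONE };

enum cfg_choice pick_config(const unsigned char *pub,
                            const unsigned char *new_blob, size_t new_len,
                            const unsigned char *new_sig, size_t new_sig_len,
                            const unsigned char *old_blob, size_t old_len,
                            const unsigned char *old_sig, size_t old_sig_len)
{
    if (update_is_authentic(pub, new_sig, new_sig_len, new_blob, new_len) &&
        cfg_validate(new_blob, new_len) == 0)
        return CFG_NEW;            /* happy path: the update is good */

    if (update_is_authentic(pub, old_sig, old_sig_len, old_blob, old_len) &&
        cfg_validate(old_blob, old_len) == 0)
        return CFG_LAST_GOOD;      /* degraded protection, but it boots */

    return CFG_NONE;               /* both bad: now halting is defensible */
}
```

A real scheme would also pin a minimum version number, so an attacker can’t force a rollback to an older signed-but-vulnerable config.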
If it had been all ones this could have been avoided.
Just needed to add 42k of ones to balance the data. Everyone knows that, like tires, you need to balance your data.
School districts were also affected… at least mine was.