A long time ago, I worked at a software company that developed a law practice management system. I was about to head off for law school and had my first PC. It was pretty great. The internet existed, but it was still something I only connected to when I needed it. Most of what I did on my computer was local to that device. There were occasional software updates but, for the most part, software changed with a new version: Windows 95, Corel’s purchase of WordPerfect, and so on. It is a very different world now, with our need to balance automatic updates against the risks they entail.
I am firmly in the “update everything immediately” camp. I worked with IT teams, and know of some that still operate with a mindset of selective updates: when a software developer provided a new release or patch, the team tested it thoroughly before pushing it out to the computers it managed. This takes time and resources, and still may not catch every possible failure, but it has the smart goal of reducing (a) downtime for the computer user and (b) spikes in technology support when a bunch of devices fail all at once.
This is not a small thing. A huge number of Windows computers went offline (Microsoft estimated 8.5 million) because they relied on a security package from CrowdStrike. Security apps on Windows have, until recently, been able to integrate very closely with the core software of the operating system (the kernel). That close integration meant that, when CrowdStrike pushed out a defective update, millions of computers stopped working. On the one hand, you want the latest security software on your PC when pretty much all PCs are now internet-facing and, possibly, always connected. On the other, organizations that waited to test the update may have spared themselves thousands of person-hours in tech support. One estimate put the financial impact at about $2.5 billion, as planes couldn’t fly, banks couldn’t trade, and lawyers couldn’t practice.
There has long been a debate about whether products like Norton (Symantec) and McAfee internet security create more potential for harm than benefit (and don’t get me started on Russia’s Kaspersky, even without the FCC’s take on it). Not because they are themselves malicious, but because that kernel access gave them heightened privilege. It is one reason I move every computer I touch, and can influence, to plain Windows Defender. Microsoft may further limit or remove kernel access for these security products after CrowdStrike, which would be a welcome change.
The ideal, then, is not to update everything immediately but to test changes and roll them out gradually. Unless, of course, you are an individual with neither the time nor the expertise to test, nor multiple devices (virtual or otherwise) to test on. That describes a vast number of personal devices, but also lots of solo and small businesses that shoestring their IT support. In fact, as the CrowdStrike event showed, I think lots of organizations enable some immediate updates as they balance resource demands, even when they test other updates before distributing them.
A lot of people are facing the end of life of Windows 10, which looms on October 14, just before Halloween this year. It’s been a good 10-year run, but even an operating system reaches the end of the road. People who are not watching (or who have escaped Microsoft’s nagging within the operating system) may continue to blithely rely on out-of-date software without realizing that their risk grows the longer they delay an upgrade.

Then there are the updates that work too well, as WordPress site owners found out this week when Gravity Forms revealed it had been exploited. Again, as an update-often person, I auto-update WordPress plugins on my site, and I recommend it on sites that I have managed. In part, it’s because I have dealt with a site that was left to go without updates and the work involved to fix it. I would rather have a site stop working (and spot the latest update and roll it back) than have a site breached without knowing about it.
For websites using Gravity Forms (and I’m confident there are law libraries using it) the automatic updates were a damned-if-you-do, damned-if-you-don’t situation. Gravity Forms’ update server had been hacked, so what looked like an official update actually carried additional malicious code. These supply-chain attacks mean that your site may be entirely secure and properly patched but, by doing the right thing, you are put at risk by someone upstream.
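One partial defense against this kind of upstream tampering is verifying a downloaded package against a checksum published out of band (on a vendor status page, say, rather than on the update server itself). WordPress does not do this for premium plugins like Gravity Forms, so the sketch below is purely illustrative: the file name and the expected-hash value are placeholders I made up, not anything Gravity Forms publishes.

```python
# Sketch: verify a downloaded plugin archive against a checksum
# published out of band. The expected hash you pass in would come
# from a channel separate from the download itself; the values in
# any example call are hypothetical.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_update(archive: Path, expected_sha256: str) -> bool:
    """True only if the archive matches the published checksum."""
    return sha256_of(archive) == expected_sha256.lower()
```

A checksum only helps if the attacker cannot also alter the published hash, which is why it needs to live somewhere other than the compromised download path; code signing takes the same idea further.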
I was thinking about supply chain attacks before the Gravity Forms breach in the context of law firms. Recently, a Washington D.C. law firm was infiltrated via its employees’ Microsoft 365 accounts and information was accessed from their mailboxes. This is not terribly surprising as law firms have been seen for some time as a weak link in corporate security. There may be value in what a law firm or legal organization controls for itself (employee PII, payroll, etc.), but there is definitely value in what information they control for their clients: trade secrets, contract negotiation and litigation strategies, and so on.
Unlike CrowdStrike or Gravity Forms, the law firm breach does not sound like a software-based exploit or hack. This sounds like PEBCAK (problem exists between chair and keyboard). The unauthorized parties “[broke] into the email accounts of attorneys and advisers….” I mean, maybe Microsoft 365 was hacked, but I would bet that it was more specific to those lawyers. Perhaps they re-used passwords exposed in another breach, or added a browser extension or phone app that allowed browser cookies to be copied. As someone noted, D.C. is a breach reporting jurisdiction, so it would be interesting to FOIA law firm breach reports to see which firms have suffered breaches and how they were exploited.
The human link in the chain always seems to me the hardest to strengthen. You can test people with phishing emails or require them to use password managers, but eventually they will find a way, on purpose or by inattention, to rebalance friction and risk. A password manager cannot stop you from re-using the same password on every site, or from using one that includes your initials and the year you graduated high school.
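The reuse problem is at least easy to detect: a vault is only as safe as its least-unique entry. Here is a minimal sketch of the idea; the vault-as-dictionary shape and the function name are my own invention for illustration, not any real password manager's export format or API.

```python
# Sketch: flag sites that share a password in a (hypothetical)
# site -> password mapping. Real managers do this kind of audit
# internally; this just shows the shape of the check.
from collections import defaultdict


def find_reused(vault: dict[str, str]) -> list[set[str]]:
    """Group site names that share a password.

    Returns one set of sites per password used on more than one site.
    """
    by_password = defaultdict(set)
    for site, password in vault.items():
        by_password[password].add(site)
    return [sites for sites in by_password.values() if len(sites) > 1]
```

A real audit would compare hashes rather than plaintext and would also check entries against known-breached password lists, but the grouping logic is the same.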
At the same time, I am not sure I would call people the weakest link. We have to balance risk and friction (the number and difficulty of steps people have to take to maintain secure devices and systems), and people are just one of those steps. My inclination remains to err on the side of automatic, untested updates unless there is a better solution for a person, family, or organization. But the increasing complexity and the pace and scope of system updates and changes mean that we probably need to get accustomed to monitoring our weak links, and perhaps assume that all links are somewhat fragile.