Why an “only security updates” policy is not sufficient

Many organisations, as a matter of principle, apply only those product updates that are explicitly marked as security fixes. I argue that this policy is insufficient, with examples of how general updates also affect security.

In Linux, package updates can break things. It’s not frequent, but updates sometimes introduce incompatible changes even in stable distributions. This low-likelihood risk is easily mitigated with a pre-production environment and comprehensive functional testing. Nevertheless, some organisations, as a matter of policy, will only install updates explicitly marked as fixing specific security vulnerabilities.

Why is this wrong from a security point of view?

Firstly, functional updates to projects such as the Linux kernel, libc or systemd often bring substantial security improvements. Even when they don’t fix any particular vulnerability, they improve system hardening and defence in depth, in some cases dramatically reducing the chances of a zero-day being successfully exploited.

The article “Reducing your attack surface with systemd” gives some examples.
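As an illustration, a hardened service can opt into systemd’s sandboxing features with a drop-in like the following. This is a minimal sketch: the unit name `myapp.service` is hypothetical, and which directives are available depends on your systemd version, which is exactly why staying on old functional releases forfeits these protections.

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf (hypothetical unit)
[Service]
# Prevent privilege escalation via setuid/setgid binaries
NoNewPrivileges=yes
# Mount the OS read-only for this service; hide home directories
ProtectSystem=strict
ProtectHome=yes
# Give the service a private /tmp, invisible to other processes
PrivateTmp=yes
# Block kernel module loading and writes to kernel tunables
ProtectKernelModules=yes
ProtectKernelTunables=yes
# Allow only IP sockets; no AF_UNIX, AF_NETLINK, AF_PACKET etc.
RestrictAddressFamilies=AF_INET AF_INET6
# Restrict the service to the system-call set typical for daemons
SystemCallFilter=@system-service
```

Running `systemd-analyze security myapp.service` reports an exposure score for the unit, which makes the effect of each added directive easy to verify.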

Secondly, holding back regular package updates increases technical debt in the system, and over time you may end up with a system that is simply insecure, even if all of the vendor-provided security patches are formally installed.

One such case I had to deal with in a client system involved GeoIP databases: their servers relied on MaxMind GeoIP databases for some controls, and these databases were updated by scripts packaged by Debian. MaxMind then changed the update API, and while these changes were of course reflected in updated Debian packages, those updates were not security fixes and, per the client’s policy, were not installed. This eventually led to the GeoIP databases becoming dramatically out of date over time, and the controls based on them were rendered useless.

I had to resolve a similar case with the ClamAV freshclam utility, used to update malware signatures, which over time introduced significant improvements in private mirror management and HTTPS support, both of which also resulted in incompatible changes to the freshclam daemon’s configuration and operation. As these weren’t formally “security fixes”, they weren’t installed either, and the whole signature update system gradually slipped into oblivion.

Conclusions and recommendations:

  • Install all package updates published for your Linux distribution, including both functional and security updates. The risk of breaking things can be easily mitigated by first applying updates to pre-production and confirming correct operation with functional testing. In an ideal scenario, all updates are then propagated to production after a short delay (e.g. one day) or at the nearest maintenance window.
  • Design systems, applications and CI/CD pipelines for frequent restarts and reboots. A system that is “too important to reboot” inevitably degrades into a museum of ancient exploits, and someone eventually visits it and makes use of them. The chances of a serious update failure only increase with every postponed reboot or service restart, which in turn decreases the team’s willingness to undertake such a massive operation, further increasing technical debt.
  • From first-hand experience, if the system is designed for frequent reboots and service restarts, the cost of maintenance and updates becomes very small, as it is evenly distributed over time. It’s quick and easy to update a web app for the incompatible changes between Django 3.0 and 3.1, but it’s practically a major rewrite to migrate from Django 1.0 to 3.1, and the effort grows further still because you will also need to migrate off end-of-life versions of the database, cache, libraries and so on.
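On Debian and its derivatives, the first recommendation can be encoded directly in the unattended-upgrades package, whose default configuration enables only the security origin. A minimal sketch of widening it to regular stable updates as well (the origin patterns below are the ones Debian ships; adjust for your distribution and release):

```ini
// /etc/apt/apt.conf.d/50unattended-upgrades
// Accept regular stable updates, not just the security suite.
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-updates";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
// Reboot automatically when an update requires it,
// inside a defined maintenance window.
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```

Pointing pre-production at this configuration first, and production a day later, gives exactly the short propagation delay described above.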

I’m on Mastodon and Twitter, feel free to comment!