24-09-2021 Abhishek Sharma

The Perils of a Pipeline

The Pipeline – the backbone of any delivery ecosystem. In the world of software delivery, the pipeline is an integral part of the automation effort to delight customers by releasing software early and often. When not maintained and not attended to, the pipeline can become an inhibitor of flow and a breeding ground for cybercriminals. This short article is an effort to describe and unravel some of the hidden dangers a pipeline could potentially bear:

  1. The Bloated Pipeline
  2. The Twin Pipeline
  3. The Unattended Pipeline
  4. The Vulnerable Pipeline

The Bloated Pipeline

Spikes and proofs of concept are very common in today’s software development. When engineers attend a security conference or a DevOps talk, they are persuaded to try new tools and integrate them into their pipelines. Of all the practices and processes that ensure quality and security, integrating tools into the pipeline is the easiest. A couple of lines of script, or even GUI-assisted drag and drop, makes integrating security and quality tools into the pipeline a swift experience. However, that is only the beginning; the real work starts when a security tool in the pipeline begins dumping vulnerabilities as build output. There is no dearth of free and open source tooling readily available to be plugged into build pipelines.
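
As an illustration, the glue that wires a scanner into a build stage can be as small as the sketch below. The scan-tool command is a hypothetical placeholder for whichever scanner a team adopts, and the step simply fails the build when the scanner exits with a non-zero code.

```python
# Minimal sketch of a pipeline step that wraps a security scanner.
# "scan-tool" is a hypothetical CLI placeholder; substitute the scanner
# the team actually adopts.
import subprocess
import sys

def run_scan(target_dir: str) -> int:
    """Invoke the scanner and return its exit code (non-zero fails the build)."""
    result = subprocess.run(["scan-tool", "--target", target_dir])
    return result.returncode

if __name__ == "__main__":
    exit_code = run_scan("src/")
    if exit_code != 0:
        print("Security scan reported findings; failing the build.")
    sys.exit(exit_code)
```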

Over a period, more and more tools are integrated, sometimes without assessing the real value they add to the quality and security of the software. Static code analyzers are known to catch software defects early in the software development life cycle, but they are also well known for the number of false positives they raise in their default configurations. As a result of all the new tooling, the pipeline bloats and inhibits the delivery flow. Too many tools can be of too little value when not properly tuned to the software’s context. Some of these tools provide auto-update options that keep their versions and scan signatures current, while others quickly become outdated.
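
One hedge against that noise is to post-process the scanner output before deciding whether to break the build. The sketch below assumes a simple JSON report format (a list of findings with id, severity, and file fields) purely for illustration; real tools emit richer, differently shaped reports.

```python
# Sketch of post-processing a scanner's JSON report to cut noise.
# The report format is an assumption for illustration; adapt it to the
# tool in use.
import json

SUPPRESSED_IDS = {"RULE-1001"}          # rules triaged as false positives
FAIL_SEVERITIES = {"HIGH", "CRITICAL"}  # only these should break the build

def actionable_findings(report_path: str) -> list:
    with open(report_path) as fh:
        findings = json.load(fh)
    return [
        f for f in findings
        if f["severity"] in FAIL_SEVERITIES and f["id"] not in SUPPRESSED_IDS
    ]

if __name__ == "__main__":
    remaining = actionable_findings("scan-report.json")
    print(f"{len(remaining)} actionable findings")
    raise SystemExit(1 if remaining else 0)
```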

The Twin Pipeline

When a pipeline needs to be troubleshot, at least in development and test environments, engineers often go ahead and clone it, creating a new twin pipeline in which to troubleshoot, identify, and remediate the failure. In reality a pipeline can fail for a myriad of reasons: an error from the underlying container instances or the cloud host, network interruptions, failing tools integrated into the pipeline, or a failure induced by a quality or security gate. Once the issue in question is resolved, it is entirely up to the engineer who created the clone to delete it. In globally distributed DevOps teams, there are sometimes over 30 pipelines lying dormant, doing nothing, where only three pipelines would suffice, such as a CI incremental build, a full build, and a nightly build. These twin pipelines can eventually go unattended and become evil twins.
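
A lightweight housekeeping script can surface such dormant pipelines for review. The sketch below assumes a REST endpoint, authentication scheme, and response shape that are purely illustrative; the idea is simply to flag pipelines with no runs in the last 90 days as deletion candidates.

```python
# Sketch of a housekeeping script that flags dormant pipelines.
# The CI server URL, endpoint, and response shape are assumptions for
# illustration; timestamps are assumed to be ISO 8601 with an offset.
from datetime import datetime, timedelta, timezone
import requests

CI_API = "https://ci.example.com/api/pipelines"   # hypothetical endpoint
DORMANT_AFTER = timedelta(days=90)

def dormant_pipelines(token: str) -> list:
    resp = requests.get(CI_API, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    cutoff = datetime.now(timezone.utc) - DORMANT_AFTER
    return [
        p["name"] for p in resp.json()
        if datetime.fromisoformat(p["last_run"]) < cutoff
    ]

if __name__ == "__main__":
    for name in dormant_pipelines(token="<redacted>"):
        print(f"Candidate for deletion: {name}")
```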

The Unattended Pipeline

To maintain backward compatibility across software releases, a separate release pipeline is often created for every version of a release. Smarter teams manage build pipelines as code and pass arguments to parameter-based pipelines; many other teams just create a new pipeline for every release, and the management of these pipelines can quickly get out of hand, resulting in loss of accountability and attention. Some teams prefer cleaning up pipelines on a cadence; some prefer parameterizing the variable parts of the pipeline. In any case, when there is no accountability for a pipeline, it becomes unattended and could eventually become outdated and vulnerable.
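
A single parameterized entry point, sketched below, illustrates the pipelines-as-code alternative to one pipeline per release. The branch naming convention and build commands here are assumptions, not a prescription; the point is that the release version arrives as an argument rather than as a cloned pipeline.

```python
# Sketch of a parameterized build entry point instead of one pipeline
# per release; branch convention and build commands are assumptions.
import argparse
import subprocess

def build(release: str, run_nightly_suite: bool) -> None:
    branch = f"release/{release}"                      # assumed branch convention
    subprocess.run(["git", "checkout", branch], check=True)
    subprocess.run(["make", "build"], check=True)      # assumed build command
    if run_nightly_suite:
        subprocess.run(["make", "nightly-tests"], check=True)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Parameterized release build")
    parser.add_argument("--release", required=True, help="e.g. 2.4")
    parser.add_argument("--nightly", action="store_true")
    args = parser.parse_args()
    build(args.release, args.nightly)
```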

The Vulnerable Pipeline

Poorly managed access control, improper system hardening, untimely patching, and build machines with unlimited resources are only a few of the reasons that make a pipeline vulnerable. A couple of years ago, an antivirus vendor shipped a system cleaning utility that started spreading malware to its installed user base. It was found that one of their build systems had been compromised, and attackers took advantage of it to inject malware into a legitimate software build, raising several questions about software supply chain vulnerabilities.

In a similar way, a developer might not have direct production access to run a certain background task, yet might have edit access to a pipeline that itself has unfettered access to run production tasks. The devil is in the details: a developer without direct production access has indirect production access through the pipeline. Access to pipeline rights needs to be managed carefully, without creating friction in the delivery process. Taking advantage of privileged identity management services and adhering to decades of IT security best practices goes a long way in securing the pipelines and the infrastructure that runs them.
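
A periodic audit can surface this kind of indirect access. The sketch below uses hard-coded stand-ins for the two data sources; in practice the sets of pipeline editors and production role holders would come from the CI system and the identity provider.

```python
# Sketch of a simple audit: who can edit pipelines that reach production
# but holds no production role themselves? The data here is a stand-in
# for exports from the CI system and the identity provider.
PIPELINE_EDITORS = {"alice", "bob", "carol"}     # users with pipeline edit rights
PRODUCTION_ROLE_HOLDERS = {"alice"}              # users with an approved prod role

def indirect_prod_access() -> set:
    """Users whose only path to production is through pipeline edits."""
    return PIPELINE_EDITORS - PRODUCTION_ROLE_HOLDERS

if __name__ == "__main__":
    for user in sorted(indirect_prod_access()):
        print(f"Review pipeline permissions for: {user}")
```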

In building defensive security strategies, minimizing the attack surface means minimizing the number of assets that need to be managed and controlled. The fewer the tools and pipelines, the easier it becomes to reduce exposure. From a lean perspective, tools and pipelines can end up introducing waste rather than eliminating it. Security is a balancing act: it is imperative that there is enough security focus in software delivery, and that the security efforts do not inhibit flow, introduce waste, or create other frictions.