Originally posted on Sonatype.com

I learned about Buckminster Fuller while frantically drawing my way through an architecture degree in college. Fuller was an inspirational architect and the inventor of the geodesic dome.

He had a saying that stuck in my head: “A fool with a tool still remains a fool.” When I hear organizations discussing the adoption of “DevOps” or “DevSecOps,” I never hear talk of culture or practice. Unfortunately, the conversation lands on tooling: which tools they already have available to automate with.

This approach is often an Epic Failure waiting to happen. Just because you have a tool doesn't mean you need to use it, and there's no guarantee it's the right tool for the job. It's hard to determine which comes first: the fool or the tool.

Unintended Consequences

I had the opportunity to kick off the lightning talks at DevOps Enterprise Summit 2019 in Las Vegas a few weeks back. I decided I was going to talk about an example of using the wrong tool for the wrong job.

My goal was to build on a story I wrote about in Epic Failures of DevSecOps, where I talked about some of the hurdles I faced while integrating security controls into DevOps pipelines. I began by painting a picture: put yourself in the shoes of a developer who has just checked in code that fixes a critical security vulnerability in a piece of software.

Once the code is checked in, the first thing that should happen is a scan for security vulnerabilities. Normally this happens without a hitch, but this time the automated build pipeline grinds to a halt, because another build has been scanning for almost 10 hours and is only 10% complete.

What happens? The feature team disables the security control, pushes the build through, and vulnerable software is released to the customer.

The unfortunate result of this bypass: in this case, no one let the security team know that the control had been removed.

Turning Failure into Opportunity

This time around, I discussed a few ways we can all learn from that failure.

First — We shouldn't have pushed a tool into a production pipeline without understanding how it would perform against our applications. Ideally, it should have been tested in an identical pipeline, where we could measure performance without production impact.

Had we done this, we could have put a plan together to scan large codebases out of band, parallel to the testing of the application, and correlated the results back to the build tag.
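That out-of-band pattern can be sketched in a few lines: run the scan alongside the test stage and file the findings under the build tag so they can be correlated later. This is a minimal illustration, not a real pipeline; `ScanRegistry`, `scan_out_of_band`, and the injected `run_scan` callable are all hypothetical names standing in for your scanner and results store.

```python
import threading
from dataclasses import dataclass, field


@dataclass
class ScanRegistry:
    """Stores findings keyed by build tag, so out-of-band scan
    results can be correlated back to the build that produced them."""
    results: dict = field(default_factory=dict)
    _lock: threading.Lock = field(default_factory=threading.Lock)

    def record(self, build_tag, findings):
        with self._lock:
            self.results[build_tag] = findings

    def findings_for(self, build_tag):
        with self._lock:
            return self.results.get(build_tag)


def scan_out_of_band(build_tag, registry, run_scan):
    """Kick off the scan in a background thread so the pipeline can
    keep running tests in parallel; results land in the registry
    under the build tag.  run_scan stands in for the real scanner
    invocation (hypothetical -- swap in your SAST/SCA tool)."""
    def worker():
        registry.record(build_tag, run_scan())

    t = threading.Thread(target=worker)
    t.start()
    return t  # the pipeline can join() after the test stage finishes
```

The key design point is the build tag as the correlation key: the scan no longer gates the build, but every finding still traces back to the exact build that produced it.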

Second — We should have consulted a reference architecture or researched best practices to determine where a particular tool should ultimately go. With such a vast number of people attempting to add security into their pipelines, we should have worked with the community.

Third — We shouldn’t have dropped a tool into a pipeline without a playbook to help diagnose build issues. Documentation would have helped as well, especially with regard to terminating a long-running scan.

You can watch my full lightning talk below.

We should have enabled developers to terminate a scan holding up their build, and to fail a stalled build with a message that it was blocking application delivery. Better yet, we should have automated that.
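A minimal sketch of that automation, assuming the scanner runs as an external command: give the scan a time budget, kill it when the budget is exhausted, and fail the build with an explicit message instead of letting it stall silently. `run_scan_with_timeout` and the budget value are illustrative, not a real tool's API.

```python
import subprocess


def run_scan_with_timeout(cmd, timeout_seconds):
    """Run a security scan, but terminate it if it exceeds its time
    budget, and fail the build with a clear message rather than
    letting a stalled scan block delivery indefinitely."""
    try:
        completed = subprocess.run(cmd, timeout=timeout_seconds)
        return completed.returncode, "scan completed"
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child process before raising,
        # so nothing is left hanging around in the pipeline.
        return 1, (f"scan exceeded its {timeout_seconds}s budget and was "
                   "terminated; build failed so delivery is not blocked silently")
```

Because the build fails loudly with a reason, the security team sees the terminated scan in the build log instead of discovering, after the fact, that a control was quietly bypassed.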

To conclude: don’t make assumptions about tools, and never start a conversation about DevOps or DevSecOps practice by suggesting them. Nail down the technique and understand what cultural changes are needed. Security engineers should follow the same processes for selecting tools as the DevOps teams they support.

Build safer software sooner and put things into your pipeline intelligently.

Yeah… let's try that.