DevOps, what IT can learn from Manufacturing

A while ago I was asked to give a presentation on trends in automation and cloud. I titled it “Industrialize your Information Technology.” I often have the impression IT is still seen by many as craftsmanship, particularly when we include the whole development cycle. This reminded me of when I started working, more than 30 years ago, when the industry was buzzing with a concept called “design for manufacturability”. It preceded the lean movement and TPS, the Toyota Production System. The idea was to stop designing products in isolation, without thinking about how they would actually be produced. Isn’t that exactly what is still happening in many IT shops?

In the 90’s, one of the key books I returned to often was “The Goal” by Eliyahu Goldratt, which is why I was not surprised to find plenty of references to it in “The Phoenix Project.” All this to say that fundamentally, 25 years after manufacturing, IT is taking the same route to industrialization.

The end of craftsmanship

For many years, software development was somewhat of an art. You had it or you didn’t. Some people were able to code software straight from the basic requirements, while others went through lengthy specification development, architecture design and so on. Typically a stepwise approach was taken, often referred to as “waterfall”. As a user, you identified requirements and then hoped that 6, 9 or 12 months later something would come out of the pipe that would somewhat resemble what you had asked for.

That was the way people did things, and from an IT perspective, it worked. The first time I was confronted with a different way of thinking was when I read a book about Microsoft called “Microsoft Secrets”, which described how, with a “zero defect” goal, they introduced a “spiral lifecycle model”: an approach where, prototype after prototype, additional functionality was added. Later I discovered this approach had been introduced by Barry Boehm in 1986 under the name “the spiral model”, which looks like the ancestor of agile development.

The DevOps divide

Somewhere down the road, software is transferred to operations to be taken into production. That’s when the fun starts. Indeed, OS and middleware versions often differ from the ones already in production. And just like 30 years ago in manufacturing, it’s up to the operations people to make the new software run in production.

In manufacturing, this was resolved by getting the product development teams to work closely with the production teams in the early stages, making the move to production much smoother. Some companies assign a couple of product developers to follow the product through the early production stages, while a couple of production engineers join the R&D teams to point out how small changes in design can greatly facilitate production. This drastically reduces the time needed to take a new product into production.

DevOps, at its essence, should be all about collaboration: smoothing the way software is taken into production. This requires three key elements: standardization, automation and governance.

Standardization

Is there really a need to install every database differently? Does every middleware component need different installation parameters for every program? Probably not. So the question is: how can we standardize component installations, making sure that, whenever possible, they are installed in the same way? This reduces operational effort and management costs. It may also reduce license and support costs by limiting the number of software versions in production. Fundamentally, by standardizing the way applications and their components are installed, we make operations easier.

But to make that work, the development teams need to take the standard installations into account when developing the applications. This is where dev and ops come together. The second element, automation, plays a key role here.

Automation

Automation facilitates the standardization of installations through the use of topologies. Indeed, to automate, one needs to describe to the tool how the installation should take place; scripts, workflows or topologies can be used to do that. By maintaining a repository of automation scripts for databases, middleware, operating systems and other components, the automation designer gives the developer a number of options to choose from. The developer no longer needs to define the installation of all these components, which saves time.

Using a standard such as TOSCA (Topology and Orchestration Specification for Cloud Applications) from OASIS, companies maintain flexibility in which design and automation tools to use. This avoids lock-in, ensuring the company can continuously take advantage of the latest improvements in technology.

TOSCA uses the concept of nodes. Nodes define how infrastructure and application components are installed. They also call upon artefacts (scripts, installables, images, …) to install the appropriate component. Nodes are linked together through relationships, which identify, among other things, the sequence in which the nodes are actually installed.

When designing the application topology, the developer combines existing nodes for middleware and infrastructure, and only has to create the nodes for the application elements themselves.
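To make this concrete, here is a minimal sketch of what such a topology could look like in TOSCA’s YAML rendering. The node names and the install-script path are invented for illustration; only the node types come from the TOSCA Simple Profile:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: >
  Illustrative topology: an application node hosted on a web server,
  which is in turn hosted on a compute node.

topology_template:
  node_templates:

    app:                                  # application-specific node the
      type: tosca.nodes.WebApplication   # developer still has to define
      requirements:
        - host: web_server
      interfaces:
        Standard:
          create: scripts/install_app.sh  # artefact: an install script

    web_server:                           # reused from the standard repository
      type: tosca.nodes.WebServer
      requirements:
        - host: server                    # relationship: hosted on the server

    server:                               # standard compute node
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

In this sketch, the web_server and server nodes would come out of the standard repository; only the app node, with its install-script artefact, is application-specific, and the host requirements express the relationships that drive the installation sequence.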

Governance

Good developers are the creative type. And creative people have a tendency to reinvent the wheel whenever possible. Tight governance is therefore necessary to avoid the creation of a multitude of similar nodes. For middleware, databases and operating systems, for example, a new node can be developed, but only when approved by the governance board. This forces the developer to explain clearly why the existing nodes cannot address their needs, in other words why the current standards are not sufficient. Obviously, there may sometimes be a good reason for creating new nodes.

My experience is that, when clearly defined and enforced governance is in place, not only do developers request fewer new nodes, but they also think twice about why a new node is actually required. This improves standardization and reduces unnecessary work: “waste”, as defined in lean manufacturing.
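As a purely hypothetical illustration of such a governance gate (the node names, repository contents and similarity threshold below are invented, not taken from any real tool), a first automated check could flag proposed nodes that closely resemble approved ones, so the board can ask the developer to justify the duplication:

```python
# Hypothetical sketch of a governance gate: flag proposed nodes that
# closely resemble nodes already in the approved standards repository.
# Node names and the similarity threshold are illustrative only.
from difflib import SequenceMatcher

# Approved nodes in the (invented) standards repository
APPROVED_NODES = ["mysql-5.6-standard", "tomcat-7-standard", "rhel-6-base"]

def similar_nodes(proposed, threshold=0.7):
    """Return approved nodes whose names closely match the proposal,
    so the governance board can ask why they are not sufficient."""
    return [
        node for node in APPROVED_NODES
        if SequenceMatcher(None, proposed, node).ratio() >= threshold
    ]

# A near-duplicate gets flagged for justification, while a genuinely
# new component passes straight through to node development.
print(similar_nodes("mysql-5.6-custom"))
print(similar_nodes("rabbitmq-3-standard"))
```

A name check like this is of course only a helper; the real decision stays with the governance board, which judges whether the existing standards genuinely fall short.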

DevOps benefits both parties

Things start with the developer. Having them increasingly use standard components and services results in a more standardized runtime environment, reducing the cost and effort required to operate and manage the production environment. Combining this approach with agile, the regular release of incremental improvements rather than big-bang new versions, improves the responsiveness of the IT department, ultimately addressing the needs of the business better.

Enterprises using SAFe® (Scaled Agile Framework) have reported productivity improvements of up to 60% through the combination of automation and standardization. Not only does this make developers more productive, it also allows time-to-market improvements of 30 to 70%. Implementing DevOps is increasingly becoming the norm for fast-moving enterprises, and these are the hot companies of today.

This blog entry was originally published in May 2015 on the “CloudSource” blog.
