Assume your friend Jitendra works as a DevOps engineer and is responsible for creating all the CI and CD pipelines. For this instance, we will assume that Jenkins is the tool used as the CI engine. Let's take a simple case and understand the following:
- There are some microservices being developed.
- There are two environments - Dev and QA.
- Microservices are deployed in a Kubernetes cluster. Needless to say, microservices are containerized.
- Deployments always happen from the development branch, and the code is in Git.
- All the microservices are developed in the same tech stack, say NodeJS, Java, Go, or .NET.
With all these variables in place, Jitendra created a pipeline that takes two parameters:
- Git URL
- The environment where the microservice is to be deployed, i.e. Dev or QA.
Whichever environment is selected in the dropdown, the pipeline picks the corresponding K8s cluster and its credentials, builds the code from the given Git URL (development branch), and deploys it.
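The core of such a pipeline can be sketched as follows. This is a minimal illustration in Python rather than an actual Jenkinsfile, and all names, URLs, and credential IDs are assumptions made up for the example:

```python
# Hypothetical sketch of the parameterized pipeline's selection logic.
# The environment-to-cluster mapping lives inside the pipeline itself.
CLUSTERS = {
    # environment -> (cluster API endpoint, credentials ID); illustrative values
    "Dev": ("https://dev-cluster.example.com", "dev-kubeconfig"),
    "QA":  ("https://qa-cluster.example.com", "qa-kubeconfig"),
}

def deploy(git_url: str, environment: str) -> str:
    """Build the development branch of git_url and deploy it to the chosen environment."""
    if environment not in CLUSTERS:
        raise ValueError(f"Unknown environment: {environment}")
    endpoint, credentials_id = CLUSTERS[environment]
    # In the real pipeline these would be git clone, docker build and
    # kubectl apply steps; here we just return a summary of the decision.
    return f"built {git_url}@development, deployed to {endpoint} using {credentials_id}"
```

The key point is that the pipeline's only job is to resolve the two parameters into concrete cluster details and run the build-and-deploy steps.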
Problems with parameterization
Now assume the client introduces a new cluster for a Perf environment. You could argue that for another environment we could simply use a different namespace, but that's not the point here. The point is that something new has appeared, and it forces changes to your pipeline even though you are using the parameterization approach. The changes might be:
- Adding one more item to the dropdown, i.e. "PERF". You could argue for a text box instead of a dropdown, but that has two downsides: a. the user can make spelling mistakes, b. you are asking the user to type.
- Adding, i.e. hardcoding, the new cluster details somewhere in the pipeline.
- Writing the logic to link those new cluster details (#2) with the PERF environment (#1).
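To make the problem concrete, here is a minimal sketch (all names and URLs are illustrative, not real) of what those three changes look like when the mapping is hardcoded inside the pipeline code itself:

```python
# Illustrative: every new environment means editing the pipeline's source code.
ENVIRONMENT_CHOICES = ["Dev", "QA", "PERF"]  # change #1: extend the dropdown
CLUSTER_DETAILS = {
    "Dev":  {"endpoint": "https://dev-cluster.example.com",  "credentials": "dev-kubeconfig"},
    "QA":   {"endpoint": "https://qa-cluster.example.com",   "credentials": "qa-kubeconfig"},
    # changes #2 and #3: hardcode the new details and link them to "PERF"
    "PERF": {"endpoint": "https://perf-cluster.example.com", "credentials": "perf-kubeconfig"},
}

def cluster_for(environment: str) -> dict:
    """Resolve the selected environment to its hardcoded cluster details."""
    return CLUSTER_DETAILS[environment]
```

Every one of these edits touches the pipeline itself, which means a code change, review, and redeploy of the pipeline for what is really just new data.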
Solving it using DFD
As we have noticed, even parameterized pipelines are prone to changes whenever perfectly normal and legitimate requirements come in. The reason in this situation is simple: we have things hardcoded in the pipeline.
Now imagine that Jitendra starts populating the dropdown from a data source, such as a CSV file, instead of hardcoding the values in the pipeline. The link between PERF and its cluster details also comes from that data source. With this in place, Jitendra only needs to update the data source to add a new environment; without the pipeline being touched, it will start showing the new environment in the dropdown. In essence, the pipeline works on data as its input and flexes to accommodate more changes on the basis of more data. This is what we call Data First DevOps, or DFD.
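The data-driven version can be sketched like this. The CSV layout, column names, and URLs are assumptions for illustration; in practice the data source could equally be a database table or a config service:

```python
import csv
import io

# Hypothetical environments.csv maintained OUTSIDE the pipeline.
# Adding PERF was a one-line edit to this file, not a pipeline change.
ENVIRONMENTS_CSV = """environment,endpoint,credentials
Dev,https://dev-cluster.example.com,dev-kubeconfig
QA,https://qa-cluster.example.com,qa-kubeconfig
PERF,https://perf-cluster.example.com,perf-kubeconfig
"""

def load_environments(csv_text: str) -> dict:
    """Parse the data source into an environment -> cluster-details mapping."""
    return {row["environment"]: row for row in csv.DictReader(io.StringIO(csv_text))}

envs = load_environments(ENVIRONMENTS_CSV)
dropdown = list(envs)      # the dropdown choices are now data-driven
perf = envs["PERF"]        # and so is the environment-to-cluster link
```

Both the dropdown and the env-to-cluster mapping now come from the same row of data, so all three of the earlier changes collapse into one edit to the data source.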
More of this concept will be explained in the readings that follow.