Designing
Premise
In the previous page, we looked at the requirements and mapped the data associated with every element. Now that we have the data, the next step is to integrate it with the DevOps pipelines. The questions we need to answer in the design phase revolve around the usual design concerns, i.e.
- Maintainability
- Feasibility
Component design
Creating a Git repo
We need to understand that we will have to automate repo creation for every microservice. Since we are talking about 300 microservices, we will have 300 repos, and even a small job done 300 times becomes a big one. So let's see what that process involves:
Input
- Name of the repo.
- Type of the repo.
- Description of the repo.
Processing
If the type is "Microservice":
- Create a Docker repository in the registry.
Output
Insert the metadata mentioned on the previous page into the data store.
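Putting the above together, here is a minimal sketch of such a repo-creation job in Python. It assumes GitHub as the Git host and a SQLite table as the data store, neither of which is required by the design; `create_docker_repository` is a placeholder, since the real call depends on your registry (Harbor, ECR, etc.).

```python
import sqlite3
from datetime import datetime, timezone

import requests

GITHUB_API = "https://api.github.com"


def open_store(path: str = "metadata.db") -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS repos "
        "(name TEXT, type TEXT, description TEXT, created_at TEXT)"
    )
    return db


def create_docker_repository(name: str) -> None:
    # Placeholder: the actual call depends on your registry's API or CLI.
    print(f"would create Docker repository: {name}")


def create_repo(db: sqlite3.Connection, org: str, token: str,
                name: str, repo_type: str, description: str) -> None:
    # Input: name, type, and description of the repo.
    resp = requests.post(
        f"{GITHUB_API}/orgs/{org}/repos",
        headers={"Authorization": f"token {token}"},
        json={"name": name, "description": description},
    )
    resp.raise_for_status()

    # Processing: microservices also get a Docker repository in the registry.
    if repo_type == "Microservice":
        create_docker_repository(name)

    # Output: persist the metadata discussed on the previous page.
    db.execute(
        "INSERT INTO repos VALUES (?, ?, ?, ?)",
        (name, repo_type, description, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()
```

Run once per microservice from a loop over your service list, and the 300-repo problem becomes a single script execution.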
Creating a build pipeline
We certainly won't create 300 pipelines for 300 microservices; the correct approach is to build a single parameterized pipeline. We also need to support building from any branch. Keeping these in mind, the pipeline should look like this:
Input
- Service name: This should be a dropdown listing the microservice names.
- Branch name: This can be a text box.
- Deployment environment: This should be a multi-select dropdown.
Processing
- The Service name dropdown should be populated from the metadata captured by the repo creation process. This eliminates the possibility of building something that is misspelled or does not exist.
- The environment dropdown should be populated from the environment data we discussed on the previous page.
- If any environment other than NONE is selected, the deployment pipeline should be called.
Output
- If the build is successful, the build-related data discussed on the previous page should be logged to a data store in a structured way.
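On Jenkins this would typically be one parameterized job (the Active Choices plugin is a common way to populate such dropdowns dynamically). The sketch below expresses the same logic in plain Python against a simple in-memory store; every helper and field name here is an assumption for illustration, not part of the design.

```python
from datetime import datetime, timezone

# In-memory stand-in for the data store used in these sketches.
store = {"repos": [], "environments": [], "builds": [], "deployments": []}


def service_names() -> set[str]:
    # Dropdown source: metadata captured by the repo-creation job.
    return {r["name"] for r in store["repos"]}


def environment_names() -> set[str]:
    # Dropdown source: environment data from the admin pipeline, plus NONE.
    return {e["name"] for e in store["environments"]} | {"NONE"}


def trigger_deployment(service: str, image_tag: str, env: str) -> None:
    # Placeholder for invoking the deployment pipeline (next section).
    print(f"deploying {image_tag} to {env}")


def run_build(service: str, branch: str, environments: list[str]) -> None:
    # Because the choices come from metadata, a misspelled or
    # non-existent service can never be built.
    if service not in service_names():
        raise ValueError(f"unknown service: {service}")
    if not set(environments) <= environment_names():
        raise ValueError(f"unknown environment in: {environments}")

    image_tag = f"{service}:{branch}-snapshot"  # stand-in for the real build step

    # Output: log the build data in a structured way.
    store["builds"].append({
        "service": service,
        "branch": branch,
        "image_tag": image_tag,
        "built_at": datetime.now(timezone.utc).isoformat(),
    })

    # Any selection other than NONE hands off to the deployment pipeline.
    for env in environments:
        if env != "NONE":
            trigger_deployment(service, image_tag, env)
```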
Creating a deployment pipeline
It also makes sense to have a single parameterized deployment pipeline. Its behavior should be as follows:
Input
- Service name
- Docker image with tag, as a text box.
- Environment, as a dropdown like in the build pipeline.
Processing
Based on the selected environment, the pipeline will pick the K8s cluster and the Jenkins credential ID, both of which come from the data we discussed on the previous page.
Output
Insert the deployment metadata mentioned on the previous page into the data store.
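A sketch of that lookup-and-deploy step, continuing with the same assumed in-memory store; this is roughly what the `trigger_deployment` placeholder above would invoke. The kubectl calls stand in for however your pipeline actually reaches the cluster.

```python
import subprocess
from datetime import datetime, timezone


def run_deployment(store: dict, service: str, image: str, env_name: str) -> None:
    # Processing: pick the K8s cluster and credential ID for the selected
    # environment from the environment data.
    env = next((e for e in store["environments"] if e["name"] == env_name), None)
    if env is None:
        raise ValueError(f"unknown environment: {env_name}")

    # In Jenkins, env["credential_id"] would resolve to a kubeconfig
    # credential; here we simply switch kubectl context as an illustration.
    subprocess.run(["kubectl", "config", "use-context", env["cluster"]], check=True)
    subprocess.run(
        ["kubectl", "set", "image", f"deployment/{service}", f"{service}={image}"],
        check=True,
    )

    # Output: insert the deployment metadata into the data store.
    store["deployments"].append({
        "service": service,
        "image": image,
        "environment": env_name,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    })
```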
Setting up an environment
We can have an admin pipeline to add environment details.
Input
- Environment name
- Credential Id
- Cluster name
- Env type (Prod or Non-Prod) (Who knows where this will be useful)
Processing
On execution, it captures who triggered the pipeline and when.
Output
Insert the environment details, along with this audit information, into the data store.
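A sketch of that admin step, again against the assumed store shape; the audit fields are the only extra work involved.

```python
from datetime import datetime, timezone


def add_environment(store: dict, name: str, credential_id: str,
                    cluster: str, env_type: str, created_by: str) -> None:
    if env_type not in ("Prod", "Non-Prod"):
        raise ValueError("env_type must be 'Prod' or 'Non-Prod'")

    # Output: the environment details plus who created them and when.
    store["environments"].append({
        "name": name,
        "credential_id": credential_id,
        "cluster": cluster,
        "type": env_type,
        "created_by": created_by,
        "created_at": datetime.now(timezone.utc).isoformat(),
    })
```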
Approach
Design your pipelines to consume and emit the data that matters. Think of the pipeline as dumb and the data as smart, i.e. the same pipeline behaves differently for different data.
The Glitch
Think carefully about what should be part of your pipeline and what should be data. Don't overdo or underdo it. Whatever may change in the future is a probable candidate for configuration, which should be kept as data.