Case Study: FINTECH – State of DevOps in Financial Services

About FINTECH

FINTECH (financial technology) is the term for technology used to deliver financial products and services such as payments, security, and investment. To enhance financial services and reduce manual development and release effort, we apply DevOps practices in the FINTECH industry. Using DevOps, financial services can be made progressively more secure and accessible to clients.

With the rapid pace of technological innovation, there is also significant scope for improvement in the FINTECH industry.

Industry

Financial & Banking

FINTECH Requirement:

To deploy the FINTECH project, which involves running dozens of microservices in a Kubernetes (k8s) cluster, integrated with other GCP components including Cloud SQL, Pub/Sub, Storage Buckets, Stackdriver, Container Registry, and Cloud Endpoints. The entry point to the Kubernetes cluster should be an Nginx load balancer.

Everything should be deployed through a CI/CD pipeline using Jenkins. In CI, a new k8s cluster is deployed using Terraform and all the microservices are deployed using Helm charts.

In CD, changes are deployed first to the staging cluster and then to the production cluster.

Apart from this, auto-scaling should be configured at both the pod and node level in all the k8s clusters.

The Challenge: The main challenge in the FINTECH industry is data security. Every online payment provider struggles with security and compliance and needs a secure way to handle its customers' sensitive payment data. Another challenge when implementing DevOps is full automation of the CI/CD pipeline, including Infrastructure as Code. Integration with third-party software, such as SMS gateways, is another significant challenge that every organization encounters.

Technologies Involved: DevOps

What Role Do ONjection Labs Solutions Play?

CI/CD pipeline structure for FINTECH

Pipeline Structure for Individual Microservices

Jenkins is integrated with Bitbucket using a webhook, so that on every new code check-in the Jenkins pipeline is triggered and verifies the changes.

Pipeline Execution

Code Dependency -> Code Analysis -> Code Compilation -> Docker Image Build -> Google Container Registry

On every pipeline execution, the CI phase performs code dependency resolution, code analysis, code compilation, and the Docker image build in separate stages, while the CD phase pushes the newly built Docker image to Google Container Registry.
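A minimal declarative Jenkinsfile sketch of this stage layout is shown below; the stage bodies are simplified and the service name contact-us is only an illustration, not the exact production pipeline.

// Illustrative Jenkinsfile skeleton for a single microservice (simplified sketch).
pipeline {
    agent any
    stages {
        stage('Initialize Workspace & Artifact Number') {
            steps { cleanWs() }                              // clean the workspace before building
        }
        stage('Initiate SonarQube Scanning') {
            steps { echo 'SonarQube analysis (see below)' }  // detailed in the SonarQube stage section
        }
        stage('Gradle Test')  { steps { sh './gradlew test' } }
        stage('Gradle Build') { steps { sh './gradlew build -x test' } }
        stage('Docker Build') {
            steps { sh 'docker build -t contact-us:${BUILD_NUMBER} .' }
        }
        stage('Docker Image Push') {                         // CD stage
            when { branch 'master' }                         // only runs for the master branch
            steps { echo 'push image to GCR (see the CD stage section)' }
        }
    }
}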

CI/CD Stages Structure for Microservices:

CI Stages:

Initialize Workspace & Artifact Number:

This stage cleans up the workspace and sends an "in progress" build notification to the Bitbucket repository.

Initiate SonarQube Scanning:

In this stage, the checked-in code is sent to the SonarQube server, which scans it and prepares a report against the defined quality gates. The Jenkins pipeline passes or fails based on that report: if the project code does not meet the Sonar quality gate, the pipeline fails.
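As a hedged sketch, a pair of stages like the following (using the SonarQube Scanner for Jenkins plugin; the server name 'sonar' and the Gradle sonarqube task are assumptions) can enforce the quality gate:

// Illustrative SonarQube stages: run the analysis, then fail the pipeline if the quality gate is not met.
stage('Initiate SonarQube Scanning') {
    steps {
        withSonarQubeEnv('sonar') {                  // 'sonar' = server name configured in Jenkins (assumed)
            sh './gradlew sonarqube'                 // send the checked-in code to the Sonar server
        }
    }
}
stage('Quality Gate') {
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true   // break the pipeline if the gate is not passed
        }
    }
}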

Gradle Test:

This stage runs the Gradle tests. By default, all tests present in the project are executed.

Gradle Build:

This stage builds the code using Gradle, producing an artifact (a JAR) from the Java code.
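A minimal sketch of these two Gradle stages, with test-report publishing added through Jenkins' junit step (the report path follows Gradle's default layout and is an assumption):

// Illustrative Gradle stages: run the tests and publish their reports, then build the JAR.
stage('Gradle Test') {
    steps { sh './gradlew test' }
    post {
        always { junit 'build/test-results/test/*.xml' }   // default Gradle test-report location (assumed)
    }
}
stage('Gradle Build') {
    steps { sh './gradlew build -x test' }                 // tests already ran in the previous stage
}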

Docker Build:

This stage uses the Dockerfile to build a Docker image. After successful execution, a new image containing the Gradle-built JAR file is available. Runtime variables are also provided to the application, so the same image can run in different environments (CI / Staging / Production).
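A sketch of such a Docker build stage; the service name, image tag, and the TARGET_ENV build argument are illustrative assumptions (in practice the environment-specific values can equally be injected at deploy time through the Helm chart):

// Illustrative Docker build stage: bake the Gradle JAR into an image, parameterized by environment.
stage('Docker Build') {
    environment {
        SERVICE_NAME = 'contact-us'                 // hypothetical microservice name
        IMAGE_TAG    = "${env.BUILD_NUMBER}"        // local tag; retagged with the artifact version on push
    }
    steps {
        sh 'docker build --build-arg TARGET_ENV=ci -t ${SERVICE_NAME}:${IMAGE_TAG} .'
    }
}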

CD Stage:

Docker Image Push:

This stage runs only if the request comes from the master branch.

It uploads the Docker image to Google Container Registry under the respective microservice directory, tagged with the artifact version number. For example, if the artifact number for the contact-us microservice is 1.0.0, the new image is tagged and pushed to gcr.io/contact-us/contact-us:1.0.0.
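A hedged sketch of this push stage; the artifact version is assumed to be computed in the Initialize stage and exposed as ARTIFACT_VERSION:

// Illustrative CD stage: retag the freshly built image with the artifact version and push it to GCR.
stage('Docker Image Push') {
    when { branch 'master' }                        // only runs when the request comes from master
    steps {
        sh '''
          gcloud auth configure-docker --quiet
          docker tag contact-us:${BUILD_NUMBER} gcr.io/contact-us/contact-us:${ARTIFACT_VERSION}
          docker push gcr.io/contact-us/contact-us:${ARTIFACT_VERSION}
        '''
    }
}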

Final Stage: (Executes in Both CI and CD)

The final stage publishes the pipeline result (success / fail) to Bitbucket. If the request comes from the master branch, an email is sent to a predefined group; if it comes from any other branch, an email with the relevant information is sent to the commit author.
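A hedged sketch of this final stage as a declarative post section, using the Email Extension plugin's emailext step; the recipient group address is a placeholder, and the Bitbucket build status itself can be posted via the Bitbucket REST API or a status-notifier plugin:

// Illustrative post section: e-mail the pipeline result to the right audience.
post {
    always {
        script {
            def recipients = (env.BRANCH_NAME == 'master') ?
                'devops-group@example.com' :                                      // predefined group (placeholder)
                sh(script: 'git log -1 --pretty=%ae', returnStdout: true).trim()  // commit author otherwise
            emailext(
                subject: "${env.JOB_NAME} #${env.BUILD_NUMBER}: ${currentBuild.currentResult}",
                body: "Pipeline result: ${currentBuild.currentResult}\nDetails: ${env.BUILD_URL}",
                to: recipients
            )
        }
    }
}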

Pipeline Structure To Deploy Infrastructure

This project builds the full infrastructure on Google Kubernetes Engine. On every CI run it builds the complete infrastructure, which includes deployment of the k8s cluster, Cloud SQL, the microservices, and the Nginx load balancer using Helm charts.

In CD, the pipeline updates the k8s cluster if it has drifted from the desired state. In subsequent stages, it updates the microservice pods accordingly in staging. If the pipeline passes on staging, it automatically deploys to production.

CI/CD Stages Structure for Infrastructure Pipeline:

CI Stages:

Infra Build: (Using Terraform)

  • This stage initializes the Terraform workspace.
  • It creates an instance and installs all necessary packages for Redis.
  • It creates a Cloud SQL server with MySQL 5.7 and sets up passwords for the root and proxy users.
  • It creates a CI cluster with the defined node count and machine type. Apart from this, it creates secrets for the Cloud SQL proxy and Pub/Sub and initializes Helm in the cluster (a Terraform sketch follows this list).
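A minimal sketch of this stage as a Jenkins step driving Terraform; the directory layout and variable names are assumptions:

// Illustrative infrastructure build stage: provision the CI environment with Terraform.
stage('Infra Build') {
    steps {
        dir('terraform/ci') {                       // assumed layout: one directory per environment
            sh '''
              terraform init
              terraform workspace select ci || terraform workspace new ci
              terraform apply -auto-approve -var cluster_name=fintech-ci -var mysql_version=MYSQL_5_7
            '''
        }
    }
}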

SQL Table Setup:

In this stage, a new database is created in Cloud SQL via the Cloud SQL proxy. After the database is created, all necessary tables are created in the same database. The purpose of this stage is to set up the database and tables before the microservices start on the Kubernetes cluster.
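A hedged sketch of this stage; the Cloud SQL instance connection name, the database name fintech, and the schema file are placeholders:

// Illustrative SQL setup stage: connect through the Cloud SQL proxy and create the database and tables.
stage('SQL Table Setup') {
    steps {
        sh '''
          ./cloud_sql_proxy -instances=${PROJECT}:${REGION}:${SQL_INSTANCE}=tcp:3306 &     # start the proxy in the background
          PROXY_PID=$!
          sleep 10                                                                         # give the proxy time to connect
          mysql -h 127.0.0.1 -u proxyuser -p"${SQL_PASSWORD}" -e "CREATE DATABASE IF NOT EXISTS fintech;"
          mysql -h 127.0.0.1 -u proxyuser -p"${SQL_PASSWORD}" fintech < schema/tables.sql  # assumed schema file
          kill $PROXY_PID                                                                  # stop the proxy when done
        '''
    }
}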

Helm Deployment:

In this stage, Helm first installs the ingress load balancer from the defined chart. It then starts all microservice containers on the CI Kubernetes cluster and configures the ingress resource, which is used to rewrite requests.
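A hedged sketch of this deployment; the chart paths, namespace, and microservice names are placeholders:

// Illustrative Helm deployment stage: install the ingress controller first, then the microservice charts.
stage('Helm Deployment') {
    steps {
        sh '''
          helm upgrade --install nginx-ingress charts/nginx-ingress --namespace ci   # entry-point load balancer
          for svc in contact-us payments accounts; do                                # microservice names are placeholders
            helm upgrade --install "$svc" "charts/$svc" --namespace ci --set image.tag="${ARTIFACT_VERSION}"
          done
        '''
    }
}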

Test Stage:

This stage checks the status of the microservice pods. If any microservice is restarting, crashed, or in a crash-loop state, it breaks the pipeline.
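A minimal sketch of such a health check, assuming kubectl is already pointed at the CI cluster:

// Illustrative test stage: break the pipeline if any pod is not Running/Completed or has restarted.
stage('Test') {
    steps {
        sh '''
          BAD=$(kubectl get pods --namespace ci --no-headers | awk '$3 != "Running" && $3 != "Completed" || $4 > 0' | wc -l)
          if [ "$BAD" -gt 0 ]; then
            kubectl get pods --namespace ci
            echo "One or more microservice pods are unhealthy"
            exit 1
          fi
        '''
    }
}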

Cleanup Stage:

This stage destroys all resources created by Terraform. It always runs, even if the pipeline fails, to avoid manual cleanup of CI resources.
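In a declarative pipeline this "always runs" behaviour maps naturally onto a post { always { ... } } block, sketched below with the same assumed Terraform directory as above:

// Illustrative cleanup: destroy the temporary CI infrastructure even when an earlier stage fails.
post {
    always {
        dir('terraform/ci') {
            sh 'terraform destroy -auto-approve'
        }
    }
}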

CD Stages:

The stages below run only if the request comes from the master branch and only if CI has passed.

Staging Cluster Setup: (Using Terraform)

This stage checks the staging resources, which include the staging Redis server, staging Cloud SQL, the staging cluster, Helm initialization in Kubernetes, and all required secrets (Pub/Sub, proxy user, etc.). If any resource is missing, Terraform fixes only that resource.

Staging SQL Table Setup:

This stage sets up the staging Cloud SQL database and tables, creating them if they do not exist, so that the database is up before the microservices are deployed.

Staging Helm Deployment:

The pipeline pauses in this stage and waits for user input via a [Yes / No] button. If the user selects Yes, all microservices are updated along with the staging ingress load balancer; if a deployment is not yet present in the Kubernetes cluster, all microservices are installed. If the user selects No, no action is performed.
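The [Yes / No] prompt maps naturally onto Jenkins' input step; a hedged sketch follows, where the chart paths and service names are placeholders:

// Illustrative staging deployment: pause for manual approval, then upgrade or install the charts.
stage('Staging Helm Deployment') {
    steps {
        script {
            def answer = input(message: 'Deploy microservices to Staging?',
                               parameters: [choice(name: 'DEPLOY', choices: ['Yes', 'No'])])
            if (answer == 'Yes') {
                sh '''
                  helm upgrade --install nginx-ingress charts/nginx-ingress --namespace staging
                  for svc in contact-us payments accounts; do        # microservice names are placeholders
                    helm upgrade --install "$svc" "charts/$svc" --namespace staging
                  done
                '''
            } else {
                echo 'No action performed'
            }
        }
    }
}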

Staging Rollback:

The pipeline again pauses in this stage and waits for a [Yes / No] user input. If something went wrong on staging after the microservices were updated, this stage rolls back all changes and restores the state the microservices were in before the update. It only runs if the user selects Yes; if No is selected, no action is performed.
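A corresponding rollback sketch, again gated by an input step (release names are placeholders; revision 0 tells Helm to roll back to the previous release):

// Illustrative staging rollback: on approval, roll every release back to its previous revision.
stage('Staging Rollback') {
    steps {
        script {
            def answer = input(message: 'Rollback staging microservices?',
                               parameters: [choice(name: 'ROLLBACK', choices: ['No', 'Yes'])])
            if (answer == 'Yes') {
                sh '''
                  for svc in contact-us payments accounts; do        # microservice names are placeholders
                    helm rollback "$svc" 0                           # 0 = previous release revision
                  done
                '''
            }
        }
    }
}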

Production Cluster Setup:

In this stage, Terraform checks all production resources, secrets, and Helm. If everything is present, no action is performed; if something is missing, Terraform creates only that missing resource. It also manages Cloud SQL failover replication, sets the backup time, and enables backups.

Production SQL Table Setup:

This stage sets up the production Cloud SQL database and tables, creating them if they do not exist, so that the database is up before the microservices are deployed.

Helm Production Deployment:

Helm checks the SHA (image digest) of the images on GCR and upgrades only the microservices whose images on GCR have changed.
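One hedged way to implement this "upgrade only on change" behaviour is to compare image digests; the gcloud digest lookup, the image.digest chart value, and the jq dependency below are assumptions, not the exact production logic:

// Illustrative production deployment: upgrade a release only when its image digest on GCR has changed.
stage('Helm Production Deployment') {
    steps {
        sh '''
          for svc in contact-us payments accounts; do                # microservice names are placeholders
            NEW_DIGEST=$(gcloud container images describe "gcr.io/$svc/$svc:${ARTIFACT_VERSION}" --format='get(image_summary.digest)')
            CURRENT_DIGEST=$(helm get values "$svc" --output json | jq -r '.image.digest // empty')
            if [ "$NEW_DIGEST" != "$CURRENT_DIGEST" ]; then
              helm upgrade "$svc" "charts/$svc" --set image.digest="$NEW_DIGEST"
            fi
          done
        '''
    }
}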

Rollback Production Changes:

The pipeline pauses at this stage and waits for user input via a [Yes / No] button. If something goes wrong, the user only needs to open the respective pipeline in the Jenkins UI and click Yes; Helm then rolls back the microservices to their state before the update.

Final Stage:

In this stage, an email notification with the status of the deployment is sent to the predefined group.

DevOps Tools We Have Used

  • Google Pub/Sub
  • Google Cloud Endpoints
  • Google Container Registry
  • Google Kubernetes Engine
  • Kubernetes Helm charts
  • Jenkins in a k8s cluster
  • Groovy pipelines
  • SonarQube
  • Stackdriver
  • Terraform
  • Cloud SQL
  • Bitbucket
  • Docker

Author: Tomas Jindal