Quality checks in CI/CD pipelines

16 Aug 2021

It is the responsibility of a development team (and of each software engineer) to deliver quality software on time. What "quality" and "on time" actually mean is, of course, defined by the organisation these engineers work for. The lower your risk appetite is, the more checks you tend to have as an organisation, which naturally leads to more delays in the development and deployment processes. The next big question, then, is: what is the right balance between speed and quality? In this post, I want to share our experience of setting up quality checks in CI pipelines for a number of projects.

Before we start, let me briefly familiarise you with our technical stack. We use React with TypeScript for our front-end projects. The pipelines are shared as GitLab CI includes. They live in a dedicated repository and are semantically versioned. There is always a major version tag (e.g. v3.x) pointing at the latest non-breaking commit. So if developers use this version tag as a ref in their includes, like in the example below, they can be sure their pipelines get all the updates and do not break. (Human factor aside, obviously: semantic versioning relies on commit messages, which are written by humans.)

include:
  - project: 'shared/frontend/pipeline-templates'
    ref: v3.x
    file: '/pipelines/frontend.yml'

An average pipeline consists of the following stages: test, build, post-build test, report, and deploy. (You are probably going to ask: "Where do you install dependencies? Surely not in every job?" One day I'll share the answer in a separate post.)
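For illustration, the stage list in such a shared template could be sketched as follows. The exact stage names here are an assumption based on the description above, not our actual configuration:

stages:
  - test
  - build
  - post-build-test
  - report
  - deploy

Every job in the included templates is then assigned to one of these stages, and GitLab runs the stages in order.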

Now to the quality checks that we have.

  • Peer review. Okay, I'm not being 100% honest here: this one doesn't run in the CI/CD pipeline. It is done by another human, before or after (usually after) the pipeline has turned green.
  • Linter. We use ESLint with a TypeScript parser and a pretty standard set of rules. Not a lot of custom stuff.
  • TypeScript. Type checking has a lot of benefits. We could skip this check and let the build fail and inform us of a TypeScript issue instead, but builds are longer and more expensive (in most cases), so we prefer to fail fast.
  • Code formatting. This is just running Prettier. Making sure the code is styled in the same way for everybody reduces the number of merge conflicts. Spinning up a separate job for it looks a little excessive, though, so we might combine this one with the linter job in the future.
  • Unit tests. Jest with React Testing Library (or Enzyme in some older projects). Strictly speaking, there might be some integration tests among them, in the sense of integrating modules inside an application. Our quality gates (see SonarQube below) require at least 80% unit test coverage.
  • SonarQube, a code quality and code security checking tool, which also has a nice merge/pull request decoration feature. We run it during the "report" stage, after other checks. SonarQube estimates the number of potential bugs and security issues, code smells, and the amount of tech debt. It then makes the job fail if the numbers are greater than those defined in a quality gate.
  • Cypress. In the past we relied on unit tests a lot, but recently we started steering towards having more integration tests in place. (The terminology might differ from what you're used to: we say "integration" in the sense of integrating all the different modules inside one system component, e.g. a web front-end application. Thus, here the upstream backends are mocked.) We also employ an awesome tool called Cypress Dashboard.
  • Visual regression tests with ChromaticQA. Not only does it help to spot visual regressions using Storybook (which is already in use in many projects), but it also gives us the ability to include designers in the process and get their feedback. We use it for testing individual components as well as whole application screens. Chromatic jobs become lengthy at some point, but there are some workarounds.
  • E2E tests. These are full-blown vertical and horizontal tests implemented separately by QA engineers. The goal is to test integrations between components in a system, or between systems.
  • Manual tests. For the completeness of the picture, we also cover some of the features or flows with manual tests.
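To make the automated checks above more concrete, here is a minimal sketch of what a few of the corresponding jobs could look like in a shared GitLab CI template. The job names and script commands are illustrative assumptions, not our exact configuration:

lint:
  stage: test
  script:
    # Fail on any ESLint warning, not only on errors
    - npx eslint --max-warnings 0 .

typecheck:
  stage: test
  script:
    # Type-check only; the actual build happens in a later stage
    - npx tsc --noEmit

format:
  stage: test
  script:
    # Verify formatting without rewriting any files
    - npx prettier --check .

unit-tests:
  stage: test
  script:
    # Produce a coverage report that the SonarQube quality gate can consume
    - npx jest --coverage

Failing any of these jobs stops the pipeline before the more expensive build and post-build stages run, which is exactly the fail-fast behaviour described above.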

As you can see, there are quite a few. In one of the following posts, I'll try to share the technical part of the setup and the ways we optimise pipeline performance.

About

This blog contains things around software engineering that I consider worth sharing or at least noting down. I will be happy if you find any of these notes helpful. Always feel free to give me feedback at: kk [at] kirill-k.pro.

KK © 2025