Building a Continuous Delivery Pipeline Using the CDK Pipelines Modern API


In this step-by-step tutorial, we will learn how to build a continuous delivery pipeline with CDK Pipelines using the Modern API. Before we dive into CDK Pipelines, let’s cover a few basics so we’re all on the same page.

What is a CI/CD pipeline?

A CI/CD pipeline automates the software delivery process. It’s the series of steps that takes your changes from push to deployment, such as building your code, running tests, and deploying your application and services.

Why CDK pipelines?

The AWS CDK is an open-source software development framework that allows you to define cloud infrastructure in code. It supports many languages including TypeScript, Python, C#, and Java. You can learn more about the AWS CDK from the docs here, and I’ve written a beginner’s guide to it on my blog here.

If you’re already using the AWS CDK to define your infrastructure, CDK Pipelines is the easiest option. It’s serverless, and you only pay for what you use.

CDK Pipelines are self-mutating, which means that if you add a stage or stack to your pipeline, the pipeline will reconfigure itself to deploy those new stages or stacks.

Project

In this tutorial, we are going to build a pipeline using CDK Pipelines for a simple serverless project. Since the main goal of this tutorial is to explain the concepts and implementation of CDK Pipelines using the Modern API, the serverless project itself is intentionally kept simple. However, the same concepts apply to large-scale applications; the only difference would be the application being deployed.

In this project, a Lambda function will be invoked whenever an object is uploaded to an S3 bucket.

Our simple serverless project

CDK Pipeline Architecture

Below is a high-level architecture diagram for our CodePipeline.

CI CD Pipeline Architecture using CDK Pipelines

Our code lives in a GitHub repository. First, we will create a connection to the repository using AWS CodeStar in the AWS console (detailed instructions are provided later in this article).

When a user pushes a change to GitHub, the AWS CodeStar connection detects the change and AWS CodePipeline starts executing. AWS CodeBuild builds the project, and AWS CodeDeploy deploys the AWS resources the project needs (the S3 bucket and the Lambda function).

Create an AWS CDK project

Before creating a new CDK project, you must have the aws-cdk package installed globally. If you don’t, you can install it with the command below.

npm i -g aws-cdk

After that, you can execute the following commands to create a new CDK project.

mkdir aws-cdk-pipelines
cd aws-cdk-pipelines
cdk init app --language=typescript

Once the CDK application is created, it provides default boilerplate with a commented-out example of creating an SQS queue, as shown below.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
// import * as sqs from 'aws-cdk-lib/aws-sqs';

export class AwsCdkPipelinesStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    // example resource
    // const queue = new sqs.Queue(this, 'AwsCdkPipelinesQueue', {
    //   visibilityTimeout: cdk.Duration.seconds(300)
    // });
  }
}

Since we won’t need SQS, you can delete the commented lines.

Create a GitHub repository

We will create a new GitHub repository and push changes (the CDK project we created earlier).

git remote add origin git@github.com:<your-github-username>/<your-repo-name>.git
git branch -M main
git push -u origin main

We need this GitHub repository in the next step to create an AWS CodeStar connection. The AWS CodeStar connection is the connection between AWS CodePipeline and your GitHub repository.

Create an AWS CodeStar connection

Go to the AWS CodePipeline service in the AWS console and select Settings from the left menu, as shown in the screenshot below.

Click the “Create connection” button and you will see the screen below. Choose GitHub as the provider, enter a connection name, and click the “Connect to GitHub” button.

Once you click the “Connect to GitHub” button, the screen below will appear, where you can click the “Install New App” button.

Once you click the “Install New App” button, you will be redirected to GitHub, where you will be asked to grant access to the repository. The screen will look like the one below.

You can either provide access to all repositories or to a specific repository. It is recommended to provide access to a specific repository. We have selected the GitHub repository we created earlier.

Once you click Save, the connection will be established.

Please note the ARN (Amazon Resource Name) of this connection; we will use it in our pipeline.

Create an AWS CDK Pipeline

The AWS CDK provides L2 constructs for creating CDK pipelines.

We want to perform the following steps as part of our pipeline:

  • Install dependencies
  • Build the project
  • Run tests
  • Synthesize the CloudFormation templates

All four of these steps can be done using a ShellStep. ShellStep is a construct provided by the CDK that lets you run shell commands in the pipeline. You pass the ShellStep to the synth property, as shown below.

// Add this import at the top of your pipeline stack file
import * as pipelines from 'aws-cdk-lib/pipelines';

// Inside the stack constructor:
const pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
  synth: new pipelines.ShellStep('Synth', {
    // Use a connection created using the AWS console to authenticate to GitHub
    // Other sources are available.
    input: pipelines.CodePipelineSource.connection(
      'mugiltsr/aws-cdk-pipelines',
      'main',
      {
        connectionArn:
          'arn:aws:codestar-connections:us-east-1:853185881679:connection/448e0e0b-0066-486b-ae1c-b4c8be92f79b', // Created using the AWS console
      }
    ),
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
  dockerEnabledForSynth: true,
});

The input property in synth represents a GitHub source connection. It takes three parameters:

  • The GitHub repo, which follows the pattern <owner>/<repo>. My GitHub ID is mugiltsr and my repo name is aws-cdk-pipelines.
  • The name of the branch on which the pipeline will run. Because I want the pipeline to be executed when code is pushed to the main branch, I have specified main as the branch name.
  • connectionArn – the ARN (Amazon Resource Name) of the connection we created earlier in the AWS console (see the sketch after this list for one way to avoid hardcoding it).
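Hardcoding the connection ARN works, but if you would rather keep it out of source control, here is a minimal sketch that reads it from CDK context instead. The context key name connectionArn is my own choice, not something the CDK requires:

// Hypothetical alternative: pass the ARN on the command line, e.g.
//   npx cdk synth -c connectionArn=arn:aws:codestar-connections:...
const connectionArn = this.node.tryGetContext('connectionArn');

const source = pipelines.CodePipelineSource.connection(
  'mugiltsr/aws-cdk-pipelines', // <owner>/<repo>
  'main',                       // branch that triggers the pipeline
  { connectionArn }
);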

The commands property is a string array containing the commands to be executed:

  • npm ci – installs packages in CI mode
  • npm run build – builds the project
  • npm run test – if you like, you can also run test cases here (see the sketch after this list)
  • npx cdk synth – the synth command generates CloudFormation templates from our CDK code
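For example, a sketch of the same commands array with the optional test step included might look like this; it assumes your package.json defines build and test scripts, which the cdk init template sets up by default:

commands: [
  'npm ci',        // clean install of dependencies
  'npm run build', // compile the TypeScript project
  'npm run test',  // optional: run unit tests before synthesizing
  'npx cdk synth', // synthesize CloudFormation templates
],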

dockerEnabledForSynth – set this property to true to enable Docker for the synth step. Since we will be using bundling (via NodejsFunction) to build our Lambda function, we need to set this property to true.

Deploy our stack

Commit these changes and push them to the GitHub repo.

Only for the first time, we need to deploy the stack manually with the cdk deploy command. Once the command finishes, the AWS CodePipeline will be created and will start executing.
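For reference, the pipeline stack is instantiated from the app entry point that cdk init generated in the bin directory; a minimal sketch of that file (the exact file name depends on your project name) looks roughly like this:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { AwsCdkPipelinesStack } from '../lib/aws-cdk-pipelines-stack';

const app = new cdk.App();

// `cdk deploy` is only needed once to create this stack;
// after that, the pipeline keeps itself up to date on every push.
new AwsCdkPipelinesStack(app, 'AwsCdkPipelinesStack');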

From then on, we don’t need to deploy our code manually. As you can see in the screenshot below, the pipeline has been created and executed successfully.

Creating our serverless app

Our serverless application is very simple and contains only the following components:

  • An S3 bucket
  • A Lambda function
  • An S3 event source that triggers the Lambda function when an object is created

We will put all of these components in a new stack, AwsS3LambdaStack, which will live in the lib directory next to our pipeline stack.

export class AwsS3LambdaStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // create s3 bucket
    // create lambda function
    // add s3 event source for lambda function
  }
}
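For reference, here is a sketch of the imports this stack will need once the bucket, Lambda function, and event source snippets below are added to it:

import * as cdk from 'aws-cdk-lib';
import { Duration, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as path from 'path';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction, NodejsFunctionProps } from 'aws-cdk-lib/aws-lambda-nodejs';
import { S3EventSource } from 'aws-cdk-lib/aws-lambda-event-sources';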

Create an S3 bucket

Below is the CDK code for creating an S3 bucket

 const bucket = new s3.Bucket(this, 'S3Bucket', {
      bucketName: `aws-lambda-s3-132823`,
      autoDeleteObjects: true,
      removalPolicy: RemovalPolicy.DESTROY,
    });

It has three properties: bucketName is the name of the bucket, autoDeleteObjects tells the CDK whether to delete the objects in the bucket when the stack is destroyed, and removalPolicy tells the CDK whether the bucket itself should be removed when the stack is destroyed. All three properties are optional; if you omit bucketName, the CDK generates a unique bucket name for you.

Create a lambda function

We create a simple Lambda function with Node.js 16 as the runtime. We configure the timeout and memory size properties and specify the path to the Lambda handler file.

const nodeJsFunctionProps: NodejsFunctionProps = {
      bundling: {
        externalModules: [
          'aws-sdk', // Use the 'aws-sdk' available in the Lambda runtime
        ],
      },
      runtime: Runtime.NODEJS_16_X,
      timeout: Duration.minutes(3), // Default is 3 seconds
      memorySize: 256,
    };

    const readS3ObjFn = new NodejsFunction(this, 'readS3ObjFn', {
      entry: path.join(__dirname, '../src/lambdas', 'read-s3.ts'),
      ...nodeJsFunctionProps,
      functionName: 'readS3ObjFn',
    });

    bucket.grantRead(readS3ObjFn);

Since we use NodejsFunction, Docker is used to bundle the Lambda function. This is why we set dockerEnabledForSynth to true in the pipeline.
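Strictly speaking, NodejsFunction only falls back to Docker when esbuild is not installed locally. If you want to guarantee Docker bundling (matching the dockerEnabledForSynth setting), a sketch of the bundling options could look like this:

const nodeJsFunctionProps: NodejsFunctionProps = {
  bundling: {
    forceDockerBundling: true,    // always bundle inside Docker, even if esbuild is installed locally
    externalModules: ['aws-sdk'], // use the 'aws-sdk' available in the Lambda runtime
  },
  runtime: Runtime.NODEJS_16_X,
  timeout: Duration.minutes(3),
  memorySize: 256,
};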

Add S3 as an event source for Lambda

We want to run the Lambda function whenever an object is created in the S3 bucket, and the CDK code below does that.

readS3ObjFn.addEventSource(
      new S3EventSource(bucket, {
        events: [s3.EventType.OBJECT_CREATED],
      })
    );
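As a side note, an equivalent way to wire this up from the bucket side is an S3 bucket notification with a Lambda destination; a minimal sketch:

import * as s3n from 'aws-cdk-lib/aws-s3-notifications';

// Equivalent wiring from the bucket side: notify the function on object creation
bucket.addEventNotification(
  s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(readS3ObjFn)
);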

Lambda function code

We are just printing the contents of the uploaded S3 object; you can process the file however your application needs. Below is the Lambda function code that prints the contents of the file.

import { S3Event } from 'aws-lambda';
import * as AWS from 'aws-sdk';

export const handler = async (
  event: S3Event,
  context: any = {}
): Promise<any> => {
  for (const record of event.Records) {
    const bucketName = record?.s3?.bucket?.name || '';
    const objectKey = record?.s3?.object?.key || '';

    const s3 = new AWS.S3();
    const params = { Bucket: bucketName, Key: objectKey };
    const response = await s3.getObject(params).promise();
    const data = response.Body?.toString('utf-8') || '';
    console.log('file contents:', data);
  }
};
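The handler above relies on the AWS SDK v2 that ships with the Node.js 16 runtime. If you later move to Node.js 18 or newer, the runtime bundles AWS SDK v3 instead, so a sketch of the same handler using @aws-sdk/client-s3 might look like this:

import { S3Event } from 'aws-lambda';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucketName = record?.s3?.bucket?.name || '';
    // Object keys arrive URL-encoded in S3 event notifications
    const objectKey = decodeURIComponent(
      (record?.s3?.object?.key || '').replace(/\+/g, ' ')
    );

    const response = await s3.send(
      new GetObjectCommand({ Bucket: bucketName, Key: objectKey })
    );
    const data = (await response.Body?.transformToString('utf-8')) || '';
    console.log('file contents:', data);
  }
};

If you do switch, remember to adjust the externalModules entry in the bundling options as well, since 'aws-sdk' no longer exists in the newer runtimes.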

Create a Serverless application

We’ll be using our Lambda stack in our application stage; let’s call it MyApplication. You can name it whatever you want.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { AwsS3LambdaStack } from './lambda-stack';
export class MyApplication extends cdk.Stage {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);

    const lambdaStack = new AwsS3LambdaStack(this, 'lambdaStack');
  }
}

Please note that we are extending the cdk.Stage construct so we can use this as a stage in our pipeline.

Adding the stage to our pipeline

You can use the CDK code below to add our application as a stage in the pipeline

pipeline.addStage(new MyApplication(this, 'lambdaApp'));

Commit and push these changes to the main branch. Please note that we do not need to deploy the stack manually this time.

The CodePipeline will self-mutate, adding the new stage; it will then deploy the S3 bucket and create the Lambda function.
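Because of self-mutation, extending the pipeline later is just more code. For example, a sketch of adding separate staging and production stages could look like this; the account IDs and regions below are placeholders:

// Placeholder accounts/regions: adjust to your own environments
pipeline.addStage(
  new MyApplication(this, 'lambdaApp-staging', {
    env: { account: '111111111111', region: 'us-east-1' },
  })
);

pipeline.addStage(
  new MyApplication(this, 'lambdaApp-prod', {
    env: { account: '222222222222', region: 'us-east-1' },
  }),
  {
    // Require a manual approval before the production stage deploys
    pre: [new pipelines.ManualApprovalStep('PromoteToProd')],
  }
);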


Testing

To test our serverless app, I upload a simple text file to the bucket (screenshot shown below).

Our Lambda function is invoked, and you can verify this in its execution logs.

Conclusion

I hope you learned how to build CI/CD pipelines using CDK Pipelines. Thanks to the self-mutation feature, building a CI/CD pipeline is much easier.
