Managing Infrastructure with GitLab CI/CD for Terraform: Plan, Validate, Apply, Destroy
Prerequisites -
GitLab: GitLab is a platform where developers store and collaborate on their code, acting as a virtual workspace for coding projects.
CI/CD: CI means "Continuous Integration," and CD means "Continuous Deployment" or "Continuous Delivery."
Continuous Integration (CI): Every time a developer makes changes to the code, CI tools automatically check if those changes work well with the existing code, acting as a "test run" to catch any mistakes early on.
Continuous Deployment/Delivery (CD): Once the code is tested and ready, CD tools automatically release it to production, functioning like an automatic delivery system for software.
In Simple Words: GitLab CI/CD is a system that helps developers automatically check their code for errors and, if everything is fine, automatically release their software without manual effort. It's like having robots that test and deliver your code, saving time and reducing mistakes.
Create an IAM user
Navigate to the AWS console
Search for IAM → Users → Create user: 1. Give it a name. 2. Click "Attach policies directly". 3. Select the checkbox for the AdministratorAccess policy. 4. Create the user.
Open the user's "Security credentials" tab → Click "Create access key" → Select the "Command Line Interface (CLI)" option → Acknowledge the confirmation → Next → Create access key → Download the .csv file.
Create S3 Bucket
Navigate to the AWS console, search for S3, and create an S3 bucket. This bucket will store the Terraform state file.
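If you prefer the CLI, the bucket can also be created like this. A minimal sketch: the bucket name below is only a placeholder (S3 bucket names must be globally unique), and outside us-east-1 the LocationConstraint must match your region:

# Create the state bucket in ap-south-1 (replace the placeholder name with your own)
aws s3api create-bucket \
  --bucket <your-unique-terraform-state-bucket> \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1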
Create a GitLab Account
Now, let's create a new project/repository.
Variables setup in GitLab (Secrets)
Inside your repository → Settings → CI/CD → Expand the Variables section → Click "Add variable" and add the AWS credentials as shown below.
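To confirm the variables actually reach the pipeline jobs without printing the secret values, a quick check like the one below can be dropped into any job's script section. This assumes the variables are named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the names the .gitlab-ci.yml later in this post expects:

# Report whether each CI/CD variable is set, without revealing its value
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  if [ -n "$(printenv "$v")" ]; then echo "$v is set"; else echo "$v is MISSING"; fi
done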
Terraform Files
Create a blank repository in GitLab and add these files.
We have to add the AMI ID of the instance (for example, ami-053b12d3152c0cc71) and the key pair name in this file.
resource "aws_security_group" "Jenkins-sg" {
name = "Jenkins-Security Group"
description = "Open 22,443,80,8080"
# Define a single ingress rule to allow traffic on all specified ports
ingress = [
for port in [22, 80, 443, 8080] : {
description = "TLS from VPC"
from_port = port
to_port = port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
]
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Jenkins-sg"
}
}
resource "aws_instance" "web" {
ami = "ami-053b12d3152c0cc71" #change Ami if you different region
instance_type = "t2.medium"
key_name = "a" #change key name
vpc_security_group_ids = [aws_security_group.Jenkins-sg.id]
user_data = templatefile("./install_jenkins.sh", {})
tags = {
Name = "Jenkins-sonar"
}
root_block_device {
volume_size = 8
}
}
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "ap-south-1" # change to desired region
}
#!/bin/bash
exec > >(tee -i /var/log/user-data.log)
exec 2>&1

sudo apt update -y
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible -y
sudo apt install git -y

mkdir Ansible && cd Ansible
pwd
git clone https://github.com/Ankita2295/Terraform-GitlabCICD.git
cd Terraform-GitlabCICD   # git clone creates a directory named after the repository
ansible-playbook -i localhost Jenkins-playbook.yml
We have to mention the S3 bucket name in this file (the backend configuration):
terraform {
  backend "s3" {
    bucket = "<s3-bucket>" # Replace with your actual S3 bucket name
    key    = "Gitlab/terraform.tfstate"
    region = "ap-south-1"
  }
}
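Since this bucket now holds the Terraform state, it is worth enabling versioning on it so an accidental overwrite of terraform.tfstate can be rolled back. A minimal sketch with the AWS CLI (replace <s3-bucket> with your bucket name):

# Turn on versioning for the state bucket and confirm it took effect
aws s3api put-bucket-versioning \
  --bucket <s3-bucket> \
  --versioning-configuration Status=Enabled
aws s3api get-bucket-versioning --bucket <s3-bucket>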
GitLab CI/CD configuration
stages:
  - validate
  - plan
  - apply
  - destroy
stages: This section defines the stages in the CI/CD pipeline. In your configuration, you have four stages: validate, plan, apply, and destroy.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
image: specifies the Docker image to use for the GitLab Runner. In this case, you're using the hashicorp/terraform:light image for running Terraform commands. The entrypoint lines set the environment to include commonly used paths.
before_script:
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - rm -rf .terraform
  - terraform --version
  - terraform init
before_script: This section defines commands to run before each job in the pipeline.
The first two lines export the AWS access key ID and secret access key as environment variables, which are used for AWS authentication in your Terraform configuration.
rm -rf .terraform: Removes the locally cached .terraform directory (downloaded providers and backend data) to ensure a clean environment.
terraform --version: Displays the Terraform version for debugging and version confirmation.
terraform init: Initializes Terraform in the working directory, setting up the providers and the S3 backend for Terraform operations.
validate:
  stage: validate
  script:
    - terraform validate
validate: Defines a job named "validate" in the "validate" stage, which checks the Terraform configuration for errors.
script: Specifies the commands to run as part of this job, which in this case is terraform validate, used to check the syntax and structure of your Terraform files.
plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan
plan: This job, in the "plan" stage, creates a Terraform plan by running terraform plan -out=tfplan and saves the plan as an artifact named tfplan.
script: Runs terraform plan -out=tfplan, which generates a plan and saves it as "tfplan" in the working directory.
artifacts: Specifies the artifacts (output files) of this job, indicating that the "tfplan" file should be preserved and passed to later jobs.
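If you ever want to review what the saved plan actually contains, the tfplan artifact can be downloaded from the job and inspected locally with terraform show. A minimal sketch, assuming you have the repository checked out and AWS credentials available for terraform init:

# Inspect the downloaded tfplan artifact
terraform init                # required so Terraform can decode the plan against the backend and providers
terraform show tfplan         # human-readable summary of the planned changes
terraform show -json tfplan   # machine-readable output, handy for automated policy checks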
apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
  dependencies:
    - plan
apply: This job, in the "apply" stage, applies the Terraform plan generated in the previous stage.
script: Runs terraform apply -auto-approve tfplan, which applies the changes recorded in the "tfplan" file.
dependencies: Tells GitLab to fetch the "tfplan" artifact from the "plan" job before this job runs.
destroy:
  stage: destroy
  script:
    - terraform init
    - terraform destroy -auto-approve
  when: manual
  dependencies:
    - apply
destroy: This job, in the "destroy" stage, is meant for removing the resources managed by Terraform.
script: Runs terraform init to set up the Terraform environment and then executes terraform destroy -auto-approve to remove the resources, with the -auto-approve flag allowing for non-interactive execution.
when: manual: Indicates that this job must be manually triggered by a user.
dependencies: Links this job to the "apply" job, so resources are only destroyed after they have been provisioned by a previous "apply" run.
.gitlab-ci.yml
Here is the full GitLab CI/CD configuration file; add it to the repository.
Click on + → Click on New file. Name the file .gitlab-ci.yml.
Copy the following content and add it:
stages:
  - validate
  - plan
  - apply
  - destroy

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - rm -rf .terraform
  - terraform --version
  - terraform init

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan

apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
  dependencies:
    - plan

destroy:
  stage: destroy
  script:
    - terraform init
    - terraform destroy -auto-approve
  when: manual
  dependencies:
    - apply
Click Commit. The pipeline will start automatically. Now click on Build → Pipelines.
It will open like this. Click on validate to see the job output.
Terraform has been initialized and the code validated.
Click on Jobs to go back and check the plan output.
Now go back and check the apply output as well.
Go to the AWS console to check if the EC2 instance is provisioned.
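The same check can be done from the AWS CLI. A minimal sketch, filtering on the Name tag (Jenkins-sonar) given in the instance resource above:

# List running instances tagged Jenkins-sonar with their IDs and public IPs
aws ec2 describe-instances \
  --region ap-south-1 \
  --filters "Name=tag:Name,Values=Jenkins-sonar" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].{Id:InstanceId,State:State.Name,PublicIp:PublicIpAddress}" \
  --output table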
Connect to the instance using PuTTY or MobaXterm and run the following commands.
cd /
cd Ansible                 # directory created by mkdir in the shell script
cd Terraform-GitlabCICD    # cloned repo
ls                         # to see the Ansible playbook
Now come back and check the user-data log:
cd /home/ubuntu
cd /var/log/
ls
cat user-data.log
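Rather than reading the whole log, you can jump straight to the Ansible run summary; the log path comes from the exec line at the top of install_jenkins.sh, and PLAY RECAP is the summary line Ansible prints when a run finishes:

# Show the Ansible run summary and any reported failures
grep -A 3 "PLAY RECAP" /var/log/user-data.log
grep -i "failed=" /var/log/user-data.log | tail -n 5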
The Ansible playbook has finished running and Jenkins is installed.
Copy the public IP of the EC2 instance.
Open <ec2-public-ip>:8080 in a browser.
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy the password and sign in.
Destroy
Return to GitLab and manually trigger the destroy job to delete the resources: click on >> in the stages column, select destroy, and then click Run job.
Destroy is completed.
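To double-check that nothing was left behind, the resources can be queried again from the AWS CLI; after a successful destroy the instance should show as terminated (or not at all) and the security group query should return nothing:

# Instance state after destroy (expect "terminated" or an empty result)
aws ec2 describe-instances \
  --region ap-south-1 \
  --filters "Name=tag:Name,Values=Jenkins-sonar" \
  --query "Reservations[].Instances[].State.Name"

# Security group should no longer exist (expect an empty list)
aws ec2 describe-security-groups \
  --region ap-south-1 \
  --filters "Name=group-name,Values=Jenkins-Security Group" \
  --query "SecurityGroups[].GroupName"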
The complete CI/CD pipeline looks like this.
In conclusion, GitLab CI/CD simplifies and speeds up the software development process, allowing developers to concentrate on creating innovative and valuable software while the CI/CD pipeline manages the rest. As you start using GitLab CI/CD, remember that it's not just about automation; it's about delivering better software faster, a goal every development team can support.
Embrace GitLab CI/CD to enhance your software projects with automation, collaboration, and quality assurance, allowing your code to improve rapidly and reliably.
Welcome to GitLab CI/CD, where your code becomes efficient, resilient, and agile, ready for success in the digital age.