Using Flux, a GitOps Tool, with Amazon Elastic Kubernetes Service (EKS) - Part 1

Introduction

Have you ever wondered if there is a better way to manage the lifecycle of an application: deploying it, scaling it, and managing the infrastructure it needs, all while storing the code in Git and versioning every change? There is, and it is called GitOps.

This is the first part of a series on using Flux, a GitOps tool, with Amazon Elastic Kubernetes Service (EKS). This guide explains what GitOps is and shows you how to use Flux with EKS. You will use Flux to deploy various Kubernetes services and applications to EKS. In addition, you will build a container image for a React application and push it to a private Amazon Elastic Container Registry (ECR) repository.

You can access all of the code used in this guide in my GitHub repository.

Before we begin, let's define what GitOps is and what GitOps tool we will use in this guide.

What is GitOps?

GitOps is a software development and operations (DevOps) methodology that leverages the principles of version control and collaboration from Git to manage the deployment and operation of applications and infrastructure more efficiently. The core idea behind GitOps is to use Git repositories as the source of truth for both application code and infrastructure configuration. This approach aims to streamline and automate deploying, managing, and monitoring software systems.

GitOps works on a few foundational principles (a short illustrative workflow follows the list):

  1. Declarative Infrastructure: In a GitOps workflow, all aspects of your application's infrastructure, including configuration files, deployment manifests, and environment settings, are stored as code in a Git repository. This means that the desired state of your application's infrastructure is defined declaratively in these code files.
  2. Version-Controlled System: All the declarative representations of the desired system state are stored in a Git repository. This includes application code, configuration, deployment manifests, and more.
  3. Continuous Deployment: Any infrastructure or application code changes are made by committing and pushing changes to the Git repository. The Git repository serves as a single source of truth for both the development and operations teams. This is known as "continuous deployment" or "continuous delivery," as changes are automatically propagated through the pipeline once pushed to the repository.
  4. Automated Convergence: Continuous integration and continuous deployment (CI/CD) tools automatically converge the actual system state towards the desired state represented in the Git repository. If someone changes the configuration in Git, CI/CD tools apply those changes to the environment.
  5. Pull-Based Deployments: Rather than pushing changes out to environments, the environments (via agents or operators) pull the desired state from the Git repository and enact any necessary changes.
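
For example, under this model an operational change such as scaling an application is made entirely through Git. A hypothetical workflow (the file path and branch are placeholders):

# Edit the desired state, e.g., bump the replica count in a deployment manifest
vim deploy/app.yaml
git add deploy/app.yaml
git commit -m "Scale app to 3 replicas"
git push origin main

Once the commit lands on the tracked branch, the in-cluster agent pulls it and converges the cluster to the new desired state; nobody runs kubectl against production by hand.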

Benefits of GitOps:

  1. Versioning and Auditability: Since everything is in Git, you have an audit trail of who changed what and when. It becomes easy to roll back to a previous desired state if necessary.
  2. Enhanced Developer Productivity: Developers are already familiar with Git. By using Git as the deployment mechanism, they can apply the same workflows they already use for source code to infrastructure.
  3. Consistency and Reliability: By defining and storing configurations declaratively and applying them automatically, it's possible to ensure consistent environments.
  4. Faster Recovery: If something goes wrong in production, the correct desired state is in the Git repository. Systems can quickly revert to a previous, known-good state, or problematic changes can be quickly identified and corrected.
  5. Improved Collaboration: Since everything is stored in Git, teams can collaborate using merge requests, reviews, and discussions around the code.

Popular tools and platforms implementing the GitOps methodology include Flux (also known as Flux CD, originally created by Weaveworks), Argo CD, and Jenkins X. These tools provide automation and integration capabilities to help facilitate the GitOps workflow.

GitOps is a powerful approach that improves the reliability, traceability, and efficiency of software deployment and operations by integrating version control practices with DevOps principles.

The GitOps model particularly shines in Kubernetes environments due to Kubernetes' declarative nature, but it is not limited to them. Like any methodology, however, GitOps is not a silver bullet and may not be viable for every scenario or organization. Proper tooling, understanding, and training are essential to reaping the full benefits of GitOps.

Flux, a GitOps Tool

Flux, sometimes called Flux CD, is an open-source tool for automating the deployment and lifecycle management of applications and infrastructure in Kubernetes clusters. It's part of the broader ecosystem of tools that facilitate continuous delivery and GitOps practices in Kubernetes environments.

The core idea behind Flux is to maintain a declarative representation of the desired state of your Kubernetes resources in a Git repository. This GitOps approach ensures that any changes to the infrastructure or application configurations are made through code changes in the Git repository, which triggers Flux to synchronize and apply those changes to the Kubernetes cluster.

Key features of Flux include:

1. Automated Synchronization: Flux monitors the Git repository for changes and automatically synchronizes the cluster with the desired state defined in the repository.

2. Multi-Environment Support: Flux CD supports managing multiple environments (e.g., development, staging, production) with different configurations and policies.

3. Versioning: The Git repository serves as a versioned history of your infrastructure and application changes, allowing you to track changes over time and roll back if needed.

4. Release Automation: Flux can be integrated with CI/CD pipelines to automate the release process, triggering deployments when new code is merged to specific branches.

5. Policy Enforcement: Flux supports applying policies and rules to ensure that only approved changes are applied to the cluster, enhancing security and compliance.

6. Integrations: It can be used alongside other tools, like Helm, Kubernetes Operators, and more, to manage a wide variety of resources in your cluster.

Flux promotes the GitOps approach, which emphasizes using version-controlled Git repositories as the single source of truth for your application and infrastructure configurations. This approach enhances collaboration, transparency, and traceability while reducing the risk of manual errors and ensuring consistent deployments. Additional information on Flux can be found here.
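
To make these features concrete, here is a minimal, illustrative sketch of the two custom resources at the heart of Flux: a GitRepository that tells Flux where to pull the desired state from, and a Kustomization that tells it which path in that repository to apply. All names, namespaces, and URLs below are placeholders.

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo                 # placeholder name
  namespace: flux-system
spec:
  interval: 1m                   # how often Flux polls the repository
  url: https://github.com/example-org/app-config   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m                  # how often Flux reconciles cluster state
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy                 # directory of manifests to apply
  prune: true                    # remove cluster objects deleted from Git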

Technologies we are going to use:

  • HashiCorp Terraform
  • Flux
  • GitHub
  • Amazon Elastic Kubernetes Service (EKS)
  • Amazon Elastic Container Registry (ECR)
  • AWS Key Management Service (KMS)
  • Amazon Route 53
  • AWS Certificate Manager (ACM)
  • Amazon Virtual Private Cloud (Amazon VPC)
  • IAM policies and roles

Prerequisites

Before you begin, make sure you have the following:

  • An active AWS account. You can create a new AWS account here.
  • AWS CLI installed and configured. Instructions can be found here.
  • Terraform installed. Instructions can be found here.
  • Helm installed. Instructions can be found here.
  • Kubernetes CLI (kubectl). Instructions can be found here.
  • A GitHub Personal Access Token. Instructions can be found here.

Architecture Overview

Amazon Elastic Kubernetes Service (EKS)

module "eks" {

  source  = "terraform-aws-modules/eks/aws"

  version = "~> 19.15"



  cluster_name = local.eks_cluster_name

  # cluster_version                 = local.eks_cluster_version

  cluster_endpoint_private_access = true

  cluster_endpoint_public_access  = true



  cluster_addons = {

    kube-proxy = {

      most_recent                 = true

      resolve_conflicts           = "OVERWRITE"

      resolve_conflicts_on_update = "OVERWRITE"

    }

    vpc-cni = {

      most_recent                 = true

      resolve_conflicts           = "OVERWRITE"

      resolve_conflicts_on_update = "OVERWRITE"

      service_account_role_arn    = module.vpc_cni_ipv4_irsa_role.iam_role_arn

    }

  }



  vpc_id     = module.vpc.vpc_id

  subnet_ids = module.vpc.private_subnets



  depends_on = [module.vpc]

}

...
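
The snippet above omits the cluster's compute configuration. For reference, a managed node group is typically declared inside the same module block like this (an illustrative sketch with placeholder sizing, not the repository's actual values):

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]  # placeholder instance type
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }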

Amazon Elastic Container Registry (ECR)

module "ecr" {

  source  = "terraform-aws-modules/ecr/aws"

  version = "~> 1.6.0"



  repository_name = local.ecr_repo_name



  create_lifecycle_policy         = true

  repository_image_tag_mutability = "MUTABLE"

  repository_lifecycle_policy = jsonencode({

    rules = [

      {

        rulePriority = 1,

        description  = "Keep last 5 untagged images",

        selection = {

          tagStatus   = "untagged",

          countType   = "imageCountMoreThan",

          countNumber = 4

        },

        action = {

          type = "expire"

        }

      },

      {

        "rulePriority" : 2,

        "description" : "Keep last 5 tagged images",

        "selection" : {

          "tagStatus" : "tagged",

          "tagPrefixList" : ["v"],

          "countType" : "imageCountMoreThan",

          "countNumber" : 4

        },

        "action" : {

          "type" : "expire"

        }

      }

    ]

  })



  repository_force_delete = true



  depends_on = [module.vpc]

}

AWS Certificate Manager (ACM)

resource "aws_acm_certificate" "podinfo" {

  domain_name       = local.podinfo_domain_name

  validation_method = "DNS"



  lifecycle {

    create_before_destroy = true

  }

}

...
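
DNS-validated certificates are typically completed with Route 53 validation records and an aws_acm_certificate_validation resource. A minimal sketch of that standard pattern (illustrative; it assumes a data.aws_route53_zone.public data source that is not shown above):

# Create the CNAME records that ACM uses to prove domain ownership.
resource "aws_route53_record" "podinfo_validation" {
  for_each = {
    for dvo in aws_acm_certificate.podinfo.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = data.aws_route53_zone.public.zone_id  # assumed hosted zone lookup
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}

# Block until ACM reports the certificate as issued.
resource "aws_acm_certificate_validation" "podinfo" {
  certificate_arn         = aws_acm_certificate.podinfo.arn
  validation_record_fqdns = [for record in aws_route53_record.podinfo_validation : record.fqdn]
}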

Amazon Virtual Private Cloud (Amazon VPC)

module "vpc" {

  source  = "terraform-aws-modules/vpc/aws"

  version = "~> 5.0.0"



  private_subnets     = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]

  public_subnets      = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  elasticache_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 64)]



  name                 = local.vpc_name

  cidr                 = local.vpc_cidr

  azs                  = local.azs

  enable_nat_gateway   = true

  single_nat_gateway   = true

  enable_dns_hostnames = true

  enable_dns_support   = true



  create_flow_log_cloudwatch_iam_role             = true

  create_flow_log_cloudwatch_log_group            = true

  enable_dhcp_options                             = true

  enable_flow_log                                 = true

  flow_log_cloudwatch_log_group_retention_in_days = 7

  flow_log_max_aggregation_interval               = 60



  public_subnet_tags = {

    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"

    "kubernetes.io/role/elb"                          = 1

  }



  private_subnet_tags = {

    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"

    "kubernetes.io/role/internal-elb"                 = 1

  }

}
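
The cidrsubnet() expressions above carve the VPC CIDR into one subnet of each type per availability zone. As a concrete example, if local.vpc_cidr were 10.0.0.0/16 (a placeholder; the real value lives in locals.tf), the first availability zone (k = 0) would receive:

cidrsubnet("10.0.0.0/16", 4, 0)  # => 10.0.0.0/20  (private subnet)
cidrsubnet("10.0.0.0/16", 8, 48) # => 10.0.48.0/24 (public subnet)
cidrsubnet("10.0.0.0/16", 8, 64) # => 10.0.64.0/24 (ElastiCache subnet)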

IAM policies and roles

module "load_balancer_controller_irsa_role" {

  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  version = "~> 5.0"



  role_name                              = "${local.eks_iam_role_prefix}-load-balancer-controller"

  attach_load_balancer_controller_policy = true



  oidc_providers = {

    ex = {

      provider_arn               = module.eks.oidc_provider_arn

      namespace_service_accounts = ["kube-system:${local.eks_alb_service_account_name}"]

    }

  }

}

...
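
The role ARN produced by this module is consumed by annotating the controller's Kubernetes service account. One common way to wire that up in Terraform is through a Helm release (an illustrative sketch that assumes a configured helm provider; the repository may install the controller differently):

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = module.eks.cluster_name
  }

  set {
    name  = "serviceAccount.name"
    value = local.eks_alb_service_account_name
  }

  # Annotate the service account with the IRSA role so the controller's
  # pods can assume it for AWS API calls.
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.load_balancer_controller_irsa_role.iam_role_arn
  }
}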

AWS Application Load Balancer (ALB)

Application Load Balancers (ALBs) will be created automatically by the AWS Load Balancer Controller when you deploy the various apps using Flux as the deployment tool. This will be discussed in part 2 of this article.

Amazon Route 53

When you deploy the various apps using Flux, DNS records for the public domains you use will be created automatically in Amazon Route 53. This will be discussed in part 2 of this article.

We just finished reviewing the architecture that the Terraform code will create. Several of the code blocks above are only snippets; please see the Git repository for the complete code.

Setup and Deploy Infrastructure

Follow these steps to set up the environment.

Step 1. Set variables in "locals.tf". Below are some of the variables that should be set; a placeholder sketch follows the list.

  • aws_region
  • aws_profile
  • tags
  • custom_domain_name
  • public_domain
  • react_app_domain_name
  • weave_gitops_domain_name
  • podinfo_domain_name
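
A hypothetical "locals.tf" with these values filled in (every value below is a placeholder; substitute your own region, profile, tags, and domains):

locals {
  aws_region               = "us-east-1"
  aws_profile              = "default"
  tags                     = { Project = "flux-eks", Environment = "dev" }
  custom_domain_name       = "example.com"
  public_domain            = "example.com"
  react_app_domain_name    = "react.example.com"
  weave_gitops_domain_name = "gitops.example.com"
  podinfo_domain_name      = "podinfo.example.com"
}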

Step 2. Update the Terraform S3 backend in the "provider.tf" file; a sample backend block follows the list.

  • bucket
  • key
  • profile
  • dynamodb_table
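
For reference, a hypothetical S3 backend block in "provider.tf" (bucket, key, profile, and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "flux-eks/terraform.tfstate"
    region         = "us-east-1"
    profile        = "default"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}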

Step 3. Initialize Terraform

terraform init

Step 4. Validate the Terraform code

terraform validate

Step 5. Run, review, and save a Terraform plan

terraform plan -out=plan.out

Step 6. Apply the Terraform plan

terraform apply plan.out

Step 7. Review Terraform apply results

After completing the above steps, you should have a running Amazon EKS cluster and an Amazon ECR repository. You can spot-check both with the AWS CLI, as shown below.
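
Cluster, repository, and profile names below are placeholders; the first command prints "ACTIVE" for a healthy cluster:

aws eks describe-cluster --name <cluster-name> --query 'cluster.status' --profile <profile>

aws ecr describe-repositories --repository-names <repo-name> --profile <profile>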

Please stay tuned for part 2 of the series, where we will complete the following tasks:

  • Configure access to Amazon EKS Cluster
  • Build and push Docker image to Amazon ECR
  • Install GitOps and Flux CLI Tools
  • Review script to configure Flux Repository
  • Install Flux to the Amazon EKS Cluster
