A Comprehensive Guide to Deployment and Infrastructure: AWS, NestJS, MongoDB
November 16, 2024
This article provides a detailed overview of the deployment process and infrastructure for a NestJS and MongoDB stack on AWS. It covers everything from Git branching strategies to AWS infrastructure, CI/CD pipelines, database setup, and security considerations.
Git Flow & CI/CD Pipeline

This section outlines the branching strategy and the overall CI/CD pipeline. We utilize a Git flow-inspired approach.

  • main branch: Represents the production-ready code. Only thoroughly tested and approved code is merged into main. Direct commits to main are prohibited.
  • develop branch: Serves as the integration branch for new features and bug fixes. Pull Requests (PRs) are merged into develop after code review and passing automated tests.
  • Feature branches: Created from develop for individual features or significant bug fixes. Naming convention: feature/short-description. These branches are short-lived and merged back into develop via PRs.
  • Release branches: Created from develop when preparing for a new release. Naming convention: release/v1.23.4 (using the version number). Used for final testing, bug fixes, and version bumping. Once ready, merged into both main and develop.
  • Hotfix branches: Created from main to address critical production issues. Naming convention: hotfix/short-description. After the fix is applied, they are merged into both main and develop (and potentially a release branch if one is active).
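Concretely, the lifecycle of these branches can be sketched with standard Git commands (the version number and branch names follow the examples above):

bash
# Start a feature from develop
git checkout develop && git pull
git checkout -b feature/short-description
# ...commit work, then open a PR targeting develop...

# Cut a release branch from develop
git checkout -b release/v1.23.4 develop

# Once the release is ready, merge into main and develop, then tag
git checkout main && git merge --no-ff release/v1.23.4
git tag -a v1.23.4 -m "Release v1.23.4"
git checkout develop && git merge --no-ff release/v1.23.4
git push origin main develop v1.23.4

# Critical production fixes branch off main
git checkout -b hotfix/short-description main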

    CI/CD Pipeline Overview:

  • Code Commit: Developers commit code to their feature branches.
  • Pull Request: A PR is created to merge the feature branch into develop.
  • Automated Build: The CI/CD system (e.g., Jenkins, GitLab CI, CircleCI, AWS CodePipeline) triggers a build upon PR creation. This includes:
  • Code compilation (if applicable).
  • Linting and code style checks.
  • Unit tests.
  • Code Review: Code reviewers examine the changes for quality, correctness, and adherence to standards.
  • Merge to develop: If the build passes and the code review is approved, the PR is merged into develop.
  • Automated Deployment to Staging: A merge to develop triggers a deployment to a staging environment for further testing.
  • Integration and End-to-End Tests: Automated integration and end-to-end tests are run in the staging environment.
  • Release Branch Creation: When features are ready for release, a release branch is created from develop.
  • Deployment to Pre-Production: The release branch is deployed to a pre-production environment that mirrors production as closely as possible.
  • Final Testing: Final testing (e.g., UAT, performance testing) is conducted in pre-production.
  • Merge to main and develop: The release branch is merged into both main (for production) and develop (to keep it up-to-date).
  • Automated Deployment to Production: The merge to main triggers a deployment to the production environment.
  • Tagging: The production release is tagged in Git (e.g., v1.23.4).
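As a minimal sketch, the automated build stage for a NestJS service could run conventional npm scripts named lint, test, and build (the exact script names are an assumption, not taken from the original):

bash
# Automated build stage, triggered on PR creation
npm ci            # install dependencies exactly as pinned in the lockfile
npm run lint      # linting and code style checks
npm test          # unit tests
npm run build     # compile the TypeScript sources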
    AWS Infrastructure & CI/CD Pipeline Requirements

    This section details the AWS infrastructure and specific requirements for the CI/CD pipeline.


    AWS Services:

  • Amazon ECS (Elastic Container Service): Used to run and manage containerized applications.
  • Amazon ECR (Elastic Container Registry): Used to store and manage Docker container images.
  • Amazon RDS (Relational Database Service): Used for managed relational database instances (e.g., PostgreSQL, MySQL) where relational data is needed.
  • Amazon S3 (Simple Storage Service): Used for storing static assets, backups, and potentially logs.
  • Amazon CloudFront: Content Delivery Network (CDN) for fast content delivery to users.
  • AWS WAF (Web Application Firewall): Used to protect web applications from common attacks.
  • AWS IAM (Identity and Access Management): Used to manage users, permissions, and roles within AWS.
  • Amazon CloudWatch: Used for monitoring and logging.
  • AWS CodePipeline, CodeBuild, CodeDeploy: Potentially used for the CI/CD pipeline (alternatively, a third-party CI/CD tool could be used).
  • AWS VPC (Virtual Private Cloud): Provides an isolated network environment for the infrastructure.
  • AWS Elastic Load Balancing (ELB): Distributes traffic across multiple ECS tasks.
    CI/CD Pipeline Requirements:

  • Automated Builds: The pipeline must automatically build Docker images upon code changes.
  • Automated Testing: Unit, integration, and end-to-end tests must be executed automatically.
  • Automated Deployments: Deployments to staging, pre-production, and production environments must be automated.
  • Rollback Capability: The pipeline must support easy rollbacks to previous versions in case of deployment failures.
  • Infrastructure as Code (IaC): Infrastructure should be defined as code (e.g., using Terraform; see the Terraform Requirements section below) for consistency and reproducibility.
  • Monitoring and Alerting: The pipeline should integrate with monitoring tools (CloudWatch) to provide alerts for failures or performance issues.
  • Security Scanning: Integrate security scanning tools (e.g., container image vulnerability scanning) into the pipeline.
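As an example of the security-scanning requirement, an image vulnerability scan could be added as a pipeline step; Trivy is used here purely for illustration, since the original does not name a specific scanner, and the image URI variable is a placeholder:

bash
# Fail the pipeline when the freshly built image contains high or critical vulnerabilities
# (Trivy is an example scanner; any equivalent tool could be substituted)
trivy image --exit-code 1 --severity HIGH,CRITICAL "${IMAGE_URI}"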
    MongoDB Setup

    This section describes the MongoDB setup.

  • Deployment Type: Likely a managed MongoDB Atlas cluster (recommended for ease of management and scalability) or a self-managed cluster on EC2 instances.
  • Cluster Configuration: Details about the cluster size, instance types, storage capacity, and replica set configuration. This should include specifics like the number of nodes, shard configuration (if sharding is used), and whether it's a replica set or a sharded cluster.
  • Version: The specific version of MongoDB being used (e.g., MongoDB 6.0).
  • Security:
  • Authentication: Details on how users and applications authenticate to the database (e.g., SCRAM, x.509 certificates).
  • Authorization: Role-Based Access Control (RBAC) configuration, defining user roles and permissions.
  • Network Security: Configuration of network access controls (e.g., VPC peering, IP whitelisting).
  • Encryption: Encryption at rest (using AWS KMS or MongoDB Atlas encryption) and in transit (using TLS/SSL).
  • Backup and Recovery: Details on the backup strategy, including frequency, retention policy, and recovery procedures. MongoDB Atlas provides automated backups.
  • Monitoring and Alerting: Configuration of monitoring and alerting for database performance and health (using MongoDB Atlas monitoring or CloudWatch).
  • Connection Strings: Examples of connection strings used by applications to connect to the database, including any necessary credentials or parameters.
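For illustration, a MongoDB Atlas-style SRV connection string typically has the following shape (user, host, and database names are placeholders):

bash
# Placeholder connection string; real credentials should come from a secrets manager, not source control
export DATABASE_CONNECTION_STRING="mongodb+srv://<user>:<password>@<cluster-host>/<database>?retryWrites=true&w=majority"

The script below takes such a connection string as its first argument and runs any pending schema migrations with the migrate-mongo npm package: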
    bash
    #!/bin/bash
    ## input
    DATABASE_CONNECTION_STRING="$1"
    if [ -z "$DATABASE_CONNECTION_STRING" ]; then
      echo "DATABASE_CONNECTION_STRING is required! Exiting."
      exit 1
    fi

    ## defaults
    MIGRATE_MONGODB_DIR="./mongodb-migrations"
    MIGRATE_MONGO_NPM_VERSION="11.0.0"

    export DATABASE_CONNECTION_STRING="${DATABASE_CONNECTION_STRING}"

    if [ -d "$MIGRATE_MONGODB_DIR" ]; then
      # go to the mongodb-migrations directory
      cd "$MIGRATE_MONGODB_DIR" || exit 1
      # install the migrate-mongo npm package
      npm install -g "migrate-mongo@${MIGRATE_MONGO_NPM_VERSION}"
      # run the pending migrations
      migrate-mongo up
      # check the migration result
      if [ $? -ne 0 ]; then
        echo "MongoDB migration failed!"
        exit 1
      fi
      echo "MongoDB migration completed successfully!"
      migrate-mongo status
    else
      echo "${MIGRATE_MONGODB_DIR} not found! Skipping."
    fi
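Assuming the script above is saved as run-mongodb-migrations.sh (the original does not give it a file name), it could be invoked like this:

bash
chmod +x run-mongodb-migrations.sh
./run-mongodb-migrations.sh "mongodb+srv://<user>:<password>@<cluster-host>/<database>"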
    Terraform Requirements

    This section outlines the requirements for using Terraform to manage infrastructure.

  • Terraform Version: The specific version of Terraform being used (e.g., Terraform v1.3).
  • State Management: How Terraform state is managed. Remote state storage (e.g., using Amazon S3 and DynamoDB for locking) is highly recommended for team collaboration and to prevent state corruption.
  • Modules: Use of Terraform modules to organize and reuse infrastructure code. This promotes consistency and reduces code duplication. Modules should be created for common infrastructure components (e.g., VPC, ECS cluster, RDS database).
  • Variables and Outputs: Proper use of Terraform variables to parameterize infrastructure configurations and outputs to expose important information (e.g., resource IDs, endpoints).
  • Workspaces: Use of Terraform workspaces to manage multiple environments (e.g., staging, production) with the same configuration.
  • Coding Standards: Adherence to Terraform coding best practices, including consistent naming conventions, formatting, and commenting.
  • CI/CD Integration: Integration of Terraform into the CI/CD pipeline for automated infrastructure provisioning and updates. This typically involves running terraform plan and terraform apply as part of the pipeline.
  • Security: Secure handling of sensitive data (e.g., passwords, API keys) using Terraform's sensitive variable handling or a secrets management solution (e.g., AWS Secrets Manager).
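A minimal sketch of how these Terraform steps might run inside the pipeline, assuming an S3 backend with DynamoDB locking and one workspace per environment (bucket, table, and workspace names are placeholders):

bash
# Initialize with the remote S3 backend (values are placeholders)
terraform init \
  -backend-config="bucket=<state-bucket>" \
  -backend-config="key=infrastructure/terraform.tfstate" \
  -backend-config="dynamodb_table=<lock-table>"

# Select the workspace for the target environment, creating it if it does not exist
terraform workspace select staging || terraform workspace new staging

# Plan and apply non-interactively, as a CI job would
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan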
    Service Deployment Process to ECS Cluster

    This section provides a step-by-step guide on deploying services to the ECS cluster.

  • Build the Docker Image:
  • Create a Dockerfile that defines the application's environment and dependencies.
  • Build the Docker image using docker build.
  • Tag the image with a unique identifier (e.g., commit hash, version number).
  • Push the Image to ECR:
  • Authenticate to the ECR registry using aws ecr get-login-password.
  • Tag the image with the ECR repository URI.
  • Push the image to ECR using docker push.
  • Update the ECS Task Definition:
  • Create or update an ECS task definition. A task definition is a JSON file that describes the containers that make up a service.
  • Specify the Docker image URI from ECR.
  • Configure resource limits (CPU, memory).
  • Define environment variables.
  • Configure port mappings.
  • Set up logging (e.g., sending logs to CloudWatch).
  • Update the ECS Service:
  • Create or update an ECS service. An ECS service manages the desired number of instances of a task definition.
  • Specify the updated task definition.
  • Configure the desired number of tasks.
  • Configure the deployment strategy (e.g., rolling update, blue/green deployment).
  • Configure the load balancer (if applicable).
  • Deploy the Service:
  • Use the AWS CLI, SDK, or console to update the ECS service. This will trigger a deployment of the new task definition.
  • ECS will manage the deployment process, replacing old tasks with new tasks according to the deployment strategy.
  • Monitor the Deployment:
  • Monitor the deployment progress using the ECS console or CloudWatch.
  • Check for any errors or issues.
  • Rollback (if necessary):
  • If the deployment fails or causes issues, roll back to the previous task definition using the ECS console or CLI.
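Taken together, the CLI side of one deployment might look roughly like the following (account ID, region, names, and file paths are placeholders; in practice the CI/CD pipeline runs these steps):

bash
# 1. Build the image and tag it with the commit hash
docker build -t <service_name>:"${GIT_COMMIT}" .

# 2. Authenticate to ECR, retag, and push
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <account_id>.dkr.ecr.us-east-1.amazonaws.com
docker tag <service_name>:"${GIT_COMMIT}" <account_id>.dkr.ecr.us-east-1.amazonaws.com/<service_name>:"${GIT_COMMIT}"
docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/<service_name>:"${GIT_COMMIT}"

# 3. Register the updated task definition (JSON referencing the new image URI)
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 4. Point the service at the new task definition revision to start the rollout
aws ecs update-service --cluster <cluster_name> --service <service_name> \
  --task-definition <task_family>:<new_revision>

# Rollback, if needed: point the service back at the previous revision
aws ecs update-service --cluster <cluster_name> --service <service_name> \
  --task-definition <task_family>:<previous_revision>

The annotated JSON below shows the per-service configuration used when defining such an ECS service, including ports, roles, the container image, environment variables, and secrets: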
    json
    "<service_name>": {
      "service_ports": [3000],                    #--> default port for the service (keep the default)
      "security": "low",                          #--> sets log retention days; refer to the README.md in the terraform repo for details
      "service_enable_autoscaling": true,         #--> autoscaling; should stay enabled (keep the default)
      "service_discovery_dns_record_a_ttl": 60,   #--> TTL for private DNS records (keep the default)
      "task_role_name": "ecs-execution-role",     #--> the task role for the ECS service
      "execution_role_name": "ecs-execution-role",#--> the execution role for the ECS service
      "service_health_check_path": "/<service_name>/healthcheck",   #--> health check path of the public API endpoint. For example, for leaderboard: https://api-uat.claynosaurz.com/leaderboard/healthcheck
      "leaderboard_config_path_pattern": "/<service_name>/*",       #--> path pattern of the public API endpoint. For example, for leaderboard: https://api-uat.claynosaurz.com/leaderboard/*
      "task_container_definitions": [
        {
          "name": "<service_name>-container",     #--> name of the service container. For example, for leaderboard: 'leaderboard-container'
          "image": "975050041486.dkr.ecr.us-east-1.amazonaws.com/claynosaurz-<env>-<service_name>:<tag_version>",   #--> ECR image URL. For the tag version, you can use 'latest' and update it after the image has been built and pushed to ECR.
          "environment":                          #--> environment variables for the ECS service; Gameloft will share the detailed list
          {
            "SERVICE_NAME": "leaderboard",
            "ENV": "uat",
            "PORT": "3000",
            "DATABASE_CONNECTION_LIMIT": "100",
            "DEBUG_ENABLE_DATABASE_QUERY_LOG": "1",
            "DEBUG_ENABLE_DATABASE_ERROR_LOG": "1",
            "DEBUG_ENABLE_HTTP_REQUEST_BODY_LOG": "1",
            "DEBUG_ENABLE_HTTP_REQUEST_HEADERS_LOG": "1",
            "DISABLE_PRETTY_LOGGER": "1",
            "DISABLE_HEALTHCHECK_LOG": "1",
            "CORS_ORIGINS": "*",
            "APP_REQUEST_TIMEOUT": "30",
            "PATH_PREFIX": "leaderboard",
            "DEBUG_ENABLE_SWAGGER": "1",
            "REFRESH_TOKEN_EXPIRES_IN": "86400",
            "ACCESS_TOKEN_EXPIRES_IN": "600",
            "ADDITIONAL_LOG_REDACT_KEYS": "*",
            "APM_ENABLE": true,
            "SENTRY_ENVIRONMENT": "leaderboard.uat"
          },
          "secrets":                              #--> secrets for the service; like the environment variables, Gameloft will share them
          [
            "DATABASE_CONNECTION_STRING",
            "DATA_TRANSFER_KEY",
            "DATA_TRANSFER_IV",
            "INTERNAL_KEY",
            "SENTRY_DSN",
            "REDIS_CONNECTION_STRING_READ",
            "REDIS_CONNECTION_STRING_WRITE"
          ],
          "ports": [3000],                        #--> ports for this container. One service can contain multiple containers with different ports.
          "vars": {
            "linuxParameters": { "initProcessEnabled": true },   #--> keep the default
            "private_registry": false             #--> set to true if you are using a private Docker registry
          }
        }
      ]
    }
    CMS Frontend CI/CD Workflow

    This section describes the CI/CD workflow specifically for the CMS frontend, likely deployed using Vercel.

  • Code Commit: Developers commit code to their feature branches.
  • Pull Request: A PR is created to merge the feature branch into develop (or a similar integration branch).
  • Automated Build (Vercel): Vercel automatically detects the PR and triggers a build. This typically involves:
  • Installing dependencies (e.g., npm install).
  • Running build scripts (e.g., npm run build).
  • Running linters and tests.
  • Preview Deployment (Vercel): Vercel automatically creates a preview deployment for the PR. This allows reviewers to test the changes in a live environment.
  • Code Review: Code reviewers examine the changes and the preview deployment.
  • Merge to Integration Branch: If the build passes and the code review is approved, the PR is merged.
  • Automated Deployment to Staging (Vercel): Vercel automatically deploys the changes to a staging environment.
  • Testing: Further testing (e.g., manual testing, automated end-to-end tests) is conducted in the staging environment.
  • Merge to main: When the changes are ready for production, they are merged into the main branch.
  • Automated Deployment to Production (Vercel): Vercel automatically deploys the changes to the production environment.
  • Caching (Vercel): Vercel automatically handles caching of static assets for optimal performance.
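For reference, the same build and deploy steps can also be reproduced locally with the Vercel CLI; this is not part of the Git-triggered workflow above, just a sketch of the equivalent commands:

bash
vercel pull                        # pull project settings and environment variables
vercel build                       # run the production build locally
vercel deploy --prebuilt --prod    # deploy the prebuilt output to production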
    Backend Services CI/CD Workflow

    This section describes the CI/CD workflow for the backend services, deployed to the ECS cluster. It follows the general CI/CD pipeline described in the Git Flow & CI/CD Pipeline section above, with specifics for backend services.

  • Code Commit: Developers commit code to their feature branches.
  • Pull Request: A PR is created to merge the feature branch into develop.
  • Automated Build: The CI/CD system triggers a build. This includes:
  • Code compilation (if applicable).
  • Linting and code style checks.
  • Unit tests.
  • Building the Docker image (as described in the Service Deployment Process to ECS Cluster section).
  • Code Review: Code reviewers examine the changes.
  • Merge to develop: If the build passes and the code review is approved, the PR is merged.
  • Automated Deployment to Staging: The CI/CD system triggers a deployment to the staging environment. This involves:
  • Pushing the Docker image to ECR.
  • Updating the ECS task definition.
  • Updating the ECS service.
  • Integration and End-to-End Tests: Automated tests are run in the staging environment.
  • Release Branch Creation: A release branch is created from develop.
  • Deployment to Pre-Production: The release branch is deployed to a pre-production environment.
  • Final Testing: Final testing is conducted in pre-production.
  • Merge to main and develop: The release branch is merged into both main and develop.
  • Automated Deployment to Production: The merge to main triggers a deployment to the production environment. This involves the same steps as the staging deployment (pushing the image, updating the task definition, and updating the service).
  • Tagging: The production release is tagged in Git.
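After the production rollout, a simple smoke check against the service health endpoint can confirm that the new tasks are serving traffic; the path pattern comes from the service configuration shown earlier, and the domain is a placeholder:

bash
# Fails with a non-zero exit code if the health check endpoint is not responding
curl -fsS https://<api-domain>/<service_name>/healthcheck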
