
Installing Coginiti Enterprise on EKS

This guide provides comprehensive instructions for installing Coginiti Enterprise on Amazon Elastic Kubernetes Service (EKS). This deployment method leverages AWS-native services for a fully managed Kubernetes experience.

Overview

The deployment process consists of three key stages:

  1. Configuring AWS account and required tools
  2. Provisioning AWS resources including EKS cluster and supporting infrastructure
  3. Installing Coginiti Enterprise on the EKS cluster

Enterprise on AWS

This guide covers Coginiti Enterprise deployment on AWS EKS. For Google Cloud deployments, see the GKE Installation Guides.

Prerequisites

AWS Account and Permissions

  • Valid AWS account with administrative access
  • AWS CLI configured with appropriate credentials
  • Required IAM permissions for EKS, VPC, RDS, and other AWS services

Required AWS IAM Permissions

Ensure your AWS account/role has the following permissions:

  • EKS Full Access - For cluster management
  • EC2 Full Access - For VPC and compute resources
  • RDS Full Access - For PostgreSQL database
  • Route53 Full Access - For DNS management
  • Certificate Manager Full Access - For SSL certificates
  • Application Load Balancer Full Access - For ingress
  • ECR Full Access - For container registry access

Network Connectivity

  • Internet access for pulling container images from ECR
  • Ability to access AWS services via API endpoints

Installing Required Tools

Install the following software on your local machine:

1. AWS CLI

Download and install the AWS CLI from the official documentation.

Configure your credentials:

aws configure
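
After configuring credentials, a quick sanity check that they resolve to the expected account and default region:

aws sts get-caller-identity
aws configure get region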

2. eksctl

Install eksctl for simplified EKS cluster management:

# macOS
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl

# Linux
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

3. kubectl

Install kubectl for Kubernetes cluster management:

# macOS
brew install kubectl

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

4. KOTS CLI

Install the KOTS kubectl plugin following the KOTS CLI Getting Started guide.

Verify installations:

aws --version
eksctl version
kubectl version --client
kubectl kots version

AWS Infrastructure Setup

Required AWS Resources

Networking

  • VPC - Virtual Private Cloud with public and private subnets
  • Internet Gateway - For public subnet internet access
  • NAT Gateway - For private subnet outbound connectivity
  • Route Tables - For traffic routing configuration

Compute and Storage

  • EKS Cluster - Managed Kubernetes cluster
  • Node Groups - EC2 instances for worker nodes
  • RDS PostgreSQL - Managed database with pgvector extension

Security and DNS

  • Security Groups - Network access control
  • Route53 Hosted Zone - DNS management
  • ACM Certificate - SSL/TLS certificate management
  • Application Load Balancer - Ingress traffic management

EKS Cluster Requirements

  • Kubernetes Version: 1.27 or higher
  • Node Group Configuration: Minimum 3 nodes of m5.2xlarge type
  • Storage: gp3 EBS volumes, minimum 256 GB per node
  • Networking: Private subnets with NAT Gateway for outbound access
  • Add-ons: EBS CSI Driver, VPC CNI, CoreDNS

RDS PostgreSQL Requirements

  • Engine Version: PostgreSQL 15.x or 16.x (a version check is sketched after this list)
  • Instance Class: db.t3.medium or higher
  • Storage: Minimum 100 GB with auto-scaling enabled
  • Extensions: pgvector extension support
  • Networking: Private subnets with security group access from EKS
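
Optionally, confirm which PostgreSQL engine versions are currently offered in your region before provisioning (output varies by region and over time):

aws rds describe-db-engine-versions \
--engine postgres \
--query "DBEngineVersions[].EngineVersion" \
--output text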

Step-by-Step Infrastructure Deployment

1. Create EKS Cluster with eksctl

Create a cluster configuration file coginiti-cluster.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: coginiti-enterprise
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  cidr: 10.0.0.0/16
  subnets:
    private:
      us-west-2a: { cidr: 10.0.1.0/24 }
      us-west-2b: { cidr: 10.0.2.0/24 }
      us-west-2c: { cidr: 10.0.3.0/24 }
    public:
      us-west-2a: { cidr: 10.0.101.0/24 }
      us-west-2b: { cidr: 10.0.102.0/24 }
      us-west-2c: { cidr: 10.0.103.0/24 }

managedNodeGroups:
  - name: coginiti-nodes
    instanceType: m5.2xlarge
    minSize: 3
    maxSize: 6
    desiredCapacity: 3
    volumeSize: 256
    volumeType: gp3
    privateNetworking: true
    subnets:
      - us-west-2a
      - us-west-2b
      - us-west-2c
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEBSCSIDriverPolicy

addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver
    serviceAccountRoleARN: arn:aws:iam::{ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole

cloudWatch:
  clusterLogging:
    enableTypes: ["audit", "authenticator", "controllerManager"]

Deploy the cluster:

eksctl create cluster -f coginiti-cluster.yaml

2. Configure kubectl Context

aws eks update-kubeconfig --region us-west-2 --name coginiti-enterprise
kubectl get nodes

3. Install AWS Load Balancer Controller

Create IAM policy for ALB controller:

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json

Create service account:

eksctl create iamserviceaccount \
--cluster=coginiti-enterprise \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::{ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy \
--approve

Install the AWS Load Balancer Controller along with the Gateway API CRDs:

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=coginiti-enterprise \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set featureGates.ALBGatewayAPI=true
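
Verify the controller deployment is running before continuing (the deployment name below follows the Helm chart defaults):

kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=20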

4. Create RDS PostgreSQL Instance

Create DB subnet group:

aws rds create-db-subnet-group \
--db-subnet-group-name coginiti-db-subnet-group \
--db-subnet-group-description "Subnet group for Coginiti PostgreSQL" \
--subnet-ids subnet-12345678 subnet-87654321 subnet-11223344

Create security group for RDS:

aws ec2 create-security-group \
--group-name coginiti-rds-sg \
--description "Security group for Coginiti RDS" \
--vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
--group-id sg-12345678 \
--protocol tcp \
--port 5432 \
--source-group sg-87654321 # EKS worker nodes security group

Create RDS instance:

aws rds create-db-instance \
--db-instance-identifier coginiti-postgres \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 15.4 \
--master-username postgres \
--master-user-password {YOUR_SECURE_PASSWORD} \
--allocated-storage 100 \
--storage-type gp3 \
--storage-encrypted \
--vpc-security-group-ids sg-12345678 \
--db-subnet-group-name coginiti-db-subnet-group \
--backup-retention-period 7 \
--multi-az \
--no-publicly-accessible
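
Provisioning can take a while. A sketch for waiting on the instance and retrieving its endpoint, which you will need later as the database host:

# Wait until the instance is available (may take 10+ minutes)
aws rds wait db-instance-available --db-instance-identifier coginiti-postgres

# Retrieve the endpoint to use as the database host
aws rds describe-db-instances \
--db-instance-identifier coginiti-postgres \
--query "DBInstances[0].Endpoint.Address" \
--output text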

5. Configure RDS Extensions

Connect to the RDS instance and install required extensions:

-- Connect using psql or your preferred PostgreSQL client
-- psql -h coginiti-postgres.cluster-xyz.us-west-2.rds.amazonaws.com -U postgres

-- Create the Coginiti database
CREATE DATABASE coginiti_db;

-- Connect to the Coginiti database
\c coginiti_db;

-- Install the vector extension
CREATE EXTENSION vector;

-- Verify extension installation
SELECT extname FROM pg_extension WHERE extname = 'vector';

6. Set Up DNS and SSL Certificate

Create a Route53 hosted zone (if one does not already exist):

aws route53 create-hosted-zone \
--name example.com \
--caller-reference $(date +%s)

Request ACM certificate:

aws acm request-certificate \
--domain-name coginiti.example.com \
--validation-method DNS \
--region us-west-2

Validate the certificate using DNS validation (follow the ACM console instructions, or use the CLI as sketched below).
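
The sketch below retrieves the validation record ACM expects and then waits for issuance; {CERT_ARN} stands for the certificate ARN returned by the request-certificate command above:

# Look up the DNS validation record ACM expects
aws acm describe-certificate \
--certificate-arn {CERT_ARN} \
--query "Certificate.DomainValidationOptions[0].ResourceRecord"

# After creating that CNAME in your hosted zone, wait for validation to complete
aws acm wait certificate-validated --certificate-arn {CERT_ARN}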

Install Coginiti Enterprise

1. Obtain License and Registry Access

Contact your Coginiti Account Manager to obtain:

  • License file (.yml format)
  • ECR registry credentials for Coginiti's container images
  • Installation instructions specific to your deployment

2. Configure ECR Authentication

If using Coginiti's private ECR registry, first create the coginiti-enterprise namespace (the command is shown in step 3 below), then authenticate and create the image pull secret:

# Get ECR login token
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin {COGINITI_ECR_URI}

# Create Kubernetes secret for ECR access
kubectl create secret docker-registry coginiti-ecr-secret \
--docker-server={COGINITI_ECR_URI} \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password --region us-east-1) \
--namespace coginiti-enterprise

3. Install Coginiti Enterprise using KOTS

Create the namespace (skip if you already created it for the ECR secret in step 2):

kubectl create namespace coginiti-enterprise

Install Coginiti Enterprise:

kubectl kots install coginiti-premium/enterprise-release \
--namespace coginiti-enterprise \
--shared-password {ADMIN_CONSOLE_PASSWORD}

4. Access Admin Console

Set up port forwarding to access the KOTS admin console:

kubectl kots admin-console --namespace coginiti-enterprise

Navigate to http://localhost:8800 in your browser.

Configure Coginiti Enterprise

1. Admin Console Login

Enter your admin console password to access the configuration interface.

2. Upload License File

Upload the license file provided by your Coginiti Account Manager.

3. Configure Application Settings

Superuser Account

Create the primary administrator account:

  • Superuser Name: Admin username for Coginiti Enterprise
  • Superuser Password: Strong password (minimum 8 characters)

Database Configuration

Configure the RDS PostgreSQL connection:

  • Database Type: PostgreSQL
  • Host: RDS endpoint (e.g., coginiti-postgres.cluster-xyz.us-west-2.rds.amazonaws.com)
  • Port: 5432
  • Database Name: coginiti_db
  • Username: postgres
  • Password: RDS master password

Encryption Configuration

Choose "Generate Encryption key and JWT secret" for new installations.

Critical Security Information

SAVE YOUR ENCRYPTION KEY AND JWT SECRET securely. These are essential for:

  • System reinstallation
  • Data recovery from backups
  • Infrastructure migration

Without these keys, encrypted data cannot be recovered.
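
One option for safekeeping is AWS Secrets Manager; this is an illustrative sketch only (the secret names are arbitrary, and the installer does not require it):

aws secretsmanager create-secret \
--name coginiti/enterprise/encryption-key \
--secret-string '{ENCRYPTION_KEY}'

aws secretsmanager create-secret \
--name coginiti/enterprise/jwt-secret \
--secret-string '{JWT_SECRET}'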

4. AWS-Specific Configuration

Load Balancer Configuration

Configure the Application Load Balancer:

  • Ingress Class: alb
  • Scheme: internal or internet-facing based on requirements
  • Target Type: instance
  • SSL Certificate: Use the ACM certificate ARN

Domain Configuration

  • Hostname: Your Coginiti domain (e.g., coginiti.example.com)
  • SSL Certificate: ACM certificate ARN
  • Route53 Zone: Your hosted zone ID

Storage Configuration

  • Storage Class: gp3 (a sample StorageClass manifest is sketched after this list)
  • Volume Size: Appropriate size based on data requirements
  • Backup Strategy: EBS snapshots and RDS automated backups
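
EKS clusters do not define a gp3 StorageClass by default (the built-in gp2 class uses the in-tree provisioner). If yours is missing, a minimal StorageClass backed by the EBS CSI driver looks like this sketch:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF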

5. AWS Load Balancer Ingress

Create an ingress resource for Coginiti Enterprise:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coginiti-enterprise-ingress
  namespace: coginiti-enterprise
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:{ACCOUNT_ID}:certificate/{CERT_ID}
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
    - host: coginiti.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: coginiti-enterprise-service
                port:
                  number: 80
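
Apply the manifest and confirm the ALB address is populated (assuming the file is saved as coginiti-ingress.yaml; the backend service name may differ in your installation):

kubectl apply -f coginiti-ingress.yaml
kubectl get ingress coginiti-enterprise-ingress -n coginiti-enterprise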

6. Configure Route53 DNS

Get the ALB DNS name:

kubectl get ingress coginiti-enterprise-ingress -n coginiti-enterprise

Create Route53 record:

aws route53 change-resource-record-sets \
--hosted-zone-id Z123456789 \
--change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "coginiti.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "k8s-default-cognitie-xyz-123456789.us-west-2.elb.amazonaws.com"}]
    }
  }]
}'
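
Once the change propagates, confirm the record resolves to the ALB hostname:

dig +short coginiti.example.com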

7. Preflight Checks and Deployment

The admin console will validate your configuration:

  • ✅ Required Kubernetes Version
  • ✅ Minimum 3 nodes in cluster
  • ✅ Container Runtime
  • ✅ Internet connectivity for image pulls
  • ✅ Required storage classes
  • ✅ CPU and memory requirements
  • ✅ PostgreSQL connectivity and extensions
  • ✅ Load balancer configuration

Click "Deploy" when preflight checks pass.

Post-Installation Verification

1. Verify EKS Cluster Status

kubectl get nodes
kubectl get pods -n coginiti-enterprise
kubectl get svc -n coginiti-enterprise

2. Check Application Load Balancer

aws elbv2 describe-load-balancers --names k8s-default-cognitie-xyz

3. Verify RDS Connectivity

kubectl run pg-test --image=postgres:15 --rm -it --restart=Never \
-- psql -h {RDS_ENDPOINT} -U postgres -d coginiti_db -c "SELECT version();"

4. Access Coginiti Enterprise

Navigate to your configured domain (e.g., https://coginiti.example.com) and log in with the superuser credentials.

5. Verify License and Configuration

  • Confirm license information is correct
  • Test basic functionality with sample queries
  • Verify user access and permissions

AWS-Specific Operations

Scaling the Cluster

# Scale node group
eksctl scale nodegroup --cluster=coginiti-enterprise --name=coginiti-nodes --nodes=6

# Enable cluster autoscaler
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
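
Note that the autodiscover manifest ships with a placeholder cluster name in its --node-group-auto-discovery flag; you will likely need to edit the deployment so it targets this cluster:

# Set the cluster name in the --node-group-auto-discovery flag to coginiti-enterprise
kubectl -n kube-system edit deployment.apps/cluster-autoscaler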

Monitoring and Logging

# Enable Container Insights
aws logs create-log-group --log-group-name /aws/containerinsights/coginiti-enterprise/cluster

# Create the amazon-cloudwatch namespace (first step of the CloudWatch agent installation)
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml

Backup and Recovery

# Create EBS snapshot
aws ec2 create-snapshot --volume-id vol-12345678 --description "Coginiti Enterprise backup"

# Create RDS snapshot
aws rds create-db-snapshot --db-instance-identifier coginiti-postgres --db-snapshot-identifier coginiti-backup-$(date +%Y%m%d)
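
If you later need to restore, an RDS snapshot is restored to a new instance rather than in place (identifiers below are illustrative):

aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier coginiti-postgres-restored \
--db-snapshot-identifier coginiti-backup-20250101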

Troubleshooting

Common Issues

EKS Node Group Issues

Symptoms: Nodes not joining cluster or in NotReady state

Solutions:

  1. Check IAM roles and policies for node group
  2. Verify security group rules allow node communication
  3. Check subnet routing and NAT Gateway configuration
    kubectl describe nodes
    aws eks describe-nodegroup --cluster-name coginiti-enterprise --nodegroup-name coginiti-nodes

Load Balancer Issues

Symptoms: ALB not created or not routing traffic

Solutions:

  1. Verify ALB controller is running:
    kubectl get pods -n kube-system | grep aws-load-balancer-controller
  2. Check ingress annotations and service configuration
  3. Verify ALB controller IAM permissions

RDS Connectivity Issues

Symptoms: Cannot connect to PostgreSQL from EKS pods

Solutions:

  1. Verify security group rules allow port 5432 from EKS worker nodes
  2. Check RDS subnet group configuration
  3. Test network connectivity:
    kubectl run network-test --image=busybox --rm -it --restart=Never \
    -- nslookup {RDS_ENDPOINT}

ECR Authentication Issues

Symptoms: ImagePullBackOff errors for Coginiti container images

Solutions:

  1. Refresh ECR login token:
    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin {ECR_URI}
  2. Update Kubernetes secret:
    kubectl delete secret coginiti-ecr-secret -n coginiti-enterprise
    kubectl create secret docker-registry coginiti-ecr-secret --docker-server={ECR_URI} --docker-username=AWS --docker-password=$(aws ecr get-login-password --region us-east-1) -n coginiti-enterprise

Performance Optimization

EKS Cluster Optimization

# Prevent the cluster autoscaler's own pod from being evicted
kubectl patch deployment cluster-autoscaler -n kube-system \
-p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"false"}}}}}'

# Configure pod disruption budgets
kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: coginiti-pdb
namespace: coginiti-enterprise
spec:
minAvailable: 2
selector:
matchLabels:
app: coginiti-enterprise
EOF

RDS Performance Tuning

# Enable Performance Insights
aws rds modify-db-instance \
--db-instance-identifier coginiti-postgres \
--enable-performance-insights \
--performance-insights-retention-period 7

Getting Help

For installation support:

  • Primary Contact: Your Coginiti Account Manager
  • Technical Support: support@coginiti.co
  • AWS Support: For AWS-specific infrastructure issues

When contacting support, provide:

  • AWS account ID and region
  • EKS cluster configuration
  • RDS instance details
  • Complete error messages and logs
  • Network topology and security group configurations

Summary

You have successfully installed Coginiti Enterprise on Amazon EKS! Key achievements:

  • ✅ EKS Cluster: Managed Kubernetes cluster with worker nodes and networking
  • ✅ RDS PostgreSQL: Managed database with pgvector extension
  • ✅ Application Load Balancer: Internet-facing load balancer with SSL termination
  • ✅ DNS and SSL: Route53 DNS with ACM certificate management
  • ✅ Container Images: Successfully pulling from ECR registry
  • ✅ Security: Encryption keys generated and AWS security best practices applied
  • ✅ Monitoring: CloudWatch integration for logging and metrics
  • ✅ Scalability: Auto-scaling enabled for production workloads

Your Coginiti Enterprise instance is now ready for production use on AWS with enterprise-grade reliability, security, and scalability.