Installing Coginiti Enterprise on GKE

This guide provides comprehensive instructions for installing Coginiti Enterprise on Google Kubernetes Engine (GKE) using Coginiti's public container registry on AWS. This deployment method is suitable for environments with internet connectivity.

Overview

The deployment process consists of three key stages:

  1. Configuring GCP project and required tools
  2. Provisioning GCP resources within the project
  3. Installing Coginiti Enterprise on the GKE cluster using public container images

Enterprise Deployment Options

This guide covers standard Enterprise deployment using public container registries. For air-gapped environments, see the Air-Gapped GKE Installation Guide.

Prerequisites

GCP Account and Project

  • Valid Google Cloud Platform account
  • New GCP project or existing project designated for Coginiti Enterprise
  • Required IAM roles assigned to your account

Network Connectivity

  • Internet access for pulling container images from Coginiti's public AWS registry
  • Ability to access external package repositories and documentation

Required IAM Roles

Ensure your account has the following IAM roles in the GCP project:

  • Compute Network Admin (compute.networkAdmin)
  • DNS Administrator (dns.admin)
  • Certificate Manager Editor (certificatemanager.editor)
  • Kubernetes Engine Cluster Admin (container.clusterAdmin)
  • Cloud SQL Editor (cloudsql.editor)

Installing Required Tools

Install the following software on your local machine:

1. Google Cloud CLI

Download and install the Google Cloud CLI from the official documentation. It is required to interact with GCP services from the command line.
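
After installing, you can verify the CLI and point it at your project:

gcloud version
gcloud auth login
gcloud config set project {PROJECT_NAME}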

2. Kubernetes Command-Line Tool (kubectl)

Follow the installation instructions for your operating system in the kubectl installation guide.
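
Verify your kubectl installation:

kubectl version --client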

3. KOTS CLI

Install the KOTS kubectl plugin following the KOTS CLI Getting Started guide.

Verify your KOTS installation:

kubectl kots version

Provisioning GCP Resources

Required Infrastructure Components

You will need to provision the following GCP resources:

Networking

  • VPC network - Custom VPC for the deployment
  • External Static IP address - Used by External Application Load Balancer and DNS

Compute and Storage

  • Private VPC-native GKE cluster - Kubernetes cluster for Coginiti Enterprise
  • PostgreSQL server - Database backend with the pgvector extension

Security and DNS

  • DNS zone and DNS name - Domain configuration for Coginiti Enterprise
  • Self-Managed SSL Certificate - For HTTPS communication

GKE Cluster Requirements

  • Kubernetes Version: 1.27 or higher
  • Node Configuration: Minimum 3 nodes of n2-standard-8 type
  • Storage: pd-balanced persistent disk, minimum 256 GB per node
  • Required Addons: HorizontalPodAutoscaling, HttpLoadBalancing, GcePersistentDiskCsiDriver
  • Internet Access: Nodes must be able to pull container images from external registries
  • Gateway API: Gateway API must be enabled on the cluster, either in the GCP console or via the CLI:
gcloud container clusters update CLUSTER_NAME \
--location=CLUSTER_LOCATION \
--gateway-api=standard

PostgreSQL Requirements

  • Version: PostgreSQL 15 or 16
  • Extensions: the pgvector extension must be installed
  • Connectivity: Accessible from the GKE cluster
  • Database: Dedicated database for Coginiti application

Step-by-Step Infrastructure Setup

1. Create VPC Network

gcloud compute networks create coginiti-net \
--subnet-mode custom

2. Create Private Subnet with NAT Gateway Access

gcloud compute networks subnets create coginiti-cluster-snet \
--network coginiti-net \
--range 192.168.0.0/28 \
--secondary-range my-pods=10.1.0.0/20,my-services=10.1.16.0/24 \
--enable-private-ip-google-access \
--region={REGION}

Replace {REGION} with your chosen Google Cloud region.

3. Create NAT Gateway for Internet Access

Private GKE nodes need internet access to pull container images from public registries.

Create a Cloud Router:

gcloud compute routers create coginiti-router \
--network coginiti-net \
--region={REGION}

Create a NAT Gateway:

gcloud compute routers nats create coginiti-nat \
--router coginiti-router \
--region={REGION} \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges

4. Create External Static IP Address

gcloud compute addresses create coginiti-external-ip \
--global

This creates a global external IP address that will be used by the external Application Load Balancer.

5. Get External Static IP Address

gcloud compute addresses describe coginiti-external-ip \
--global \
--format="get(address)"

Save this IP address - you'll need it for DNS configuration.
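
If you plan to script the remaining steps, you can capture the address in a shell variable (optional):

EXTERNAL_STATIC_IP=$(gcloud compute addresses describe coginiti-external-ip \
--global \
--format="get(address)")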

6. Configure Public DNS

Create an A record in your public DNS provider (e.g., Cloud DNS, Route 53, Cloudflare) pointing your domain to the external IP address.

Domain Ownership and DNS Configuration

You must own the domain name and have access to configure its DNS records. The specific steps depend on your DNS provider:

  • Google Cloud DNS: Create a public managed zone and add an A record
  • Route 53: Create a hosted zone and add an A record
  • Cloudflare: Add an A record in your domain's DNS settings
  • Other providers: Follow their documentation to create an A record

Example for Google Cloud DNS:

# Create public DNS zone
gcloud dns managed-zones create coginiti-public \
--dns-name example.com. \
--description="Public DNS Zone for Coginiti Enterprise" \
--visibility public

# Add A record
gcloud dns record-sets create coginiti.example.com. \
--zone=coginiti-public \
--type=A \
--ttl=300 \
--rrdatas={EXTERNAL_STATIC_IP}

Replace example.com with your actual domain and {EXTERNAL_STATIC_IP} with the IP from step 5.
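
Once the record has propagated, confirm that the name resolves to your static IP:

dig +short coginiti.example.com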

7. Configure SSL Certificate

For internet-facing deployments, you can use Google-managed SSL certificates, which are automatically provisioned and renewed:

gcloud compute ssl-certificates create coginiti-certificate \
--domains=coginiti.example.com \
--global

Replace coginiti.example.com with your actual domain.

SSL Certificate Provisioning

Google-managed certificates can take 15-20 minutes, and sometimes longer, to provision. The certificate status must be ACTIVE before the application is accessible via HTTPS.

Check certificate status:

gcloud compute ssl-certificates describe coginiti-certificate \
--global \
--format="get(managed.status)"

Alternative: Self-Managed Certificate

If you prefer to use your own SSL certificate:

gcloud compute ssl-certificates create coginiti-certificate \
--certificate={PATH_TO_CERTIFICATE_FILE} \
--private-key={PATH_TO_PRIVATE_KEY_FILE} \
--global

Deploy PostgreSQL Server

1. Allocate IP Address Range for VPC Peering

gcloud compute addresses create google-managed-services-coginiti-net \
--purpose=VPC_PEERING \
--addresses=192.168.3.0 \
--prefix-length=24 \
--network=projects/{PROJECT_NAME}/global/networks/coginiti-net \
--global

2. Create Private Connection

gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-coginiti-net \
--network=coginiti-net \
--project={PROJECT_NAME}

3. Create PostgreSQL Instance

gcloud sql instances create coginiti-pg-server \
--database-version=POSTGRES_15 \
--cpu=2 \
--memory=7680MB \
--network=coginiti-net \
--no-assign-ip \
--region={REGION}

4. Set PostgreSQL User Password

gcloud sql users set-password postgres \
--instance=coginiti-pg-server \
--password={YOUR_PASSWORD}

5. Create Coginiti Database

Connect to the PostgreSQL server and run:

CREATE DATABASE coginiti_db;

6. Install pgvector Extension

Connect to the Coginiti database and run:

CREATE EXTENSION vector;
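
Because the instance has no public IP, both statements must be run from inside the VPC - for example from a VM on coginiti-net or, once the GKE cluster from the next section exists, from a temporary pod. A sketch of the pod-based approach, reusing the placeholders used elsewhere in this guide:

kubectl run pg-setup --image=postgres:15 --rm -it --restart=Never \
--env=PGPASSWORD={YOUR_PASSWORD} \
--command -- sh -c 'psql -h {PG_IP} -U postgres -c "CREATE DATABASE coginiti_db;" && psql -h {PG_IP} -U postgres -d coginiti_db -c "CREATE EXTENSION vector;"'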

Deploy GKE Cluster

1. Create GKE Cluster with Internet Access

gcloud container clusters create coginiti-private-cluster \
--location {REGION} \
--network coginiti-net \
--subnetwork coginiti-cluster-snet \
--cluster-secondary-range-name my-pods \
--services-secondary-range-name my-services \
--enable-private-nodes \
--enable-ip-alias \
--master-ipv4-cidr 192.168.0.32/28 \
--machine-type "n2-standard-8" \
--num-nodes "3" \
--enable-autoscaling \
--max-nodes "6" \
--min-nodes "3"

2. Enable Authorized Networks for Management Access

gcloud container clusters update coginiti-private-cluster \
--location {REGION} \
--enable-master-authorized-networks \
--master-authorized-networks $(curl -s ifconfig.me)/32

3. Get Cluster Credentials

gcloud container clusters get-credentials coginiti-private-cluster \
--location {REGION}

4. Verify Cluster Connectivity

kubectl get nodes
kubectl get pods --all-namespaces

Install Coginiti Enterprise

1. Obtain License and Registry Access

Contact your Coginiti Account Manager to obtain:

  • License file (.yml format)
  • Registry credentials for Coginiti's public AWS ECR
  • Installation command specific to your deployment

2. Install Coginiti Enterprise using KOTS

kubectl kots install coginiti-premium/enterprise-release \
--namespace coginiti-enterprise \
--shared-password {ADMIN_CONSOLE_PASSWORD}

The installation process will:

  • Create the namespace
  • Deploy the KOTS Admin Console
  • Pull container images from Coginiti's public registry
  • Set up initial configuration

3. Access Admin Console

The installer will display the admin console URL and port-forwarding information:

Admin Console: http://localhost:8800

Keep the terminal session open for port-forwarding, or restore access with:

kubectl kots admin-console --namespace coginiti-enterprise

Configure Coginiti Enterprise

1. Admin Console Login

Navigate to http://localhost:8800 and enter your admin console password.

2. Upload License File

Upload the license file provided by your Coginiti Account Manager.

3. Configure Registry Access

Since you're using public registries, the default configuration should work. Verify:

  • Registry Authentication: Configured during installation
  • Image Pull Policy: Set to pull from public registry
  • Network Access: Confirmed via NAT Gateway

4. Configure Superuser Account

Create the primary administrator account:

  • Superuser Name: Admin username for Coginiti Enterprise
  • Superuser Password: Strong password (minimum 8 characters)

5. Database Configuration

Get the PostgreSQL server IP address:

gcloud sql instances describe coginiti-pg-server
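
To extract just the IP address (assuming the instance has a single, private address), you can filter the output:

gcloud sql instances describe coginiti-pg-server \
--format="value(ipAddresses[0].ipAddress)"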

Configure database connection:

  • Postgres Host Type: Select "Postgres IP Address"
  • Postgres Instance IP Address: Enter the PostgreSQL server IP
  • Database Name: coginiti_db
  • Username: postgres
  • Password: Password set during PostgreSQL configuration

6. Encryption Configuration

Choose "Generate Encryption key and JWT secret" for new installations.

Critical Security Information

SAVE YOUR ENCRYPTION KEY AND JWT SECRET securely. These are required for:

  • System reinstallation
  • Data recovery from backups
  • Infrastructure migration

Without these keys, encrypted data cannot be recovered.

7. Domain and SSL Configuration

Choose whether to use a cloud provider-managed ingress (gateway) or a customer-managed ingress.

Option 1: Cloud Provider-Managed Ingress (Default)

This is the recommended approach for most deployments:

  • Ingress Hostname: Your Coginiti domain (e.g., coginiti.example.com)
  • GCP Managed Certificate Name: coginiti-certificate
  • Global Static External IP Address Name: coginiti-external-ip
  • Storage Class: standard-rwo

External Load Balancer

Since this is an internet-facing deployment, the application will use an external Application Load Balancer. Ensure you've configured:

  • External static IP address
  • Public DNS record pointing to the external IP
  • SSL certificate (Google-managed or self-managed)

Option 2: Customer-Managed Ingress (Advanced)

Customer-managed gateways are outside the scope of this guide; the Istio gateway configuration below is provided for reference.

Gateway Declaration

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: coginiti-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - coginiti.example.com
    tls:
      mode: SIMPLE
      credentialName: tls-secret # Kubernetes secret with the TLS cert/key

DestinationRule Configuration

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: coginiti-enterprise-destination
spec:
  host: coginiti-enterprise-service
  trafficPolicy:
    # Session affinity - cookie-based, so a client sticks to the same pod
    loadBalancer:
      consistentHash:
        httpCookie:
          name: ISTIO_AFFINITY
          ttl: 86400s # 24 hours - cookie lifetime
          path: /     # Cookie valid for all paths
    # Connection pool settings for long-running sessions
    connectionPool:
      tcp:
        connectTimeout: 30s
        tcpKeepalive:
          time: 7200s   # 2 hours of idle before the first keepalive probe
          interval: 75s # Retry interval if a probe fails
          probes: 9     # Failed probes before the connection is considered dead
      http:
        http1MaxPendingRequests: 1024
        http2MaxRequests: 1024
        maxRequestsPerConnection: 0 # 0 = unlimited, allow connection reuse
        maxRetries: 3
        idleTimeout: 24h # Very long idle timeout for long-running sessions
    outlierDetection:
      # Health-check-equivalent settings
      consecutive5xxErrors: 3     # Eject a pod after 3 consecutive 5xx errors
      consecutiveGatewayErrors: 2 # Be less tolerant of gateway errors
      interval: 10s               # Interval between ejection analysis sweeps
      baseEjectionTime: 60s       # Keep a pod ejected for at least 60s
      maxEjectionPercent: 50      # Never eject more than 50% of pods
      minHealthPercent: 25        # Require at least 25% healthy pods

VirtualService Configuration

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: coginiti-app-route
spec:
  hosts:
  - coginiti.example.com
  gateways:
  - coginiti-gateway
  http:
  # gRPC service route - must come first for more specific matching
  - match:
    - port: 443
      uri:
        prefix: /arrow.flight.protocol.FlightService
    route:
    - destination:
        host: coginiti-enterprise-service
        port:
          number: 50550
    retries:
      attempts: 0 # Don't retry streaming requests
  # Default HTTP route for all other HTTPS traffic
  - match:
    - port: 443
    route:
    - destination:
        host: coginiti-enterprise-service
        port:
          number: 8080
    retries:
      attempts: 3 # Retry failed requests up to 3 times
      retryOn: 5xx,reset,connect-failure,refused-stream

8. Performance Configuration (Optional)

Configure performance settings based on your requirements:

  • Resource Limits: CPU and memory limits for pods
  • Horizontal Pod Autoscaling: Enable automatic scaling
  • Storage Configuration: Persistent volume settings

9. Preflight Checks

The admin console will validate your configuration:

  • ✅ Required Kubernetes Version
  • ✅ Minimum 3 nodes in cluster
  • ✅ Container Runtime
  • ✅ Internet connectivity for image pulls
  • ✅ Required storage classes
  • ✅ CPU cores requirement (6 or greater)
  • ✅ PostgreSQL 15.x or later
  • ✅ Registry access confirmed
  • ✅ External IP address configured
  • ✅ Public DNS record resolves correctly

10. Deploy Application

Click "Deploy" when preflight checks pass. The deployment will:

  • Pull container images from Coginiti's public AWS registry
  • Deploy all application components
  • Configure load balancing and networking
  • Initialize the application database

Monitor the deployment progress in the admin console.

Post-Installation Verification

1. Verify Container Image Sources

Check that pods are using images from Coginiti's public registry:

kubectl get pods -n coginiti-enterprise -o wide
kubectl describe pod {POD_NAME} -n coginiti-enterprise | grep Image

2. Check Network Connectivity

Verify NAT Gateway is providing internet access:

kubectl run test-connectivity --image=busybox --rm -it --restart=Never \
--command -- nslookup google.com

3. Access Coginiti Enterprise

Navigate to your configured domain (e.g., https://coginiti.example.com) from any internet-connected device and log in with the superuser credentials.

Public Access

Your Coginiti Enterprise instance is now accessible from the public internet via HTTPS. Ensure you have proper authentication and security measures in place.
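
A quick end-to-end HTTPS check from any client (replace the hostname with yours):

curl -sI https://coginiti.example.com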

4. Verify License Application

The Coginiti Enterprise license should be automatically applied. Verify:

  • License information appears correctly
  • User limits match your license agreement
  • Expiration date is accurate

5. Initial Configuration

Complete these setup tasks:

  • Configure user accounts and permissions
  • Set up database connections to your data sources
  • Configure authentication (SSO, LDAP if required)
  • Test basic functionality with sample queries

Monitoring and Maintenance

Update Management

Since you're using public registries, updates are simplified:

  1. Check for Updates: KOTS will automatically check for new versions
  2. Review Changes: Use the admin console to review update details
  3. Deploy Updates: Updates pull new container images from the public registry
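
You can also check for and deploy new versions from the CLI, using the same app slug as the install command:

kubectl kots upstream upgrade coginiti-premium/enterprise-release \
--namespace coginiti-enterprise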

Monitoring Registry Access

Monitor container image pull status:

kubectl get events --sort-by=.metadata.creationTimestamp -n coginiti-enterprise

Look for any ImagePullBackOff or registry authentication issues.

Troubleshooting

Common Issues

Container Image Pull Failures

Symptoms: Pods stuck in ImagePullBackOff state

Solutions:

  1. Verify NAT Gateway configuration and internet connectivity
  2. Check AWS ECR authentication:
    kubectl get secret -n coginiti-enterprise
    kubectl describe secret {ECR_SECRET_NAME} -n coginiti-enterprise
  3. Test registry access from within cluster:
    kubectl run registry-test --image=busybox --rm -it --restart=Never \
    --command -- nslookup {COGINITI_ECR_REGISTRY}

NAT Gateway Issues

Symptoms: No internet access from private nodes

Solutions:

  1. Verify NAT Gateway configuration:
    gcloud compute routers get-status coginiti-router --region={REGION}
  2. Check subnet configuration for private Google access
  3. Verify firewall rules allow egress traffic

PostgreSQL Connection Issues

Symptoms: Database connectivity errors during configuration

Solutions:

  1. Verify PostgreSQL instance is running
  2. Check VPC peering configuration
  3. Test connectivity from GKE cluster:
    kubectl run pg-test --image=postgres:15 --rm -it --restart=Never \
    --env=PGPASSWORD={YOUR_PASSWORD} \
    --command -- psql -h {PG_IP} -U postgres -d coginiti_db -c "SELECT 1;"

SSL Certificate Problems

Symptoms: Certificate validation errors or HTTPS issues

Solutions:

  1. Verify certificate matches your domain
  2. Check certificate installation:
    gcloud compute ssl-certificates describe coginiti-certificate
  3. Validate the certificate chain and expiration (one client-side check is shown below)
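
One way to inspect the served certificate chain and expiry dates from a client (replace the hostname with yours):

echo | openssl s_client -connect coginiti.example.com:443 \
-servername coginiti.example.com 2>/dev/null | openssl x509 -noout -subject -dates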

Performance Optimization

Resource Scaling

Monitor and adjust cluster resources:

kubectl top nodes
kubectl top pods -n coginiti-enterprise

Enable horizontal pod autoscaling:

kubectl autoscale deployment coginiti-app -n coginiti-enterprise \
--cpu-percent=70 --min=2 --max=10

Storage Performance

For improved I/O performance, use SSD-backed persistent disks. StorageClass parameters are immutable, so patching standard-rwo in place will fail; instead, use GKE's built-in premium-rwo class (backed by pd-ssd) or create a dedicated class as sketched below:

kubectl get storageclass premium-rwo
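
If you prefer a dedicated class, here is a minimal sketch for the GKE CSI driver (the name coginiti-ssd is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: coginiti-ssd          # illustrative name - pick your own
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                # SSD-backed persistent disk
volumeBindingMode: WaitForFirstConsumer

Apply it with kubectl apply -f and select it as the Storage Class in the admin console configuration.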

Getting Help

For installation support:

  • Primary Contact: Your Coginiti Account Manager
  • Technical Support: support@coginiti.co
  • Documentation: Latest installation guides and troubleshooting resources

When contacting support, provide:

  • GCP project details and region
  • Kubernetes cluster configuration
  • Container registry access logs
  • Complete error messages and logs
  • Network topology and connectivity tests

Summary

You have successfully installed Coginiti Enterprise on GKE as an internet-facing deployment using public container registries! Key achievements:

  • ✅ Infrastructure: GCP resources provisioned with internet connectivity via NAT Gateway
  • ✅ Database: PostgreSQL with the pgvector extension configured
  • ✅ Kubernetes: Private GKE cluster with external registry access
  • ✅ Container Images: Successfully pulling from Coginiti's public AWS ECR
  • ✅ Application: Coginiti Enterprise deployed and configured
  • ✅ Public Access: External Application Load Balancer with public DNS and SSL certificate
  • ✅ Security: Encryption keys generated and HTTPS encryption configured
  • ✅ Scalability: Auto-scaling enabled for production workloads

Your Coginiti Enterprise instance is now accessible from the public internet via HTTPS and ready for production use with simplified update management through public container registries.