Configuring AI Assistant

This guide explains how to set up and configure Coginiti's AI Assistant, which acts as an intelligent agent to help users generate data cleansing, transformation, and analysis code. The AI Assistant can also explain queries, optimize performance, and help fix errors across multiple generative AI platforms.

Overview

Coginiti's AI Assistant provides intelligent code generation and analysis capabilities powered by leading AI models. The assistant integrates seamlessly with your database environment to provide context-aware suggestions and solutions.

AI Assistant Capabilities

Code Generation:

  • Data cleansing and transformation scripts
  • Complex analytical queries and calculations
  • ETL pipeline code generation
  • Custom function and macro creation

Query Assistance:

  • Query explanation in natural language
  • Performance optimization recommendations
  • Error diagnosis and fixing suggestions
  • Best practice guidance and code review

Database Integration:

  • Context-aware suggestions using database metadata
  • Table and column name recognition
  • Data type and constraint awareness
  • Schema-specific optimization recommendations

Supported AI Providers

  • Azure OpenAI - Microsoft's managed OpenAI service
  • Amazon Bedrock - AWS managed AI service with Anthropic models
  • Google Vertex AI - Google Cloud's managed AI platform with multiple model providers
  • OpenAI - Direct integration with OpenAI's GPT models
  • Anthropic - Direct integration with Claude models

Prerequisites

Administrative Requirements

  • Administrator access to Coginiti Team or Enterprise
  • Valid subscription with chosen AI provider
  • API credentials from your selected AI provider
  • Network connectivity to AI provider endpoints

AI Provider Requirements

Azure OpenAI

  • Azure subscription with OpenAI service approval
  • Azure OpenAI resource created and deployed
  • Model deployment completed in Azure OpenAI Studio

Amazon Bedrock

  • AWS account with Bedrock access permissions
  • Model access granted through AWS console
  • IAM credentials with appropriate Bedrock permissions

OpenAI

  • Paid OpenAI account (free tier not supported for API access)
  • API key generated from OpenAI platform
  • Usage limits configured appropriately

Google Vertex AI

  • Google Cloud project with Vertex AI API enabled
  • Service account with appropriate Vertex AI permissions
  • Model access configured in Google Cloud Console
  • JSON service account key or Application Default Credentials

Anthropic

  • Paid Anthropic account with developer role
  • API key generated from the Anthropic Console
  • Rate limits understood and configured

Enabling AI Assistant

Step 1: Access AI Assistant Configuration

  1. Navigate to Admin Settings by clicking your profile in the upper right corner
  2. Select "Global Preferences" from the admin menu
  3. Choose "AI Assistant" from the global preferences options

Step 2: Enable AI Assistant

  1. Check the "Enable AI Assistant" checkbox to activate the feature
  2. Choose your preferred generative AI model from the dropdown menu
  3. Select your AI provider (Azure OpenAI, Amazon Bedrock, Google Vertex AI, OpenAI, or Anthropic)

Step 3: Configure Database Metadata Sharing

  1. Check "Share database metadata with AI Assistant" for enhanced accuracy
  2. This enables context-aware code generation using your database schema
  3. Review privacy implications of sharing metadata with AI providers

Metadata Sharing Benefits

Enabling database metadata sharing significantly improves AI-generated code accuracy by providing table names, column types, relationships, and constraints to the AI model for context-aware suggestions.

Step 4: Apply Configuration

  1. Click "Apply" to enable AI Assistant for all users in your organization
  2. Test the connection to ensure proper configuration
  3. Verify AI Assistant availability in the user interface

Azure OpenAI Configuration

Microsoft's Azure OpenAI service provides enterprise-grade AI capabilities with enhanced security and compliance features.

Prerequisites for Azure OpenAI

Azure Setup Requirements:

  1. Azure subscription - Sign up for Azure
  2. Azure OpenAI service access - Request access to Azure OpenAI
  3. Resource creation completed in Azure portal
  4. Model deployment configured in Azure OpenAI Studio

Step 1: Create Azure OpenAI Resource

  1. Log into Azure Portal at portal.azure.com
  2. Create new resource → Search for "OpenAI"
  3. Select "Azure OpenAI" and click "Create"
  4. Configure resource settings:
    • Subscription: Choose your Azure subscription
    • Resource Group: Create new or select existing
    • Region: Select supported region (e.g., East US, West Europe)
    • Name: Choose unique resource name
    • Pricing Tier: Select appropriate tier for usage

Step 2: Deploy AI Model

  1. Navigate to Azure OpenAI Studio from your resource
  2. Go to "Management" → "Deployments"
  3. Click "Create new deployment"
  4. Configure deployment:
    • Model: Select model (e.g., gpt-4, gpt-3.5-turbo)
    • Deployment name: Choose descriptive name
    • Version: Select model version
    • Scale settings: Configure capacity units

Step 3: Retrieve Connection Information

Navigate to your Azure OpenAI resource and collect the following information:

Resource Name (Endpoint)

  1. Go to "Resource Management" → "Keys and Endpoints"
  2. Copy the "Endpoint" URL (e.g., https://your-resource.openai.azure.com/)
  3. Paste into "Resource Name" field in Coginiti

API Key

  1. In "Keys and Endpoints" section
  2. Copy "Key 1" or "Key 2" (either key works)
  3. Keep key secure and paste into "API Key" field in Coginiti

Deployment Name

  1. Go to Azure OpenAI Studio → "Management" → "Deployments"
  2. Find your deployment in the deployments list
  3. Copy the deployment name exactly as shown
  4. Enter in "Deployment Name" field in Coginiti

Example Azure OpenAI Configuration

Resource Name: https://coginiti-openai.openai.azure.com/
API Key: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
Deployment Name: gpt-4-deployment
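
These three values combine into the REST endpoint Coginiti calls behind the scenes. As an independent sanity check outside Coginiti, you can assemble and inspect that URL yourself. This is a minimal sketch; the api-version value is an assumption and changes over time, so check Azure's API reference for the version your deployment supports.

```python
# Assemble the Azure OpenAI chat-completions URL from the three values
# entered in Coginiti. The api-version default below is an assumption;
# verify the current value in the Azure OpenAI API reference.
def azure_openai_url(endpoint: str, deployment: str,
                     api_version: str = "2024-02-01") -> str:
    base = endpoint.rstrip("/")
    return (f"{base}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

url = azure_openai_url("https://coginiti-openai.openai.azure.com/",
                       "gpt-4-deployment")
print(url)
```

If the deployment name in the URL does not match a deployment listed in Azure OpenAI Studio exactly, requests will fail with a 404.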

Azure OpenAI Security Best Practices

Access Control

  • Use managed identities when possible instead of API keys
  • Implement IP restrictions in Azure networking settings
  • Enable Azure AD authentication for enhanced security
  • Monitor API usage through Azure monitoring tools

Cost Management

  • Set spending limits in Azure billing
  • Monitor token usage to avoid unexpected costs
  • Use appropriate model sizes for your use case
  • Implement rate limiting to control usage

Amazon Bedrock Configuration

Amazon Bedrock provides access to foundation models from leading AI companies including Anthropic's Claude models.

Prerequisites for Amazon Bedrock

AWS Setup Requirements:

  1. AWS account with Bedrock service access
  2. Model access granted for desired AI models
  3. IAM credentials with appropriate permissions
  4. Region selection where Bedrock is available

Step 1: Request Model Access

  1. Log into AWS Console and navigate to Amazon Bedrock
  2. Go to "Model access" in the left navigation
  3. Click "Request model access" for desired models
  4. Complete access request form and submit
  5. Wait for approval (typically within 24 hours)

Supported Models in Coginiti

  • Claude (Anthropic)
  • Claude 2.1 (Anthropic)
  • Claude Instant (Anthropic)

Step 2: Create IAM User and Permissions

Create an IAM user with appropriate Bedrock permissions:

IAM Policy for Bedrock Access

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude*"
    }
  ]
}
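
Before attaching the policy, it can help to lint it for the two invoke actions Coginiti needs. The check below is a local sketch only; it parses the JSON and never contacts AWS, so it catches copy-paste truncation but not account-level permission problems.

```python
import json

# Parse the Bedrock policy document and confirm both invoke actions
# survived the copy-paste. Local lint only; this does not call AWS.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude*"
    }
  ]
}
""")

actions = {a for stmt in policy["Statement"] for a in stmt["Action"]}
required = {"bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"}
print("policy OK" if required <= actions else f"missing: {required - actions}")
```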

Create IAM User

  1. Navigate to IAM → Users → Create user
  2. Enter username (e.g., "coginiti-bedrock-user")
  3. Attach policy created above
  4. Create user and generate access keys

Step 3: Configure Bedrock in Coginiti

Connection Settings

  • Access Key ID: IAM user access key ID
  • Secret Access Key: IAM user secret access key
  • Region: AWS region where Bedrock is available (e.g., us-east-1, us-west-2)
  • Model: Select from supported Anthropic models

Example Amazon Bedrock Configuration

Access Key ID: AKIAIOSFODNN7EXAMPLE
Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Region: us-east-1
Model: Claude 2.1
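
A quick shape check on the credentials can catch a truncated paste before you save the configuration. Long-lived IAM user access key IDs start with "AKIA" and are 20 characters, and secret keys are 40 characters; this sketch ignores temporary STS credentials ("ASIA..." keys), which also require a session token.

```python
import re

# Shape check for long-lived IAM user credentials before pasting them
# into Coginiti. Catches truncation, not invalid or revoked keys.
def looks_like_iam_credentials(access_key_id: str, secret_key: str) -> bool:
    return (re.fullmatch(r"AKIA[0-9A-Z]{16}", access_key_id) is not None
            and len(secret_key) == 40)

print(looks_like_iam_credentials(
    "AKIAIOSFODNN7EXAMPLE",
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # -> True
```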

Bedrock Security Best Practices

IAM Security

  • Use least privilege principle for IAM policies
  • Rotate access keys regularly (every 90 days)
  • Enable CloudTrail logging for API call auditing
  • Use IAM roles instead of users when possible

Cost Management

  • Monitor usage through AWS Cost Explorer
  • Set up billing alerts for unexpected usage
  • Understand pricing model for each foundation model
  • Use appropriate model sizes for your workload

Google Vertex AI Configuration

Google Vertex AI provides access to multiple AI models including PaLM, Gemini, and third-party models through Google Cloud's managed platform.

Prerequisites for Google Vertex AI

Google Cloud Setup Requirements:

  1. Google Cloud project with billing enabled
  2. Vertex AI API enabled in the project
  3. Service account with appropriate permissions
  4. Model access configured for desired models

Step 1: Enable Vertex AI API

  1. Log into Google Cloud Console at console.cloud.google.com
  2. Select your project or create a new one
  3. Navigate to "APIs & Services" → "Library"
  4. Search for "Vertex AI API" and click on it
  5. Click "Enable" to activate the API for your project

Step 2: Create Service Account

Create a service account with appropriate Vertex AI permissions:

Service Account Creation

  1. Navigate to "IAM & Admin" → "Service Accounts"
  2. Click "Create Service Account"
  3. Configure service account:
    • Name: Choose descriptive name (e.g., "coginiti-vertex-ai")
    • Description: "Service account for Coginiti AI Assistant"
    • Service account ID: Will be auto-generated

Assign Permissions

Assign the following roles to the service account:

  • Vertex AI User (roles/aiplatform.user)
  • Vertex AI Service Agent (roles/aiplatform.serviceAgent)

Generate Service Account Key

  1. Click on the created service account
  2. Go to "Keys" tab
  3. Click "Add Key" → "Create new key"
  4. Select "JSON" format
  5. Download and securely store the JSON key file
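
Before uploading the downloaded key file to Coginiti, it is worth confirming it is a complete service-account key rather than a truncated download. The sketch below checks for the fields every service-account key JSON carries; the sample key is an illustrative stub, not a real credential.

```python
import json

# Minimal sanity check of a service-account key JSON before uploading it.
# A valid key always carries these fields and type "service_account".
REQUIRED = {"type", "project_id", "private_key", "client_email"}

def check_key(key_json: str) -> dict:
    key = json.loads(key_json)
    missing = REQUIRED - key.keys()
    if missing:
        raise ValueError(f"key file missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError("not a service-account key")
    return key

# Illustrative stub only; a real key holds a full PEM private key.
sample = json.dumps({
    "type": "service_account",
    "project_id": "my-company-ai-project",
    "private_key": "-----BEGIN PRIVATE KEY----- (stub)",
    "client_email": "coginiti-vertex-ai@my-company-ai-project.iam.gserviceaccount.com",
})
print(check_key(sample)["project_id"])
```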

Step 3: Configure Model Access

Enable access to the AI models you want to use:

Available Models

  • Gemini Pro: Google's advanced language model
  • Gemini Pro Vision: Multimodal model with image understanding
  • PaLM 2: Google's foundation language model
  • Codey: Specialized model for code generation
  • Third-party models: Various models from other providers

Enable Model Access

  1. Navigate to Vertex AI → Model Garden
  2. Browse available models and select desired ones
  3. Click "Enable" or "Request Access" for each model
  4. Wait for approval if required (some models have access controls)

Step 4: Configure Vertex AI in Coginiti

Connection Settings

  • Project ID: Your Google Cloud project identifier
  • Location: Google Cloud region (e.g., us-central1, europe-west1)
  • Service Account Key: Upload the JSON key file or provide key content
  • Model: Select from available Vertex AI models

Example Google Vertex AI Configuration

Project ID: my-company-ai-project
Location: us-central1
Service Account Key: [Upload service-account-key.json]
Model: gemini-pro

Vertex AI Security Best Practices

Service Account Security

  • Use dedicated service accounts for Coginiti integration
  • Apply principle of least privilege to service account roles
  • Rotate service account keys regularly (quarterly recommended)
  • Monitor service account usage through Cloud Logging

Project Security

  • Enable Cloud Audit Logs for Vertex AI API calls
  • Configure VPC firewall rules if using private networks
  • Use Identity and Access Management for fine-grained control
  • Monitor API quotas and usage patterns

Cost Management

  • Set up billing alerts for unexpected usage
  • Monitor model usage through Cloud Monitoring
  • Use appropriate model sizes for your workload
  • Implement request quotas to control costs

Vertex AI Model Comparison

| Model             | Best For                   | Context Length | Cost   |
| ----------------- | -------------------------- | -------------- | ------ |
| Gemini Pro        | General tasks, reasoning   | 30k tokens     | Medium |
| Gemini Pro Vision | Multimodal, image analysis | 16k tokens     | Higher |
| PaLM 2            | Text generation, chat      | 8k tokens      | Lower  |
| Codey             | Code generation, completion| 6k tokens      | Medium |

OpenAI Configuration

Direct integration with OpenAI's API provides access to the latest GPT models and features.

Prerequisites for OpenAI

OpenAI Setup Requirements:

  1. Paid OpenAI account - Create OpenAI account
  2. API access enabled (requires payment method on file)
  3. Usage limits configured appropriately
  4. API key generated and secured

Step 1: Create OpenAI Account

  1. Visit OpenAI Platform
  2. Sign up for account or log into existing account
  3. Add payment method to enable API access
  4. Set usage limits to control spending

Step 2: Generate API Key

  1. Navigate to "API Keys" in your OpenAI account
  2. Click "Create new secret key"
  3. Enter descriptive name (e.g., "Coginiti Integration")
  4. Copy the API key immediately (it won't be shown again)
  5. Store key securely for use in Coginiti

API Key Security

OpenAI API keys provide full access to your account and billing. Store them securely and never share them publicly. Regenerate keys if compromised.

Step 3: Configure OpenAI in Coginiti

Connection Settings

  • API Key: Your OpenAI API secret key
  • Model: Select from available OpenAI models (GPT-4, GPT-3.5-turbo, etc.)

Example OpenAI Configuration

API Key: sk-proj-abcdef123456789...
Model: gpt-4
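
When the key has to appear anywhere outside Coginiti's encrypted configuration, keep it in an environment variable and mask it for display. This is a minimal sketch; OPENAI_API_KEY is the conventional variable name, and the key shown is a demo value only.

```python
import os

# Read the key from the environment (OPENAI_API_KEY is the conventional
# name) and mask it for display so logs never show the full value.
def masked_key(var: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(var, "")
    if len(key) < 12:
        return "<not set>"
    return f"{key[:7]}...{key[-4:]}"

os.environ["OPENAI_API_KEY"] = "sk-proj-abcdef123456789"  # demo value only
print(masked_key())  # -> sk-proj...6789
```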

Available OpenAI Models

GPT-4 Models

  • GPT-4: Most capable model, best for complex reasoning
  • GPT-4 Turbo: Faster and more cost-effective than GPT-4
  • GPT-4o: Optimized for speed and efficiency

GPT-3.5 Models

  • GPT-3.5-turbo: Good balance of capability and cost
  • GPT-3.5-turbo-16k: Extended context window for longer queries

OpenAI Security Best Practices

API Security

  • Regenerate API keys regularly
  • Monitor API usage for unusual activity
  • Set usage limits to prevent unexpected charges
  • Use environment variables to store API keys securely

Cost Management

  • Set spending limits in OpenAI dashboard
  • Monitor token usage and costs regularly
  • Choose appropriate models for your use case
  • Implement client-side rate limiting

Anthropic Configuration

Direct integration with Anthropic's Claude models provides advanced reasoning and analysis capabilities.

Prerequisites for Anthropic

Anthropic Setup Requirements:

  1. Paid Anthropic account - Create account at Claude.ai
  2. Developer role minimum required for API access
  3. API key generated from Anthropic console
  4. Usage limits understood and configured

Step 1: Create Anthropic Account

  1. Visit Claude.ai
  2. Sign up for account with email verification
  3. Upgrade to paid plan for API access
  4. Verify developer role in account settings

Step 2: Generate API Key

  1. Navigate to "API Keys" section in your profile
  2. Click "Create Key" button
  3. Enter descriptive name for the key
  4. Copy the generated API key immediately
  5. Store key securely for use in Coginiti

Developer Role Requirement

The minimum role required to access API keys is 'developer'. Ensure your account has appropriate permissions before attempting to generate keys.

Step 3: Configure Anthropic in Coginiti

Connection Settings

  • API Key: Your Anthropic API key
  • Model: Select from available Claude models

Example Anthropic Configuration

API Key: sk-ant-api03-abcdef123456789...
Model: Claude 3 Sonnet
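
Anthropic API keys carry the "sk-ant-" prefix, which makes it easy to guard against pasting an OpenAI key (prefixed "sk-" or "sk-proj-") into the wrong provider's field. A minimal sketch:

```python
# Anthropic API keys start with "sk-ant-"; a quick prefix guard catches
# a key pasted into the wrong provider's configuration field.
def is_anthropic_key(key: str) -> bool:
    return key.startswith("sk-ant-") and len(key) > 20

print(is_anthropic_key("sk-ant-api03-abcdef123456789"))  # -> True
print(is_anthropic_key("sk-proj-abcdef123456789"))       # -> False
```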

Available Anthropic Models

Claude Models

  • Claude 3 Opus: Most capable model for complex tasks
  • Claude 3 Sonnet: Balanced performance and speed
  • Claude 3 Haiku: Fastest and most cost-effective
  • Claude Instant: Quick responses for simple tasks

Anthropic Security Best Practices

API Security

  • Rotate API keys according to security policies
  • Monitor API usage through Anthropic dashboard
  • Implement rate limiting to prevent abuse
  • Use secure storage for API credentials

Usage Management

  • Understand rate limits for your account tier
  • Monitor conversation costs and token usage
  • Choose appropriate models for different use cases
  • Implement fallback mechanisms for rate limit scenarios

Database Metadata Integration

Enabling Metadata Sharing

When enabled, Coginiti shares database schema information with the AI Assistant to provide more accurate and contextual suggestions.

Metadata Types Shared

  • Table names and schema information
  • Column names and data types
  • Primary keys and foreign key relationships
  • Indexes and constraints
  • View definitions and materialized views

Benefits of Metadata Sharing

  • Context-aware code generation using actual table/column names
  • Data type appropriate suggestions and transformations
  • Relationship-aware join suggestions and queries
  • Constraint-aware data validation and cleaning code

Privacy Considerations

Data Shared

  • Schema metadata only - no actual data content
  • Table and column names - consider if names contain sensitive information
  • Database structure - relationships and constraints
  • No query results or actual data values

Privacy Best Practices

  • Review table/column naming for sensitive information
  • Use generic names for sensitive schemas when possible
  • Document metadata sharing in privacy policies
  • Consider separate schemas for sensitive data
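
The naming review above can be partially automated before metadata sharing is enabled. The sketch below is purely illustrative, not a Coginiti feature: the keyword list and schema names are assumptions you would replace with your own.

```python
# Flag table/column names that hint at sensitive content before enabling
# metadata sharing. Keyword list and schema sample are illustrative only.
SENSITIVE = {"ssn", "salary", "dob", "password", "diagnosis"}

def flag_sensitive(names):
    return sorted(n for n in names
                  if any(word in n.lower() for word in SENSITIVE))

schema_names = ["customers.email", "customers.ssn_hash",
                "orders.total", "hr.employee_salary"]
print(flag_sensitive(schema_names))
# -> ['customers.ssn_hash', 'hr.employee_salary']
```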

Using AI Assistant

Accessing AI Features

Once configured, users can access AI Assistant features through:

Keyboard Shortcuts

  • Mod+Alt+E: Explain selected query
  • Mod+Alt+O: Optimize selected query
  • Mod+Enter: Send message in AI chat
  • Mod+Shift+Z: Cancel AI request

Context Menus

  • Right-click on queries for AI options
  • Error message assistance for automatic fixing
  • Code generation prompts in editor

AI Chat Interface

  • Natural language queries for code generation
  • Interactive problem solving and guidance
  • Code review and optimization suggestions

Common AI Assistant Use Cases

Data Cleansing

User: "Generate code to clean customer data - remove duplicates, standardize phone numbers, and validate email addresses"

AI: [Generates appropriate SQL/CoginitiScript with table-specific column names]

Query Optimization

User: [Selects slow query and uses Mod+Alt+O]

AI: "This query can be optimized by adding an index on the date column and rewriting the subquery as a join..."

Error Resolution

System: [SQL error occurs]

AI: "This error is caused by a data type mismatch. Try converting the string to integer using CAST()..."

Troubleshooting AI Assistant

Common Configuration Issues

Connection Failures

Symptoms: AI Assistant fails to respond or shows connection errors

Solutions for Azure OpenAI:

  1. Verify endpoint URL format and accessibility
  2. Check API key validity and permissions
  3. Confirm deployment name matches Azure configuration
  4. Test network connectivity to Azure endpoints

Solutions for Amazon Bedrock:

  1. Verify model access has been granted
  2. Check IAM permissions for Bedrock service
  3. Confirm region availability for selected models
  4. Test AWS credentials with CLI or other tools

Solutions for Google Vertex AI:

  1. Verify service account permissions and key file validity
  2. Check Vertex AI API is enabled in the project
  3. Confirm model access has been granted in Model Garden
  4. Test Google Cloud credentials with gcloud CLI

Solutions for OpenAI:

  1. Verify API key is valid and not expired
  2. Check account billing and usage limits
  3. Confirm model availability for your account tier
  4. Test API access with curl or other tools

Solutions for Anthropic:

  1. Verify API key and developer role permissions
  2. Check account status and billing information
  3. Confirm model availability for your account
  4. Test rate limits and usage quotas

Performance Issues

Symptoms: Slow AI responses or timeouts

Solutions:

  1. Check network latency to AI provider endpoints
  2. Verify API rate limits aren't being exceeded
  3. Reduce query complexity for faster responses
  4. Monitor provider status pages for service issues

Accuracy Issues

Symptoms: AI provides incorrect or irrelevant suggestions

Solutions:

  1. Enable metadata sharing for better context
  2. Provide more specific prompts with examples
  3. Review database schema for clarity
  4. Consider different models for your use case

Monitoring and Maintenance

Usage Monitoring

  • Track API calls and token usage
  • Monitor costs across all AI providers
  • Review user feedback on AI suggestions
  • Analyze performance metrics and response times

Regular Maintenance

  • Rotate API keys according to security policies
  • Update model versions when available
  • Review and update permissions regularly
  • Test configurations after provider updates

Security and Compliance

Data Security

Data in Transit

  • TLS encryption for all API communications
  • API key authentication for secure access
  • Request/response logging for audit trails
  • Network security controls and monitoring

Data at Rest

  • No persistent storage of query content by default
  • Secure credential storage in Coginiti
  • Encrypted configuration data
  • Regular security updates and patches

Compliance Considerations

Data Governance

  • Document AI usage in data governance policies
  • Review data sharing agreements with AI providers
  • Implement user consent mechanisms if required
  • Maintain audit logs of AI interactions

Regulatory Compliance

  • GDPR considerations for EU users
  • HIPAA compliance for healthcare data
  • SOX requirements for financial data
  • Industry-specific regulations as applicable

Enterprise Security Features

Access Control

  • Admin-controlled enablement of AI features
  • Role-based access to AI capabilities
  • User permission management for AI features
  • Audit logging of AI usage and administration

Integration Security

  • Secure API key management with encryption
  • Network isolation options for sensitive environments
  • Proxy support for corporate networks
  • Certificate validation for secure connections

Best Practices

Configuration Best Practices

Model Selection

  • Choose appropriate models for your use case and budget
  • Test different models to find optimal performance
  • Consider cost vs. capability trade-offs
  • Monitor model performance and user satisfaction

Security Management

  • Implement key rotation schedules
  • Use least privilege access principles
  • Monitor usage patterns for anomalies
  • Regular security assessments of AI integrations

User Training

AI Assistant Capabilities

  • Train users on available AI features
  • Provide examples of effective prompts
  • Share best practices for AI interaction
  • Create usage guidelines for your organization

Privacy Awareness

  • Educate users about data sharing implications
  • Provide guidelines for sensitive data handling
  • Document policies for AI usage
  • Regular training updates on new features

Support and Resources

Getting Help

For AI Assistant configuration assistance:

  • Coginiti Support: support@coginiti.co
  • Provider Documentation: Consult AI provider official docs
  • Community Forums: User forums and knowledge base

Summary

You have successfully configured AI Assistant for Coginiti! Key achievements:

✅ Multi-Provider Support: Integration with Azure OpenAI, Amazon Bedrock, Google Vertex AI, OpenAI, and Anthropic
✅ Secure Configuration: Proper API key management and secure connections
✅ Database Integration: Metadata sharing for context-aware AI suggestions
✅ Enterprise Features: Organization-wide enablement and access control
✅ Security Implementation: Best practices for credential and data security
✅ User Enablement: AI-powered code generation, query optimization, and error resolution

Your Coginiti instance now provides intelligent AI assistance for data analysis, query optimization, and code generation, significantly enhancing user productivity and code quality.