Configuring AI Assistant
This guide explains how to set up and configure Coginiti's AI Assistant, which acts as an intelligent agent to help users generate data cleansing, transformation, and analysis code. The AI Assistant can also explain queries, optimize performance, and help fix errors across multiple generative AI platforms.
Overview
Coginiti's AI Assistant provides intelligent code generation and analysis capabilities powered by leading AI models. The assistant integrates seamlessly with your database environment to provide context-aware suggestions and solutions.
AI Assistant Capabilities
Code Generation:
- Data cleansing and transformation scripts
- Complex analytical queries and calculations
- ETL pipeline code generation
- Custom function and macro creation
Query Assistance:
- Query explanation in natural language
- Performance optimization recommendations
- Error diagnosis and fixing suggestions
- Best practice guidance and code review
Database Integration:
- Context-aware suggestions using database metadata
- Table and column name recognition
- Data type and constraint awareness
- Schema-specific optimization recommendations
Supported AI Providers
- Azure OpenAI - Microsoft's managed OpenAI service
- Amazon Bedrock - AWS managed AI service with Anthropic models
- Google Vertex AI - Google Cloud's managed AI platform with multiple model providers
- OpenAI - Direct integration with OpenAI's GPT models
- Anthropic - Direct integration with Claude models
Prerequisites
Administrative Requirements
- Administrator access to Coginiti Team or Enterprise
- Valid subscription with chosen AI provider
- API credentials from your selected AI provider
- Network connectivity to AI provider endpoints
AI Provider Requirements
Azure OpenAI
- Azure subscription with OpenAI service approval
- Azure OpenAI resource created and deployed
- Model deployment completed in Azure OpenAI Studio
Amazon Bedrock
- AWS account with Bedrock access permissions
- Model access granted through AWS console
- IAM credentials with appropriate Bedrock permissions
OpenAI
- Paid OpenAI account (free tier not supported for API access)
- API key generated from OpenAI platform
- Usage limits configured appropriately
Google Vertex AI
- Google Cloud project with Vertex AI API enabled
- Service account with appropriate Vertex AI permissions
- Model access configured in Google Cloud Console
- JSON service account key or Application Default Credentials
Anthropic
- Paid Anthropic account with developer role
- API key generated from the Anthropic Console
- Rate limits understood and configured
Enabling AI Assistant
Step 1: Access AI Assistant Configuration
- Navigate to Admin Settings by clicking your profile in the upper right corner
- Select "Global Preferences" from the admin menu
- Choose "AI Assistant" from the global preferences options
Step 2: Enable AI Assistant
- Check the "Enable AI Assistant" checkbox to activate the feature
- Choose your preferred generative AI model from the dropdown menu
- Select your AI provider (Azure OpenAI, Amazon Bedrock, Google Vertex AI, OpenAI, or Anthropic)
Step 3: Configure Database Metadata Sharing
- Check "Share database metadata with AI Assistant" for enhanced accuracy
- This enables context-aware code generation using your database schema
- Review privacy implications of sharing metadata with AI providers
Enabling database metadata sharing significantly improves AI-generated code accuracy by providing table names, column types, relationships, and constraints to the AI model for context-aware suggestions.
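Coginiti's internal prompt format is not public, but the idea can be sketched: schema metadata is serialized into the context the model sees, so generated SQL can use your real table and column names. The schema below is a hypothetical example, assuming a simple table/column representation.

```python
# Illustrative sketch only: shows how schema metadata can be folded into an
# AI prompt for context-aware code generation. The format is an assumption,
# not Coginiti's actual wire format.

def build_schema_context(tables: dict) -> str:
    """Serialize table/column metadata into a prompt preamble."""
    lines = []
    for table, columns in tables.items():
        cols = ", ".join(f"{name} {dtype}" for name, dtype in columns.items())
        lines.append(f"TABLE {table} ({cols})")
    return "\n".join(lines)

# Hypothetical schema for illustration
schema = {
    "customers": {"customer_id": "INTEGER", "email": "VARCHAR", "phone": "VARCHAR"},
    "orders": {"order_id": "INTEGER", "customer_id": "INTEGER", "order_date": "DATE"},
}

context = build_schema_context(schema)
prompt = f"Given this schema:\n{context}\n\nGenerate SQL to remove duplicate customers."
```

With this context, the model can reference `customers.email` directly instead of inventing column names.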
Step 4: Apply Configuration
- Click "Apply" to enable AI Assistant for all users in your organization
- Test the connection to ensure proper configuration
- Verify AI Assistant availability in the user interface
Azure OpenAI Configuration
Microsoft's Azure OpenAI service provides enterprise-grade AI capabilities with enhanced security and compliance features.
Prerequisites for Azure OpenAI
Azure Setup Requirements:
- Azure subscription - Sign up for Azure
- Azure OpenAI service access - Request access to Azure OpenAI
- Resource creation completed in Azure portal
- Model deployment configured in Azure OpenAI Studio
Step 1: Create Azure OpenAI Resource
- Log into Azure Portal at portal.azure.com
- Create new resource → Search for "OpenAI"
- Select "Azure OpenAI" and click "Create"
- Configure resource settings:
- Subscription: Choose your Azure subscription
- Resource Group: Create new or select existing
- Region: Select supported region (e.g., East US, West Europe)
- Name: Choose unique resource name
- Pricing Tier: Select appropriate tier for usage
Step 2: Deploy AI Model
- Navigate to Azure OpenAI Studio from your resource
- Go to "Management" → "Deployments"
- Click "Create new deployment"
- Configure deployment:
- Model: Select model (e.g., gpt-4, gpt-3.5-turbo)
- Deployment name: Choose descriptive name
- Version: Select model version
- Scale settings: Configure capacity units
Step 3: Retrieve Connection Information
Navigate to your Azure OpenAI resource and collect the following information:
Resource Name (Endpoint)
- Go to "Resource Management" → "Keys and Endpoints"
- Copy the "Endpoint" URL (e.g., https://your-resource.openai.azure.com/)
- Paste into "Resource Name" field in Coginiti
API Key
- In "Keys and Endpoints" section
- Copy "Key 1" or "Key 2" (either key works)
- Keep key secure and paste into "API Key" field in Coginiti
Deployment Name
- Go to Azure OpenAI Studio → "Management" → "Deployments"
- Find your deployment in the deployments list
- Copy the deployment name exactly as shown
- Enter in "Deployment Name" field in Coginiti
Example Azure OpenAI Configuration
Resource Name: https://coginiti-openai.openai.azure.com/
API Key: a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
Deployment Name: gpt-4-deployment
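To see how the three fields fit together, here is a sketch of the Azure OpenAI chat-completions URL they map onto. The `api-version` value is an assumption; check Azure's documentation for the version your deployment supports.

```python
# Sketch: how Coginiti's Azure OpenAI fields map onto a request URL.
# The api-version string below is an assumed value, not taken from Coginiti.

resource_endpoint = "https://coginiti-openai.openai.azure.com"  # "Resource Name"
deployment_name = "gpt-4-deployment"                            # "Deployment Name"
api_version = "2024-02-01"                                      # assumed version

url = (
    f"{resource_endpoint}/openai/deployments/{deployment_name}"
    f"/chat/completions?api-version={api_version}"
)
headers = {"api-key": "<your API key>", "Content-Type": "application/json"}
```

Note that Azure authenticates with an `api-key` header rather than the `Authorization: Bearer` scheme used by OpenAI's direct API.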
Azure OpenAI Security Best Practices
Access Control
- Use managed identities when possible instead of API keys
- Implement IP restrictions in Azure networking settings
- Enable Azure AD authentication for enhanced security
- Monitor API usage through Azure monitoring tools
Cost Management
- Set spending limits in Azure billing
- Monitor token usage to avoid unexpected costs
- Use appropriate model sizes for your use case
- Implement rate limiting to control usage
Amazon Bedrock Configuration
Amazon Bedrock provides access to foundation models from leading AI companies including Anthropic's Claude models.
Prerequisites for Amazon Bedrock
AWS Setup Requirements:
- AWS account with Bedrock service access
- Model access granted for desired AI models
- IAM credentials with appropriate permissions
- Region selection where Bedrock is available
Step 1: Request Model Access
- Log into AWS Console and navigate to Amazon Bedrock
- Go to "Model access" in the left navigation
- Click "Request model access" for desired models
- Complete access request form and submit
- Wait for approval (typically within 24 hours)
Supported Models in Coginiti
- Claude (Anthropic)
- Claude 2.1 (Anthropic)
- Claude Instant (Anthropic)
Step 2: Create IAM User and Permissions
Create an IAM user with appropriate Bedrock permissions:
IAM Policy for Bedrock Access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude*"
    }
  ]
}
Create IAM User
- Navigate to IAM → Users → Create user
- Enter username (e.g., "coginiti-bedrock-user")
- Attach policy created above
- Create user and generate access keys
Step 3: Configure Bedrock in Coginiti
Connection Settings
- Access Key ID: IAM user access key ID
- Secret Access Key: IAM user secret access key
- Region: AWS region where Bedrock is available (e.g., us-east-1, us-west-2)
- Model: Select from supported Anthropic models
Example Amazon Bedrock Configuration
Access Key ID: AKIAIOSFODNN7EXAMPLE
Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Region: us-east-1
Model: Claude 2.1
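Behind these settings, requests to Claude 2.1 go through Bedrock's `InvokeModel` API. The sketch below builds the request body in Anthropic's legacy text-completions format used on Bedrock; the parameter values are illustrative assumptions.

```python
import json

# Sketch of a request body for Claude 2.1 on Amazon Bedrock (legacy
# text-completions format). In practice this would be passed to boto3's
# bedrock-runtime client, e.g.:
#   client.invoke_model(modelId="anthropic.claude-v2:1", body=body)
# The modelId string and parameter values are assumptions to verify
# against the AWS console.

body = json.dumps({
    "prompt": "\n\nHuman: Explain this query: SELECT 1\n\nAssistant:",
    "max_tokens_to_sample": 512,   # cap on generated tokens
    "temperature": 0.2,            # low temperature suits code assistance
})
```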
Bedrock Security Best Practices
IAM Security
- Use least privilege principle for IAM policies
- Rotate access keys regularly (every 90 days)
- Enable CloudTrail logging for API call auditing
- Use IAM roles instead of users when possible
Cost Management
- Monitor usage through AWS Cost Explorer
- Set up billing alerts for unexpected usage
- Understand pricing model for each foundation model
- Use appropriate model sizes for your workload
Google Vertex AI Configuration
Google Vertex AI provides access to multiple AI models including PaLM, Gemini, and third-party models through Google Cloud's managed platform.
Prerequisites for Google Vertex AI
Google Cloud Setup Requirements:
- Google Cloud project with billing enabled
- Vertex AI API enabled in the project
- Service account with appropriate permissions
- Model access configured for desired models
Step 1: Enable Vertex AI API
- Log into Google Cloud Console at console.cloud.google.com
- Select your project or create a new one
- Navigate to "APIs & Services" → "Library"
- Search for "Vertex AI API" and click on it
- Click "Enable" to activate the API for your project
Step 2: Create Service Account
Create a service account with appropriate Vertex AI permissions:
Service Account Creation
- Navigate to "IAM & Admin" → "Service Accounts"
- Click "Create Service Account"
- Configure service account:
- Name: Choose descriptive name (e.g., "coginiti-vertex-ai")
- Description: "Service account for Coginiti AI Assistant"
- Service account ID: Will be auto-generated
Assign Permissions
Assign the following roles to the service account:
- Vertex AI User (roles/aiplatform.user)
- Vertex AI Service Agent (roles/aiplatform.serviceAgent)
Generate Service Account Key
- Click on the created service account
- Go to "Keys" tab
- Click "Add Key" → "Create new key"
- Select "JSON" format
- Download and securely store the JSON key file
Step 3: Configure Model Access
Enable access to the AI models you want to use:
Available Models
- Gemini Pro: Google's advanced language model
- Gemini Pro Vision: Multimodal model with image understanding
- PaLM 2: Google's foundation language model
- Codey: Specialized model for code generation
- Third-party models: Various models from other providers
Enable Model Access
- Navigate to Vertex AI → Model Garden
- Browse available models and select desired ones
- Click "Enable" or "Request Access" for each model
- Wait for approval if required (some models have access controls)
Step 4: Configure Vertex AI in Coginiti
Connection Settings
- Project ID: Your Google Cloud project identifier
- Location: Google Cloud region (e.g., us-central1, europe-west1)
- Service Account Key: Upload the JSON key file or provide key content
- Model: Select from available Vertex AI models
Example Google Vertex AI Configuration
Project ID: my-company-ai-project
Location: us-central1
Service Account Key: [Upload service-account-key.json]
Model: gemini-pro
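The Project ID, Location, and Model fields combine into a Vertex AI REST endpoint. The sketch below shows the `generateContent` URL shape for Gemini models; verify the exact path against Google's current documentation.

```python
# Sketch: how the Coginiti fields map onto a Vertex AI REST endpoint.
# URL shape follows Google's generateContent API for Gemini; confirm
# against current Vertex AI docs before relying on it.

project_id = "my-company-ai-project"  # "Project ID"
location = "us-central1"              # "Location"
model = "gemini-pro"                  # "Model"

url = (
    f"https://{location}-aiplatform.googleapis.com/v1"
    f"/projects/{project_id}/locations/{location}"
    f"/publishers/google/models/{model}:generateContent"
)
```

Requests to this endpoint are authenticated with an OAuth token derived from the service account key you uploaded, not with a raw API key.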
Vertex AI Security Best Practices
Service Account Security
- Use dedicated service accounts for Coginiti integration
- Apply principle of least privilege to service account roles
- Rotate service account keys regularly (quarterly recommended)
- Monitor service account usage through Cloud Logging
Project Security
- Enable Cloud Audit Logs for Vertex AI API calls
- Configure VPC firewall rules if using private networks
- Use Identity and Access Management for fine-grained control
- Monitor API quotas and usage patterns
Cost Management
- Set up billing alerts for unexpected usage
- Monitor model usage through Cloud Monitoring
- Use appropriate model sizes for your workload
- Implement request quotas to control costs
Vertex AI Model Comparison
| Model | Best For | Context Length | Cost |
|---|---|---|---|
| Gemini Pro | General tasks, reasoning | 30k tokens | Medium |
| Gemini Pro Vision | Multimodal, image analysis | 16k tokens | Higher |
| PaLM 2 | Text generation, chat | 8k tokens | Lower |
| Codey | Code generation, completion | 6k tokens | Medium |
OpenAI Configuration
Direct integration with OpenAI's API provides access to the latest GPT models and features.
Prerequisites for OpenAI
OpenAI Setup Requirements:
- Paid OpenAI account - Create OpenAI account
- API access enabled (requires payment method on file)
- Usage limits configured appropriately
- API key generated and secured
Step 1: Create OpenAI Account
- Visit OpenAI Platform
- Sign up for account or log into existing account
- Add payment method to enable API access
- Set usage limits to control spending
Step 2: Generate API Key
- Navigate to "API Keys" in your OpenAI account
- Click "Create new secret key"
- Enter descriptive name (e.g., "Coginiti Integration")
- Copy the API key immediately (it won't be shown again)
- Store key securely for use in Coginiti
OpenAI API keys provide full access to your account and billing. Store them securely and never share them publicly. Regenerate keys if compromised.
Step 3: Configure OpenAI in Coginiti
Connection Settings
- API Key: Your OpenAI API secret key
- Model: Select from available OpenAI models (GPT-4, GPT-3.5-turbo, etc.)
Example OpenAI Configuration
API Key: sk-proj-abcdef123456789...
Model: gpt-4
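For reference, here is a minimal sketch of the kind of request these two settings produce: OpenAI's chat-completions API takes a Bearer token and a messages array. The key string is a placeholder, and the prompt content is illustrative.

```python
import json

# Sketch of an OpenAI chat-completions request built from the two
# Coginiti settings. Placeholder key only -- never hardcode real keys.

api_key = "sk-proj-..."  # placeholder, elided
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Optimize: SELECT * FROM orders"},
    ],
})
```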
Available OpenAI Models
GPT-4 Models
- GPT-4: Most capable model, best for complex reasoning
- GPT-4 Turbo: Faster and more cost-effective than GPT-4
- GPT-4o: Optimized for speed and efficiency
GPT-3.5 Models
- GPT-3.5-turbo: Good balance of capability and cost
- GPT-3.5-turbo-16k: Extended context window for longer queries
OpenAI Security Best Practices
API Security
- Regenerate API keys regularly
- Monitor API usage for unusual activity
- Set usage limits to prevent unexpected charges
- Use environment variables to store API keys securely
Cost Management
- Set spending limits in OpenAI dashboard
- Monitor token usage and costs regularly
- Choose appropriate models for your use case
- Implement client-side rate limiting
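One way to implement the client-side rate limiting mentioned above is a sliding-window limiter that caps request frequency before calls ever reach the API. This is a generic sketch, not part of Coginiti or the OpenAI SDK.

```python
import time
from collections import deque

# Minimal sliding-window rate limiter: at most max_calls within any
# per_seconds window. Illustrative only.

class RateLimiter:
    def __init__(self, max_calls: int, per_seconds: float):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None) -> bool:
        """Return True if a call may proceed right now."""
        now = time.monotonic() if now is None else now
        # Evict timestamps that have fallen out of the window
        while self.calls and now - self.calls[0] >= self.per_seconds:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

A caller would check `allow()` before each API request and back off (or queue) when it returns False.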
Anthropic Configuration
Direct integration with Anthropic's Claude models provides advanced reasoning and analysis capabilities.
Prerequisites for Anthropic
Anthropic Setup Requirements:
- Paid Anthropic account - Create account at Claude.ai
- Developer role minimum required for API access
- API key generated from Anthropic console
- Usage limits understood and configured
Step 1: Create Anthropic Account
- Visit the Anthropic Console at console.anthropic.com
- Sign up for account with email verification
- Upgrade to paid plan for API access
- Verify developer role in account settings
Step 2: Generate API Key
- Navigate to "API Keys" section in your profile
- Click "Create Key" button
- Enter descriptive name for the key
- Copy the generated API key immediately
- Store key securely for use in Coginiti
The minimum role required to access API keys is 'developer'. Ensure your account has appropriate permissions before attempting to generate keys.
Step 3: Configure Anthropic in Coginiti
Connection Settings
- API Key: Your Anthropic API key
- Model: Select from available Claude models
Example Anthropic Configuration
API Key: sk-ant-api03-abcdef123456789...
Model: Claude 3 Sonnet
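These settings feed into requests against Anthropic's Messages API. The header names below follow Anthropic's published API; the model identifier string is an assumption, so check the console for the exact id of the Claude model you selected.

```python
import json

# Sketch of an Anthropic Messages API request. The model id below is an
# assumed identifier for Claude 3 Sonnet; the key is a placeholder.

headers = {
    "x-api-key": "sk-ant-api03-...",     # placeholder key, elided
    "anthropic-version": "2023-06-01",   # required API version header
    "content-type": "application/json",
}
body = json.dumps({
    "model": "claude-3-sonnet-20240229",  # assumed id -- verify in console
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Explain: SELECT 1"}],
})
```

Unlike OpenAI's Bearer-token scheme, Anthropic authenticates with an `x-api-key` header and requires an explicit `anthropic-version`.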
Available Anthropic Models
Claude Models
- Claude 3 Opus: Most capable model for complex tasks
- Claude 3 Sonnet: Balanced performance and speed
- Claude 3 Haiku: Fastest and most cost-effective
- Claude Instant: Quick responses for simple tasks
Anthropic Security Best Practices
API Security
- Rotate API keys according to security policies
- Monitor API usage through Anthropic dashboard
- Implement rate limiting to prevent abuse
- Use secure storage for API credentials
Usage Management
- Understand rate limits for your account tier
- Monitor conversation costs and token usage
- Choose appropriate models for different use cases
- Implement fallback mechanisms for rate limit scenarios
Database Metadata Integration
Enabling Metadata Sharing
When enabled, Coginiti shares database schema information with the AI Assistant to provide more accurate and contextual suggestions.
Metadata Types Shared
- Table names and schema information
- Column names and data types
- Primary keys and foreign key relationships
- Indexes and constraints
- View definitions and materialized views
Benefits of Metadata Sharing
- Context-aware code generation using actual table/column names
- Data type appropriate suggestions and transformations
- Relationship-aware join suggestions and queries
- Constraint-aware data validation and cleaning code
Privacy Considerations
Data Shared
- Schema metadata only - no actual data content
- Table and column names - consider if names contain sensitive information
- Database structure - relationships and constraints
- No query results or actual data values
Privacy Best Practices
- Review table/column naming for sensitive information
- Use generic names for sensitive schemas when possible
- Document metadata sharing in privacy policies
- Consider separate schemas for sensitive data
Using AI Assistant
Accessing AI Features
Once configured, users can access AI Assistant features through:
Keyboard Shortcuts
- Mod+Alt+E: Explain selected query
- Mod+Alt+O: Optimize selected query
- Mod+Enter: Send message in AI chat
- Mod+Shift+Z: Cancel AI request
Context Menus
- Right-click on queries for AI options
- Error message assistance for automatic fixing
- Code generation prompts in editor
AI Chat Interface
- Natural language queries for code generation
- Interactive problem solving and guidance
- Code review and optimization suggestions
Common AI Assistant Use Cases
Data Cleansing
User: "Generate code to clean customer data - remove duplicates, standardize phone numbers, and validate email addresses"
AI: [Generates appropriate SQL/CoginitiScript with table-specific column names]
Query Optimization
User: [Selects slow query and uses Mod+Alt+O]
AI: "This query can be optimized by adding an index on the date column and rewriting the subquery as a join..."
Error Resolution
System: [SQL error occurs]
AI: "This error is caused by a data type mismatch. Try converting the string to integer using CAST()..."
Troubleshooting AI Assistant
Common Configuration Issues
Connection Failures
Symptoms: AI Assistant fails to respond or shows connection errors
Solutions for Azure OpenAI:
- Verify endpoint URL format and accessibility
- Check API key validity and permissions
- Confirm deployment name matches Azure configuration
- Test network connectivity to Azure endpoints
Solutions for Amazon Bedrock:
- Verify model access has been granted
- Check IAM permissions for Bedrock service
- Confirm region availability for selected models
- Test AWS credentials with CLI or other tools
Solutions for Google Vertex AI:
- Verify service account permissions and key file validity
- Check Vertex AI API is enabled in the project
- Confirm model access has been granted in Model Garden
- Test Google Cloud credentials with gcloud CLI
Solutions for OpenAI:
- Verify API key is valid and not expired
- Check account billing and usage limits
- Confirm model availability for your account tier
- Test API access with curl or other tools
Solutions for Anthropic:
- Verify API key and developer role permissions
- Check account status and billing information
- Confirm model availability for your account
- Test rate limits and usage quotas
Performance Issues
Symptoms: Slow AI responses or timeouts
Solutions:
- Check network latency to AI provider endpoints
- Verify API rate limits aren't being exceeded
- Reduce query complexity for faster responses
- Monitor provider status pages for service issues
Accuracy Issues
Symptoms: AI provides incorrect or irrelevant suggestions
Solutions:
- Enable metadata sharing for better context
- Provide more specific prompts with examples
- Review database schema for clarity
- Consider different models for your use case
Monitoring and Maintenance
Usage Monitoring
- Track API calls and token usage
- Monitor costs across all AI providers
- Review user feedback on AI suggestions
- Analyze performance metrics and response times
Regular Maintenance
- Rotate API keys according to security policies
- Update model versions when available
- Review and update permissions regularly
- Test configurations after provider updates
Security and Compliance
Data Security
Data in Transit
- TLS encryption for all API communications
- API key authentication for secure access
- Request/response logging for audit trails
- Network security controls and monitoring
Data at Rest
- No persistent storage of query content by default
- Secure credential storage in Coginiti
- Encrypted configuration data
- Regular security updates and patches
Compliance Considerations
Data Governance
- Document AI usage in data governance policies
- Review data sharing agreements with AI providers
- Implement user consent mechanisms if required
- Maintain audit logs of AI interactions
Regulatory Compliance
- GDPR considerations for EU users
- HIPAA compliance for healthcare data
- SOX requirements for financial data
- Industry-specific regulations as applicable
Enterprise Security Features
Access Control
- Admin-controlled enablement of AI features
- Role-based access to AI capabilities
- User permission management for AI features
- Audit logging of AI usage and administration
Integration Security
- Secure API key management with encryption
- Network isolation options for sensitive environments
- Proxy support for corporate networks
- Certificate validation for secure connections
Best Practices
Configuration Best Practices
Model Selection
- Choose appropriate models for your use case and budget
- Test different models to find optimal performance
- Consider cost vs. capability trade-offs
- Monitor model performance and user satisfaction
Security Management
- Implement key rotation schedules
- Use least privilege access principles
- Monitor usage patterns for anomalies
- Regular security assessments of AI integrations
User Training
AI Assistant Capabilities
- Train users on available AI features
- Provide examples of effective prompts
- Share best practices for AI interaction
- Create usage guidelines for your organization
Privacy Awareness
- Educate users about data sharing implications
- Provide guidelines for sensitive data handling
- Document policies for AI usage
- Regular training updates on new features
Support and Resources
Getting Help
For AI Assistant configuration assistance:
- Coginiti Support: support@coginiti.co
- Provider Documentation: Consult AI provider official docs
- Community Forums: User forums and knowledge base
Additional Resources
- Command Palette Reference - AI Assistant keyboard shortcuts
- Security Log Reference - AI usage auditing
- User Management Guide - User access control
Provider-Specific Resources
- Azure OpenAI Documentation
- Amazon Bedrock Documentation
- OpenAI API Documentation
- Anthropic API Documentation
Summary
You have successfully configured AI Assistant for Coginiti! Key achievements:
✅ Multi-Provider Support: Integration with Azure OpenAI, Amazon Bedrock, Google Vertex AI, OpenAI, and Anthropic
✅ Secure Configuration: Proper API key management and secure connections
✅ Database Integration: Metadata sharing for context-aware AI suggestions
✅ Enterprise Features: Organization-wide enablement and access control
✅ Security Implementation: Best practices for credential and data security
✅ User Enablement: AI-powered code generation, query optimization, and error resolution
Your Coginiti instance now provides intelligent AI assistance for data analysis, query optimization, and code generation, significantly enhancing user productivity and code quality.