Continue

AI Development, Open Source, Customization, Data Control, Multi-LLM Support

Overview

Continue is a fully open-source AI coding assistant for VS Code and JetBrains IDEs. It lets developers freely select and switch between multiple LLM providers, including OpenAI, Anthropic, and Ollama. The 2025 version 1.0 release added custom AI assistant creation and sharing, enterprise governance and security controls, and private data-plane deployment, giving organizations complete control over their data.

Details

Continue began as an open-source project in 2023, gained major feature enhancements through 2024, and shipped its official 1.0 release in 2025. The current version builds its ecosystem around Continue Hub, a Docker Hub-like sharing platform where developers create and share custom AI assistants. Because the project is fully open source, enterprises can run advanced AI development assistance in self-hosted environments without sending company data to external services.

Key Features

  • Agent: Autonomous execution of complex, multi-step tasks
  • Chat: Natural-language, interactive coding assistance with AI
  • Autocomplete: Real-time code completion as you type
  • Edit: Transformation and refactoring of existing code in place
  • Custom LLM Integration: Support for any LLM model and provider (a stub-endpoint sketch follows this list)
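
Because any endpoint that speaks the OpenAI chat completions format can be registered as a provider, one quick way to exercise the custom LLM path is to stand up a local stub and confirm that requests and responses round-trip before wiring in a real internal model. The sketch below is hypothetical (not part of Continue) and uses only the Python standard library; the port, file name, and echo behavior are arbitrary choices, and a model entry would point at it with provider "openai" and an apiBase such as http://127.0.0.1:8000/v1, much like the Ollama entry in the configuration example further down uses a local apiBase.

# stub_openai_server.py - minimal OpenAI-compatible chat completions stub (sketch)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubChatCompletions(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # Echo the last user message back in the OpenAI response shape
        last_user = next(
            (m["content"] for m in reversed(request.get("messages", []))
             if m.get("role") == "user"),
            "",
        )
        body = json.dumps({
            "id": "stub-1",
            "object": "chat.completion",
            "model": request.get("model", "stub-model"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": f"Echo: {last_user[:200]}"},
                "finish_reason": "stop",
            }],
            "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), StubChatCompletions).serve_forever()

Posting {"messages":[{"role":"user","content":"hi"}]} to http://127.0.0.1:8000/v1/chat/completions (for example with curl) should return the echoed completion. A production custom provider would also need to support streamed (server-sent event) responses, which this stub omits for brevity.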

New Features in 2025 Version 1.0

  • Continue Hub: Custom AI assistant sharing platform
  • Enterprise Governance: Detailed access control and usage management
  • Self-hosting Support: Deployment in completely private environments
  • Plugin Functionality: Custom tool and service integration API
  • Multi-language Support: Localized interface, including Japanese

Advantages and Disadvantages

Advantages

  • Completely Open Source: Transparency, customizability, and no vendor lock-in
  • Deep Customization: Fine-grained control through configuration files
  • Complete Data Control: Self-hosting keeps data from ever leaving your environment
  • Multi-LLM Support: Free choice of OpenAI, Anthropic, Ollama, local models, and more
  • Cost Efficient: The solo version is free, and the enterprise tier is generally priced below comparable commercial offerings
  • Community Driven: Active development community and a rich ecosystem of extensions

Disadvantages

  • Complex Setup: Editing configuration files requires some technical knowledge
  • Learning Curve: It takes time to use the advanced features effectively
  • Support Structure: Official support is limited, as is typical for open-source projects
  • Stability: May be less stable than commercial services, as the project is still young
  • UI/UX: The interface is less polished than that of commercial products

Code Examples

VS Code Setup

# Install VS Code extension
# 1. Open Extensions (Ctrl+Shift+X)
# 2. Search "Continue" and install
# 3. Click Continue icon in bottom right
# 4. Edit configuration file (config.json)

# Command line installation
code --install-extension continue.continue

Basic Configuration File (config.json)

{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "your-openai-api-key",
      "contextLength": 128000
    },
    {
      "title": "Claude Sonnet 4",
      "provider": "anthropic", 
      "model": "claude-3-5-sonnet-20241022",
      "apiKey": "your-anthropic-api-key",
      "contextLength": 200000
    },
    {
      "title": "Llama 3.1 Local",
      "provider": "ollama",
      "model": "llama3.1:8b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "your-mistral-api-key"
  },
  "allowAnonymousTelemetry": false,
  "embeddingsProvider": {
    "provider": "transformers.js",
    "model": "Xenova/all-MiniLM-L6-v2"
  }
}
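
Before enabling the "Llama 3.1 Local" entry above, it can help to confirm that the Ollama endpoint it points to is actually reachable. The snippet below is a small sketch, not part of Continue; it assumes Ollama is running on localhost:11434 and that llama3.1:8b has already been pulled (ollama pull llama3.1:8b).

# check_ollama.py - verify the local endpoint referenced by config.json (sketch)
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.1:8b",
    "prompt": "Reply with the single word: ready",
    "stream": False,
}).encode()

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request, timeout=120) as response:
    print(json.loads(response.read())["response"])

If this prints a response, Continue's ollama provider should be able to reach the same endpoint; if it raises a connection error, start Ollama (or adjust apiBase) before pointing the editor at it.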

Python Project Analysis and Refactoring

# Large-scale Python project analysis and refactoring using Continue
# Comprehensive code improvement through Agent functionality

import asyncio
import json
import logging
import sys
from typing import Any, Dict, List, Optional
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Ask Continue: "Analyze this class and improve memory efficiency"
@dataclass
class PerformanceOptimizedUserProfile:
    """Memory-efficient optimized user profile"""
    user_id: str
    username: str = field(compare=False)  # Exclude from comparison
    email: str = field(repr=False)        # Exclude from repr display
    preferences: Dict[str, Any] = field(default_factory=dict)
    _cache: Dict[str, Any] = field(default_factory=dict, init=False, repr=False)
    
    def __post_init__(self):
        # Continue suggests performance optimization patterns
        # Reduce memory usage through string interning
        self.user_id = sys.intern(self.user_id)
        self.username = sys.intern(self.username)
    
    @property
    def display_name(self) -> str:
        """Cached display name"""
        if 'display_name' not in self._cache:
            self._cache['display_name'] = f"{self.username} ({self.user_id})"
        return self._cache['display_name']
    
    def clear_cache(self):
        """Clear cache"""
        self._cache.clear()

class OptimizedDatabaseManager:
    """Performance optimized database manager"""
    
    def __init__(self, connection_string: str, max_pool_size: int = 20):
        # Continue suggests high-performance database connection patterns
        self.connection_string = connection_string
        self.max_pool_size = max_pool_size
        self.connection_pool = None
        self.query_cache = {}  # Query result cache
        self.common_queries = {}  # SQL text for frequently used queries
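        # NOTE: query_cache is in-process and unbounded; long-running services should
        # bound it (e.g. with an LRU policy) or move it to an external store such as Redis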
        
        # Logging setup
        self.logger = logging.getLogger(__name__)
        self.performance_logger = logging.getLogger(f"{__name__}.performance")
    
    async def initialize_pool(self):
        """Initialize optimized connection pool"""
        try:
            import asyncpg
            
            # Continue suggests high-performance DB configuration
            async def init_connection(connection):
                # Encode/decode JSONB columns (e.g. preferences) as Python dicts
                await connection.set_type_codec(
                    'jsonb', encoder=json.dumps, decoder=json.loads, schema='pg_catalog'
                )

            self.connection_pool = await asyncpg.create_pool(
                self.connection_string,
                min_size=2,
                max_size=self.max_pool_size,
                command_timeout=30,
                max_queries=50000,
                max_inactive_connection_lifetime=300,
                init=init_connection,
                server_settings={
                    # Only session-settable parameters belong here; settings such as
                    # shared_preload_libraries must be configured in postgresql.conf
                    'jit': 'off',
                    'application_name': 'continue-optimized-db'
                }
            )
            
            # Pre-create prepared statements
            await self._prepare_common_statements()
            self.logger.info("Optimized connection pool initialized")
            
        except Exception as e:
            self.logger.error(f"Failed to initialize pool: {e}")
            raise
    
    async def _prepare_common_statements(self):
        """Register SQL text for frequently used queries.

        asyncpg binds prepared statements to a single connection, so we store the
        SQL here and rely on asyncpg's per-connection statement cache instead of
        holding connection-bound statement objects.
        """
        # Continue analyzes and suggests high-frequency query patterns
        common_queries = {
            'get_user_by_id': """
                SELECT user_id, username, email, preferences, created_at
                FROM user_profiles 
                WHERE user_id = $1 AND active = true
            """,
            'update_user_preferences': """
                UPDATE user_profiles 
                SET preferences = preferences || $2, updated_at = CURRENT_TIMESTAMP
                WHERE user_id = $1
                RETURNING user_id, preferences
            """,
            'get_users_by_activity': """
                SELECT user_id, username, last_login
                FROM user_profiles 
                WHERE last_login > $1
                ORDER BY last_login DESC
                LIMIT $2
            """
        }
        
        self.common_queries.update(common_queries)
    
    async def get_user_optimized(self, user_id: str) -> Optional[PerformanceOptimizedUserProfile]:
        """Optimized user retrieval"""
        # Check cache
        cache_key = f"user:{user_id}"
        if cache_key in self.query_cache:
            cache_data = self.query_cache[cache_key]
            # Check cache validity (5 minutes)
            if datetime.now() - cache_data['timestamp'] < timedelta(minutes=5):
                self.performance_logger.debug(f"Cache hit for user {user_id}")
                return cache_data['data']
        
        start_time = datetime.now()
        
        try:
            async with self.connection_pool.acquire() as connection:
                # asyncpg prepares and caches this statement per connection automatically
                row = await connection.fetchrow(
                    self.common_queries['get_user_by_id'], user_id
                )
                
                if not row:
                    return None
                
                # Create optimized object
                user_profile = PerformanceOptimizedUserProfile(
                    user_id=row['user_id'],
                    username=row['username'],
                    email=row['email'],
                    preferences=row['preferences'] or {}
                )
                
                # Cache result
                self.query_cache[cache_key] = {
                    'data': user_profile,
                    'timestamp': datetime.now()
                }
                
                # Performance measurement
                execution_time = (datetime.now() - start_time).total_seconds()
                self.performance_logger.info(f"User fetch took {execution_time:.3f}s")
                
                return user_profile
                
        except Exception as e:
            self.logger.error(f"Error fetching user {user_id}: {e}")
            return None
    
    async def batch_update_preferences(self, updates: List[Dict[str, Any]]) -> List[str]:
        """Batch preference updates (high performance)"""
        # Continue suggests batch processing optimization patterns
        updated_users = []
        
        async with self.connection_pool.acquire() as connection:
            async with connection.transaction():
                query = self.common_queries['update_user_preferences']
                
                # Batch execution
                for update in updates:
                    user_id = update['user_id']
                    preferences = update['preferences']
                    
                    try:
                        result = await connection.fetchrow(query, user_id, preferences)
                        if result:
                            updated_users.append(result['user_id'])
                            
                            # Cache invalidation
                            cache_key = f"user:{user_id}"
                            if cache_key in self.query_cache:
                                del self.query_cache[cache_key]
                                
                    except Exception as e:
                        self.logger.error(f"Failed to update user {user_id}: {e}")
                        continue
        
        self.logger.info(f"Batch updated {len(updated_users)} users")
        return updated_users

class AdvancedAPIController:
    """Advanced API controller (Continue optimized version)"""
    
    def __init__(self, db_manager: OptimizedDatabaseManager):
        self.db_manager = db_manager
        self.request_cache = {}
        self.rate_limiter = {}
        self.logger = logging.getLogger(__name__)
    
    async def handle_bulk_user_operation(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
        """High-performance bulk user operation processing"""
        # Continue suggests large data processing patterns
        operation = request_data.get('operation')
        user_ids = request_data.get('user_ids', [])
        
        if not operation or not user_ids:
            return {
                "status": "error",
                "message": "Operation and user_ids are required",
                "code": 400
            }
        
        if len(user_ids) > 1000:  # Bulk processing limit
            return {
                "status": "error", 
                "message": "Maximum 1000 users per batch",
                "code": 400
            }
        
        start_time = datetime.now()
        
        try:
            if operation == "get_profiles":
                # High-speed retrieval through parallel processing
                tasks = [
                    self.db_manager.get_user_optimized(user_id) 
                    for user_id in user_ids
                ]
                
                results = await asyncio.gather(*tasks, return_exceptions=True)
                
                # Filter successful results only
                successful_results = [
                    {
                        "user_id": result.user_id,
                        "username": result.username,
                        "display_name": result.display_name
                    }
                    for result in results
                    if isinstance(result, PerformanceOptimizedUserProfile)
                ]
                
                processing_time = (datetime.now() - start_time).total_seconds()
                
                return {
                    "status": "success",
                    "data": successful_results,
                    "metadata": {
                        "total_requested": len(user_ids),
                        "successful": len(successful_results),
                        "processing_time": f"{processing_time:.3f}s"
                    }
                }
                
            elif operation == "update_preferences":
                # Batch update processing
                preferences = request_data.get('preferences', {})
                
                updates = [
                    {"user_id": user_id, "preferences": preferences}
                    for user_id in user_ids
                ]
                
                updated_users = await self.db_manager.batch_update_preferences(updates)
                
                processing_time = (datetime.now() - start_time).total_seconds()
                
                return {
                    "status": "success",
                    "data": {
                        "updated_users": updated_users,
                        "updated_count": len(updated_users)
                    },
                    "metadata": {
                        "processing_time": f"{processing_time:.3f}s"
                    }
                }
            
            else:
                return {
                    "status": "error",
                    "message": f"Unknown operation: {operation}",
                    "code": 400
                }
                
        except Exception as e:
            self.logger.error(f"Bulk operation error: {e}")
            return {
                "status": "error",
                "message": "Internal server error",
                "code": 500
            }

# Continue Agent functionality usage example
async def performance_analysis_agent():
    """Performance analysis by Continue Agent"""
    # Example instruction to Agent:
    """
    Please automatically execute the following tasks:
    
    1. Identify performance bottlenecks in this Python codebase
    2. Suggest memory usage optimizations
    3. Database query efficiency improvements
    4. Async processing improvement points
    5. Generate comprehensive performance test code
    6. Create optimization benchmark comparison report
    """
    
    # Continue Agent automatically executes:
    # ✓ Profile entire codebase
    # ✓ Identify bottlenecks and generate reports
    # ✓ Suggest and implement optimization code
    # ✓ Automatically generate test code
    # ✓ Create performance comparison report
    
    print("Performance analysis completed by Continue Agent")

if __name__ == "__main__":
    # Usage example
    async def main():
        db_manager = OptimizedDatabaseManager("postgresql://user:pass@localhost/db")
        await db_manager.initialize_pool()
        
        api_controller = AdvancedAPIController(db_manager)
        
        # Test request
        test_request = {
            "operation": "get_profiles",
            "user_ids": [f"user{i}" for i in range(100)]  # Batch processing for 100 users
        }
        
        result = await api_controller.handle_bulk_user_operation(test_request)
        print(f"Bulk operation result: {result}")
        
        # Execute agent analysis
        await performance_analysis_agent()
    
    asyncio.run(main())

TypeScript Custom LLM Integration

// TypeScript Continue custom LLM integration example
// Organization-specific AI model and service integration

interface CustomLLMConfig {
    name: string;
    endpoint: string;
    apiKey: string;
    model: string;
    contextLength: number;
    capabilities: string[];
}

interface ContinueConfig {
    models: CustomLLMConfig[];
    customCommands: CustomCommand[];
    contextProviders: ContextProvider[];
}

// Custom command definition
interface CustomCommand {
    name: string;
    description: string;
    prompt: string;
    model?: string;
}

// Context provider definition (shape inferred from the contextProviders entries below)
interface ContextProvider {
    name: string;
    description: string;
    query: string;
    type: string;
}

// Enterprise-specific Continue configuration
const enterpriseConfig: ContinueConfig = {
    models: [
        {
            name: "Company Internal GPT",
            endpoint: "https://ai.company.com/v1/chat/completions",
            apiKey: process.env.COMPANY_AI_API_KEY!,
            model: "company-gpt-4-turbo",
            contextLength: 128000,
            capabilities: ["chat", "edit", "autocomplete"]
        },
        {
            name: "Security Focused Model", 
            endpoint: "https://secure-ai.company.com/api",
            apiKey: process.env.SECURE_AI_KEY!,
            model: "security-llm-v2",
            contextLength: 64000,
            capabilities: ["security-review", "vulnerability-scan"]
        },
        {
            name: "Code Review Specialist",
            endpoint: "https://codereview-ai.company.com/analyze",
            apiKey: process.env.CODE_REVIEW_KEY!,
            model: "code-reviewer-pro",
            contextLength: 200000,
            capabilities: ["code-review", "refactor-suggest"]
        }
    ],
    
    customCommands: [
        {
            name: "security-audit",
            description: "Perform comprehensive security audit on selected code",
            prompt: `Analyze the following code for security vulnerabilities:
            
            1. SQL injection risks
            2. XSS vulnerabilities  
            3. Authentication bypasses
            4. Data exposure risks
            5. Input validation issues
            
            Provide specific fixes and security best practices:
            
            {{{ input }}}`,
            model: "Security Focused Model"
        },
        {
            name: "performance-optimize",
            description: "Optimize code for performance",
            prompt: `Analyze and optimize the following code for performance:
            
            1. Memory usage optimization
            2. Algorithm efficiency improvements
            3. Database query optimization
            4. Caching strategies
            5. Async/await best practices
            
            Provide optimized code with performance metrics:
            
            {{{ input }}}`,
            model: "Company Internal GPT"
        },
        {
            name: "enterprise-refactor",
            description: "Refactor code following company standards",
            prompt: `Refactor the following code according to our enterprise standards:
            
            1. Follow company coding conventions
            2. Implement proper error handling
            3. Add comprehensive logging
            4. Ensure type safety
            5. Add unit tests
            6. Document complex logic
            
            Company standards context: {{{ company_standards }}}
            
            Code to refactor:
            {{{ input }}}`,
            model: "Code Review Specialist"
        }
    ],
    
    contextProviders: [
        {
            name: "company-docs",
            description: "Company internal documentation",
            query: "search-company-docs",
            type: "retrieval"
        },
        {
            name: "jira-tickets",
            description: "Related JIRA tickets",
            query: "get-related-tickets", 
            type: "api"
        }
    ]
};

// Advanced React component development
// Comprehensive component implementation by Continue Agent

import React, { useState, useEffect, useCallback, useMemo } from 'react';
import { debounce } from 'lodash';

interface AdvancedSearchProps {
    onSearch: (query: string, filters: SearchFilters) => Promise<SearchResult[]>;
    placeholder?: string;
    initialFilters?: SearchFilters;
    maxResults?: number;
}

interface SearchFilters {
    category?: string;
    dateRange?: DateRange;
    tags?: string[];
    sortBy?: 'relevance' | 'date' | 'popularity';
    includeArchived?: boolean;
}

// Date range referenced by SearchFilters.dateRange
interface DateRange {
    from: Date;
    to: Date;
}

interface SearchResult {
    id: string;
    title: string;
    description: string;
    category: string;
    tags: string[];
    createdAt: Date;
    relevanceScore: number;
}

// Ask Continue: "Implement high-performance search component"
const AdvancedSearchComponent: React.FC<AdvancedSearchProps> = ({
    onSearch,
    placeholder = "Enter search keywords...",
    initialFilters = {},
    maxResults = 50
}) => {
    const [query, setQuery] = useState<string>('');
    const [filters, setFilters] = useState<SearchFilters>(initialFilters);
    const [results, setResults] = useState<SearchResult[]>([]);
    const [loading, setLoading] = useState<boolean>(false);
    const [error, setError] = useState<string | null>(null);
    
    // Continue suggests performance optimization patterns
    // Debounced search implementation
    const debouncedSearch = useMemo(
        () => debounce(async (searchQuery: string, searchFilters: SearchFilters) => {
            if (!searchQuery.trim()) {
                setResults([]);
                return;
            }
            
            try {
                setLoading(true);
                setError(null);
                
                const searchResults = await onSearch(searchQuery, searchFilters);
                setResults(searchResults.slice(0, maxResults));
                
            } catch (err) {
                setError(err instanceof Error ? err.message : 'Search failed');
                setResults([]);
            } finally {
                setLoading(false);
            }
        }, 300),
        [onSearch, maxResults]
    );
    
    // Automatic search on query/filter changes
    useEffect(() => {
        debouncedSearch(query, filters);
        
        // Cleanup
        return () => {
            debouncedSearch.cancel();
        };
    }, [query, filters, debouncedSearch]);
    
    // Filter update function
    const updateFilter = useCallback(<K extends keyof SearchFilters>(
        key: K, 
        value: SearchFilters[K]
    ) => {
        setFilters(prev => ({
            ...prev,
            [key]: value
        }));
    }, []);
    
    // Search result highlighting
    const highlightSearchTerms = useCallback((text: string, searchQuery: string): string => {
        if (!searchQuery.trim()) return text;
        
        const regex = new RegExp(`(${searchQuery.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')})`, 'gi');
        return text.replace(regex, '<mark>$1</mark>');
    }, []);
    
    return (
        <div className="advanced-search">
            {/* Search input field */}
            <div className="search-input-section">
                <input
                    type="text"
                    value={query}
                    onChange={(e) => setQuery(e.target.value)}
                    placeholder={placeholder}
                    className="search-input"
                />
                
                {loading && <div className="search-spinner">Searching...</div>}
            </div>
            
            {/* Filter section */}
            <div className="search-filters">
                <select
                    value={filters.category || ''}
                    onChange={(e) => updateFilter('category', e.target.value || undefined)}
                >
                    <option value="">All Categories</option>
                    <option value="documents">Documents</option>
                    <option value="code">Code</option>
                    <option value="issues">Issues</option>
                    <option value="discussions">Discussions</option>
                </select>
                
                <select
                    value={filters.sortBy || 'relevance'}
                    onChange={(e) => updateFilter('sortBy', e.target.value as SearchFilters['sortBy'])}
                >
                    <option value="relevance">By Relevance</option>
                    <option value="date">By Date</option>
                    <option value="popularity">By Popularity</option>
                </select>
                
                <label>
                    <input
                        type="checkbox"
                        checked={filters.includeArchived || false}
                        onChange={(e) => updateFilter('includeArchived', e.target.checked)}
                    />
                    Include Archived
                </label>
            </div>
            
            {/* Error display */}
            {error && (
                <div className="search-error">
                    Error: {error}
                </div>
            )}
            
            {/* Search results display */}
            <div className="search-results">
                {results.length === 0 && !loading && query.trim() && (
                    <div className="no-results">
                        No search results found
                    </div>
                )}
                
                {results.map((result) => (
                    <div key={result.id} className="search-result-item">
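                        {/* NOTE: highlightSearchTerms only escapes regex metacharacters;
                            sanitize title/description before rendering via dangerouslySetInnerHTML
                            if search results can contain untrusted HTML */}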
                        <h3 
                            className="result-title"
                            dangerouslySetInnerHTML={{
                                __html: highlightSearchTerms(result.title, query)
                            }}
                        />
                        
                        <p 
                            className="result-description"
                            dangerouslySetInnerHTML={{
                                __html: highlightSearchTerms(result.description, query)
                            }}
                        />
                        
                        <div className="result-metadata">
                            <span className="result-category">{result.category}</span>
                            <span className="result-date">
                                {result.createdAt.toLocaleDateString('en-US')}
                            </span>
                            <span className="result-score">
                                Relevance: {(result.relevanceScore * 100).toFixed(1)}%
                            </span>
                        </div>
                        
                        <div className="result-tags">
                            {result.tags.map(tag => (
                                <span key={tag} className="result-tag">
                                    {tag}
                                </span>
                            ))}
                        </div>
                    </div>
                ))}
            </div>
            
            {/* Result count display */}
            {results.length > 0 && (
                <div className="search-summary">
                    {results.length} results
                    {results.length === maxResults && ' (partial display due to limit)'}
                </div>
            )}
        </div>
    );
};

export default AdvancedSearchComponent;

Custom Tool and Plugin Development

// Continue custom tool and plugin development example
// Organization-specific workflow automation

// Continue configuration file (config.json) definition
const customToolsConfig = {
    "experimental": {
        "modelRoles": {
            "architect": "gpt-4o",
            "security": "security-focused-model", 
            "reviewer": "code-review-specialist"
        }
    },
    
    "customCommands": [
        {
            "name": "create-feature",
            "description": "Create complete feature with tests and documentation",
            "prompt": "Create a complete feature implementation including:\n1. Core functionality\n2. Unit tests\n3. Integration tests\n4. API documentation\n5. User documentation\n\nFeature requirements:\n{{{ input }}}"
        }
    ],
    
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "analyze_codebase",
                "description": "Analyze codebase structure and dependencies",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {
                            "type": "string",
                            "description": "Path to analyze"
                        },
                        "depth": {
                            "type": "integer", 
                            "description": "Analysis depth (1-5)"
                        }
                    },
                    "required": ["path"]
                }
            }
        }
    ]
};

// Node.js custom tool implementation
class ContinueCustomTool {
    constructor(workspacePath) {
        this.workspacePath = workspacePath;
        this.logger = console;
    }
    
    // Codebase analysis tool
    async analyzeCodebase(path, depth = 3) {
        try {
            const fs = require('fs').promises;
            const pathModule = require('path');
            
            const analysis = {
                structure: {},
                dependencies: {},
                metrics: {},
                issues: []
            };
            
            // Directory structure analysis
            analysis.structure = await this.analyzeDirectoryStructure(
                pathModule.join(this.workspacePath, path), 
                depth
            );
            
            // Dependency analysis
            analysis.dependencies = await this.analyzeDependencies(path);
            
            // Code metrics calculation
            analysis.metrics = await this.calculateCodeMetrics(path);
            
            // Issue detection
            analysis.issues = await this.detectIssues(path);
            
            return analysis;
            
        } catch (error) {
            this.logger.error('Codebase analysis failed:', error);
            throw error;
        }
    }
    
    async analyzeDirectoryStructure(dirPath, maxDepth, currentDepth = 0) {
        const fs = require('fs').promises;
        const pathModule = require('path');
        
        if (currentDepth >= maxDepth) return null;
        
        try {
            const items = await fs.readdir(dirPath);
            const structure = {};
            
            for (const item of items) {
                const itemPath = pathModule.join(dirPath, item);
                const stats = await fs.stat(itemPath);
                
                if (stats.isDirectory()) {
                    // Skip excluded directories
                    if (['node_modules', '.git', 'dist', 'build'].includes(item)) {
                        continue;
                    }
                    
                    structure[item] = {
                        type: 'directory',
                        children: await this.analyzeDirectoryStructure(
                            itemPath, 
                            maxDepth, 
                            currentDepth + 1
                        )
                    };
                } else {
                    // File information
                    structure[item] = {
                        type: 'file',
                        size: stats.size,
                        extension: pathModule.extname(item),
                        modified: stats.mtime
                    };
                }
            }
            
            return structure;
            
        } catch (error) {
            this.logger.warn(`Failed to analyze directory ${dirPath}:`, error);
            return null;
        }
    }
    
    async analyzeDependencies(projectPath) {
        const fs = require('fs').promises;
        const pathModule = require('path');
        
        const dependencies = {
            direct: {},
            dev: {},
            peer: {},
            vulnerabilities: []
        };
        
        try {
            // package.json analysis
            const packageJsonPath = pathModule.join(this.workspacePath, projectPath, 'package.json');
            const packageJson = JSON.parse(await fs.readFile(packageJsonPath, 'utf8'));
            
            dependencies.direct = packageJson.dependencies || {};
            dependencies.dev = packageJson.devDependencies || {};
            dependencies.peer = packageJson.peerDependencies || {};
            
            // Vulnerability scan (npm audit equivalent)
            dependencies.vulnerabilities = await this.scanVulnerabilities(projectPath);
            
        } catch (error) {
            this.logger.warn('Dependency analysis failed:', error);
        }
        
        return dependencies;
    }
    
    async scanVulnerabilities(projectPath) {
        // Placeholder hook: in a real setup, shell out to `npm audit --json`
        // or call your SCA service here and normalize the findings
        return [];
    }
    
    async calculateCodeMetrics(projectPath) {
        const fs = require('fs').promises;
        const pathModule = require('path');
        
        const metrics = {
            totalFiles: 0,
            totalLines: 0,
            codeLines: 0,
            commentLines: 0,
            blankLines: 0,
            complexity: 0,
            duplicateBlocks: []
        };
        
        // TypeScript/JavaScript file analysis
        const jsFiles = await this.findFiles(
            pathModule.join(this.workspacePath, projectPath),
            /\.(ts|tsx|js|jsx)$/
        );
        
        for (const filePath of jsFiles) {
            try {
                const content = await fs.readFile(filePath, 'utf8');
                const lines = content.split('\n');
                
                metrics.totalFiles++;
                metrics.totalLines += lines.length;
                
                // Line classification
                for (const line of lines) {
                    const trimmed = line.trim();
                    if (!trimmed) {
                        metrics.blankLines++;
                    } else if (trimmed.startsWith('//') || trimmed.startsWith('/*')) {
                        metrics.commentLines++;
                    } else {
                        metrics.codeLines++;
                    }
                }
                
                // Cyclomatic complexity calculation (simplified)
                metrics.complexity += this.calculateComplexity(content);
                
            } catch (error) {
                this.logger.warn(`Failed to analyze file ${filePath}:`, error);
            }
        }
        
        return metrics;
    }
    
    calculateComplexity(code) {
        // Simple cyclomatic complexity calculation
        const complexityKeywords = [
            'if', 'else', 'while', 'for', 'switch', 'case', 
            'catch', 'throw', '&&', '||', '?'
        ];
        
        let complexity = 1; // Base value
        
        for (const keyword of complexityKeywords) {
            const regex = new RegExp(`\\b${keyword}\\b`, 'g');
            const matches = code.match(regex);
            if (matches) {
                complexity += matches.length;
            }
        }
        
        return complexity;
    }
    
    async detectIssues(projectPath) {
        const fs = require('fs').promises;
        const pathModule = require('path');
        const issues = [];
        
        // Common problem pattern detection
        const problemPatterns = [
            {
                name: 'Remaining console.log',
                pattern: /console\.log\(/g,
                severity: 'warning',
                description: 'console.log remains in production code'
            },
            {
                name: 'Hardcoded URLs',
                pattern: /https?:\/\/[^\s"']+/g,
                severity: 'info',
                description: 'Hardcoded URLs detected'
            },
            {
                name: 'TODO/FIXME comments',
                pattern: /(TODO|FIXME|HACK):/gi,
                severity: 'info',
                description: 'Incomplete tasks exist'
            }
        ];
        
        // File search and issue detection
        const codeFiles = await this.findFiles(
            pathModule.join(this.workspacePath, projectPath),
            /\.(ts|tsx|js|jsx|py|java|go|rb)$/
        );
        
        for (const filePath of codeFiles) {
            try {
                const content = await fs.readFile(filePath, 'utf8');
                
                for (const pattern of problemPatterns) {
                    const matches = content.match(pattern.pattern);
                    if (matches) {
                        issues.push({
                            file: filePath,
                            type: pattern.name,
                            severity: pattern.severity,
                            description: pattern.description,
                            occurrences: matches.length
                        });
                    }
                }
                
            } catch (error) {
                this.logger.warn(`Failed to scan file ${filePath}:`, error);
            }
        }
        
        return issues;
    }
    
    async findFiles(directory, pattern) {
        const fs = require('fs').promises;
        const pathModule = require('path');
        const files = [];
        
        async function searchRecursive(dir) {
            try {
                const items = await fs.readdir(dir);
                
                for (const item of items) {
                    const fullPath = pathModule.join(dir, item);
                    const stats = await fs.stat(fullPath);
                    
                    if (stats.isDirectory()) {
                        // Skip excluded directories
                        if (!['node_modules', '.git', 'dist', 'build'].includes(item)) {
                            await searchRecursive(fullPath);
                        }
                    } else if (pattern.test(item)) {
                        files.push(fullPath);
                    }
                }
            } catch (error) {
                // Ignore directory access errors
            }
        }
        
        await searchRecursive(directory);
        return files;
    }
}
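
// Example usage (hypothetical paths): analyze the src directory of the current workspace
//   const tool = new ContinueCustomTool(process.cwd());
//   tool.analyzeCodebase('src', 3)
//       .then(report => console.log(JSON.stringify(report.metrics, null, 2)));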

// Continue plugin registration
module.exports = {
    ContinueCustomTool,
    customToolsConfig
};

Enterprise Self-hosting Configuration

# docker-compose.yml
# Continue Enterprise self-hosting environment
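# Secrets such as DB_PASSWORD, REDIS_PASSWORD, JWT_SECRET, and ENCRYPTION_KEY are
# expected to be supplied via an .env file next to this compose file or a secret store.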

version: '3.8'

services:
  continue-server:
    image: continue/continue-server:enterprise-1.0
    container_name: continue-enterprise
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://continue:${DB_PASSWORD}@postgres:5432/continue_enterprise
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=${JWT_SECRET}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - CORS_ORIGINS=https://continue.company.com
    volumes:
      - ./config:/app/config
      - ./logs:/app/logs
      - ./models:/app/models
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    networks:
      - continue-network

  postgres:
    image: postgres:15-alpine
    container_name: continue-postgres
    environment:
      - POSTGRES_DB=continue_enterprise
      - POSTGRES_USER=continue
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    restart: unless-stopped
    networks:
      - continue-network

  redis:
    image: redis:7-alpine
    container_name: continue-redis
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    restart: unless-stopped
    networks:
      - continue-network

  nginx:
    image: nginx:alpine
    container_name: continue-nginx
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - continue-server
    restart: unless-stopped
    networks:
      - continue-network

volumes:
  postgres_data:
  redis_data:

networks:
  continue-network:
    driver: bridge

# config/enterprise.yaml
# Enterprise configuration file

enterprise:
  license_key: "${CONTINUE_ENTERPRISE_LICENSE}"
  organization: "Web Reference Corp"
  
security:
  data_retention: "zero_day"
  encryption:
    at_rest: true
    in_transit: true
    key_rotation_days: 90
  
  authentication:
    providers:
      - type: "saml"
        name: "Corporate SSO"
        metadata_url: "https://sso.company.com/metadata"
      - type: "ldap"
        host: "ldap.company.com"
        port: 389
        base_dn: "dc=company,dc=com"
  
  audit:
    enabled: true
    log_level: "detailed"
    retention_days: 365
    export_format: ["json", "csv"]

models:
  providers:
    - name: "OpenAI"
      type: "openai"
      api_key: "${OPENAI_API_KEY}"
      models: ["gpt-4o", "gpt-4-turbo"]
      
    - name: "Anthropic"
      type: "anthropic" 
      api_key: "${ANTHROPIC_API_KEY}"
      models: ["claude-3-5-sonnet-20241022"]
      
    - name: "Internal AI"
      type: "custom"
      endpoint: "https://ai.company.com/v1"
      api_key: "${INTERNAL_AI_KEY}"
      models: ["company-gpt-4"]

features:
  chat: true
  autocomplete: true
  edit: true
  agent: true
  custom_commands: true
  
permissions:
  admin_users:
    - "[email protected]"
    - "[email protected]"
  
  user_groups:
    developers:
      features: ["chat", "autocomplete", "edit"]
      rate_limits:
        requests_per_hour: 1000
        
    senior_developers:
      features: ["chat", "autocomplete", "edit", "agent"]
      rate_limits:
        requests_per_hour: 2000
        
    architects:
      features: ["chat", "autocomplete", "edit", "agent", "custom_commands"]
      rate_limits:
        requests_per_hour: 5000

monitoring:
  metrics:
    enabled: true
    prometheus: true
    endpoint: "/metrics"
  
  alerts:
    slack_webhook: "${SLACK_WEBHOOK_URL}"
    email_recipients: ["[email protected]"]
    
  health_checks:
    interval_seconds: 30
    timeout_seconds: 10