AgentOne Settings

Complete reference for configuring AgentOne to match your workflow

🔑 API Configuration

Configure your AI provider and model settings. AgentOne supports multiple providers with flexible configuration options.

Provider Selection

OpenAI GPT

✅ Best for:
  • Quick code completions
  • Creative problem solving
  • Wide ecosystem support
  • Function calling capabilities
Configuration:
{
  "provider": "openai",
  "apiKey": "sk-...",
  "model": "gpt-4-turbo",
  "baseURL": "https://api.openai.com/v1",
  "maxTokens": 4096,
  "temperature": 0.3
}
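A malformed config is a common source of silent startup failures. The sketch below is a hypothetical helper, not part of AgentOne: the required keys and the temperature range are illustrative assumptions, chosen to match the fields shown in the examples above.

```python
import json

# Illustrative rules only -- not AgentOne's actual validation logic.
REQUIRED_KEYS = {"provider", "apiKey", "model"}

def validate_provider_config(text: str) -> dict:
    """Parse a provider config and fail fast on obvious mistakes."""
    cfg = json.loads(text)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    # Most providers accept sampling temperatures in roughly [0.0, 2.0].
    temp = cfg.get("temperature", 0.3)
    if not 0.0 <= temp <= 2.0:
        raise ValueError(f"temperature {temp} outside [0.0, 2.0]")
    return cfg
```

Running this against a config file before launch surfaces typos (a missing `apiKey`, an out-of-range `temperature`) as a clear error instead of a failed API call later.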

Google Gemini

✅ Best for:
  • Multimodal tasks
  • Large context windows
  • Google Cloud integration
  • Cost-effective usage
Configuration:
{
  "provider": "google",
  "apiKey": "AIza...",
  "model": "gemini-pro",
  "projectId": "your-project-id",
  "maxTokens": 2048
}

Local Models

✅ Best for:
  • Privacy-sensitive projects
  • Offline development
  • Cost control
  • Custom fine-tuned models
Configuration:
{
  "provider": "ollama",
  "baseURL": "http://localhost:11434",
  "model": "codellama:13b",
  "temperature": 0.1
}
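A frequent stumbling block with local providers is a `baseURL` with or without a trailing slash. This small Python sketch (illustrative, not part of AgentOne) shows how a client can normalize the configured `baseURL` before joining it with an API path such as Ollama's `/api/generate`:

```python
from urllib.parse import urljoin

def api_endpoint(base_url: str, path: str) -> str:
    """Join a configured baseURL with an API path, tolerating
    trailing and leading slashes on either side."""
    return urljoin(base_url.rstrip("/") + "/", path.lstrip("/"))

# Resolve the generate endpoint for the Ollama config above
print(api_endpoint("http://localhost:11434", "api/generate"))
```

Either `http://localhost:11434` or `http://localhost:11434/` resolves to the same endpoint, so the config is forgiving of both forms.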

🎼 Maestro Mode Settings

Fine-tune AgentOne's advanced Maestro architecture for optimal performance on your projects.

🔬 Analysis Depth

Controls how thoroughly AgentOne analyzes your codebase before making changes.

Shallow (Fast)

Quick analysis focusing on the immediate context. Best for simple tasks and rapid prototyping.

Moderate (Balanced)

The default. Broader context analysis than shallow mode without deep mode's exhaustive passes, trading some coverage for speed.

Deep (Thorough)

Exhaustive analysis including architectural patterns, performance implications, and security considerations.

Setting: AgentOne.maestroMode.analysisDepth
Options: "shallow" | "moderate" | "deep"
Default: "moderate"
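Assuming AgentOne reads these keys from a JSON settings file (the file's exact name and location depend on your installation), enabling the deepest analysis pass looks like this:

```json
{
  "AgentOne.maestroMode.analysisDepth": "deep"
}
```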

Perfect Your AgentOne Setup

Configure AgentOne to match your exact workflow and preferences for maximum productivity.