Documentation

Everything you need to get started with Companion and make the most of its powerful features.

📚 Quick Start Guide

Installation

  1. Download and install Ollama
  2. Clone the Companion repository
  3. Install Python dependencies: pip install -r requirements.txt
  4. Run the application: python main.py

First Chat

Once Companion is running:

  • Select a model from the dropdown
  • Type your message in the chat input
  • Press Enter or click Send
  • View the AI response in the chat window

⚙️ Configuration

Local Models (Ollama)

Companion automatically detects Ollama models. To add more models:

ollama pull <model-name>
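
If you want to confirm which models Companion will detect, you can query the standard Ollama HTTP API directly. A minimal Python sketch using the requests library, assuming Ollama is running on its default local endpoint (localhost:11434):

# list_ollama_models.py - minimal sketch; assumes Ollama's default local endpoint
import requests

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's "list local models" endpoint

resp = requests.get(OLLAMA_TAGS_URL, timeout=5)
resp.raise_for_status()

# Print each locally installed model name (these are the names Companion can use)
for model in resp.json().get("models", []):
    print(model["name"])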

Cloud Models

Cloud AI models are available in both Web Mode and Desktop Mode and provide access to the latest AI capabilities, including GPT-4, Gemini, Claude, and more.

🚀 Advanced Features

Keyboard Shortcuts

  • Enter - Send message
  • Ctrl+Enter - New line in message
  • Ctrl+K - Clear chat
  • Ctrl+S - Save conversation

Chat Management

  • Conversations are automatically saved
  • Use the Clear button to start fresh
  • Export conversations in multiple formats

🌐 AI Modes

Web Mode vs Desktop Mode

Companion offers two ways to use AI:

🌐 Web Mode (Cloud AI Only)

  • Use directly in browser - no download needed
  • Access to cloud AI models (GPT-4, Gemini, Claude)
  • Always up-to-date with latest features
  • Requires internet connection
  • Perfect for quick testing and mobile use

💻 Desktop Mode (Cloud + Local AI)

  • Download and install full application
  • Access to both cloud AND local AI models
  • Works offline with local models
  • Enhanced privacy and performance
  • Best for power users and regular use

Choosing the Right Mode

Choose Web Mode if you:

  • Want to try Companion quickly
  • Primarily use cloud AI models
  • Use multiple devices including mobile
  • Don't want to install software

Choose Desktop Mode if you:

  • Want offline AI capabilities
  • Value privacy and data control
  • Use AI regularly for work
  • Want to minimize cloud costs

📖 Read the full AI Modes documentation

🔧 Troubleshooting

Common Issues

  • Ollama not found: Make sure Ollama is installed and running (see the quick check after this list)
  • No models available: Pull models with ollama pull <model-name>
  • Cloud models not working: Check your internet connection
  • Connection issues: Verify firewall settings
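
If you are unsure whether the Ollama service itself is up, here is a quick reachability check in Python. It assumes Ollama's default local port 11434; adjust if you run Ollama elsewhere:

# check_ollama.py - quick reachability check; assumes Ollama's default port 11434
import requests

try:
    resp = requests.get("http://localhost:11434/", timeout=5)
    print("Ollama responded:", resp.text.strip())  # normally prints "Ollama is running"
except requests.exceptions.ConnectionError:
    print("Ollama is not reachable - make sure it is installed and running")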

Getting Help

Need more help? See the full AI Modes documentation above or the API Reference below.

📡 API Reference

Integrate Companion's AI capabilities into your applications with our comprehensive REST API.

🔌 RESTful API

Simple HTTP endpoints for conversation management and AI interactions

GET /api/docs returns the complete API documentation.

🤖 AI Models

Access to multiple advanced AI models with intelligent fallback:

  • GPT-4o - OpenAI's most advanced model
  • Claude 3.5 - Anthropic's reasoning powerhouse
  • DeepSeek R1 - Advanced reasoning capabilities
  • Perplexity - Web-connected research model
  • Gemini 2.5 Flash - Google's fast multimodal AI

🌟 Enhanced Capabilities

Built-in features for advanced AI interactions:

  • 🌐 Web Search - Real-time information from multiple search engines
  • 🧠 NLP Analysis - Natural language processing and understanding
  • 📈 Learning - Adaptive responses based on context
  • Intelligent Caching - Time-sensitive data management

📝 Quick Start Examples

Create & Manage Conversations

// Create a new conversation
POST /api/conversations
Content-Type: application/json

// Response
{
  "id": "uuid-string",
  "title": "New Chat",
  "created_at": "2024-08-14T18:30:00Z"
}

// Get all conversations
GET /api/conversations

// Delete a conversation
DELETE /api/conversations/{id}
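
The same flow in Python, using the requests library. The base URL is an assumption (point it at wherever your Companion instance is running), and the list response is assumed to be a plain JSON array:

# conversations_example.py - minimal sketch of the conversation endpoints
import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment; adjust to your instance

# Create a new conversation
conv = requests.post(f"{BASE_URL}/api/conversations", timeout=10).json()
print("Created:", conv["id"], conv["title"])

# List all conversations
for c in requests.get(f"{BASE_URL}/api/conversations", timeout=10).json():
    print(c["id"], c["title"])

# Delete the conversation we just created
requests.delete(f"{BASE_URL}/api/conversations/{conv['id']}", timeout=10)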

Send Messages & Get AI Responses

// Send a message
POST /api/conversations/{id}/messages
Content-Type: application/json

{
  "message": "Explain quantum computing",
  "active_tools": ["web", "think"]
}

// Response with AI answer
{
  "content": "Quantum computing is...",
  "thinking": "Step-by-step reasoning...",
  "web_sources": ["url1", "url2"]
}
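
A Python sketch of the same request. The base URL and conversation id are placeholders, and the timeout is generous because tool-assisted answers can take a while:

# send_message_example.py - minimal sketch; BASE_URL and conversation_id are placeholders
import requests

BASE_URL = "http://localhost:8000"       # assumed local deployment
conversation_id = "uuid-string"          # id returned by POST /api/conversations

payload = {
    "message": "Explain quantum computing",
    "active_tools": ["web", "think"],    # enable web search and step-by-step reasoning
}

resp = requests.post(
    f"{BASE_URL}/api/conversations/{conversation_id}/messages",
    json=payload,
    timeout=120,
)
resp.raise_for_status()
answer = resp.json()

print(answer["content"])                 # the AI's answer
print(answer.get("thinking"))            # reasoning trace, if the "think" tool was active
print(answer.get("web_sources"))         # source URLs, if the "web" tool was active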

AI Tools & Capabilities

// Available AI tools
{
  "web": "Real-time web search",
  "think": "Step-by-step reasoning", 
  "deepsearch": "Comprehensive research",
  "code": "Programming assistance",
  "deepthink": "Multi-model analysis"
}

// Health check
GET /api/health

// System statistics  
GET /api/stats
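
And a quick status check in Python, again assuming a local instance:

# status_example.py - minimal sketch; BASE_URL is an assumed local deployment
import requests

BASE_URL = "http://localhost:8000"

print("Health:", requests.get(f"{BASE_URL}/api/health", timeout=10).json())
print("Stats:", requests.get(f"{BASE_URL}/api/stats", timeout=10).json())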

🔗 API Actions