Troubleshooting¶
Common issues and their solutions when using the AI Documentation Agent.
Quick Diagnosis¶
Symptom Checklist¶
Check this first:
- Ollama is running: ollama list
- Model is installed: ollama list | grep your-model
- Python version: python --version (need 3.8+)
- Dependencies installed: pip list | grep requests
- .env file exists and is configured
- Directory path is correct
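To run the whole checklist in one pass, here is a minimal shell sketch; the model name (llama2) and the requirements path are assumptions, so adjust them to your setup:
ollama list >/dev/null || echo "FAIL: Ollama not running (try: ollama serve)"
ollama list | grep -q llama2 || echo "FAIL: model not installed (try: ollama pull llama2:7b)"
python --version || echo "FAIL: Python not on PATH"
pip list 2>/dev/null | grep -q requests || echo "FAIL: dependencies missing"
[ -f .env ] || echo "FAIL: .env file missing"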
Common Issues¶
Installation Issues¶
Issue: Python Not Found¶
Symptom:
python: command not found
# or, on Windows:
'python' is not recognized as an internal or external command
Solutions:
Windows:
# Reinstall Python with "Add to PATH" checked
# Or add Python to PATH manually (note: setx truncates values longer
# than 1024 characters, so prefer the System Properties dialog for long PATHs)
setx PATH "%PATH%;C:\Python311;C:\Python311\Scripts"
Linux/macOS:
# Use python3
python3 run.py --help
# Or create alias
echo "alias python=python3" >> ~/.bashrc
source ~/.bashrc
Issue: Module Not Found¶
Symptom:
ModuleNotFoundError: No module named 'requests'
Solutions:
# Install dependencies
pip install -r config/requirements.txt
# Or specific packages (note: the PyPI package for "import dotenv" is python-dotenv)
pip install requests python-dotenv markdown pdfkit
# Verify installation
pip list | grep requests
Still not working?
# Use pip3
pip3 install -r config/requirements.txt
# Or use python -m pip
python -m pip install -r config/requirements.txt
# Check if using virtual environment
which python # Should point to venv if activated
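If you are not using a virtual environment yet, the standard venv workflow isolates the dependencies (the .venv name is just a convention):
python3 -m venv .venv
source .venv/bin/activate    # Windows: .venv\Scripts\activate
pip install -r config/requirements.txt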
Ollama Connection Issues¶
Issue: Cannot Connect to Ollama¶
Symptom:
ConnectionError: [Errno 111] Connection refused
Solutions:
1. Check if Ollama is running:
# Test Ollama
ollama list
# If not running, start it
ollama serve
# On Windows (if installed as service)
# Check Services app for "Ollama" service
2. Verify API URL:
# Check .env file
cat .env | grep OLLAMA_API_URL
# Should be:
OLLAMA_API_URL=http://localhost:11434/api/generate
# Test manually
curl http://localhost:11434/api/tags
3. Check firewall:
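The exact firewall commands vary by platform; a minimal sketch, assuming Ollama's default port 11434:
# Linux (ufw): allow the Ollama port
sudo ufw allow 11434
# Test whether the port is reachable at all (if netcat is installed)
nc -z localhost 11434 && echo "port open" || echo "port blocked"
# Windows: allow Ollama through Windows Defender Firewall
# (Settings -> Windows Security -> Firewall & network protection)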
Issue: Model Not Found¶
Symptom: The agent (or Ollama) reports that the configured model is not installed.
Solutions:
# List available models
ollama list
# Pull the model
ollama pull llama2:7b
# Verify installation
ollama list | grep llama2
# Update .env
MODEL_NAME=llama2:7b
Common models:
ollama pull llama2:7b # Fast, general
ollama pull mistral # Balanced
ollama pull codellama # Code-focused
ollama pull llama2:13b # High quality
API Timeout Issues¶
Issue: Request Timeout¶
Symptom: The run aborts with a timeout error after a long wait.
Solutions:
1. Increase timeout:
# In .env
API_TIMEOUT=600
2. Reduce file count:
python run.py --max-files 25
3. Use faster model:
# In .env
MODEL_NAME=llama2:7b # Faster than llama2:13b
# Or command line
python run.py --model llama2:7b
4. Reduce iterations:
File Discovery Issues¶
Issue: No Files Found¶
Symptom: The agent reports that no supported files were found in the target directory.
Solutions:
1. Check directory path:
# Use absolute path
python run.py --directory /absolute/path/to/project
# Or relative from project root
python run.py --directory ./my-project
# Verify directory exists
ls /path/to/project
2. Check file types (a quick counting sketch follows this list):
# List files in directory
ls -la my-project/
# Check if any supported files
find my-project -name "*.py" -o -name "*.js"
# Supported extensions:
# .py, .js, .ts, .tsx, .jsx, .java, .cs, .go, .php, .rb, .rs
# .c, .cpp, .h, .hpp, .html, .css, .scss, .sql, .sh
# .kt, .swift, .vue, .svelte, .xml, .gradle
3. Check ignored directories:
# Edit src/doc_generator.py if needed
IGNORED_DIRECTORIES = frozenset([
"node_modules", ".git", ".vscode", "__pycache__",
"dist", "build", "target"
# Add or remove as needed
])
4. Use verbose mode:
python run.py --verbose
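Following up on step 2, a quick way to count how many supported files the agent will actually see (extend the -name patterns as needed):
find my-project -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" -o -name "*.java" \) -not -path "*/node_modules/*" | wc -l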
Permission Issues¶
Issue: Permission Denied¶
Symptom:
PermissionError: [Errno 13] Permission denied
Solutions:
Linux/macOS:
# Check permissions
ls -la /path/to/project
# Fix permissions
chmod -R 755 /path/to/project
# Or run with sudo (not recommended)
sudo python run.py --directory /path/to/project
Windows:
# Run as Administrator
# Right-click Command Prompt → "Run as administrator"
# Or check folder permissions
# Right-click folder → Properties → Security
Output Issues¶
Issue: Output File Not Created¶
Symptom: The run finishes (or appears to), but nothing shows up in the output directory.
Solutions:
1. Check output directory:
ls -la output/
2. Specify output name (run python run.py --help for the exact flag):
3. Check for errors:
grep ERROR ai_agent.log
Issue: PDF Generation Fails¶
Symptom:
OSError: No wkhtmltopdf executable found
Solutions:
1. Install wkhtmltopdf:
Windows:
# Using Chocolatey
choco install wkhtmltopdf
# Or download from
# https://wkhtmltopdf.org/downloads.html
macOS:
# Using Homebrew
brew install --cask wkhtmltopdf
Linux:
# Debian/Ubuntu
sudo apt-get install wkhtmltopdf
2. Verify installation:
wkhtmltopdf --version
3. Use alternative format:
# Use HTML instead
python run.py --format html
# Convert HTML to PDF manually later
wkhtmltopdf output/docs.html output/docs.pdf
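To confirm wkhtmltopdf works at all before re-running the agent, a quick end-to-end check on a throwaway file:
echo "<h1>test</h1>" > test.html
wkhtmltopdf test.html test.pdf && echo "wkhtmltopdf OK"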
Quality Issues¶
Issue: Poor Documentation Quality¶
Symptom:
- Documentation lacks detail
- Missing sections
- Unclear explanations
- No examples
Solutions:
1. Increase iterations (exact option names vary; see the note after this list):
2. Use better model:
python run.py --model llama2:13b
3. Analyze more files:
python run.py --max-files 75
4. Specify project type:
5. Lower quality threshold (faster, but lower quality):
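The iteration, project-type, and quality-threshold options are project-specific, and this page does not spell out their names; the settings below are assumptions for illustration only, so consult python run.py --help for the real flags:
# In .env -- hypothetical setting names, not confirmed by this guide
MAX_ITERATIONS=5        # assumption: number of critique/refine passes
QUALITY_THRESHOLD=0.7   # assumption: lower values accept drafts sooner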
Issue: Documentation is Incomplete¶
Symptom:
- Missing deployment section
- No examples
- Incomplete API documentation
Solutions:
1. More iterations:
2. Increase timeout:
# In .env
API_TIMEOUT=600
3. Check critique logs:
# Run with verbose
python run.py --verbose
# Look for critique feedback
grep "Critique:" ai_agent.log
4. Ensure important files are analyzed:
# Check if README is included
python run.py --verbose | grep README
# Increase file limit if needed
python run.py --max-files 75
Performance Issues¶
Issue: Generation is Very Slow¶
Symptom:
- Takes > 15 minutes
- Each iteration takes very long
Solutions:
1. Use faster model:
python run.py --model llama2:7b
2. Reduce file count:
python run.py --max-files 20
3. Reduce iterations:
4. Check system resources:
free -h    # Linux
vm_stat    # macOS
ollama ps  # models currently loaded
5. Close other applications:
- LLMs need significant RAM and CPU
- Close browsers, IDEs, etc.
Configuration Issues¶
Issue: Environment Variables Not Loading¶
Symptom:
- Settings in .env are ignored
- Using default values
Solutions:
1. Check .env location:
# .env must be in the project root (the directory you run the tool from)
ls -la .env
2. Check .env syntax:
# Correct format (no spaces around =)
OLLAMA_API_URL=http://localhost:11434/api/generate
MODEL_NAME=llama2:7b
# Wrong format
OLLAMA_API_URL = http://localhost:11434/api/generate # No spaces!
3. Check for quotes:
# Don't use quotes unless needed
API_TIMEOUT=300     # Correct
API_TIMEOUT="300"   # Avoid: depending on the parser, the quotes can end up in the value
4. Verify loading:
# Test in Python
from dotenv import load_dotenv
import os
load_dotenv()
print(os.getenv('MODEL_NAME'))
# Should print your model name
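The same check as a shell one-liner, handy for quick debugging:
python -c "from dotenv import load_dotenv; import os; load_dotenv(); print(os.getenv('MODEL_NAME'))"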
Docker Issues¶
Issue: Docker Container Can't Connect to Ollama¶
Symptom: Connection refused when the container tries to reach Ollama via localhost.
Solutions:
Windows/macOS:
# Use host.docker.internal
docker run --rm \
-v "$(pwd):/workspace" \
-e OLLAMA_API_URL=http://host.docker.internal:11434/api/generate \
ai-doc-agent --directory /workspace
Linux:
# Use host network
docker run --rm --network host \
-v "$(pwd):/workspace" \
ai-doc-agent --directory /workspace
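To test connectivity from inside a container before running the agent, a sketch using the public curlimages/curl image (on Linux, add --network host and use localhost instead):
docker run --rm curlimages/curl -s http://host.docker.internal:11434/api/tags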
Issue: Volume Mount Issues¶
Symptom:
- Files not accessible in container
- Permission denied
Solutions:
# Windows (use forward slashes)
docker run -v "//c/Projects/myapp:/workspace" ...
# Linux/macOS (use absolute paths)
docker run -v "/home/user/project:/workspace" ...
# Check current directory mount
docker run -v "$(pwd):/workspace" ...
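A quick way to verify a mount before running the agent (alpine is a small public image):
docker run --rm -v "$(pwd):/workspace" alpine ls /workspace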
Error Messages Reference¶
Common Error Messages¶
| Error | Meaning | Solution |
|---|---|---|
| Connection refused | Ollama not running | ollama serve |
| Model not found | Model not installed | ollama pull model-name |
| Permission denied | File access issue | Check permissions |
| Timeout | Request too slow | Increase timeout |
| No files found | Directory issue | Check path |
| Module not found | Missing dependency | pip install -r config/requirements.txt |
Debug Mode¶
Enable Verbose Logging¶
# Command line
python run.py --verbose
# Check logs
tail -f ai_agent.log
# Search for errors
grep ERROR ai_agent.log
# Search for warnings
grep WARNING ai_agent.log
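To capture an entire failing run in one file, standard shell redirection works:
python run.py --verbose 2>&1 | tee debug_run.log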
Log File Analysis¶
# View full log
cat ai_agent.log
# Last 50 lines
tail -50 ai_agent.log
# Follow in real-time
tail -f ai_agent.log
# Search for specific issue
grep "timeout" ai_agent.log
grep "connection" ai_agent.log
Getting Help¶
Information to Provide¶
When asking for help, include:
- Error message (full traceback)
- Command used (exact command)
- Environment:
  - OS and version
  - Python version: python --version
  - Ollama version: ollama --version
  - Model: MODEL_NAME from .env
- Log file (relevant sections)
- Configuration (.env without secrets)
Example Bug Report¶
ERROR - Connection refused to Ollama API
ConnectionError: [Errno 111] Connection refused

Environment:
- OS: Ubuntu 22.04
- Python: 3.11.2
- Ollama: 0.1.14
- Model: llama2:7b

Configuration: [.env contents, without secrets]
Logs: [Attach ai_agent.log]
Advanced Troubleshooting¶
Network Issues¶
# Test Ollama connectivity
curl -X POST http://localhost:11434/api/generate \
-H "Content-Type: application/json" \
-d '{"model":"llama2:7b","prompt":"test"}'
# Check port
netstat -an | grep 11434
# Test that localhost resolves and responds
ping -c 1 localhost
Memory Issues¶
# Check available memory
free -h # Linux
vm_stat # macOS
# Ollama memory usage
ollama ps
# Close large models
ollama stop model-name
File System Issues¶
# Check disk space
df -h
# Check output directory
ls -la output/
# Test file creation
touch output/test.txt
rm output/test.txt
Prevention Tips¶
1. Regular Maintenance¶
# Update dependencies
pip install --upgrade -r config/requirements.txt
# Update Ollama
# Download latest from ollama.ai
# Update models
ollama pull llama2:7b
2. Test Before Important Runs¶
# Test with small project first
python run.py --directory ./examples --max-files 5
# Then run on actual project
python run.py --directory ./my-project
3. Use Version Control¶
# Save working configuration
cp .env .env.backup
# Track changes (make sure the backup contains no secrets first)
git add .env.backup
git commit -m "Working configuration"
Next Steps¶
- Sample Projects - Working examples
- Configuration - Setup guide
- Quick Start - Getting started
- Command Reference - All commands