# Telegram ChatBot with Ollama AI

A high-quality, production-ready Telegram chatbot powered by Ollama AI models. This bot provides natural conversation experiences using local AI models.
## 🎯 Features

- **Ollama Integration**: Uses the OllamaSharp library for efficient AI model communication
- **Multiple Model Support**: Automatically manages and switches between multiple AI models
- **Session Management**: Maintains conversation history for each chat
- **Command System**: Extensible command architecture for bot commands
- **Smart Retry Logic**: Exponential backoff with jitter for failed requests
- **Rate Limit Handling**: Automatic model switching on rate limits
- **Natural Conversation**: Configurable response delays for human-like interactions
- **Group Chat Support**: Works in both private and group conversations
- **Robust Logging**: Comprehensive logging with Serilog
## 📋 Prerequisites
- .NET 9.0 or later
- Ollama server running locally or remotely
- Telegram Bot Token (from @BotFather)
## 🚀 Getting Started
### 1. Install Ollama

Download and install Ollama from [ollama.ai](https://ollama.ai).
### 2. Pull an AI Model

```bash
ollama pull llama3
```
### 3. Configure the Bot

Edit `appsettings.json`:

```json
{
  "TelegramBot": {
    "BotToken": "YOUR_BOT_TOKEN_HERE"
  },
  "Ollama": {
    "Url": "http://localhost:11434",
    "MaxRetries": 3,
    "MaxTokens": 1000,
    "Temperature": 0.7,
    "ResponseDelay": {
      "IsEnabled": true,
      "MinDelayMs": 1000,
      "MaxDelayMs": 3000
    },
    "SystemPromptFilePath": "system-prompt.txt"
  }
}
```
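If you want to work with these settings in code, the `Ollama` section maps naturally onto a typed options class. The sketch below is illustrative only: the class and property names simply mirror the JSON keys and are assumptions, not the project's actual configuration types.

```csharp
// Illustrative options classes mirroring the "Ollama" section above.
// The project's real configuration models may be named differently.
public class ResponseDelayOptions
{
    public bool IsEnabled { get; set; } = true;
    public int MinDelayMs { get; set; } = 1000;
    public int MaxDelayMs { get; set; } = 3000;
}

public class OllamaOptions
{
    public string Url { get; set; } = "http://localhost:11434";
    public int MaxRetries { get; set; } = 3;
    public int MaxTokens { get; set; } = 1000;
    public double Temperature { get; set; } = 0.7;
    public ResponseDelayOptions ResponseDelay { get; set; } = new();
    public string SystemPromptFilePath { get; set; } = "system-prompt.txt";
}

// Hypothetical binding in Program.cs:
// builder.Services.Configure<OllamaOptions>(
//     builder.Configuration.GetSection("Ollama"));
```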
Edit `appsettings.Models.json` to configure your models:

```json
{
  "ModelConfigurations": [
    {
      "Name": "llama3",
      "MaxTokens": 2000,
      "Temperature": 0.8,
      "Description": "Llama 3 Model",
      "IsEnabled": true
    }
  ]
}
```
### 4. Customize System Prompt

Edit `system-prompt.txt` to define your bot's personality and behavior.
### 5. Run the Bot

```bash
cd ChatBot
dotnet run
```
## 🏗️ Architecture

### Core Services

- **AIService**: Handles AI model communication and text generation
- **ChatService**: Manages chat sessions and message history
- **ModelService**: Handles model selection and switching
- **TelegramBotService**: Main Telegram bot service
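With the generic host from Microsoft.Extensions.Hosting, these services would typically be wired up in `Program.cs` along the following lines. This is a sketch under assumptions: the real project may register interfaces rather than concrete types, and `TelegramBotService` may or may not run as a hosted background service.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Illustrative registrations; actual lifetimes and interfaces may differ.
builder.Services.AddSingleton<AIService>();
builder.Services.AddSingleton<ChatService>();
builder.Services.AddSingleton<ModelService>();
builder.Services.AddHostedService<TelegramBotService>();

await builder.Build().RunAsync();
```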
### Command System

Commands are automatically registered using attributes:

```csharp
[Command("start", "Start conversation with the bot")]
public class StartCommand : TelegramCommandBase
{
    // Implementation
}
```
Available commands:

- `/start` - Start conversation
- `/help` - Show help information
- `/clear` - Clear conversation history
- `/settings` - View current settings
## ⚙️ Configuration

### Ollama Settings

- `Url`: Ollama server URL
- `MaxRetries`: Maximum retry attempts for failed requests
- `MaxTokens`: Default maximum tokens for responses
- `Temperature`: AI creativity level (0.0 - 2.0)
- `ResponseDelay`: Adds human-like delays before responses
- `SystemPromptFilePath`: Path to the system prompt file
### Model Configuration

Each model can have custom settings:

- `Name`: Model name (must match the Ollama model name)
- `MaxTokens`: Maximum tokens for this model
- `Temperature`: Temperature for this model
- `Description`: Human-readable description
- `IsEnabled`: Whether the model is available for use
## 🔧 Advanced Features

### Automatic Model Switching

The bot automatically switches to alternative models when:

- Rate limits are encountered
- The current model becomes unavailable
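As an illustration, fallback selection might simply walk the enabled model list in order. The helper below is hypothetical (it assumes a `ModelConfiguration` type with the `Name` and `IsEnabled` properties listed under Model Configuration), not the project's actual `ModelService` code:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ModelFallback
{
    // Pick the next enabled model after the current one fails or is
    // rate-limited, wrapping around to the start of the list.
    public static ModelConfiguration? NextAvailable(
        IReadOnlyList<ModelConfiguration> models, string currentName)
    {
        var enabled = models.Where(m => m.IsEnabled).ToList();
        if (enabled.Count == 0) return null;

        // FindIndex returns -1 for an unknown model, so the fallback
        // then starts from the first enabled model.
        var index = enabled.FindIndex(m => m.Name == currentName);
        return enabled[(index + 1) % enabled.Count];
    }
}
```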
### Session Management
- Automatic session creation per chat
- Configurable message history length
- Old session cleanup (default: 24 hours)
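The 24-hour cleanup could look roughly like this. It is a hypothetical sketch: `ChatSession`, its `LastActivityUtc` property, and the dictionary-based store are assumptions, not the project's actual types.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SessionCleanup
{
    // Remove sessions that have been idle longer than maxAge
    // (e.g. TimeSpan.FromHours(24) for the default).
    public static void CleanupOldSessions(
        Dictionary<long, ChatSession> sessions, TimeSpan maxAge)
    {
        var cutoff = DateTime.UtcNow - maxAge;
        var stale = sessions
            .Where(kv => kv.Value.LastActivityUtc < cutoff)
            .Select(kv => kv.Key)
            .ToList(); // materialize before mutating the dictionary

        foreach (var chatId in stale)
        {
            sessions.Remove(chatId);
        }
    }
}
```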
### Error Handling
- Exponential backoff with jitter for retries
- Graceful degradation on failures
- Comprehensive error logging
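The retry behaviour described above can be sketched as follows; the helper name and delay constants are illustrative, not the project's actual implementation:

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    // Exponential backoff with jitter: the delay doubles per attempt
    // (1s, 2s, 4s, ...) plus up to 250 ms of random jitter, so that
    // concurrent retries do not hit the server in lockstep.
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> action, int maxRetries = 3)
    {
        var random = new Random();
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch when (attempt < maxRetries)
            {
                var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt))
                          + TimeSpan.FromMilliseconds(random.Next(0, 250));
                await Task.Delay(delay);
            }
        }
    }
}
```

After `maxRetries` failed attempts the final exception propagates to the caller, where it can be logged and surfaced as a graceful failure.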
## 📝 Development

### Project Structure

```
ChatBot/
├── Models/
│   ├── Configuration/      # Configuration models
│   │   └── Validators/     # Configuration validation
│   └── Dto/                # Data transfer objects
├── Services/
│   ├── Telegram/           # Telegram-specific services
│   │   ├── Commands/       # Bot commands
│   │   ├── Interfaces/     # Service interfaces
│   │   └── Services/       # Service implementations
│   ├── AIService.cs        # AI model communication
│   ├── ChatService.cs      # Chat session management
│   └── ModelService.cs     # Model management
└── Program.cs              # Application entry point
```
### Adding New Commands

1. Create a new class in `Services/Telegram/Commands/`
2. Inherit from `TelegramCommandBase`
3. Add the `[Command]` attribute
4. Implement the `ExecuteAsync` method
Example:

```csharp
[Command("mycommand", "Description of my command")]
public class MyCommand : TelegramCommandBase
{
    public override async Task ExecuteAsync(TelegramCommandContext context)
    {
        await context.MessageSender.SendTextMessageAsync(
            context.Message.Chat.Id,
            "Command executed!"
        );
    }
}
```
## 🐛 Troubleshooting

### Bot doesn't respond

- Check that the Ollama server is running: `ollama list`
- Verify the bot token in `appsettings.json`
- Check the logs in the `logs/` directory
### Model not found

- Pull the model: `ollama pull model-name`
- Verify that the model name matches the one in `appsettings.Models.json`
- Check model availability: `ollama list`
### Connection errors
- Verify Ollama URL in configuration
- Check firewall settings
- Ensure Ollama server is accessible
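A quick way to confirm the server is reachable is to query its model-listing endpoint directly (the same data `ollama list` shows); adjust the host and port to match your `Url` setting:

```bash
curl http://localhost:11434/api/tags
```

A JSON response listing your local models means the server is up and the URL is correct; a connection error points at the network or firewall rather than the bot.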
## 📦 Dependencies

- **OllamaSharp** (v5.4.7): Ollama API client
- **Telegram.Bot** (v22.7.2): Telegram Bot API
- **Serilog** (v4.3.0): Structured logging
- **Microsoft.Extensions.Hosting** (v9.0.10): Host infrastructure
## 📄 License

This project is licensed under the terms specified in `LICENSE.txt`.
## 🤝 Contributing
Contributions are welcome! Please ensure:
- Code follows existing patterns
- All tests pass
- Documentation is updated
- Commits are descriptive
## 🔮 Future Enhancements
- Multi-language support
- Voice message handling
- Image generation support
- User preferences persistence
- Advanced conversation analytics
- Custom model fine-tuning support
*Built with ❤️ using .NET 9.0 and Ollama*