Neural Chatbot is a production-ready, intelligent AI chatbot built with TensorFlow/Keras, featuring advanced neural networks, a modern web interface, and comprehensive training capabilities, completely self-hosted and customizable.
- 100% self-hosted & customizable
- Neural network-powered intent classification
- Modern web interface with real-time chat
- Production-ready with Docker & REST API
- Advanced training with visualization
- Extensible architecture
- Neural Network Intelligence: Multi-layer perceptron with dropout and batch normalization
- Intent Classification: Advanced NLP preprocessing with confidence scoring
- Modern Web UI: Responsive chat interface with session management
- REST API: Comprehensive API with OpenAPI documentation
- Production Ready: Docker deployment, health monitoring, and logging
- Training Pipeline: Advanced training with visualization and evaluation
- Conversation Management: User-specific history and analytics
- Data Augmentation: Automatic pattern expansion for better accuracy
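For orientation, here is a minimal sketch of the kind of intent classifier described above: a Keras multi-layer perceptron with dropout and batch normalization over bag-of-words features. The layer sizes mirror the `hidden_layers: [128, 64, 32]` default shown later in config.yaml; this is an illustration, not the project's actual model.py.

```python
# Hedged sketch of an MLP intent classifier with dropout and batch normalization.
from tensorflow import keras
from tensorflow.keras import layers

def build_intent_model(vocab_size: int, num_intents: int) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(vocab_size,)),                 # bag-of-words input vector
        layers.Dense(128, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu"),
        layers.Dense(num_intents, activation="softmax"),  # one probability per intent tag
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```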
# Clone the repository
git clone https://github.com/PawanRamaMali/Chatbot.git
cd Chatbot

# Install the package
pip install -e .

# Train the model
neural-chatbot train --plot
# Interactive chat
neural-chatbot chat
# Or start web server
neural-chatbot serve
# Send a message
curl -X POST 'http://localhost:5000/api/chat' \
-H 'Content-Type: application/json' \
-d '{"message": "Hello, how are you?"}'
# Check status
curl http://localhost:5000/api/status
# Get conversation history
curl http://localhost:5000/api/conversation
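The same endpoints can be called from Python as well; a small example using the requests library, with field names following the request/response schema documented in the API section below:

```python
# Minimal Python client for the REST API (assumes the server is running on localhost:5000).
import requests

BASE_URL = "http://localhost:5000"

# Send a message and read the structured reply
reply = requests.post(f"{BASE_URL}/api/chat",
                      json={"message": "Hello, how are you?"}).json()
print(reply["response"], reply["intent"], reply["confidence"])

# Check status and fetch conversation history
status = requests.get(f"{BASE_URL}/api/status").json()
history = requests.get(f"{BASE_URL}/api/conversation").json()
```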
# Build development version
docker build --target development -t neural-chatbot:dev .
# Build production version
docker build --target production -t neural-chatbot:prod .
# Run development version
docker run -p 5000:5000 neural-chatbot:dev
# Run production version
docker run -p 5000:5000 neural-chatbot:prod
# Or use docker-compose
docker-compose up -d
# Quick start with Docker Compose
docker-compose up -d
# Check logs
docker-compose logs -f chatbot
# Production with monitoring
docker-compose --profile monitoring up -d
# Scale for production
docker-compose up -d --scale chatbot=3
Access the services:
- Chatbot API: http://localhost:5000
- Web Interface: http://localhost:5000
- Grafana (monitoring): http://localhost:3000
- Prometheus (metrics): http://localhost:9090
Chatbot/
├── src/neural_chatbot/        # Main package
│   ├── core/                  # Core components
│   │   ├── chatbot.py         # Main chatbot orchestrator
│   │   ├── model.py           # Neural network model
│   │   ├── processor.py       # Data preprocessing
│   │   └── trainer.py         # Training pipeline
│   ├── api/                   # REST API
│   │   ├── app.py             # Flask application
│   │   ├── routes.py          # API endpoints
│   │   └── middleware.py      # Request middleware
│   ├── config/                # Configuration
│   │   ├── settings.py        # Settings management
│   │   └── config.yaml        # Default configuration
│   ├── utils/                 # Utilities
│   ├── web/                   # Web interface
│   │   ├── templates/         # HTML templates
│   │   └── static/            # CSS/JS assets
│   └── data/                  # Training data
│       └── intents.json       # Intent patterns
├── tests/                     # Test suite
├── deployment/                # Deployment configs
├── docker-compose.yml
├── Dockerfile
└── README.md
Get API information and available endpoints
Health check endpoint with component status
Detailed chatbot status and statistics
Send message to chatbot and get response
Body:
{
  "message": "Hello, how are you?",
  "user_id": "optional-user-id",
  "context": {}
}
Response:
{
  "response": "Hello! I'm doing great, thank you for asking!",
  "intent": "greeting",
  "confidence": 0.95,
  "timestamp": "2024-01-15T10:30:00",
  "user_id": "user-123"
}
Get conversation history for current user
Clear conversation history for current user
Train or retrain the chatbot model
Body:
{
  "intents_data": {
    "intents": [
      {
        "tag": "greeting",
        "patterns": ["hi", "hello", "hey"],
        "responses": ["Hello!", "Hi there!"]
      }
    ]
  },
  "retrain": true
}
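As an illustration, the same training request can be sent from Python. Note that the exact endpoint path is not spelled out above; `/api/train` is an assumption here, following the `/api/` prefix used by the other endpoints.

```python
# Hedged example: retrain via the REST API.
# The endpoint path (/api/train) is assumed, not confirmed by the docs above.
import requests

payload = {
    "intents_data": {
        "intents": [
            {
                "tag": "greeting",
                "patterns": ["hi", "hello", "hey"],
                "responses": ["Hello!", "Hi there!"],
            }
        ]
    },
    "retrain": True,
}
result = requests.post("http://localhost:5000/api/train", json=payload)
print(result.status_code, result.json())
```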
Predict intent for a message without storing conversation
Body:
{
  "message": "What's the weather like?"
}
Evaluate model performance
API documentation
{
  "intents": [
    {
      "tag": "greeting",
      "patterns": [
        "Hi", "Hello", "Good morning", "Hey there",
        "What's up", "How are you"
      ],
      "responses": [
        "Hello! How can I help you today?",
        "Hi there! What can I do for you?",
        "Good to see you! How may I assist you?"
      ],
      "context_set": "greeting"
    }
  ]
}
- Greetings: Hello, hi, good morning
- Goodbyes: Bye, see you later, farewell
- Questions: What, how, why, when
- Commands: Show, tell me, help
- Small Talk: Weather, time, jokes
- Business Logic: Custom domain-specific intents
- Edit src/neural_chatbot/data/intents.json
- Add a new intent with patterns and responses
- Retrain: neural-chatbot train
- Test: neural-chatbot chat
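If you prefer to script that edit, a minimal sketch that appends a new intent to intents.json before retraining (the file path matches the project structure above; the "opening_hours" intent is only an example, not shipped with the project):

```python
# Sketch: add a new intent to the training data, then retrain with the CLI.
import json
from pathlib import Path

intents_path = Path("src/neural_chatbot/data/intents.json")
data = json.loads(intents_path.read_text())

data["intents"].append({
    "tag": "opening_hours",  # hypothetical example intent
    "patterns": ["When are you open?", "What are your hours?"],
    "responses": ["We are open 9am-5pm, Monday to Friday."],
})

intents_path.write_text(json.dumps(data, indent=2))
# Then retrain: neural-chatbot train
```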
# Train with default settings
neural-chatbot train
# Train with visualization
neural-chatbot train --plot
# Advanced training
neural-chatbot train --epochs 300 --batch-size 16 --learning-rate 0.0005 --plot --evaluate
# Evaluate current model
neural-chatbot evaluate --save-report
# Get detailed statistics
neural-chatbot status --detailed
- Data Augmentation: Automatic pattern expansion
- Early Stopping: Prevents overfitting
- Learning Rate Scheduling: Adaptive learning rates
- Validation Split: Model performance monitoring
- Visualization: Training plots and confusion matrices
- Comprehensive Metrics: Accuracy, precision, recall, F1-score
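A hedged sketch of how early stopping, learning-rate scheduling, and a validation split are typically wired together in Keras; it mirrors the features listed above rather than the project's trainer.py, and uses placeholder data in place of the real bag-of-words features.

```python
# Sketch of the training features listed above, using standard Keras callbacks.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(100, 50).astype("float32")                  # placeholder bag-of-words vectors
y = keras.utils.to_categorical(np.random.randint(0, 5, 100))   # placeholder one-hot intent labels

model = keras.Sequential([
    keras.Input(shape=(50,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                  restore_best_weights=True),   # early stopping
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                      patience=10),             # adaptive learning rate
]

model.fit(X, y, epochs=200, batch_size=8,
          validation_split=0.2,   # held-out data for performance monitoring
          callbacks=callbacks)
```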
# Application
NEURAL_CHATBOT_SECRET_KEY=your-secret-key
NEURAL_CHATBOT_DEBUG=false
NEURAL_CHATBOT_LOG_LEVEL=INFO
# API Configuration
NEURAL_CHATBOT_HOST=0.0.0.0
NEURAL_CHATBOT_PORT=5000
# Model Configuration
NEURAL_CHATBOT_LEARNING_RATE=0.001
NEURAL_CHATBOT_EPOCHS=200
NEURAL_CHATBOT_BATCH_SIZE=8
NEURAL_CHATBOT_CONFIDENCE_THRESHOLD=0.25
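These are plain environment variables, so they can be read anywhere with standard library calls; a small sketch with fallbacks matching the values listed above (how settings.py actually consumes them is not shown here):

```python
# Sketch: read NEURAL_CHATBOT_* settings from the environment with fallbacks.
import os

host = os.environ.get("NEURAL_CHATBOT_HOST", "0.0.0.0")
port = int(os.environ.get("NEURAL_CHATBOT_PORT", "5000"))
debug = os.environ.get("NEURAL_CHATBOT_DEBUG", "false").lower() == "true"
confidence_threshold = float(os.environ.get("NEURAL_CHATBOT_CONFIDENCE_THRESHOLD", "0.25"))
```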
# config.yaml
model:
  learning_rate: 0.001
  epochs: 200
  batch_size: 8
  dropout_rate: 0.5
  confidence_threshold: 0.25
  hidden_layers: [128, 64, 32]

data:
  intents_file: "intents.json"
  max_response_length: 500

api:
  host: "0.0.0.0"
  port: 5000
  debug: false
  cors_enabled: true

logging:
  level: "INFO"
  file: "logs/chatbot.log"
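For reference, a hedged sketch of loading this file with PyYAML; it only shows how the YAML maps onto Python values and is not the project's settings.py.

```python
# Sketch: load config.yaml and pull out a few settings (requires PyYAML).
import yaml

with open("src/neural_chatbot/config/config.yaml") as f:
    config = yaml.safe_load(f)

learning_rate = config["model"]["learning_rate"]   # 0.001
hidden_layers = config["model"]["hidden_layers"]   # [128, 64, 32]
api_port = config["api"]["port"]                   # 5000
log_file = config["logging"]["file"]               # "logs/chatbot.log"
```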
# Interactive chat
neural-chatbot chat
# Chat with specific user ID
neural-chatbot chat --user-id my-user-123
# Basic training
neural-chatbot train
# Custom training
neural-chatbot train --intents data/custom_intents.json --epochs 500 --plot
# Training with evaluation
neural-chatbot train --evaluate
# Development server
neural-chatbot serve --debug
# Production server
neural-chatbot serve --host 0.0.0.0 --port 8080 --workers 4
# Check status
neural-chatbot status --detailed
# Export model
neural-chatbot export --type model --output model.json
# Export conversations
neural-chatbot export --type conversations --output chat_history.json
- Batch Normalization: Stable training
- Dropout Regularization: Prevents overfitting
- Early Stopping: Optimal training duration
- Learning Rate Scheduling: Adaptive learning
- Gunicorn: WSGI server with worker processes
- Redis Caching: Session and response caching
- Nginx: Reverse proxy and load balancing
- Health Checks: Automatic failover
# Monitor memory usage
docker stats neural-chatbot
# Scale horizontally
docker-compose up -d --scale chatbot=5
- Response times and latency
- Intent prediction accuracy
- User engagement statistics
- Conversation flow analysis
- Model confidence distributions
# Application health
curl http://localhost:5000/health
# Detailed statistics
curl http://localhost:5000/api/statistics
- Intent distribution analysis
- User engagement patterns
- Response confidence tracking
- Performance bottlenecks
- Message length limits
- HTML escaping and sanitization
- SQL injection prevention
- Rate limiting
- CORS configuration
- Request size limits
- Session management
- Error message sanitization
- Non-root Docker containers
- Secret management
- HTTPS enforcement
- Security headers
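Two of the input-validation measures above (message length limits and HTML escaping) can be illustrated with a small Flask hook. This is a hedged sketch, not the project's actual middleware.py, and the 500-character limit is an assumption borrowed from max_response_length in config.yaml.

```python
# Sketch: enforce a message length limit and escape HTML before rendering user input.
from flask import Flask, request, abort
from markupsafe import escape

MAX_MESSAGE_LENGTH = 500  # assumed limit, mirrors max_response_length in config.yaml

app = Flask(__name__)

@app.before_request
def validate_chat_input():
    if request.path == "/api/chat" and request.method == "POST":
        payload = request.get_json(silent=True) or {}
        message = payload.get("message", "")
        if not message or len(message) > MAX_MESSAGE_LENGTH:
            abort(400)  # reject empty or oversized messages

def sanitize(text: str) -> str:
    """Escape HTML so user input can be shown safely in the web UI."""
    return str(escape(text))
```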
- Model Not Training
  - Check intents.json format
  - Verify sufficient training data
  - Check memory availability
- Low Accuracy
  - Add more training patterns
  - Increase training epochs
  - Adjust confidence threshold
- API Errors
  - Check if model is trained
  - Verify input format
  - Check logs for details
# Verbose logging
neural-chatbot --verbose train
# Check model status
neural-chatbot status
# View logs
tail -f logs/chatbot.log
- Neural network-based intent classification
- Modern web interface
- REST API with documentation
- Docker deployment
- Training visualization
- Conversation management
- Multi-language support
- Voice interface
- Integration with external APIs
- Advanced NLU with transformers
- Sentiment analysis
- Custom entity recognition
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Install development dependencies (pip install -e ".[dev]")
- Run tests (pytest)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
# Clone and install
git clone https://github.com/PawanRamaMali/Chatbot.git
cd Chatbot
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Run tests
pytest
# Format code
black src/ tests/
isort src/ tests/
MIT License. See LICENSE for details.
Built with ❤️ by Pawan Rama Mali
- TensorFlow/Keras - Deep learning framework
- NLTK - Natural language processing
- Flask - Web framework
- scikit-learn - Machine learning utilities
- Docker - Containerization platform
Neural Chatbot - Making intelligent conversational AI accessible and customizable for everyone.