Local LLM Tools
Ollama
- Easy-to-use command-line tool and local server for running LLMs
- https://ollama.ai/
- Features:
- One-line model installation
- Multiple model support
- API access
- GPU acceleration
- Cross-platform (Mac, Windows, Linux)
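The API bullet above can be sketched with only the standard library. This assumes an Ollama server on its default port 11434 and that the example model `llama3.2` has already been pulled (e.g. with `ollama pull llama3.2`); the model name is illustrative, not prescriptive.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default REST endpoint

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = build_generate_request("llama3.2", "Why is the sky blue?")

# Uncomment when an Ollama server is actually running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```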
LM Studio
- Desktop application for running LLMs
- https://lmstudio.ai/
- Features:
- User-friendly GUI
- Model management
- Chat interface
- API compatibility with OpenAI
- Performance optimization
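Because the server speaks the OpenAI API, any OpenAI-style client works against it. A minimal standard-library sketch, assuming LM Studio's local server is enabled on its default port 1234 (the model name is a placeholder):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request body."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("local-model", "Summarize RAG in one sentence.")

# With the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request shape also works against other OpenAI-compatible servers listed here (LocalAI, Text Generation WebUI) by changing `BASE_URL` to the port that tool serves on.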
Text Generation WebUI
- Web interface for running LLMs
- https://github.com/oobabooga/text-generation-webui
- Features:
- Multiple model formats support
- Extension system
- Character creation
- Training interface
- API endpoints
GPT4All
- Ecosystem for running open-source LLMs
- https://gpt4all.io/
- Features:
- Desktop application
- Python/C++ bindings
- Cross-platform support
- Multiple model support
- Low hardware requirements
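The Python bindings mentioned above follow roughly this shape. This is a sketch assuming `pip install gpt4all`; the model file name is an example, and the bindings download it on first use. The import is guarded so the snippet degrades gracefully where the package is absent.

```python
# Guarded import: the gpt4all package may not be installed.
try:
    from gpt4all import GPT4All

    # Example model file; downloaded automatically on first use.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    with model.chat_session():
        reply = model.generate("Name three primary colors.", max_tokens=64)
    print(reply)
except ImportError:
    reply = None  # gpt4all not installed in this environment
```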
LocalAI
- Self-hosted, open-source drop-in replacement for the OpenAI API
- https://localai.io/
- Features:
- OpenAI API compatibility
- Multiple model support
- Docker support
- GPU acceleration
- Custom model loading
Verba
- Open-source RAG application by Weaviate
- https://github.com/weaviate/Verba
- Features:
- Document Q&A
- RAG capabilities
- Vector search
- Easy deployment
koboldcpp
- Lightweight, llama.cpp-based LLM runner
- https://github.com/LostRuins/koboldcpp
- Features:
- Low resource usage
- Multiple model formats
- Command-line launcher with built-in web UI
- Windows/Linux/macOS support
Additional Tools
Model Management
- HuggingFace Transformers CLI
- ModelScope
- FastChat
Hardware Optimization
- GGML tools
- llama.cpp
- AutoGPTQ
Considerations for Local LLM Setup
Hardware Requirements
- CPU vs. GPU inference trade-offs
- RAM/VRAM capacity (scales with model size and quantization level)
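A rough rule of thumb for sizing RAM/VRAM: weight memory is approximately parameter count times bytes per parameter, plus overhead for the KV cache and activations. The estimator below uses a loose ~20% overhead factor, which is an assumption for illustration, not a measured figure:

```python
def estimate_model_memory_gb(params_billions: float, bits_per_weight: int,
                             overhead: float = 0.2) -> float:
    """Rough memory footprint: weights plus a loose overhead factor."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 7B model at 4-bit quantization fits comfortably in 8 GB:
print(round(estimate_model_memory_gb(7, 4), 1))   # ~4.2 GB
# The same model at 16-bit (fp16) needs roughly 17 GB:
print(round(estimate_model_memory_gb(7, 16), 1))  # ~16.8 GB
```

This is why the quantization tools listed above (GGML/llama.cpp, AutoGPTQ) matter: dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4x.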