Nowadays, having the right tools at your fingertips can make all the difference. If you’re looking for a powerful, user-friendly tool to build your own private ChatGPT, then AnythingLLM is for you. It is an open-source all-in-one platform developed by Mintplex Labs that allows you to transform any document or resource into a context-rich conversation partner with minimal setup. With over 25,000 stars on GitHub, AnythingLLM has quickly become a favorite among developers, educators, and researchers.
In this overview, I’ll guide you through the main features of AnythingLLM and how to get started with it. While it offers a wide range of capabilities, three stand out for me:
- Advanced Agent Capabilities: Agents are all the rage right now, and AnythingLLM supports them out of the box. What are agents, exactly? They’re LLM-powered workers that can perform tasks like scraping websites, summarizing documents, and even creating charts. With AnythingLLM, you can develop custom skills for your agents, whether you need a simple API call or something more complex.
- Privacy and Security: In today’s world, data privacy is more important than ever. AnythingLLM addresses this with a built-in vector database powered by LanceDB, and you can swap in other vector database providers if you prefer, giving you the flexibility to select the best option for your needs. By default, your data stays private and never leaves your local environment.
- Technical Capabilities and Flexibility: AnythingLLM doesn’t just work on one platform; it runs on Mac, Windows, and Linux. You can integrate it with various LLM providers, work with multiple document formats (like PDF, TXT, and DOCX), and even use Docker for scalable deployments. This flexibility makes it a versatile addition to any tech stack.
Getting Started with AnythingLLM in 4 Simple Steps
AnythingLLM provides two ways to get started, each catering to different needs:
Choose AnythingLLM Desktop if:
- You want a one-click installation for local LLMs and agents.
- Multi-user support isn’t a priority for you.
- You prefer to keep everything on your device without needing to publish anything online.
Opt for AnythingLLM Docker if:
- You need a server-based service for shared access.
- You want to invite multiple users to your instance.
- You plan to publish chat widgets to the internet and need browser access. (A minimal `docker run` sketch follows this list.)
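If you do go the Docker route, the quick start boils down to running the official image with a persistent storage volume. Here is a minimal sketch, assuming the image name mintplexlabs/anythingllm and the default port 3001 used in the project’s documentation; check the official Docker guide for the current flags:

```bash
# Pick a folder on the host to persist AnythingLLM's data between restarts
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

# Run the official image and expose the web UI on http://localhost:3001
docker run -d -p 3001:3001 \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -v "$STORAGE_LOCATION/.env:/app/server/.env" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```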
For this tutorial, we’ll focus on the AnythingLLM Desktop version.
Recommended configuration:
Here’s what you’ll need to run AnythingLLM comfortably:
- RAM: 2GB
- CPU: any 2-core CPU
- Storage: 5GB
Step 1: Download and Install AnythingLLM Desktop
To kick things off, head over to the AnythingLLM download page and grab the version that’s right for your operating system (macOS, Windows, or Linux).
For Mac users, make sure to download the correct `.dmg` file:
- For Apple Silicon (M1/M2/M3): AnythingLLMDesktop-AppleSilicon.dmg
- For Intel-based Macs: AnythingLLMDesktop.dmg
Just open the `.dmg` file and drag the AnythingLLM logo into your Applications folder. Alternatively, you can install it using Homebrew by running this command in your terminal:
brew install --cask anythingllm
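Once the install finishes, you can launch the app from your Applications folder or straight from the terminal (assuming the app is installed under the name AnythingLLM):

```bash
# Open the desktop app from the command line on macOS
open -a AnythingLLM
```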
Step 2: Select Your LLM Preference
Once you have AnythingLLM installed, launch the app and select your LLM Provider. You can choose between AnythingLLM and Ollama (I recommend AnythingLLM for this tutorial). After that, pick a model (for instance, I chose Phi-2, a 2.7B model by Microsoft) and hit Save changes. The software will automatically download the model and set everything up for you.
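If you pick Ollama instead, AnythingLLM connects to a locally running Ollama server rather than downloading the model itself, so you’ll need Ollama installed and at least one model pulled beforehand. A quick sketch (the model name here is just an example):

```bash
# Pull a model into your local Ollama library (any model from the Ollama registry works)
ollama pull phi

# Ollama serves its API on http://localhost:11434 by default;
# point AnythingLLM's Ollama provider settings at that address.
```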
Step 3: Create Your Workspace
AnythingLLM organizes your documents into what it calls workspaces. A workspace is like a conversation thread that keeps its documents containerized. You can share documents between workspaces, but the workspaces won’t interfere with each other, helping you maintain a clean context.
Step 4: Upload Documents or Start Chatting
Now comes the fun part! You can upload documents to your workspace or jump right into chatting with your selected model. For example, upload the Meta Responsible Use Guide to see how AnythingLLM handles your questions. You can download this PDF here. It’s that simple!
What’s Next for AnythingLLM?
The team at Mintplex Labs is always working to improve AnythingLLM, with exciting new features on the horizon, such as workspace sharing, file editing, and image generation. You can keep an eye on their progress in the AnythingLLM Roadmap section of the documentation.
Kanwal Mehreen
Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.