Rapid development is happening all around us, and one of the most interesting aspects of this evolution is artificial intelligence's ability to communicate with humans through natural language. Suppose you want to talk to an LLM running on your computer without switching between applications or windows, using nothing more than a voice hotkey. That is exactly what AlwaysReddy, an open-source voice assistant, lets you do. It works with a variety of LLM servers and lets you speak to the model directly, whether you want to draft an important email, write code for a project, or learn a new programming concept.
Key Features of AlwaysReddy
AlwaysReddy is a powerful yet minimalist program with several notable features. The first is hotkey-driven interaction: there is no graphical user interface (GUI) to open. Instead, you speak to the language model using keyboard shortcuts. For example, to start dictating your thoughts, you press Ctrl + Alt + R; pressing it again signals the assistant to stop listening and transcribe what you said.
Another feature is clipboard integration. If you want to give the AI some context about whatever you're working on, copy the relevant text to your clipboard and then double-tap Ctrl + Alt + R. AlwaysReddy will then use that clipboard content in its response.
You can run AlwaysReddy on your personal computer quite simply and straightforwardly, ensuring that your data remains private and under your control. On top of that, the app gives you a choice of several different LLM servers to connect to, including OpenAI, Anthropic, and Together AI, so it can suit a range of operational requirements.
Most importantly, AlwaysReddy runs locally on any Windows, Mac, or Linux machine, giving it a genuine cross-platform advantage.
Setup and Configuration
Getting started with AlwaysReddy is easy. First, you clone the repository from GitHub. After that, you run a setup script that installs the necessary dependencies, and then you configure the application. Let's go through the installation process step by step.
Prerequisites
Prior to jumping into the installation process, confirm that you have the following on your machine:
- Git: Needed to clone the repository.
- Python (version 3.8 or higher): Required to run the application.
- pip: Python’s package installer (usually bundled with Python installations).
- A virtual environment tool: Either the venv that comes with Python or virtualenv, which is also quite popular.
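If you are unsure whether these are already installed, you can check from a terminal; each command prints a version number if the tool is available (on macOS or Linux you may need python3 and pip3 instead):
git --version
python --version
pip --version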
Now we’ll guide you through setting up AlwaysReddy on your system!
Step-by-Step Guide to Using AlwaysReddy
Step 1: Clone the Repository
To get started, you first need to copy the AlwaysReddy code from GitHub using the git clone command. This creates a local copy of the project on your own computer.
git clone https://github.com/ILikeAI/AlwaysReddy
This command will create a directory named AlwaysReddy in your current working directory, containing all the necessary files.
Cloning into 'AlwaysReddy'..
remote: Enumerating objects: 1601, done.
remote: Counting objects: 100% (593/593), done.
remote: Compressing objects: 100% (240/240), done.
remote: Total 1601 (delta 416), reused 465 (delta 353), pack-reused 1008
Receiving objects: 100% (1601/1601), 133.22 MiB | 10.08 MiB/s, done.
Resolving deltas: 100% (769/769), done.
Updating files: 100% (53/53), done.
Step 2: Navigate to the Project Directory
After cloning the AlwaysReddy repository, navigate into the project directory, which is called AlwaysReddy, by running the following command in the terminal:
cd AlwaysReddy
Step 3: Run the Setup Script
The next step is to run the setup script. In the terminal, run:
py setup.py
(On macOS or Linux, where the py launcher is not available, use python setup.py or python3 setup.py instead.)
This creates a virtual environment for the project and automatically installs the required libraries. The virtual environment isolates the project and ensures that its dependencies are separate from those of other projects.
Then it will prompt you and ask the following:
[+] Successfully installed dependencies from requirements.txt.
[?] Do you want to install extra libraries for:
1. Faster Whisper (local Transcription)
2. Transformer Whisper (local Transcription)
3. Skip
[>] Enter your choice (1/2/3):
You can choose either 1 or 2 if you don't already have a transcription library installed. For this tutorial, I chose option 1, Faster Whisper.
After that, you'll see the following prompt in the terminal window:
[+] Successfully installed dependencies from faster_whisper_requirements.txt.
[+] Copied config_default.py to config.py
[+] Copied .env.example to .env
[!] Please open .env and enter your API keys
[?] Do you want to install Piper local TTS? (y/n): y
Piper is a local text-to-speech engine that converts the LLM's response into speech and plays it through your speakers. Press y and proceed.
After that, you'll get another prompt asking whether you want AlwaysReddy to run automatically every time you start your computer. That depends on how you plan to use it; for this tutorial, I chose No.
Piper TTS setup completed successfully.
[+] Created run file for Windows
[?] Do you want to add AlwaysReddy to startup? (y/n): n
Finally, the setup is complete:
[!] Skipping adding AlwaysReddy to startup.
Setup Complete
Step 4: Activate Virtual Environment
Next, activate the virtual environment that the setup script created. On Windows, you run the activation script inside the environment folder. On macOS or Linux, you create the environment first (if the setup script did not already do so) and then activate it by sourcing the activation script. Example commands are shown below.
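These commands assume the environment folder is named venv; if your setup created it under a different name, substitute that name. On Windows (Command Prompt), run:
venv\Scripts\activate

On macOS or Linux, create the environment if needed and then activate it:
python3 -m venv venv
source venv/bin/activate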
Once the virtual environment is active, its name appears in parentheses at the beginning of the command line prompt.
Step 5: Configuring Environment and API Keys
The configuration lives in two files: config.py and .env. These hold the project's settings and your API keys. The setup script has already copied .env.example to .env (as shown in the output above), so open .env in a text editor, add the relevant keys, and save the file. For this tutorial, I will be using an OpenAI model, so I added an OpenAI key to the .env file.
To get your OpenAI key, create an account with OpenAI and generate an API token from the API keys page of your account.
Once you have the OpenAI API token, just add it to the .env file. My .env file looked like this:
# TOGETHER_API_KEY=""
OPENAI_API_KEY="sk-..."
# ANTHROPIC_API_KEY="sk-.."
# PERPLEXITY_API_KEY="pplx-.."
# OPENROUTER_API_KEY="sk-or..."
# GROQ_API_KEY="gsk_..."
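If you are wondering how these values are used, files like .env are typically loaded into environment variables when the application starts. The snippet below is only a minimal sketch of that pattern using the python-dotenv package (AlwaysReddy's actual loading code may differ); it is not part of the setup steps:
import os
from dotenv import load_dotenv  # provided by the python-dotenv package

load_dotenv()  # read key=value pairs from .env into the process environment
print("OpenAI key found:", bool(os.getenv("OPENAI_API_KEY")))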
Step 6: Specifying Models in Config File
Next, you need to specify the models you'll use in the config.py file. I used the gpt-4o model with local Whisper transcription. My config settings looked like this:
### COMPLETIONS API SETTINGS ###
## OPENAI COMPLETIONS API EXAMPLE ##
COMPLETIONS_API = "openai"
COMPLETION_MODEL = "gpt-4o"
### Transcription API Settings ###
## Faster Whisper local transcription ###
TRANSCRIPTION_API = "FasterWhisper" # this will use the local whisper model
WHISPER_MODEL = "tiny.en" # If you prefer not to use english set it to "tiny", if the transcription quality is too low then set it to "base" but this will be a little slower
BEAM_SIZE = 5
### Piper TTS SETTINGS ###
TTS_ENGINE="piper"
PIPER_VOICE = "default_female_voice" # You can add more voices to the piper_tts/voices folder
PIPER_VOICE_INDEX = 0 # For multi-voice models, select the index of the voice you want to use
PIPER_VOICE_SPEED = 1.0 # Speed of the TTS, 1.0 is normal speed, 2.0 is double speed, 0.5 is half speed
The rest of the settings in this file can be left as they are.
Step 7: Launching AlwaysReddy
Once everything is set up, you're ready to launch AlwaysReddy. You can use a batch file or the command prompt. On Windows, the batch file is the most frictionless option:
(venv) C:\Users\PMYLS\Downloads\Repositories\Always Reddy>run_AlwaysReddy.bat
Using faster-whisper model: tiny.en and device: cpu
Press 'alt+ctrl+r' to start recording, press again to stop and transcribe. Alternatively hold it down to record until you release.
Hold down 'alt+ctrl' and double tap 'r' to give AlwaysReddy the content currently copied in your clipboard. Press 'alt+ctrl+e' to cancel recording.
Press 'alt+ctrl+w' to clear the chat history.
If you use macOS or Linux, launching works a little differently: you run AlwaysReddy from a shell script or directly from the terminal. Navigate to the directory that contains the shell script and run it from there, as shown below.
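The exact script name can vary between versions, so check the repository contents first; assuming a run_AlwaysReddy.sh script shipped alongside the Windows batch file, launching would look like this:
cd AlwaysReddy
chmod +x run_AlwaysReddy.sh
./run_AlwaysReddy.sh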
Step 8: Using AlwaysReddy
Now press Ctrl + Alt + R to start recording, and press the same keys again to stop recording. Your voice will be transcribed and the LLM will respond with an answer!
Press 'alt+ctrl+r' to start recording, press again to stop and transcribe.
Double tap to the record hotkey to give AlwaysReddy the content currently copied in your clipboard. Press 'alt+ctrl+e' to cancel recording.
Press 'alt+ctrl+w' to clear the chat history.
Transcription:
Hello, can you hear me?
Response:
Yes, I can hear you.
AlwaysReddy is an extremely powerful tool. However, it is still under active development and may exhibit some odd behavior. For example, on Linux, hotkey detection only works while the application is in focus, which is a significant limitation for anyone who intends to run AlwaysReddy in the background. As of this writing, the project's lead developer has only completed documentation for Ubuntu.
Final Remarks
AlwaysReddy is useful for anyone who wants to use LLMs without having to open a specific application or website and type text to get responses back. With AlwaysReddy, you just press a keyboard shortcut, say whatever you want, and get a response from the LLM in real time. This makes the whole process of using LLMs very efficient.
In this article, we provided step-by-step guidance on how to set up AlwaysReddy and start using it. What are you waiting for? Go set it up for yourself and start talking to your computer. Tell us how your experience was!
Kanwal Mehreen
Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.