Show HN: Local GLaDOS

https://github.com/dnhkng/GlaDOS

I built GLaDOS's brain, with a low-latency chat interface: sub-600ms voice-to-voice response, running on Llama-3 70B.

GLaDOS Personality Core

This is a project dedicated to building a real-life version of GLaDOS!

NEW: If you want to chat or join the community, join our Discord! If you want to support the project, you can sponsor it here!

LocalGLaDOS.mp4

Update 3-1-2025: Got GLaDOS running on an 8GB SBC!

glados_update.mov

This is really tricky, so only for hardcore geeks! Check out the 'rock5b' branch and my OpenAI-compatible API for the RK3588 NPU system. Don't expect support for this; it's in active development and requires a lot of messing about in Armbian Linux etc.

Goals

This is a hardware and software project that will create an aware, interactive, and embodied GLaDOS.

This will entail:

  • Train a GLaDOS voice generator
  • Generate a prompt that leads to a realistic "Personality Core"
  • Generate a medium- and long-term memory for GLaDOS (probably a custom vector DB in a simple NumPy array; see the sketch after this list!)
  • Give GLaDOS vision via a VLM (either a full VLM for everything, or a 'vision module' using a tiny VLM that GLaDOS can function-call!)
  • Create 3D-printable parts
  • Design the animatronics system
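As a rough illustration of how small that memory component could be (this is a hypothetical sketch, not the project's actual implementation; the class and method names are made up), a cosine-similarity lookup over embeddings stored in a single NumPy array might look like this:

    import numpy as np

    class TinyVectorDB:
        """Minimal long-term memory: embeddings in one NumPy array, cosine-similarity lookup."""

        def __init__(self, dim: int):
            self.vectors = np.empty((0, dim), dtype=np.float32)
            self.texts: list[str] = []

        def add(self, embedding: np.ndarray, text: str) -> None:
            # Stack the new embedding onto the existing array and remember its text.
            self.vectors = np.vstack([self.vectors, embedding.astype(np.float32)])
            self.texts.append(text)

        def search(self, query: np.ndarray, k: int = 3) -> list[str]:
            if not self.texts:
                return []
            # Cosine similarity between the query and every stored memory.
            v = self.vectors / np.linalg.norm(self.vectors, axis=1, keepdims=True)
            q = query / np.linalg.norm(query)
            top = np.argsort(v @ q)[::-1][:k]
            return [self.texts[i] for i in top]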

Software Architecture

The initial goal is to develop a low-latency platform where GLaDOS can respond to voice interactions within 600ms.

To do this, the system constantly records data to a circular buffer, waiting for voice to be detected. When it's determined that the voice has stopped (including detection of normal pauses), it is transcribed quickly. This is then passed to a streaming local Large Language Model, where the streamed text is broken by sentence and passed to a text-to-speech system. This means further sentences can be generated while the current one is playing, reducing latency substantially.
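A rough sketch of that loop is below. This is not the project's actual code; the helper functions for voice-activity detection, transcription, LLM streaming, and TTS are hypothetical stand-ins for the linked components (Silero VAD, Whisper, llama.cpp, Piper):

    import collections
    import re

    SAMPLE_RATE = 16_000
    BUFFER_SECONDS = 10

    # Rolling buffer of the most recent audio samples.
    audio_buffer = collections.deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

    def on_audio_chunk(chunk):
        """Called for every incoming audio frame from the microphone."""
        audio_buffer.extend(chunk)
        if voice_activity_ended(audio_buffer):        # hypothetical VAD check (e.g. Silero VAD)
            text = transcribe(list(audio_buffer))     # hypothetical ASR call (e.g. Whisper)
            audio_buffer.clear()
            respond(text)

    def respond(user_text):
        """Stream the LLM reply and hand it to TTS one sentence at a time."""
        pending = ""
        for token in stream_llm(user_text):           # hypothetical streaming LLM client
            pending += token
            # As soon as a full sentence has arrived, speak it while the rest generates.
            while (match := re.search(r"(.+?[.!?])\s", pending)):
                play_audio(synthesize_speech(match.group(1)))  # hypothetical TTS + playback
                pending = pending[match.end():]
        if pending.strip():
            play_audio(synthesize_speech(pending))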

Subgoals

  • The other aim of the project is to minimize dependencies, so it can run on constrained hardware. That means no PyTorch or other large packages (see the onnxruntime sketch after this list).
  • As I want to fully understand the system, I have removed a large amount of indirection, which means extracting and rewriting code.
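For context, dropping PyTorch means the models are run with onnxruntime alone at inference time. A minimal sketch (the file name "model.onnx" and the input name "input" are illustrative placeholders, not real names from this repository):

    import numpy as np
    import onnxruntime as ort

    # Run an exported ONNX model with onnxruntime only; no PyTorch needed at inference time.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    dummy_audio = np.zeros((1, 16_000), dtype=np.float32)  # one second of silence at 16 kHz
    outputs = session.run(None, {"input": dummy_audio})
    print([o.shape for o in outputs])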

Hardware System

This will be based on servo and stepper motors. 3D-printable STL files will be provided to create GLaDOS's body, and she will be given a set of animations to express herself. The vision system will allow her to track and turn toward people and things of interest.

Installation Instructions

Try this simplified process, but be aware it's still in the experimental stage! For all operating systems, you'll first need to install Ollama to run the LLM.

Install Drivers if necessary

If you are on an Nvidia system with CUDA, make sure you install the necessary drivers and CUDA; info here: https://onnxruntime.ai/docs/install/

If you are using another accelerator (ROCm, DirectML, etc.), after following the instructions below for your platform, follow up by installing the best onnxruntime version for your system.

Set up a local LLM server:

  1. Download and install Ollama for your operating system.
  2. Once installed, download a small model for testing; at a terminal or command prompt, use: ollama pull llama3.2

Windows Installation Process

  1. Open the Microsoft Store, search for python and install Python 3.12
  2. Download this repository, either:
    1. Download and unzip this repository somewhere in your home folder, or
    2. If you have Git set up, git clone this repository using git clone https://github.com/dnhkng/GlaDOS.git
  3. In the repository folder, run install_windows.bat, and wait until the installation is complete.
  4. Double-click start_windows.bat to start GLaDOS!

macOS Installation Process

This is still experimental. Any issues can be addressed in the Discord server; GitHub issues about it will be redirected there. Note: I was getting segfaults! Please leave feedback!

  1. Download this repository, either:

    1. Download and unzip this repository somewhere in your home folder, or
    2. In a terminal, git clone this repository using git clone https://github.com/dnhkng/GlaDOS.git
  2. In a terminal, go to the repository folder and run these commands:

      chmod +x install_mac.command
      chmod +x start_mac.command
    
  3. In the Finder, double-click install_mac.command, and wait until the installation is complete.

  4. Double-click start_mac.command to start GLaDOS!

Linux Installation Process

This is still experimental. Any issues can be addressed in the Discord server; GitHub issues about it will be redirected there. This has been tested on Ubuntu 24.04.1 LTS.

  1. Install the PortAudio library, if you don't yet have it installed:

      sudo apt update
      sudo apt install libportaudio2
    
  2. Download this repository, either:

    1. Download and unzip this repository somewhere in your home folder, or
    2. In a terminal, git clone this repository using git clone https://github.com/dnhkng/GlaDOS.git
  3. In a terminal, go to the repository folder and run these commands:

      chmod +x install_ubuntu.sh
      chmod +x start_ubuntu.sh
    
  4. In a terminal in the GLaDOS folder, run ./install_ubuntu.sh, and wait until the installation is complete.

  5. Run ./start_ubuntu.sh to start GLaDOS!

Changing the LLM Model

To use other models, use the command ollama pull {modelname}, and then add {modelname} to glados_config.yaml as the model. You can find more models here!
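For example (assuming the config key is simply model, as described above), you might run ollama pull llama3.1:8b and then set model: "llama3.1:8b" in glados_config.yaml.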

Common Issues

  1. If you find you are getting stuck in loops, as GLaDOS is hearing herself speak, you have two options:
    1. Solve this by upgrading your hardware. You need either headphones, so she can't physically hear herself speak, or a conference-style room microphone/speaker. These have hardware echo cancellation and prevent these loops.
    2. Disable voice interruption. This means neither you nor GLaDOS can interrupt while GLaDOS is speaking. To accomplish this, edit glados_config.yaml and change interruptible: to false.
  2. If you want to use the Text UI, use the glados-ui.py file instead of glados.py

Testing the submodules

You can test the systems by exploring the 'demo.ipynb'.

Star History

Star History Chart
