From 0510747a670c6bdd4a92d311ec6cb911d5e202e2 Mon Sep 17 00:00:00 2001
From: Erik Stambaugh
Date: Sun, 13 Jul 2025 09:49:05 -0700
Subject: [PATCH] Fix ordering in readme

---
 .gitignore |  3 +++
 README.md  | 16 ++++++++--------
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/.gitignore b/.gitignore
index 726f177..2180f37 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,6 @@
 env
 data/*
 __pycache__
+orig
+scripts/are.txt
+scripts/is.txt
diff --git a/README.md b/README.md
index c1d94f3..693ac4d 100644
--- a/README.md
+++ b/README.md
@@ -17,14 +17,6 @@
 
 Huh?
 
-# How do I use it?
-
-You need CUDA working in Docker, or to edit the docker compose files to take that stuff out and rely on CPU.
-
-At present the LLM seems to require about 2GB of GPU RAM, which is really small as LLMs go. My PC works harder playing Balatro.
-
-
-
 # What are its limitations?
 
 It doesn't have heaps of feature parity with the old perl infobot. The right way to get that might be to hack on the old bot code and use it as the main chat parser for this. I don't have a ton of desire to sit down and code my own implementation of the entire thing.
@@ -41,6 +33,14 @@
 
 And some non-infobot stuff we could use:
 
+# How do I use it?
+
+You need CUDA working in Docker, or to edit the docker compose files to take that stuff out and rely on CPU.
+
+At present the LLM seems to require about 2GB of GPU RAM, which is really small as LLMs go. My PC works harder playing Balatro.
+
+
+
 ## Initialize the database
 
 This took around an hour to do 300k factoids and a fair amount of compute/GPU power. There's no consistency or duplicate checking at the moment so you're best off trashing the postgres data dir first