Fix ordering in readme

commit 0510747a67
parent ccbcebf0e8

2 changed files with 11 additions and 8 deletions
.gitignore (vendored): 3 additions

@@ -1,3 +1,6 @@
 env
 data/*
 __pycache__
+orig
+scripts/are.txt
+scripts/is.txt
README.md: 16 changes (8 additions, 8 deletions)

@@ -17,14 +17,6 @@ Huh?
 
 
-
-# How do I use it?
-
-You need CUDA working in Docker, or to edit the docker compose files to take that stuff out and rely on CPU.
-
-At present the LLM seems to require about 2GB of GPU RAM, which is really small as LLMs go. My PC works harder playing Balatro.
-
-
 # What are its limitations?
 
 It doesn't have heaps of feature parity with the old perl infobot. The right way to get that might be to hack on the old bot code and use it as the main chat parser for this. I don't have a ton of desire to sit down and code my own implementation of the entire thing.
 
@@ -41,6 +33,14 @@ And some non-infobot stuff we could use:
 
 
+
+# How do I use it?
+
+You need CUDA working in Docker, or to edit the docker compose files to take that stuff out and rely on CPU.
+
+At present the LLM seems to require about 2GB of GPU RAM, which is really small as LLMs go. My PC works harder playing Balatro.
+
+
 ## Initialize the database
 
 This took around an hour to do 300k factoids and a fair amount of compute/GPU power. There's no consistency or duplicate checking at the moment so you're best off trashing the postgres data dir first
 
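A quick way to check the "CUDA working in Docker" prerequisite the moved section mentions, before touching the compose files. This is a generic sketch, not from this repo: it assumes the NVIDIA Container Toolkit is installed, and the CUDA image tag is only illustrative.

    # Confirm Docker can see the GPU (requires nvidia-container-toolkit).
    # Any recent CUDA base image works; the tag below is illustrative.
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

If that prints the usual nvidia-smi table, the compose files' GPU settings should work as-is; if not, strip the GPU bits out and fall back to CPU, as the README suggests.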
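And for the "trashing the postgres data dir first" step before re-initializing, a hypothetical reset sequence. It assumes the stack is run with docker compose and that postgres keeps its data under the gitignored data/ directory; the actual paths and service names in this repo may differ.

    # Hypothetical reset before re-running the factoid import.
    docker compose down      # stop the stack
    rm -rf data/postgres     # assumed location of the postgres data dir
    docker compose up -d     # bring services back up, then re-run the import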