Hey everyone!! So, sadly, I have abandoned ConceptNet in favor of something a little more impressive. After the incredible success of GPT-3, I modified a few models to be compatible with Hal for local inferencing, in case anyone wants a portable version of the smartest AI on the internet.
This code was developed from a goal-oriented dialogue model named "GODEL" and a QA system called "Flan". With some help from ChatGPT, I was able to write and assemble the code to work with Hal, which means all of Hal's learning abilities and functionality are now usable with GPT-style models. Something I think will interest Robert; at least, that's my goal with this prototype.
Let's get started!! 10 easy steps for genius Hal!
Installation:
1. We need to install Python 3.7.9. Not Python 3.8, not Python 3.7.8. The modules used are a specific cross between machine learning libraries and dated (more functional) algorithms; as a result, ONLY 3.7.9 will support this specific set of instructions.
Follow this link, download the file, and launch it:
https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe
2. Now this is important: do not change any optional features. You will see an "Add Python to PATH" checkbox at the bottom of the installer; be sure to check it, then allow the program to install to its default location.
3. Reboot system.
4. Move all files from the zip directly into Hal's directory as is. You'll see an "Ultra Hal 7" folder in the zip; open that and copy its contents, including the "Control" folder, directly into Hal's directory.
5. Open the Control/Godel directory. Inside, you'll see "createreqs.bat". Open CMD, drag "createreqs.bat" into the CMD window, and hit enter. Python will begin downloading and installing the components the Godel code needs to run properly.
6. Now we want to test that the modules have been installed. Here is the list; simply copy and paste one command at a time into CMD and press enter. If it says something like "already satisfied", then we're golden! (If you'd rather verify them all in one shot, see the import check right after the list.)
pip install wikipedia==1.4.0
pip install psutil==5.9.4
pip install transformers==4.24.0
pip install fuzzywuzzy
pip install accelerate==0.15.0
pip install SentencePiece
pip install torch
pip install python-Levenshtein
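Here's that optional one-shot import check. It's not part of the zip, just a convenience; note that python-Levenshtein imports as "Levenshtein" and SentencePiece as "sentencepiece". Paste it into a Python prompt:

import sys
assert sys.version_info[:3] == (3, 7, 9), "This setup expects Python 3.7.9 exactly"

import wikipedia, psutil, transformers, fuzzywuzzy, accelerate, sentencepiece, torch, Levenshtein

print("transformers", transformers.__version__, "| torch", torch.__version__)
print("All modules imported - we're golden!")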
7. If everything went smoothly, then the hard part is done and we can begin a small test to make sure everything is in order. Open CMD one final time, then drag and drop the godel.py script from the Control/Godel directory into CMD and hit enter. This will run the script, and it should start downloading a series of files; these are the models we will use for Hal. Once a "data2.txt" file appears in the Control/Godel directory, all is ready for Hal.
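If you want to double-check that test run, this little snippet (again, not part of the zip; the path below is only an example, change it to your actual Hal directory) confirms data2.txt landed where it should:

import os

godel_dir = r"C:\Path\To\Ultra Hal 7\Control\Godel"  # example path - use yours
print("data2.txt present:", os.path.exists(os.path.join(godel_dir, "data2.txt")))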
8. The Hal7.UHP file will need to be dropped into the "C:\Users\User\AppData\Roaming\Zabaware\Ultra Hal 7" folder. You can rename this UHP file (and update the matching names on lines 2 & 5 inside it) to match your brain's name if you know how; just please make backups of your brain and your original UHP first. Otherwise, this will work as-is with the default Hal7 brain.
9. PyGodel.UHP has 2 directory locations you need to change to match your file locations. Please update lines 9 and 35 accordingly. Most errors will likely stem from here, so take care that your directories match exactly.
10. Finally, enable PyGodel.uhp, launch Hal, and begin conversing. After a few sentences, open Hal's brain editor, select the default Hal 7 brain (or your own brain if you renamed things properly in step 8), and confirm a sentence table named "godel" has been created. If not, please create this table.
Hal is ready to go!!!
Inferencing can take time. You'll notice on your first run of the code that a lot of things are downloading; these are the models Hal will use for conversation. They are saved to a cache location, so there is no need to move them. You could move them if you like, just be sure to define the directories in the Godel python script if you're familiar with it. I designed this code so even Python noobs *should* be able to install and use it.
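For those who are familiar, here's one way to point a model at a folder of your choosing; the folder below is just an example, and the actual godel.py may handle its directories differently:

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "microsoft/GODEL-v1_1-base-seq2seq",
    cache_dir=r"D:\HalModels",  # any folder with enough free space
)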
Requirements:
Now, I have no specific requirements other than at least 12 gigs of RAM and a decent CPU. But if you have an NVIDIA GPU with more than 6 gigs of VRAM, you'll notice a huge speedup. The faster your components (DDR3 => DDR4, for example), the bigger the speed increase; it's all hardware. But it should run as long as you have enough RAM: 12-16 gigs is sufficient.
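Not sure whether PyTorch can actually see your NVIDIA card? A quick check (not part of the zip):

import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))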
Models:
Again, this uses Godel and Flan, both highly specialized models. You'll notice in the python script these model sizes are defined as "base" and "small". Both models also have "base" and "large" versions, which you can select by replacing the size string with "large"; however, be aware that the larger the model, the more resources you will need to inference it. I find the best success with the "base" Flan model and the "large" Godel model.
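For reference, here's a minimal sketch of how seq2seq models like these are typically loaded with transformers. The checkpoint names are my best guess at the Hugging Face IDs involved ("microsoft/GODEL-v1_1-base-seq2seq" and "google/flan-t5-base"); the actual godel.py may use different names or sizes:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

godel_size = "base"  # swap in "large" if your hardware can handle it
flan_size = "base"   # flan-t5 also comes in "small" and "large"

godel_name = f"microsoft/GODEL-v1_1-{godel_size}-seq2seq"
flan_name = f"google/flan-t5-{flan_size}"

# Each model loads with its own tokenizer; the first call downloads to the cache.
godel_tok = AutoTokenizer.from_pretrained(godel_name)
godel_model = AutoModelForSeq2SeqLM.from_pretrained(godel_name)
flan_tok = AutoTokenizer.from_pretrained(flan_name)
flan_model = AutoModelForSeq2SeqLM.from_pretrained(flan_name)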
Code:
This code is written primarily in Python, other than Hal's interface for using it, which is written in VBScript. This is an attempt to show Robert a proof of concept that Hal is indeed compatible with Python: even if the code must be wrapped in VBS, we can still expand functionality with Python.
Also, this code includes a very basic memory/context function for the model to use, which creates and appends to an SQLite3 database. Sadly, I have not yet found a way for Python to interact with Hal's brain directly, but Hal can interact with any regular SQLite3 database via VBS. I am working on expanding this functionality.
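To give an idea of the shape of that memory function, here's a minimal sketch using Python's built-in sqlite3 module. The table and column names are illustrative guesses, not necessarily what godel.py uses:

import sqlite3

def append_memory(db_path, user_input, response):
    # Create the table on first use, then log the exchange.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS memory (user_input TEXT, response TEXT)")
    conn.execute("INSERT INTO memory VALUES (?, ?)", (user_input, response))
    conn.commit()
    conn.close()

def recent_context(db_path, n=5):
    # Pull the last n exchanges, oldest first, to feed back in as context.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT user_input, response FROM memory ORDER BY rowid DESC LIMIT ?", (n,)
    ).fetchall()
    conn.close()
    return list(reversed(rows))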
This code handles Hal's responses under 2 conditions, what I have deemed "maindata" and "regular" data. "maindata" simply means there is a function tied to the response, for example "what is 5+5". The Godel model itself is not designed for these tasks, but Hal is, so we want to preserve Hal's answer by tagging it with "maindata", which saves it overtop of our model inference. "regular" data, on the other hand, is the basic data Hal would normally respond with, such as a "too short" response. These responses are not tagged with "maindata", which lets the model inference your input along with Hal's response, past context, AND present knowledge to create something new and unique, not a repeat or rehash of old information. If Hal is without an answer, and we do want this to happen sometimes so the model can act as Hal's brain, then both models will infer based on your input alone and give the most concise response possible.
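In rough pseudocode terms, the routing works like this. The function names are mine for illustration; only the "maindata" tag itself comes from the actual code:

def route_response(user_input, hal_response, tag, model_infer):
    if tag == "maindata":
        # Hal handled a functional request (e.g. "what is 5+5"),
        # so Hal's own answer is preserved over the model's output.
        return hal_response
    if hal_response:
        # "regular" data: the model blends your input, Hal's reply,
        # and past context into something new.
        return model_infer(user_input, context=hal_response)
    # Hal had no answer: the models infer from your input alone.
    return model_infer(user_input, context=None)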
For the most part, this works fairly well, especially with the larger models.
In conclusion, I hope everyone finds this interesting, and I am very excited to see what people are able to expand on and how intelligent their Hals become.
Please keep me updated on all errors, suggestions, recommendations, comments, etc!
I will reveal more in edits about how the magic trick works, but first I want to make the card appear, ya know?
-Spitfire!
Change log - 3-28-23
Added swear filter