
Permanent Memory and Conversations

Permanent memory has now been implemented in the bot, using the OpenAI Ada embeddings endpoint and Pinecone.

Pinecone is a vector database, and the OpenAI Ada embeddings endpoint turns pieces of text into embedding vectors. This feature works by embedding both the user prompts and the GPT-3 responses, storing them in a Pinecone index, and then retrieving the most relevant pieces of past conversation whenever a new user prompt arrives in a conversation.
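The retrieval step can be illustrated with plain NumPy, using toy three-dimensional vectors in place of real 1536-dimensional Ada embeddings (the snippets and vectors below are invented for illustration; the real bot stores and queries these vectors through Pinecone instead of a local list):

```python
import numpy as np

# Toy "embeddings" standing in for 1536-dimensional Ada vectors.
# Each stored vector is paired with the conversation snippet it encodes.
memory = [
    (np.array([0.9, 0.1, 0.0]), "User asked about Pinecone setup"),
    (np.array([0.1, 0.8, 0.2]), "Bot explained the .env file"),
    (np.array([0.0, 0.2, 0.9]), "User asked about Discord permissions"),
]

def top_k(query: np.ndarray, k: int = 2):
    """Return the k stored snippets whose vectors score highest against
    the query by dot product — the same metric the index uses."""
    scored = sorted(memory, key=lambda item: float(query @ item[0]), reverse=True)
    return [snippet for _, snippet in scored[:k]]

# A query vector "close" to the first stored memory.
print(top_k(np.array([1.0, 0.0, 0.1])))
# → ['User asked about Pinecone setup', 'Bot explained the .env file']
```

The relevant snippets returned by this lookup are what gets prepended to the prompt as conversational context.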

You do NOT need to use Pinecone. If you do not define a PINECONE_TOKEN in your .env file, the bot will default to not using Pinecone and will use conversation summarization as the long-term conversation method instead.
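As a rough sketch of that fallback, the selection logic amounts to a single environment-variable check (the function name here is hypothetical, not the bot's actual code):

```python
import os

def memory_backend() -> str:
    # Hypothetical helper mirroring the documented behavior:
    # Pinecone when PINECONE_TOKEN is set, summarization otherwise.
    if os.environ.get("PINECONE_TOKEN"):
        return "pinecone"
    return "summarization"

os.environ.pop("PINECONE_TOKEN", None)
print(memory_backend())  # → summarization

os.environ["PINECONE_TOKEN"] = "example-token"
print(memory_backend())  # → pinecone
```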

To enable permanent memory with Pinecone, define a PINECONE_TOKEN in your .env file (alongside the other variables) as follows:

PINECONE_TOKEN="87juwi58-1jk9-9182-9b3c-f84d90e8bshq"  

To get a Pinecone token, sign up for a free Pinecone account here: https://app.pinecone.io/ and click the "API Keys" section in the left navbar to find the key. (I am not affiliated with Pinecone.)

After signing up for a free Pinecone account, you need to create an index. To do this, go to the Pinecone dashboard and click "Create Index" in the top right.

Then, name the index conversation-embeddings, set the dimensions to 1536, and set the metric to DotProduct.

Permanent memory using Pinecone is still in alpha. I will be working on cleaning up this feature, adding auto-clearing, and optimizing for stability and reliability; any help and feedback is appreciated (add me on Discord, Kaveen#0001, for Pinecone help)! If at any point you're having too many issues with Pinecone, simply remove the PINECONE_TOKEN line from your .env file and the bot will revert to using conversation summarization.