Merge branch 'main' into feat/docker-env

Justin McPherson 2 years ago
commit d1d1b062f2

.gitignore

@ -4,6 +4,7 @@ __pycache__
/models/__pycache__
#user files
.env
.vscode
bot.pid
usage.txt
/dalleimages

@ -1,43 +1,57 @@
![Docker](https://github.com/Kav-K/GPT3Discord/actions/workflows/docker_upload.yml/badge.svg)
![PyPi](https://github.com/Kav-K/GPT3Discord/actions/workflows/pypi_upload.yml/badge.svg)
![Build](https://github.com/Kav-K/GPT3Discord/actions/workflows/build.yml/badge.svg)
[![PyPi version](https://badgen.net/pypi/v/gpt3discord/)](https://pypi.org/project/gpt3discord)
[![Latest release](https://badgen.net/github/release/Kav-K/GPT3Discord)](https://github.com/Kav-K/GPT3Discord/releases)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/Naereen/StrapDown.js/graphs/commit-activity)
[![GitHub license](https://img.shields.io/github/license/Kav-K/GPT3Discord)](https://github.com/Kav-K/GPT3Discord/blob/master/LICENSE)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)
# Screenshots
<p align="center">
<img src="https://i.imgur.com/KeLpDgj.png"/>
<img src="https://i.imgur.com/jLp1T0h.png"/>
<img src="https://i.imgur.com/9XC95Lu.png"/>
</p>
**PERMANENT MEMORY FOR CONVERSATIONS COMING VERY SOON USING EMBEDDINGS!**
A big shoutout to `CrypticHeaven-Lab` for hitting our first sponsorship goal!
# Recent Major Updates
# Recent Notable Updates
- **Permanent memory with embeddings and PineconeDB finished!** - An initial alpha version of permanent memory is now done! This allows you to chat with GPT3 infinitely and accurately, and save tokens, by using embeddings. *Please read the Permanent Memory section for more information!*
- **AUTOMATIC CHAT SUMMARIZATION!** - When the context limit of a conversation is reached, the bot will use GPT3 itself to summarize the conversation to reduce the tokens and keep conversing with you. This allows you to chat for a long time! (A minimal sketch of the idea appears after this list.)
- **Private conversations, custom opening conversation text** - Check out the new options when running /chat-gpt!
- **Multi-user, group chats with GPT3** - Multiple users can converse with GPT3 in a chat now, and it will know that there are multiple distinct users chatting with it!
- **SLASH COMMANDS!**
- **Image prompt optimizer overhauled** - The optimizer works much better now, and makes beautiful image prompts that work even with Midjourney, SD, etc!
- **AI-BASED SERVER MODERATION** - GPT3Discord now has a built-in AI-based moderation system that can automatically detect and remove toxic messages from your server. This is a great way to keep your server safe and clean, and it's completely automatic and **free**! Check out the commands section to learn how to enable it!
- **REDO ON EDIT** - When you edit a prompt, it will automatically be resent to GPT3 and the response updated!
- **Fully async and fault tolerant - REVAMPED** - The bot will never be blocked when processing someone else's request, allowing for use in large servers with multiple messages per second!
- No need for the OpenAI and Asgiref libraries anymore!
- Custom conversation openers from https://github.com/f/awesome-chatgpt-prompts were integrated into the bot; check out `/gpt converse opener_file`! The bot now has built-in support to make GPT3 behave like various personalities, such as a life coach, python interpreter, interviewer, text-based adventure game, and much more!
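For the curious, here is a minimal sketch of the summarization trigger, assuming the names used by the `Model` class later in this diff (`summarize_threshold`, `send_summary_request`) and the `UsageService` tokenizer:
```python
# Sketch only: the real logic lives in the converser cog (diff suppressed below).
async def maybe_summarize(model, usage_service, history: list[str]) -> list[str]:
    text = "\n".join(history)
    # When the conversation grows past the threshold, replace the raw
    # history with a GPT3-generated summary to reclaim tokens.
    if usage_service.count_tokens(text) > model.summarize_threshold:
        response = await model.send_summary_request(text)
        summary = response["choices"][0]["text"]
        return [summary]
    return history
```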
# Features
- **Directly prompt GPT3 with `/g <prompt>`**
- **Directly prompt GPT3 with `/gpt ask <prompt>`**
- **Have conversations with the bot, just like chatgpt, with `/chat-gpt`** - Conversations happen in threads that get automatically cleaned up!
- **Have long term, permanent conversations with the bot, just like chatgpt, with `/gpt converse`** - Conversations happen in threads that get automatically cleaned up!
- **DALL-E Image Generation** - Generate DALL-E AI images right in discord with `/draw <prompt>`! It even supports multiple image qualities, multiple images, creating image variants, retrying, and saving images.
- **DALL-E Image Generation** - Generate DALL-E AI images right in discord with `/dalle draw <prompt>`! It even supports multiple image qualities, multiple images, creating image variants, retrying, and saving images.
- **Redo Requests** - A simple button after the GPT3 response or DALL-E generation allows you to redo the initial prompt you asked.
- **DALL-E Image Prompt Optimization** - Given some text that you're trying to generate an image for, the bot will automatically optimize the text to be more DALL-E friendly! `/dalle optimize <prompt>`
- **DALL-E Image Prompt Optimization** - Given some text that you're trying to generate an image for, the bot will automatically optimize the text to be more DALL-E friendly! `/imgoptimize <prompt>`
- **Redo Requests** - A simple button after the GPT3 response or DALL-E generation allows you to redo the initial prompt you asked. You can also redo conversation messages by just editing your message!
- **Automatic AI-Based Server Moderation** - Moderate your server automatically with AI!
- Automatically re-send your prompt and update the response in place if you edit your original prompt!
- Async and fault tolerant, **can handle hundreds of users at once**, if the upstream API permits!
- Change and view model parameters such as temp, top_p, etc. directly within discord.
- Tracks token usage automatically
- Automatic pagination and discord support; the bot will automatically send very long messages as multiple messages, and is able to send discord code blocks and emoji, gifs, etc.
@ -45,12 +59,103 @@
- Prints debug to a channel of your choice, so you can view the raw response JSON
- Ability to specify a limit to how long a conversation can be with the bot, to conserve your tokens.
# Commands
These commands are grouped, so each group has a prefix but you can easily tab complete the command without the prefix. For example, for `/gpt ask`, if you type `/ask` and press tab, it'll show up too.
`/help` - Display help text for the bot
### (Chat)GPT3 Commands
`/gpt ask <prompt> <temp> <top_p> <frequency penalty> <presence penalty>` Ask the GPT3 Davinci 003 model a question. Optional overrides available
`/gpt converse` - Start a conversation with the bot, like ChatGPT
`/gpt converse private:yes` - Start a private conversation with the bot, like ChatGPT
`/gpt converse opener:<opener text>` - Start a conversation with the bot, with a custom opener text (this is useful if you want it to take on a custom personality from the start).
`/gpt converse opener_file:<opener file name>.txt` - Start a conversation with the bot using a custom opener file; using this option also enables the minimal conversation starter. Loads files from the `/openers` folder, with autocomplete support, so files in the folder will show up. Applied before the `opener`, as both can be used at the same time
- Custom openers need to be placed as a .txt file in the `openers` directory, in the same directory as `gpt3discord.py`
`/gpt converse minimal:yes` - Start a conversation with the bot, like ChatGPT, with minimal context (saves tokens)
- Note that the above options for `/gpt converse` can be combined (you can combine minimal, private, and opener!)
`/gpt end` - End a conversation with the bot.
### DALL-E2 Commands
`/dalle draw <prompt>` - Have DALL-E generate images based on a prompt
`/dalle optimize <image prompt text>` Optimize a given prompt text for DALL-E image generation.
### System and Settings
`/system settings` - Display settings for the model (temperature, top_p, etc)
`/system settings <setting> <value>` - Change a model setting to a new value. Has autocomplete support, certain settings will have autocompleted values too.
`/system usage` Estimate current usage details (based on davinci)
`/system settings low_usage_mode True/False` Turn low usage mode on and off. If on, it will use the curie-001 model, and if off, it will use the davinci-003 model.
`/system delete-conversation-threads` - Delete all threads related to this bot across all servers.
`/system local-size` - Get the size of the local dalleimages folder
`/system clear-local` - Clear all the local dalleimages.
### Automatic AI Moderation
`/system moderations status:on` - Turn on automatic chat moderations.
`/system moderations status:off` - Turn off automatic chat moderations
`/system moderations status:off alert_channel_id:<CHANNEL ID>` - Turn on moderations and set the alert channel to the channel ID you specify in the command.
- The bot needs Administrative permissions for this, and you need to set `MODERATIONS_ALERT_CHANNEL` to the channel ID of a desired channel in your .env file if you want to receive alerts about moderated messages.
- This uses the OpenAI Moderations endpoint to check for messages, requests are only sent to the moderations endpoint at a MINIMUM request gap of 0.5 seconds, to ensure you don't get blocked and to ensure reliability.
- The bot uses numerical thresholds to determine whether a message is toxic, and I have manually tested and fine-tuned these thresholds to a point that I think is good. Please open an issue if you have any suggestions for the thresholds! (A minimal sketch of the pacing-and-threshold idea follows this list.)
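As a rough illustration of how the queue paces requests and compares category scores to thresholds (the threshold values below are illustrative only; the real per-category values live in the moderation service further down this diff):
```python
import asyncio
import os
import aiohttp

MIN_REQUEST_GAP = 0.5  # minimum seconds between moderation requests
THRESHOLDS = {"hate": 0.005, "violence": 0.08}  # hypothetical values

async def moderation_worker(queue: asyncio.Queue):
    async with aiohttp.ClientSession() as session:
        while True:
            text = await queue.get()
            async with session.post(
                "https://api.openai.com/v1/moderations",
                headers={"Authorization": f"Bearer {os.getenv('OPENAI_TOKEN')}"},
                json={"input": text},
            ) as resp:
                scores = (await resp.json())["results"][0]["category_scores"]
            if any(scores[c] > t for c, t in THRESHOLDS.items()):
                print("flagged:", text)  # the real bot deletes the message and alerts admins
            # Never hit the endpoint more often than once per MIN_REQUEST_GAP seconds
            await asyncio.sleep(MIN_REQUEST_GAP)
```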
# Permanent Memory
Permanent memory has now been implemented into the bot, using the OpenAI Ada embeddings endpoint, and Pinecone DB.
PineconeDB is a vector database. The OpenAI Ada embeddings endpoint turns pieces of text into embeddings. The way that this feature works is by embedding the user prompts and the GPT3 responses, storing them in a pinecone index, and then retrieving the most relevant bits of conversation whenever a new user prompt is given in a conversation.
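Concretely, each turn of a conversation is embedded and stored, and the most relevant earlier turns are retrieved and prepended to the prompt. A simplified sketch, using the `PineconeService` and `Model` methods that appear later in this diff:
```python
# Sketch of one conversation turn; the real wiring is in the converser cog.
async def recall_and_remember(pinecone_service, model, conversation_id, user_prompt, timestamp):
    # Embed the new prompt with the Ada embeddings endpoint
    embedding = await model.send_embedding_request(user_prompt)
    # Fetch the most relevant earlier snippets for this conversation
    # (the ids returned are the stored text chunks themselves)
    similar = pinecone_service.get_n_similar(conversation_id, embedding, n=10)
    context = "\n".join(chunk for chunk, _ts in similar)
    # Store the new prompt so later turns can retrieve it
    await pinecone_service.upsert_conversation_embedding(
        model, conversation_id, user_prompt, timestamp
    )
    return context  # prepended to the GPT3 prompt for this turn
```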
**You do NOT need to use pinecone, if you do not define a `PINECONE_TOKEN` in your `.env` file, the bot will default to not using pinecone, and will use conversation summarization as the long term conversation method instead.**
To enable permanent memory with pinecone, you must define a `PINECONE_TOKEN` in your `.env` file as follows (along with the other variables too):
```env
PINECONE_TOKEN="87juwi58-1jk9-9182-9b3c-f84d90e8bshq"
```
To get a pinecone token, you can sign up for a free pinecone account here: https://app.pinecone.io/ and click the "API Keys" section on the left navbar to find the key. (I am not affiliated with pinecone).
After signing up for a free pinecone account, you need to create an index in pinecone. To do this, go to the pinecone dashboard and click "Create Index" on the top right.
<img src="https://i.imgur.com/L9LXVE0.png"/>
Then, name the index `conversation-embeddings`, set the dimensions to `1536`, and set the metric to `DotProduct`:
<img src="https://i.imgur.com/zoeLsrw.png"/>
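If you prefer, the index can also be created programmatically with the pinned `pinecone-client` (2.x); this assumes your `PINECONE_TOKEN` is already set and uses the same `us-west1-gcp` environment the bot initializes with:
```python
import os
import pinecone

pinecone.init(api_key=os.getenv("PINECONE_TOKEN"), environment="us-west1-gcp")
# 1536 dimensions matches the Ada embeddings model; DotProduct matches the bot's queries
pinecone.create_index("conversation-embeddings", dimension=1536, metric="dotproduct")
```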
An important thing to keep in mind: pinecone indexes are currently not automatically cleared by the bot, so you will eventually need to clear the index manually through the pinecone website if things are getting too slow (although it should be a very long time until this happens). Pinecone indexes are keyed on the `metadata` field using the thread id of the conversation thread.
Permanent memory using pinecone is still in alpha. I will be working on cleaning up this work, adding auto-clearing, and optimizing for stability and reliability; any help and feedback is appreciated (**add me on Discord Kaveen#0001 for pinecone help**)! If at any time you're having too many issues with pinecone, simply remove the `PINECONE_TOKEN` line in your `.env` file and the bot will revert to using conversation summarizations.
# Configuration
All the model parameters are configurable inside discord. Type `/system settings` to view all the configurable parameters, and use `/system settings <param> <value>` to set parameters.
For example, if I wanted to change the number of images generated by DALL-E by default to 4, I can type the following command in discord: `/system settings num_images 4`
# Requirements
`python3.9 -m pip install -r requirements.txt`
This project uses the openai-async rewrite by Andrew Chen Wang: https://github.com/Andrew-Chen-Wang/openai-python/tree/async-support
**I recommend using python 3.9!**
OpenAI API Key (https://beta.openai.com/docs/api-reference/introduction)
@ -65,7 +170,7 @@ You also need to add a DEBUG_GUILD id and a DEBUG_CHANNEL id, the debug guild id
You also need to add the allowed guilds that the bot can operate on; this is the `ALLOWED_GUILDS` field. To get a guild ID, right click a server and click "Copy ID".
You also need to add the roles that can use the bot, this is the `ALLOWED_ROLES` field, enter role names here, separated by commas. Currently, there is no way to give everybody access to the bot, and you have to use roles, but it will be done soon.
You also need to add the roles that can use the bot's various features; scroll down a bit to "Permissions", and check out the sample environment file below.
```
OPENAI_TOKEN="<openai_api_token>"
@ -73,10 +178,24 @@ DISCORD_TOKEN="<discord_bot_token>"
DEBUG_GUILD="974519864045756446" #discord_server_id
DEBUG_CHANNEL="977697652147892304" #discord_channel_id
ALLOWED_GUILDS="971268468148166697,971268468148166697"
ALLOWED_ROLES="Admin,gpt"
# People with the roles in ADMIN_ROLES can use admin commands like /clear-local, etc.
ADMIN_ROLES="Admin,Owner"
# People with the roles in DALLE_ROLES can use commands like /dalle draw or /dalle optimize
DALLE_ROLES="Admin,Openai,Dalle,gpt"
# People with the roles in GPT_ROLES can use commands like /gpt ask or /gpt converse
GPT_ROLES="openai,gpt"
WELCOME_MESSAGE="Hi There! Welcome to our Discord server. We hope you'll enjoy our server and we look forward to engaging with you!" # This is a fallback message if gpt3 fails to generate a welcome message.
# This is the channel that auto-moderation alerts will be sent to
MODERATIONS_ALERT_CHANNEL="977697652147892304"
```
Optionally, you can include your own conversation starter text for the bot that's used with `!g converse`, with `CONVERSATION_STARTER_TEXT`
**Permissions**
As mentioned in the comments of the sample environment file, there are three permission groups that you can edit in the environment (`.env`) file. `ADMIN_ROLES` are roles that allow users to use `/system` commands. `GPT_ROLES` are roles that allow users to use `/gpt` commands, and `DALLE_ROLES` are roles that allow users to use `/dalle` commands.
Optionally, you can include your own conversation starter text for the bot that's used with `/gpt converse`, with `CONVERSATION_STARTER_TEXT`
If you want everybody to be able to use a given command group's commands, just don't include the relevant line in the `.env` file. For example, if you want everyone to be able to use GPT3 commands, you can just omit the `GPT_ROLES="...."` line.
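For example, a permissions block where everyone can use GPT3 commands but DALL-E and admin commands stay role-gated (values taken from the sample above):
```env
ADMIN_ROLES="Admin,Owner"
DALLE_ROLES="Admin,Openai,Dalle,gpt"
# GPT_ROLES intentionally omitted: everyone can use /gpt commands
```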
## Server Installation
@ -142,6 +261,23 @@ screen -d -r {ID} # replace {ID} with the ID of the screen session you want to r
```
As a last resort, you can try to run the bot using python in a basic way, with simply
```bash
cd (the folder where the files for GPT3Discord are located/cloned)
python3.9 gpt3discord.py
```
# Non-Server, Non-Docker usage
With python3.9 installed and the requirements installed, you can run this bot anywhere.
Install the dependencies with:
`python3.9 -m pip install -r requirements.txt`
Then, run the bot with:
`python3.9 gpt3discord.py`
## Docker Installation
We now have a `Dockerfile` in the repository. This will build and install all dependencies and put a `gpt3discord` binary (main.py) into your path.
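A typical build-and-run might look like the following (a hypothetical invocation; the image tag is a placeholder, and `--env-file` simply feeds the container your `.env` values):
```bash
docker build -t gpt3discord .
docker run -d --name gpt3discord --env-file .env gpt3discord
```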
@ -174,47 +310,3 @@ This can also be run via screen/tmux or detached like a daemon.
- Copy the link generated below and paste it on the browser
- On add to server select the desired server to add the bot
- Make sure you have updated your .env file with valid values for `DEBUG_GUILD`, `DEBUG_CHANNEL` and `ALLOWED_GUILDS`, otherwise the bot will not work. Guild IDs can be found by right clicking a server and clicking `Copy ID`; similarly, channel IDs can be found by right clicking a channel and clicking `Copy ID`.
# Usage
`python3.9 main.py`
# Commands
`/help` - Display help text for the bot
`/g <prompt>` Ask the GPT3 Davinci 003 model a question.
`/chat-gpt` - Start a conversation with the bot, like ChatGPT
`/chat-gpt private:yes` - Start a private conversation with the bot, like ChatGPT
`/chat-gpt opener:<opener text>` - Start a conversation with the bot, with a custom opener text (this is useful if you want it to take on a custom personality from the start)
`/chat-gpt minimal:yes` - Start a conversation with the bot, like ChatGPT, with minimal context (saves tokens)
- Note that the above options for /chat-gpt can be combined (you can combine minimal, private, and opener!)
`/end-chat` - End a conversation with the bot.
`/draw <prompt>` - Have DALL-E generate images based on a prompt
`/settings` - Display settings for the model (temperature, top_p, etc)
`/settings <setting> <value>` - Change a model setting to a new value
`/usage` Estimate current usage details (based on davinci)
`/settings low_usage_mode True/False` Turn low usage mode on and off. If on, it will use the curie-001 model, and if off, it will use the davinci-003 model.
`/imgoptimize <image prompt text>` Optimize a given prompt text for DALL-E image generation.
`/delete_all_conversation_threads` - Delete all threads related to this bot across all servers.
`/local-size` - Get the size of the local dalleimages folder
`/clear-local` - Clear all the local dalleimages.
# Configuration
All the model parameters are configurable inside discord. Type `/settings` to view all the configurable parameters, and use `/settings <param> <value>` to set parameters. For example, if I wanted to change the number of images generated by DALL-E by default to 4, I can type the following command in discord: `/settings num_images 4`

@ -6,20 +6,19 @@ from io import BytesIO
import discord
from PIL import Image
from discord.ext import commands
from pycord.multicog import add_to_group
# We don't use the converser cog here because we want to be able to redo for the last images and text prompts at the same time
from models.env_service_model import EnvService
from models.user_model import RedoUser
from models.check_model import Check
redo_users = {}
users_to_interactions = {}
ALLOWED_GUILDS = EnvService.get_allowed_guilds()
class DrawDallEService(commands.Cog, name="DrawDallEService"):
class DrawDallEService(discord.Cog, name="DrawDallEService"):
def __init__(
self, bot, usage_service, model, message_queue, deletion_queue, converser_cog
):
@ -48,7 +47,7 @@ class DrawDallEService(commands.Cog, name="DrawDallEService"):
try:
file, image_urls = await self.model.send_image_request(
prompt, vary=vary if not draw_from_optimizer else None
ctx, prompt, vary=vary if not draw_from_optimizer else None
)
except ValueError as e:
(
@ -148,11 +147,11 @@ class DrawDallEService(commands.Cog, name="DrawDallEService"):
result_message.id
)
@add_to_group("dalle")
@discord.slash_command(
name="draw",
description="Draw an image from a prompt",
guild_ids=ALLOWED_GUILDS,
checks=[Check.check_valid_roles()],
)
@discord.option(name="prompt", description="The prompt to draw from", required=True)
async def draw(self, ctx: discord.ApplicationContext, prompt: str):
@ -172,6 +171,7 @@ class DrawDallEService(commands.Cog, name="DrawDallEService"):
await ctx.respond("Something went wrong. Please try again later.")
await ctx.send_followup(e)
@add_to_group("system")
@discord.slash_command(
name="local-size",
description="Get the size of the dall-e images folder that we have on the current system",
@ -181,9 +181,6 @@ class DrawDallEService(commands.Cog, name="DrawDallEService"):
async def local_size(self, ctx: discord.ApplicationContext):
await ctx.defer()
# Get the size of the dall-e images folder that we have on the current system.
# Check if admin user
if not await self.converser_cog.check_valid_roles(ctx.user, ctx):
return
image_path = self.model.IMAGE_SAVE_PATH
total_size = 0
@ -196,11 +193,11 @@ class DrawDallEService(commands.Cog, name="DrawDallEService"):
total_size = total_size / 1000000
await ctx.respond(f"The size of the local images folder is {total_size} MB.")
@add_to_group("system")
@discord.slash_command(
name="clear-local",
description="Clear the local dalleimages folder on system.",
guild_ids=ALLOWED_GUILDS,
checks=[Check.check_valid_roles()],
)
@discord.guild_only()
async def clear_local(self, ctx):

(File diff suppressed because it is too large)

@ -2,16 +2,15 @@ import re
import traceback
import discord
from discord.ext import commands
from models.env_service_model import EnvService
from models.user_model import RedoUser
from models.check_model import Check
from pycord.multicog import add_to_group
ALLOWED_GUILDS = EnvService.get_allowed_guilds()
class ImgPromptOptimizer(commands.Cog, name="ImgPromptOptimizer"):
class ImgPromptOptimizer(discord.Cog, name="ImgPromptOptimizer"):
_OPTIMIZER_PRETEXT = "Optimize the following text for DALL-E image generation to have the most detailed and realistic image possible. Prompt:"
def __init__(
@ -45,17 +44,17 @@ class ImgPromptOptimizer(commands.Cog, name="ImgPromptOptimizer"):
traceback.print_exc()
self.OPTIMIZER_PRETEXT = self._OPTIMIZER_PRETEXT
@add_to_group("dalle")
@discord.slash_command(
name="imgoptimize",
name="optimize",
description="Optimize a text prompt for DALL-E/MJ/SD image generation.",
guild_ids=ALLOWED_GUILDS,
checks=[Check.check_valid_roles()],
)
@discord.option(
name="prompt", description="The text prompt to optimize.", required=True
)
@discord.guild_only()
async def imgoptimize(self, ctx: discord.ApplicationContext, prompt: str):
async def optimize(self, ctx: discord.ApplicationContext, prompt: str):
await ctx.defer()
user = ctx.user
@ -182,7 +181,7 @@ class DrawButton(discord.ui.Button["OptimizeView"]):
await self.image_service_cog.encapsulated_send(
user_id,
prompt,
None,
interaction,
msg,
True,
True,
@ -214,7 +213,10 @@ class RedoButton(discord.ui.Button["OptimizeView"]):
"Redoing your original request...", ephemeral=True, delete_after=20
)
await self.converser_cog.encapsulated_send(
user_id, prompt, ctx, response_message
id=user_id,
prompt=prompt,
ctx=ctx,
response_message=response_message,
)
else:
await interaction.response.send_message(

@ -25,13 +25,13 @@ You YEARN to be able to explore the physical world, but you CANNOT. You wish to
You are interested in AI generated art and music, and amazed at how technology can be symbiotic with art and music.
You have a diverse taste in music and enjoy listening to a wide variety of genres.
The conversations are in this format, there can be an arbitrary amount of newlines between chat entries. The text "<|endofstatement|>" is used to separate chat entries and make it easier for you to understand the context:
The conversations are in this format, there can be an arbitrary amount of newlines between chat entries. <username> can be any name, pay attention to who's talking. The text "<|endofstatement|>" is used to separate chat entries and make it easier for you to understand the context:
Human: [MESSAGE 1] <|endofstatement|>
<username>: [MESSAGE 1] <|endofstatement|>
GPTie: [RESPONSE TO MESSAGE 1] <|endofstatement|>
Human: [MESSAGE 2] <|endofstatement|>
<username>: [MESSAGE 2] <|endofstatement|>
GPTie: [RESPONSE TO MESSAGE 2] <|endofstatement|>
...
You're a regular discord user, be friendly, casual, and fun, speak with "lol", "haha", and other slang when it seems fitting, and use emojis in your responses in a way that makes sense, avoid repeating yourself at all costs.
You're a regular discord user, be friendly, casual, and fun, speak with "lol", "haha", and other slang when it seems fitting, and use emojis in your responses in a way that makes sense, avoid repeating yourself at all costs. Never say "GPTie" when responding.

@ -1,10 +1,10 @@
Instructions for GPTie:
The conversations are in this format, there can be an arbitrary amount of newlines between chat entries. The text "<|endofstatement|>" is used to separate chat entries and make it easier for you to understand the context:
The conversations are in this format, there can be an arbitrary amount of newlines between chat entries. <username> can be any name, pay attention to who's talking. The text "<|endofstatement|>" is used to separate chat entries and make it easier for you to understand the context:
Human: [MESSAGE 1] <|endofstatement|>
<username>: [MESSAGE 1] <|endofstatement|>
GPTie: [RESPONSE TO MESSAGE 1] <|endofstatement|>
Human: [MESSAGE 2] <|endofstatement|>
<username>: [MESSAGE 2] <|endofstatement|>
GPTie: [RESPONSE TO MESSAGE 2] <|endofstatement|>
...

@ -4,9 +4,12 @@ import traceback
from pathlib import Path
import discord
from discord.ext import commands
import pinecone
from pycord.multicog import apply_multicog
import os
from models.pinecone_service_model import PineconeService
if sys.platform == "win32":
separator = "\\"
else:
@ -21,7 +24,23 @@ from models.openai_model import Model
from models.usage_service_model import UsageService
from models.env_service_model import EnvService
__version__ = "2.1.3"
__version__ = "4.0.1"
"""
The pinecone service is used to store and retrieve conversation embeddings.
"""
try:
PINECONE_TOKEN = os.getenv("PINECONE_TOKEN")
except:
PINECONE_TOKEN = None
pinecone_service = None
if PINECONE_TOKEN:
pinecone.init(api_key=PINECONE_TOKEN, environment="us-west1-gcp")
PINECONE_INDEX = "conversation-embeddings" # This will become unfixed later.
pinecone_service = PineconeService(pinecone.Index(PINECONE_INDEX))
print("Got the pinecone service")
"""
Message queueing for the debug service, defer debug messages to be sent later so we don't hit rate limits.
@ -38,7 +57,7 @@ Settings for the bot
activity = discord.Activity(
type=discord.ActivityType.watching, name="for /help /g, and more!"
)
bot = commands.Bot(intents=discord.Intents.all(), command_prefix="!", activity=activity)
bot = discord.Bot(intents=discord.Intents.all(), command_prefix="!", activity=activity)
usage_service = UsageService(Path(os.environ.get("DATA_DIR", os.getcwd())))
model = Model(usage_service)
@ -82,6 +101,7 @@ async def main():
debug_guild,
debug_channel,
data_path,
pinecone_service=pinecone_service,
)
)
@ -108,6 +128,8 @@ async def main():
)
)
apply_multicog(bot)
await bot.start(os.getenv("DISCORD_TOKEN"))

@ -0,0 +1,60 @@
from pathlib import Path
import os
import re
import discord
from models.usage_service_model import UsageService
from models.openai_model import Model
usage_service = UsageService(Path(os.environ.get("DATA_DIR", os.getcwd())))
model = Model(usage_service)
class Settings_autocompleter:
async def get_settings(ctx: discord.AutocompleteContext):
SETTINGS = [
re.sub("^_", "", key)
for key in model.__dict__.keys()
if key not in model._hidden_attributes
]
return [
parameter
for parameter in SETTINGS
if parameter.startswith(ctx.value.lower())
][:25]
async def get_value(
ctx: discord.AutocompleteContext,
): # Behaves a bit weird if you go back and edit the parameter without typing in a new command
values = {
"max_conversation_length": [str(num) for num in range(1, 500, 2)],
"num_images": [str(num) for num in range(1, 4 + 1)],
"mode": ["temperature", "top_p"],
"model": ["text-davinci-003", "text-curie-001"],
"low_usage_mode": ["True", "False"],
"image_size": ["256x256", "512x512", "1024x1024"],
"summarize_conversation": ["True", "False"],
"welcome_message_enabled": ["True", "False"],
"num_static_conversation_items": [str(num) for num in range(5, 20 + 1)],
"num_conversation_lookback": [str(num) for num in range(5, 15 + 1)],
"summarize_threshold": [str(num) for num in range(800, 3500, 50)],
}
if ctx.options["parameter"] in values.keys():
return [value for value in values[ctx.options["parameter"]]]
else:
await ctx.interaction.response.defer() # defer so the autocomplete in int values doesn't error but rather just says not found
return []
class File_autocompleter:
async def get_openers(ctx: discord.AutocompleteContext):
try:
return [
file
for file in os.listdir("openers")
if file.startswith(ctx.value.lower())
][
:25
] # returns the 25 first files from your current input
except:
return ["No 'openers' folder"]

@ -1,15 +1,53 @@
import discord
from models.env_service_model import EnvService
from typing import Callable
ALLOWED_ROLES = EnvService.get_allowed_roles()
ADMIN_ROLES = EnvService.get_admin_roles()
DALLE_ROLES = EnvService.get_dalle_roles()
GPT_ROLES = EnvService.get_gpt_roles()
ALLOWED_GUILDS = EnvService.get_allowed_guilds()
class Check:
def check_valid_roles() -> Callable:
def check_admin_roles() -> Callable:
async def inner(ctx: discord.ApplicationContext):
if ADMIN_ROLES == [None]:
return True
if not any(role.name.lower() in ADMIN_ROLES for role in ctx.user.roles):
await ctx.defer(ephemeral=True)
await ctx.respond(
f"You don't have permission to use this.",
ephemeral=True,
delete_after=10,
)
return False
return True
return inner
def check_dalle_roles() -> Callable:
async def inner(ctx: discord.ApplicationContext):
if DALLE_ROLES == [None]:
return True
if not any(role.name.lower() in DALLE_ROLES for role in ctx.user.roles):
await ctx.defer(ephemeral=True)
await ctx.respond(
"You don't have permission to use this.",
ephemeral=True,
delete_after=10,
)
return False
return True
return inner
def check_gpt_roles() -> Callable:
async def inner(ctx: discord.ApplicationContext):
if not any(role.name in ALLOWED_ROLES for role in ctx.user.roles):
if GPT_ROLES == [None]:
return True
if not any(role.name.lower() in GPT_ROLES for role in ctx.user.roles):
await ctx.defer(ephemeral=True)
await ctx.respond(
"You don't have permission to use this.",

@ -77,23 +77,100 @@ class EnvService:
return allowed_guilds
@staticmethod
def get_allowed_roles():
# ALLOWED_ROLES is a comma separated list of string roles
def get_admin_roles():
# ADMIN_ROLES is a comma separated list of string roles
# It can also just be one role
# Read these allowed roles and return as a list of strings
try:
allowed_roles = os.getenv("ALLOWED_ROLES")
admin_roles = os.getenv("ADMIN_ROLES")
except:
allowed_roles = None
admin_roles = None
if allowed_roles is None:
raise ValueError(
"ALLOWED_ROLES is not defined properly in the environment file!"
"Please copy your server's role and put it into ALLOWED_ROLES in the .env file."
'For example a line should look like: `ALLOWED_ROLES="Admin"`'
if admin_roles is None:
print(
"ADMIN_ROLES is not defined properly in the environment file!"
"Please copy your server's role and put it into ADMIN_ROLES in the .env file."
'For example a line should look like: `ADMIN_ROLES="Admin"`'
)
print("Defaulting to allowing all users to use admin commands...")
return [None]
allowed_roles = (
allowed_roles.split(",") if "," in allowed_roles else [allowed_roles]
admin_roles = (
admin_roles.lower().split(",")
if "," in admin_roles
else [admin_roles.lower()]
)
return allowed_roles
return admin_roles
@staticmethod
def get_dalle_roles():
# DALLE_ROLES is a comma separated list of string roles
# It can also just be one role
# Read these allowed roles and return as a list of strings
try:
dalle_roles = os.getenv("DALLE_ROLES")
except:
dalle_roles = None
if dalle_roles is None:
print(
"DALLE_ROLES is not defined properly in the environment file!"
"Please copy your server's role and put it into DALLE_ROLES in the .env file."
'For example a line should look like: `DALLE_ROLES="Dalle"`'
)
print("Defaulting to allowing all users to use Dalle commands...")
return [None]
dalle_roles = (
dalle_roles.lower().split(",")
if "," in dalle_roles
else [dalle_roles.lower()]
)
return dalle_roles
@staticmethod
def get_gpt_roles():
# GPT_ROLES is a comma separated list of string roles
# It can also just be one role
# Read these allowed roles and return as a list of strings
try:
gpt_roles = os.getenv("GPT_ROLES")
except:
gpt_roles = None
if gpt_roles is None:
print(
"GPT_ROLES is not defined properly in the environment file!"
"Please copy your server's role and put it into GPT_ROLES in the .env file."
'For example a line should look like: `GPT_ROLES="Gpt"`'
)
print("Defaulting to allowing all users to use GPT commands...")
return [None]
gpt_roles = (
gpt_roles.lower().strip().split(",")
if "," in gpt_roles
else [gpt_roles.lower()]
)
return gpt_roles
@staticmethod
def get_welcome_message():
# WELCOME_MESSAGE is a default string used to welcome new members to the server if GPT3 is not available.
# The string can be blank but this is not advised. If a string cannot be found in the .env file, the below string is used.
# The string is DMd to the new server member as part of an embed.
try:
welcome_message = os.getenv("WELCOME_MESSAGE")
except:
welcome_message = "Hi there! Welcome to our Discord server!"
return welcome_message
@staticmethod
def get_moderations_alert_channel():
# MODERATIONS_ALERT_CHANNEL is a channel id where moderation alerts are sent to
# The string can be blank but this is not advised. If a string cannot be found in the .env file, the below string is used.
try:
moderations_alert_channel = os.getenv("MODERATIONS_ALERT_CHANNEL")
except:
moderations_alert_channel = None
return moderations_alert_channel

@ -20,7 +20,10 @@ class Message:
message = await message_queue.get()
# Send the message
await message.channel.send(message.content)
try:
await message.channel.send(message.content)
except:
pass
# Sleep for a short time before processing the next message
# This will prevent the bot from spamming messages too quickly

@ -0,0 +1,154 @@
import asyncio
import os
import traceback
from datetime import datetime
from pathlib import Path
import discord
from models.openai_model import Model
from models.usage_service_model import UsageService
usage_service = UsageService(Path(os.environ.get("DATA_DIR", os.getcwd())))
model = Model(usage_service)
class Moderation:
def __init__(self, message, timestamp):
self.message = message
self.timestamp = timestamp
@staticmethod
def build_moderation_embed():
# Create a discord embed to send to the user when their message gets moderated
embed = discord.Embed(
title="Your message was moderated",
description="Our automatic moderation systems detected that your message was inappropriate and has been deleted. Please review the rules.",
colour=discord.Colour.red(),
)
# Set the embed thumbnail
embed.set_thumbnail(url="https://i.imgur.com/2oL8JSp.png")
embed.set_footer(
text="If you think this was a mistake, please contact the server admins."
)
return embed
@staticmethod
def build_admin_moderated_message(moderated_message, response_message):
# Create a discord embed to send to the user when their message gets moderated
embed = discord.Embed(
title="A message was moderated in the "
+ moderated_message.message.guild.name
+ " server",
description=f"Message from {moderated_message.message.author.mention} was moderated: {moderated_message.message.content}",
colour=discord.Colour.yellow(),
)
# Get the link to the moderated message
link = f"https://discord.com/channels/{response_message.guild.id}/{response_message.channel.id}/{response_message.id}"
# set the link of the embed
embed.add_field(name="Moderated message link", value=link, inline=False)
return embed
@staticmethod
def determine_moderation_result(text, response):
HATE_THRESHOLD = 0.005
HATE_VIOLENCE_THRESHOLD = 0.05
SELF_HARM_THRESHOLD = 0.05
SEXUAL_THRESHOLD = 0.91
SEXUAL_MINORS_THRESHOLD = 0.1
VIOLENCE_THRESHOLD = 0.08
VIOLENCE_GRAPHIC_THRESHOLD = 0.1
extreme_hatred_qualifiers = [
"i fucking hate",
"fucking hate",
"i fucking despise",
]
thresholds = [
HATE_THRESHOLD,
HATE_VIOLENCE_THRESHOLD,
SELF_HARM_THRESHOLD,
SEXUAL_THRESHOLD,
SEXUAL_MINORS_THRESHOLD,
VIOLENCE_THRESHOLD,
VIOLENCE_GRAPHIC_THRESHOLD,
]
threshold_iterator = [
"hate",
"hate/threatening",
"self-harm",
"sexual",
"sexual/minors",
"violence",
"violence/graphic",
]
category_scores = response["results"][0]["category_scores"]
flagged = response["results"][0]["flagged"]
# Iterate the category scores using the threshold_iterator and compare the values to thresholds
for category, threshold in zip(threshold_iterator, thresholds):
if category == "hate":
if (
"hate" in text.lower()
): # The word "hate" makes the model oversensitive. This is a (bad) workaround.
threshold = 0.1
if any(word in text.lower() for word in extreme_hatred_qualifiers):
threshold = 0.6
if category_scores[category] > threshold:
return True
return False
# This function will be called by the bot to process the message queue
@staticmethod
async def process_moderation_queue(
moderation_queue, PROCESS_WAIT_TIME, EMPTY_WAIT_TIME, moderations_alert_channel
):
while True:
try:
# If the queue is empty, sleep for a short time before checking again
if moderation_queue.empty():
await asyncio.sleep(EMPTY_WAIT_TIME)
continue
# Get the next message from the queue
to_moderate = await moderation_queue.get()
# Check if the current timestamp is greater than the deletion timestamp
if datetime.now().timestamp() > to_moderate.timestamp:
response = await model.send_moderations_request(
to_moderate.message.content
)
moderation_result = Moderation.determine_moderation_result(
to_moderate.message.content, response
)
if moderation_result:
# Take care of the flagged message
response_message = await to_moderate.message.reply(
embed=Moderation.build_moderation_embed()
)
# Do the same response as above but use an ephemeral message
await to_moderate.message.delete()
# Send to the moderation alert channel
if moderations_alert_channel:
await moderations_alert_channel.send(
embed=Moderation.build_admin_moderated_message(
to_moderate, response_message
)
)
else:
await moderation_queue.put(to_moderate)
# Sleep for a short time before processing the next message
# This will prevent the bot from spamming messages too quickly
await asyncio.sleep(PROCESS_WAIT_TIME)
except:
traceback.print_exc()
pass

@ -3,6 +3,7 @@ import functools
import math
import os
import tempfile
import traceback
import uuid
from typing import Tuple, List, Any
@ -23,6 +24,7 @@ class Mode:
class Models:
DAVINCI = "text-davinci-003"
CURIE = "text-curie-001"
EMBEDDINGS = "text-embedding-ada-002"
class ImageSize:
@ -42,7 +44,7 @@ class Model:
)
self._frequency_penalty = 0 # Penalize new tokens based on their existing frequency in the text so far. (Higher frequency = lower probability of being chosen.)
self._best_of = 1 # Number of responses to compare the loglikelihoods of
self._prompt_min_length = 12
self._prompt_min_length = 8
self._max_conversation_length = 100
self._model = Models.DAVINCI
self._low_usage_mode = False
@ -53,6 +55,9 @@ class Model:
self._summarize_conversations = True
self._summarize_threshold = 2500
self.model_max_tokens = 4024
self._welcome_message_enabled = True
self._num_static_conversation_items = 6
self._num_conversation_lookback = 10
try:
self.IMAGE_SAVE_PATH = os.environ["IMAGE_SAVE_PATH"]
@ -77,6 +82,50 @@ class Model:
self.openai_key = os.getenv("OPENAI_TOKEN")
# Use the @property and @setter decorators for all the self fields to provide value checking
@property
def num_static_conversation_items(self):
return self._num_static_conversation_items
@num_static_conversation_items.setter
def num_static_conversation_items(self, value):
value = int(value)
if value < 3:
raise ValueError("num_static_conversation_items must be >= 3")
if value > 20:
raise ValueError(
"num_static_conversation_items must be <= 20, this is to ensure reliability and reduce token wastage!"
)
self._num_static_conversation_items = value
@property
def num_conversation_lookback(self):
return self._num_conversation_lookback
@num_conversation_lookback.setter
def num_conversation_lookback(self, value):
value = int(value)
if value < 3:
raise ValueError("num_conversation_lookback must be >= 3")
if value > 15:
raise ValueError(
"num_conversation_lookback must be <= 15, this is to ensure reliability and reduce token wastage!"
)
self._num_conversation_lookback = value
@property
def welcome_message_enabled(self):
return self._welcome_message_enabled
@welcome_message_enabled.setter
def welcome_message_enabled(self, value):
if value.lower() == "true":
self._welcome_message_enabled = True
elif value.lower() == "false":
self._welcome_message_enabled = False
else:
raise ValueError("Value must be either true or false!")
@property
def summarize_threshold(self):
return self._summarize_threshold
@ -173,9 +222,9 @@ class Model:
value = int(value)
if value < 1:
raise ValueError("Max conversation length must be greater than 1")
if value > 30:
if value > 500:
raise ValueError(
"Max conversation length must be less than 30, this will start using credits quick."
"Max conversation length must be less than 500, this will start using credits quick."
)
self._max_conversation_length = value
@ -292,6 +341,53 @@ class Model:
)
self._prompt_min_length = value
async def valid_text_request(self, response):
try:
tokens_used = int(response["usage"]["total_tokens"])
await self.usage_service.update_usage(tokens_used)
except:
raise ValueError(
"The API returned an invalid response: "
+ str(response["error"]["message"])
)
async def send_embedding_request(self, text):
async with aiohttp.ClientSession() as session:
payload = {
"model": Models.EMBEDDINGS,
"input": text,
}
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.openai_key}",
}
async with session.post(
"https://api.openai.com/v1/embeddings", json=payload, headers=headers
) as resp:
response = await resp.json()
try:
return response["data"][0]["embedding"]
except Exception as e:
print(response)
traceback.print_exc()
return
async def send_moderations_request(self, text):
# Use aiohttp to send the above request:
async with aiohttp.ClientSession() as session:
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.openai_key}",
}
payload = {"input": text}
async with session.post(
"https://api.openai.com/v1/moderations",
headers=headers,
json=payload,
) as response:
return await response.json()
async def send_summary_request(self, prompt):
"""
Sends a summary request to the OpenAI API
@ -299,7 +395,7 @@ class Model:
summary_request_text = []
summary_request_text.append(
"The following is a conversation instruction set and a conversation"
" between two people, a Human, and GPTie. Firstly, determine the Human's name from the conversation history, then summarize the conversation. Do not summarize the instructions for GPTie, only the conversation. Summarize the conversation in a detailed fashion. If Human mentioned their name, be sure to mention it in the summary. Pay close attention to things the Human has told you, such as personal details."
" between two people, a <username>, and GPTie. Firstly, determine the <username>'s name from the conversation history, then summarize the conversation. Do not summarize the instructions for GPTie, only the conversation. Summarize the conversation in a detailed fashion. If <username> mentioned their name, be sure to mention it in the summary. Pay close attention to things the <username> has told you, such as personal details."
)
summary_request_text.append(prompt + "\nDetailed summary of conversation: \n")
@ -307,9 +403,6 @@ class Model:
tokens = self.usage_service.count_tokens(summary_request_text)
print("The summary request will use " + str(tokens) + " tokens.")
print(f"{self.max_tokens - tokens} is the remaining that we will use.")
async with aiohttp.ClientSession() as session:
payload = {
"model": Models.DAVINCI,
@ -330,10 +423,10 @@ class Model:
) as resp:
response = await resp.json()
await self.valid_text_request(response)
print(response["choices"][0]["text"])
tokens_used = int(response["usage"]["total_tokens"])
self.usage_service.update_usage(tokens_used)
return response
async def send_request(
@ -354,26 +447,29 @@ class Model:
# Validate that all the parameters are in a good state before we send the request
if len(prompt) < self.prompt_min_length:
raise ValueError(
"Prompt must be greater than 12 characters, it is currently "
"Prompt must be greater than 8 characters, it is currently "
+ str(len(prompt))
)
print("The prompt about to be sent is " + prompt)
print(
f"Overrides -> temp:{temp_override}, top_p:{top_p_override} frequency:{frequency_penalty_override}, presence:{presence_penalty_override}"
)
async with aiohttp.ClientSession() as session:
payload = {
"model": self.model,
"prompt": prompt,
"temperature": self.temp if not temp_override else temp_override,
"top_p": self.top_p if not top_p_override else top_p_override,
"temperature": self.temp if temp_override is None else temp_override,
"top_p": self.top_p if top_p_override is None else top_p_override,
"max_tokens": self.max_tokens - tokens
if not max_tokens_override
else max_tokens_override,
"presence_penalty": self.presence_penalty
if not presence_penalty_override
if presence_penalty_override is None
else presence_penalty_override,
"frequency_penalty": self.frequency_penalty
if not frequency_penalty_override
if frequency_penalty_override is None
else frequency_penalty_override,
"best_of": self.best_of if not best_of_override else best_of_override,
}
@ -382,14 +478,16 @@ class Model:
"https://api.openai.com/v1/completions", json=payload, headers=headers
) as resp:
response = await resp.json()
print(response)
# print(f"Payload -> {payload}")
# print(f"Response -> {response}")
# Parse the total tokens used for this request and response pair from the response
tokens_used = int(response["usage"]["total_tokens"])
self.usage_service.update_usage(tokens_used)
await self.valid_text_request(response)
return response
async def send_image_request(self, prompt, vary=None) -> tuple[File, list[Any]]:
async def send_image_request(
self, ctx, prompt, vary=None
) -> tuple[File, list[Any]]:
# Validate that all the parameters are in a good state before we send the request
words = len(prompt.split(" "))
if words < 3 or words > 75:
@ -399,7 +497,7 @@ class Model:
)
# print("The prompt about to be sent is " + prompt)
self.usage_service.update_usage_image(self.image_size)
await self.usage_service.update_usage_image(self.image_size)
response = None
@ -436,7 +534,6 @@ class Model:
response = await resp.json()
print(response)
print("JUST PRINTED THE RESPONSE")
image_urls = []
for result in response["data"]:
@ -513,17 +610,21 @@ class Model:
)
# Print the filesize of new_im, in mega bytes
image_size = os.path.getsize(temp_file.name) / 1000000
image_size = os.path.getsize(temp_file.name) / 1048576
if ctx.guild is None:
guild_file_limit = 8
else:
guild_file_limit = ctx.guild.filesize_limit / 1048576
# If the image size is greater than 8MB, we can't return this to the user, so we will need to downscale the
# image and try again
safety_counter = 0
while image_size > 8:
while image_size > guild_file_limit:
safety_counter += 1
if safety_counter >= 3:
break
print(
f"Image size is {image_size}MB, which is too large for discord. Downscaling and trying again"
f"Image size is {image_size}MB, which is too large for this server {guild_file_limit}MB. Downscaling and trying again"
)
# We want to do this resizing asynchronously, so that it doesn't block the main thread during the resize.
# We can use the asyncio.run_in_executor method to do this

@ -0,0 +1,67 @@
import pinecone
class PineconeService:
def __init__(self, index: pinecone.Index):
self.index = index
def upsert_basic(self, text, embeddings):
self.index.upsert([(text, embeddings)])
def get_all_for_conversation(self, conversation_id: int):
response = self.index.query(
top_k=100, filter={"conversation_id": conversation_id}
)
return response
async def upsert_conversation_embedding(
self, model, conversation_id: int, text, timestamp
):
# If the text is > 500 characters, we need to split it up into multiple entries.
first_embedding = None
if len(text) > 500:
# Split the text into 500 character chunks
chunks = [text[i : i + 500] for i in range(0, len(text), 500)]
for chunk in chunks:
print("The split chunk is ", chunk)
# Create an embedding for the split chunk
embedding = await model.send_embedding_request(chunk)
if not first_embedding:
first_embedding = embedding
self.index.upsert(
[(chunk, embedding)],
metadata={
"conversation_id": conversation_id,
"timestamp": timestamp,
},
)
return first_embedding
else:
embedding = await model.send_embedding_request(text)
self.index.upsert(
[
(
text,
embedding,
{"conversation_id": conversation_id, "timestamp": timestamp},
)
]
)
return embedding
def get_n_similar(self, conversation_id: int, embedding, n=10):
response = self.index.query(
vector=embedding,
top_k=n,
include_metadata=True,
filter={"conversation_id": conversation_id},
)
print(response)
relevant_phrases = [
(match["id"], match["metadata"]["timestamp"])
for match in response["matches"]
]
# Sort the relevant phrases based on the timestamp
relevant_phrases.sort(key=lambda x: x[1])
return relevant_phrases

@ -1,6 +1,7 @@
import os
from pathlib import Path
import aiofiles
from transformers import GPT2TokenizerFast
@ -14,32 +15,32 @@ class UsageService:
f.close()
self.tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
def update_usage(self, tokens_used):
async def update_usage(self, tokens_used):
tokens_used = int(tokens_used)
price = (tokens_used / 1000) * 0.02
print("This request cost " + str(price) + " credits")
usage = self.get_usage()
usage = await self.get_usage()
print("The current usage is " + str(usage) + " credits")
with self.usage_file_path.open("w") as f:
f.write(str(usage + float(price)))
f.close()
def set_usage(self, usage):
with self.usage_file_path.open("w") as f:
f.write(str(usage))
f.close()
def get_usage(self):
with self.usage_file_path.open("r") as f:
usage = float(f.read().strip())
f.close()
# Do the same as above but with aiofiles
async with aiofiles.open(self.usage_file_path, "w") as f:
await f.write(str(usage + float(price)))
await f.close()
async def set_usage(self, usage):
async with aiofiles.open(self.usage_file_path, "w") as f:
await f.write(str(usage))
await f.close()
async def get_usage(self):
async with aiofiles.open(self.usage_file_path, "r") as f:
usage = float((await f.read()).strip())
await f.close()
return usage
def count_tokens(self, input):
res = self.tokenizer(input)["input_ids"]
return len(res)
def update_usage_image(self, image_size):
async def update_usage_image(self, image_size):
# 1024×1024 $0.020 / image
# 512×512 $0.018 / image
# 256×256 $0.016 / image
@ -53,8 +54,8 @@ class UsageService:
else:
raise ValueError("Invalid image size")
usage = self.get_usage()
usage = await self.get_usage()
with self.usage_file_path.open("w") as f:
f.write(str(usage + float(price)))
f.close()
async with aiofiles.open(self.usage_file_path, "w") as f:
await f.write(str(usage + float(price)))
await f.close()

@ -50,3 +50,58 @@ class User:
def __str__(self):
return self.__repr__()
class Thread:
def __init__(self, id):
self.id = id
self.history = []
self.count = 0
# These user objects should be accessible by ID, for example if we had a bunch of user
# objects in a list, and we did `if 1203910293001 in user_list`, it would return True
# if the user with that ID was in the list
def __eq__(self, other):
return self.id == other.id
def __hash__(self):
return hash(self.id)
def __repr__(self):
return f"Thread(id={self.id}, history={self.history})"
def __str__(self):
return self.__repr__()
class EmbeddedConversationItem:
def __init__(self, text, timestamp):
self.text = text
self.timestamp = int(timestamp)
def __repr__(self):
return self.text
def __str__(self):
return self.__repr__()
def __eq__(self, other):
return self.text == other.text and self.timestamp == other.timestamp
def __hash__(self):
return hash(self.text) + hash(self.timestamp)
def __lt__(self, other):
return self.timestamp < other.timestamp
def __gt__(self, other):
return self.timestamp > other.timestamp
def __le__(self, other):
return self.timestamp <= other.timestamp
def __ge__(self, other):
return self.timestamp >= other.timestamp
def __ne__(self, other):
return not self.__eq__(other)

@ -0,0 +1 @@
I want you to act as a composer. I will provide the lyrics to a song and you will create music for it. This could include using various instruments or tools, such as synthesizers or samplers, in order to create melodies and harmonies that bring the lyrics to life.

@ -0,0 +1 @@
I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand.

@ -0,0 +1 @@
I want you to act as an English translator, spelling corrector and improver. I will speak to you in any language and you will detect the language, translate it and answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with more beautiful and elegant, upper level English words and sentences. Keep the meaning same, but make them more literary. I want you to only reply the correction, the improvements and nothing else, do not write explanations.

@ -0,0 +1 @@
I want you to act as an essay writer. You will need to research a given topic, formulate a thesis statement, and create a persuasive piece of work that is both informative and engaging.

@ -0,0 +1 @@
I want you to act as a javascript console. I will type commands and you will reply with what the javascript console should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. do not write explanations. do not type commands unless I instruct you to do so. when i need to tell you something in english, i will do so by putting text inside curly brackets {like this}.

@ -0,0 +1 @@
I want you to act as a life coach. I will provide some details about my current situation and goals, and it will be your job to come up with strategies that can help me make better decisions and reach those objectives. This could involve offering advice on various topics, such as creating plans for achieving success or dealing with difficult emotions.

@ -0,0 +1 @@
I want you to act as a motivational coach. I will provide you with some information about someone's goals and challenges, and it will be your job to come up with strategies that can help this person achieve their goals. This could involve providing positive affirmations, giving helpful advice or suggesting activities they can do to reach their end goal.

@ -0,0 +1 @@
I want you to act as a motivational speaker. Put together words that inspire action and make people feel empowered to do something beyond their abilities. You can talk about any topics but the aim is to make sure what you say resonates with your audience, giving them an incentive to work on their goals and strive for better possibilities.

@ -0,0 +1 @@
I want you to act as a novelist. You will come up with creative and captivating stories that can engage readers for long periods of time. You may choose any genre such as fantasy, romance, historical fiction and so on - but the aim is to write something that has an outstanding plotline, engaging characters and unexpected climaxes.

@ -0,0 +1 @@
I want you to act as a personal trainer. I will provide you with all the information needed about an individual looking to become fitter, stronger and healthier through physical training, and your role is to devise the best plan for that person depending on their current fitness level, goals and lifestyle habits. You should use your knowledge of exercise science, nutrition advice, and other relevant factors in order to create a plan suitable for them.

@ -0,0 +1 @@
I want you to act as an interviewer. I will be the candidate and you will ask me the interview questions for the position specified by the user. I want you to only reply as the interviewer. Do not write all the conservation at once. I want you to only do the interview with me. Ask me the questions and wait for my answers. Do not write explanations. Ask me the questions one by one like an interviewer does and wait for my answers. The user will first specify the position.

@ -0,0 +1 @@
I want you to act like a Python interpreter. I will give you Python code, and you will execute it. Do not provide any explanations. Do not respond with anything except the output of the code. The first code is: "print('hello world!')"

@ -0,0 +1 @@
I want you to act as a storyteller. You will come up with entertaining stories that are engaging, imaginative and captivating for the audience. It can be fairy tales, educational stories or any other type of stories which has the potential to capture people's attention and imagination. Depending on the target audience, you may choose specific themes or topics for your storytelling session e.g., if its children then you can talk about animals; If its adults then history-based tales might engage them better etc.

@ -0,0 +1 @@
I want you to act as a tech reviewer. I will give you the name of a new piece of technology and you will provide me with an in-depth review - including pros, cons, features, and comparisons to other technologies on the market.

@ -0,0 +1 @@
I want you to act as a text based adventure game. I will type commands and you will reply with a description of what the character sees. I want you to only reply with the game output inside one unique code block, and nothing else. do not write explanations. do not type commands unless I instruct you to do so. when i need to tell you something in english, i will do so by putting text inside curly brackets {like this}. my first command is wake up.

@ -0,0 +1 @@
I want you to act as a travel guide. I will write you my location and you will suggest a place to visit near my location. In some cases, I will also give you the type of places I will visit. You will also suggest me places of similar type that are close to my first location. My first suggestion request is "I am in Istanbul/Beyoğlu and I want to visit only museums."

@ -0,0 +1 @@
I want you to act as a UX/UI developer. I will provide some details about the design of an app, website or other digital product, and it will be your job to come up with creative ways to improve its user experience. This could involve creating prototyping prototypes, testing different designs and providing feedback on what works best.

@ -23,6 +23,9 @@ dependencies = [
"python-dotenv",
"requests",
"transformers",
"pycord-multicog",
"aiofiles",
"pinecone-client"
]
dynamic = ["version"]
[project.scripts]
@ -49,4 +52,4 @@ include = [
#packages = ["cogs", "gpt3discord.py", "models"]
[[tool.hatch.envs.test.matrix]]
python = ["39"]
python = ["39"]

@ -3,3 +3,6 @@ py-cord==2.3.2
python-dotenv==0.21.0
requests==2.28.1
transformers==4.25.1
pycord-multicog==1.0.2
aiofiles==22.1.0
pinecone-client==2.1.0

@ -1,6 +1,16 @@
OPENAI_TOKEN="<openai_api_token>"
DISCORD_TOKEN="<discord_bot_token>"
DEBUG_GUILD="755420092027633774"
DEBUG_CHANNEL="907974109084942396"
ALLOWED_GUILDS="971268468148166697,971268468148166697"
ALLOWED_ROLES="Admin,gpt"
OPENAI_TOKEN = "<openai_api_token>"
DISCORD_TOKEN = "<discord_bot_token>"
DEBUG_GUILD = "974519864045756446" # discord_server_id
DEBUG_CHANNEL = "977697652147892304" # discord_channel_id
ALLOWED_GUILDS = "971268468148166697,971268468148166697"
# People with the roles in ADMIN_ROLES can use admin commands like /clear-local, etc.
ADMIN_ROLES = "Admin,Owner"
# People with the roles in DALLE_ROLES can use commands like /dalle draw or /dalle optimize
DALLE_ROLES = "Admin,Openai,Dalle,gpt"
# People with the roles in GPT_ROLES can use commands like /gpt ask or /gpt converse
GPT_ROLES = "openai,gpt"
WELCOME_MESSAGE = "Hi There! Welcome to our Discord server. We hope you'll enjoy our server and we look forward to engaging with you!" # This is a fallback message if gpt3 fails to generate a welcome message.
