- We were not putting the modules in a dir, so they were not being included in the install
- Fix the COPY directives
- Add a `find` run to show the src layout so someone building / looking at logs can inspect it easily
Ran it and got a new error, which I will open an issue for using your recent additions ...
- Add a pyproject.toml to install all modules and packages
- Use hatch to build + install
- Still only claim support for 3.9
- We don't specify versions in pyproject.toml on purpose so people can have some flexibility there
- If they want the tested pinned versions they can use requirements.txt like Docker + CI do ...
- This will allow the package to be uploaded to PyPI if you'd like now :)
- `git mv main.py gpt3discord.py` as main.py is too common a name (in case someone installs to a shared Python env)
- Move initialization code into an init() function for the entry point script to call (see the sketch after this list)
- Update `Dockerfile` to install via pyproject.toml
- Both docker + build CI install requirements.txt so we get the known versions of dependencies that work
- Update README now that installing takes fewer steps
- Side note: we will have to track versions now and increment them as we see fit - I'd like to discuss using GitHub releases here ...
- Sort requirements.txt alphabetically ...
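
For reference, a minimal sketch of what the entry-point split might look like - the module name `gpt3discord`, the placeholder bot construction, and the `init` name are assumptions here, not the final code:

```python
# gpt3discord.py - illustrative sketch only (names and bot setup are assumed)
import asyncio
import os

import discord


async def main():
    # Placeholder construction; the real code builds the bot, registers cogs, etc.
    bot = discord.Client(intents=discord.Intents.default())
    await bot.start(os.getenv("DISCORD_TOKEN"))


def init():
    # Console-script entry point declared in pyproject.toml, so importing the
    # module no longer starts the bot as a side effect.
    asyncio.get_event_loop().run_until_complete(main())


if __name__ == "__main__":
    init()
```

pyproject.toml would then map the command to that function under `[project.scripts]`, e.g. `gpt3discord = "gpt3discord:init"` (assuming the module really lands at the top level).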
Test:
- Add Python 3.9 CI to install our module
- Paves the way to add more CI to help me test that the module works in 3.11
- Build in a local venv
  - `python3.9 -m venv /tmp/foovenv`
  - `/tmp/foovenv/bin/pip install -r requirements.txt`
  - `/tmp/foovenv/bin/pip install .`
- Build in docker
  - `docker build -t gpt3discord .`
- Add a Dockerfile so people can run this bot in a docker container
- Stuck with the recommendation of running with python3.9 for now
- Will later test with 3.11 + supply fixes if I get this working ...
- Added a DATA_DIR env param that chooses the directory where data we want to persist across Docker container restarts gets written
- We default to CWD like the code does today - we just explicitly pass it to functions / classes (see the sketch below)
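
A minimal sketch of how that could be wired up, assuming a single resolved path gets handed to whatever writes the pid file / persistent state (the helper name below is illustrative):

```python
import os
from pathlib import Path

# Resolve once at startup; defaults to the current working directory,
# matching today's behaviour, and is passed explicitly to callers.
DATA_DIR = Path(os.environ.get("DATA_DIR", os.getcwd()))


def data_path(filename: str) -> Path:
    # Illustrative helper: persistent files (pid file, saved state, ...)
    # are joined onto DATA_DIR instead of written relative to the CWD.
    return DATA_DIR / filename
```

e.g. `data_path("bot.pid")` instead of a bare `"bot.pid"`.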
Test:
- `docker build -t gpt3discord .`
```
crl-m1:GPT3Discord cooper$ docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
gpt3discord   latest    6d2832af2450   69 seconds ago   356MB
```
- Try running it ... I would guess that if I had correct tokens, things would work ...
- To do so, I plan to bind mount over /bin/.env in my Docker container when I run this ...
```
crl-m1:GPT3Discord cooper$ docker run gpt3discord
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Downloading: 100%|██████████| 1.04M/1.04M [00:02<00:00, 516kB/s]
Downloading: 100%|██████████| 456k/456k [00:01<00:00, 319kB/s]
Downloading: 100%|██████████| 1.36M/1.36M [00:03<00:00, 443kB/s]
Downloading: 100%|██████████| 665/665 [00:00<00:00, 740kB/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/discord/http.py", line 413, in static_login
    data = await self.request(Route("GET", "/users/@me"))
  File "/usr/local/lib/python3.9/site-packages/discord/http.py", line 366, in request
    raise HTTPException(response, data)
discord.errors.HTTPException: 401 Unauthorized (error code: 0): 401: Unauthorized
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/bin/gpt3discord", line 79, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/bin/gpt3discord", line 63, in main
    await bot.start(os.getenv("DISCORD_TOKEN"))
  File "/usr/local/lib/python3.9/site-packages/discord/client.py", line 658, in start
    await self.login(token)
  File "/usr/local/lib/python3.9/site-packages/discord/client.py", line 514, in login
    data = await self.http.static_login(token.strip())
  File "/usr/local/lib/python3.9/site-packages/discord/http.py", line 417, in static_login
    raise LoginFailure("Improper token has been passed.") from exc
discord.errors.LoginFailure: Improper token has been passed.
Wrote PID to file the file bot.pid
The debug channel and guild IDs are 755420092027633774 and 907974109084942396
Improper token has been passed.
Removing PID file
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0xffff721a2dc0>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0xffff718fe0a0>, 170230.336548951)]']
connector: <aiohttp.connector.TCPConnector object at 0xffff721a2fd0>
```