[Feature] Run different projects with docker-compose profiles
This commit introduces Docker Compose profiles to run different project variants with different feature sets. With the profiles feature, we can now easily manage and switch between project configurations.

Changes made in this commit include:

- Added Docker Compose profiles for organizing project configurations
- Updated the README file to document the usage of profiles
- Modified the docker-compose.yml file to include multiple profile definitions

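The mechanism the commit relies on can be sketched with a minimal compose file (the service names here are illustrative, not taken from this repository): a service tagged with `profiles` starts only when one of its profiles is activated on the command line, while untagged services start on every `up`.

```yaml
version: '3'
services:
  web:                   # no profiles -> starts on every `docker compose up`
    image: nginx:alpine
  debugger:
    image: busybox
    profiles:
      - debug            # starts only with `docker compose --profile debug up`
```

With this file, `docker compose up` starts only `web`, while `docker compose --profile debug up` starts both services.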
k997 committed Aug 6, 2023
1 parent 43809c1 commit f5f87d1
Showing 2 changed files with 92 additions and 99 deletions.
8 changes: 3 additions & 5 deletions README.md
@@ -147,7 +147,7 @@ python main.py

### Installation Method II: Using Docker

1. ChatGPT only (recommended for most people; equivalent to docker-compose option 1)
1. ChatGPT only (recommended for most people; equivalent to the docker-compose `nolocal` option)
[ ![ basic ] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master )] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml )
[ ![ basiclatex ] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master )] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml )
[ ![ basicaudio ] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master )] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml )
@@ -169,16 +169,14 @@ P.S. If you need the Latex-dependent plugin features, please see the Wiki. In addition, you can also
[ ![ chatglm ] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml/badge.svg?branch=master )] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml )

``` sh
# Modify docker-compose.yml: keep option 2 and delete the other options, then edit option 2's configuration following the comments in the file
docker-compose up
docker-compose --profile chatglm up
```

3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
[ ![ jittorllms ] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml/badge.svg?branch=master )] ( https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml )

``` sh
# Modify docker-compose.yml: keep option 3 and delete the other options, then edit option 3's configuration following the comments in the file
docker-compose up
docker-compose --profile rwkv up
```
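The `--profile` flags in the updated commands rely on Compose's service-selection rule: a service with no `profiles` key always starts, while a service tagged with profiles starts only when one of them is enabled. A simplified model of that rule (illustrative Python, not Compose's actual implementation; service names taken from this commit's docker-compose.yml):

```python
def active_services(services, enabled_profiles):
    """Simplified model of Docker Compose profile filtering:
    a service with no profiles always runs; a service with
    profiles runs only if at least one of them is enabled."""
    active = []
    for name, profiles in services.items():
        if not profiles or set(profiles) & set(enabled_profiles):
            active.append(name)
    return active

services = {
    "gpt_academic_nolocalllms": ["nolocal"],
    "gpt_academic_with_chatglm": ["chatglm"],
    "gpt_academic_with_rwkv": ["rwkv"],
}

print(active_services(services, ["chatglm"]))  # -> ['gpt_academic_with_chatglm']
```

Because every service in the new docker-compose.yml carries a profile, a plain `docker compose up` with no `--profile` flag starts nothing — selecting a profile is now mandatory.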


183 changes: 89 additions & 94 deletions docker-compose.yml
@@ -1,23 +1,49 @@
# [Delete this line when done] Please choose one of the options below, delete the other options, and finally run docker-compose up
# Please choose one of the following options, modify the environment variables in `x-environment` as needed, then run `docker compose --profile <profile name> up`.
#
# Profile options: [ nolocal, chatglm, rwkv, latex, audio ]
#
# 1. nolocal: ChatGPT, newbing and other remote services only
# 2. chatglm: ChatGLM local model
# 3. rwkv: ChatGPT + LLAMA + Pangu + RWKV local models
# 4. latex: ChatGPT + Latex
# 5. audio: ChatGPT + voice assistant (read docs/use_audio.md first)

x-environment: &env
  API_KEY: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  USE_PROXY: 'True'
  proxies: '{ "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", }'
  WEB_PORT: '22303'
  ADD_WAIFU: 'True'
  THEME: 'Chuanhu-Small-and-Beautiful'

  ENABLE_AUDIO: 'False'
  ALIYUN_APPKEY: 'RoP1ZrM84DnAFkZK'
  ALIYUN_TOKEN: 'f37f30e0f9934c34a992f6f64f7eba4f'
  # (no need to set) ALIYUN_ACCESSKEY: 'LTAI5q6BrFUzoRXVGUWnekh1'
  # (no need to set) ALIYUN_SECRET: 'eHmI20AVWIaQZ0CiTD2bGQVsaP9i68'
  # DEFAULT_WORKER_NUM: '10'
  # AUTHENTICATION: '[("username", "passwd"), ("username2", "passwd2")]'


# GPU passthrough; nvidia0 refers to GPU 0
x-devices: &gpu
  - /dev/nvidia0:/dev/nvidia0

# # ===================================================
# # [Option 1] If you do not need to run a local model (ChatGPT, newbing and other remote services only)
# # ===================================================
version: '3'
services:
  # # ===================================================
  # # [Option 1] If you do not need to run a local model (ChatGPT, newbing and other remote services only)
  # # ===================================================
  gpt_academic_nolocalllms:
    image: ghcr.io/binary-husky/gpt_academic_nolocal:master  # (Auto Built by Dockerfile: docs/GithubAction+NoLocal)
    profiles:
      - nolocal
    environment:
      # Please read `config.py` for all configuration options
      API_KEY: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      USE_PROXY: 'True'
      proxies: '{ "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", }'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "newbing"]'
      WEB_PORT: '22303'
      ADD_WAIFU: 'True'
      # DEFAULT_WORKER_NUM: '10'
      # AUTHENTICATION: '[("username", "passwd"), ("username2", "passwd2")]'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "newbing"]'

      <<: *env

    # Merge with the host network
    network_mode: "host"
@@ -26,87 +52,68 @@ services:
    command: >
      bash -c "python3 -u main.py"
  # ## ===================================================
  # ## [Option 2] If you need to run the ChatGLM local model
  # ## ===================================================

# ## ===================================================
# ## [Option 2] If you need to run the ChatGLM local model
# ## ===================================================
version: '3'
services:
  gpt_academic_with_chatglm:
    image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master  # (Auto Built by Dockerfile: docs/Dockerfile+ChatGLM)
    image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master  # (Auto Built by Dockerfile: docs/Dockerfile+ChatGLM)
    profiles:
      - chatglm
    environment:
      # Please read `config.py` for all configuration options
      API_KEY: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      USE_PROXY: 'True'
      proxies: '{ "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", }'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["chatglm", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"]'
      LOCAL_MODEL_DEVICE: 'cuda'
      DEFAULT_WORKER_NUM: '10'
      WEB_PORT: '12303'
      ADD_WAIFU: 'True'
      # AUTHENTICATION: '[("username", "passwd"), ("username2", "passwd2")]'

      # GPU usage; nvidia0 refers to GPU 0
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["chatglm", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"]'
      LOCAL_MODEL_DEVICE: 'cuda'

      <<: *env

    runtime: nvidia
    devices:
      - /dev/nvidia0:/dev/nvidia0

    devices: *gpu

    # Merge with the host network
    network_mode: "host"
    command: >
      bash -c "python3 -u main.py"
  # ## ===================================================
  # ## [Option 3] If you need to run ChatGPT + LLAMA + Pangu + RWKV local models
  # ## ===================================================
version: '3'
services:
  # ## ===================================================
  # ## [Option 3] If you need to run ChatGPT + LLAMA + Pangu + RWKV local models
  # ## ===================================================

  gpt_academic_with_rwkv:
    image: ghcr.io/binary-husky/gpt_academic_jittorllms:master
    profiles:
      - rwkv
    environment:
      # Please read `config.py` for all configuration options
      API_KEY: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      USE_PROXY: 'True'
      proxies: '{ "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", }'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "newbing", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]'
      LOCAL_MODEL_DEVICE: 'cuda'
      DEFAULT_WORKER_NUM: '10'
      WEB_PORT: '12305'
      ADD_WAIFU: 'True'
      # AUTHENTICATION: '[("username", "passwd"), ("username2", "passwd2")]'

      # GPU usage; nvidia0 refers to GPU 0
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "newbing", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]'
      LOCAL_MODEL_DEVICE: 'cuda'

      <<: *env

    runtime: nvidia
    devices:
      - /dev/nvidia0:/dev/nvidia0

    devices: *gpu

    # Merge with the host network
    network_mode: "host"

    # Pull the latest code without using a proxy network
    command: >
      python3 -u main.py
# # ===================================================
# # [Option 4] ChatGPT + Latex
# # ===================================================

  # # ===================================================
  # # [Option 4] ChatGPT + Latex
  # # ===================================================
version: '3'
services:
  gpt_academic_with_latex:
    image: ghcr.io/binary-husky/gpt_academic_with_latex:master  # (Auto Built by Dockerfile: docs/GithubAction+NoLocal+Latex)
    image: ghcr.io/binary-husky/gpt_academic_with_latex:master  # (Auto Built by Dockerfile: docs/GithubAction+NoLocal+Latex)
    profiles:
      - latex
    environment:
      # Please read `config.py` for all configuration options
      API_KEY: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
      USE_PROXY: 'True'
      proxies: '{ "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", }'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "gpt-4"]'
      LOCAL_MODEL_DEVICE: 'cuda'
      DEFAULT_WORKER_NUM: '10'
      WEB_PORT: '12303'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "gpt-4"]'
      LOCAL_MODEL_DEVICE: 'cuda'

      <<: *env

    # Merge with the host network
    network_mode: "host"
@@ -115,36 +122,24 @@ services:
    command: >
      bash -c "python3 -u main.py"
# # ===================================================
# # [Option 5] ChatGPT + voice assistant (read docs/use_audio.md first)
# # ===================================================

  # # ===================================================
  # # [Option 5] ChatGPT + voice assistant (read docs/use_audio.md first)
  # # ===================================================
version: '3'
services:
  gpt_academic_with_audio:
    image: ghcr.io/binary-husky/gpt_academic_audio_assistant:master
    profiles:
      - audio
    environment:
      # Please read `config.py` for all configuration options
      API_KEY: 'fk195831-IdP0Pb3W6DCMUIbQwVX6MsSiyxwqybyS'
      USE_PROXY: 'False'
      proxies: 'None'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "gpt-4"]'
      ENABLE_AUDIO: 'True'
      LOCAL_MODEL_DEVICE: 'cuda'
      DEFAULT_WORKER_NUM: '20'
      WEB_PORT: '12343'
      ADD_WAIFU: 'True'
      THEME: 'Chuanhu-Small-and-Beautiful'
      ALIYUN_APPKEY: 'RoP1ZrM84DnAFkZK'
      ALIYUN_TOKEN: 'f37f30e0f9934c34a992f6f64f7eba4f'
      # (no need to set) ALIYUN_ACCESSKEY: 'LTAI5q6BrFUzoRXVGUWnekh1'
      # (no need to set) ALIYUN_SECRET: 'eHmI20AVWIaQZ0CiTD2bGQVsaP9i68'
      LLM_MODEL: 'gpt-3.5-turbo'
      AVAIL_LLM_MODELS: '["gpt-3.5-turbo", "gpt-4"]'
      LOCAL_MODEL_DEVICE: 'cuda'

      <<: *env

    # Merge with the host network
    network_mode: "host"

    # Pull the latest code without using a proxy network
    command: >
      bash -c "python3 -u main.py"
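The `x-environment: &env` block and the `<<: *env` lines in the diff above use YAML anchors and the merge key: shared settings are defined once and merged into each service's `environment` mapping at load time. A scaled-down sketch of that expansion, assuming PyYAML is available (its loaders resolve anchors and `<<` merge keys):

```python
import yaml  # PyYAML; resolves YAML anchors and `<<` merge keys on load

# Scaled-down version of the pattern in the new docker-compose.yml:
# shared settings live under an anchored extension field, and a
# service pulls them in with the `<<` merge key.
doc = """
x-environment: &env
  USE_PROXY: 'True'
  WEB_PORT: '22303'

services:
  gpt_academic_nolocalllms:
    environment:
      LLM_MODEL: 'gpt-3.5-turbo'
      <<: *env
"""

data = yaml.safe_load(doc)
env = data["services"]["gpt_academic_nolocalllms"]["environment"]
print(env)  # the merged mapping contains both the shared and the local keys
```

Keys set directly in the mapping take precedence over merged ones, so a service can still override any shared value by listing it alongside `<<: *env`.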
