
Commit e9ba084

Refactor multimodal dependencies (opea-project#1527)

* Refactor multimodal dependencies:
  - move the asr/lvm/tts dependency images under third_parties
  - use the unzipped video-llama path as the default

Signed-off-by: Spycsh <sihan.chen@intel.com>

1 parent cfc43ee commit e9ba084

89 files changed: 323 additions & 279 deletions


.github/workflows/docker/compose/asr-compose.yaml

Lines changed: 0 additions & 8 deletions

@@ -7,11 +7,3 @@ services:
     build:
       dockerfile: comps/asr/src/Dockerfile
     image: ${REGISTRY:-opea}/asr:${TAG:-latest}
-  whisper:
-    build:
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile
-    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
-  whisper-gaudi:
-    build:
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu
-    image: ${REGISTRY:-opea}/whisper-gaudi:${TAG:-latest}

.github/workflows/docker/compose/lvms-compose.yaml

Lines changed: 0 additions & 32 deletions

@@ -2,40 +2,8 @@
 # SPDX-License-Identifier: Apache-2.0

 # this file should be run in the root of the repo
-# Deprecated original wrappers opea/lvm-tgi, opea/lvm-llava-svc, opea/lvm-video-llama
-# and merged functionalities in opea/lvm
-# Original video-llama-lvm-server renamed as lvm-video-llama
 services:
   lvm:
     build:
       dockerfile: comps/lvms/src/Dockerfile
     image: ${REGISTRY:-opea}/lvm:${TAG:-latest}
-  lvm-video-llama:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/video-llama/Dockerfile
-    image: ${REGISTRY:-opea}/lvm-video-llama:${TAG:-latest}
-  lvm-llava:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/llava/Dockerfile
-    image: ${REGISTRY:-opea}/lvm-llava:${TAG:-latest}
-  llava-gaudi:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/llava/Dockerfile.intel_hpu
-    image: ${REGISTRY:-opea}/llava-gaudi:${TAG:-latest}
-  lvm-predictionguard:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/predictionguard/Dockerfile
-    image: ${REGISTRY:-opea}/lvm-predictionguard:${TAG:-latest}
-  lvm-llama-vision:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/llama-vision/Dockerfile
-    image: ${REGISTRY:-opea}/lvm-llama-vision:${TAG:-latest}
-  lvm-llama-vision-tp:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/llama-vision/Dockerfile.tp
-    image: ${REGISTRY:-opea}/lvm-llama-vision-tp:${TAG:-latest}
-  lvm-llama-vision-guard:
-    build:
-      dockerfile: comps/lvms/src/integrations/dependency/llama-vision/Dockerfile.guard
-    image: ${REGISTRY:-opea}/lvm-llama-vision-guard:${TAG:-latest}

.github/workflows/docker/compose/third_parties-compose.yaml

Lines changed: 48 additions & 0 deletions

@@ -50,3 +50,51 @@ services:
       dockerfile: comps/third_parties/vllm/src/Dockerfile.amd_gpu
       shm_size: '128g'
     image: ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest}
+  whisper:
+    build:
+      dockerfile: comps/third_parties/whisper/src/Dockerfile
+    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
+  whisper-gaudi:
+    build:
+      dockerfile: comps/third_parties/whisper/src/Dockerfile.intel_hpu
+    image: ${REGISTRY:-opea}/whisper-gaudi:${TAG:-latest}
+  speecht5:
+    build:
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile
+    image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}
+  speecht5-gaudi:
+    build:
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile.intel_hpu
+    image: ${REGISTRY:-opea}/speecht5-gaudi:${TAG:-latest}
+  gpt-sovits:
+    build:
+      dockerfile: comps/third_parties/gpt-sovits/src/Dockerfile
+    image: ${REGISTRY:-opea}/gpt-sovits:${TAG:-latest}
+  lvm-video-llama:
+    build:
+      dockerfile: comps/third_parties/video-llama/src/Dockerfile
+    image: ${REGISTRY:-opea}/lvm-video-llama:${TAG:-latest}
+  lvm-llava:
+    build:
+      dockerfile: comps/third_parties/llava/src/Dockerfile
+    image: ${REGISTRY:-opea}/lvm-llava:${TAG:-latest}
+  lvm-llava-gaudi:
+    build:
+      dockerfile: comps/third_parties/llava/src/Dockerfile.intel_hpu
+    image: ${REGISTRY:-opea}/lvm-llava-gaudi:${TAG:-latest}
+  lvm-predictionguard:
+    build:
+      dockerfile: comps/third_parties/predictionguard/src/Dockerfile
+    image: ${REGISTRY:-opea}/lvm-predictionguard:${TAG:-latest}
+  lvm-llama-vision:
+    build:
+      dockerfile: comps/third_parties/llama-vision/src/Dockerfile
+    image: ${REGISTRY:-opea}/lvm-llama-vision:${TAG:-latest}
+  lvm-llama-vision-tp:
+    build:
+      dockerfile: comps/third_parties/llama-vision/src/Dockerfile.tp
+    image: ${REGISTRY:-opea}/lvm-llama-vision-tp:${TAG:-latest}
+  lvm-llama-vision-guard:
+    build:
+      dockerfile: comps/third_parties/llama-vision/src/Dockerfile.guard
+    image: ${REGISTRY:-opea}/lvm-llama-vision-guard:${TAG:-latest}

.github/workflows/docker/compose/tts-compose.yaml

Lines changed: 0 additions & 12 deletions

@@ -7,15 +7,3 @@ services:
     build:
       dockerfile: comps/tts/src/Dockerfile
     image: ${REGISTRY:-opea}/tts:${TAG:-latest}
-  speecht5:
-    build:
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile
-    image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}
-  speecht5-gaudi:
-    build:
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu
-    image: ${REGISTRY:-opea}/speecht5-gaudi:${TAG:-latest}
-  gpt-sovits:
-    build:
-      dockerfile: comps/tts/src/integrations/dependency/gpt-sovits/Dockerfile
-    image: ${REGISTRY:-opea}/gpt-sovits:${TAG:-latest}

.pre-commit-config.yaml

Lines changed: 5 additions & 0 deletions

@@ -5,6 +5,11 @@ ci:
   autofix_prs: true
   autoupdate_schedule: quarterly

+exclude: |
+  (?x)^(
+      .*\.patch$
+  )$
+
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
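Since pre-commit treats the top-level `exclude:` as a Python regular expression (the `(?x)` flag turns on verbose mode, so the pattern's internal whitespace and line breaks are ignored), its effect can be sanity-checked directly with Python's `re` module. The file paths below are hypothetical examples, not paths from this commit:

```python
import re

# The exclude pattern added to .pre-commit-config.yaml, compiled the way
# pre-commit would: a verbose-mode ((?x)) regex matched against file paths.
EXCLUDE = re.compile(
    r"""(?x)^(
        .*\.patch$
    )$"""
)

# Any path ending in .patch is skipped by the hooks; everything else is not.
print(bool(EXCLUDE.search("comps/third_parties/video-llama/src/some-fix.patch")))  # True
print(bool(EXCLUDE.search("comps/asr/src/requirements.txt")))                      # False
```

In verbose mode the pattern collapses to `^(.*\.patch$)$`, i.e. the whole path must end in `.patch`; the multi-line `(?x)^( ... )$` layout exists only so more exclusions can be appended one per line later.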

comps/asr/deployment/docker_compose/README.md

Whitespace-only changes.

comps/asr/deployment/docker_compose/compose.yaml

Lines changed: 3 additions & 37 deletions

@@ -1,44 +1,10 @@
 # Copyright (C) 2024 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0

+include:
+  - ../../../third_parties/whisper/deployment/docker_compose/compose.yaml
+
 services:
-  whisper-service:
-    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
-    container_name: whisper-service
-    ports:
-      - ${WHISPER_PORT:-7066}:7066
-    ipc: host
-    environment:
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:7066/health"]
-      interval: 10s
-      timeout: 6s
-      retries: 18
-  whisper-gaudi-service:
-    image: ${REGISTRY:-opea}/whisper-gaudi:${TAG:-latest}
-    container_name: whisper-gaudi-service
-    ports:
-      - ${WHISPER_PORT:-7066}:7066
-    ipc: host
-    environment:
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      HABANA_VISIBLE_DEVICES: all
-      OMPI_MCA_btl_vader_single_copy_mechanism: none
-    runtime: habana
-    cap_add:
-      - SYS_NICE
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:7066/health"]
-      interval: 10s
-      timeout: 6s
-      retries: 60
   asr:
     image: ${REGISTRY:-opea}/asr:${TAG:-latest}
     container_name: asr-service
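The mechanism replacing those inline service definitions is Docker Compose's top-level `include:` element (supported in Compose v2.20+), which merges the referenced file's services, networks, and volumes into the including file as if they were declared locally. A minimal sketch of the pattern, with invented file and service names for illustration only:

```yaml
# compose.yaml -- hypothetical minimal example of the include pattern
include:
  - ../dependency/compose.yaml   # a separate file defining service "dep-service"

services:
  app:
    image: example/app:latest
    depends_on:
      - dep-service              # resolvable because include merged it in
```

With this shape, each backend's service definition lives in exactly one place under comps/third_parties, and consumers such as the asr compose file pull it in by reference rather than carrying a copy.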

comps/asr/src/README.md

Lines changed: 4 additions & 4 deletions

@@ -17,7 +17,7 @@ pip install -r requirements.txt
 - Xeon CPU

 ```bash
-cd integrations/dependency/whisper
+cd comps/third_parties/whisper/src
 nohup python whisper_server.py --device=cpu &
 python check_whisper_server.py
 ```
@@ -35,7 +35,7 @@ If the Whisper server is running properly, you should see the following output:
 ```bash
 pip install optimum[habana]

-cd dependency/
+cd comps/third_parties/whisper/src
 nohup python whisper_server.py --device=hpu &
 python check_whisper_server.py

@@ -74,14 +74,14 @@ Alternatively, you can also start the ASR microservice with Docker.

 ```bash
 cd ../../..
-docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile .
+docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile .
 ```

 - Gaudi2 HPU

 ```bash
 cd ../../..
-docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu .
+docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile.intel_hpu .
 ```

 #### 2.1.2 ASR Service Image

comps/asr/src/requirements.txt

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+accelerate
 aiohttp
 datasets
 docarray[full]

comps/dataprep/src/requirements.txt

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+accelerate
 beautifulsoup4
 cairosvg
 decord
