
Commit a1fc5f0 (2 parents: 7541757 + bffe52e)

Merge pull request #3 from yinxulai/main

feat: add GLM 4.5 model support and bump version to 0.0.3

File tree: 7 files changed, +146 −4 lines


_assets/plugin_preview.png (binary image, 34.2 KB)

_assets/qiniu_ai.png (binary file, −53 KB; not shown)

manifest.yaml (1 addition, 1 deletion)

@@ -35,4 +35,4 @@ resource:
   tool:
     enabled: false
 type: plugin
-version: 0.0.2
+version: 0.0.3

models/llm/glm45-air.yaml (new file, 71 lines)

model: glm-4.5-air
label:
  zh_Hans: glm-4.5-air
  en_US: glm-4.5-air
model_type: llm
features:
  - agent-thought
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 8192
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
  - name: enable_thinking
    required: false
    type: boolean
    default: true
    label:
      zh_Hans: 思考模式
      en_US: Thinking mode
    help:
      zh_Hans: 是否开启思考模式。
      en_US: Whether to enable thinking mode.
  - name: thinking_budget
    required: false
    type: int
    default: 512
    min: 1
    max: 8192
    label:
      zh_Hans: 思考长度限制
      en_US: Thinking budget
    help:
      zh_Hans: 思考过程的最大长度,只在思考模式为true时生效。
      en_US: The maximum length of the thinking process, only effective when thinking mode is true.
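The enable_thinking and thinking_budget rules are the notable additions over the provider's existing model configs. As a rough illustration only, the Python sketch below shows how parameters governed by these rules could be forwarded to an OpenAI-compatible chat completions endpoint; the base URL is a placeholder and the assumption that the gateway accepts the thinking options as plain request-body fields is not taken from this commit.

# Hypothetical sketch: forwarding glm-4.5-air parameters to an
# OpenAI-compatible endpoint. base_url is a placeholder, and the thinking
# options are passed via extra_body on the assumption that the gateway
# accepts them as extra request-body fields.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-openai-compatible-endpoint/v1",  # placeholder, not from this commit
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="glm-4.5-air",
    messages=[{"role": "user", "content": "Summarize the GLM 4.5 release in one sentence."}],
    temperature=0.7,   # parameter_rules: temperature (use_template)
    max_tokens=512,    # parameter_rules: max_tokens, default 512, max 8192
    extra_body={
        "enable_thinking": True,  # parameter_rules: enable_thinking, default true
        "thinking_budget": 512,   # parameter_rules: thinking_budget, range 1..8192
    },
)
print(response.choices[0].message.content)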

models/llm/glm45.yaml (new file, 71 lines)

model: glm-4.5
label:
  zh_Hans: glm-4.5
  en_US: glm-4.5
model_type: llm
features:
  - agent-thought
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    type: int
    default: 512
    min: 1
    max: 8192
    help:
      zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
      en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: Response Format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
  - name: enable_thinking
    required: false
    type: boolean
    default: true
    label:
      zh_Hans: 思考模式
      en_US: Thinking mode
    help:
      zh_Hans: 是否开启思考模式。
      en_US: Whether to enable thinking mode.
  - name: thinking_budget
    required: false
    type: int
    default: 512
    min: 1
    max: 8192
    label:
      zh_Hans: 思考长度限制
      en_US: Thinking budget
    help:
      zh_Hans: 思考过程的最大长度,只在思考模式为true时生效。
      en_US: The maximum length of the thinking process, only effective when thinking mode is true.
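glm45.yaml mirrors glm45-air.yaml except for the model and label values. One way to keep the two configs from drifting apart is to compare everything except those fields; below is a minimal sketch using PyYAML, assuming it is run from the repository root.

# Minimal consistency check: the two GLM configs should differ only in their
# model and label fields. Assumes PyYAML is installed and the script runs
# from the repository root.
import yaml

def normalized(path):
    with open(path, encoding="utf-8") as f:
        data = yaml.safe_load(f)
    data.pop("model", None)
    data.pop("label", None)
    return data

a = normalized("models/llm/glm45.yaml")
b = normalized("models/llm/glm45-air.yaml")
assert a == b, "glm45.yaml and glm45-air.yaml have diverged beyond model/label"
print("parameter_rules and model_properties match")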

provider/qiniu.yaml (2 additions, 2 deletions)

@@ -3,8 +3,8 @@ label:
   en_US: Qiniu Cloud
   zh_Hans: 七牛云
 description:
-  en_US: Official Qiniu Dify plugin providing AI inference services, supporting models such as deepseek-r1, deepseek-v3, and more.
-  zh_Hans: 七牛云官方 Dify 插件,提供 AI 推理服务,支持例如 deepseek-r1、deepseek-v3 等模型。
+  en_US: Official Qiniu Dify plugin providing AI inference services, supporting models such as glm 4.5, deepseek-r1, deepseek-v3, and more.
+  zh_Hans: 七牛云官方 Dify 插件,提供 AI 推理服务,支持例如 glm 4.5、deepseek-r1、deepseek-v3 等模型。
 icon_large:
   en_US: icon_l_en.svg
 icon_small:

requirements.txt (1 addition, 1 deletion)

@@ -1 +1 @@
-dify_plugin>=0.3.0,<0.4.0
+dify_plugin>=0.3.0,<0.5.0
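The widened specifier keeps existing 0.3.x installs working while allowing the 0.4.x line of dify_plugin. A small sketch for checking whether a locally installed dify_plugin satisfies the new range, using the packaging library (which is assumed to be available; it is not listed in requirements.txt):

# Check a locally installed dify_plugin against the new requirement range.
from importlib.metadata import version, PackageNotFoundError
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=0.3.0,<0.5.0")
try:
    installed = version("dify_plugin")
except PackageNotFoundError:
    print("dify_plugin is not installed")
else:
    ok = installed in spec  # SpecifierSet supports membership checks on version strings
    print(f"dify_plugin {installed} {'satisfies' if ok else 'does not satisfy'} {spec}")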
