
Commit c30faac

Milvus-doc-bot authored and committed
Release new docs to master
1 parent c1162c9 commit c30faac

File tree: 2 files changed, 51 insertions(+), 40 deletions(-)

v2.5.x/site/en/adminGuide/resource_group.md

Lines changed: 36 additions & 36 deletions
@@ -65,10 +65,10 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta

 # Create a resource group that holds no query nodes.
 try:
-    utility.create_resource_group(name, config=utility.ResourceGroupConfig(
+    milvus_client.create_resource_group(name, config=ResourceGroupConfig(
         requests={"node_num": node_num},
         limits={"node_num": node_num},
-    ), using='default')
+    ))
     print(f"Succeeded in creating resource group {name}.")
 except Exception:
     print("Failed to create the resource group.")
@@ -81,7 +81,7 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 To view the list of resource groups in a Milvus instance, do as follows:

 ```python
-rgs = utility.list_resource_groups(using='default')
+rgs = milvus_client.list_resource_groups()
 print(f"Resource group list: {rgs}")

 # Resource group list: ['__default_resource_group', 'rg']
@@ -92,16 +92,19 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 You can have Milvus describe a resource group of interest as follows:

 ```python
-info = utility.describe_resource_group(name, using="default")
+info = milvus_client.describe_resource_group(name)
 print(f"Resource group description: {info}")

 # Resource group description:
-#   <name:"rg">,             // string, rg name
-#   <capacity:1>,            // int, num_node which has been transfer to this rg
-#   <num_available_node:0>,  // int, available node_num, some node may shutdown
-#   <num_loaded_replica:{}>, // map[string]int, from collection_name to loaded replica of each collecion in this rg
-#   <num_outgoing_node:{}>,  // map[string]int, from collection_name to outgoging accessed node num by replica loaded in this rg
-#   <num_incoming_node:{}>.  // map[string]int, from collection_name to incoming accessed node num by replica loaded in other rg
+# ResourceGroupInfo:
+#   <name:rg1>,              // resource group name
+#   <capacity:0>,            // resource group capacity
+#   <num_available_node:1>,  // number of available nodes in the resource group
+#   <num_loaded_replica:{}>, // number of loaded replicas per collection in this resource group
+#   <num_outgoing_node:{}>,  // number of nodes still in use by replicas in other resource groups
+#   <num_incoming_node:{}>,  // number of nodes in use by replicas here but belonging to other resource groups
+#   <config:{}>,             // resource group config
+#   <nodes:[]>               // node detail info
 ```

 4. Transfer nodes between resource groups.
@@ -117,7 +120,7 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 expected_num_nodes_in_rg = 1

 try:
-    utility.update_resource_groups({
+    milvus_client.update_resource_groups({
         source: ResourceGroupConfig(
             requests={"node_num": expected_num_nodes_in_default},
             limits={"node_num": expected_num_nodes_in_default},
@@ -126,7 +129,7 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
             requests={"node_num": expected_num_nodes_in_rg},
             limits={"node_num": expected_num_nodes_in_rg},
         )
-    }, using="default")
+    })
     print(f"Succeeded in moving 1 node(s) from {source} to {target}.")
 except Exception:
     print("Something went wrong while moving nodes.")
@@ -141,28 +144,25 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 ```python
 from pymilvus import Collection

-collection = Collection('demo')
+collection_name = "demo"

 # Milvus loads the collection to the default resource group.
-collection.load(replica_number=2)
+milvus_client.load_collection(collection_name, replica_number=2)

 # Or, you can ask Milvus to load the collection to the desired resource group.
 # Make sure that the number of query nodes is greater than or equal to replica_number.
 resource_groups = ['rg']
-collection.load(replica_number=2, _resource_groups=resource_groups)
+milvus_client.load_collection(collection_name, replica_number=2, _resource_groups=resource_groups)
 ```

 Also, you can just load a partition into a resource group and have its replicas distributed among several resource groups. The following assumes that a collection named `Books` already exists and that it has a partition named `Novels`.

 ```python
-collection = Collection("Books")
+collection = "Books"
+partition = "Novels"

 # Load one of the collection's partitions into the resource group.
-collection.load(["Novels"], replica_number=2, _resource_groups=resource_groups)
-
-# Or, you can use the load method of a partition directly
-partition = Partition(collection, "Novels")
-partition.load(replica_number=2, _resource_groups=resource_groups)
+milvus_client.load_partitions(collection, [partition], replica_number=2, _resource_groups=resource_groups)
 ```

 Note that `_resource_groups` is an optional parameter, and leaving it unspecified has Milvus load the replicas onto the query nodes in the default resource group.
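
To confirm where the replicas landed after loading, one option is to inspect the resource group description shown earlier. A short sketch, assuming the `milvus_client` and the `rg` group from the previous steps:

```python
# Sketch: after loading, num_loaded_replica should list the collection's
# replicas held in this resource group.
info = milvus_client.describe_resource_group("rg")
print(info)
```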
@@ -180,8 +180,8 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 num_replicas = 1

 try:
-    utility.transfer_replica(source, target, collection_name, num_replicas, using="default")
-    print(f"Succeeded in moving {num_node} replica(s) of {collection_name} from {source} to {target}.")
+    milvus_client.transfer_replica(source, target, collection_name, num_replicas)
+    print(f"Succeeded in moving {num_replicas} replica(s) of {collection_name} from {source} to {target}.")
 except Exception:
     print("Something went wrong while moving replicas.")

@@ -193,17 +193,18 @@ All code samples on this page are in PyMilvus 2.5.3. Upgrade your PyMilvus insta
 You can drop a resource group that holds no query nodes (`limits.node_num = 0`) at any time. In this guide, resource group `rg` now has one query node. You need to change the `limits.node_num` configuration of the resource group to zero first.

 ```python
+resource_group = "rg"
 try:
-    utility.update_resource_groups({
-        "rg": utility.ResourceGroupConfig(
+    milvus_client.update_resource_groups({
+        resource_group: ResourceGroupConfig(
             requests={"node_num": 0},
             limits={"node_num": 0},
         ),
-    }, using="default")
-    utility.drop_resource_group("rg", using="default")
-    print(f"Succeeded in dropping {source}.")
+    })
+    milvus_client.drop_resource_group(resource_group)
+    print(f"Succeeded in dropping {resource_group}.")
 except Exception:
-    print(f"Something went wrong while dropping {source}.")
+    print(f"Something went wrong while dropping {resource_group}.")
 ```

 For more details, please refer to the [relevant examples in pymilvus](https://github.com/milvus-io/pymilvus/blob/v2.4.3/examples/resource_group_declarative_api.py).
@@ -219,34 +220,33 @@ Here is a good practice for managing QueryNodes in a cloud environment:
 Here is an example setup:

 ```python
-from pymilvus import utility
 from pymilvus.client.types import ResourceGroupConfig

 _PENDING_NODES_RESOURCE_GROUP="__pending_nodes"

 def init_cluster(node_num: int):
     print(f"Init cluster with {node_num} nodes, all nodes will be put in the default resource group")
     # Create a pending resource group, which can be used to hold pending nodes that do not hold any data.
-    utility.create_resource_group(name=_PENDING_NODES_RESOURCE_GROUP, config=ResourceGroupConfig(
+    milvus_client.create_resource_group(name=_PENDING_NODES_RESOURCE_GROUP, config=ResourceGroupConfig(
         requests={"node_num": 0}, # this resource group can hold 0 nodes; no data will be loaded on it
         limits={"node_num": 10000}, # this resource group can hold at most 10000 nodes
     ))

     # Update the default resource group, which holds all the initial nodes.
-    utility.update_resource_groups({
+    milvus_client.update_resource_groups({
         "__default_resource_group": ResourceGroupConfig(
             requests={"node_num": node_num},
             limits={"node_num": node_num},
             transfer_from=[{"resource_group": _PENDING_NODES_RESOURCE_GROUP}], # recover missing nodes from the pending resource group at high priority
             transfer_to=[{"resource_group": _PENDING_NODES_RESOURCE_GROUP}], # move redundant nodes to the pending resource group at low priority
         )})
-    utility.create_resource_group(name="rg1", config=ResourceGroupConfig(
+    milvus_client.create_resource_group(name="rg1", config=ResourceGroupConfig(
         requests={"node_num": 0},
         limits={"node_num": 0},
         transfer_from=[{"resource_group": _PENDING_NODES_RESOURCE_GROUP}],
         transfer_to=[{"resource_group": _PENDING_NODES_RESOURCE_GROUP}],
     ))
-    utility.create_resource_group(name="rg2", config=ResourceGroupConfig(
+    milvus_client.create_resource_group(name="rg2", config=ResourceGroupConfig(
         requests={"node_num": 0},
         limits={"node_num": 0},
         transfer_from=[{"resource_group": _PENDING_NODES_RESOURCE_GROUP}],
@@ -271,7 +271,7 @@ Here is a good practice for managing QueryNodes in a cloud environment:
 We can use the API to scale a specific resource group to a designated number of QueryNodes without affecting any other resource groups.
 ```python
 # Scale rg1 to 3 nodes and rg2 to 1 node.
-utility.update_resource_groups({
+milvus_client.update_resource_groups({
     "rg1": ResourceGroupConfig(
         requests={"node_num": 3},
         limits={"node_num": 3},
@@ -295,7 +295,7 @@ Here is a good practice for managing QueryNodes in a cloud environment:

 ```python
 # Scale rg1 from 3 nodes down to 2 nodes.
-utility.update_resource_groups({
+milvus_client.update_resource_groups({
     "rg1": ResourceGroupConfig(
         requests={"node_num": 2},
         limits={"node_num": 2},

v2.5.x/site/en/integrations/kafka-connect-milvus.md

Lines changed: 15 additions & 4 deletions
@@ -1,13 +1,24 @@
 ---
 id: kafka-connect-milvus.md
-summary: In this quick start guide we show how to setup open source kafka and Zilliz Cloud to ingest vector data.
-title: Integrate Milvus with WhyHow
+summary: Apache Kafka is integrated with Milvus and Zilliz Cloud to stream vector data. Learn how to use the Kafka-Milvus connector to build real-time pipelines for semantic search, recommendation systems, and AI-driven analytics.
+title: Connect Apache Kafka® with Milvus/Zilliz Cloud for Real-Time Vector Data Ingestion
 ---

-# Connect Kafka with Milvus
+# Connect Apache Kafka® with Milvus/Zilliz Cloud for Real-Time Vector Data Ingestion

 In this quick start guide we show how to set up open-source Kafka and Zilliz Cloud to ingest vector data.

+This tutorial explains how to use Apache Kafka® to stream and ingest vector data into the Milvus vector database and Zilliz Cloud (fully managed Milvus), enabling advanced real-time applications such as semantic search, recommendation systems, and AI-powered analytics.
+
+Apache Kafka is a distributed event streaming platform designed for high-throughput, low-latency pipelines. It is widely used to collect, store, and process real-time data streams from sources such as databases, IoT devices, mobile apps, and cloud services. Kafka's ability to handle large volumes of data makes it an important data source for vector databases like Milvus and Zilliz Cloud.
+
+For example, Kafka can capture real-time data streams, such as user interactions or sensor readings together with their embeddings from machine learning models, and publish these streams directly to Milvus or Zilliz Cloud. Once in the vector database, this data can be indexed, searched, and analyzed efficiently.
+
+The Kafka integration with Milvus and Zilliz Cloud provides a seamless way to build powerful pipelines for unstructured data workflows. The connector works with both open-source Kafka deployments and hosted services such as [Confluent](https://www.confluent.io/hub/zilliz/kafka-connect-milvus) and [StreamNative](https://docs.streamnative.io/hub/connector-kafka-connect-milvus-sink-v0.1).
+
+In this tutorial we use Zilliz Cloud as a demonstration:
+
 ## Step 1: Download the kafka-connect-milvus plugin

 Complete the following steps to download the kafka-connect-milvus plugin.
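
To make the data flow described in the new introduction concrete: the connector consumes JSON messages from a Kafka topic and writes them into a Milvus/Zilliz Cloud collection. A minimal producer sketch follows; the broker address, topic name, and message fields are illustrative assumptions, not taken from this commit:

```python
import json
from kafka import KafkaProducer  # assumes the kafka-python package is installed

# Illustrative only: broker, topic, and schema fields are assumptions.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each message mirrors the target collection's schema, including a
# vector field produced upstream by an embedding model.
producer.send("topic_0", {
    "id": 0,
    "title": "example document",
    "title_vector": [0.1] * 8,  # dimension must match the collection schema
})
producer.flush()
```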
@@ -116,4 +127,4 @@ Ensure you have Kafka and Zilliz Cloud setup and properly configured.

 ### Support

-If you require any assistance or have questions regarding the Kafka Connect Milvus Connector, please feel free to reach out to our support team: **Email:** [[email protected]](mailto:[email protected])
+If you require any assistance or have questions regarding the Kafka Connect Milvus Connector, please feel free to reach out to the maintainer of the connector: **Email:** [[email protected]](mailto:[email protected])
