Merged
1 change: 0 additions & 1 deletion docs/operators/dynamic-fork-using-array.mdx

This file was deleted.

263 changes: 263 additions & 0 deletions docs/operators/dynamic_fork.md
@@ -0,0 +1,263 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Dynamic Fork

```json
"type" : "FORK_JOIN_DYNAMIC"
```

The dynamic fork task is used when the number of forks must be determined at runtime, whereas in a regular fork-join task the number of forks is defined at workflow creation.

## Configurations

* A FORK_JOIN_DYNAMIC can only have one task per fork. A sub-workflow can be utilized if there is a need for multiple tasks per fork.

### Input Parameters

|Attribute|Description|
|---|---|
| dynamicForkTasksParam | Name of the input parameter whose value is a JSON array of task configurations; a separate fork is created for each entry. |
| dynamicForkTasksInputParamName | Name of the input parameter whose value is a JSON map, keyed by each fork's taskReferenceName, whose values are the inputParameters for that task. |

The [JOIN](https://orkes.io/content/docs/reference-docs/join-task) task runs after all the dynamic tasks, collecting their outputs. All the forked tasks must complete before the JOIN completes the fork.
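To make the two parameters concrete, here is a minimal Python sketch of how an upstream task's output could define the forks. The task name `process_item` and the reference-name scheme are hypothetical, not part of Conductor:

```python
def build_dynamic_fork_inputs(items):
    """Build the two values a FORK_JOIN_DYNAMIC task consumes.

    Returns a dict with "dynamicTasks" (the array named by
    dynamicForkTasksParam) and "dynamicTasksInput" (the map named by
    dynamicForkTasksInputParamName).
    """
    dynamic_tasks = []
    dynamic_tasks_input = {}
    for i, item in enumerate(items):
        ref = f"process_item_{i}"
        # One task configuration per fork.
        dynamic_tasks.append({
            "name": "process_item",        # hypothetical worker task name
            "taskReferenceName": ref,
            "type": "SIMPLE",
        })
        # Inputs keyed by each fork's taskReferenceName.
        dynamic_tasks_input[ref] = {"item": item}
    return {"dynamicTasks": dynamic_tasks, "dynamicTasksInput": dynamic_tasks_input}
```

An upstream task returning this object lets the dynamic fork spawn one `process_item` task per element of `items`.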

### Output Parameters

|Attribute|Description|
|---|---|
|joinOn | The output of the JOIN task used in conjunction with the FORK_JOIN_DYNAMIC task. It is a map where the keys are the task reference names of the forked tasks and the values are the corresponding outputs of those tasks. |

## Examples

<Tabs>
<TabItem value="JSON" label="JSON">

```json
{
  "name": "dynamic",
  "taskReferenceName": "dynamic_ref",
  "inputParameters": {
    "dynamicTasks": "",
    "dynamicTasksInput": ""
  },
  "type": "FORK_JOIN_DYNAMIC",
  "dynamicForkTasksParam": "dynamicTasks",
  "dynamicForkTasksInputParamName": "dynamicTasksInput"
}
```
</TabItem>
<TabItem value="Java" label="Java">
Example coming soon.
</TabItem>
<TabItem value="Golang" label="Golang">
Example coming soon.
</TabItem>
<TabItem value="Python" label="Python">
Example coming soon.
</TabItem>
<TabItem value="CSharp" label="CSharp">
Example coming soon.
</TabItem>
<TabItem value="javascript" label="Javascript">
Example coming soon.
</TabItem>
<TabItem value="clojure" label="Clojure">
Example coming soon.
</TabItem>
</Tabs>

<details><summary>Add Examples</summary>
<p>
</p>
</details>

## Dynamic Fork Task Using Arrays

The dynamic fork is used to run a task in parallel over a set of inputs determined at runtime. Think of this as Conductor's equivalent of parallel stream processing in Java:

```java
arrayItems.stream().parallel().forEach(item -> process(item));
```

Here, each item of the array is passed to a method called `process`. Conductor lets you do the same, and supports several kinds of forked work:

1. Simple Task - When we need to run a simple custom worker task.
2. [HTTP Task](./system-tasks/http-task) - When we need to run the system HTTP workers.
3. [Sub Workflows](./sub-workflow-task) - Use this when we want to run more than one task or a series of steps that can be a full-fledged complex flow.
4. Other Conductor Task Types - This can also be used for other task types such as EVENT, WAIT, etc.
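As a rough Python sketch (illustrative only, not a Conductor API), the per-fork task configuration differs mainly in its `type`; sub-workflows additionally carry a `subWorkflowParam` naming the workflow to run:

```python
def fork_task_config(ref, task_type, name=None, sub_workflow=None):
    """Build one entry of a dynamic fork's task list for common task types."""
    config = {"taskReferenceName": ref, "type": task_type}
    if task_type == "SIMPLE":
        config["name"] = name  # the registered worker task name
    elif task_type == "SUB_WORKFLOW":
        config["name"] = ref
        config["subWorkflowParam"] = {"name": sub_workflow}
    else:
        # System tasks such as HTTP, EVENT, WAIT.
        config["name"] = name or task_type
    return config
```

For example, `fork_task_config("extract_ref", "SUB_WORKFLOW", sub_workflow="extract_user")` yields one sub-workflow fork entry.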

<details><summary>Running Simple Tasks using Dynamic Fork​</summary>
<p>
Run a simple task for each of the inputs provided.

|Attribute|Description|
|---|---|
| forkTaskName | Specify the name of the simple task to execute. |
| forkTaskInputs | Array of inputs - a task will be executed for each input. |

In this example, each task will be executed with the following input:

```json
{
  "inputText": "value1",
  "inputNumber": 1,
  "index": 0 // Added by the system to represent the array index for the object
}
```

Example:

```json
{
  "name": "dynamic_workflow_array_simple",
  "description": "Dynamic workflow array - run simple task",
  "version": 1,
  "tasks": [
    {
      "name": "dynamic_workflow_array_simple",
      "taskReferenceName": "dynamic_workflow_array_simple_ref",
      "inputParameters": {
        "forkTaskName": "update_fruit_list_task",
        "forkTaskInputs": [
          {
            "inputText": "value1",
            "inputNumber": 1
          },
          {
            "inputText": "value2",
            "inputNumber": 2
          },
          {
            "inputText": "value3",
            "inputNumber": 3
          }
        ]
      },
      "type": "FORK_JOIN_DYNAMIC",
      "dynamicForkTasksParam": "dynamicTasks",
      "dynamicForkTasksInputParamName": "dynamicTasksInput"
    },
    {
      "name": "dynamic_workflow_array_simple_join",
      "taskReferenceName": "dynamic_workflow_array_simple_join_ref",
      "type": "JOIN"
    }
  ],
  "schemaVersion": 2,
  "ownerEmail": "[email protected]"
}
```
We can also use simple values or a mix of complex and simple objects.
```json
[
"apple", "orange", "kiwi"
]
```
When using simple values, each value is passed under the key `input`, along with an `index` key giving the element's position in the array.
```json
{
  "input": "apple", // Value
  "index": 0        // Index of the element
}
```
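Putting the two cases together, here is a small Python sketch of how each element of `forkTaskInputs` becomes one task's input. This mirrors the behavior described above; the function itself is illustrative, not part of Conductor:

```python
def expand_fork_inputs(fork_task_inputs):
    """Mimic how each element of forkTaskInputs becomes one forked task's input."""
    expanded = []
    for index, element in enumerate(fork_task_inputs):
        if isinstance(element, dict):
            # Object inputs are passed through, with the array index added.
            task_input = dict(element)
            task_input["index"] = index
        else:
            # Simple values are wrapped under the key "input".
            task_input = {"input": element, "index": index}
        expanded.append(task_input)
    return expanded
```

For example, `expand_fork_inputs(["apple", {"inputText": "value1"}])` produces one wrapped scalar input and one pass-through object input, each tagged with its index.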
</p>
</details>

<details><summary>Running HTTP Tasks using Dynamic Fork​</summary>
<p>
To run HTTP tasks, use the same parameters as for SIMPLE tasks shown above, but set the value of forkTaskName to HTTP and provide inputs in the format the HTTP task expects.

:::tip
**method** has a default value of GET and need not be specified if the HTTP call is GET.
:::

Example:
```json
{
  "name": "dynamic_workflow_array_http",
  "description": "Dynamic workflow array - run HTTP tasks",
  "version": 1,
  "tasks": [
    {
      "name": "dynamic_workflow_array_http",
      "taskReferenceName": "dynamic_workflow_array_http_ref",
      "inputParameters": {
        "forkTaskName": "HTTP",
        "forkTaskInputs": [
          {
            "url": "https://orkes-api-tester.orkesconductor.com/get"
          },
          {
            "url": "https://orkes-api-tester.orkesconductor.com/get",
            "method": "GET"
          }
        ]
      },
      "type": "FORK_JOIN_DYNAMIC",
      "dynamicForkTasksParam": "dynamicTasks",
      "dynamicForkTasksInputParamName": "dynamicTasksInput"
    },
    {
      "name": "dynamic_workflow_array_http_join",
      "taskReferenceName": "dynamic_workflow_array_http_join_ref",
      "type": "JOIN"
    }
  ],
  "schemaVersion": 2,
  "ownerEmail": "[email protected]"
}
```
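When the list of URLs is itself computed, the `forkTaskInputs` array can be built programmatically. A hedged Python sketch (the helper is hypothetical; only `url` and the optional `method` key come from the HTTP task input described above):

```python
def build_http_fork_inputs(urls, method="GET"):
    """Build a forkTaskInputs array for HTTP forks, one entry per URL.

    Since "method" defaults to GET (see the tip above), it is omitted
    from the entry unless a different method is requested.
    """
    inputs = []
    for url in urls:
        entry = {"url": url}
        if method != "GET":
            entry["method"] = method
        inputs.append(entry)
    return inputs
```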
</p>
</details>

<details><summary>Running Sub Workflows using Dynamic Fork​​</summary>
<p>
Run a sub-workflow for each of the inputs provided

|Attribute|Description|
|---|---|
| forkTaskWorkflow | Specify the name of the sub-workflow to be executed. |
| forkTaskWorkflowVersion | Optional version of the workflow to run. |
| forkTaskInputs | Array of inputs - a task will be executed for each input. |

> **Note:** When **forkTaskWorkflow** is present, Conductor treats this as a dynamic fork that runs sub-workflows.

Example:
```json
{
  "name": "dynamic_workflow_array_sub_workflow",
  "description": "Dynamic workflow array - run sub workflow tasks",
  "version": 1,
  "tasks": [
    {
      "name": "dynamic_workflow_array_sub_workflow",
      "taskReferenceName": "dynamic_workflow_array_sub_workflow_ref",
      "inputParameters": {
        "forkTaskWorkflow": "extract_user",
        "forkTaskInputs": [
          {
            "input": "value1"
          },
          {
            "input": "value2"
          }
        ]
      },
      "type": "FORK_JOIN_DYNAMIC",
      "dynamicForkTasksParam": "dynamicTasks",
      "dynamicForkTasksInputParamName": "dynamicTasksInput"
    },
    {
      "name": "dynamic_workflow_array_sub_workflow_join",
      "taskReferenceName": "dynamic_workflow_array_sub_workflow_join_ref",
      "type": "JOIN"
    }
  ],
  "schemaVersion": 2,
  "ownerEmail": "[email protected]"
}
```
</p>
</details>
1 change: 0 additions & 1 deletion docs/operators/dynamic_fork.mdx

This file was deleted.

123 changes: 123 additions & 0 deletions docs/operators/start_workflow.md
@@ -0,0 +1,123 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Start Workflow

```json
"type" : "START_WORKFLOW"
```

Start Workflow is an operator task used to start another workflow from an existing workflow. Unlike a sub-workflow task, a start workflow task doesn't create a relationship between the current workflow and the newly started workflow, so it doesn't wait for the started workflow to complete.

## Configurations​

A start workflow task is considered successful when the requested workflow begins or, more precisely, when the requested workflow is in the *RUNNING* state.

### Input Parameters​

| Attribute | Description |
| -- | -- |
| startWorkflow | Provide the workflow name to be started. |
| version | If the workflow has different versions, you can provide the version to be started here. If not specified, the latest version runs. |

### Output Parameters​

| Attribute | Description |
| -- | -- |
| workflowId | Displays the ID of the started workflow. |
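For reference, a workflow can also be started from outside Conductor through the server's REST API (`POST /api/workflow/{name}`, with `version` as a query parameter), which likewise returns the workflow ID. A hedged Python sketch that only composes the request, without sending it (the base URL and workflow name are hypothetical):

```python
def compose_start_request(base_url, name, version=None, workflow_input=None):
    """Compose the pieces of a start-workflow REST call.

    Assumes the standard Conductor server endpoint POST /api/workflow/{name};
    the returned dict can be fed to an HTTP client of your choice.
    """
    url = f"{base_url}/api/workflow/{name}"
    params = {}
    if version is not None:
        # Without a version, the latest registered version runs.
        params["version"] = version
    return {"url": url, "params": params, "json": workflow_input or {}}
```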

## Examples

<Tabs>
<TabItem value="JSON" label="JSON">

```json
{
  "name": "start",
  "taskReferenceName": "start_ref",
  "inputParameters": {
    "startWorkflow": {
      "name": "your_workflow_name_to_be_started",
      "version": 3,
      "input": {
        "Some-Key-tl4ao": "Some-Value-tl4ao"
      }
    }
  },
  "type": "START_WORKFLOW"
}
```
</TabItem>
<TabItem value="Java" label="Java">
Example coming soon.
</TabItem>
<TabItem value="Golang" label="Golang">
Example coming soon.
</TabItem>
<TabItem value="Python" label="Python">
Example coming soon.
</TabItem>
<TabItem value="CSharp" label="CSharp">
Example coming soon.
</TabItem>
<TabItem value="javascript" label="Javascript">
Example coming soon.
</TabItem>
<TabItem value="clojure" label="Clojure">
Example coming soon.
</TabItem>
</Tabs>

<details><summary>Sample Example</summary>
<p>
Let’s see a sample JSON file for the start workflow task:

```json
{
  "name": "sample_start_workflow",
  "description": "Sample Workflow to start a new workflow.",
  "version": 1,
  "tasks": [
    {
      "name": "start",
      "taskReferenceName": "start_ref",
      "inputParameters": {
        "startWorkflow": {
          "name": "your_workflow_name_to_be_started",
          "version": 3,
          "input": {}
        }
      },
      "type": "START_WORKFLOW"
    }
  ]
}
```

Here the input parameters are defined as:

```json
"inputParameters": {
  "startWorkflow": {
    "name": "your_workflow_name_to_be_started",
    "version": 3
  }
},
```

This would start the workflow named **your_workflow_name_to_be_started** at version 3.

The output shows the generated workflow ID of the started workflow.

```json
{
  "workflowId": "8ca4184e-6a52-11ed-aaf5-f62716e2ae41"
}
```

From the workflow executions page, you can click on Start Workflow on the **Summary** tab to see the newly started workflow status.

<p align="center"><img src="/content/img/start-workflow-output-in-conductor.png" alt="Completed start workflow type" width="100%" height="auto" style={{paddingBottom: 40, paddingTop: 40}} /></p>

The main workflow completes even if the started workflow has not, i.e., in this case, **sample_start_workflow** completes regardless of whether **your_workflow_name_to_be_started** has completed.
</p>
</details>
1 change: 0 additions & 1 deletion docs/operators/start_workflow.mdx

This file was deleted.
