Merged
Changes from 14 commits (17 commits total)
2 changes: 2 additions & 0 deletions CHANGELOG.rst
@@ -56,6 +56,8 @@ Fixed
* Make sure we don't log auth token and api key inside st2api log file if those values are provided
via query parameter and not header (``?x-auth-token=foo``, ``?st2-api-key=bar``). (bug fix) #4592
#4589
* Fix rendering of config_context in orquesta task that references action in different pack
(bug fix) #4570

2.10.3 - March 06, 2019
-----------------------
8 changes: 8 additions & 0 deletions contrib/examples/actions/render_config_context.yaml
@@ -0,0 +1,8 @@
---
name: render_config_context
pack: examples
description: Run render config context workflow
runner_type: orquesta
entry_point: workflows/render_config_context.yaml
enabled: true
parameters: {}
7 changes: 7 additions & 0 deletions contrib/examples/actions/workflows/render_config_context.yaml
@@ -0,0 +1,7 @@
version: 1.0
description: Testing config context rendering.
tasks:
task1:
action: tests.render_config_context
output:
- context_value: <% task(task1).result.result.context_value %>
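The double `result.result` in the output expression reflects how the python-script runner wraps an action's return value: the outer `result` is the task's action execution result, and the inner `result` holds the dict the Python action returned. A minimal sketch of that lookup (the envelope fields besides `result` are assumed from typical python-runner output):

```python
# Approximate shape of task(task1).result for a python-script runner
# action; only the nested 'result' key matters for the expression.
task_result = {
    'result': {'context_value': 'Testing'},  # the action's return dict
    'exit_code': 0,
    'stderr': '',
    'stdout': '',
}

# <% task(task1).result.result.context_value %> resolves like:
context_value = task_result['result']['context_value']
print(context_value)  # -> Testing
```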
5 changes: 4 additions & 1 deletion st2common/st2common/services/workflows.py
@@ -547,9 +547,12 @@ def request_action_execution(wf_ex_db, task_ex_db, st2_ctx, ac_ex_req, delay=None
# Identify the runner for the action.
runner_type_db = action_utils.get_runnertype_by_name(action_db.runner_type['name'])

# Identify action pack name
pack_name = action_ref.split('.')[0] if action_ref else st2_ctx.get('pack')
Contributor:
Orquesta tasks can be action-less. Example at https://github.com/StackStorm/st2/blob/master/contrib/examples/actions/workflows/tests/orquesta-test-action-less-tasks.yaml#L12. Please add a use case for this line that shows it can handle a task that is action-less.

Contributor Author:
Tested the action-less workflow examples.orquesta-test-action-less-tasks and it went well. Not sure what you mean by adding a use case: a comment or a unit test case?

Contributor:
Ok. Then we're good. Just wanted to make sure this use case is covered. Thank you for your diligence. It's appreciated.


# Set context for the action execution.
ac_ex_ctx = {
'pack': st2_ctx.get('pack'),
'pack': pack_name,
Contributor:
Please default to st2_ctx.get('pack') when the task is action-less.

Contributor Author:
pack_name = action_ref.split('.')[0] if action_ref else st2_ctx.get('pack')
With the line above, pack already defaults to st2_ctx.get('pack') for action-less tasks, so I'm not sure what needs to change here. Please give more information.

Contributor:
We're good here then. See comment above. Thanks for following up.

'user': st2_ctx.get('user'),
'parent': st2_ctx,
'orquesta': {
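The pack derivation in the diff above can be sketched on its own; the helper name is illustrative, but the expression matches the change:

```python
def derive_pack_name(action_ref, st2_ctx):
    """Return the pack that owns the referenced action.

    An action reference such as 'dummy_pack_7.render_config_context'
    carries its pack as the prefix before the first dot. For
    action-less tasks (no action reference) we fall back to the
    workflow's own pack from the st2 context, as the reviewers agreed.
    """
    return action_ref.split('.')[0] if action_ref else st2_ctx.get('pack')


# Task action lives in a different pack than the workflow:
print(derive_pack_name('dummy_pack_7.render_config_context', {'pack': 'examples'}))  # -> dummy_pack_7
# Action-less task: default to the workflow's pack.
print(derive_pack_name(None, {'pack': 'examples'}))  # -> examples
```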
70 changes: 70 additions & 0 deletions st2common/tests/unit/services/test_workflow.py
@@ -32,7 +32,9 @@
from st2common.exceptions import action as action_exc
from st2common.models.db import liveaction as lv_db_models
from st2common.models.db import execution as ex_db_models
from st2common.models.db import pack as pk_db_models
from st2common.persistence import execution as ex_db_access
from st2common.persistence import pack as pk_db_access
from st2common.persistence import workflow as wf_db_access
from st2common.services import action as action_service
from st2common.services import workflows as workflow_service
@@ -44,8 +46,12 @@
TEST_PACK = 'orquesta_tests'
TEST_PACK_PATH = st2tests.fixturesloader.get_fixtures_packs_base_path() + '/' + TEST_PACK

PACK_7 = 'dummy_pack_7'
PACK_7_PATH = st2tests.fixturesloader.get_fixtures_packs_base_path() + '/' + PACK_7

PACKS = [
TEST_PACK_PATH,
PACK_7_PATH,
st2tests.fixturesloader.get_fixtures_packs_base_path() + '/core'
]

@@ -363,3 +369,67 @@ def test_evaluate_action_execution_delay(self):
ac_ex_req = {'action': 'core.noop', 'input': None, 'item_id': 1}
actual_delay = workflow_service.eval_action_execution_delay(task_ex_req, ac_ex_req, True)
self.assertIsNone(actual_delay)

def test_request_action_execution_render(self):
# Manually create ConfigDB
output = 'Testing'
value = {
"config_item_one": output
}
config_db = pk_db_models.ConfigDB(pack=PACK_7, values=value)
config = pk_db_access.Config.add_or_update(config_db)
self.assertEqual(len(config), 3)

wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'render_config_context.yaml')

# Manually create the liveaction and action execution objects without publishing.
lv_ac_db = lv_db_models.LiveActionDB(action=wf_meta['name'])
lv_ac_db, ac_ex_db = action_service.create_request(lv_ac_db)

# Request the workflow execution.
wf_def = self.get_wf_def(TEST_PACK_PATH, wf_meta)
st2_ctx = self.mock_st2_context(ac_ex_db)
wf_ex_db = workflow_service.request(wf_def, ac_ex_db, st2_ctx)
spec_module = specs_loader.get_spec_module(wf_ex_db.spec['catalog'])
wf_spec = spec_module.WorkflowSpec.deserialize(wf_ex_db.spec)

# Pass down appropriate st2 context to the task and action execution(s).
root_st2_ctx = wf_ex_db.context.get('st2', {})
st2_ctx = {
'execution_id': wf_ex_db.action_execution,
'user': root_st2_ctx.get('user'),
'pack': root_st2_ctx.get('pack')
}

# Manually request task execution.
task_route = 0
task_id = 'task1'
task_spec = wf_spec.tasks.get_task(task_id)
task_ctx = {'foo': 'bar'}

task_ex_req = {
'id': task_id,
'route': task_route,
'spec': task_spec,
'ctx': task_ctx,
'actions': [
{'action': 'dummy_pack_7.render_config_context', 'input': None}
]
}
workflow_service.request_task_execution(wf_ex_db, st2_ctx, task_ex_req)

# Check task execution is saved to the database.
task_ex_dbs = wf_db_access.TaskExecution.query(workflow_execution=str(wf_ex_db.id))
self.assertEqual(len(task_ex_dbs), 1)
workflow_service.request_task_execution(wf_ex_db, st2_ctx, task_ex_req)

# Manually request action execution
task_ex_db = task_ex_dbs[0]
action_ex_db = workflow_service.request_action_execution(wf_ex_db, task_ex_db, st2_ctx,
task_ex_req['actions'][0])

# Check required attributes.
self.assertIsNotNone(str(action_ex_db.id))
self.assertEqual(task_ex_db.workflow_execution, str(wf_ex_db.id))
expected_parameters = {'value1': output}
self.assertEqual(expected_parameters, action_ex_db.parameters)
13 changes: 13 additions & 0 deletions st2tests/integration/orquesta/test_wiring.py
@@ -144,3 +144,16 @@ def test_output_on_error(self):

self.assertEqual(ex.status, ac_const.LIVEACTION_STATUS_FAILED)
self.assertDictEqual(ex.result, expected_result)

def test_config_context_renders(self):
config_value = "Testing"
wf_name = 'examples.render_config_context'

expected_output = {'context_value': config_value}
expected_result = {'output': expected_output}

ex = self._execute_workflow(wf_name)
ex = self._wait_for_completion(ex)

self.assertEqual(ex.status, ac_const.LIVEACTION_STATUS_SUCCEEDED)
self.assertDictEqual(ex.result, expected_result)
3 changes: 2 additions & 1 deletion st2tests/st2tests/base.py
@@ -618,7 +618,8 @@ def mock_st2_context(self, ac_ex_db, context=None):
st2_ctx = {
'st2': {
'api_url': api_util.get_full_public_api_url(),
'action_execution_id': str(ac_ex_db.id)
'action_execution_id': str(ac_ex_db.id),
'user': 'stanley'
}
}

2 changes: 2 additions & 0 deletions st2tests/st2tests/fixtures/packs/configs/dummy_pack_7.yaml
@@ -0,0 +1,2 @@
---
config_item_one: "testing"
@@ -0,0 +1,7 @@
from st2common.runners.base_action import Action


class PrintPythonVersionAction(Action):

def run(self, value1):
return {"context_value": value1}
@@ -0,0 +1,12 @@
---
name: render_config_context
runner_type: python-script
description: Action that uses config context
enabled: true
entry_point: render_config_context.py
parameters:
value1:
description: Input for render_config_context. Defaults to config_context value.
required: false
type: "string"
default: "{{ config_context.config_item_one }}"
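The `config_context` default above is rendered from the pack's config at action invocation time. A simplified stand-in for that lookup, handling only the single-expression shape used here (not the real Jinja engine st2 uses):

```python
import re


def render_config_default(template, pack_config):
    """Resolve a '{{ config_context.<key> }}' default against a pack
    config dict. Handles only this one expression shape; st2 itself
    performs full Jinja rendering."""
    match = re.fullmatch(r"\{\{\s*config_context\.(\w+)\s*\}\}", template)
    if match:
        return pack_config[match.group(1)]
    return template  # plain defaults pass through unchanged


# Value as in st2tests/st2tests/fixtures/packs/configs/dummy_pack_7.yaml:
pack_config = {'config_item_one': 'testing'}
print(render_config_default('{{ config_context.config_item_one }}', pack_config))  # -> testing
```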
@@ -0,0 +1,5 @@
---
config_item_one:
description: "Item used to test config context."
type: "string"
required: true
@@ -0,0 +1,8 @@
---
name: render_config_context
pack: orquesta_tests
description: Run render config context workflow
runner_type: orquesta
entry_point: workflows/render_config_context.yaml
enabled: true
parameters: {}
@@ -0,0 +1,7 @@
version: 1.0
description: Testing config context rendering.
tasks:
task1:
action: dummy_pack_7.render_config_context
output:
- context_value: <% task(task1).result.result.context_value %>
26 changes: 22 additions & 4 deletions tools/launchdev.sh
@@ -9,7 +9,7 @@ runner_count=1
scheduler_count=1
workflow_engine_count=1
use_gunicorn=true
copy_examples=false
copy_test_packs=false
load_content=true
use_ipv6=false
include_mistral=false
@@ -29,7 +29,7 @@ while getopts ":r:s:w:gxcu6m" o; do
use_gunicorn=false
;;
x)
copy_examples=true
copy_test_packs=true
;;
c)
load_content=false
@@ -209,9 +209,20 @@ function st2start(){
cp -Rp ./contrib/core/ $PACKS_BASE_DIR
cp -Rp ./contrib/packs/ $PACKS_BASE_DIR

if [ "$copy_examples" = true ]; then
echo "Copying examples from ./contrib/examples to $PACKS_BASE_DIR"
if [ "$copy_test_packs" = true ]; then
echo "Copying test packs (examples and tests) to $PACKS_BASE_DIR"
cp -Rp ./contrib/examples $PACKS_BASE_DIR
# Clone st2tests in /tmp directory.
pushd /tmp
git clone https://github.com/StackStorm/st2tests.git
ret=$?
if [ ${ret} -eq 0 ]; then
cp -Rp ./st2tests/packs/tests $PACKS_BASE_DIR
rm -R st2tests/
else
echo "Failed to clone st2tests repo"
fi
popd
fi

# activate virtualenv to set PYTHONPATH
@@ -413,6 +424,13 @@ function st2start(){
--config-file $ST2_CONF --register-all
fi

if [ "$copy_test_packs" = true ]; then
st2 run packs.setup_virtualenv packs=tests
Contributor:
I believe this will fail when auth is enabled in the dev config. Can you test this scenario and make sure the launch script handles it gracefully and warns users that they have to set up the virtualenv on their own?

if [ $? != 0 ]; then
echo "Warning: Please setup virtualenv for pack \"tests\" before run integration test"
Contributor:
Please change the message to Unable to setup virtualenv for the \"tests\" pack. Please setup virtualenv for the \"tests\" pack before running integration tests.
fi
fi

# List screen sessions
screen -ls || exit 0
}