Commit fabfb8f: Amazon Bedrock AgentCore exploration
1 parent c3d2a25

File tree: 6 files changed, +4784 −0 lines

examples/aws-agentcore/README.md

Lines changed: 109 additions & 0 deletions
# Amazon Bedrock AgentCore Runtime Example

This example demonstrates how to prepare a Pipecat bot for deployment to **Amazon Bedrock AgentCore Runtime** and enable it to invoke AgentCore tools.

> TODO: update to set environment variables up top.

## Overview

This example shows the steps needed to:
- Deploy your Pipecat bot to Amazon Bedrock AgentCore Runtime (which hosts and runs your bot)
- Enable your bot to invoke AgentCore tools while running in the AgentCore Runtime

The key additions to a standard Pipecat bot are the AgentCore-specific configurations and tool invocation handling that allow your bot to leverage the full AgentCore ecosystem.
## Prerequisites

- Accounts with:
  - AWS
  - OpenAI
  - Deepgram
  - Cartesia
  - Daily
- Python 3.10 or higher
- `uv` package manager
## IAM Configuration

> TODO: two separate roles? one for agent and one for CLI user?

Configure your IAM user with the necessary policies for AgentCore usage. Start with these:

- `BedrockAgentCoreFullAccess`: for the running agent itself
- A new policy (perhaps named `BedrockAgentCoreCLI`) configured [like this](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-permissions.html#runtime-permissions-starter-toolkit): for using the `agentcore` CLI

You can also specify more granular permissions; see the [Amazon Bedrock AgentCore docs](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-permissions.html) for more information.
To simplify the remaining steps in this README, it's a good idea to export some AWS-specific environment variables:

```bash
export AWS_SECRET_ACCESS_KEY=...
export AWS_ACCESS_KEY_ID=...
export AWS_REGION=...
```
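The CLI steps that follow assume these variables are set; a quick startup check can save a confusing failure later. A minimal sketch in plain Python (`missing_aws_vars` is a hypothetical helper, not part of any SDK):

```python
import os

# Variables the rest of this README assumes are exported.
REQUIRED_AWS_VARS = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION"]


def missing_aws_vars(environ=os.environ):
    """Return the names of any required AWS variables that are unset or empty."""
    return [name for name in REQUIRED_AWS_VARS if not environ.get(name)]
```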
## Agent Configuration

Configure your bot as an AgentCore agent:

```bash
agentcore configure -e bot.py
```

Follow the prompts to complete the configuration.

**IMPORTANT:** when asked whether to use "Direct Code Deploy" or "Container", choose "Container". Today there is an incompatibility between Pipecat and "Direct Code Deploy".

> For the curious: "Direct Code Deploy" requires that all bot dependencies have an `aarch64_manylinux2014` wheel, which is unfortunately not true for `numba`.
## Deployment to AgentCore Runtime

Deploy your configured bot to Amazon Bedrock AgentCore Runtime for production hosting:

```bash
agentcore launch --env OPENAI_API_KEY=... --env DEEPGRAM_API_KEY=... --env CARTESIA_API_KEY=... # -a <agent_name> (if multiple agents configured)
```

You should see commands for tailing logs printed to the console. Copy and save them for later use.

This is also the command to run after you've updated your bot code.
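With several API keys, assembling the repeated `--env` flags by hand is easy to get wrong. A small sketch that builds them from a dict (`build_env_flags` is a hypothetical helper, not part of the `agentcore` CLI):

```python
def build_env_flags(env):
    """Render a dict as repeated `--env KEY=VALUE` flags for `agentcore launch`."""
    return " ".join(f"--env {key}={value}" for key, value in env.items())
```

For example, `build_env_flags({"OPENAI_API_KEY": "sk-test"})` produces `--env OPENAI_API_KEY=sk-test`, which can be spliced into the launch command.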
## Running on AgentCore Runtime

Run your bot on AgentCore Runtime:

```bash
agentcore invoke '{"roomUrl": "https://<your-domain>.daily.co/<room-name>"}' # -a <agent_name> (if multiple agents configured)
```
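The JSON document passed to `agentcore invoke` is delivered to the `@app.entrypoint` function in `bot.py`, which reads the Daily room URL with `payload.get("roomUrl")`. A minimal sketch of that mapping (`parse_room_url` is a hypothetical helper for illustration):

```python
import json


def parse_room_url(raw_payload: str):
    """Extract the Daily room URL from an invoke payload, as bot.py's entrypoint does."""
    payload = json.loads(raw_payload)
    return payload.get("roomUrl")  # None if the key is absent
```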
## Observation

Paste the log tailing command you received when deploying your bot to AgentCore Runtime. It should look something like:

```bash
# Replace with your actual command
aws logs tail /aws/bedrock-agentcore/runtimes/bot1-0uJkkT7QHC-DEFAULT --log-stream-name-prefix "2025/11/19/[runtime-logs]" --follow
```
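Note that the `--log-stream-name-prefix` embeds the invocation date (`2025/11/19`). If you're scripting log tailing, the prefix for a given day can be built like this (a sketch; the `YYYY/MM/DD/[runtime-logs]` shape is inferred from the sample command above):

```python
from datetime import date


def runtime_log_prefix(day: date) -> str:
    """Build the dated log-stream prefix seen in AgentCore runtime log streams."""
    return f"{day:%Y/%m/%d}/[runtime-logs]"
```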
## Running Locally

You can also run your bot locally, using either the SmallWebRTC or Daily transport.

First, copy `env.example` to `.env` and fill in the values.

Then, run the bot:

```bash
# SmallWebRTC
PIPECAT_LOCAL_DEV=1 uv run python bot.py

# Daily
PIPECAT_LOCAL_DEV=1 uv run python bot.py -t daily -d
```

> Ideally you should be able to use `agentcore launch --local`, but it doesn't currently appear to be working (even with [this workaround](https://github.com/aws/bedrock-agentcore-starter-toolkit/issues/156) applied), at least not for this project.
## Additional Resources

For a comprehensive guide to getting started with Amazon Bedrock AgentCore, including detailed setup instructions, see the [Amazon Bedrock AgentCore Developer Guide](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html).

examples/aws-agentcore/bot.py

Lines changed: 184 additions & 0 deletions
```python
#
# Copyright (c) 2024–2025, Daily
#
# SPDX-License-Identifier: BSD 2-Clause License
#

import os

from bedrock_agentcore import BedrockAgentCoreApp
from dotenv import load_dotenv
from loguru import logger

from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.audio.turn.smart_turn.base_smart_turn import SmartTurnParams
from pipecat.audio.turn.smart_turn.local_smart_turn_v3 import LocalSmartTurnAnalyzerV3
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.audio.vad.vad_analyzer import VADParams
from pipecat.frames.frames import LLMRunFrame, TTSSpeakFrame
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.aggregators.llm_context import LLMContext
from pipecat.processors.aggregators.llm_response_universal import LLMContextAggregatorPair
from pipecat.runner.types import DailyRunnerArguments, RunnerArguments
from pipecat.runner.utils import create_transport
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.llm_service import FunctionCallParams
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.transports.base_transport import BaseTransport, TransportParams
from pipecat.transports.daily.transport import DailyParams

app = BedrockAgentCoreApp()

load_dotenv(override=True)


async def fetch_weather_from_api(params: FunctionCallParams):
    await params.result_callback({"conditions": "nice", "temperature": "75"})


async def fetch_restaurant_recommendation(params: FunctionCallParams):
    await params.result_callback({"name": "The Golden Dragon"})


# We store functions so objects (e.g. SileroVADAnalyzer) don't get
# instantiated. The function will be called when the desired transport gets
# selected.
transport_params = {
    "daily": lambda: DailyParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=LocalSmartTurnAnalyzerV3(params=SmartTurnParams()),
    ),
    "webrtc": lambda: TransportParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=LocalSmartTurnAnalyzerV3(params=SmartTurnParams()),
    ),
}


async def run_bot(transport: BaseTransport, runner_args: RunnerArguments):
    logger.info("Starting bot")

    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))

    tts = CartesiaTTSService(
        api_key=os.getenv("CARTESIA_API_KEY"),
        voice_id="71a7ad14-091c-4e8e-a314-022ece01c121",  # British Reading Lady
    )

    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))

    # You can also register a function_name of None to get all functions
    # sent to the same callback with an additional function_name parameter.
    llm.register_function("get_current_weather", fetch_weather_from_api)
    llm.register_function("get_restaurant_recommendation", fetch_restaurant_recommendation)

    @llm.event_handler("on_function_calls_started")
    async def on_function_calls_started(service, function_calls):
        await tts.queue_frame(TTSSpeakFrame("Let me check on that."))

    weather_function = FunctionSchema(
        name="get_current_weather",
        description="Get the current weather",
        properties={
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            },
            "format": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The temperature unit to use. Infer this from the user's location.",
            },
        },
        required=["location", "format"],
    )
    restaurant_function = FunctionSchema(
        name="get_restaurant_recommendation",
        description="Get a restaurant recommendation",
        properties={
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            },
        },
        required=["location"],
    )
    tools = ToolsSchema(standard_tools=[weather_function, restaurant_function])

    messages = [
        {
            "role": "system",
            "content": "You are a helpful LLM in a WebRTC call. Your goal is to demonstrate your capabilities in a succinct way. Your output will be spoken aloud, so avoid special characters that can't easily be spoken, such as emojis or bullet points. Respond to what the user said in a creative and helpful way.",
        },
    ]

    context = LLMContext(messages, tools)
    context_aggregator = LLMContextAggregatorPair(context)

    pipeline = Pipeline(
        [
            transport.input(),
            stt,
            context_aggregator.user(),
            llm,
            tts,
            transport.output(),
            context_aggregator.assistant(),
        ]
    )

    task = PipelineTask(
        pipeline,
        params=PipelineParams(
            enable_metrics=True,
            enable_usage_metrics=True,
        ),
        idle_timeout_secs=runner_args.pipeline_idle_timeout_secs,
    )

    @transport.event_handler("on_client_connected")
    async def on_client_connected(transport, client):
        logger.info("Client connected")
        # Kick off the conversation.
        await task.queue_frames([LLMRunFrame()])

    @transport.event_handler("on_client_disconnected")
    async def on_client_disconnected(transport, client):
        logger.info("Client disconnected")
        await task.cancel()

    runner = PipelineRunner(handle_sigint=runner_args.handle_sigint)

    await runner.run(task)


@app.entrypoint
async def bot(payload, context):
    """Main bot entry point compatible with AWS Bedrock AgentCore Runtime."""
    room_url = payload.get("roomUrl")
    transport = await create_transport(
        DailyRunnerArguments(room_url=room_url),
        transport_params,
    )
    await run_bot(transport, RunnerArguments())


if __name__ == "__main__":
    # NOTE: ideally we shouldn't have to branch for local dev vs AgentCore, but
    # local AgentCore container-based dev doesn't seem to be working, or at
    # least not for this project.
    if os.getenv("PIPECAT_LOCAL_DEV") == "1":
        # Running locally
        from pipecat.runner.run import main

        main()
    else:
        # Running on AgentCore Runtime
        app.run()
```
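As the comment in `bot.py` notes, `transport_params` maps transport names to zero-argument lambdas, so heavyweight objects like the VAD and turn analyzers are only constructed for the transport that's actually selected. The pattern in isolation (illustrative names, not Pipecat APIs):

```python
# Deferred construction: store factories, instantiate only the selected one.
constructed = []


class HeavyAnalyzer:
    """Stand-in for an expensive object like SileroVADAnalyzer."""

    def __init__(self, name):
        constructed.append(name)  # record which analyzers actually get built
        self.name = name


factories = {
    "daily": lambda: HeavyAnalyzer("daily"),
    "webrtc": lambda: HeavyAnalyzer("webrtc"),
}

selected = factories["webrtc"]()  # only the "webrtc" analyzer is built
```

Storing objects directly in the dict would construct both analyzers at import time; the factory dict defers that cost until `create_transport` picks one.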

examples/aws-agentcore/env.example

Lines changed: 4 additions & 0 deletions
```
OPENAI_API_KEY=...
DEEPGRAM_API_KEY=...
CARTESIA_API_KEY=...
DAILY_SAMPLE_ROOM_URL=https://<your-domain>.daily.co/<room-name>
```
Lines changed: 21 additions & 0 deletions
```toml
[project]
name = "agentcore-pipecat"
version = "0.1.0"
description = "Example for building Pipecat bots deployable to Amazon Bedrock AgentCore"
requires-python = ">=3.10"
dependencies = [
    "bedrock-agentcore",
    "pipecat-ai[webrtc,daily,silero,deepgram,openai,cartesia,local-smart-turn-v3,runner]",
]

[dependency-groups]
dev = [
    "bedrock-agentcore-starter-toolkit",
    "pyright>=1.1.404,<2",
    "ruff>=0.12.11,<1",
]

[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["I"]
```
Lines changed: 2 additions & 0 deletions
```
bedrock-agentcore
pipecat-ai[webrtc,daily,silero,deepgram,openai,cartesia,local-smart-turn-v3,runner]
```
