
easy-a2a

Turn any OpenAI-compatible API (OpenAI, HuggingFace, OpenRouter, local models, etc.) into an A2A agent.

No frills, frameworks or vendor lock-in.

The Agent2Agent Protocol (A2A) is a standardized way to send messages and share context between AI Agents.

Quickstart

import a2a, { Task, getContent } from "easy-a2a";

const agent = a2a({
  baseURL: "https://your-api.com/api/v1",
  apiKey: "your-api-key",
})
  .ai("You are a helpful assistant.")
  .createAgent({
    agentCard: "MyAgent",
  });

const result: Task = await agent.sendMessage("Hello!");
console.log(getContent(result));

Installation

npm install easy-a2a

Features

  • Multi-Agent: Easily connect to other agents
  • Build Fast: Define your agent's behaviour with chainable methods

Examples:

Multi-Agent:

Create multiple AI agents in just a few lines of code. Reuse the same OpenAI client to avoid boilerplate.

import a2a, { getContent } from "easy-a2a";
import { OpenAI } from "openai";

// Reuse the same client
const client = new OpenAI({ apiKey: "your-api-key" });

const cooking_agent = a2a(client)
  .ai("You're a helpful assistant that makes great cooking recomendations.")
  .createAgent({
    agentCard: {
      name: "Cooking Agent",
      description: "An agent thats great at making cooking recomentations.",
    },
  });

const hotel_agent = a2a(client)
  .ai("You are a helpful assistant that makes great hotel recomendations")
  .createAgent({ agentCard: "Hotel Agent" });

// Build an agent that can call other agents
const agent = a2a(client, [
  { name: "Cooking Agent", agent: cooking_agent },
  { name: "Hotel Agent", agent: hotel_agent },
])
  .ai("Use your agents to fulfill the users request.")
  .createAgent({ agentCard: "MainAgent" });

// Use your agent with the A2A protocol
await agent.sendMessage("What should I have for dinner tomorrow night?");

Workflow Builder:

Add steps to process information and keep the caller updated.

const agent = a2a({ apiKey: "your-api-key" })
  .text(({ content: userMessage }) => `Message Received: ${userMessage}`)
  .ai("Use the least amount of tokens to process the user's request.") // The ChatCompletion is passed to the next step
  .text(({ args, contextId }) => {
    const completion = args?.[0];
    if ((completion?.usage?.total_tokens ?? 0) > 1000) {
      return `You've run out of tokens for this request: ${contextId}`;
    }
    ...
  })
  .createAgent({ agentCard: "MyAgent" });

// Stream the responses
const stream = agent.streamMessage("Hello, World!");
for await (const update of stream) {
  console.log(getContent(update));
}

Customization:

Customize each .ai call with familiar interfaces (i.e. OpenAI.ChatCompletionCreateParamsNonStreaming and OpenAI.RequestOptions).

const agent = a2a({ apiKey: "your-api-key" })
  .ai(
    {
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Say Hello World! to every new user" },
      ],
      max_completion_tokens: 5000,
      tools: ...
    },
    {
      maxRetries: 2,
      includeHistory: false,
    }
  )
  .createAgent({ agentCard: "MyAgent" });

Deployment:

Quickly turn your agent into an Express server so it can receive A2A requests from anywhere.

import { createAgentServer } from "easy-a2a";
import { agent } from "./my-agent.js";

const { app } = createAgentServer({
  agent: agent,
  basePath: "/a2a",
});

app.listen(3000, () => {
  console.log("MyAgent is running on http://localhost:3000/a2a");
});

Core Classes & Functions

a2a(client, agents?)

Creates a new agent builder from an OpenAI client and, optionally, additional agents it can call.

Parameters:

  • client: An OpenAI client instance, or the options used to create one (e.g. { baseURL, apiKey }), as shown in the Quickstart example.
  • agents?: An optional array of { name, agent } entries describing other agents this agent can call, as shown in the Multi-Agent example.
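
Both client forms appear in the examples above; a minimal sketch:

import a2a from "easy-a2a";
import { OpenAI } from "openai";

// Pass client options directly...
const fromOptions = a2a({
  baseURL: "https://your-api.com/api/v1",
  apiKey: "your-api-key",
});

// ...or reuse an existing OpenAI client and register other agents it can call.
const client = new OpenAI({ apiKey: "your-api-key" });
const helper = a2a(client).ai("You are a helpful assistant.").createAgent({ agentCard: "Helper" });
const fromClient = a2a(client, [{ name: "Helper", agent: helper }]);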

.ai(body, options?)

Adds a step to the agent's workflow that triggers a call to the target API.

Parameters:

  • body: A system prompt string or a full OpenAI.ChatCompletionCreateParamsNonStreaming object.
  • options?: OpenAI.RequestOptions plus the following flags, which control the messages array passed to the LLM (see the sketch after this list):
    • includeHistory:boolean: Include the Task.history as { role: "assistant" | "user"; content: string; }[].
    • includeArgs:boolean: Include the args (see below) as { role: "system" as const, content: JSON.stringify(args) }.
    • includeContent:boolean: Include the content (see below) as { role: "user"; content: "content"}.
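
A minimal sketch combining these flags (the prompts and agent card are placeholders):

const agent = a2a({ apiKey: "your-api-key" })
  .text(({ content }) => ({
    parts: [`Received: ${content}`],
    args: [{ receivedAt: new Date().toISOString() }], // forwarded to the next step
  }))
  .ai("Summarize the request for the user.", {
    maxRetries: 2,
    includeHistory: true, // add Task.history as assistant/user messages
    includeArgs: true, // add the previous step's args as a system message
    includeContent: true, // add the incoming message text as a user message
  })
  .createAgent({ agentCard: "SummaryAgent" });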

Steps

Steps are middleware-like components that allow you to quickly create custom AgentExecutors without having to deal with protocol-related boilerplate.

Each step expects to be passed a StepFunction -

Parameters:

  • content:string: The main message text.
  • context:Context: An object containing details about the agent's runtime -
    • contextId:string: A unique identifier representing the current invocation.
    • isCancelled():boolean: Returns a boolean indicating whether a request was received to cancel the current Task.
    • State():Task: Returns a Task indicating the current state of the agent's execution.
  • command: A MessageSendParams object that contains the entire A2A request.
  • args:unknown[]: An array of arguments passed from the previous step.

Expects:

  • Part | Part[] | Promise<Part> | Promise<Part[]> | { parts: Part[]; args: unknown[]; } | Promise<{ parts: Part[]; args: unknown[]; }>

Example Format:

async ({ command, context, content, args }) => {
  return {
    parts: [],
    args: [],
  };
};

.text

A step that expects every returned Part to be a string.
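
A minimal sketch of a .text step using the Context helpers described above (names are placeholders):

const agent = a2a({ apiKey: "your-api-key" })
  .text(async ({ content, context, args }) => {
    if (context.isCancelled()) {
      return "Request was cancelled.";
    }
    return {
      parts: [`Echoing: ${content}`], // every Part is a plain string
      args: [...(args ?? []), content], // forwarded to the next step
    };
  })
  .createAgent({ agentCard: "EchoAgent" });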

.file

A step that expects every returned Part to be a FilePart:

{
  name?: string;
  mimeType?: string;
  bytes?: string; // Cannot contain uri if this field is included
  uri?: string; // Cannot contain bytes if this field is included
}
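
A minimal sketch of a .file step (assumes bytes is base64-encoded file content, per the A2A Part shape):

const agent = a2a({ apiKey: "your-api-key" })
  .file(({ content }) => ({
    name: "report.txt",
    mimeType: "text/plain",
    bytes: Buffer.from(`Report for: ${content}`).toString("base64"),
  }))
  .createAgent({ agentCard: "ReportAgent" });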

.data

A step that expects every returned Part to be a Record<string,unknown>.
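
A minimal sketch of a .data step returning structured data:

const agent = a2a({ apiKey: "your-api-key" })
  .data(({ content }) => ({
    query: content,
    receivedAt: new Date().toISOString(),
  }))
  .createAgent({ agentCard: "DataAgent" });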

convertExecutor(executor)

Converts an A2A.AgentExecutor (from @a2a-js/sdk) into an easy-to-use AgentEngine for agent creation.
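
A minimal sketch (the executor module is hypothetical):

import { convertExecutor } from "easy-a2a";
import { myExecutor } from "./my-executor.js"; // an existing A2A.AgentExecutor

// Wrap the low-level executor so it can be used wherever easy-a2a expects an AgentEngine.
const engine = convertExecutor(myExecutor);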

License

Apache-2.0

Resources