feat!: langchainjs 1.0 #2444
Conversation
Bug: Module Path Mismatch Breaks Test Mocks
The vi.mock("@langchain/openai") call targets the wrong module path. The test imports ChatOpenAI and OpenAIEmbeddings from @langchain/openaiV0.3 (line 31), but the mock is registered for @langchain/openai. As a result the mock never applies, so the tests run against the real implementation instead of the mocked one, breaking the test setup and potentially causing test failures or unexpected behavior.
js/packages/openinference-instrumentation-langchain/test/langchainV3.test.ts (lines 67-68, commit 5308863):

vi.mock("@langchain/openai", async () => {
@arizeai/openinference-core
@arizeai/openinference-genai
@arizeai/openinference-instrumentation-anthropic
@arizeai/openinference-instrumentation-bedrock
@arizeai/openinference-instrumentation-bedrock-agent-runtime
@arizeai/openinference-instrumentation-beeai
@arizeai/openinference-instrumentation-langchain
@arizeai/openinference-instrumentation-langchain-v0
@arizeai/openinference-instrumentation-mcp
@arizeai/openinference-instrumentation-openai
@arizeai/openinference-mastra
@arizeai/openinference-semantic-conventions
@arizeai/openinference-vercel
cephalization left a comment:
Nice, the agent example looks good to me when I run it myself. Not sure how non-agent use cases work, but I assume the agent uses normal LangChain calls under the hood.
{
  "extends": "../../tsconfig.base.esnext.json",
  "compilerOptions": {
    "outDir": "dist/esnext",
I don't actually think that esnext is an accessible dist location, so I think we can nuke the esnext build step. CommonJS apps will access the CommonJS dist, and ESM apps access the ESM dist; I don't know which runtimes would actually access the esnext dist. We can shrink the package size by quite a bit by dropping esnext builds.
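For reference, a sketch of what the trimmed publish layout might look like, assuming a conventional dual CJS/ESM exports map; the field values are illustrative, not copied from this package's actual package.json:

```json
{
  "main": "dist/src/index.js",
  "module": "dist/esm/index.js",
  "types": "dist/src/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/src/index.d.ts",
      "import": "./dist/esm/index.js",
      "require": "./dist/src/index.js"
    }
  }
}
```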
const agent = createAgent({
  model: "gpt-4o-mini", // Using a more accessible model
  tools: [getWeather, calculateMath, searchInfo],
});
Bug: Type Mismatch in createAgent Model Parameter
The createAgent function is called with model: "gpt-4o-mini" as a string parameter, but based on LangChain.js 1.0 documentation, createAgent expects an instantiated model object (like ChatOpenAI), not a model name string. This will cause a runtime error when the agent attempts to use the model for inference.
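A minimal sketch of the fix this comment implies, assuming an instantiated ChatOpenAI is passed in place of the bare model-name string (the tool names come from the example above):

```ts
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

// Instantiate the chat model explicitly instead of passing "gpt-4o-mini"
// as a plain string, which the comment reports createAgent does not accept.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const agent = createAgent({
  model,
  tools: [getWeather, calculateMath, searchInfo], // tools defined earlier in the example
});
```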
Bug: Metadata Tests: Fix Convention Drift
The METADATA import was removed from @arizeai/openinference-semantic-conventions but the tests still reference metadata as a hardcoded string literal (e.g., metadata: "{}") in multiple assertions. This creates inconsistency with the semantic conventions pattern used elsewhere and makes the code more fragile if the metadata key name ever changes in the semantic conventions package.
js/packages/openinference-instrumentation-langchain/test/tracer.test.ts (lines 6-7, commit df8b058):

import {
  MESSAGE_FUNCTION_CALL_NAME,
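A minimal sketch of the suggested cleanup, assuming the conventions package exposes a SemanticConventions.METADATA constant; the span variable stands in for whatever span the test captures:

```ts
import { expect } from "vitest";
import { SemanticConventions } from "@arizeai/openinference-semantic-conventions";

// Stand-in for the span captured by the test's in-memory exporter.
declare const span: { attributes: Record<string, unknown> };

// Key the assertion off the convention constant instead of a hardcoded
// "metadata" string, so a rename in the conventions package surfaces as
// a compile or test failure rather than a silently stale literal.
expect(span.attributes).toMatchObject({
  [SemanticConventions.METADATA]: "{}",
});
```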
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: "o3-mini",
reasoningEffort: "medium",
reasoning: { effort: "medium" },
Bug: ChatOpenAI API Key Parameter Inconsistency
The ChatOpenAI constructor still uses openAIApiKey parameter while the rest of the codebase (including all test files in this PR) was updated to use apiKey for @langchain/openai version 1.1.0. This inconsistency suggests the example wasn't fully migrated and may fail if openAIApiKey was deprecated in favor of apiKey in the newer version.
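A minimal sketch of the fully migrated constructor call, under the comment's assumption that apiKey replaces openAIApiKey in @langchain/openai 1.1.0:

```ts
import { ChatOpenAI } from "@langchain/openai";

// apiKey replaces the older openAIApiKey option; the reasoning block
// matches the shape already introduced in the diff above.
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  modelName: "o3-mini",
  reasoning: { effort: "medium" },
});
```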
resolves #2430
