Add Langfuse session and user metadata to OpenAI traces #174
Walkthrough

Introduces per-chat/session identifiers in LLM calls. Assistant::Responder now derives session_id and user_identifier from chat context and passes them to provider.chat_response. The Provider::LlmConcept and Provider::OpenAI method signatures are extended to accept these. The OpenAI implementation forwards the identifiers to payloads and Langfuse logging. Tests assert propagation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Responder as Assistant::Responder
    participant Provider as Provider::OpenAI
    participant Lfuse as Langfuse
    User->>Responder: handle_message(message)
    Responder->>Responder: chat_session_id(), chat_user_identifier()
    Responder->>Provider: chat_response(..., session_id, user_identifier)
    alt Streaming
        Provider->>Lfuse: start generation (session_id, user_id)
        Provider-->>Responder: stream tokens
        Provider->>Lfuse: end generation (usage, session_id, user_id)
    else Non-streaming
        Provider->>Lfuse: log generation (input/output, usage, session_id, user_id)
        Provider-->>Responder: full response
    end
    Responder-->>User: deliver response
    note over Responder,Provider: session_id = chat.id (string)<br/>user_identifier = SHA256(user_id)
```
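The identifier derivation in the diagram's note (session_id as the chat id string, user_identifier as a SHA256 digest of the user id) can be sketched as follows. This is an illustrative stand-in, assuming the behavior the walkthrough describes; `ResponderSketch` and the `Chat` struct are hypothetical names, not the repository's actual classes.

```ruby
require "digest"

# Hypothetical sketch of the identifier derivation described above.
Chat = Struct.new(:id, :user_id)

class ResponderSketch
  def initialize(chat)
    @chat = chat
  end

  # session_id: the chat's primary key, stringified
  def chat_session_id
    @chat&.id&.to_s
  end

  # user_identifier: SHA256 of the user id, so the raw id never reaches Langfuse
  def chat_user_identifier
    return unless @chat&.user_id
    Digest::SHA256.hexdigest(@chat.user_id.to_s)
  end
end

sketch = ResponderSketch.new(Chat.new(42, 7))
sketch.chat_session_id       # => "42"
sketch.chat_user_identifier  # => SHA256 hex digest of "7"
```

Hashing the user id before sending it to a third-party tracing service keeps personally identifiable values out of external systems while still allowing per-user grouping of traces.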
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/models/provider/openai.rb (1)
96-103: Include `user` for abuse/safety tracking; `session_id` isn't supported

Add `user: user_identifier` to the `client.responses.create` call to leverage OpenAI's built-in safety monitoring. Continue handling `session_id` in your own layer or via the Conversations/Assistants APIs.
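The shape of this suggestion can be illustrated with a stub in place of the real OpenAI client. Only the keyword structure is the point; `fake_client`, the model name, and the digest value are placeholders, not the project's actual code.

```ruby
# Stubbed illustration of forwarding the hashed user id via OpenAI's `user`
# request parameter. A real call would go through the OpenAI client instead.
fake_client = Object.new
def fake_client.responses_create(**kwargs)
  kwargs # the real client would POST these to the Responses API
end

payload = fake_client.responses_create(
  model: "gpt-4.1",               # placeholder model name
  input: "Hello",
  user: "sha256-of-user-id"       # enables OpenAI's per-user abuse attribution
)

# session_id is deliberately absent: the Responses API has no such field,
# so session tracking stays in the application/Langfuse layer.
```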
🧹 Nitpick comments (1)
app/models/assistant/responder.rb (1)
94-98: Remove redundant user_id presence check
The `belongs_to :user` association in Chat requires `user_id` by default, so `chat.user_id` will never be nil. Replace `return unless chat&.user_id` with `return unless chat` to simplify.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
* `app/models/assistant/responder.rb` (2 hunks)
* `app/models/provider/llm_concept.rb` (1 hunk)
* `app/models/provider/openai.rb` (4 hunks)
* `test/models/assistant_test.rb` (4 hunks)
🧰 Additional context used
📓 Path-based instructions (12)
app/models/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Domain models should not call Provider::Registry directly; use a Provided concern within the model’s namespace to select providers and expose convenience methods
Use ActiveRecord validations for forms and complex domain constraints.
app/models/**/*.rb: Place business logic in POROs and model classes under app/models
Models should answer questions about themselves (e.g., prefer account.balance_series over service objects)
Implement complex validations and business logic with ActiveRecord validations
Model-level validations may mirror DB constraints but are not strictly required
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
app/models/provider/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-design.mdc)
Concrete provider classes must be under the Provider:: namespace, inherit from Provider, wrap calls with with_provider_response, and raise on invalid/unavailable data
Files:
`app/models/provider/openai.rb`, `app/models/provider/llm_concept.rb`
app/**/*.rb
📄 CodeRabbit inference engine (AGENTS.md)
Place Rails application Ruby code (models, controllers, services, jobs, mailers, components) under app/
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
**/*.rb
📄 CodeRabbit inference engine (AGENTS.md)
Ruby style: 2-space indentation; snake_case for methods/variables; CamelCase for classes/modules
Files:
`app/models/provider/openai.rb`, `test/models/assistant_test.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
**/*.{rb,js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{rb,js,jsx,ts,tsx}: Make changes atomic, testable, and explain their impact briefly in code suggestions.
Respect existing tests and add tests when changing critical logic.
Files:
`app/models/provider/openai.rb`, `test/models/assistant_test.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
**/app/**/*.{rb,erb,js,css}
📄 CodeRabbit inference engine (CLAUDE.md)
Push Rails to its limits before adding new dependencies; a strong technical/business reason is required for new dependencies.
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
**/app/models/**/*.rb
📄 CodeRabbit inference engine (CLAUDE.md)
Business logic should primarily reside in models; use concerns and POROs for organization.
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
**/*.{rb,erb,haml,slim}
📄 CodeRabbit inference engine (.cursor/rules/general-rules.mdc)
**/*.{rb,erb,haml,slim}: Use `Current.user` for the current user; do not use `current_user`
Use `Current.family` for the current family; do not use `current_family`
Ignore i18n methods; hardcode strings in English for now (do not useI18n.t,t, or similar)
Files:
`app/models/provider/openai.rb`, `test/models/assistant_test.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
app/{helpers,models}/**/*.rb
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Format currencies, numbers, and dates on the server side (Ruby) before sending to the client
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
app/{models,controllers,views}/**/*.{rb,erb}
📄 CodeRabbit inference engine (.cursor/rules/project-conventions.mdc)
Avoid N+1 queries
Files:
`app/models/provider/openai.rb`, `app/models/assistant/responder.rb`, `app/models/provider/llm_concept.rb`
test/**/*_test.rb
📄 CodeRabbit inference engine (AGENTS.md)
Name Minitest files with *_test.rb and mirror the app/ structure under test/
test/**/*_test.rb: ALWAYS use Minitest + fixtures + Mocha for tests; NEVER RSpec or FactoryBot.
Use Mocha for mocking in tests when necessary.
Use VCR for external API tests.
test/**/*_test.rb: Always use Minitest for tests; do not use RSpec
Do not use factories (e.g., FactoryBot) in tests; rely on fixtures instead
For tests needing many records, use Rails helpers to construct data and inline the creation in the test
Only write tests for critical and important code paths
Avoid tests that merely verify framework/ActiveRecord behavior
Test boundaries correctly: for queries, assert returned values; for commands, assert collaborators are called with correct params
Never test the implementation details of one class in another class’s test suite
Use the mocha gem for stubs and mocks
Prefer OpenStruct for mock instances; use a mock class for complex cases
Only mock what’s necessary; don’t mock return values unless they are under test
Files:
test/models/assistant_test.rb
**/test/**/*.rb
📄 CodeRabbit inference engine (CLAUDE.md)
**/test/**/*.rb: ALWAYS use Minitest and fixtures for Ruby tests; NEVER RSpec or factories.
Only test critical and important code paths; system tests should be used sparingly for critical user flows.
Files:
test/models/assistant_test.rb
🧬 Code graph analysis (2)
app/models/provider/openai.rb (1)
app/models/provider/llm_concept.rb (1)
`chat_response` (21-33)
app/models/provider/llm_concept.rb (1)
app/models/provider/openai.rb (1)
`chat_response` (61-133)
🔇 Additional comments (11)
test/models/assistant_test.rb (3)
15-16: LGTM! Test setup correctly mirrors the production implementation. The expected values match the implementation in `app/models/assistant/responder.rb`: session_id as the chat ID string and user_identifier as the SHA256 digest of the user_id.
51-52: LGTM! Assertions verify metadata propagation in basic response flow. The test correctly asserts that session_id and user_identifier are passed through the provider.chat_response call.
105-106: LGTM! Comprehensive coverage of metadata propagation across multiple provider calls. The assertions correctly verify that session_id and user_identifier propagate through both provider calls in the tool function sequence (initial function request + follow-up response).
Also applies to: 116-117
app/models/assistant/responder.rb (3)
70-72: LGTM! Session and user metadata correctly passed to LLM. The new parameters are appropriately added to the chat_response call, enabling per-session tracking in Langfuse.
90-92: LGTM! Safe session ID extraction. The safe-navigation operators correctly handle the case where chat or id might be nil.
100-102: LGTM! Appropriate memoization of chat. Memoizing the chat reference avoids repeated database queries and is consistent with Rails conventions.
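The memoization pattern praised here can be sketched in isolation. `ChatHolder` and `find_chat` are hypothetical names for illustration; the point is that `@chat ||=` runs the lookup at most once.

```ruby
# Minimal sketch of the ||= memoization idiom: the underlying lookup executes
# once, and every later call reuses the cached result.
class ChatHolder
  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  def chat
    @chat ||= find_chat
  end

  private

  # Stand-in for the database query the real code would issue
  def find_chat
    @lookups += 1
    { id: 42 }
  end
end
```

One caveat with `||=`: if the lookup can legitimately return nil or false, the memoization re-runs on every call; `defined?(@chat) ? @chat : (@chat = find_chat)` avoids that edge case.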
app/models/provider/llm_concept.rb (1)
21-31: LGTM! Interface signature correctly extended for session tracking. The addition of session_id and user_identifier parameters with nil defaults maintains backward compatibility while enabling per-session tracking in concrete implementations.
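The backward-compatibility point can be shown with a standalone sketch: new keyword arguments defaulting to nil leave existing call sites untouched. The method body here is illustrative; the real signature lives in Provider::LlmConcept.

```ruby
# Sketch of the extended signature shape: nil-defaulted keywords keep old
# callers working while letting new callers pass tracking metadata.
def chat_response(prompt, model:, session_id: nil, user_identifier: nil)
  { prompt: prompt, model: model,
    session_id: session_id, user_identifier: user_identifier }
end

old_style = chat_response("hi", model: "gpt-4.1")                 # pre-change caller
new_style = chat_response("hi", model: "gpt-4.1",
                          session_id: "42", user_identifier: "abc123")
```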
app/models/provider/openai.rb (4)
61-71: LGTM! Signature correctly implements the LlmConcept interface.The chat_response method signature matches the interface defined in
app/models/provider/llm_concept.rbwith the new session_id and user_identifier parameters.
110-117: LGTM! Session metadata correctly logged in streaming path. The streaming path properly forwards session_id and user_identifier to Langfuse generation logging.
121-129: LGTM! Session metadata correctly logged in non-streaming path. The non-streaming path properly forwards session_id and user_identifier to Langfuse generation logging.
144-165: LGTM! Langfuse integration correctly enhanced with session tracking. The `log_langfuse_generation` method properly:
- Accepts session_id and user_identifier parameters
- Maps user_identifier to user_id for Langfuse API
- Includes metadata in both trace() and generation() calls
- Maintains existing error handling
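The field mapping described in this list can be sketched as plain hash construction. The hash shapes and method name are illustrative assumptions based on the summary above, not the langfuse gem's actual API.

```ruby
# Sketch of the payload mapping: user_identifier is renamed to user_id for
# Langfuse, and both the trace and the generation carry session metadata.
def langfuse_payloads(input:, output:, usage:, session_id:, user_identifier:)
  trace = {
    name: "chat_response",
    session_id: session_id,
    user_id: user_identifier    # Langfuse's field name is user_id
  }
  generation = trace.merge(input: input, output: output, usage: usage)
  [trace, generation]
end
```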
@coderabbitai generate docstrings

✅ Actions performed: Docstrings generation started.
Docstrings generation was requested by @jjmata in #174 (comment). The following files were modified:
* `app/models/assistant/responder.rb`
* `app/models/provider/llm_concept.rb`
* `app/models/provider/openai.rb`
Note: Generated docstrings for this pull request at #175
Note: Generated docstrings for this pull request at #176

Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68daafd5d41c83328af563e99a785fd4
Summary by CodeRabbit
* Chores
* Tests
* Refactor