
feat: add Groq and Mistral providers and System Info sfn example#1202

Open
J926L wants to merge 1 commit into yomorun:master from J926L:feat/mcp-tools-extension

Conversation


@J926L J926L commented Mar 17, 2026

Description

This PR adds two new LLM providers and a practical Serverless LLM Function (sfn) example to the YoMo ecosystem:

  1. New LLM Providers:

    • Groq: Added support for Groq's ultra-fast Llama 3 and Mixtral models via its OpenAI-compatible API. This is highly beneficial for low-latency edge AI scenarios.
    • Mistral: Added support for Mistral AI's models.
  2. New Serverless LLM Function (sfn) Example:

    • System Information: A new sfn that provides real-time system resource metrics (CPU, Memory, OS) from edge nodes. This demonstrates how YoMo can be used to monitor geo-distributed edge infrastructure through an AI Agent.
    • Included a dedicated module and documentation for the example.

Changes

  • pkg/bridge/ai/provider/groq/provider.go: Implementation of the Groq provider.
  • pkg/bridge/ai/provider/mistral/provider.go: Implementation of the Mistral provider.
  • pkg/bridge/ai/config.go: Registered the new providers.
  • pkg/bridge/ai/config_test.go: Added unit tests for provider registration.
  • _examples/sfn-system-info/: New sfn example and documentation.

Verification

  • Ran the full test suite:
    === RUN TestParseZipperAddr
    === RUN TestParseZipperAddr/Valid_address
    === RUN TestParseZipperAddr/Valid_address_of_localhost
    3:12PM ERR invalid zipper address, return default addr=localhost default=localhost:9000 err="address localhost: missing port in address"
    === RUN TestParseZipperAddr/Invalid_address
    3:12PM ERR invalid zipper address, return default addr=invalid default=localhost:9000 err="address invalid: missing port in address"
    === RUN TestParseZipperAddr/Localhost
    === RUN TestParseZipperAddr/Unspecified_IP
    --- PASS: TestParseZipperAddr (0.00s)
    --- PASS: TestParseZipperAddr/Valid_address (0.00s)
    --- PASS: TestParseZipperAddr/Valid_address_of_localhost (0.00s)
    --- PASS: TestParseZipperAddr/Invalid_address (0.00s)
    --- PASS: TestParseZipperAddr/Localhost (0.00s)
    --- PASS: TestParseZipperAddr/Unspecified_IP (0.00s)
    === RUN TestParseConfig
    === RUN TestParseConfig/Config_not_found
    === RUN TestParseConfig/Config_format_error
    === RUN TestParseConfig/Valid_config
    === RUN TestParseConfig/Default_server_address
    === RUN TestParseConfig/malformaled_config
    --- PASS: TestParseConfig (0.00s)
    --- PASS: TestParseConfig/Config_not_found (0.00s)
    --- PASS: TestParseConfig/Config_format_error (0.00s)
    --- PASS: TestParseConfig/Valid_config (0.00s)
    --- PASS: TestParseConfig/Default_server_address (0.00s)
    --- PASS: TestParseConfig/malformaled_config (0.00s)
    === RUN TestNewProviderFromConfig
    === RUN TestNewProviderFromConfig/openai
    === RUN TestNewProviderFromConfig/groq
    === RUN TestNewProviderFromConfig/mistral
    --- PASS: TestNewProviderFromConfig (0.00s)
    --- PASS: TestNewProviderFromConfig/openai (0.00s)
    --- PASS: TestNewProviderFromConfig/groq (0.00s)
    --- PASS: TestNewProviderFromConfig/mistral (0.00s)
    === RUN TestResponseWriter
    --- PASS: TestResponseWriter (0.00s)
    === RUN TestOpSystemPrompt
    === RUN TestOpSystemPrompt/disabled
    === RUN TestOpSystemPrompt/overwrite_with_empty_system_prompt
    === RUN TestOpSystemPrompt/empty_system_prompt_should_not_overwrite
    === RUN TestOpSystemPrompt/overwrite_with_not_empty_system_prompt
    === RUN TestOpSystemPrompt/prefix_with_empty_system_prompt
    === RUN TestOpSystemPrompt/prefix_with_not_empty_system_prompt
    === RUN TestOpSystemPrompt/client_preferred_with_client_system_prompt
    === RUN TestOpSystemPrompt/client_preferred_without_client_system_prompt
    === RUN TestOpSystemPrompt/client_preferred_with_empty_system_prompt
    --- PASS: TestOpSystemPrompt (0.00s)
    --- PASS: TestOpSystemPrompt/disabled (0.00s)
    --- PASS: TestOpSystemPrompt/overwrite_with_empty_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/empty_system_prompt_should_not_overwrite (0.00s)
    --- PASS: TestOpSystemPrompt/overwrite_with_not_empty_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/prefix_with_empty_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/prefix_with_not_empty_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/client_preferred_with_client_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/client_preferred_without_client_system_prompt (0.00s)
    --- PASS: TestOpSystemPrompt/client_preferred_with_empty_system_prompt (0.00s)
    === RUN TestServiceInvoke
    === RUN TestServiceInvoke/invoke_with_tool_call
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"this is a system prompt"},{"role":"user","content":"hi"}],"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"this is a system prompt"},{"role":"user","content":"hi"},{"role":"assistant","tool_calls":[{"id":"call_abc123","type":"function","function":{"name":"get_current_weather","arguments":"{\n"location": "Boston, MA"\n}"}}]},{"role":"tool","content":"temperature: 31°C","tool_call_id":"call_abc123"}],"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    === RUN TestServiceInvoke/invoke_without_tool_call
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"this is a system prompt"},{"role":"user","content":"hi"}]}
    --- PASS: TestServiceInvoke (0.00s)
    --- PASS: TestServiceInvoke/invoke_with_tool_call (0.00s)
    --- PASS: TestServiceInvoke/invoke_without_tool_call (0.00s)
    === RUN TestServiceChatCompletion
    === RUN TestServiceChatCompletion/chat_with_tool_call
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"this is a system prompt"},{"role":"user","content":"How is the weather today in Boston, MA?"}],"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"this is a system prompt"},{"role":"user","content":"How is the weather today in Boston, MA?"},{"role":"assistant","tool_calls":[{"id":"call_abc123","type":"function","function":{"name":"get_current_weather","arguments":"{\n"location": "Boston, MA"\n}"}}]},{"role":"tool","content":"temperature: 31°C","tool_call_id":"call_abc123"}],"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    === RUN TestServiceChatCompletion/chat_without_tool_call
    [mock provider] request: {"model":"","messages":[{"role":"system","content":"You are an assistant."},{"role":"user","content":"How are you"}],"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    === RUN TestServiceChatCompletion/chat_with_tool_call_in_stream
    [mock provider] stream request: {"model":"","messages":[{"role":"system","content":"You are a weather assistant"},{"role":"user","content":"How is the weather today in Boston, MA?"}],"stream":true,"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    [mock provider] stream request: {"model":"","messages":[{"role":"system","content":"You are a weather assistant"},{"role":"user","content":"How is the weather today in Boston, MA?"},{"role":"assistant","tool_calls":[{"index":0,"id":"call_9ctHOJqO3bYrpm2A6S7nHd5k","type":"function","function":{"name":"get_current_weather","arguments":"{"location":"Boston, MA"}"}}]},{"role":"tool","content":"temperature: 31°C","tool_call_id":"call_9ctHOJqO3bYrpm2A6S7nHd5k"}],"stream":true,"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    === RUN TestServiceChatCompletion/chat_without_tool_call_in_stream
    [mock provider] stream request: {"model":"","messages":[{"role":"system","content":"You are a weather assistant"},{"role":"user","content":"How is the weather today in Boston, MA?"}],"stream":true,"tools":[{"type":"function","function":{"name":"get_current_weather","parameters":null}}]}
    === RUN TestServiceChatCompletion/deepseek-v3.2_stream_with_tools
    [mock provider] stream request: {"model":"","messages":[{"role":"system","content":"You are a weather assistant"},{"role":"user","content":"how is the weather in tokyo?"}],"stream":true,"tools":[{"type":"function","function":{"name":"market-get-weather","parameters":null}}]}
    [mock provider] stream request: {"model":"","messages":[{"role":"system","content":"You are a weather assistant"},{"role":"user","content":"how is the weather in tokyo?"},{"role":"assistant","tool_calls":[{"index":0,"id":"call_a6dea19b4490485bbdad7047","type":"function","function":{"name":"market-get-weather","arguments":"{"city": "Tokyo", "latitude": 35.6762, "longitude": 139.6503}"}}]},{"role":"tool","content":"temperature: 31°C","tool_call_id":"call_a6dea19b4490485bbdad7047"}],"stream":true,"tools":[{"type":"function","function":{"name":"market-get-weather","parameters":null}}]}
    --- PASS: TestServiceChatCompletion (0.00s)
    --- PASS: TestServiceChatCompletion/chat_with_tool_call (0.00s)
    --- PASS: TestServiceChatCompletion/chat_without_tool_call (0.00s)
    --- PASS: TestServiceChatCompletion/chat_with_tool_call_in_stream (0.00s)
    --- PASS: TestServiceChatCompletion/chat_without_tool_call_in_stream (0.00s)
    --- PASS: TestServiceChatCompletion/deepseek-v3.2_stream_with_tools (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai (cached)
    === RUN TestTimeoutCallSyncer
    --- PASS: TestTimeoutCallSyncer (0.00s)
    === RUN TestCallSyncer
    --- PASS: TestCallSyncer (0.00s)
    === RUN TestCaller
    --- PASS: TestCaller (0.00s)
    === RUN TestMockCaller
    --- PASS: TestMockCaller (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/caller (cached)
    === RUN TestMockProviderRequest
    [mock provider] request: {"model":"","messages":[{"role":"user","content":"hi, llm bridge"}]}
    [mock provider] stream request: {"model":"","messages":[{"role":"user","content":"hi, yomo"}]}
    --- PASS: TestMockProviderRequest (0.00s)
    === RUN TestMockProvider
    === RUN TestMockProvider/Name()
    === RUN TestMockProvider/GetChatCompletions()
    [mock provider] request: {"model":"gpt-4o-2024-05-13","messages":null}
    === RUN TestMockProvider/GetChatCompletionsStream()
    [mock provider] stream request: {"model":"","messages":null}
    --- PASS: TestMockProvider (0.00s)
    --- PASS: TestMockProvider/Name() (0.00s)
    --- PASS: TestMockProvider/GetChatCompletions() (0.00s)
    --- PASS: TestMockProvider/GetChatCompletionsStream() (0.00s)
    === RUN TestProviders
    === RUN TestProviders/ListProviders
    === RUN TestProviders/GetProvider
    === RUN TestProviders/GetProvider/ok
    === RUN TestProviders/GetProvider/name_is_empty
    === RUN TestProviders/GetProvider/not_found
    --- PASS: TestProviders (0.00s)
    --- PASS: TestProviders/ListProviders (0.00s)
    --- PASS: TestProviders/GetProvider (0.00s)
    --- PASS: TestProviders/GetProvider/ok (0.00s)
    --- PASS: TestProviders/GetProvider/name_is_empty (0.00s)
    --- PASS: TestProviders/GetProvider/not_found (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider (cached)
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/anthropic [no test files]
    === RUN TestNewProvider
    --- PASS: TestNewProvider (0.00s)
    === RUN TestAzureOpenAIProvider_Name
    --- PASS: TestAzureOpenAIProvider_Name (0.00s)
    === RUN TestAzureOpenAIProvider_GetChatCompletions
    provider_test.go:63: Post "https://yomo.openai.azure.com/openai/deployments/test/chat/completions?api-version=test-version": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    provider_test.go:66: Post "https://yomo.openai.azure.com/openai/deployments/test/chat/completions?api-version=test-version": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    --- PASS: TestAzureOpenAIProvider_GetChatCompletions (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/azopenai (cached)
    === RUN TestAzureAIFoundryProvider_Name
    --- PASS: TestAzureAIFoundryProvider_Name (0.00s)
    === RUN TestAzureOpenAIProvider_GetChatCompletions
    --- PASS: TestAzureOpenAIProvider_GetChatCompletions (0.00s)
    === RUN TestNewProvider
    --- PASS: TestNewProvider (0.00s)
    === RUN TestNewConfigFunction
    --- PASS: TestNewConfigFunction (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/azure-ai-foundry (cached)
    === RUN TestNewProvider
    --- PASS: TestNewProvider (0.00s)
    === RUN TestCerebrasProvider_Name
    --- PASS: TestCerebrasProvider_Name (0.00s)
    === RUN TestCerebrasProvider_GetChatCompletions
    provider_test.go:48: error, status code: 401, status: 401 Unauthorized, message: %!s(), body: {"message":"Wrong API Key","type":"invalid_request_error","param":"api_key","code":"wrong_api_key"}
    provider_test.go:52: error, status code: 401, status: 401 Unauthorized, message: %!s(), body: {"message":"Wrong API Key","type":"invalid_request_error","param":"api_key","code":"wrong_api_key"}
    --- PASS: TestCerebrasProvider_GetChatCompletions (1.79s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/cerebras (cached)
    === RUN TestNewProvider
    3:12PM ERR parameters are required cfEndpoint="" apiKey="" resource="" deploymentID=""
    --- PASS: TestNewProvider (0.00s)
    === RUN TestName
    --- PASS: TestName (0.00s)
    === RUN TestCloudflareAzureProvider_GetChatCompletions
    provider_test.go:65: Post "https://facker.gateway.ai.cloudflare.com/v1/111111111111111111/ai-cc-test/azure-openai/test/test/chat/completions?api-version=test-version": EOF
    provider_test.go:68: Post "https://facker.gateway.ai.cloudflare.com/v1/111111111111111111/ai-cc-test/azure-openai/test/test/chat/completions?api-version=test-version": EOF
    --- PASS: TestCloudflareAzureProvider_GetChatCompletions (3.54s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/cfazure (cached)
    === RUN TestCloudflareOpenAIProvider_Name
    --- PASS: TestCloudflareOpenAIProvider_Name (0.00s)
    === RUN TestNewProvider
    === RUN TestNewProvider/with_parameters
    === RUN TestNewProvider/with_environment_variables
    --- PASS: TestNewProvider (0.00s)
    --- PASS: TestNewProvider/with_parameters (0.00s)
    --- PASS: TestNewProvider/with_environment_variables (0.00s)
    === RUN TestCloudflareOpenAIProvider_GetChatCompletions
    provider_test.go:70: Post "https://faker.gateway.ai.cloudflare.com/v1/111111111111111111/ai-cc-test/openai/chat/completions": EOF
    provider_test.go:73: Post "https://faker.gateway.ai.cloudflare.com/v1/111111111111111111/ai-cc-test/openai/chat/completions": EOF
    --- PASS: TestCloudflareOpenAIProvider_GetChatCompletions (3.44s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/cfopenai (cached)
    === RUN TestDeepSeekProvider_Name
    --- PASS: TestDeepSeekProvider_Name (0.00s)
    === RUN TestDeepSeekProvider_GetChatCompletions
    provider_test.go:37: error, status code: 401, status: 401 Unauthorized, message: invalid character 'A' looking for beginning of value, body: Authentication Fails (governor)
    provider_test.go:41: error, status code: 401, status: 401 Unauthorized, message: invalid character 'A' looking for beginning of value, body: Authentication Fails (governor)
    --- PASS: TestDeepSeekProvider_GetChatCompletions (0.37s)
    === RUN TestDeepSeekProvider_GetChatCompletionsWithoutModel
    provider_test.go:62: error, status code: 401, status: 401 Unauthorized, message: invalid character 'A' looking for beginning of value, body: Authentication Fails (governor)
    provider_test.go:66: error, status code: 401, status: 401 Unauthorized, message: invalid character 'A' looking for beginning of value, body: Authentication Fails (governor)
    --- PASS: TestDeepSeekProvider_GetChatCompletionsWithoutModel (0.05s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/deepseek (cached)
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/gemini [no test files]
    === RUN TestGithubModelsProvider_Name
    --- PASS: TestGithubModelsProvider_Name (0.00s)
    === RUN TestNewProvider
    === RUN TestNewProvider/with_parameters
    === RUN TestNewProvider/with_environment_variables
    --- PASS: TestNewProvider (0.00s)
    --- PASS: TestNewProvider/with_parameters (0.00s)
    --- PASS: TestNewProvider/with_environment_variables (0.00s)
    === RUN TestGithubModelsProvider_GetChatCompletions
    --- PASS: TestGithubModelsProvider_GetChatCompletions (2.53s)
    === RUN TestGithubModelsProvider_GetChatCompletionsStream
    --- PASS: TestGithubModelsProvider_GetChatCompletionsStream (0.43s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/githubmodels (cached)
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/groq [no test files]
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/mistral [no test files]
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama [no test files]
    === RUN TestNewProvider
    --- PASS: TestNewProvider (0.00s)
    === RUN TestOpenAIProvider_Name
    --- PASS: TestOpenAIProvider_Name (0.00s)
    === RUN TestCloudflareOpenAIProvider_GetChatCompletions
    provider_test.go:60: Post "https://api.openai.com/v1/chat/completions": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    provider_test.go:63: Post "https://api.openai.com/v1/chat/completions": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    --- PASS: TestCloudflareOpenAIProvider_GetChatCompletions (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/openai (cached)
    ? github.com/yomorun/yomo/pkg/bridge/ai/provider/vertexai [no test files]
    === RUN TestVLlmProvider_Name
    --- PASS: TestVLlmProvider_Name (0.00s)
    === RUN TestVLlmProvider_GetChatCompletions
    provider_test.go:37: Post "http://127.0.0.1:8000/chat/completions": dial tcp 127.0.0.1:8000: connect: connection refused
    provider_test.go:41: Post "http://127.0.0.1:8000/chat/completions": dial tcp 127.0.0.1:8000: connect: connection refused
    --- PASS: TestVLlmProvider_GetChatCompletions (0.00s)
    === RUN TestVLlmProvider_GetChatCompletionsWithoutModel
    provider_test.go:62: Post "http://127.0.0.1:8000/chat/completions": dial tcp 127.0.0.1:8000: connect: connection refused
    provider_test.go:66: Post "http://127.0.0.1:8000/chat/completions": dial tcp 127.0.0.1:8000: connect: connection refused
    --- PASS: TestVLlmProvider_GetChatCompletionsWithoutModel (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/vllm (cached)
    === RUN TestXAIProvider_Name
    --- PASS: TestXAIProvider_Name (0.00s)
    === RUN TestXAIProvider_GetChatCompletions
    provider_test.go:41: Post "https://api.x.ai/v1/chat/completions": context deadline exceeded
    provider_test.go:45: Post "https://api.x.ai/v1/chat/completions": context deadline exceeded
    provider_test.go:53: Post "https://api.x.ai/v1/chat/completions": context deadline exceeded
    provider_test.go:57: Post "https://api.x.ai/v1/chat/completions": context deadline exceeded
    --- PASS: TestXAIProvider_GetChatCompletions (1.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/provider/xai (cached)
    === RUN TestRegister
    --- PASS: TestRegister (0.00s)
    PASS
    ok github.com/yomorun/yomo/pkg/bridge/ai/register (cached)
  • All tests passed, including the new provider tests.
  • Verified the build process for the new sfn example.

@J926L J926L requested a review from woorui as a code owner March 17, 2026 07:14
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly expands the YoMo ecosystem by integrating two new large language model providers, Groq and Mistral, which broadens the range of AI models available for use. Concurrently, it introduces a practical serverless function example designed to monitor system resources on edge nodes, showcasing how YoMo can facilitate real-time infrastructure health monitoring through AI agents in distributed environments.

Highlights

  • New LLM Providers: Added support for Groq and Mistral AI models, enhancing the platform's large language model capabilities with options like Groq's ultra-fast Llama3 and Mixtral models.
  • System Information SFN Example: Introduced a new serverless function (sfn) example that monitors real-time system resource metrics (CPU, Memory, OS) from edge nodes, demonstrating YoMo's utility in monitoring geo-distributed edge infrastructure through an AI Agent.


Changelog
  • _examples/sfn-system-info/go.mod
    • Added module dependencies for the new system info sfn.
  • _examples/sfn-system-info/main.go
    • Implemented the core logic for the system information sfn.
  • pkg/bridge/ai/config.go
    • Registered the new Groq and Mistral LLM providers.
  • pkg/bridge/ai/config_test.go
    • Added unit tests to validate the registration and instantiation of the new providers.
  • pkg/bridge/ai/provider/groq/provider.go
    • Implemented the Groq LLM provider.
  • pkg/bridge/ai/provider/mistral/provider.go
    • Implemented the Mistral LLM provider.
Activity
  • All existing unit tests passed, including new tests for the Groq and Mistral providers.
  • The build process for the new sfn example was successfully verified.

J926L added a commit to J926L/J926L that referenced this pull request Mar 17, 2026
@woorui woorui requested a review from fanweixiao March 17, 2026 07:17
Contributor

@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request introduces support for Groq and Mistral LLM providers and adds a new serverless function example for retrieving system information. The implementation of the new providers and their configuration is well-done and includes relevant tests. However, the new sfn-system-info example has some areas for improvement, particularly around handling default arguments and error logging. I've provided specific suggestions to enhance the robustness and clarity of the example code.

Comment on lines +40 to +42
var p LLMArguments
// deserialize the arguments from llm tool_call response
ctx.ReadLLMArguments(&p)
medium

The LLMArguments struct is initialized with zero values, resulting in IncludeCPU and IncludeMemory being false by default. This contradicts the jsonschema description, which specifies default=true. If ctx.ReadLLMArguments is called without arguments, it will return an error, and the handler will proceed with the false values. To align with the documented behavior, you should initialize LLMArguments with the desired default values before reading arguments from the context.

Suggested change
-	var p LLMArguments
-	// deserialize the arguments from llm tool_call response
-	ctx.ReadLLMArguments(&p)
+	p := LLMArguments{
+		IncludeCPU:    true,
+		IncludeMemory: true,
+	}
+	// deserialize the arguments from llm tool_call response
+	_ = ctx.ReadLLMArguments(&p)

Comment on lines +48 to +53
if p.IncludeCPU {
percent, err := cpu.Percent(time.Second, false)
if err == nil && len(percent) > 0 {
res += fmt.Sprintf("- CPU Usage: %.2f%%\n", percent[0])
}
}
medium

Errors from cpu.Percent are silently ignored. While this is an example, it's good practice to demonstrate proper error handling. Consider logging the error if retrieving CPU usage fails, which provides better visibility for debugging.

if p.IncludeCPU {
	percent, err := cpu.Percent(time.Second, false)
	if err != nil {
		slog.Warn("could not get CPU usage", "err", err)
	} else if len(percent) > 0 {
		res += fmt.Sprintf("- CPU Usage: %.2f%%\n", percent[0])
	}
}

Comment on lines +55 to +61
if p.IncludeMemory {
v, err := mem.VirtualMemory()
if err == nil {
res += fmt.Sprintf("- Memory Usage: %.2f%% (Used: %v MB, Total: %v MB)\n",
v.UsedPercent, v.Used/1024/1024, v.Total/1024/1024)
}
}
medium

The error returned by mem.VirtualMemory is silently ignored. It's a good practice, even in examples, to handle or at least log errors. This makes the function more robust and easier to debug.

if p.IncludeMemory {
	v, err := mem.VirtualMemory()
	if err != nil {
		slog.Warn("could not get memory usage", "err", err)
	} else {
		res += fmt.Sprintf("- Memory Usage: %.2f%% (Used: %v MB, Total: %v MB)\n",
			v.UsedPercent, v.Used/1024/1024, v.Total/1024/1024)
	}
}

@codecov

codecov bot commented Mar 17, 2026

Codecov Report

❌ Patch coverage is 9.52381% with 38 lines in your changes missing coverage. Please review.
✅ Project coverage is 47.91%. Comparing base (8ca34ed) to head (136be6f).

Files with missing lines Patch % Lines
pkg/bridge/ai/provider/groq/provider.go 0.00% 19 Missing ⚠️
pkg/bridge/ai/provider/mistral/provider.go 0.00% 19 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1202      +/-   ##
==========================================
- Coverage   48.09%   47.91%   -0.19%     
==========================================
  Files          93       95       +2     
  Lines        5510     5552      +42     
==========================================
+ Hits         2650     2660      +10     
- Misses       2664     2696      +32     
  Partials      196      196              


Collaborator
Hi, thanks for your PR. Please don't commit this _example.
