Dear all, I wonder whether Kong supports routing requests based on the ‘model’ field in the request body. We know that routing based on a ‘model’ field in the request header is easy to implement, and we have already achieved that. But now we want to conform to the OpenAI API specification, which puts the ‘model’ field only in the body, not in a header.

I'd like to share our current request path:

A request to our LLM API ----> LoadBalancer ----> Kong Gateway Service ----> Ingress (there are several ingresses, each corresponding to a different model; requests are routed to different ingresses based on the ‘model’ field in the header) ----> service behind the ingress (every service is bound to an ai-proxy KongPlugin, so the request is forwarded to an LLM service such as Qwen or GPT) ----> Qwen, GPT, etc.

I hope my question is clear enough.
  
Replies: 2 comments 1 reply
Based on my experience, Kong Gateway does not support routing based on the request body out of the box in its core routing rules. However, I have no idea whether the AI Gateway has plans to support it; could our AI developer consider this question? @oowl
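For context, core routes can match on attributes such as paths, hosts, methods, and headers, but there is no matcher for the request body. A minimal illustration of header-based matching in declarative config (the route name, header value, and service name are placeholders):

```yaml
# Illustrative declarative route: the core router matches paths, hosts,
# methods, and headers; it cannot inspect the request body.
routes:
- name: qwen-chat-route
  paths: [ "~/chat/completions$" ]
  headers:
    x-model: [ "qwen" ]            # header-based match only
  service: qwen-ingress-service    # placeholder service name
```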
  
You can do this with a pre-function. Set each of your "routes" to be on the same path, e.g.

`paths: [ "~/chat/completions$" ]`

but route by header value, e.g.

`x-model: gpt-4o`
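The pre-function itself could look roughly like the sketch below (a sketch, not a tested config: the plugin name is a placeholder, the header name x-model follows the example above, and the body is assumed to be JSON as in the OpenAI spec). It copies the `model` field from the request body into a request header so that header-based routing can keep working while clients only send `model` in the body:

```yaml
# Hypothetical KongPlugin resource; attach it to the shared route/service
# (e.g. via the konghq.com/plugins annotation) alongside ai-proxy.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: model-to-header          # placeholder name
plugin: pre-function
config:
  access:
  - |
    -- Parse the JSON request body (OpenAI-style requests carry "model" here).
    local body, err = kong.request.get_body("application/json")
    if err or type(body) ~= "table" then
      kong.log.warn("could not parse request body: ", err or "unexpected format")
      return
    end
    if body.model then
      -- Copy the model name into a header for header-based routing.
      kong.service.request.set_header("x-model", body.model)
    end
```

Two caveats worth checking for your setup: bodies large enough to be buffered to disk by nginx cannot be read this way, and an access-phase function runs after Kong's own router, so it sets the header for whatever sits behind Kong (your ingresses). If you want Kong's own routes to match on the header instead, the function would have to run before the router (e.g. a globally scoped pre-function in the rewrite phase); please verify against the PDK docs for your Kong version whether body parsing is available that early.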