OPEA - AmazonBedrock integration #1031
Conversation
Hi @Vihanth, the comps/llms/text-generation module has been refactored for v1.2. Could you please update your code to align with these changes? Thanks.
Hi @Vihanth, OPEA is approaching its code freeze. Could you please check the CI failures?
@joshuayao Sorry about the delay; we had a bit of a staffing shake-up, but we have someone on this again. How much time do we have before the code freeze?
Hi @smguggen, the code freeze is on Friday. Also, it seems to be a CI issue with a missing credential, so please feel free to reach out if you need any help resolving the issue or merging this PR.
@srinarayan-srikanthan That's exactly what was causing the delay: our security reviewer was not a fan of storing long-lived credentials to run the tests, so we set up OIDC instead. I think we can get that together today, so everything should be handled well before Friday. Thanks!
I can help to add the new secrets for CI. But my concern is that this PR has not been adapted to the new code structure.
@chensuyue Those changes are underway. Can you share the name of the key for the secret? The secret itself was shared with @srinarayan-srikanthan.
@chensuyue We will need to use the secret key to get AWS credentials. Once we have it, we will need to update PR 1165 to use that secret key and ensure that PR is merged before this one.
@srinarayan-srikanthan sent me the AWS_IAM_ROLE_ARN, and I have added it to the secrets. But what about those 3 parameters? It seems the test also needs them.
The AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN are automatically set by the configure-aws-credentials action. That action takes an IAM Role ARN as input and sets those three environment variables for subsequent steps. See the documentation for configure-aws-credentials.
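For clarity, the flow described above would look roughly like this in a workflow file. The AWS_IAM_ROLE_ARN secret name matches the one mentioned in this thread; the region, job name, and step layout are assumptions for illustration:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the OIDC token exchange
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_IAM_ROLE_ARN }}
          aws-region: us-east-1
      # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
      # are now set as environment variables for subsequent steps.
      - run: aws sts get-caller-identity
```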
@chensuyue Hopefully the comment above answers your question and the question on PR 1165. That pull request must be merged so that the workflow checks in this PR can pass; my comment above should address the comment on that PR as well.
This PR has been merged. But looking at the CI, there is another issue that needs to be resolved.
Regarding the
Hi @jonminkin97, please delete the
This is for the new file structure refactoring of this release. All of the llm-related components will be integrated into
Switch the target branch for the test. After the tests pass, we can switch back to the main branch.
I have set the permission in
Add id-token permissions for workflow. Signed-off-by: Jonathan Minkin <minkinj@amazon.com>
* Enable AWS access when testing microservices
* Add id-token permissions to workflow to allow OIDC auth
Signed-off-by: Jonathan Minkin <minkinj@amazon.com>
* Delete intent detection: remove comps that use the old code structure. Signed-off-by: Jonathan Minkin <minkinj@amazon.com>
* Trigger new run of test
* Re-trigger new run of test
Description
This PR adds Amazon Bedrock LLMs to OPEA text-generation.
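To illustrate the integration, here is a minimal sketch of how a text-generation request might be shaped for a Bedrock-hosted Claude model. The helper name, model ID, and default parameters are assumptions for illustration, not this PR's actual code:

```python
import json

# Hypothetical helper sketching the request body for a Claude-on-Bedrock
# invoke_model call; names and defaults are placeholders, not the PR's code.
def build_bedrock_body(prompt: str, max_tokens: int = 512, temperature: float = 0.1) -> str:
    """Serialize a Bedrock messages-API payload as a JSON string."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })

# With AWS credentials available (e.g. via the OIDC-assumed role), the call
# would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=build_bedrock_body("Hello"),
#   )
```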
Issues
n/a
Type of change
Tests
Tested by building the Docker image and deploying it with Docker Compose.
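A local verification setup along these lines might look like the following compose fragment. The service name, image tag, and port are assumptions for illustration, not this PR's actual files:

```yaml
# Hypothetical compose service for local testing of the Bedrock LLM
# microservice; values below are placeholders.
services:
  llm-bedrock:
    image: opea/llm-textgen:latest
    ports:
      - "9000:9000"
    environment:
      AWS_REGION: us-east-1
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
```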