Resolves bad rebase for artifact pulling #2473
Conversation
Reviewer's Guide

Refactors OCI transport handling into a strategy-based architecture to reliably distinguish and operate on OCI images vs artifacts (including HTTP-based artifact snapshots), tightens engine/version handling via SemVer, and updates CLI behavior, mounts, and tests to fix bad artifact pulling/rebase issues and improve robustness of listing, removal, and quadlet/kube generation.
Summary of Changes

Hello @ieaves, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly overhauls the OCI artifact pulling and management system, addressing previous rebase conflicts and enhancing the overall reliability and standardization of artifact interactions. By introducing a flexible strategy pattern and adopting CNCF specifications, the system is now more adaptable to different container environments and provides a more consistent user experience for model lifecycle operations. The integration of semantic versioning further refines version-dependent logic, ensuring compatibility and correct behavior across various container engine versions.
Force-pushed from 3be730a to d3f9131.
Hey - I've found 4 issues, and left some high level feedback:
- The new OCI.remove implementation delegates to the strategy and returns a boolean, but the CLI _rm_model caller no longer checks the return value; consider either raising a KeyError/CalledProcessError on failure or having _rm_model handle a False return so failed removals are surfaced to the user instead of silently ignored.
- engine_version now returns a SemVer instance instead of a string; please ensure all existing callers (including any not touched in this PR) have been updated to avoid string comparisons or other assumptions about the previous return type.
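The second bullet is worth being concrete about: lexicographic string comparison silently mis-orders versions as soon as any component reaches two digits, which is presumably why the PR moves to a SemVer type. A minimal illustrative sketch (`parse_semver` is a hypothetical stand-in, not the PR's actual SemVer class):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse 'MAJOR.MINOR.PATCH' into a tuple that compares numerically."""
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    return (major, minor, patch)

# String comparison is lexicographic and gets version ordering wrong:
assert ("4.10.0" < "4.9.0") is True               # '1' < '9' character-wise
# Tuple (SemVer-style) comparison orders components numerically:
assert parse_semver("4.10.0") > parse_semver("4.9.0")
```

This is exactly the class of bug that survives until an engine ships a two-digit minor release, so auditing untouched callers for leftover string comparisons is worthwhile.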
```python
@lru_cache
def get_strategy(
```
issue (bug_risk): Using `lru_cache` with a `ModelStore` argument will fail because `ModelStore` is likely unhashable.

Because `lru_cache` uses function arguments as cache keys, this will raise `TypeError: unhashable type` on the first call if `ModelStore` doesn't define `__hash__`. If you need caching, restrict the cache key to hashable inputs (e.g. `(engine, engine_name, kind)`) and pass `model_store` via another mechanism (closure, instance attribute), or use your own dict keyed by a stable identifier such as the model store root path instead of `lru_cache`.
No – that comment was overcautious here.

`ModelStore` doesn't define `__eq__` or `__hash__`, so it inherits `object.__hash__` and is hashable by identity. That means:

- `@lru_cache` will not raise `TypeError` for the `model_store` argument.
- The cache key will distinguish different `ModelStore` instances, even if they point at the same underlying directory.

So as the code stands, using `@lru_cache` on `get_strategy` with `model_store: ModelStore` is technically fine.

If you want to be a bit more future-proof / explicit, you have a couple of options, but none are strictly required:

1. Keep as-is, but rely on the current identity-based hashing of `ModelStore`.

2. Or: change the cache key to something more stable, e.g. base path, and avoid using the `ModelStore` object itself:

   ```python
   @lru_cache
   def _get_strategy_cached(
       engine: str,
       engine_name: SUPPORTED_ENGINES,
       model_store_base: str,
       kind: Literal["image", "artifact"],
   ) -> BaseArtifactStrategy | BaseImageStrategy:
       cls_generator = get_engine_image_strategy if kind == "image" else get_engine_artifact_strategy
       cls = cls_generator(engine, engine_name)
       # model_store is passed in by the caller, not cached by value
       raise RuntimeError("This helper should only be called through a wrapper that supplies model_store")
   ```

   and then wrap it in something that passes the real `ModelStore` instance. But that's only worth doing if you care about decoupling the cache key from the concrete `ModelStore` object.

Bottom line: there's no immediate bug from `lru_cache` + `ModelStore` in the current code; my original warning about `TypeError: unhashable type` doesn't apply here.
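The identity-hashing behavior this reply relies on can be demonstrated standalone. Here `ModelStore` is a bare stand-in for the real class (same relevant property: no `__eq__`/`__hash__` defined), and `get_strategy` is a toy cached function, not RamaLama's:

```python
from functools import lru_cache

class ModelStore:
    """Stand-in: inherits object.__hash__, so instances hash by identity."""
    def __init__(self, base_path: str) -> None:
        self.base_path = base_path

@lru_cache
def get_strategy(store: ModelStore, kind: str) -> tuple[int, str]:
    # id(store) stands in for "expensive strategy construction"
    return (id(store), kind)

store_a = ModelStore("/models")
store_b = ModelStore("/models")   # same path, different instance

# No TypeError: the instance is hashable by identity...
assert get_strategy(store_a, "image") == get_strategy(store_a, "image")
# ...but two stores pointing at the same directory get separate cache entries.
assert get_strategy(store_a, "image") != get_strategy(store_b, "image")
```

Which is the trade-off named above: correct and exception-free, but the cache key is the object, not the store's location.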
```python
from subprocess import STDOUT, CalledProcessError

import pytest
```
issue (testing): Artifact e2e tests no longer skip Windows but still assume POSIX-style `file://` URIs, which will likely break on Windows.

These tests used to be safe on Windows by combining `path_to_uri` with `@skip_if_windows`. Now we construct `file://` URIs inline and have removed `skip_if_windows` from some artifact e2e tests. On Windows, that URI format is invalid and will cause consistent test failures. Please either reintroduce a helper that produces valid `file://` URIs on Windows (e.g., via `path.as_uri()`) or retain `skip_if_windows` on tests that build these URIs manually.
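For reference, `pathlib`'s `as_uri()` is the portable way to build such URIs; a sketch of the suggested helper (`path_to_uri` here is the hypothetical name, mirroring the helper the old tests used):

```python
from pathlib import Path, PureWindowsPath

def path_to_uri(path: Path) -> str:
    """Build a file:// URI that is valid on any OS (assumed helper shape)."""
    return path.resolve().as_uri()

# On POSIX this yields e.g. file:///tmp/model.gguf; on Windows, as_uri()
# handles drive letters, which naive f"file://{path}" formatting does not:
assert PureWindowsPath(r"C:\models\model.gguf").as_uri() == "file:///C:/models/model.gguf"
```

So either routing all URI construction through such a helper, or keeping `skip_if_windows`, addresses the comment.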
```shell
is "$output" ".*large-artifact.*latest" "large artifact was created"

# Verify size is reasonable
size=$(run_ramalama list --json | jq -r '.[0].size')
```
suggestion (testing): JSON list tests assume the artifact is at index 0, which can make them order-dependent and flaky.

In the new "large file handling" (and "size reporting accuracy") test, size is read with `jq -r '.[0].size'`. If other artifacts are present, index 0 may not be the newly created one, making the test order-dependent and flaky.

Instead, filter by `name`, e.g.:

```shell
size=$(run_ramalama list --json | jq -r '.[] | select(.name=="large-artifact:latest") | .size')
```

and similarly for `size-test-artifact`, so the assertion is stable regardless of list ordering or additional entries.

Suggested implementation:

```shell
# Verify size is reasonable
size=$(run_ramalama list --json | jq -r '.[] | select(.name=="large-artifact:latest") | .size')
assert [ "$size" -gt 1000000 ] "artifact size is at least 1MB"
```

There is another JSON size-based assertion for `size-test-artifact` (in the "size reporting accuracy" test or equivalent). To make that test order-independent as well, update its `jq` call from using `.[0].size` (or any other index-based access) to:

```shell
size=$(run_ramalama list --json | jq -r '.[] | select(.name=="size-test-artifact:latest") | .size')
```

Make sure the `select(.name==...)` string exactly matches how the artifact is named in the test (`size-test-artifact:latest` or the appropriate tag).
```python
class TestOCIModelSetupMountsDocker:
    """Test OCI model setup_mounts for Docker"""

    @pytest.fixture
    def mock_docker_engine(self):
        """Create a mock Docker engine for testing"""
        engine = Mock()
        engine.use_podman = False
        engine.use_docker = True
        engine.add = Mock()
```
suggestion (testing): Docker OCI mount tests only cover the image case; artifact mounting behavior is not exercised.

Given the new strategy-based OCI handling and Docker artifact support, please add a complementary test that exercises an OCI artifact under Docker (e.g., via `force_oci_artifact`). It should assert that `setup_mounts` invokes `mount_cmd` with the expected bind-mount arguments for snapshot-based artifacts, so both image and artifact flows are covered and future regressions in strategy selection or mount construction are caught.

Suggested implementation:

```python
class TestOCIModelSetupMountsDocker:
    """Test OCI model setup_mounts for Docker"""

    @pytest.fixture
    def mock_docker_engine(self):
        """Create a mock Docker engine for testing"""
        engine = Mock()
        engine.use_podman = False
        engine.use_docker = True
        engine.add = Mock()
        return engine

    def test_setup_mounts_oci_artifact_uses_bind_mount_for_snapshot(
        self,
        mock_docker_engine,
        oci_model_docker,
        monkeypatch,
    ):
        """
        When forcing OCI artifacts under Docker, setup_mounts should invoke
        mount_cmd with a bind mount targeting the snapshot-based artifact,
        not an image mount.
        """
        # Ensure the Docker engine is used for this model
        oci_model_docker.engine = mock_docker_engine

        # Spy on mount_cmd so we can assert how it is invoked
        mount_cmd_spy = monkeypatch.spy(oci_model_docker, "mount_cmd")

        # Force the OCI artifact path to be used instead of an image
        args = SimpleNamespace(force_oci_artifact=True)
        oci_model_docker.setup_mounts(args)

        # mount_cmd should have been called exactly once with the bind-mount args
        mount_cmd_spy.assert_called_once()
        (mount_args,) = mount_cmd_spy.call_args.args

        # Expect a bind mount (snapshot-based artifact), not an image mount
        # and ensure it targets the model mount directory.
        assert any(
            mount.startswith("--mount=type=bind")
            and f",destination={MNT_DIR}" in mount
            for mount in mount_args
        ), f"Expected a bind mount to {MNT_DIR} for OCI artifact, got: {mount_args}"
```

To make this test work with the rest of the file, you will also need to:

1. Ensure `SimpleNamespace` is imported at the top of `test_transport_base.py`:

   ```python
   from types import SimpleNamespace
   ```

2. Provide (or adapt) an `oci_model_docker` fixture used in the new test so that:
   - It has an `engine` attribute (settable to `mock_docker_engine`),
   - It implements `setup_mounts(args)`,
   - It implements `mount_cmd(mount_args)` which ultimately calls `engine.add(mount_args)`.

   If your existing Podman tests already use a fixture like `oci_model_podman` with this interface, you can mirror that fixture for Docker or generalize it and reuse it here.

3. If your actual mount command format differs (e.g., destination subpath is `f"{MNT_DIR}/models"` or similar), adjust the `destination` assertion in the test accordingly so it matches the real bind-mount arguments used for snapshot-based artifacts.
Code Review
This is a substantial and well-executed refactoring of the OCI transport layer. The introduction of the strategy pattern to handle different OCI object types (images vs. artifacts) and container engines (Podman vs. Docker) is a significant architectural improvement. This makes the code more modular, maintainable, and extensible. The addition of a fallback mechanism for artifact pulling via HTTP also enhances robustness. The code is well-organized into new logical modules, and the test suite has been updated comprehensively to cover the new functionality. I've made a few minor suggestions to improve exception handling by using more specific exception types instead of broad `except Exception:` clauses.
```python
except Exception:
    continue
```
Catching a broad `Exception` can hide unexpected errors. It's better to be more specific about the exceptions you expect to handle. In this case, `run_cmd` can raise `subprocess.CalledProcessError` and `.decode()` can raise `UnicodeDecodeError`. Catching these specific exceptions makes the code more robust and easier to debug.
```diff
-            except Exception:
-                continue
+            except (subprocess.CalledProcessError, UnicodeDecodeError):
+                continue
```
```python
except Exception:
    raise KeyError(
        "You must specify a registry for the model in the form "
        f"'oci://registry.acme.org/ns/repo:tag', got instead: {self.model}"
    )
```
Catching a broad `Exception` can hide unexpected errors. The unpack operation `registry, reference = model.split("/", 1)` will raise a `ValueError` if the split does not produce enough values to unpack. It's better to catch the more specific `ValueError`.
```diff
-        except Exception:
+        except ValueError:
             raise KeyError(
                 "You must specify a registry for the model in the form "
                 f"'oci://registry.acme.org/ns/repo:tag', got instead: {self.model}"
             )
```
```python
except Exception:
    return False
```
Catching a broad `Exception` can hide unexpected errors. It's better to catch `subprocess.CalledProcessError`, which is what `run_cmd` is expected to raise on failure. This makes the code more robust. This pattern of catching `Exception` appears in other methods in this file as well (e.g., `remove`, `HttpArtifactStrategy.exists`, etc.) and should be addressed there too. You'll also need to add `import subprocess` at the top of the file.
```diff
-        except Exception:
+        except subprocess.CalledProcessError:
             return False
```
```python
    return manifest
except Exception:
```
Catching a broad `Exception` can hide unexpected errors. The `client.get_manifest()` method can raise exceptions like `urllib.error.HTTPError` or `json.JSONDecodeError`. It's better to catch these specific exceptions to make the code more robust. You'll need to import `urllib.error` and `json`.
```diff
             return manifest
-        except Exception:
+        except (urllib.error.HTTPError, json.JSONDecodeError):
             return None
```
Squash commits.
olliewalsh left a comment
doesn't resolve the rebase issues
```diff
         message = (
             f"Could not authenticate with {self.provider}."
-            "The provided API key was either missing or invalid.\n"
+            " The provided API key was either missing or invalid.\n"
```
that's an issue but not related to this feature so I would remove and address in a follow-up
Come on man.
Ditto. If this was a trivial PR then sure. However it is a huge PR. Every unrelated change just reduces the signal to noise ratio, requiring far more effort to identify and review the relevant changes.
Also more likely to cause conflicts with other PRs
This PR wasn't as large when it was opened in... November? Dan opened his PR with podman in particular after I opened this but we agreed to push his through and rebase this into it which forced the scope to expand.
I just don't think you have any appreciation for how much time I've spent rebasing this thing to keep up with main over the past four months with zero eyes.
> Also more likely to cause conflicts with other PRs

This isn't a credible concern applied to correcting an errant whitespace.
> Every unrelated change...
> This PR wasn't as large when it was opened in... November? Dan opened his PR with podman in particular after I opened this but we agreed to push his through and rebase this into it which forced the scope to expand.
> I just don't think you have any appreciation for how much time I've spent rebasing this thing to keep up with main over the past four months with zero eyes.
Not sure why that frustration is directed my way when the feature was mostly a collaboration between you and @rhatdan.
Resolving merge conflicts when rebasing a branch is never fun. It just can't be avoided sometimes, esp with large PRs or multiple PRs that depend on each other.
Cleaning up a bad rebase really is not fun. It is not how I wanted to spend my Friday afternoon. I've pretty much dropped everything to get this PR merged again ASAP before something else conflicts with it.
> Not sure why that frustration is directed my way {...} I've pretty much dropped everything to get this PR merged again ASAP before something else conflicts with it.
Because this thread is almost entirely nits. This thread is about an entirely cosmetic whitespace change that, were it a problem, could have been flagged during review. This is not the appropriate context for long drawn out back and forths about these sorts of stylistic preferences.
👍 yeah, this change is nothing - just was the first of many so the thread spawned from here. Most of the unrelated changes that made this PR quite difficult to review have been addressed now in any case
```python
@lru_cache
def get_strategy(
```
@olliewalsh I don't know what is going on with GitHub's rendering of the diff but there are quite a few of your comments which are responsive to changes not actually in the current PR. Are you looking at an intentionally old commit or something? For example, I see you commented on the info parser here but the actual diff doesn't contain any of those changes. You can verify those changes aren't in my source repo here as of the most recent push 19 hours ago.
olliewalsh left a comment
still seeing rebase issues though
```diff
 from ramalama.transports.huggingface import Huggingface
 from ramalama.transports.modelscope import ModelScope
-from ramalama.transports.oci import OCI
+from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
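For anyone applying this nit: the usual way to keep the old import path working after moving a class into a submodule is a re-export in the package `__init__`. A self-contained demonstration using a throwaway package on disk (the `oci`/`oci.py` layout mirrors the structure discussed here; the paths are temporary, not RamaLama's tree):

```python
import sys
import tempfile
from pathlib import Path

# Build pkg layout:  oci/__init__.py re-exports OCI from oci/oci.py
root = Path(tempfile.mkdtemp())
(root / "oci").mkdir()
(root / "oci" / "oci.py").write_text("class OCI:\n    pass\n")
(root / "oci" / "__init__.py").write_text('from .oci import OCI\n\n__all__ = ["OCI"]\n')

sys.path.insert(0, str(root))
from oci import OCI               # old-style import keeps working via the re-export
from oci.oci import OCI as OCI2   # new location also works

assert OCI is OCI2                # both names bind the same class object
```

With that shim in `ramalama/transports/oci/__init__.py`, none of the call sites below would need to change.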
```python
if TYPE_CHECKING:
    from ramalama.chat import ChatOperationalArgs
    from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
```diff
 from ramalama.common import MNT_DIR, run_cmd
-from ramalama.transports.oci import OCI
+from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
```diff
 from ramalama.config import get_config
 from ramalama.transports.base import Transport, compute_ports, compute_serving_port
-from ramalama.transports.oci import OCI
+from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
```diff
     def test_rlcr_inherits_from_oci(self, rlcr_model):
         """Test that RLCR properly inherits from OCI"""
-        from ramalama.transports.oci import OCI
+        from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
```diff
 from ramalama.model_store.store import ModelStore
 from ramalama.transports.huggingface import Huggingface
-from ramalama.transports.oci import OCI
+from ramalama.transports.oci.oci import OCI
```
nit: `from ramalama.transports.oci import OCI` should still work
Yeah, that's really bizarre - GitHub has some serious bugs I guess. All of the comments were added to the same diff.
```diff
-class OpenAICompletionsProviderTests:
+class TestOpenAICompletionsProvider:
```
nit: not related to this PR
https://www.githubstatus.com/incidents/kv1lzpgzr9yp looks like it could be related to caching...
I've restored the test changes you've identified to upstream/main.
ramalama/cli.py (outdated)
```diff
     )
     parser.add_argument("--json", dest="json", action="store_true", help="display AI Model information in JSON format")
-    parser.add_argument("MODEL", nargs="?", completer=local_models)  # positional argument
+    parser.add_argument("MODEL", completer=local_models)  # positional argument
```
This was part of the original PR. `MODEL` is actually a required field although in main enforcement is deferred to `inspect_cli`. It's a legitimate bug fix but can be moved to a future PR.
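The behavioral difference can be sketched with plain `argparse` (the `prog` name is hypothetical): with `nargs="?"` a missing `MODEL` silently becomes `None` and enforcement has to happen later, whereas a plain positional fails fast at parse time.

```python
import argparse

required = argparse.ArgumentParser(prog="ramalama-inspect")
required.add_argument("MODEL")            # required: parse_args errors out when missing

args = required.parse_args(["tiny:latest"])

try:
    required.parse_args([])               # prints usage to stderr and exits
    exit_code = None
except SystemExit as e:
    exit_code = e.code                    # argparse uses exit code 2 for usage errors

optional = argparse.ArgumentParser(prog="ramalama-inspect")
optional.add_argument("MODEL", nargs="?", default=None)
opt_model = optional.parse_args([]).MODEL  # None: enforcement deferred to the caller
```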
```python
    """Return a configured API key for the given provider scheme, if any."""

    if resolver := PROVIDER_API_KEY_RESOLVERS.get(scheme):
        return resolver()
```
Every unrelated change...
e.g this doesn't look related to artifact pulling, but it's not obvious whether it is an intentional change or a rebase issue
Just better edge case handling, no grand conspiracy here.
test/e2e/test_serve.py (outdated)
```diff
     assert exc_info.value.returncode == 22
-    assert "quay.io/ramalama/rag" in exc_info.value.output.decode("utf-8")
-    assert "does not exist" in exc_info.value.output.decode("utf-8")
+    assert re.search(r"Error: quay.io/ramalama/rag does not exist.*", exc_info.value.output.decode("utf-8"))
```
this line was relevant AFAICT
@olliewalsh I know both of us are frustrated right now for a variety of reasons but I do appreciate your time reviewing and wanted to say thank you.
Much appreciated and thanks for being very responsive on getting this sorted!
@rhatdan did you still want the commits squashed?
… podman

Signed-off-by: Ian Eaves <ian.k.eaves@gmail.com>
Force-pushed from 743a9e2 to d1d19d5.
No problem.
Summary by Sourcery
Refactor OCI transport and artifact handling to use pluggable strategies with proper OCI/CNCF model manifest support, improve engine/version detection and SemVer handling, and extend CLI and tests to better support artifacts, RLCR pulling, and container interoperability.