Experimental command line interface UX #6135
raydouglass merged 35 commits into rapidsai:branch-24.12 from
Conversation
viclafargue
left a comment
Good job! Just have a few comments. I focused my review on the estimator proxy and the interception function.
```python
setattr(self, func_name, func)

@classmethod
def _hyperparam_translator(cls, **kwargs):
```
I think that the function could be simplified. Also, an edge case: what happens when the result of the translation is supposed to be an "accept" or "dispatch" string?
I also think we could improve this a bit.
Before looking at how it was done in this PR my expectation was to find something like this:
```python
class Base:
    def _translate_hyperparameters(self, foo="some value", **hyper_parameters):
        if foo != "some value":
            hyper_parameters["foo"] = "some value"
        return hyper_parameters


class A(Base):
    def _translate_hyperparameters(self, bar=True, **hyper_parameters):
        hyper_parameters = super()._translate_hyperparameters(**hyper_parameters)
        if not bar:
            raise NotImplementedError(f"Can't dispatch with '{bar=}', has to be True.")
        return hyper_parameters
```

So that we can accumulate translations from Base upwards, and save on having the extra layer of the translation dictionary.
(I need to think a bit more/try out the code, so bugs ahead but right now I'm channeling my best inner Raymond Hettinger: "there must be a better way" :D)
@betatim the reason to avoid a method per class like that is to avoid a ton of extra code per algorithm. The dictionary is actually based on scikit-learn's parameter constraints, just simplified for now.
@viclafargue not sure I understand the edge case? what happens when the result of the translation is supposed to be an "accept" or "dispatch" string?
Let's say there's a hyperparameter called nan_values_handling. It can take the value "allow" in cuML, but its equivalent in sklearn is "accept". Is there a way to write a dictionary that does the translation?
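For concreteness, a hypothetical entry for this edge case (the parameter name comes from the comment above; everything else is an assumption, not the PR's actual table): because the sklearn-side value is the dictionary *key* and the cuML-side value is the *value*, mapping sklearn's "accept" to cuML's "allow" is unambiguous — a collision only matters when "accept"/"dispatch" appear as values.

```python
# Hypothetical translator entry: sklearn value (key) -> cuML value (value).
_hyperparam_interop_translator = {
    "nan_values_handling": {
        "accept": "allow",  # sklearn's "accept" becomes cuML's "allow"
    },
}


def translate_value(param, value):
    # Lookup falls back to the original value when no translation is listed.
    return _hyperparam_interop_translator.get(param, {}).get(value, value)
```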
Can we just use an Enum for accept and dispatch? To me it feels brittle if there's some estimator in the future that uses "accept" string as a valid hyperparam value.
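A minimal sketch of the Enum idea (names here are assumptions, not the PR's API): sentinel enum members can never collide with a legitimate string value such as "accept", because membership is checked by identity rather than string comparison.

```python
from enum import Enum, auto


class HyperparamAction(Enum):
    ACCEPT = auto()    # keep the value as-is
    DISPATCH = auto()  # fall back to the CPU implementation


# A literal "accept" string in a translation table is now unambiguous:
table = {"nan_handling": {"allow": "accept"}}  # a real string value
sentinels = {"solver": {"weird": HyperparamAction.DISPATCH}}


def is_dispatch(entry):
    # identity check against the sentinel, never a string comparison
    return entry is HyperparamAction.DISPATCH
```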
I'm not sure we have to worry about the case where the value is "accept" or "dispatch". The way I understand a dictionary like

```python
_hyperparam_interop_translator = {
    "solver": {
        "lbfgs": "qn",
        "liblinear": "qn",
        "newton-cg": "qn",
        "newton-cholesky": "qn",
        "sag": "qn",
        "saga": "qn",
    },
}
```

is that it lists those hyper-parameters that need translating, and for those it lists the values and their translations. For example solver="lbfgs" needs translating to solver="qn", but say foobar=42 won't need translating because foobar isn't listed. Similarly solver="dazzle" doesn't need translating because it isn't listed. This makes me think we don't really need an entry with a value of "accept"; we could just have no entry ("accept" is the default assumption).
That is the "accept" magic value dealt with. Then there is the case of "dispatch", which is used for parameter values that can't be translated. It means "use scikit-learn". I think we should replace it with something like NotImplemented, or some other exception or similar. This would deal with the case where a cuml value for a parameter should be "dispatch" — sorry, where we aren't able to tell if we should use scikit-learn or use cuml with this value. Hence let's use something like NotImplemented.
The parameter values used in scikit-learn don't matter because they are keys in the dictionaries.
Having slept on my comment I've changed my mind. I still think having a method on each class that modifies the parameters as it sees fit could be nice. Not sure if it will create more typing/work than using a dict ¯\_(ツ)_/¯.
I also pondered the `_hyperparam_translator` function and I think this is how I'd write it. It doesn't use my suggestion from above (NotImplemented) but it could. The main changes: it isn't a classmethod anymore (I couldn't work out why it was one); it merges the base class's translations with those of the derived class (I assume those are the only two interesting ones, we don't need to merge them together from the whole inheritance tree); and I removed the if cases that were for "accept" (I think we don't do anything in those cases other than 👍).
```python
def _hyperparam_translator(self, **kwargs):
    """
    This method is meant to do checks and translations of hyperparameters
    at estimator creation time.

    Each child estimator can override the method, returning modified
    **kwargs with equivalent options.
    """
    gpuaccel = True
    # Copy it so we can modify it
    translations = dict(super()._hyperparam_interop_translator)
    # Allow the derived class to overwrite the base class
    translations.update(self._hyperparam_interop_translator)
    for parameter_name, value in kwargs.items():
        # maybe clean up using: translations.get(parameter_name, {}).get(value, None)?
        if parameter_name in translations:
            if value in translations[parameter_name]:
                if translations[parameter_name][value] == "dispatch":
                    gpuaccel = False
                else:
                    kwargs[parameter_name] = translations[parameter_name][value]
    return kwargs, gpuaccel
```

One more thought on "dispatch": if we don't replace it with NotImplemented or similar, can we use use_cpu or something? I can't keep it straight in my head what "dispatch" means in the various libraries (most mean "use a GPU" when they talk about dispatching; in cuml it means "use a CPU"). Maybe something explicit like "use_cpu" makes it easier to reason about what is happening (though my preference is still NotImplemented or similar).
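A hedged sketch combining the inline `.get()` cleanup with the NotImplemented suggestion (the function name and table are assumptions, not the PR's code): missing parameters or values fall through unchanged ("accept" is the default assumption), and NotImplemented marks values that force the CPU path.

```python
def translate_hyperparams(translations, **kwargs):
    """Return (possibly modified) kwargs plus a GPU-acceleration flag."""
    gpuaccel = True
    for name, value in kwargs.items():
        # Fall back to the original value when nothing is listed.
        translated = translations.get(name, {}).get(value, value)
        if translated is NotImplemented:
            gpuaccel = False  # can't run this value on the GPU
        else:
            kwargs[name] = translated
    return kwargs, gpuaccel


# Hypothetical table: translate lbfgs -> qn, liblinear forces CPU fallback.
translations = {"solver": {"lbfgs": "qn", "liblinear": NotImplemented}}
```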
Let's chat about this offline since it's getting a bit long-winded. If my memory serves correctly, I got the idea of using the dict from you @betatim 😄
wphicks
left a comment
Finished first half of the review but would like to continue going over the rest more carefully from here.
```python
# limitations under the License.
#

from __future__ import annotations
```
Let's remove any unused code from this file.
Agree, though amusingly enough, this is not one of them. It is not explicitly used, but implicitly: it allows the forward reference of `_instance` in

```python
class ModuleAcceleratorBase(
    importlib.abc.MetaPathFinder, importlib.abc.Loader
):
    _instance: ModuleAcceleratorBase | None = None
```

```python
frame = sys._getframe()
# We cannot possibly be at the top level.
assert frame.f_back
calling_module = pathlib.PurePath(frame.f_back.f_code.co_filename)
```
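As a standalone illustration of the forward-reference point (a toy class, not the real one): `from __future__ import annotations` makes every annotation be stored as a string instead of being evaluated, which is equivalent to writing the string form explicitly, as below.

```python
class Accelerator:
    # With `from __future__ import annotations` in effect, the annotation
    # `Accelerator | None` behaves like this explicit string: it is stored,
    # not evaluated, so the self-reference is safe even though `Accelerator`
    # is not bound yet while the class body executes.
    _instance: "Accelerator | None" = None
```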
Have we profiled to understand the performance implications of this mechanism? Inspecting the frame seems like something we should do only if we really have to. In the context of cuML, we're already tracking whether or not we're internal to the cuML API, so do we need to use this?
tl;dr: the time spent doing this is negligible.
I just profiled our import times, which was very illuminating after a long time of not doing it. Here is the current status quo, which surprised me a bit:
- Time to import regular cuML: 11.21s
- Time to import cuml.experimental.accel (includes importing cuML): 12.39s

Now, if we look further, here are the main components that take time to import:
- UMAP: 6.79s! of which:
  - pynndescent: 3.53s
- cuML (without UMAP): 4.72s, of which:
  - cuDF: 1.9s
  - CUDA context initialization: ~1s (estimated)
  - sklearn: 0.9s
  - Dask: 0.8s
- Rest: 0.45s, of which:
  - importing IPython: 0.44s
  - time spent inspecting the frame (i.e. the actual question here): 0.003s
Gotta say, I was a bit surprised that the UMAP package is responsible for more than half our import time!
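For reference, a breakdown like the one above can be reproduced with the interpreter's built-in import profiler (the module imported here is just a stand-in example):

```shell
# -X importtime prints a per-module import cost table to stderr
python -X importtime -c "import json" 2> import_times.log
# sort by cumulative time (second |-separated field) to find the heaviest imports
sort -t'|' -k2 -rn import_times.log | head
```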
```python
    f".._wrappers.{mode.slow_lib}", __name__
)
try:
    (self,) = (
```
Why the tuple unpacking here?
Because of the generator expression

```python
(
    p
    for p in sys.meta_path
    if isinstance(p, cls)
    and p.slow_lib == mode.slow_lib
    and p.fast_lib == mode.fast_lib
)
```

which finds the module accelerator if it already exists, but yields it inside an iterable; the single-element unpacking pulls it out, and we return self at the end of the function.
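The pattern can be seen in isolation (toy data, not the real `sys.meta_path`): single-element unpacking both extracts the unique match and enforces uniqueness, raising ValueError if zero or multiple elements match.

```python
finders = ["numpy_accel", "sklearn_accel", "other"]

# Exactly one element must match, or the unpacking raises ValueError.
(match,) = (f for f in finders if f.startswith("sklearn"))
```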
```python
    return self


def disable_module_accelerator() -> contextlib.ExitStack:
```
As a follow-on, we should be able to make this much faster using our global settings object. Not critical for the initial merge though.
```python
)


def pytest_load_initial_conftests(early_config, parser, args):
```
So the no-code change magic kicks in when running the pytest suite? Very cool.
By the way, does it affect the other pytests that are outside cuml.experimental.accel? Many of our existing tests assert that cuML algorithm matches output of sklearn's counterpart.
It only affects pytests that are run with the `-p` flag: `pytest -p cuml.experimental.accel ...`
I ran the following small snippet to see things in action, but I'm now puzzled about whether or not cuml was used. Is there an easy way to tell (assume I'm a simple-minded user who isn't going to dig into the cuml codebase)?

```python
import cuml.experimental.accel
cuml.experimental.accel.install()

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, y = make_blobs()
km = KMeans()
km.fit(X, y)
print(f"{km.cluster_centers_=}")
print(km.score(X, y))
```

This outputs the following:

*(output omitted)*

I was expecting to see either a log message saying "This was run on the GPU!" (or something similarly positive and simple), or as an alternative something like what I proposed in scikit-image where we issue a … The second thing I thought might tell me if it was dispatched was inspecting a fitted attribute, though I guess cuml array works hard to make that hard :-/
```python
    original_class_a  # Store a reference to the original class
)
# print("HYPPPPP")
kwargs, self._gpuaccel = self._hyperparam_translator(**kwargs)
```
We need to handle things like `KMeans(8)` as well. So not just translating keyword arguments but also the (few?) occasions where positional arguments are allowed for the scikit-learn estimator.
Right now `kwargs = {'args': 8}` when you instantiate `KMeans(8)` like this. That seems definitely not what we want :D
I think we can use the result of `inspect.signature(self._cpu_model_class).bind(*args, **kwargs).arguments` to get a dictionary that contains everything the user passed.
To get all arguments, including defaults that the user didn't pass:

```python
bound = inspect.signature(self._cpu_model_class).bind(*args, **kwargs)
bound.apply_defaults()
print(bound.arguments)
```

I think this is what we need to pass to the translator.
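A quick check of this with a stand-in class (`FakeKMeans` is hypothetical, mimicking an estimator whose first parameter is `n_clusters`): binding the positional call maps the bare `8`-style argument to its parameter name, and `apply_defaults()` fills in everything the user didn't pass.

```python
import inspect


class FakeKMeans:
    # Hypothetical stand-in for an estimator's __init__ signature.
    def __init__(self, n_clusters=8, init="k-means++", max_iter=300):
        self.n_clusters = n_clusters


# Bind the positional call FakeKMeans(5): 5 is mapped to "n_clusters".
bound = inspect.signature(FakeKMeans).bind(5)
bound.apply_defaults()  # fill in defaults the user didn't pass
```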
I had never seen this `KMeans(8)`; what is the expected behavior here? I thought positional arguments, except for X and y, were not allowed?
I also thought that positional arguments are no longer supported, but `KMeans(8) == KMeans(n_clusters=8)` :-/ I've not looked through the docs to find out which estimators still allow positional arguments; I was thinking it would be simpler to "make it work" than to find out how big a problem it is (given there is at least one case).
In general I think we can fix/change most things here after people start trying it. These are things I'd fix before:
> **Serialization/Pickling of ProxyEstimators**
>
> Since pickle has strict rules about serializing classes, we cannot
> (reasonably) create a method that just pickles and unpickles a ProxyEstimat

Suggested change:

```diff
- (reasonably) create a method that just pickles and unpickles a ProxyEstimat
+ (reasonably) create a method that just pickles and unpickles a ProxyEstimator
```
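A generic sketch of the constraint being described (this is not the PR's implementation; the names are made up): pickle reconstructs objects by importable reference, so a wrapper can delegate through `__reduce__` to a module-level helper instead of relying on pickling the proxy class directly.

```python
import pickle


def _reconstruct_proxy(n_clusters):
    # Module-level helper so pickle can find it by name on load.
    return ProxyEstimator(n_clusters)


class ProxyEstimator:
    # Hypothetical proxy; a real one would wrap a CPU/GPU estimator pair.
    def __init__(self, n_clusters=8):
        self.n_clusters = n_clusters

    def __reduce__(self):
        # Tell pickle to rebuild via the helper, not via the class itself.
        return (_reconstruct_proxy, (self.n_clusters,))


clone = pickle.loads(pickle.dumps(ProxyEstimator(3)))
```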
For me it's fine to merge. We can always keep working on things.
wphicks
left a comment
I think this is ready for merge as an experimental feature. There are some things which remain to be tweaked/discussed, but the basic functionality works well and is very useful. Let's merge!
/merge
PR adds a first version of a command line user experience that covers the following estimators: