Release 0.4.3 #332
Merged
Conversation
We don't have a standard way of specifying types in MDF yet, but the current code is definitely not good since the type is always just Tensor. It should now be aligned with PyTorch and NumPy.
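For context, a minimal sketch of what an aligned convention could look like (the mapping and helper below are hypothetical, not the exporter's actual API): numpy-style dtype names serve as the shared vocabulary between torch and numpy.

```python
import numpy as np
import torch

# Hypothetical mapping: express a value's type as a numpy-style dtype name so
# it lines up with both PyTorch and NumPy, instead of the catch-all "Tensor".
TORCH_TO_MDF_TYPE = {
    torch.float32: "float32",
    torch.float64: "float64",
    torch.int64: "int64",
}

def mdf_type_for(tensor: torch.Tensor) -> str:
    return TORCH_TO_MDF_TYPE.get(tensor.dtype, "float32")

# The same name round-trips through numpy:
assert mdf_type_for(torch.zeros(3)) == np.zeros(3, dtype=np.float32).dtype.name
```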
- Properly handle ONNX operations that return multiple outputs. Expressions for the value of an output port can now index into the tuple, for example OutputPort(id='_13', value="onnx_BatchNormalization_1[0]"). This was needed for ResNet. It also fixed an existing hack for MaxPool; there might be more of these lying around, need to take a better look. (See the sketch after this list.)
- In order to get the above working, I changed "onnx::" to "onnx_" in function/parameter ids generated by the exporter. This lets eval work, because "::" can't be parsed in Python.
- Moved the ModECI ONNX opset version from 13 to 15.
- Fixed an issue where an improper schema (wrong opset) was being looked up for ops in the exporter, now that ONNX has moved to 15.
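A minimal sketch of why the rename matters for eval (the tuple contents here are made up for illustration):

```python
# "onnx::BatchNormalization_1" is not a valid Python identifier, so eval
# can't parse it; after the exporter's rename it is.
onnx_id = "onnx::BatchNormalization_1".replace("::", "_")

# Pretend the op returned three outputs; the output port's value expression
# "onnx_BatchNormalization_1[0]" indexes into that tuple.
namespace = {onnx_id: ("Y", "running_mean", "running_var")}
assert eval("onnx_BatchNormalization_1[0]", {}, namespace) == "Y"
```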
I moved it to a better place and made it general for all Reshape ops, not just ops with Reshape_1 ids.
This looks like a hack that was needed for multiple outputs from the Clip op. I think this is handled generally now.
I am pretty sure this is related to not handling precision consistently in the execution engine. We are converting back and forth between float32 and float64 in different places. The results were matching only intermittently without a tolerance, so I have set an absolute tolerance for now. Need to investigate this further.
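To illustrate the kind of drift this causes (the values and tolerance below are illustrative, not taken from the tests):

```python
import numpy as np

# The same arithmetic done in float32 vs. routed through float64 no longer
# compares exactly equal, but passes with a small absolute tolerance.
via_float64 = np.float64(np.float32(0.1)) * np.float64(3.0)
in_float32 = np.float32(0.1) * np.float32(3.0)
print(via_float64 == in_float32)                       # False: precision drift
print(np.isclose(via_float64, in_float32, atol=1e-5))  # True with a tolerance
```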
These fixtures are only used by the PyTorch-to-MDF conversion tests. Let's keep them separate.
… been installed. Currently, we are not testing whether core MDF works without PyTorch or PsyNeuLink, because these are installed before the core package is installed (PsyNeuLink depends on PyTorch). I have moved this install to the end, after installation of all backends (including PNL) from PIP. This is better for testing that a clean install works.
… into examples/pytorch-to-mdf
Conflicts: tests/conftest.py
This should support both ways, depending on the version of torchvision.
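One way to support both APIs, sketched under the assumption that this refers to the usual torchvision split between the newer `weights=` keyword and the older `pretrained=` one:

```python
import torchvision.models as models

# Newer torchvision versions take `weights=`; older ones take `pretrained=`.
try:
    model = models.resnet18(weights=None)
except TypeError:
    model = models.resnet18(pretrained=False)
```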
@parikshit14, I changed some of your code to remove a lot of duplication. test_import.py under pytorch now enumerates all models in torchvision and creates a test for each different type of model. All the models have the same interface, so we can consolidate the testing code into a parameterized pytest. We still have 5 out of 20 tests failing; let's see if we can track down why each of these models fails to run in the execution engine.
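Roughly, the consolidated test looks like the following sketch (the model list, input shape, and test name are illustrative, not the exact contents of test_import.py):

```python
import pytest
import torch
import torchvision.models as models

MODEL_NAMES = ["resnet18", "vgg16", "mobilenet_v2"]  # illustrative subset

@pytest.mark.parametrize("model_name", MODEL_NAMES)
def test_torchvision_model(model_name):
    # Every torchvision classification model has the same call interface,
    # so one parameterized test body covers them all.
    model = getattr(models, model_name)(weights=None)
    model.eval()
    x = torch.rand((1, 3, 224, 224))
    with torch.no_grad():
        out = model(x)
    assert out.shape[0] == 1
```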
Specifically, h5py, which isn't being used and is causing an unneeded dependency.
math, numpy, and python builtins are all supported by expression evaluation
currently only variable dependencies in args are checked
execution_engine: check dependencies inside function value
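A hedged sketch of what "math, numpy, and python builtins are all supported" could amount to (the function name and namespace layout are assumptions, not the execution engine's actual code): build one eval namespace exposing all three.

```python
import math
import numpy as np

def evaluate_expression(expr: str, variables: dict):
    namespace = {"math": math, "numpy": np, "np": np}
    namespace.update(vars(math))   # allow bare sin, exp, pi, ...
    namespace.update(variables)
    # Builtins (abs, min, max, ...) remain available through eval's globals.
    return eval(expr, {}, namespace)

print(evaluate_expression("abs(math.sin(x)) + np.exp(0)", {"x": -math.pi / 2}))  # 2.0
```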
…ples/pytorch-to-mdf
- We can't have '-' characters in port names in MDF, and some PyTorch models have these. (See the sketch after this list.)
- Also fixed an issue where models were being tested with all-zeros inputs, which made some models pass when in reality they were not computing the correct values. More bugs to track down ...
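Sketches of both fixes (the helper name is hypothetical):

```python
import numpy as np

def sanitize_port_name(name: str) -> str:
    # MDF port ids can't contain '-', which some PyTorch node names do.
    return name.replace("-", "_")

assert sanitize_port_name("input-1") == "input_1"

# Test with random inputs rather than zeros: an all-zeros input can make a
# broken model look correct (e.g. a bias-free linear layer maps zeros to zeros).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
```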
The pytorch exporter now removes constant ops from the graph and simply inserts them as parameters.
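A toy sketch of that constant-folding step (the node representation below is made up; the real exporter works on its own graph structures):

```python
# Drop Constant nodes from the node list and record their values as parameters.
nodes = [
    {"op": "Constant", "output": "c0", "value": 2.0},
    {"op": "Mul", "inputs": ["x", "c0"], "output": "y"},
]
parameters = {n["output"]: n["value"] for n in nodes if n["op"] == "Constant"}
nodes = [n for n in nodes if n["op"] != "Constant"]
assert parameters == {"c0": 2.0} and len(nodes) == 1
```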
ONNX BatchNormalization can return an optional number of outputs. To complicate this, the optional outputs are not allowed unless the attribute training_mode=1 is set. I have added a hardcoded check to handle this case.
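The check is roughly along these lines (a sketch; the function below is hypothetical, though NodeProto.attribute and onnx.helper.get_attribute_value are real onnx APIs):

```python
from onnx import helper

def batchnorm_output_count(node) -> int:
    # BatchNormalization's extra outputs (running_mean, running_var) are only
    # legal when the node carries the attribute training_mode=1.
    training_mode = 0
    for attr in node.attribute:
        if attr.name == "training_mode":
            training_mode = helper.get_attribute_value(attr)
    return 3 if training_mode == 1 else 1
```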
Torchvision models run through PyTorch and MDF do not have exactly matching results. I have set the comparison tolerance to an absolute 1e-5 for now. I imagine the difference could be a lot of things. One could be that we are not really controlling precision well in MDF models. Another could be that ONNX ops don't implement exactly the same algorithms that PyTorch ops do.
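That is, the comparison is now along these lines (the arrays below are made up for illustration):

```python
import numpy as np

pytorch_out = np.array([0.1234567, 0.7654321])
mdf_out = np.array([0.1234599, 0.7654399])  # small precision differences
np.testing.assert_allclose(pytorch_out, mdf_out, atol=1e-5)  # passes
```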
The outputs of the PyTorch and MDF models for this torchvision version of Inception are different. Need to investigate.
… into examples/pytorch-to-mdf
Examples/pytorch to mdf
…were being thrown with earlier versions
update mdf_to_onnx branch to current scenario
Updates to v0.4.3 for mdf package; pinning onnx packages due to opsets