
Conversation

@akaptano (Collaborator) commented Jun 5, 2022

Currently the SINDy-PI implementation in the code is quite weak, and there is no support for using SINDy-PI on general PDEs. This branch deprecates the SINDyPILibrary class and implements this functionality within the PDELibrary and WeakPDELibrary classes. Some open issues remain:

  1. I am not quite sure how to use 'tweights' for the WeakPDELibrary integrals when RHS temporal derivatives are involved. I thought I would let @znicolaou take a quick stab at it. So the WeakPDELibrary currently runs but does not produce correct output.
  2. normalize_columns=True seems not to work with the SINDyPI optimizer. This is true even without my changes. I recall this being an issue with the way the optimizer works (it uses constraints and cvxpy), but I can't remember the details or where I wrote them down. If this flag is unusable, we should at least add a warning about it.
  3. I am waiting for the modified KdV data from Kardy's YouTube lecture to add a SINDy-PI PDE example at the end of the now-modified Example 9 notebook.
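
Until the interaction in issue 2 is understood, a lightweight guard could at least surface the problem to users. The sketch below is hypothetical (check_sindypi_options is not an existing PySINDy function); it only illustrates where a warning along the lines of open issue 2 could live:

```python
import warnings

def check_sindypi_options(normalize_columns):
    # Hypothetical guard for open issue 2: normalize_columns is believed to
    # interact badly with the constrained cvxpy formulation used by SINDyPI.
    if normalize_columns:
        warnings.warn(
            "normalize_columns=True is not supported with the SINDyPI "
            "optimizer; results may be incorrect.",
            UserWarning,
        )

# Example: calling the guard with the problematic flag emits one warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_sindypi_options(normalize_columns=True)
```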

… WeakPDELibrary is similar, but I haven't figured out the tweights issue needed to integrate things correctly. Replaced the SINDyPI example with an updated and better one. Waiting on modified KdV data from Kardy to try it out on PDE data. One other issue is generating polynomials of derivative terms; I'm going to talk to Zack about the easiest way to implement this, since he does it somewhere already.
@akaptano akaptano requested review from KKahirman and znicolaou June 5, 2022 22:52
@codecov-commenter commented Jun 5, 2022

Codecov Report

Merging #195 (df0472a) into master (d0517b7) will decrease coverage by 0.84%.
The diff coverage is 89.09%.

❗ Current head df0472a differs from pull request most recent head 0e6ed24. Consider uploading reports for the commit 0e6ed24 to get more accurate results

@@            Coverage Diff             @@
##           master     #195      +/-   ##
==========================================
- Coverage   93.78%   92.94%   -0.85%     
==========================================
  Files          33       33              
  Lines        3297     3329      +32     
==========================================
+ Hits         3092     3094       +2     
- Misses        205      235      +30     
Impacted Files Coverage Δ
pysindy/feature_library/weak_pde_library.py 95.92% <81.81%> (-0.45%) ⬇️
pysindy/feature_library/pde_library.py 94.94% <88.88%> (-1.84%) ⬇️
pysindy/feature_library/base.py 92.51% <100.00%> (ø)
pysindy/feature_library/sindy_pi_library.py 74.21% <100.00%> (-21.02%) ⬇️
pysindy/pysindy.py 88.76% <100.00%> (+0.68%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0517b7...0e6ed24. Read the comment docs.

…kPDELibrary, but now PDELibrary also has such a grid, so we can just check for some WeakPDELibrary-specific attribute like Hxt instead. There will definitely be future issues with the ensembling.
@znicolaou (Collaborator) commented:

Will take a look at this in the next day or two--it should be easy!

@znicolaou (Collaborator) commented:

@akaptano I think I got the weak library acting correctly for the implicit_terms case--there was just one minor change, to rescale the weights correctly when they include time derivatives (the old library assumed the time derivative order was 0 around line 508, where it computes the scaling factor np.product(H_xt_k[k] ** (1.0 - deriv))). I tried it on the ODE example in notebook 9. It works okay, but you need to include a lot of domains (K=2000 there); otherwise the optimizer gets stuck in a suboptimal solution. Anyway, I still have to test with the PDE example, but I think it's doing what it's supposed to.
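
To make the rescaling fix above concrete, here is a small sketch with made-up numbers (H_xt_k and deriv are stand-ins for the arrays in weak_pde_library.py, not the real code): integration by parts scales each weight by the subdomain half-widths raised to (1 - derivative order), and the fix is to include the temporal derivative order instead of assuming it is zero.

```python
import numpy as np

# Illustrative half-widths of one integration subdomain: one spatial
# axis and one temporal axis.
H_xt_k = np.array([0.5, 0.25])
deriv_old = np.array([2.0, 0.0])  # old code: temporal order assumed to be 0
deriv_new = np.array([2.0, 1.0])  # fix: first-order temporal derivative included

# np.prod is the current spelling of the np.product alias mentioned above.
scale_old = np.prod(H_xt_k ** (1.0 - deriv_old))  # 0.5**-1 * 0.25**1 = 0.5
scale_new = np.prod(H_xt_k ** (1.0 - deriv_new))  # 0.5**-1 * 0.25**0 = 2.0
```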

@KIM-cpu05 commented:

> @akaptano I think I got the weak library acting correctly for the implicit_terms case--there was just one minor change, to rescale the weights correctly when they include time derivatives (the old library assumed the time derivative order was 0 around line 508, where it computes the scaling factor np.product(H_xt_k[k] ** (1.0 - deriv))). I tried it on the ODE example in notebook 9. It works okay, but you need to include a lot of domains (K=2000 there); otherwise the optimizer gets stuck in a suboptimal solution. Anyway, I still have to test with the PDE example, but I think it's doing what it's supposed to.

When will implicit_terms be available in weak_pde_library?

@znicolaou (Collaborator) commented Jun 17, 2022

> When will implicit_terms be available in weak_pde_library?

I've completed a first pass at implementing this (see #185), but that PR includes a lot of updates, and the other developers will need to review the changes before we make a new release. In the meantime, if you'd like to test the new features, you can install the package from a clone of the repository on the simplify-fitting branch, e.g.,

git clone https://github.com/dynamicslab/pysindy.git
cd pysindy
git checkout simplify-fitting
pip install -r requirements-dev.txt
jupyter notebook examples/9_sindypi_with_sympy.ipynb

Let us know if you try it out and notice any issues.

@KKahirman (Collaborator) commented Jun 17, 2022 via email

@znicolaou (Collaborator) commented:

> Hi guys, let me know how I can help. Thanks!

Thanks, @KKahirman! Just take a look at the example notebook 9 and see if I'm misrepresenting any results! It's terse and rough right now, so feel free to add details if you'd like.

@KIM-cpu05 commented Jun 25, 2022

> > When will implicit_terms be available in weak_pde_library?
>
> I've completed a first pass at implementing this (see #185), but that PR includes a lot of updates, and the other developers will need to review the changes before we make a new release. In the meantime, if you'd like to test the new features, you can install the package from a clone of the repository on the simplify-fitting branch, e.g.,
>
>     git clone https://github.com/dynamicslab/pysindy.git
>     cd pysindy
>     git checkout simplify-fitting
>     pip install -r requirements-dev.txt
>     jupyter notebook examples/9_sindypi_with_sympy.ipynb
>
> Let us know if you try it out and notice any issues.

BUG: Every time I rerun the code, it gives a different result, particularly in the coefficients.

For instance,

x0x0 = -0.312 1 + 0.151 x0 + 0.010 x1 + 0.072 x0x1 + 2.824 x5 + -12.574 x0x5 + 5.437 x0x0x5 + 0.127 x0x3 + 0.034 x0x0x3

and

x0x0 = -0.328 1 + 0.169 x0 + 0.010 x1 + 0.073 x0x1 + 2.960 x5 + -13.206 x0x5 + 5.545 x0x0x5 + 0.127 x0x3 + 0.034 x0x0x3.

@akaptano (Collaborator, Author) commented:

I'm not sure why this is, but if I had to guess, I would say it comes from cvxpy calling a solver with a nondeterministic initial guess or something like that. Try increasing the error_tolerance and see if you start getting more converged results.
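
As a quick sanity check on how large the run-to-run variation actually is, the two x0x0 models quoted above can be compared directly. This is plain NumPy; the helper name is illustrative and not part of PySINDy:

```python
import numpy as np

def max_coefficient_drift(coeffs_a, coeffs_b):
    # Largest relative change between two fitted coefficient vectors.
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    return float(np.max(np.abs(a - b) / np.maximum(np.abs(b), 1e-12)))

# Coefficients of the two x0x0 models printed in the bug report above.
run1 = [-0.312, 0.151, 0.010, 0.072, 2.824, -12.574, 5.437, 0.127, 0.034]
run2 = [-0.328, 0.169, 0.010, 0.073, 2.960, -13.206, 5.545, 0.127, 0.034]

drift = max_coefficient_drift(run1, run2)  # worst-case change of roughly 10%
```

If tightening the tolerance shrinks this number toward zero across reruns, the nondeterministic-solver explanation is likely right.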

jpcurbelo pushed a commit to jpcurbelo/pysindy_fork that referenced this pull request May 9, 2024
* The current use of `pre_model_hook` only applied the hook in training.
This can be useful, but UMAL requires the hook in validation and
testing as well. Thus, with the old setup, UMAL only worked for training.
My proposed changes make the `pre_model_hook` part of the model, apply
it everywhere, and allow UMAL validation and evaluation.

I think in the future we should also allow for different hook behaviors
according to the setting in which it is called, but for now the
proposed changes are enough.

* Simpler Hook and Cleaner Pipeline

This commit comprises two things:
(1) A pre-model hook that is simpler than the one in the original PR.
(2) An idea to avoid copying the whole dataset by copying only the
labels. That is still suboptimal in terms of memory use, but it makes
the overall code simpler. I'm not sure it is the best version; however,
at some point we have to extend the labels for the loss, and we need to
do so without breaking the whole downstream procedure. So maybe this is
a good middle ground.

* Simplified UMAL Sampling Logic

This commit implements an idea from Martin that simplifies the sampling logic of the UMAL sampling util so that the sampling automatically checks whether the data has been extended.
Thus no extra argument is required for the function call for `sample_umal`.

* Spell correction neuralhydrology/modelzoo/basemodel.py

Updated comment so that additional is spelled correctly

Co-authored-by: Martin Gauch <[email protected]>

* Spell correction for neuralhydrology/utils/samplingutils.py

UMAl -> UMAL

Co-authored-by: Martin Gauch <[email protected]>

* Improvement for Comments

Added some minor changes to the comments.

---------

Co-authored-by: Martin Gauch <[email protected]>
jpcurbelo pushed a commit to jpcurbelo/pysindy_fork that referenced this pull request May 9, 2024

* Fix typo in models doc (dynamicslab#190)

* CPU environment now pulls from conda-forge, which is necessary to ensure an up-to-date xarray. (dynamicslab#193)

* Fix handling of weekly frequencies. (dynamicslab#194)

From neuralhydrology/neuralhydrology#111:
pd.infer_freq will return strings like "W-SUN" for weekly data, which
pd.Timestamp doesn't understand. We now convert these frequencies to
their equivalent multiple of 7D.

* Correcting the pre-model hook and UMAL sampling (dynamicslab#195)

* Update __about__.py

---------

Co-authored-by: Martin Gauch <[email protected]>
Co-authored-by: Grey Nearing <[email protected]>
Co-authored-by: Daniel Klotz <[email protected]>