src/liquid.py
```diff
-mmbar = pymbar.MBAR(mU_kln, mN_k, verbose=False, relative_tolerance=5.0e-8, method='self-consistent-iteration')
-mW1 = mmbar.getWeights()
+mmbar = pymbar.MBAR(mU_kln, mN_k, verbose=False, relative_tolerance=5.0e-8)
+mW1 = mmbar.weights()
```
I have no idea what impact the selection of this method entails. It isn't referenced in the migration guide or clearly described elsewhere in the docstrings. It's also not clear to me what it was doing before: it isn't a named argument in the initializer, yet I don't see a warning in any logs about it being an unrecognized keyword argument. The current version implies to me that this is now the default behavior, but I'm not really sure.
A comment a few lines up leads me to believe this isn't very important ... https://github.com/leeping/forcebalance/pull/261/files#diff-60185d7a8e64e95075d00867f23a65af80149090d8f4f2465d1317f0f1177ed6R923
Removing the argument gets different/bad behavior.
I filed a ticket with the pymbar developers: choderalab/pymbar#472
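For background, `'self-consistent-iteration'` refers to solving the MBAR equations by fixed-point iteration. Below is a minimal NumPy sketch of what such a solver does, as my own illustration rather than pymbar's implementation; the function names, gauge choice, and tolerances are made up for this example.

```python
import numpy as np

def _logsumexp(a, axis):
    """Numerically stable log(sum(exp(a))) along an axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def mbar_self_consistent(u_kn, N_k, tol=1e-10, max_iter=100000):
    """Solve the MBAR equations by self-consistent iteration.

    u_kn[k, n]: reduced potential of sample n evaluated in state k.
    N_k[k]:     number of samples drawn from state k (all > 0 here).
    Returns (f_k, W_nk): dimensionless free energies (gauge f_0 = 0)
    and the N x K matrix of sample weights.
    """
    K, N = u_kn.shape
    log_N_k = np.log(N_k)
    f_k = np.zeros(K)
    for _ in range(max_iter):
        # log of the mixture denominator: sum_l N_l exp(f_l - u_l(x_n))
        log_denom = _logsumexp(log_N_k[:, None] + f_k[:, None] - u_kn, axis=0)
        f_new = -_logsumexp(-u_kn - log_denom, axis=1)
        f_new -= f_new[0]  # fix the gauge: only free-energy differences matter
        if np.max(np.abs(f_new - f_k)) < tol:
            f_k = f_new
            break
        f_k = f_new
    # Recompute the denominator with the converged f_k before forming weights.
    log_denom = _logsumexp(log_N_k[:, None] + f_k[:, None] - u_kn, axis=0)
    W_kn = np.exp(f_k[:, None] - u_kn - log_denom)
    return f_k, W_kn.T
```

A cheap sanity check: if two states' reduced potentials differ by a constant c, the estimated free-energy difference must be exactly c, and each column of the weight matrix must sum to 1 at convergence.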
Hi Matt, thanks a lot for your help on this. One of my concerns was that the major use cases for pymbar involve running a large number of long simulations, which out of necessity could not be covered by the unit tests (about 50 simulations of water boxes at different temperatures, 8 ns each). It might be possible to design a unit test by storing a large 3D matrix of energies extracted from one of these jobs. Do you think I could add this as a test, with the expected result provided by pymbar 2, and make matching that result a requirement for proceeding with the newer versions?
Sure! My objective here is to update the code to work with the new API, and improved tests would help there. If you write up that test against version 3 in a separate PR, I can incorporate it here - I think that would be the path with the least friction for you? The fact that the test runs and reaches the assertion step suggests to me that these changes might be correct - but maybe the method/solver protocol is misbehaving because of insufficient simulation data? I'm a bit over my head in really understanding what the existing test does.
Hi Matt, I just took a closer look at the tests, and I think they should be good enough to determine the correctness of pymbar 4. The test of pymbar happens in the folder

In short, I don't think we need to add another test. However, it is worth thinking about whether we could check for regressions in performance, in terms of wall time or the number of cycles taken to reach convergence. If I remember correctly, pymbar 3 was slower than pymbar 2 because some C++ helper code was removed, but the slowdown was still acceptable. If the default convergence algorithm changed in pymbar 4, the number of iterations required to converge might increase as well. I'd like to avoid a situation where a future user finds that pymbar takes ~1 hour to complete a step that took minutes several years ago.
Hi @leeping, I agree with your concerns about performance but I think it's something we should tackle in a future PR, for a few reasons:
In principle I could add some sort of check to make sure the water study test takes less than some amount of time to run, but I think that's undesirable: it doesn't really test the code here, and it is prone to flakiness because of the number of factors (particularly upstream) that can cause performance to go up or down.
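For concreteness, the kind of coarse wall-time guard being discussed (and argued against) might look like the sketch below. The wrapper name and the idea of a per-test budget are hypothetical, and the flakiness concern above applies: the budget would have to be tuned per machine.

```python
import time

def assert_within_budget(fn, budget_seconds, *args, **kwargs):
    """Run fn and fail if it exceeds a wall-time budget.

    A coarse guard against performance regressions. Wall time is
    sensitive to hardware and upstream library changes, so a check
    like this is inherently prone to flakiness.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, (
        f"{fn.__name__} took {elapsed:.1f}s, budget was {budget_seconds}s"
    )
    return result
```

Counting solver iterations instead of wall time would be less hardware-dependent, but it requires the library to expose that number.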
Force-pushed from 1e4f1e2 to 2b6240c.
Splitting out some of #259 to tackle these updates one at a time