Description
Problem?
I recently added Kalman smoothing to the derivative package, and the next step is figuring out how to pass the smoothed coordinate values back to SINDy as well. This seems like a useful effort, since several of the differentiation methods also smooth the coordinate values but don't return them. I debated coding this change to demonstrate why the feature would be useful, but in reading through all of pysindy/differentiation/* and the derivative package, it seemed the "right way" to do this isn't straightforward. I hope this 🧵 can also be a discussion of the refactoring path that would make this an easier feature to implement, if indeed it is a desirable feature.
Ideal world
Option 1
There are some code smells and design asymmetry in how pysindy uses derivatives. In an ideal world, I could precede this line with

```python
x = self.smoothing_method(x, t)
```

A mock implementation:

```python
def kalman_smoother_posits(x, t):
    return kalman_smoother(x, t)[0]

def kalman_smoother_velocities(x, t):
    return kalman_smoother(x, t)[1]

@lru_cache(1)
def kalman_smoother(x, t):
    # the implementation
    ...

x = self.kalman_smoother_posits(x, t)
x_dot = self.kalman_smoother_velocities(x, t)
```

where the cache is really the only state needed. `__init__` would have to detect that `smoothing_method` is `None` and replace it with the identity function.
Alternatives
Option 2: Classy
For a less dramatic change that would replace the cache with standard object-oriented statefulness, we could ask differentiators/smoothers to add a `smoothed_x` attribute, if they want. `SINDy.fit()` would have the lines:

```python
x_dot = self.differentiation_method(x, t)
try:
    x = self.differentiation_method.smoothed_x
except AttributeError:
    pass
```

Here, the `differentiation_method` object would be responsible for setting the `smoothed_x` property. We could obviate the try-except by modifying `BaseDifferentiation.__call__()` to set `smoothed_x = x`. The reason I don't like this option is that PySINDy differentiation objects are stateless objects whose only external API comprises `__init__()` and `__call__()`. In other words, they're not really classes. The only nice thing about having classes is that you can set the kwargs first and call the function later, but you can do that with `functools.partial`.
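To illustrate the `functools.partial` point: the "set kwargs first, call later" pattern needs no class at all. A minimal sketch, using a stand-in differentiation function rather than pysindy's actual classes (`order` is a hypothetical kwarg):

```python
from functools import partial

import numpy as np

# Stateless stand-in for a differentiation function; `order` plays the
# role of whatever kwargs a differentiator's __init__ would store.
def finite_difference(x, t, order=2):
    return np.gradient(x, t, axis=0)

# Configure now, call later -- same usage a class instance would offer.
diff = partial(finite_difference, order=2)

t = np.linspace(0.0, 1.0, 11)
x = t ** 2
x_dot = diff(x, t)  # same (x, t) call signature as __call__
```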
On a related note: `SmoothedFiniteDifference` objects are a subclass of `FiniteDifference`. I think this is a case where "composition over inheritance" applies. To illustrate the subclass problem: if I want to apply the smoother to a different differentiation method, or do some other preprocessing, I could end up with combinatorial subclasses. Option 1 solves this problem, but if going with Option 2, I would recommend replacing `SmoothedFiniteDifference` with a `SmoothedDifferentiation` that handles smoothing and takes another `BaseDifferentiation` subclass as an argument.
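A rough sketch of that composition idea, with hypothetical names and plain callables standing in for pysindy's smoother and differentiator objects:

```python
import numpy as np

# Hypothetical SmoothedDifferentiation wrapper: it composes any smoother
# with any differentiator, instead of inheriting from a specific
# differentiation class.
class SmoothedDifferentiation:
    def __init__(self, differentiator, smoother):
        self.differentiator = differentiator
        self.smoother = smoother

    def __call__(self, x, t):
        self.smoothed_x = self.smoother(x)  # exposes Option 2's attribute
        return self.differentiator(self.smoothed_x, t)

# Any pair composes without a new subclass (1-D x assumed here, since
# np.convolve is 1-D only):
smooth_fd = SmoothedDifferentiation(
    differentiator=lambda x, t: np.gradient(x, t, axis=0),
    smoother=lambda x: np.convolve(x, np.ones(3) / 3, mode="same"),
)
```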
Additional context
- `SINDyDerivative` doesn't inherit from `BaseDifferentiation`. I'm not sure why not - it repeats the definition of `__call__()` and overrides all the other methods. In addition, because `SINDyDerivative` calls `derivative.dydx()` instead of the classes in `derivative` directly, it can't implement the smoothing in either Option 1 or Option 2. Since `dydx()` uses a public dictionary of differentiation methods (`derivative.methods`), we could have `SINDyDerivative` just use the same dictionary to access the exact class it wants, bypassing `dydx()`.
- One con of Option 1: the functional interface might allow nonsensical combinations of smoothing and derivative. E.g. the Kalman positions and velocities are returned from the same maximum likelihood expression, so it wouldn't make sense to allow a different smoother. This problem suggests Option 3, which is to implement the interface:

  ```python
  x_dot, x = self.differentiation_and_smoothing_method(x, t)
  ```

  A minimal way of accomplishing this is to modify `BaseDifferentiation.__call__()` to return `self.derivative(x, t), x`.
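A hedged sketch of what that Option 3 interface could look like, using stand-in classes rather than pysindy's actual base class (`derivative` here is the author's suggested method name, implemented hypothetically):

```python
import numpy as np

# Hypothetical Option 3 base class: __call__ returns both the derivative
# and the (possibly smoothed) coordinates, so SINDy.fit can unpack both.
class BaseDifferentiation:
    def __call__(self, x, t):
        # Default: no smoothing, so the second element is x unchanged.
        return self.derivative(x, t), x

class FiniteDifference(BaseDifferentiation):
    def derivative(self, x, t):
        return np.gradient(x, t, axis=0)

t = np.linspace(0.0, 1.0, 10)
x = np.sin(t)
x_dot, x_smoothed = FiniteDifference()(x, t)
```

A smoothing subclass would override `__call__` (or a hook) to return its own smoothed coordinates in the second slot.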
Refactoring path??:
Rapid thoughts because I've gotta wrap this up:
1. Make `SINDyDerivative` wrap the exact class it wants and not rely on `dydx()`.
2. Remove `FiniteDifference` as the superclass of `SmoothedFiniteDifference`.
3. Implement Option 2. Stop here if that's all we want.
4. Make `BaseDifferentiator` do `self.differentiate = functools.partial(self.differentiate, **kwargs)`.
5. Implement the caching and smoothing functions of Option 1, but keep them inside classes for now.
6. Expose class `differentiate` methods as module-level functions & deprecate the classes. Or not, if you want to keep the classes.
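For the partial-binding step in the path above, a minimal sketch (class names and the `order` kwarg are illustrative, not pysindy's actual API):

```python
import functools

import numpy as np

# Hypothetical base class: __init__ binds the kwargs onto the
# differentiate method once, so later calls need only (x, t).
class BaseDifferentiator:
    def __init__(self, **kwargs):
        self.differentiate = functools.partial(self.differentiate, **kwargs)

    def __call__(self, x, t):
        return self.differentiate(x, t)

class FiniteDifference(BaseDifferentiator):
    def differentiate(self, x, t, order=2):
        # `order` is illustrative; only a first-order gradient is computed.
        return np.gradient(x, t, axis=0)
```

This keeps the class interface for now while making `differentiate` a plain pre-configured function, which is what a later module-level exposure would deprecate the classes in favor of.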
Additional, Additional context
- Currently, `SINDy.fit()` and `score()` have identical, 55-line blocks of code for calculating derivatives. `SINDy` objects also have a `differentiate()` function that is currently unused in the package except in tests, and handles fewer cases than the blocks in `fit()` and `score()`. Can I refactor those 55 lines into `differentiate()` and repurpose that function?