Commit fdc839a
authored
perf: avoid graph break for SiLUT when inferring (#4790)
This pull request simplifies and optimizes the implementation of the
`forward` method in the `ActivationFn` class within
`deepmd/pt/utils/utils.py`. The changes streamline the logic by removing
unnecessary condition checks and directly using `torch.where` for
computation.
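The motivation is that data-dependent Python branching inside `forward` forces a graph break under `torch.compile`, while a single element-wise `torch.where` keeps the traced graph intact. The sketch below illustrates the pattern; the threshold value, the tail function, and the function names are illustrative assumptions, not the actual code from `deepmd/pt/utils/utils.py`.

```python
import torch
import torch.nn.functional as F

# Hypothetical switch-over point for the tanh tail; the real SiLUT
# parameters live in deepmd/pt/utils/utils.py and are not reproduced here.
THRESHOLD = 3.0


def silu_tail(x: torch.Tensor) -> torch.Tensor:
    # Illustrative smooth tail used beyond the threshold, chosen so the
    # pieces join continuously at x = THRESHOLD (tanh(0) == 0).
    const = F.silu(torch.tensor(THRESHOLD))
    return const + torch.tanh(x - THRESHOLD)


def silut_branching(x: torch.Tensor) -> torch.Tensor:
    # Before: the branch condition depends on tensor *values*, so
    # torch.compile must break the graph at this `if` to evaluate it.
    if bool((x < THRESHOLD).all()):
        return F.silu(x)
    return torch.where(x < THRESHOLD, F.silu(x), silu_tail(x))


def silut_where(x: torch.Tensor) -> torch.Tensor:
    # After: torch.where evaluates both branches element-wise and selects
    # per element; no Python-level control flow, so no graph break.
    return torch.where(x < THRESHOLD, F.silu(x), silu_tail(x))
```

The two versions are numerically identical; the `torch.where` form trades a (sometimes skippable) cheap path for a branch-free graph, which pays off when the model is compiled for inference.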
I've evaluated this change using inference-efficiency tasks from
LAMBench with the DPA 3.1 3M model.
| System | Before: Avg Time ± Std (s) | After: Avg Time ± Std (s) | Speedup | Success Rate |
|---|---|---|---|---|
| `catalysts_500.traj` | 211.82 ± 19.31 | **196.14 ± 18.11** | +7.1% | 100.0% |
| `inorganic_500.traj` | 204.62 ± 40.22 | **191.20 ± 36.44** | +6.4% | 100.0% |
## Summary by CodeRabbit
- **Refactor**
- Improved the internal logic of the SiLU activation function for more
streamlined processing. No changes to user-facing functionality.
Parent commit: c46dc7d
1 file changed (`deepmd/pt/utils/utils.py`): 7 lines removed, 4 lines added. The rendered diff content was not preserved; original lines 152–158 were replaced by new lines 152–155, with context at lines 149–151 and 159–161.