Conversation
📝 Walkthrough
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (1)
mp2p_icp/src/covariance.cpp (1)
146-150: Cache the whitening factor outside errorLambda.
p.cov_inv is pose-invariant, so this LLT now runs once per cov2cov pairing for every finite-difference Jacobian sample. Precomputing one factor per pairing before estimateJacobian() would keep the math fix and avoid multiplying that cost by the 6-DoF sweep.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mp2p_icp/src/covariance.cpp` around lines 146 - 150, The LLT decomposition of p.cov_inv (currently done via Eigen::LLT<Eigen::Matrix3d>(cov_inv).matrixU() assigned to L_T inside the errorLambda) is computed repeatedly for every finite-difference sample; move that decomposition out of errorLambda and precompute a single whitening matrix per cov2cov pairing (e.g., compute L_T once from p.cov_inv before calling estimateJacobian()) and then use that cached L_T when assigning err.block<3,1>(...) = L_T * ret.asEigen() inside errorLambda so the expensive Eigen::LLT is not recomputed for each sample.
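The suggested restructuring can be sketched as follows. This is a minimal, Eigen-free illustration: choleskyU and applyWhitening are hypothetical stand-ins for Eigen::LLT<Eigen::Matrix3d>::matrixU() and the matrix-vector product inside errorLambda, and the pairing container shape is assumed, not taken from covariance.cpp.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Hypothetical stand-ins for the Eigen types used in covariance.cpp.
using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Upper-triangular factor U = L^T with U^T * U = A, analogous to what
// Eigen::LLT(A).matrixU() returns for a symmetric positive-definite A.
inline Mat3 choleskyU(const Mat3& A)
{
    Mat3 L{};  // lower factor: A = L * L^T
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j <= i; ++j)
        {
            double s = A[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    Mat3 U{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) U[i][j] = L[j][i];  // U = L^T
    return U;
}

// Whitened residual U * e, computed with an already-cached factor.
inline Vec3 applyWhitening(const Mat3& U, const Vec3& e)
{
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) out[i] += U[i][j] * e[j];
    return out;
}

// Factor every pairing once, before the 6-DoF finite-difference sweep;
// the error lambda then only does the cheap matrix-vector product.
inline std::vector<Mat3> precomputeWhiteners(const std::vector<Mat3>& covInvs)
{
    std::vector<Mat3> factors;
    factors.reserve(covInvs.size());
    for (const auto& A : covInvs) factors.push_back(choleskyU(A));
    return factors;
}
```

With this split, each cov2cov pairing pays for one factorization in total, instead of one per perturbed pose evaluated by estimateJacobian().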
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mp2p_icp/src/covariance.cpp`:
- Around line 168-181: The chi² rescaling currently multiplies cov by chi2/(m-6)
unconditionally after calling errorLambda, but chi2 is only meaningful for
fully-whitened residuals; restrict this scaling to the fully-normalized
paired_cov2cov case. In the covariance() block where errorLambda(xInitial,
lmbParams, errAtOpt) is computed, add a guard that checks whether the residuals
are the whitened paired_cov2cov family (e.g., detect the lmbParams/whitening
flag or the active residual type used in paired_cov2cov) and only then compute
sigma2 = chi2/dof and apply cov.asEigen() *= sigma2; otherwise skip the
empirical scaling. Ensure you reference and use the existing symbols
errorLambda, covariance (the enclosing function), and paired_cov2cov to locate
and implement the guard.
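A sketch of the guard described above, under the assumption that some flag (here the hypothetical residualsAreWhitened) identifies the fully-normalized paired_cov2cov residual family; the real detection mechanism in covariance.cpp may look different.

```cpp
#include <cstddef>

// Hypothetical helper: a-posteriori unit-weight variance, applied only when
// the residuals are fully whitened (the paired_cov2cov family). For other
// residual families chi^2 has no consistent scale, so scaling is skipped.
inline double aPosterioriScale(double chi2, std::size_t m, bool residualsAreWhitened)
{
    const std::size_t dof = 6;  // an SE(3) pose has 6 parameters
    if (!residualsAreWhitened || m <= dof) return 1.0;  // skip empirical scaling
    return chi2 / static_cast<double>(m - dof);         // sigma^2 = chi2 / (m - 6)
}
```

The call site would then be something like cov.asEigen() *= aPosterioriScale(chi2, m, isCov2Cov), leaving non-whitened residual families untouched.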
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7df2d103-27fa-4d43-a23f-646577fa0c79
📒 Files selected for processing (1)
mp2p_icp/src/covariance.cpp
The cov2cov branch in covariance.cpp whitened residuals with the full information matrix (cov_inv * e), so the assembled Hessian became J^T * cov_inv^2 * J instead of J^T * cov_inv * J. Combined with hundreds of pairings this drove |cov| down to ~1e-20 and made the estimate unusable.
- Use the Cholesky factor L^T (with L L^T = cov_inv) to whiten the cov2cov residual, matching what optimal_tf_gauss_newton accumulates.
- Multiply the inverse-Hessian by chi^2 / (m - 6), the standard a-posteriori unit-weight variance, to rescale the (otherwise optimistic) result by the empirical residual level.
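The claimed bug and fix can be checked numerically. In Gauss-Newton, whitening a residual with a matrix M contributes M^T M to the accumulated weight between J^T and J: with M = cov_inv that weight is cov_inv^2 (the bug), while with M = L^T (where L L^T = cov_inv) it is cov_inv itself. A small Eigen-free sketch, with the 3x3 Cholesky hand-rolled for self-containment:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Hand-rolled Cholesky: returns U = L^T with U^T * U = A (A must be SPD).
inline Mat3 choleskyU(const Mat3& A)
{
    Mat3 L{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j <= i; ++j)
        {
            double s = A[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    Mat3 U{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) U[i][j] = L[j][i];  // U = L^T
    return U;
}

// Weight accumulated in the Gauss-Newton normal equations when residuals
// are whitened by M: contributes M^T * M between J^T and J.
inline Mat3 accumulatedWeight(const Mat3& M)
{
    Mat3 W{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) W[i][j] += M[k][i] * M[k][j];
    return W;
}
```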
564970d to 1bdc476
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:

@@            Coverage Diff             @@
##           develop      #59      +/-   ##
===========================================
+ Coverage    78.62%   78.66%   +0.03%
===========================================
  Files          191      191
  Lines        10641    10657      +16
  Branches       986      988       +2
===========================================
+ Hits          8367     8383      +16
  Misses        2274     2274