
Commit a24b062 ("Update documentation")
1 parent: 0c45294

1 file changed: +25 −14 lines

src/include/stir/recon_buildblock/PoissonLogLikelihoodWithLinearModelForMean.h

Lines changed: 25 additions & 14 deletions
@@ -66,14 +66,12 @@ START_NAMESPACE_STIR
   the <i>sensitivity</i> because (if \f$r=0\f$) it is the total
   probability of detecting a count (in any bin) originating from \f$v\f$.

-  This class computes the gradient as a sum of these two terms. The
-  sensitivity has to be computed by the virtual function
-  \c add_subset_sensitivity(). The sum is computed by
-  \c compute_sub_gradient_without_penalty_plus_sensitivity().
-
-  The reason for this is that the sensitivity is data-independent, and
-  can be computed only once. See also
-  PoissonLogLikelihoodWithLinearModelForMeanAndListModeData.
+  This class computes the gradient directly, via \c compute_sub_gradient_without_penalty().
+  However, an additional method (\c compute_sub_gradient_without_penalty_plus_sensitivity())
+  is provided that computes the sum of the subset gradient and the sensitivity.
+  This method is used in STIR algorithms such as \c OSMAPOSL.
+
+  See also \c PoissonLogLikelihoodWithLinearModelForMeanAndListModeData.

   \par Relation with Kullback-Leibler distance

@@ -121,6 +119,10 @@ public GeneralisedObjectiveFunction<TargetT>
   //PoissonLogLikelihoodWithLinearModelForMean();

   //! Computes the gradient of the data fit term
+  /*!
+    This function is implemented in terms of \c actual_compute_sub_gradient_without_penalty()
+    by setting do_subtraction = true.
+  */
   virtual void
   compute_sub_gradient_without_penalty(TargetT& gradient,
                                        const TargetT& current_estimate,

@@ -130,12 +132,8 @@ public GeneralisedObjectiveFunction<TargetT>
   /*!
    This function is used for instance by OSMAPOSL.

-   This computes
-   \f[ {\partial L \over \partial \lambda_v} + P_v =
-   \sum_b P_{bv} {y_b \over Y_b}
-   \f]
-   (see the class general documentation).
-   The sum will however be restricted to a subset.
+   This function is implemented in terms of \c actual_compute_sub_gradient_without_penalty()
+   by setting do_subtraction = false.
  */
  virtual void
  compute_sub_gradient_without_penalty_plus_sensitivity(TargetT& gradient,

@@ -253,7 +251,20 @@ public GeneralisedObjectiveFunction<TargetT>
  */
  void compute_sensitivities();

+  //! computes the objective function subset gradient without the penalty
+  /*!
+    If do_subtraction is false, this computes
+    \f[ {\partial L \over \partial \lambda_v} + P_v =
+    \sum_b P_{bv} {y_b \over Y_b}
+    \f]
+    (see the class general documentation).
+    The sum will however be restricted to a subset.
+
+    However, if do_subtraction is true, this function will instead compute
+    \f[ {\partial L \over \partial \lambda_v} =
+    \sum_b P_{bv} \left({y_b \over Y_b} - 1\right)
+    \f]
+  */
  virtual void
  actual_compute_sub_gradient_without_penalty(TargetT& gradient,
                                              const TargetT& current_estimate,
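The two formulas in the new doxygen comment differ only by the subset sensitivity term \f$P_v = \sum_b P_{bv}\f$. The following is a minimal toy sketch of that do_subtraction semantics, not STIR's actual implementation: the names, the dense matrix representation of \f$P_{bv}\f$, and the plain vectors standing in for TargetT are all illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

// Toy sketch (hypothetical, not STIR code) of the do_subtraction flag
// documented for actual_compute_sub_gradient_without_penalty():
//   do_subtraction == false:  grad_v = sum_b P_bv *  y_b / Y_b        (gradient + sensitivity)
//   do_subtraction == true :  grad_v = sum_b P_bv * (y_b / Y_b - 1)   (plain gradient)
// P[b][v] is a small dense stand-in for the system matrix, y the measured
// counts, Y the modelled mean (both indexed by bin b).
std::vector<double> toy_sub_gradient(const std::vector<std::vector<double>>& P,
                                     const std::vector<double>& y,
                                     const std::vector<double>& Y,
                                     bool do_subtraction)
{
  const std::size_t num_voxels = P.empty() ? 0 : P[0].size();
  std::vector<double> grad(num_voxels, 0.0);
  for (std::size_t b = 0; b < P.size(); ++b)
    for (std::size_t v = 0; v < num_voxels; ++v)
    {
      const double ratio = y[b] / Y[b];
      // subtracting 1 per bin removes exactly the sensitivity sum_b P_bv
      grad[v] += P[b][v] * (do_subtraction ? ratio - 1.0 : ratio);
    }
  return grad;
}
```

With this sketch, the result for do_subtraction == false minus the result for do_subtraction == true is exactly the sensitivity \f$\sum_b P_{bv}\f$, which is why OSMAPOSL-style algorithms can use the "plus sensitivity" variant directly in their multiplicative update.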
