[SPARK-24634][SS] Add a new metric regarding number of rows later than watermark #21617
Conversation
…rmark * This adds a new metric to count the number of rows that arrived later than the watermark
add to whitelist
Test build #92239 has finished for PR 21617 at commit
Test build #92241 has finished for PR 21617 at commit
Retest this please.
Test build #92276 has finished for PR 21617 at commit
LGTM, but note that the rows being counted here are the rows persisted into the state store, which aren't necessarily the input rows. So the side-channel described in the JIRA would be orthogonal to this.
@jose-torres Btw, as I described in the JIRA, my final goal is pushing late events to a side-output (as Beam and Flink provide), but I'm stuck on a couple of concerns (please correct me anytime if I'm missing something):

1. A query can have a couple of transformations before rows reach the stateful operator and get filtered out due to the watermark. Providing the transformed rows is not ideal, and I guess that's what you meant by "aren't necessarily the input rows". Ideally we would provide the original input rows rather than the transformed ones, but then we would have to put a major restriction on the watermark: transformations could not populate or manipulate the timestamp field, and the timestamp field must not be modified during transformations. For example, Flink provides a timestamp assigner to extract the timestamp value from the input stream, and reserved field name

2. I have been struggling with this, but as far as my understanding goes, an RDD itself doesn't support multiple outputs, by its very nature. To me this looks like a major difference between the pull model and the push model: in the push model, which other streaming frameworks use, defining another output stream is really straightforward, just like adding a remote listener, whereas I'm not sure how it can be cleanly defined in the pull model. I also googled multiple outputs on RDDs (in case someone had struggled with this before) but had no luck. The alternative approaches I can imagine are all workarounds: RPC, the listener bus, callback functions. None of them can define another stream within the current DAG, and I'm also not sure that we can create another DataFrame based on that data and let end users compose another query on it. It would be really helpful if you could think about better alternatives and share them. It would be best of all if I'm missing something and Spark already provides a side-channel.
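The side-output idea discussed above can be sketched outside Spark. The following is an illustrative pure-Python sketch, not Spark API: the event shape, function name, and watermark handling are all hypothetical, and real watermark tracking is per-partition and stateful.

```python
# Illustrative sketch of a side-output split (NOT Spark API): partition
# incoming events into on-time and late streams by comparing each
# event's timestamp against the current watermark.

def split_by_watermark(events, watermark):
    """events: iterable of (event_time, payload) tuples; watermark: a timestamp."""
    on_time, late = [], []
    for event_time, payload in events:
        if event_time >= watermark:
            on_time.append((event_time, payload))
        else:
            late.append((event_time, payload))  # candidate for a side-output
    return on_time, late

events = [(10, "a"), (4, "b"), (12, "c"), (7, "d")]
on_time, late = split_by_watermark(events, watermark=8)
print(on_time)  # [(10, 'a'), (12, 'c')]
print(late)     # [(4, 'b'), (7, 'd')]
```

In a push-model system the `late` list would simply be emitted to a second downstream consumer; the difficulty described above is that Spark's pull-model DAG has no natural place to hang that second output.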
Well, "clear" is relative. Since we're trying to provide this functionality in the DataFrame API, it's perfectly alright for the RDD graph to end up looking a bit weird. It seems feasible to do something like:

I agree that watermarks should be applied immediately after the data reader. Other streaming systems generally require this, and Spark does not seem to be getting any benefit from having a more general watermark concept. I haven't had time to push for this change, but I think it's known that the current Spark watermark model is flawed, and I'd support fixing it for sure.
```diff
  ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
- ("memoryUsedBytes" -> JInt(memoryUsedBytes))
+ ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
+ ("numLateInputRows" -> JInt(numLateInputRows))
```
Here you are measuring the number of "keys" filtered out of the state store because they have crossed the late threshold, correct? It may be better to rename this metric here and in the other places to "number of evicted rows". It would be better still if we could expose the actual number of events that were late.
> Here you are measuring the number of "keys" filtered out of the state store because they have crossed the late threshold, correct?

No, it is based on "input" rows which are filtered out due to the watermark threshold. Note that the meaning of "input" is relative: it doesn't refer to the input rows of the overall query, but to the input rows of the state operator.

> It would be better if we could expose the actual number of events that were late.

I may be missing the point of the comment, but I would think that, to get a correct count of late events, we would have to filter them out in the first phase of the query (not at the state operator). For now, intermediate filters affect the count.
What I meant was: if the input to the state operator is the result of the aggregate, then we would not be counting the actual input rows to the group-by. There would be at most one row per key, which would give the impression that there are fewer late events than there really are.
If this is not the case then I am fine.
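The undercounting concern above can be illustrated with a small pure-Python simulation (not Spark code; the watermark value, rows, and max-timestamp aggregation are made up for the example): aggregation collapses many input rows into at most one row per key before anything reaches the state operator.

```python
# Pure-Python illustration (not Spark code) of why counting late rows
# at the state operator can undercount: a group-by collapses many input
# rows into at most one row per key before the watermark check runs.

WATERMARK = 8

# (key, event_time) input rows; three of them are late (event_time < 8).
rows = [("k1", 3), ("k1", 5), ("k1", 9), ("k2", 6), ("k2", 10)]

# Counting on the raw input sees every late row.
late_inputs = sum(1 for _, t in rows if t < WATERMARK)

# A max-timestamp aggregation leaves one row per key.
aggregated = {}
for key, t in rows:
    aggregated[key] = max(aggregated.get(key, t), t)

# Counting after aggregation sees at most one late row per key; here,
# every key's max timestamp is on time, so no lateness is visible at all.
late_aggregated = sum(1 for t in aggregated.values() if t < WATERMARK)

print(late_inputs, late_aggregated)  # 3 0
```

Three input rows were genuinely late, yet the post-aggregation count is zero, which is exactly the "fewer late events than there really are" impression described above.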
@arunmahadevan Ah yes, got it. If we want an accurate number, we need to filter out late events at the first stage anyway. I guess we may need to defer addressing this until we change that behavior.
Abandoning the patch. While I think the JIRA issue is still valid, it looks like we should address the watermark issue first to get a correct count of late events. Thanks for reviewing @jose-torres @arunmahadevan.
What changes were proposed in this pull request?
This adds a new metric to count the number of rows that arrived later than the watermark.
The metric is exposed in two places:
- `numLateInputRows` in `stateOperators`
- "number of rows which are later than watermark" in the state operator exec

Please refer to https://issues.apache.org/jira/browse/SPARK-24634 for the rationale behind the issue.
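To show where a consumer would see the new field, here is a hedged sketch of reading it out of a streaming progress payload. The surrounding JSON structure is a simplified illustration of a progress report, not an exact dump; only the field name `numLateInputRows` comes from this PR.

```python
import json

# Simplified, illustrative progress payload carrying the new metric;
# only the "numLateInputRows" field name is taken from this PR, the
# rest of the shape is an assumption for the example.
progress_json = """
{
  "stateOperators": [
    {"numRowsTotal": 120, "numRowsUpdated": 15,
     "memoryUsedBytes": 65536, "numLateInputRows": 7}
  ]
}
"""

progress = json.loads(progress_json)
late = sum(op.get("numLateInputRows", 0) for op in progress["stateOperators"])
print(late)  # 7
```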
How was this patch tested?
Modified existing UTs.