
Conversation

@HeartSaVioR
Contributor

What changes were proposed in this pull request?

This adds a new metric to count the number of rows that arrive later than the watermark.

The metric will be exposed in two places:

  1. streaming query listener - numLateInputRows in stateOperators
  2. SQL tab in the UI - number of rows which are later than the watermark, in the state operator exec node

Please refer to https://issues.apache.org/jira/browse/SPARK-24634 for the rationale behind this issue.
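
A minimal sketch (not part of this patch) of how the new metric would surface through the listener; the SparkSession setup is assumed, and the numLateInputRows key under stateOperators is the one proposed in this PR:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.StreamingQueryListener
    import org.apache.spark.sql.streaming.StreamingQueryListener._

    val spark = SparkSession.builder().master("local[2]").appName("late-rows-demo").getOrCreate()

    // Log every progress update; with this patch the stateOperators entries in
    // the progress JSON would also carry the proposed numLateInputRows value.
    spark.streams.addListener(new StreamingQueryListener {
      override def onQueryStarted(event: QueryStartedEvent): Unit = ()
      override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
      override def onQueryProgress(event: QueryProgressEvent): Unit = {
        println(event.progress.json)
      }
    })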

How was this patch tested?

Modified existing UTs.


@HyukjinKwon
Member

add to whitelist

@SparkQA

SparkQA commented Jun 23, 2018

Test build #92239 has finished for PR 21617 at commit ff1b895.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jun 23, 2018

Test build #92241 has finished for PR 21617 at commit ff1b895.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun
Member

Retest this please.

@SparkQA

SparkQA commented Jun 25, 2018

Test build #92276 has finished for PR 21617 at commit ff1b895.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jose-torres
Contributor

LGTM, but note that the rows being counted here are the rows persisted into the state store, which aren't necessarily the input rows. So the side-channel described in the JIRA would be orthogonal to this.

@HeartSaVioR
Contributor Author

HeartSaVioR commented Jun 25, 2018

@jose-torres
Yes, you're right. They would be the rows after other transformations and filtering have been applied, not the original input rows. I just haven't found a better term than "input row", since from the state operator's point of view they are input rows.

Btw, as I described in the JIRA, my final goal is pushing late events to a side output (as Beam and Flink provide), but I'm stuck on a couple of concerns (please correct me anytime if I'm missing something here):

  1. Which events to push?

A query can have a couple of transformations before rows reach the stateful operator and are filtered out due to the watermark. Providing the transformed rows is not ideal, and I guess that's what you meant by "aren't necessarily the input rows".

Ideally we would provide the original input rows rather than the transformed ones, but then we would have to put a major restriction on the watermark: the watermark filter would have to be applied in the data reader (or in a filter placed just after the data reader), which means the input rows themselves would have to carry the timestamp field.

We couldn't apply transformations to populate or manipulate the timestamp field, and the timestamp field must not be modified during transformations. For example, Flink provides a timestamp assigner to extract the timestamp value from the input stream, and the reserved field name rowtime is used for the timestamp field. A sketch of what this restriction would look like follows below.
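
A hedged sketch of the restricted shape this implies (the Kafka brokers and topic are placeholders; the only point is that withWatermark is declared on the source's own timestamp column, immediately after the reader):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[2]").appName("watermark-at-source").getOrCreate()

    // Watermark declared right after the data reader, on the reader's own
    // timestamp column, before any transformation can touch or replace it.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host:9092")   // placeholder brokers
      .option("subscribe", "events")                     // placeholder topic
      .load()                                            // Kafka source exposes a 'timestamp' column
      .withWatermark("timestamp", "10 minutes")

    // What the current API also allows, and what this restriction would forbid:
    // deriving or rewriting the event-time column in later transformations and
    // calling withWatermark on that derived column instead.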

  2. Does the nature of RDD support multiple outputs?

I have been struggling with this, but as far as I understand, an RDD itself doesn't support multiple outputs; that's just the nature of RDDs. To me this looks like a major difference between the pull model and the push model: in the push model, which other streaming frameworks use, defining another output stream is really straightforward, just like adding a remote listener, whereas I'm not sure how it can be cleanly defined in the pull model. I also googled about multiple outputs on RDDs (since someone may have struggled with this before) but had no luck.

The alternative approaches I can imagine are all workarounds: RPC, the listener bus, a callback function. None of them can define another stream within the current DAG, and I'm also not sure we can create another DataFrame based on that data and let end users compose another query on top of it.

It would be really helpful if you could think about better alternatives and share them. It would be best if I'm just missing something and Spark already provides a side channel.

@jose-torres
Contributor

jose-torres commented Jun 25, 2018

Well, "clear" is relative. Since we're trying to provide functionality in the Dataframe API, it's perfectly alright for the RDD graph to end up looking a bit weird. It seems feasible to do something like:

  • Have a stream reader RDD filter out late rows, writing them to some special shuffle partition (set of partitions?) which the main query knows not to read.
  • Have a stream writer RDD with two heterogeneous sets of partitions: one to write the main query to the sink, and another to apply the specified action to the late rows.

I agree that watermarks should be applied immediately after the data reader - other streaming systems generally require this, and Spark does not seem to be getting any benefits from having a more general watermark concept. I haven't had time to push for this change, but I think it's known that the current Spark watermark model is flawed, and I'd support fixing it for sure.

("numRowsUpdated" -> JInt(numRowsUpdated)) ~
("memoryUsedBytes" -> JInt(memoryUsedBytes))
("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
("numLateInputRows" -> JInt(numLateInputRows))
@arunmahadevan
Contributor

Here you are measuring the number of "keys" filtered out of the state store because they have crossed the late threshold, correct? It may be better to rename this metric here and at the other places to "number of evicted rows". It would be better still if we could expose the actual number of events that were late.

@HeartSaVioR
Contributor Author

HeartSaVioR commented Jun 26, 2018

@arunmahadevan

Here you are measuring the number of "keys" filtered out of the state store since they have crossed the late threshold correct ?

No, it is based on the "input" rows which are filtered out due to the watermark threshold. Note that the meaning of "input" is relative here: it doesn't mean the input rows of the overall query, but the input rows of the state operator.

Its better if we could rather expose the actual number of events that were late.

I guess the comment is based on missing some context, but I would agree that to get a correct count of late events we would have to filter them out in the first phase of the query (not in the state operator). For now, the intervening filters affect the count.

@arunmahadevan
Contributor

What I meant was: if the input to the state operator is the result of the aggregate, then we would not be counting the actual input rows to the group by. There would be at most one row per key, which would give the impression that there are fewer late events than there really are.

If this is not the case then I am fine.
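
To illustrate the concern (a hedged sketch, not code from this PR; the rate source and grouping are only for demonstration):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().master("local[2]").appName("undercount-demo").getOrCreate()

    // The stateful part of this aggregation receives pre-aggregated rows: at
    // most one row per (window, key) per batch, not the raw input events.
    val counts = spark.readStream
      .format("rate")                                  // built-in test source: timestamp, value
      .load()
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window(col("timestamp"), "5 minutes"), col("value") % 10)
      .count()

    // If 100 late events fall into one (window, value % 10) group, a late-row
    // count taken at the stateful operator can see far fewer than 100 rows.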

@HeartSaVioR
Contributor Author

@arunmahadevan Ah yes, got it. If we want an accurate number, we need to filter out late events at the first phase anyway. I guess we may need to defer addressing this until we change that behavior.

@HeartSaVioR
Contributor Author

Abandoning the patch. While I think the JIRA issue is still valid, it looks like we should address the watermark issue first so that we can report a correct number of late events. Thanks for reviewing @jose-torres @arunmahadevan.
