Commit dce610a

the-other-tim-brown authored and fengjian committed
[HUDI-4399][RFC-57] Protobuf support in DeltaStreamer (apache#6111)
1 parent e46df7c commit dce610a

1 file changed

Lines changed: 111 additions & 0 deletions

File tree

rfc/rfc-57/rfc-57.md

<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# RFC-57: DeltaStreamer Protobuf Support

## Proposers

- @the-other-tim-brown

## Approvers

- @bhasudha
- @vinothchandar

## Status

JIRA: https://issues.apache.org/jira/browse/HUDI-4399

> Please keep the status updated in `rfc/README.md`.

## Abstract

Support consuming Protobuf messages from Kafka with the DeltaStreamer.

## Background

Hudi's DeltaStreamer currently supports consuming Avro and JSON data from Kafka, but it does not support Protobuf. Adding support will require:

1. Parsing the data from Kafka into Protobuf Messages
2. Generating a schema from a Protobuf Message class
3. Converting from Protobuf to Avro and Row

## Implementation

### Parsing Data from Kafka

Users will provide the class name of a Protobuf Message, contained within a jar that is on the classpath. We will then implement a deserializer that parses the bytes of the Kafka message into a Protobuf Message.

Configuration options:

- `hoodie.deltastreamer.schemaprovider.proto.className` - The class to use
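
The exact deserializer is not spelled out in this RFC; as a minimal sketch, assuming the configured class name is made available through the Kafka consumer configs (the property lookup below is illustrative), reflection over the generated static `parseFrom(byte[])` method is sufficient:

```java
import java.lang.reflect.Method;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

import com.google.protobuf.Message;

/**
 * Sketch of a deserializer that turns raw Kafka bytes into a Protobuf Message
 * by invoking the static parseFrom(byte[]) method generated for the class.
 */
public class ProtoMessageDeserializer implements Deserializer<Message> {

  private Method parseMethod;

  @Override
  public void configure(Map<String, ?> configs, boolean isKey) {
    // Illustrative: reuse the schema provider's class name property.
    String className =
        (String) configs.get("hoodie.deltastreamer.schemaprovider.proto.className");
    try {
      Class<?> protoClass = Class.forName(className);
      // Every generated Protobuf Message class exposes parseFrom(byte[]).
      parseMethod = protoClass.getMethod("parseFrom", byte[].class);
    } catch (ReflectiveOperationException e) {
      throw new IllegalArgumentException("Cannot load Protobuf class " + className, e);
    }
  }

  @Override
  public Message deserialize(String topic, byte[] data) {
    if (data == null) {
      return null;
    }
    try {
      // Static invocation: the receiver argument is null.
      return (Message) parseMethod.invoke(null, (Object) data);
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException("Failed to parse Protobuf message from topic " + topic, e);
    }
  }
}
```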

### ProtobufClassBasedSchemaProvider

This new SchemaProvider will allow the user to provide a Protobuf Message class and get an Avro Schema. Protobuf has no concept of a nullable field, so wrapper types such as `Int32Value` and `StringValue` are commonly used to represent nullable fields. The schema provider will allow the user to treat these wrapper fields as nullable versions of the fields they wrap, instead of treating them as nested messages. In practice, this means the user can choose between representing a field `Int32Value my_int = 1;` as `my_int.value` or simply `my_int` when writing the data out to the file system.
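
To make the two representations concrete, here is how the generated Avro field schema for `Int32Value my_int = 1;` would differ between the two modes, built with Avro's Schema API (an illustration of the output, not the provider's actual code):

```java
import java.util.Arrays;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class WrapperFieldExample {
  public static void main(String[] args) {
    // flattenWrappers = true: Int32Value my_int becomes a nullable int,
    // readable directly as `my_int`.
    Schema flattened = Schema.createUnion(Arrays.asList(
        Schema.create(Schema.Type.NULL),
        Schema.create(Schema.Type.INT)));

    // flattenWrappers = false (default): the wrapper is kept as a nested
    // record, so the value is read as `my_int.value`.
    Schema nested = SchemaBuilder.record("Int32Value").fields()
        .name("value").type().intType().noDefault()
        .endRecord();

    System.out.println(flattened); // ["null","int"]
    System.out.println(nested);    // {"type":"record","name":"Int32Value",...}
  }
}
```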

#### Field Mappings

| Protobuf | Avro |
|----------|------|
| bool | boolean |
| float | float |
| double | double |
| enum | enum |
| string | string |
| bytes | bytes |
| int32 | int |
| sint32 | int |
| fixed32 | int |
| sfixed32 | int |
| uint32 | long [1] |
| int64 | long |
| uint64 | long |
| sint64 | long |
| fixed64 | long |
| sfixed64 | long |
| message | record [2] |
| repeated | array |
| map | array [3] |

[1] Handling of unsigned integers and longs: Protobuf provides support for unsigned integers and longs while Avro does not. The schema provider will convert unsigned integers and longs to the Avro long type in the schema definition.
[2] All messages will be translated to a union[null, record] with a default of null.
[3] Protobuf maps allow non-string keys while Avro does not, so we convert maps to an array of records containing a key and a value.
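
To illustrate mapping [3], a sketch of the schema the provider could generate for a proto field `map<int32, string> counts = 1;` (the entry record name is an assumption for illustration):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class MapFieldExample {
  public static void main(String[] args) {
    // A proto `map<int32, string>` cannot become an Avro map because Avro
    // map keys must be strings. Instead it becomes an array of key/value
    // records, one record per map entry.
    Schema entrySchema = SchemaBuilder.record("CountsEntry").fields()
        .name("key").type().intType().noDefault()
        .name("value").type().stringType().noDefault()
        .endRecord();
    Schema mapAsArray = Schema.createArray(entrySchema);

    System.out.println(mapAsArray.toString(true));
  }
}
```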

#### Schema Evolution

**Adding a Field:**
Protobuf defines a default value for every field, and the translation from proto to Avro schema will carry over this default value, so there are no errors when a new field is added to the proto definition.

**Removing a Field:**
If a user removes a field from the Protobuf schema, the schema provider will not be able to add this field to the Avro schema it generates. To avoid issues when writing data, users must set `hoodie.datasource.write.reconcile.schema=true` to properly reconcile the schemas if a field is removed from the proto definition. Users can avoid this situation by using the `deprecated` field option in proto instead of removing the field from the schema.

Configuration Options:

- `hoodie.deltastreamer.schemaprovider.proto.className` - The class to use
- `hoodie.deltastreamer.schemaprovider.proto.flattenWrappers` (Default: false) - By default the wrapper classes will be treated like any other message and have a nested `value` field. When this is set to true, we do not emit a nested `value` field and instead treat the field as nullable in the generated schema.
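
As an example of wiring these options together, a user might populate the DeltaStreamer properties as follows (`com.example.MyEvent` is a placeholder for the user's generated Message class, not a class from this RFC):

```java
import java.util.Properties;

public class SchemaProviderConfigExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Placeholder class name; must be on the classpath at runtime.
    props.setProperty("hoodie.deltastreamer.schemaprovider.proto.className",
        "com.example.MyEvent");
    // Flatten Int32Value/StringValue wrappers into nullable fields.
    props.setProperty("hoodie.deltastreamer.schemaprovider.proto.flattenWrappers",
        "true");
  }
}
```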

### ProtoToAvroConverter and ProtoToRowConverter

A utility will be provided that takes in a Protobuf Message and converts it to an Avro GenericRecord. This will be used inside the SourceFormatAdapter to properly convert to an Avro RDD. This change will also add a new `Source.SourceType` so that other sources can implement this source type in the future, for example Protobuf messages on PubSub.
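
A simplified sketch of how such a converter could walk a Message's descriptors and populate a GenericRecord; it assumes the Avro schema came from the schema provider above, and it omits wrapper flattening, maps, repeated fields, and enum/bytes coercion:

```java
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

import com.google.protobuf.Descriptors;
import com.google.protobuf.Message;

public class ProtoToAvroSketch {

  /** Recursively copies scalar and nested message fields into a GenericRecord. */
  public static GenericRecord convert(Schema schema, Message message) {
    GenericRecord record = new GenericData.Record(schema);
    // getAllFields() returns only the fields that are set on the message.
    for (Map.Entry<Descriptors.FieldDescriptor, Object> entry
        : message.getAllFields().entrySet()) {
      Descriptors.FieldDescriptor fd = entry.getKey();
      Object value = entry.getValue();
      Schema.Field avroField = schema.getField(fd.getName());
      if (value instanceof Message) {
        // Nested messages map to union[null, record]; resolve the record branch.
        Schema nestedSchema = resolveNonNull(avroField.schema());
        record.put(fd.getName(), convert(nestedSchema, (Message) value));
      } else {
        // Repeated, map, enum, and bytes fields need extra handling; omitted here.
        record.put(fd.getName(), value);
      }
    }
    return record;
  }

  private static Schema resolveNonNull(Schema schema) {
    if (schema.getType() == Schema.Type.UNION) {
      for (Schema branch : schema.getTypes()) {
        if (branch.getType() != Schema.Type.NULL) {
          return branch;
        }
      }
    }
    return schema;
  }
}
```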

To convert to `Dataset<Row>`, we will initially convert to Avro and then to Row, to reduce the amount of change required to ship the feature. We will then fast-follow with a direct proto to Row conversion that takes a similar approach to the proto to Avro converter.

Special handling for maps:
Protobuf allows any integral or string type as a map key while Avro requires a string. To account for this, we convert all maps to lists of the entries in that map, as sketched below.
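
Continuing the map example from the schema section, the converter could materialize each proto map entry as one Avro record in the array (same illustrative entry schema as before):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class MapConversionSketch {

  /** Converts a proto map value into an Avro array (list) of key/value records. */
  public static List<GenericRecord> convertMap(Schema entrySchema, Map<?, ?> protoMap) {
    List<GenericRecord> entries = new ArrayList<>(protoMap.size());
    for (Map.Entry<?, ?> e : protoMap.entrySet()) {
      GenericRecord entry = new GenericData.Record(entrySchema);
      entry.put("key", e.getKey());     // e.g. an int key, which an Avro map cannot hold
      entry.put("value", e.getValue());
      entries.add(entry);
    }
    return entries;
  }
}
```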

## Rollout/Adoption Plan

This change simply adds new functionality and will not impact existing users. The changes to the SourceFormatAdapter will only impact Proto source types, and this is the first source of its kind. Users will need to start running a new DeltaStreamer to get this functionality.

## Test Plan

- The new source will have testing that mirrors what we currently do for the TestJsonKafkaSource. This will exercise reading the records from Kafka into both a `JavaRDD<GenericRecord>` and a `Dataset<Row>`.
- The converter code will also be exercised on the above path, but it will be more thoroughly tested in its own unit tests. The unit tests will use a sample Protobuf message that covers nested messages, primitive fields, and wrapped field types.
- The schema provider will similarly be tested in its own unit tests to validate that the behavior matches expectations.
