Conversation
|
Just to comment on the format of the timestamps...
If I had to guess, ISO is the default, and Unix could work if I were able to set the timestamp type (currently not possible through the go-sdk).
|
I guess we can send this as email feedback to the admin backend team so we can add better SDK docs for it.
Force-pushed 5e0368b to a7e7348
|
@wtrocki basic functionality is in just for consuming and producing, tried to stick to design guidelines but I may have missed something |
|
|
| // setting timestamp as "" (not set by user) is not valid
| // not setting timestamp is handled by the request

This comment describes SDK behaviour

| if opts.timestamp != "" {
| 	// setting timestamp as "" (not set by user) is not valid
| 	// not setting timestamp is handled by the request
| 	request = request.Timestamp(opts.timestamp)
There was a problem hiding this comment.
I would typically validate that timestamp and document the values that are possible.
So instead of letting the API throw errors when the format is incorrect, we parse it ourselves and then return our custom error. I guess we should also add the possible format options to the long description of the command.
A good API accepts only a single format. A good client accepts as many formats as possible and converts them for the API. If the input cannot be converted to the format supported by the API, an error should be returned.
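A minimal sketch of that "many formats in, one format out" approach, in Go (the CLI's language). The accepted layouts and the `parseTimestamp` name are illustrative assumptions, not the actual rhoas implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseTimestamp accepts several common input formats (RFC 3339, a short
// date, or a Unix epoch in seconds) and normalizes them to the single
// RFC 3339 string the API expects. Unrecognized input yields our own
// error instead of letting the API reject the request.
func parseTimestamp(input string) (string, error) {
	// Unix epoch (seconds) given as a plain integer.
	if secs, err := strconv.ParseInt(input, 10, 64); err == nil {
		return time.Unix(secs, 0).UTC().Format(time.RFC3339), nil
	}
	// Try a list of layouts, most specific first.
	layouts := []string{time.RFC3339, "2006-01-02T15:04:05", "2006-01-02"}
	for _, layout := range layouts {
		if t, err := time.Parse(layout, input); err == nil {
			return t.UTC().Format(time.RFC3339), nil
		}
	}
	return "", fmt.Errorf("invalid timestamp %q: expected RFC 3339, YYYY-MM-DD, or Unix seconds", input)
}

func main() {
	for _, in := range []string{"2022-06-17", "812762", "not-a-date"} {
		out, err := parseTimestamp(in)
		fmt.Println(in, "->", out, err)
	}
}
```

The same list of layouts could then be echoed verbatim in the command's long description, so docs and validation never drift apart.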
|
Okay, some issues with using --wait. Unless I need to use a different endpoint, the current consume API retrieves all records matching the query regardless of whether the current connection has already consumed them. This means if we consume over and over while no new records are produced, we still get all the old ones again and again. There could be ways to use this and still get a nice result, but I think we would then be working around the issue. @wtrocki thoughts?
That is why we have the offset argument in the consume API. We can control the offset :)
|
Oh okay, that makes sense. I didn't think of this... I forgot offset existed.
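To make the offset idea concrete, here is a hedged sketch of how --wait could avoid re-reading old records: remember the highest offset seen and resume from offset+1 on the next poll. `fetchRecords` is a stand-in for the real consume endpoint, not the actual SDK call:

```go
package main

import (
	"fmt"
	"time"
)

type record struct {
	Offset int64
	Value  string
}

// fetchRecords stands in for the consume API: it returns every record
// at or after the requested offset.
func fetchRecords(all []record, from int64) []record {
	var out []record
	for _, r := range all {
		if r.Offset >= from {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	topic := []record{{0, "a"}, {1, "b"}, {2, "c"}}
	next := int64(0)
	for i := 0; i < 2; i++ { // two polls; real code would loop until interrupted
		batch := fetchRecords(topic, next)
		for _, r := range batch {
			fmt.Println(r.Offset, r.Value)
			next = r.Offset + 1 // resume after the last record we printed
		}
		time.Sleep(10 * time.Millisecond)
	}
	// the second poll prints nothing: no new records were produced
}
```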
|
| flags.AddOutput(&opts.outputFormat)
| flags.StringVar(&opts.topicName, "name", "", f.Localizer.MustLocalize("kafka.topic.common.flag.name.description"))
| flags.Int32Var(&opts.partition, "partition", 0, f.Localizer.MustLocalize("kafka.topic.consume.flag.partition.description"))
| flags.StringVar(&opts.timestamp, "timestamp", "", f.Localizer.MustLocalize("kafka.topic.consume.flag.timestamp.description"))
Do we need a date flag as well?
A date flag? What would it do? Would this not be covered by what timestamp already does?
There are a number of date formats. We need to be specific about what a timestamp is. For example, databases have a specific format associated with timestamps.
Oh okay, I understand: just something to specify the format used.
| }
| }
| func mapRecordsToRows(topic string, records *[]kafkainstanceclient.Record) []kafkaRow { |
Question: why do we need this method? Any reason why we do not want to use the API structure?
When the API structure changes we will need to make a manual change here, which is not ideal.
The Record has members we may not want or need, like the timestamp type. This was the standard way in other parts of the CLI to extract just the values we want to see. Personally I don't mind either way.
We have key-value output for that (the default), and JSON should just be the output of the API.
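For reference, a sketch of what a mapping helper like this might look like. The `Record` and `kafkaRow` fields shown (key, value, partition, offset) are assumptions for illustration; the real `kafkainstanceclient.Record` carries additional members (such as the timestamp type) that the table output omits:

```go
package main

import "fmt"

// Record is a stand-in for kafkainstanceclient.Record with only the
// fields this sketch needs.
type Record struct {
	Key       *string // key is optional on a record
	Value     string
	Partition int32
	Offset    int64
}

// kafkaRow holds just the values we want the table output to show.
type kafkaRow struct {
	Topic     string
	Key       string
	Value     string
	Partition int32
	Offset    int64
}

func mapRecordsToRows(topic string, records []Record) []kafkaRow {
	rows := make([]kafkaRow, 0, len(records))
	for _, rec := range records {
		row := kafkaRow{Topic: topic, Value: rec.Value, Partition: rec.Partition, Offset: rec.Offset}
		if rec.Key != nil {
			row.Key = *rec.Key
		}
		rows = append(rows, row)
	}
	return rows
}

func main() {
	key := "test"
	rows := mapRecordsToRows("topic-1", []Record{{Key: &key, Value: "v", Partition: 0, Offset: 3}})
	fmt.Printf("%+v\n", rows)
}
```

The trade-off discussed above stands: this decouples the table from the API struct but must be updated by hand whenever the API structure changes.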
|
Left basic comments. Missing info that we are waiting for user input. Missing validation for a non-existent topic. [wtrocki@graphapi app-services-cli (produce-consume)]$ rhoas kafka topic produce --name=test --file connector.json No API error handling present.
|
|
No validation for the --partition flag (value too large).
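A sketch of the kind of validation being asked for, assuming the topic's partition count is already known from the topic metadata; the function name and error message are illustrative, not the rhoas implementation:

```go
package main

import "fmt"

// validatePartition rejects --partition values outside the range
// [0, partitionCount) before any request is sent to the API.
func validatePartition(partition, partitionCount int32) error {
	if partition < 0 || partition >= partitionCount {
		return fmt.Errorf("partition %d is out of range: topic has %d partitions (valid range 0-%d)",
			partition, partitionCount, partitionCount-1)
	}
	return nil
}

func main() {
	fmt.Println(validatePartition(5, 3)) // too large: returns an error
	fmt.Println(validatePartition(0, 3)) // valid: returns nil
}
```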
|
JSON format is unusable. The value needs to be escaped, so it is hard to read. [wtrocki@graphapi app-services-cli (produce-consume)]$ rhoas kafka topic consume --name=test
|
Key-value format seems to be misformatted. Some messages with a key get formatted differently than others: Key: test
|
Some valid cases return 400 for me: |
| @@ -86,7 +86,7 @@ func NewConsumeTopicCommand(f *factory.Factory) *cobra.Command {
| flags.Int32Var(&opts.partition, "partition", 0, f.Localizer.MustLocalize("kafka.topic.consume.flag.partition.description"))
| flags.StringVar(&opts.from, "from", DefaultTimestamp, f.Localizer.MustLocalize("kafka.topic.consume.flag.from.description"))
Wondering if we need two flags, for Unix epoch and an actual date.
I would prefer two flags, for simplicity of documentation and validation.
Not sure how much epoch time is used, so I was thinking of some kind of flag to say that is what you are using.
For example, use --from as normal with either ISO or Unix time, and have another bool flag like --unix-time which tells the CLI how to parse the time; the default would be ISO.
I personally think a flag modifying another flag can be hard to work on and confusing.
I would suggest:
--from-time (ISO 8601 date, short format)
--from-timestamp (epoch/Unix timestamp)
I would love to save this discussion somewhere so we can recall it when building CLI guidelines.
Btw. that is my opinion, and this is an opinionated area, so do not feel that you need to do it this way. It is more about my personal preference to keep the docs clean.
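To make the two-flag suggestion concrete, here is a sketch of how --from-time and --from-timestamp could be resolved into a single start time, with mutual exclusion enforced. Flag names, the `resolveFrom` helper, and the error messages follow the suggestion above but are assumptions, not a final implementation:

```go
package main

import (
	"fmt"
	"time"
)

// resolveFrom returns the effective start time given at most one of the
// two flag values. An empty string / zero value means "flag not set".
func resolveFrom(fromTime string, fromTimestamp int64) (time.Time, error) {
	if fromTime != "" && fromTimestamp != 0 {
		return time.Time{}, fmt.Errorf("--from-time and --from-timestamp are mutually exclusive")
	}
	if fromTimestamp != 0 {
		return time.Unix(fromTimestamp, 0).UTC(), nil
	}
	if fromTime != "" {
		t, err := time.Parse("2006-01-02", fromTime) // ISO 8601 short date
		if err != nil {
			return time.Time{}, fmt.Errorf("invalid --from-time %q: expected YYYY-MM-DD", fromTime)
		}
		return t, nil
	}
	return time.Time{}, nil // neither set: consume from the beginning
}

func main() {
	t, err := resolveFrom("2022-06-17", 0)
	fmt.Println(t, err)
}
```

Because each flag accepts exactly one format, both the validation and the per-flag documentation stay one line each.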
Was not able to replicate this, so I will wait for feedback from others to find the issue.
I think you would need messages on specific partitions to replicate that. |
|
It works now (throws validation error) |
|
Used https://awesomeopensource.com/project/jdorfman/awesome-json-datasets
Nice! |
Force-pushed 6af7dc9 to 847b655
wtrocki left a comment:
We need more examples/docs and we are good to merge
| cmd := &cobra.Command{
| 	Use:   "consume",
| 	Short: f.Localizer.MustLocalize("kafka.topic.consume.cmd.shortDescription"),
| 	Long:  f.Localizer.MustLocalize("kafka.topic.consume.cmd.longDescription"),
Let's add Hidden: true to hide the commands for now.
Force-pushed 847b655 to 1fedfec
Force-pushed c9d8944 to cbdecd3
|
@wtrocki added more examples and made the commands hidden. Shall I squash and merge this now?
|
Every new change means a new review and verification. Doing it now.
| $ rhoas kafka topic consume --name=topic-1 --wait --from=2022-06-17T07:05:34Z

| # Consume from a topic starting from a certain time using unix time format
| $ rhoas kafka topic consume --name=topic-1 --wait --unix-time --from=812762
Missing a jq example etc. All examples use --wait; non-wait examples are missing.
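As a sketch of the kind of jq example being requested: pipe the JSON output of consume into jq to extract record values. The consume invocation and its output flag would follow the PR's conventions; here a literal JSON payload stands in for the consume output so the pipeline is runnable:

```shell
# Hypothetical: rhoas kafka topic consume --name=topic-1 -o json | jq -r '.[].value'
# Substituting a literal payload for the consume output:
echo '[{"key":"k1","value":"v1"},{"key":"k2","value":"v2"}]' | jq -r '.[].value'
```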
|
We need more examples for typical use cases:
|
This PR adds produce and consume commands for a Kafka topic. Flags like partition, key, limit, and timestamp are optional. This is useful for debugging purposes; currently you need a third-party Kafka CLI for this feature.
Verification Steps
Type of change