
Conversation

@2-towns
Contributor

@2-towns 2-towns commented Jul 21, 2025

@2-towns 2-towns force-pushed the feat/abi-encoder branch 2 times, most recently from 7a165a6 to 5580e75 Compare July 31, 2025 07:19
@2-towns 2-towns marked this pull request as ready for review July 31, 2025 14:06
@2-towns
Contributor Author

2-towns commented Aug 1, 2025

I did a benchmark that shows the current implementation outperforms contractabi, especially with objects and large data.
It is slower for small primitives, short strings, and small arrays.
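A micro-benchmark along these lines can be sketched with `std/monotimes`; the `body` closure stands in for whichever encoder call is being measured (this harness is illustrative and is not the benchmark code used in the PR):

```nim
import std/[monotimes, times]

# Minimal timing harness: run `body` `iterations` times and report elapsed time.
proc timeIt(label: string; iterations: int; body: proc() {.closure.}) =
  let start = getMonoTime()
  for _ in 0 ..< iterations:
    body()
  let elapsed = getMonoTime() - start
  echo label, ": ", elapsed.inMicroseconds, " us (", iterations, " iterations)"

var acc = 0
timeIt("baseline", 100_000) do ():
  inc acc                      # replace with the encoder call under test
doAssert acc == 100_000
```

Comparing both encoders under the same harness, with small and large payloads, would make the crossover point visible.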

@arnetheduck
Member

It is slower for small primitives, short strings, and small arrays.

Oh, that's interesting. If I were to guess, there's an extra allocation somewhere, probably made by faststreams.

Contributor

@emizzle emizzle left a comment


Really nice effort, Arnaud! 👏 I'll have to come back around to reviewing the encoding side of things in another review round as it's quite a lot to review in one sitting.

var data: seq[seq[byte]] = @[]
var offset = totalSerializedFields(T) * abiSlotSize

value.enumInstanceSerializedFields(_, fieldValue):
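For context, the offset computed above is where the dynamic "tail" data starts: ABI encoding gives every field a 32-byte head slot, so the tails begin at `fieldCount * 32`. A standalone sketch of that arithmetic (names here are illustrative, not the PR's):

```nim
const abiSlotSize = 32

# Each field occupies one 32-byte head slot; dynamic data ("tails") is
# appended after all heads, so the first tail starts at fieldCount * 32.
proc firstTailOffset(fieldCount: int): int =
  fieldCount * abiSlotSize

doAssert firstTailOffset(3) == 96   # a 3-field value: tails start at byte 96
```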
Member


since we know the size of the non-dynamic part of the encoding, we can use https://github.com/status-im/nim-faststreams/blob/a9c6b884b0971c03e2251094ac88d1ecbc2c20bb/faststreams/outputs.nim#L380 to reserve a write area for it and write the data directly to the stream, avoiding the need for var data

See https://github.com/status-im/nim-faststreams/blob/master/tests/test_outputs.nim#L474 for an example
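The idea, roughly: reserve the fixed-size head area up front, stream the dynamic tails directly, then fill the reserved area afterwards. The sketch below assumes the cursor API from memory (`delayFixedSizeWrite` returning a write cursor, `finalWrite` to fill it); consult the linked `test_outputs.nim` for the authoritative signatures.

```nim
import faststreams/outputs

# Sketch only: cursor calls are assumed, not verified against the
# current faststreams API.
proc encodeWithReservedHead(s: OutputStream, head: array[32, byte],
                            tail: openArray[byte]) =
  var cursor = delayFixedSizeWrite(s, 32)  # reserve 32 bytes for the head
  s.write(tail)                            # stream the tail directly, no temp seq
  cursor.finalWrite(head)                  # fill the reserved area afterwards
```

This avoids accumulating each field's encoding in an intermediate `seq[seq[byte]]`.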

Contributor Author


Okay done in my last commit 763a399

Member


nice! I'm curious if it helped the benchmark

@arnetheduck
Member

lgtm and certainly a big improvement over the status quo!

  • there are a few efficiency improvements that can be made
  • int/uint, i.e. architecture-specific integers, should not be allowed anywhere
  • range support is problematic for the same reason: range[0..10] is, I think, architecture-specific since the range bounds are int. One would have to disallow that explicitly with something like when range.low is int|uint: error...

Fine with merging and opening issues for the above, or fixing them in this PR directly
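The compile-time guard suggested above could look like this (illustrative sketch with assumed names, not the PR's code):

```nim
proc abiEncode[T](v: T): seq[byte] =
  # Reject architecture-specific integers at instantiation time.
  when T is int or T is uint:
    {.error: "int/uint are architecture-specific; use a sized type such as int64/uint64".}
  # actual encoding elided in this sketch
  result = @[]

discard abiEncode(42'u64)   # compiles: uint64 has a fixed width
# discard abiEncode(42)     # would fail at compile time: 42 is `int`
```

A similar `when` check on a range type's bounds could reject `range[0..10]`, whose low/high are `int`.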

@2-towns
Contributor Author

2-towns commented Aug 15, 2025

lgtm and certainly a big improvement over the status quo!

  • there are a few efficiency improvements that can be made
  • int/uint, i.e. architecture-specific integers, should not be allowed anywhere
  • range support is problematic for the same reason: range[0..10] is, I think, architecture-specific since the range bounds are int. One would have to disallow that explicitly with something like when range.low is int|uint: error...

Fine with merging and opening issues for the above, or fixing them in this PR directly

I prefer to fix them in the PR directly

@2-towns
Contributor Author

2-towns commented Aug 22, 2025

Okay, I fixed everything except the delayFixedSizeWrite, which is a bit more complicated to implement. I suggest opening an issue for that, as you suggested, @arnetheduck. Waiting for your validation.

@@ -0,0 +1,437 @@
import
std/typetraits,
Member


odd indent

decoder.input.advance(offsets[i].int - pos)
result[i] = decoder.decode(T)

return result
Member


if result is used, we should not use return; alternatively, use a separate variable and an expression return

resultObj = decoder.decode(T)

decoder.finish()
result = resultObj
Member


Suggested change:
- result = resultObj
+ resultObj

broadly, we encourage expression return: https://status-im.github.io/nim-style-guide/language.result.html
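The style-guide point in a nutshell: the last expression of a proc is its return value, so neither `return` nor an explicit `result` assignment is needed.

```nim
# `result` + `return` style (discouraged):
proc doubledA(x: int): int =
  result = x * 2
  return result

# Expression-return style (preferred by the style guide):
proc doubledB(x: int): int =
  x * 2

doAssert doubledA(21) == doubledB(21)
```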

@arnetheduck
Member

LGTM! Let's get status-im/nimbus-eth2#7376 bumped to these latest changes with the delayed writes and merge as soon as that is green.

@2-towns
Contributor Author

2-towns commented Aug 28, 2025

LGTM! Let's get status-im/nimbus-eth2#7376 bumped to these latest changes with the delayed writes and merge as soon as that is green.

Okay, I am going to merge it; the nimbus PR is green (except Lint, because this PR is not on master yet): https://github.com/status-im/nimbus-eth2/actions/runs/17289414384/job/49073028914.

@2-towns 2-towns merged commit 48fb2d4 into master Aug 28, 2025
20 checks passed
@2-towns 2-towns deleted the feat/abi-encoder branch August 28, 2025 12:51
arnetheduck added a commit that referenced this pull request Sep 25, 2025
* Updates to support the new json_serialization streaming
* New ABI encoder/decoder
(#216)
@arnetheduck arnetheduck mentioned this pull request Sep 25, 2025
arnetheduck added a commit that referenced this pull request Sep 25, 2025
* Updates to support the new json_serialization streaming
* New ABI encoder/decoder
(#216)
git-ravenbin added a commit to git-ravenbin/nim-web3 that referenced this pull request Oct 26, 2025
* Updates to support the new json_serialization streaming
* New ABI encoder/decoder
(status-im/nim-web3#216)


Successfully merging this pull request may close these issues.

Encoding FixedBytes uses left-padding instead of right-padding
