RFC: More extrapolation behaviors #50
First new file (`+138` lines). The review thread below discusses its docstrings; the file's contents follow.
Member: The …

Contributor (Author): I had an implementation that didn't use dispatch so much but rather had a bunch of (read: a ton of) compile-time if-clauses. Would that be easier to understand, do you think?

Member: I could see either one. The bigger issue is this: presumably these are non-exported functions. I presume you've written help for them as encouragement for others to contribute to the development of Interpolations, which I definitely support. However, these are useful only if there's a little bit of context provided somewhere to aid in understanding the overarching purpose of these methods. In other words, "1-dimensional, same lo/hi schemes" (as the first lines of text in the file) is nearly useless on its own.

Contributor (Author): I now removed all the one-liners that explained each method separately (I figure that understanding can be reconstructed with …

Member: 👍 Very nice indeed!
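For illustration, the kind of overarching context being requested might look something like the following file-level note. The wording is hypothetical (not from the PR) and only restates what the methods below appear to do, namely normalize the user-facing scheme specification and expand it into per-dimension, per-side `extrap_prep` calls.

```julia
# Hypothetical example of a file-level overview (not part of the PR):
"""
The `extrap_prep` methods in this file take the user-facing extrapolation
specification - a single scheme type `T`, a tuple with one entry per dimension,
or a `(lo, hi)` tuple per dimension - and expand it at code-generation time
into one expression per dimension and boundary side, by dispatching down to
`extrap_prep(T, Val{N}(), Val{d}(), Val{:lo}())` / `Val{:hi}()`.
"""
```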
```julia
"""
`extrap_prep(T, Val{1}())`

1-dimensional, same lo/hi schemes
"""
extrap_prep{T}(::Type{T}, n::Val{1}) = extrap_prep(T, n, Val{1}())

"""
`extrap_prep(Tuple{T}, Val{1}())`

1-dimensional, same lo/hi schemes but specified as if N-dimensional

Equivalent with `extrap_prep(T, Val{1}())`
"""
extrap_prep{T}(::Type{Tuple{T}}, n::Val{1}) = extrap_prep(T, n)

"""
`extrap_prep(Tuple{T,T}, Val{1}())`

1-dimensional, same lo/hi schemes but specified as tuple

Equivalent with `extrap_prep(T, Val{1}())`
"""
extrap_prep{T}(::Type{Tuple{T,T}}, n::Val{1}) = extrap_prep(T, n)

"""
`extrap_prep(Tuple{Tuple{T,T}}, Val{1}())`

1-dimensional, same lo/hi schemes but specified as tuple and as if N-dimensional

Equivalent with `extrap_prep(T, Val{1}())`
"""
extrap_prep{T}(::Type{Tuple{Tuple{T,T}}}, n::Val{1}) = extrap_prep(T, n)

"""
`extrap_prep(Tuple{S,T}, Val{1}())`

1-dimensional, different lo/hi schemes
"""
function extrap_prep{S,T}(::Type{Tuple{S,T}}, n::Val{1})
    quote
        $(extrap_prep(S, n, Val{1}(), Val{:lo}()))
        $(extrap_prep(T, n, Val{1}(), Val{:hi}()))
    end
end

"""
`extrap_prep(Tuple{Tuple{S,T}}, Val{1}())`

1-dimensional, different lo/hi schemes but specified as if N-dimensional

Equivalent with `extrap_prep(Tuple{S,T}, Val{1}())`
"""
extrap_prep{S,T}(::Type{Tuple{Tuple{S,T}}}, n::Val{1}) = extrap_prep(Tuple{S,T}, n)

"""
`extrap_prep(Tuple{...}, Val{1}())`

1-dimensional, but incorrect tuple spec
""" # needed for ambiguity resolution
extrap_prep{T<:Tuple}(::Type{T}, ::Val{1}) = :(throw(ArgumentError("The 1-dimensional extrap configuration $T is not supported")))

"""
`extrap_prep(T, Val{N}())`

N-dimensional, all schemes same
"""
function extrap_prep{T,N}(::Type{T}, n::Val{N})
    exprs = Expr[]
    for d in 1:N
        push!(exprs, extrap_prep(T, n, Val{d}()))
    end
    return Expr(:block, exprs...)
end

"""
`extrap_prep(Tuple{...}, Val{N}())`

N-dimensional, different specs in different dimensions.

Note that the tuple must be of length N.
"""
function extrap_prep{N,T<:Tuple}(::Type{T}, n::Val{N})
    length(T.parameters) == N || return :(throw(ArgumentError("The $N-dimensional extrap configuration $T is not supported - must be a tuple of length $N (was length $(length(T.parameters)))")))
    exprs = Expr[]
    for d in 1:N
        Tdim = T.parameters[d]
        if Tdim <: Tuple
            length(Tdim.parameters) == 2 || return :(throw(ArgumentError("The extrap configuration $Tdim for dimension $d is not supported - must be a tuple of length 2 or a simple configuration type")))
            if Tdim.parameters[1] != Tdim.parameters[2]
                push!(exprs, extrap_prep(Tdim, n, Val{d}()))
            else
                push!(exprs, extrap_prep(Tdim.parameters[1], n, Val{d}()))
            end
        else
            push!(exprs, extrap_prep(Tdim, n, Val{d}()))
        end
    end
    return Expr(:block, exprs...)
end

"""
`extrap_prep(T, Val{N}(), Val{d}())`

N-dimensional fallback for expanding the same scheme lo/hi in a single dimension
"""
function extrap_prep{T,N,d}(::Type{T}, n::Val{N}, dim::Val{d})
    quote
        $(extrap_prep(T, n, dim, Val{:lo}()))
        $(extrap_prep(T, n, dim, Val{:hi}()))
    end
end

"""
`extrap_prep(Tuple{S,T}, Val{N}(), Val{d}())`

N-dimensional fallback for expanding the different schemes lo/hi in a single dimension
"""
function extrap_prep{S,T,N,d}(::Type{Tuple{S,T}}, n::Val{N}, dim::Val{d})
    quote
        $(extrap_prep(S, n, dim, Val{:lo}()))
        $(extrap_prep(T, n, dim, Val{:hi}()))
    end
end

"""
`extrap_prep(Tuple{T,T}, Val{N}(), Val{d}())`

N-dimensional fallback for expanding the same schemes lo/hi in a single dimension

Equivalent with `extrap_prep(T, Val{N}(), Val{d}())`
"""
function extrap_prep{T,N,d}(::Type{Tuple{T,T}}, n::Val{N}, dim::Val{d})
    quote
        $(extrap_prep(T, n, dim, Val{:lo}()))
        $(extrap_prep(T, n, dim, Val{:hi}()))
    end
end
```
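These helpers rely on two ingredients: dispatch on `Val` types to select behavior at code-generation time, and `quote`/`$(...)` splicing plus `Expr(:block, ...)` to assemble one expression per dimension into a single body. The following minimal, self-contained sketch illustrates that pattern with a hypothetical `demo_prep` (not Interpolations' actual API), written with `where` clauses rather than the older `f{T}(x)` syntax used in the diff:

```julia
# Minimal sketch of the Val-dispatch + expression-splicing pattern
# (hypothetical `demo_prep`; not the package's actual API).
demo_prep(::Type{T}, ::Val{N}, ::Val{d}, ::Val{side}) where {T,N,d,side} =
    :(println("dim ", $d, ", side ", $(QuoteNode(side)), ": apply ", $T))

# Expand one dimension into lo/hi handling, mirroring the per-dimension fallbacks above.
function demo_prep(::Type{T}, n::Val{N}, dim::Val{d}) where {T,N,d}
    quote
        $(demo_prep(T, n, dim, Val{:lo}()))
        $(demo_prep(T, n, dim, Val{:hi}()))
    end
end

# Expand all N dimensions into a single block, mirroring the N-dimensional entry point.
function demo_prep(::Type{T}, n::Val{N}) where {T,N}
    exprs = [demo_prep(T, n, Val{d}()) for d in 1:N]
    return Expr(:block, exprs...)
end

demo_prep(Float64, Val{2}())   # returns an Expr covering dims 1 and 2, lo and hi sides
```

The returned expression is the kind of thing one would splice into the body of a generated function, which is why these helpers build `Expr` objects instead of performing the work directly.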
Second new file (`+115` lines), the gradient counterparts of the methods above:
| """ | ||
| `extrap_prep(Val{:gradient}(), T, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, same lo/hi schemes | ||
| """ | ||
| extrap_prep{T}(g::Val{:gradient}, ::Type{T}, n::Val{1}) = extrap_prep(g, T, n, Val{1}()) | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{T}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, same lo/hi schemes but specified as if N-dimensional | ||
|
|
||
| Equivalent with `extrap_prep(Val{:gradient}(), T, Val{1}())` | ||
| """ | ||
| extrap_prep{T}(g::Val{:gradient}, ::Type{Tuple{T}}, n::Val{1}) = extrap_prep(g, T, n) | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{T,T}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, same lo/hi schemes, but specified as tuple | ||
|
|
||
| Equivalent with `extrap_prep(Val{:gradient}(), T, Val{1}())` | ||
| """ | ||
| extrap_prep{T}(g::Val{:gradient}, ::Type{Tuple{T,T}}, n::Val{1}) = extrap_prep(g, T, n) | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{Tuple{T}}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, same lo/hi schemes, but specified as tuple and N-dimensional | ||
|
|
||
| Equivalent with `extrap_prep(Val{:gradient}(), T, Val{1}())` | ||
| """ | ||
| extrap_prep{T}(g::Val{:gradient}, ::Type{Tuple{Tuple{T,T}}}, n::Val{1}) = extrap_prep(g, T, n) | ||
|
|
||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{S,T}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, different lo/hi schemes | ||
| """ | ||
| function extrap_prep{S,T}(g::Val{:gradient}, ::Type{Tuple{S,T}}, ::Val{1}) | ||
| quote | ||
| $(extrap_prep(g, S, n, Val{1}(), Val{:lo}())) | ||
| $(extrap_prep(g, T, n, Val{1}(), Val{:hi}())) | ||
| end | ||
| end | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{Tuple{S,T}}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, different lo/hi schemes but specified as if N-dimensional | ||
|
|
||
| Equivalent with `extrap_prep(Val{:gradient}(), Tuple{S,T}, Val{1}())` | ||
| """ | ||
| extrap_prep{S,T}(g::Val{:gradient}, ::Type{Tuple{Tuple{S,T}}}, n::Val{1}) = extrap_prep(g, Tuple{S,T}, n) | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), Tuple{...}, Val{1}())` | ||
|
|
||
| 1-dimensional for gradient, but incorrect tuple spec | ||
| """ # needed for ambiguity resolution | ||
| extrap_prep{T<:Tuple}(::Val{:gradient}, ::Type{T}, ::Val{1}) = :(throw(ArgumentError("The 1-dimensional extrap configuration $T is not supported"))) | ||
|
|
||
|
|
||
| function extrap_prep{T,N}(g::Val{:gradient}, ::Type{T}, n::Val{N}) | ||
| Expr(:block, [extrap_prep(g, T, n, Val{d}()) for d in 1:N]...) | ||
| end | ||
|
|
||
| function extrap_prep{T<:Tuple,N}(g::Val{:gradient}, ::Type{T}, n::Val{N}) | ||
| length(T.parameters) == N || return :(throw(ArgumentError("The $N-dimensional extrap configuration $T is not supported"))) | ||
| exprs = Expr[] | ||
| for d in 1:N | ||
| Tdim = T.parameters[d] | ||
| if Tdim <: Tuple | ||
| length(Tdim.parameters) == 2 || return :(throw(ArgumentError("The extrap configuration $Tdim for dimension $d is not supported - must be a tuple of length 2 or a simple configuration type")) | ||
| ) | ||
| if Tdim.parameters[1] != Tdim.parameters[2] | ||
| push!(exprs, extrap_prep(g, Tdim, n, Val{d}())) | ||
| else | ||
| push!(exprs, extrap_prep(g, Tdim.parameters[1], n, Val{d}())) | ||
| end | ||
| else | ||
| push!(exprs, extrap_prep(g, Tdim, n, Val{d}())) | ||
| end | ||
| end | ||
| return Expr(:block, exprs...) | ||
| end | ||
|
|
||
| function extrap_prep{S,T,N,d}(g::Val{:gradient}, ::Type{Tuple{S,T}}, n::Val{N}, dim::Val{d}) | ||
| quote | ||
| $(extrap_prep(g, S, n, dim, Val{:lo}())) | ||
| $(extrap_prep(g, T, n, dim, Val{:hi}())) | ||
| end | ||
| end | ||
|
|
||
| function extrap_prep{T,N,d}(g::Val{:gradient}, ::Type{Tuple{T,T}}, n::Val{N}, dim::Val{d}) | ||
| quote | ||
| $(extrap_prep(g, T, n, dim, Val{:lo}())) | ||
| $(extrap_prep(g, T, n, dim, Val{:hi}())) | ||
| end | ||
| end | ||
|
|
||
| function extrap_prep{T,N,d}(g::Val{:gradient}, ::Type{T}, n::Val{N}, dim::Val{d}) | ||
| quote | ||
| $(extrap_prep(g, T, n, dim, Val{:lo}())) | ||
| $(extrap_prep(g, T, n, dim, Val{:hi}())) | ||
| end | ||
| end | ||
|
|
||
| """ | ||
| `extrap_prep(Val{:gradient}(), args...)` | ||
|
|
||
| Fallback that defaults to the same behavior as for value extrapolation (works e.g. for all schemes that are coordinate transformations). | ||
| """ | ||
| extrap_prep(g::Val{:gradient}, args...) = extrap_prep(args...) |
Member: Might need to insert a `$(Expr(:meta, :inline))` here if you don't want to pay a splatting penalty. (Worth testing.)
Contributor (Author): Definitely worth testing. I'm not sure if I did this correctly, so I'm happy for any comments on the following benchmark (mostly whether this would be the correct way to implement the inlining):

Repeating the final two incantations gives results somewhere in the range 0.003 to 0.005 seconds for both functions (I'd have to start collecting larger samples and look at the distributions of timing results to see a difference). If my benchmarking strategy is valid, I think it's safe to assume that splatting will be negligible compared to array allocation.
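The benchmark snippet itself is not preserved in this extract. A rough reconstruction of the kind of comparison being described - a generated function that splices `$(Expr(:meta, :inline))` into its body versus one that doesn't, with a deliberately allocation-heavy `bar` as in the real use case - might look like the sketch below. The names `foo_inline` and `foo_plain` are hypothetical; only `bar` is mentioned in the thread.

```julia
# Hedged reconstruction of the shape of the benchmark (not the original code).
bar(args...) = sum(abs2, collect(args))          # "expensive" body: allocates an array

@generated function foo_inline(args...)
    quote
        $(Expr(:meta, :inline))                  # the hint discussed in the comment above
        bar(args...)
    end
end

@generated function foo_plain(args...)
    :(bar(args...))
end

x = ntuple(i -> rand(), 4)
foo_inline(x...); foo_plain(x...)                # compile first
@time for _ in 1:10^5; foo_inline(x...); end
@time for _ in 1:10^5; foo_plain(x...); end
```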
Member: For a test this simple, `bar` is being inlined automatically: you can see the absence of a `top(call)(:bar, ...` expression. You can fix that by adding `@noinline` in front of the definition of `bar`.

That said, there are several other non-ideal aspects of this test, including (1) you're not generating a version that elides the splat in the call to `bar`, and (2) you're doing something really expensive in your test which can mask the impact of splatting. Here's a better one:

Results:

As you can see, the difference is not small 😄. I was half-expecting `call_bar2a` to be fast, so even call-site splatting is deadly. In this case, eliding the splatting in `call_bar*` is totally unimportant because it's only called once; the analog of what I was suggesting with the `:meta` expression are the `@noinline`/`@inline` statements in front of `bar1` and `bar2`.
Member: As a further general point, avoid deliberate allocations when you're benchmarking stuff. That way any allocations that happen because of type instability, splatting, etc., show up very clearly.
Contributor (Author): Ah, thanks a lot! As always, you don't only help me write the code I need, you also help me understand why I need it :)

The main reason I didn't write a benchmark without the array allocation is that the actual use case has it, so I wanted to see which effect was important. But the trick with `xargs` seems much cleaner anyway, so I'll go with that and be happy.