infinite recursion of to_vec for wrapped arrays #141
Similarly, for a `BlockDiagonal`:
julia> bm = BlockDiagonal([ones(2, 2), ones(3, 3)])
5×5 BlockDiagonal{Float64,Array{Float64,2}}:
1.0 1.0 0.0 0.0 0.0
1.0 1.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
julia> to_vec(bm)
ERROR: StackOverflowError:
Stacktrace:
[1] Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}(::Int64) at ./multinverses.jl:52
[2] SignedMultiplicativeInverse at ./multinverses.jl:89 [inlined]
[3] map at ./tuple.jl:157 [inlined]
[4] __reshape at ./reshapedarray.jl:192 [inlined]
[5] _reshape(::BlockDiagonal{Float64,Array{Float64,2}}, ::Tuple{Int64}) at ./reshapedarray.jl:177
[6] reshape at ./reshapedarray.jl:112 [inlined]
[7] reshape at ./reshapedarray.jl:116 [inlined]
[8] vec at ./abstractarraymath.jl:41 [inlined]
[9] to_vec(::BlockDiagonal{Float64,Array{Float64,2}}) at /Users/mzgubic/JuliaEnvs/BlockDiagonals.jl/dev/FiniteDifferences/src/to_vec.jl:57
[10] to_vec(::Base.ReshapedArray{Float64,1,BlockDiagonal{Float64,Array{Float64,2}},Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}) at /Users/mzgubic/JuliaEnvs/BlockDiagonals.jl/dev/FiniteDifferences/src/to_vec.jl:69
... (the last 2 lines are repeated 39989 more times)
[79989] to_vec(::BlockDiagonal{Float64,Array{Float64,2}}) at /Users/mzgubic/JuliaEnvs/BlockDiagonals.jl/dev/FiniteDifferences/src/to_vec.jl:57
AFAICT this is happening because the `AbstractArray` method of `to_vec` calls `vec(x)`: for a wrapper type like `BlockDiagonal`, `vec` returns a `Base.ReshapedArray` around the original array, and `to_vec` of that `ReshapedArray` recurses straight back into `to_vec` of the parent, as the repeated pair of frames in the stack trace shows.

A straightforward workaround would be to explicitly define `to_vec` for the wrapper type. The obvious way to fix this in general is to not define an `AbstractArray` fallback that round-trips through `reshape`.
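For example, a minimal sketch of such an explicit definition, assuming one is willing to let the returned reconstruction closure rebuild a dense `Matrix` rather than a `BlockDiagonal` (this mirrors the `OneElement` workaround shown below in this thread):

```julia
using FiniteDifferences, BlockDiagonals

# Hypothetical workaround: materialize the wrapper to a dense Matrix first,
# so vec never produces a ReshapedArray wrapping the BlockDiagonal and the
# recursion cannot start. Caveat: from_vec will rebuild a Matrix, not a
# BlockDiagonal.
FiniteDifferences.to_vec(x::BlockDiagonal) = FiniteDifferences.to_vec(Matrix(x))
```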
I think simply adding a
I think this has been fixed by #156; at least the BlockDiagonals issue was fixed in JuliaArrays/BlockDiagonals.jl#69.
This has not been fixed; see this MWE with FillArrays:
julia> using FiniteDifferences, FillArrays
julia> x = rand(3, 4)
3×4 Matrix{Float64}:
0.931495 0.746732 0.416411 0.342526
0.317067 0.497913 0.21896 0.11197
0.230811 0.619113 0.00301425 0.329961
julia> y = OneElement(3.14, (2, 3), axes(x))
3×4 OneElement{Float64, 2, Tuple{Int64, Int64}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}:
⋅ ⋅ ⋅ ⋅
⋅ ⋅ 3.14 ⋅
⋅ ⋅ ⋅ ⋅
julia> jvp(central_fdm(5, 1), sum, (x, y))
ERROR: DimensionMismatch: arrays could not be broadcast to a common size; got a dimension with lengths 12 and 5
Stacktrace:
[1] _bcs1
@ ./broadcast.jl:555 [inlined]
[2] _bcs
@ ./broadcast.jl:549 [inlined]
[3] broadcast_shape
@ ./broadcast.jl:543 [inlined]
[4] combine_axes
@ ./broadcast.jl:524 [inlined]
[5] instantiate
@ ./broadcast.jl:306 [inlined]
[6] materialize
@ ./broadcast.jl:903 [inlined]
[7] (::FiniteDifferences.var"#85#86"{FiniteDifferences.var"#87#88"{typeof(sum), FiniteDifferences.var"#Array_from_vec#34"{Matrix{Float64}, typeof(identity)}}, Vector{Float64}, Vector{Float64}})(ε::Float64)
@ FiniteDifferences ~/.julia/packages/FiniteDifferences/zWRHl/src/grad.jl:48
[8] newf
@ ~/.julia/packages/StaticArrays/EHHaF/src/broadcast.jl:186 [inlined]
[9] macro expansion
@ ~/.julia/packages/StaticArrays/EHHaF/src/broadcast.jl:135 [inlined]
[10] __broadcast
@ ~/.julia/packages/StaticArrays/EHHaF/src/broadcast.jl:123 [inlined]
[11] _broadcast
@ ~/.julia/packages/StaticArrays/EHHaF/src/broadcast.jl:119 [inlined]
[12] copy
@ ~/.julia/packages/StaticArrays/EHHaF/src/broadcast.jl:60 [inlined]
[13] materialize
@ ./broadcast.jl:903 [inlined]
[14] _eval_function(m::FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}, f::FiniteDifferences.var"#85#86"{FiniteDifferences.var"#87#88"{typeof(sum), FiniteDifferences.var"#Array_from_vec#34"{…}}, Vector{Float64}, Vector{Float64}}, x::Float64, step::Float64)
@ FiniteDifferences ~/.julia/packages/FiniteDifferences/zWRHl/src/methods.jl:249
[15] _estimate_magnitudes(m::FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}, f::FiniteDifferences.var"#85#86"{FiniteDifferences.var"#87#88"{typeof(sum), FiniteDifferences.var"#Array_from_vec#34"{…}}, Vector{Float64}, Vector{Float64}}, x::Float64)
@ FiniteDifferences ~/.julia/packages/FiniteDifferences/zWRHl/src/methods.jl:378
[16] estimate_step(m::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{…}}, f::FiniteDifferences.var"#85#86"{FiniteDifferences.var"#87#88"{…}, Vector{…}, Vector{…}}, x::Float64)
@ FiniteDifferences ~/.julia/packages/FiniteDifferences/zWRHl/src/methods.jl:365
[17] AdaptedFiniteDifferenceMethod
@ ~/.julia/packages/FiniteDifferences/zWRHl/src/methods.jl:193 [inlined]
[18] _jvp
@ ~/.julia/packages/FiniteDifferences/zWRHl/src/grad.jl:48 [inlined]
[19] jvp(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::typeof(sum), ::Tuple{Matrix{Float64}, OneElement{Float64, 2, Tuple{Int64, Int64}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}})
@ FiniteDifferences ~/.julia/packages/FiniteDifferences/zWRHl/src/grad.jl:60
[20] top-level scope
@ REPL[6]:1
Some type information was truncated. Use `show(err)` to see complete types.
julia> FiniteDifferences.to_vec(y::OneElement) = FiniteDifferences.to_vec(collect(y))
julia> jvp(central_fdm(5, 1), sum, (x, y))
3.1400000000000525
I didn't look at the implementation details of `to_vec`, but I wanted to report a bug we encountered in FluxML/NNlib.jl#272 when applying `to_vec` to a custom array wrapper. We worked around the issue by defining:
Could this be used as a generic fallback?
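A hedged sketch of what such a generic fallback might look like, assuming `to_vec` keeps its usual contract of returning a flat vector together with a reconstruction closure (the function name and closure here are illustrative, not FiniteDifferences' actual internals):

```julia
# Hypothetical generic fallback for wrapped arrays: collect to a plain Array
# before vectorizing, avoiding the vec -> ReshapedArray -> to_vec cycle.
# Caveat: from_vec then reconstructs a dense Array, not the original wrapper.
function to_vec_fallback(x::AbstractArray)
    x_dense = collect(x)           # materialize the wrapper as a plain Array
    x_vec = vec(x_dense)           # plain Vector, no ReshapedArray involved
    from_vec(v) = reshape(v, size(x_dense))
    return x_vec, from_vec
end
```

Whether this is safe as a default depends on whether callers rely on `from_vec` returning the original wrapper type rather than a dense `Array`.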