Incorrect behavior in Array for ITensor with no nonzero QN blocks

I have noticed that Array behaves incorrectly when called on an ITensor that has quantum numbers but no nonzero blocks. It should return an array of all zeros, but instead it returns garbage. Here is an example:

using ITensors

sites = siteinds("Electron", 2; conserve_qns=true)

# B is a one-hot state on the dagged site-1 index, in a different slot than A,
# so the contraction C = A*B is identically zero: an ITensor with no nonzero blocks.
A = onehot(sites[1]=>1) * onehot(sites[2]=>1)
B = onehot(dag(sites[1])=>2)
C = A*B
@show C
@show norm(C)
v = Array(C, sites[2])
@show v

The output:

C = ITensor ord=1
Dim 1: (dim=4|id=589|"Electron,Site,n=2") <Out>
 1: QN(("Nf",0,-1),("Sz",0)) => 1
 2: QN(("Nf",1,-1),("Sz",1)) => 1
 3: QN(("Nf",1,-1),("Sz",-1)) => 1
 4: QN(("Nf",2,-1),("Sz",0)) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 1}
 4-element

norm(C) = 0.0
v = [1.0e-323, 0.0, 0.0, 0.0]

Presumably the 1.0e-323 is just undefined bits, which might sometimes be a number that is not so small. I noticed this on a more complex example (e.g. with two indices in the array) but I don’t know all the conditions where this can occur.

Interesting, thanks for the report, looks like a bug. We’ll take a look.

@mtfishman I have worked out what's going on, and it has to do with C being an empty but still iterable tensor:

NDTensors.data(C) = Float64[]
C[1] = 0.0

From what I understand, the code in Julia Base constructs an Array with out = Array{ElT, N}(undef, size(C)) and then iterates through the elements of C, copying them into the elements of out. I was only able to determine this using macros like @which and @code_llvm, because our Array code calls Base Julia code:

@which Array{Float64, 1}(tensor(C))
Array{T, N}(x::AbstractArray{S, N}) where {T, N, S}
     @ Base array.jl:673

If we call data(C) instead of tensor(C), then the output would be:

Array{Float64, 1}(data(C))
Float64[]

Also, this is consistent with running the commands inline:

julia> v = Array{Float64, 1}(undef, 4)
4-element Vector{Float64}:
 1.012e-320
 0.0
 4.0e-323
 5.0e-324

julia> v .+ 0
4-element Vector{Float64}:
 1.012e-320
 0.0
 4.0e-323
 5.0e-324

I tracked down the issue.

That conversion code is calling this generic Array conversion definition in Base Julia:
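In simplified form it looks something like the following (the actual definition in Base's array.jl goes through an internal copy helper, but the effect is the same):

Array{T, N}(x::AbstractArray{S, N}) where {T, N, S} =
    copyto!(Array{T, N}(undef, size(x)), x)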

which creates an output Array with undefined memory and then calls this generic copyto! function:
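A simplified sketch of that copyto! method (the real Base code also handles index styles and aliasing):

function copyto!(dest::AbstractArray, src::AbstractArray)
    # Early return: if the source is "empty", nothing is copied and dest
    # keeps its possibly uninitialized contents.
    isempty(src) && return dest
    # ... otherwise copy each element of src into the corresponding element of dest ...
end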

Tensor objects have a generic definition of isempty here:
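At the time it was roughly equivalent to:

isempty(T::Tensor) = isempty(storage(T))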

which checks if the storage is empty. In this case, since there are no blocks in the block sparse array, isempty returns true, so copyto! exits without copying any data over because of this line:
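That is the early-return line sketched above, i.e. something like:

isempty(src) && return dest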

Long story short, this is caused by an incorrect definition of isempty for Tensor objects: they should only be considered empty if the product of their dimensions is zero, not based on how much data is stored. I'll make a quick PR to fix it; I think we can just remove that definition of isempty(::Tensor) and rely on the generic Julia definition.
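For reference, the generic Julia definition for arrays is based on the length of the container, so with it a block sparse Tensor with nonzero dimensions but no stored blocks is not considered empty. Roughly:

# Generic Base definition (approximately):
isempty(a::AbstractArray) = length(a) == 0

# With this, for the example above, length(tensor(C)) == 4, so
# isempty(tensor(C)) is false and copyto! writes the zero-valued
# elements into the output array.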

Fixed by ITensor/ITensors.jl#1401 ([NDTensors] Fix `isempty(::Tensor)`), hopefully no code was relying on the old incorrect definition.


@srwhite this should be fixed in the latest NDTensors, v0.3.2; please update your packages to try it out.
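With that release, re-running the original example should give an all-zero array rather than uninitialized values, i.e. something like:

v = Array(C, sites[2])
@show v   # expected: v = [0.0, 0.0, 0.0, 0.0]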
