Problem with addition and contraction of QN-preserving tensors with `delta`

Hi everyone,

I encountered some errors with addition and multiplication of seemingly innocent tensors.
I created in julia a simple ITensor A with QN-perserving indices, and tried to calculate the sum- A*A' + delta([i,j], [i,j`]):


using ITensors

# initializing
i = Index([QN(0) => 2, QN(1) => 3], "i")
j = Index([QN(0) => 2, QN(1) => 3], "j")
A = randomITensor(i, j)

A_squared = dag(A)*A'
A_delta = delta([dag(i), dag(j)], [i', j'])
# Here A_squared and A_delta have the same indices in the same order, the same flux, etc.

# trying to sum them together
@show A_squared + A_delta

At this point I got the following error message:

ERROR: perutedims!(output_tensor::Tensor, tensor::Tensor, perm, f::Function=(r, t) -> t) not implemented for typeof(output_tensor) = NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, typeof(tensor) = NDTensors.DiagBlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 4}}, perm = (1, 2, 3, 4), and f = #281.
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] permutedims!(output_tensor::NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, tensor::NDTensors.DiagBlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 4}}, perm::NTuple{4, Int64}, f::Function)
@ NDTensors ~/.julia/packages/NDTensors/oXCSC/src/tensoralgebra/generic_tensor_operations.jl:20
[3] permutedims!!(output_tensor::NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, tensor::NDTensors.DiagBlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 4}}, perm::NTuple{4, Int64}, f::Function)
@ NDTensors ~/.julia/packages/NDTensors/oXCSC/src/tensoralgebra/generic_tensor_operations.jl:14
[4] _map!!(f::Function, R::NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, T1::NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, T2::NDTensors.DiagBlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 4}})
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/itensor.jl:1921
[5] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/itensor.jl:1926
[6] copyto!
@ ~/.julia/packages/ITensors/HjjU3/src/broadcast.jl:301 [inlined]
[7] materialize!
@ ./broadcast.jl:884 [inlined]
[8] materialize!
@ ./broadcast.jl:881 [inlined]
[9] _add(A::NDTensors.BlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, B::NDTensors.DiagBlockSparseTensor{Float64, 4, NTuple{4, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 4}})
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/itensor.jl:1822
[10] +(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/itensor.jl:1830
[11] top-level scope
@ REPL[267]:1

I don’t know what went wrong. Notice that the permutation to perform is (1, 2, 3, 4), meaning everything is already in order.

The other problem I encountered with delta is when I tried to contract two of them together:

# as before:
i = Index([QN(0) => 2, QN(1) => 3], "i")
j = Index([QN(0) => 2, QN(1) => 3], "j")

@show delta(i,i')*delta(j,j')

Here I get another angry error message:

ERROR: UndefVarError: IndsR not defined
Stacktrace:
[1] contraction_output_type(TensorT1::Type{NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}}, TensorT2::Type{NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}}, indsR::NTuple{4, Index{Vector{Pair{QN, Int64}}}})
@ NDTensors ~/.julia/packages/NDTensors/oXCSC/src/blocksparse/diagblocksparse.jl:278
[2] zero_contraction_output
@ ~/.julia/packages/NDTensors/oXCSC/src/tensor/tensor.jl:389 [inlined]
[3] contraction_output
@ ~/.julia/packages/NDTensors/oXCSC/src/blocksparse/diagblocksparse.jl:295 [inlined]
[4] contraction_output
@ ~/.julia/packages/NDTensors/oXCSC/src/tensoralgebra/generic_tensor_operations.jl:47 [inlined]
[5] contract(tensor1::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, labelstensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, labelstensor2::Tuple{Int64, Int64}, labelsoutput_tensor::NTuple{4, Int64})
@ NDTensors ~/.julia/packages/NDTensors/oXCSC/src/tensoralgebra/generic_tensor_operations.jl:93
[6] contract(::Type{NDTensors.CanContract{NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}}}, tensor1::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, labels_tensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, labels_tensor2::Tuple{Int64, Int64})
@ NDTensors ~/.julia/packages/NDTensors/oXCSC/src/tensoralgebra/generic_tensor_operations.jl:76
[7] contract
@ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
[8] _contract(A::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}}, B::NDTensors.DiagBlockSparseTensor{Float64, 2, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.DiagBlockSparse{Float64, Float64, 2}})
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/tensor_operations/tensor_algebra.jl:3
[9] _contract(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/tensor_operations/tensor_algebra.jl:9
[10] contract(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/tensor_operations/tensor_algebra.jl:104
[11] *(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/HjjU3/src/tensor_operations/tensor_algebra.jl:91
[12] top-level scope
@ REPL[268]:1

What went wrong?
I am using ITensors v0.3.34 with Julia 1.9.1.

Thank you in advance:)

The delta type of tensor was originally created primarily for contracting with other tensors, so we have not defined addition of QN-conserving delta tensors at this time.

If it is an important operation for your code, I would suggest writing a small function that creates the equivalent of a delta tensor: make a QN-conserving ITensor from your QN-labeled indices, set its diagonal elements equal to one in a for-loop, and return that tensor to use instead of a delta. (As a technical note, delta tensors use a special internal storage type that is different from regular QN-conserving ITensors, so addition has to be implemented specially for that type. We might do so in the future, but for now please use the workaround described above if it is important.)
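A minimal sketch of that workaround, assuming the indices from the question (manual_delta is a hypothetical helper name, not part of ITensors):

```julia
using ITensors

# Hypothetical helper: build a regular QN-conserving (BlockSparse) ITensor
# with ones on its diagonal, usable in place of delta() where addition
# is needed.
function manual_delta(is...)
    T = ITensor(is...)  # zero block-sparse ITensor over the given indices
    for n in 1:minimum(dim.(is))
        # The all-indices-equal entries have zero flux, so setting them is allowed.
        T[ntuple(_ -> n, length(is))...] = 1.0
    end
    return T
end

# With i and j defined as in the question:
# A_delta = manual_delta(dag(i), dag(j), i', j')
# A_squared + A_delta  # addition now works: both use BlockSparse storage
```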

Hi guys, FYI the denseblocks() function will convert the delta ITensor from DiagBlockSparse to BlockSparse storage and allow the addition to proceed.

A_delta = denseblocks(delta([dag(i), dag(j)], [i', j']))

Obviously it wastes some memory, so use with caution.
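Put together with the original example, the fix might look like this (a sketch assuming the indices from the question):

```julia
using ITensors

i = Index([QN(0) => 2, QN(1) => 3], "i")
j = Index([QN(0) => 2, QN(1) => 3], "j")
A = randomITensor(i, j)

A_squared = dag(A) * A'
# denseblocks converts the DiagBlockSparse storage of delta into
# ordinary BlockSparse storage, for which addition is implemented.
A_delta = denseblocks(delta([dag(i), dag(j)], [i', j']))
@show A_squared + A_delta
```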
Regards
Jan


Thank you very much!
I thought the question had long been forgotten; denseblocks is an elegant workaround.
