In-place multiplication of tensors with the same shape but different indices

I was reading the docs on in-place operations and have a few questions.

Firstly, suppose I have two tensors, one rank 4 and the other a square rank 2. If we contract these, the resulting tensor has the same shape as the original rank-4 tensor but with one differing index. Can someone explain the reason behind the error below? How would I go about doing an in-place contraction like this, if it is possible?

julia> A
ITensor ord=4 (dim=100|id=563) (dim=100|id=563)' (dim=100|id=563)'' (dim=100|id=563)'''
NDTensors.Dense{Float64, Vector{Float64}}

julia> B
ITensor ord=2 (dim=100|id=563)''' (dim=100|id=563)'4
NDTensors.Dense{Float64, Vector{Float64}}

julia> C = A * B
ITensor ord=4 (dim=100|id=563) (dim=100|id=563)' (dim=100|id=563)'' (dim=100|id=563)'4
NDTensors.Dense{Float64, Vector{Float64}}

julia> C += A * B
ITensor ord=4 (dim=100|id=563) (dim=100|id=563)' (dim=100|id=563)'' (dim=100|id=563)'4
NDTensors.Dense{Float64, Vector{Float64}}

julia> C .+= A .* B
ERROR: You are trying to add an ITensor with indices:

((dim=100|id=563), (dim=100|id=563)', (dim=100|id=563)'', (dim=100|id=563)''')

into an ITensor with indices:

((dim=100|id=563), (dim=100|id=563)', (dim=100|id=563)'', (dim=100|id=563)'4)

but the indices are not permutations of each other.

Stacktrace:
 [1] error(s::String)
   @ Base .\error.jl:35
 [2] _map!!(f::Function, R::NDTensors.DenseTensor{Float64, 4, NTuple{4, Index{Int64}}, NDTensors.Dense{Float64, Vector{Float64}}}, T1::NDTensors.DenseTensor{Float64, 4, NTuple{4, Index{Int64}}, NDTensors.Dense{Float64, Vector{Float64}}}, T2::NDTensors.DenseTensor{Float64, 4, NTuple{4, Index{Int64}}, NDTensors.Dense{Float64, Vector{Float64}}})
   @ ITensors C:\Users\Oliver\.julia\packages\ITensors\HjjU3\src\itensor.jl:1900
 [3] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
   @ ITensors C:\Users\Oliver\.julia\packages\ITensors\HjjU3\src\itensor.jl:1926
 [4] copyto!
   @ C:\Users\Oliver\.julia\packages\ITensors\HjjU3\src\broadcast.jl:487 [inlined]
 [5] materialize!
   @ .\broadcast.jl:871 [inlined]
 [6] materialize!(dest::ITensor, bc::Base.Broadcast.Broadcasted{ITensors.ITensorStyle, Nothing, typeof(+), Tuple{ITensor, Base.Broadcast.Broadcasted{ITensors.ITensorStyle, Nothing, typeof(*), Tuple{ITensor, ITensor}}}})
   @ Base.Broadcast .\broadcast.jl:868
 [7] top-level scope
   @ REPL[31]:1

I’m using ITensors@0.3.34 in Julia@1.8.5

Hi Oliver, since the experts are on holiday today I will try to add some insight. It almost looks like something is going wrong with the operator precedence. I tried

C .+= (A .* B)

to no effect. But if I force an intermediate result as follows, then it works:

using ITensors
i=Index(10)
A=randomITensor(i,i',i'',i''')
B=randomITensor(i''',i'''')
C=ITensor(0.0,i,i',i'',i'''')
D=ITensor(0.0,i,i',i'',i'''')
D .= A .* B
C .+= D

I suspect that this is not a heavily used feature of ITensors, so there may be a bug under the hood.
I hope this helps
JR

@OliverDudgeon Hello all, this is a good question. I am looking into this today and will verify my answer:
I do not believe it is possible to do an in-place contraction using the * operator in ITensors, because the * operator calls an out-of-place contract function. I think it's possible to use the function ITensors.contract!(C::ITensor, A::ITensor, B::ITensor, α::Number, β::Number=0), which is in tensor_algebra.jl line 212.
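For reference, a minimal sketch of calling contract! directly, with illustrative indices (assuming the signature above, where C is overwritten with α*A*B + β*C):

using ITensors

i = Index(10)
A = randomITensor(i, i')
B = randomITensor(i', i'')

# pre-allocated destination with the indices of the contraction result
C = ITensor(0.0, i, i'')

# C = 1*(A*B) + 0*C, i.e. an in-place overwrite of C's storage
ITensors.contract!(C, A, B, 1.0, 0.0)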

@kmp5 it’s a bit of a “secret” feature, but as @JanReimers pointed out, C .= A .* B (note the dots) overloads Julia’s broadcasting interface to implement in-place tensor contraction, which ultimately calls contract!.

I could definitely believe there is a bug in C .+= A .* B; the broadcasting support for ITensors has a number of missing operations and is not so comprehensive. I'm not so happy that I overloaded Julia's broadcast in this way for in-place tensor contraction, since it doesn't follow Julia's standard meaning of broadcasting as an elementwise operation. C .= A .* B should really be a Hadamard/elementwise operation if we followed Julia's canonical definition of broadcasting.
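As a point of reference, a minimal sketch of the two in-place broadcast forms, reusing the illustrative indices from the sketch above:

# In-place contraction via the broadcast overload; despite the
# elementwise-looking syntax, this ultimately calls contract!
C .= A .* B

# In-place contract-and-add; this is the form that hits the error
# reported above when the result indices don't match C's
C .+= A .* B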

An alternative syntax I’ve considered for this is using a macro @!:

@! C = A * B
@! C += A * B

similar to the approach taken by some other Julia packages.
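To make the proposal concrete, a sketch of what such a (hypothetical) @! macro might lower to, in terms of the contract! function mentioned earlier:

# @! C = A * B   could lower to an in-place overwrite:
ITensors.contract!(C, A, B, 1, 0)

# @! C += A * B  could lower to an in-place accumulation:
ITensors.contract!(C, A, B, 1, 1)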

Hi, thanks for the responses. I did find the use of broadcasting of the * operation for in-place contraction a bit odd, but I think it's OK since most users won't want to do an element-wise multiplication of two ITensors. If this is a bug/missing feature, how tricky is it to implement? Is there a GitHub issue I can follow? Just so I know if/how long until it's available.

I had a go using contract! as recommended by @kmp5. It runs when using α=1 and β=1 now, but I don't get numerical agreement with the out-of-place equivalent. So far I can only reproduce this inside my project; testing a single use of contract! seems to work fine.

I am not sure why you cannot reproduce the out-of-place result in your project. Can you show an example of where it fails?

I tried

using ITensors, Test

begin 
i = Index(20)
j = Index(30)
k = Index(40)

A = randomITensor(i,j)
B = randomITensor(j,k)

C = A * B
ITensors.contract!(C, A, B, 1.0, 1.0)

Cout = A * B
Cout += A * B

@test C == Cout
end

and this test passes. I also ran this test with higher-order, non-matrix tensors that require permutations and found no issue. Thanks!

I replaced the line next_two_time_pt .+= two_time_tt * map with contract!(next_two_time_pt, two_time_tt, map, 1, 1). These ITensors have dimensions (4,4,4,4) * (4,4) = (4,4,4,4), where the element type is ComplexF64. I can't get a minimal repro working yet; the difference per step seems small, but this is part of an iteration over many steps, so a small error compounds into something visible.

So by “not getting numerical agreement” do you mean that the answers aren’t matching in just the last few digits? That may just be numerical floating point round-off error, which would be expected since you are calling slightly different functions.

No, the results I get at the end of my program are significantly different. Not just a machine error, and it’s just by swapping out those two lines.

Right, but you say you only see small differences for each iteration and then they compound. Could the discrepancies in each iteration be due to numerical round-off errors? It does sound suspicious, and I wouldn't normally expect numerical round-off to cause large discrepancies even in a long calculation; I'm just trying to get a clearer picture of what could be going on.

Hope this might make it clearer:

Let's say I have a collection of rank-2 complex-valued tensors $[\mathcal{E}_q]_{i,j}$ and rank-4 complex-valued tensors $[\mathcal{T}_{p,q}]_{j,k,l,m}$. I want to compute the tensor $[R_p]_{i,k,l,m} = \sum_q [\mathcal{E}_q]_{i,j}\,[\mathcal{T}_{p,q}]_{j,k,l,m}$, where the repeated index $j$ is contracted; i.e., I am contracting over the $j$ index of each $q$-th pair of tensors to yield a new rank-4 tensor. To do this I create an ITensor $R$ with the resulting indices $(i,k,l,m)$ of the contraction, populated with a dense array of zeros (I found that EmptyStorage doesn't work with contract!). Then I have a for-loop that on each iteration contracts the $q$-th tensors together and adds them to the tensor $R$. I find that I get a different result if I use contract!(R, T, E, 1, 1) compared to R += E * T and R .+= E * T.

The sum will have something on the order of ~100 terms, so it probably can't just be machine error accumulating.
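For concreteness, a minimal sketch of the loop I described, with illustrative index dimensions and names (Q, Es, Ts, and R are placeholders, not my actual project code):

using ITensors

i, j, k, l, m = Index(4), Index(4), Index(4), Index(4), Index(4)
Q = 100  # number of terms in the sum over q

Es = [randomITensor(ComplexF64, i, j) for _ in 1:Q]
Ts = [randomITensor(ComplexF64, j, k, l, m) for _ in 1:Q]

# R starts as dense zeros with the result indices (I found EmptyStorage
# doesn't work with contract!)
R = ITensor(zeros(ComplexF64, dim(i) * dim(k) * dim(l) * dim(m)), i, k, l, m)

for q in 1:Q
  # accumulate in place: R = 1*(T*E) + 1*R
  ITensors.contract!(R, Ts[q], Es[q], 1, 1)
  # the out-of-place equivalent would be R += Es[q] * Ts[q]
end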

Which one out of those three gives a different result from the others?

Also, related to what Matt mentioned, is the result also different when you sum just a few tensors? Or does the difference only appear when you sum many tensors?

I think I've got a minimal example for a repro. I saved two tensors in an HDF5 file; the script then loads the tensors, contracts them together, and adds them twice, once fully in place and once only partially in place. The assertion fails because the tensors differ.

Data file

using HDF5
using ITensors
import ITensors.contract!

# a couple of Index helpers

μ_ext(μ, i, pl=0) = tagprime(μ, "$i, ext", pl)

function tagprime(index::Index, ts::String, plev::Int=0)
  return setprime(addtags(index, ts), plev)
end

# load the data that causes an issue

h5open("tensors.h5") do fid
  global map = read(fid, "map", ITensor)
  global two_time_tt = read(fid, "two_time_tt", ITensor)
end

# get the index from the ITensors

μ = addtags(removetags(inds(map)[1], tags(inds(map)[1])), "μ")

# parameters for the correct output shape

m = 5
n = 10

# create two empty tensors with separate storage

T1 = ITensor(zeros(ComplexF64, dim(μ)^4), μ_ext(μ, 1), μ_ext(μ, m + 1), μ_ext(μ, m + 1)', μ_ext(μ, n + 1)')
T2 = ITensor(zeros(ComplexF64, dim(μ)^4), μ_ext(μ, 1), μ_ext(μ, m + 1), μ_ext(μ, m + 1)', μ_ext(μ, n + 1)')

# version 1

T1 .+= two_time_tt * map
T1 .+= two_time_tt * map

# version 2

contract!(T2, two_time_tt, map, 1, 1)
contract!(T2, two_time_tt, map, 1, 1)

@assert T1 == T2 # errors

Hi @miles, @mtfishman. Have either of you been able to test this repro on your machines?

Hi Oliver, thanks for the follow-up. I have run this example on my machine and see that it is failing to give consistent answers. I am troubleshooting it now!

@OliverDudgeon I was able to recreate your bug and have fixed it in this PR: [debug][NDTensors] In-place contraction bug by kmp5VT · Pull Request #1152 · ITensor/ITensors.jl · GitHub. It was related to permutations of T1 and T2, and the .+= code forms the correct output tensor even without the changes to contract!.

The permutation problem sounds right. Thanks for tracking it down!

Hi @kmp5, I'm giving 0.3.38 a go. I'm now getting an error that occurs when using contract!(C, A, B, 1, 1) but not when using C .+= A * B.

ERROR: MethodError: Cannot `convert` an object of type
  Tuple{Int64, Int64, Int64, Int64} to an object of type
  Tuple{Int64, Int64}

Closest candidates are:
  convert(::Type{T}, ::Tuple{Vararg{Any, N}}) where {N, T<:Tuple}
   @ Base essentials.jl:412
  convert(::Type{T}, ::T) where T<:Tuple
   @ Base essentials.jl:411
  convert(::Type{T}, ::CartesianIndex) where T<:Tuple
   @ Base multidimensional.jl:128
  ...

The tensors are like those I gave before: C is rank 4, A is rank 4, but B is rank 2. Any suggestions?


@OliverDudgeon It looks like I had a small typo in the change in NDTensors/src/dense/tensoralgebra/contract.jl on line 382. It should read pC = NTuple{NC,Int}(props.PC). I just verified that this change fixes the problem. Sorry about that!
I opened a PR with the change and verified it by removing the code I was using to test and copying your original code: [NDTensors][Debug] Fix Typo `NB` should be `NC` by kmp5VT · Pull Request #1158 · ITensor/ITensors.jl · GitHub.