Cannot compute dot product of ITensors due to mismatch of indices.

I am trying to write a piece of code that initializes the entries of a matrix as dot products between ITensors. The code is shown below:

function PerformDiis(R_iter_storage,p,T2_storage,R2_storage)
    B = zeros(p+1,p+1)
    Bpp = dot(R2_storage[p],R2_storage[p])
    for i in 1:p
        for j in 1:i
            # B[i,j] = SquareDot(R_iter_storage[i],R_iter_storage[j])/Bpp
            B[i,j] = dot(R2_storage[i],R2_storage[j])
            B[j,i] = B[i,j]
        end
    end
    # display(@views B[1:p,1:p])
    B[p+1, 1:p] .= -1
    B[1:p, p+1] .= -1
    B[p+1, p+1] = 0
    Id = zeros(p+1)
    Id[p+1]= -1
    C = B\Id
    pop!(C) # As C[p+1] is the Lagrange multiplier
    a_1, a_2, i_1, i_2 = inds(T2_storage[1])
    t = ITensor(zeros(size(R_iter_storage[1])),a_1,a_2,i_1,i_2)
    for i in 1:p
        t = t + C[i].*T2_storage[i]
    end
    # display(B)
    # display(C)
    return (t)
end

However, the line that computes the off-diagonal terms throws the following error:

ERROR: DimensionMismatch: In scalar(T) or T[], ITensor T is not a scalar (it has indices ((dim=2|id=485|"a_1"), (dim=2|id=158|"a_2"), (dim=4|id=164|"i_1"), (dim=4|id=50|"i_2"), (dim=2|id=442|"a_1"), (dim=2|id=841|"a_2"), (dim=4|id=773|"i_1"), (dim=4|id=57|"i_2"))).
Stacktrace:
 [1] getindex
   @ ~/.julia/packages/ITensors/2Xz5q/src/itensor.jl:873 [inlined]
 [2] dot(A::ITensor, B::ITensor)
   @ ITensors ~/.julia/packages/ITensors/2Xz5q/src/itensor.jl:1932
 [3] PerformDiis(R_iter_storage::Vector{ITensor}, p::Int64, T2_storage::Vector{ITensor}, R2_storage::Vector{ITensor})
   @ Main ~/Desktop/Andreas/phase_2/julia-code/ccd/itensors/ccd-helper.jl:448
 [4] ccd_by_hand(maxitr::Int64)
   @ Main ~/Desktop/Andreas/phase_2/julia-code/ccd/itensors/ccd.jl:65
 [5] macro expansion
   @ ./timing.jl:279 [inlined]
 [6] top-level scope
   @ ~/Desktop/Andreas/phase_2/julia-code/ccd/itensors/ccd.jl:92

I guess this is happening because, even though the dimensions of the ITensors R2_storage[i] and R2_storage[j] are the same, their indices have different IDs, despite having the same names (a_1, a_2, i_1, i_2) in both cases. How can I fix this error? Since the dimensions all match, can I make them all use the same indices so that I can take the dot product?

Can you turn your example code into something we can run ourselves, for example by also including in your script some example (maybe random) ITensors that you would put into that function, and showing how you would call that function with those random ITensor inputs? As an example, see how I turned your previous post into a minimal runnable example here: How to update a tensor element-wise in ITensors ? - #5 by mtfishman.

I’ve asked you multiple times in previous posts you have made that when you share example code that you should share code that we can run ourselves, and that is also instructed in Please Read: Make It Easier to Help You which I have pointed out to you to read multiple times. What that means is that we should be able to copy and paste your code exactly as you have written it, and it will run on our computers, say if we copy and paste your code into a file and run the file in Julia with include("..."). We’re happy to help, but please help us help you.

As you have identified, dot only works on ITensors that share indices, so you could either make sure that the ITensor indices match from the start, or make them match by contracting with delta tensors or using a function such as replaceinds to replace the indices of one ITensor to make them match with the other one.
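Both options can be sketched on hypothetical stand-in tensors (the index names and dimensions below are made up for illustration, not taken from your code):

```julia
using ITensors: Index, dot, delta, random_itensor, replaceinds

# Two ITensors with the same dimensions but distinct Index objects,
# standing in for R2_storage[i] and R2_storage[j].
i1, i2 = Index(2, "a_1"), Index(4, "i_1")
j1, j2 = Index(2, "a_1"), Index(4, "i_1")
A = random_itensor(i1, i2)
B = random_itensor(j1, j2)

# Option 1: replace B's indices so they match A's.
B_matched = replaceinds(B, (j1, j2) => (i1, i2))
d1 = dot(A, B_matched)

# Option 2: contract B with delta tensors to map its indices onto A's.
B_delta = B * delta(j1, i1) * delta(j2, i2)
d2 = dot(A, B_delta)

d1 ≈ d2  # both approaches give the same inner product
```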


Forcing all the ITensors to have the same indices by adding the following lines to my code seems to have solved the problem:

    a_1, a_2, i_1, i_2 = inds(T2_storage[1])
    for i in 1:p
        R2_storage[i] = replaceinds(R2_storage[i], inds(R2_storage[i]) => (a_1,a_2,i_1,i_2))
    end
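As a self-contained sketch (with made-up random tensors standing in for R2_storage), the effect of this loop can be checked like this:

```julia
using ITensors: Index, dot, inds, random_itensor, replaceinds

# Each call to Index creates a fresh ID, so these tensors do not share
# indices even though the names and dimensions match.
p = 3
storage = [random_itensor(Index(2, "a_1"), Index(2, "a_2"),
                          Index(4, "i_1"), Index(4, "i_2")) for _ in 1:p]

# Force every stored ITensor onto one shared set of indices.
a_1, a_2, i_1, i_2 = inds(storage[1])
for n in 1:p
    storage[n] = replaceinds(storage[n], inds(storage[n]) => (a_1, a_2, i_1, i_2))
end

dot(storage[1], storage[2])  # no longer throws DimensionMismatch
```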

Also, I understand that I should provide a minimal example that reproduces the problem I am facing, and I will keep that in mind in subsequent discussions.

For you and others, I would not recommend writing code like replaceinds(a, inds(a) => new_inds), since that can easily lead to incorrect results if the input a has a different Index ordering. For example:

julia> using ITensors: Index, inds, permute, random_itensor, replaceinds

julia> i, j = Index.((2, 2))
((dim=2|id=183), (dim=2|id=167))

julia> a = random_itensor(i, j)
ITensor ord=2 (dim=2|id=183) (dim=2|id=167)
NDTensors.Dense{Float64, Vector{Float64}}

julia> a[i => 1, j => 2]
-0.6311743998126356

julia> k, l = Index.((2, 2))
((dim=2|id=282), (dim=2|id=60))

julia> b = replaceinds(a, inds(a) => (k, l))
ITensor ord=2 (dim=2|id=282) (dim=2|id=60)
NDTensors.Dense{Float64, Vector{Float64}}

julia> a = permute(a, j, i)
ITensor ord=2 (dim=2|id=167) (dim=2|id=183)
NDTensors.Dense{Float64, Vector{Float64}}

julia> c = replaceinds(a, inds(a) => (k, l))
ITensor ord=2 (dim=2|id=282) (dim=2|id=60)
NDTensors.Dense{Float64, Vector{Float64}}

julia> b[k => 1, l => 2], c[k => 1, l => 2]
(-0.6311743998126356, 0.7321160309889322)
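One way to avoid that pitfall (a sketch) is to name the old => new index pairs explicitly instead of relying on the ordering returned by inds(a), so the replacement is independent of how the tensor happens to be permuted:

```julia
using ITensors: Index, permute, random_itensor, replaceinds

i, j = Index(2, "i"), Index(2, "j")
k, l = Index(2, "k"), Index(2, "l")
a = random_itensor(i, j)

# Explicit old => new pairs: i always maps to k, j always maps to l,
# regardless of the storage order of a's indices.
b = replaceinds(a, i => k, j => l)
c = replaceinds(permute(a, j, i), i => k, j => l)

b[k => 1, l => 2] ≈ c[k => 1, l => 2]  # now b and c agree element-wise
```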