Is there a better way to use the `combiner` function?

Hello all,

I am trying to perform some operations on a 6-leg tensor. I would like to combine two of the legs, and to do so I use `combiner`. However, for the dimensions of my tensor this operation is the bottleneck of my code. My code could work without combining the indices, but I think it keeps everything cleaner. Is there a better way to combine two indices? (Simple code below so you can see my problem):

using ITensors

i1 = Index(30)
i2 = Index(30)
i3 = Index(30)
i4 = Index(30)
i5 = Index(30)
i6 = Index(30)

T = randomITensor(i1,i2,i3,i4,i5,i6)
c = combiner(i1,i2)

@time T*c

This operation takes on the order of 100 seconds.

Thank you for your time,
Aleix

It takes about 6-8 seconds on my laptop:

julia> using ITensors

julia> i = Index.(fill(30, 6));

julia> T = randomITensor(i...)
ITensor ord=6 (dim=30|id=289) (dim=30|id=121) (dim=30|id=806) (dim=30|id=475) (dim=30|id=715) (dim=30|id=19)
NDTensors.Dense{Float64, Vector{Float64}}

julia> c = combiner(i[1], i[2])
ITensor ord=3 (dim=900|id=212|"CMB,Link") (dim=30|id=289) (dim=30|id=121)
NDTensors.Combiner

julia> @time T * c
  8.235806 seconds (47 allocations: 10.863 GiB, 0.06% gc time)
ITensor ord=5 (dim=30|id=806) (dim=30|id=475) (dim=30|id=715) (dim=30|id=19) (dim=900|id=212|"CMB,Link")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @time permute(T, reverse(i))
  7.280412 seconds (46 allocations: 10.863 GiB, 19.25% gc time)
ITensor ord=6 (dim=30|id=19) (dim=30|id=715) (dim=30|id=475) (dim=30|id=806) (dim=30|id=121) (dim=30|id=289)
NDTensors.Dense{Float64, Vector{Float64}}

which is consistent with the amount of time it takes to permute that tensor (internally contracting with a combiner performs a permutation and then a reshape).
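To make that permute-then-reshape concrete, here is a minimal sketch with plain Julia arrays (the dimensions are small and purely illustrative, not the 30^6 case above):

```julia
# A minimal sketch of what contracting with a combiner does internally:
# permute the legs being combined so they are adjacent and leading,
# then reshape to fuse them into a single leg of dimension d1 * d2.
a = randn(3, 4, 5)

# Legs 1 and 2 are already adjacent here, so this permutation is the
# identity; in general `permutedims` is the expensive, data-moving step.
b = reshape(permutedims(a, (1, 2, 3)), 3 * 4, 5)

@assert size(b) == (12, 5)
# Column-major fusing: (i1, i2) -> i1 + d1 * (i2 - 1)
@assert b[2 + 3 * (1 - 1), 4] == a[2, 1, 4]
```

The `reshape` itself is essentially free, since Julia arrays are column-major and no data moves; all of the cost is in the permutation.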

Hello Matt,

I executed the same code as yours on my laptop and the results are not the same:

I am using Julia 1.9.2 and the ITensor version I have installed is ITensors v0.3.51.
I have seen that there have been performance issues with certain versions of Julia. Should I upgrade Julia to the newest version or go back to Julia 1.8?

Thank you very much,
Aleix

After some profiling I identified a few performance issues in the combiner contraction code and tensor permutation code more generally, which I’m fixing here: [NDTensors] Optimize `permutedims` by mtfishman · Pull Request #1288 · ITensor/ITensors.jl · GitHub.

With that improvement I get:

julia> using ITensors

julia> i = Index.(fill(30, 6));

julia> T = randomITensor(i...)
ITensor ord=6 (dim=30|id=759) (dim=30|id=313) (dim=30|id=56) (dim=30|id=649) (dim=30|id=932) (dim=30|id=772)
NDTensors.Dense{Float64, Vector{Float64}}

julia> c = combiner(i[1], i[2])
ITensor ord=3 (dim=900|id=483|"CMB,Link") (dim=30|id=759) (dim=30|id=313)
NDTensors.Combiner

julia> @time T * c
  1.268162 seconds (50 allocations: 5.431 GiB, 0.15% gc time)
ITensor ord=5 (dim=30|id=56) (dim=30|id=649) (dim=30|id=932) (dim=30|id=772) (dim=900|id=483|"CMB,Link")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @time permute(T, reverse(i))
  1.320166 seconds (21.63 k allocations: 5.433 GiB, 0.14% gc time, 3.34% compilation time: 100% of which was recompilation)
ITensor ord=6 (dim=30|id=772) (dim=30|id=932) (dim=30|id=649) (dim=30|id=56) (dim=30|id=313) (dim=30|id=759)
NDTensors.Dense{Float64, Vector{Float64}}

This matches the performance of permuting tensors using Strided.jl, which is the library we use for tensor permutations:

julia> using Strided

julia> a = randn(fill(30, 6)...);

julia> @time @strided b = permutedims(a, reverse(1:6));
  1.225725 seconds (13 allocations: 5.431 GiB, 0.07% gc time)

which I wasn’t seeing before (previously we were doing extra operations and creating extra intermediate tensors).

I’m not sure why your timings are so much slower than mine in general, but hopefully that improvement speeds things up for you as well.
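One more thing that may be worth checking: Strided.jl can split large permutations across Julia threads, so starting Julia with multiple threads (e.g. `julia -t auto`) may speed things up on your machine. A small sketch of the call pattern (dimensions kept tiny here just for illustration):

```julia
using Strided

# Strided.jl can parallelize permutations over Julia threads;
# launch Julia with e.g. `julia -t auto` for this to take effect.
a = randn(fill(5, 6)...)
@strided b = permutedims(a, reverse(1:6))

# permutedims with a reversed permutation flips the index order:
@assert b[1, 2, 3, 4, 5, 1] == a[1, 5, 4, 3, 2, 1]
```

Whether threading actually helps will depend on the tensor size and your hardware, so it is worth timing both ways.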


Thank you very much Matt.

I will install the new version and report the time it takes to do the operations.

Aleix

Your improvements have significantly decreased the execution time. I still do not get your speed, but that is probably because of my laptop or my Julia version. Thank you very much,
Aleix

Great, glad to hear that helped.
