Using ITensors with MultiFloats.jl

Hi,
Many thanks for this excellent package. I have some code that I am trying to use in combination with MultiFloats.jl, which promises seamless use of higher-precision types. May I have some advice on how one would go about adapting code in ITensors to make it work with other number types?

I have checked out a version of ITensors and made sure that the element type ElT in abstractmps.jl is preserved for most operations. After doing this, initialising the tensors with the correct type seemingly works on the surface for basic operations. However, I often see NaN errors when doing simple operations such as adding two MPSs with the density_matrix algorithm or computing simple inner products. The errors don’t reliably occur at the same point in my code but almost always do occur, so it is hard to create a minimal example. I’m guessing that there are other parts of ITensors that implicitly assume Float64 and I’m not catching them all? I’d greatly appreciate pointers on where to start.
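For concreteness, this is the kind of type-aware initialisation I mean (a minimal sketch; the eltype check is just to confirm the element type survives construction):

using ITensors, MultiFloats
sites = siteinds("S=1", 4)
ψ = randomMPS(Float64x2, sites)
@show eltype(ψ[1])  # expect MultiFloat{Float64, 2}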

Thanks!

P.S. The issue is reminiscent of this one but at a higher precision level.


Good question; ideally that would work. It would be helpful if you shared examples of the operations that don’t work, as minimal as possible (say, a particular tensor decomposition).

Hi all,

This is very interesting, as Matt and I have been trying to make things very type-agnostic. I did a quick example and potentially have an idea of what is going wrong. I ran this code:

using ITensors, MultiFloats
N = 3
sites = siteinds("S=1", N)
Οˆβ‚€ = randomMPS(Float64x2, sites)
Οˆβ‚€ + Οˆβ‚€

and get two different errors depending on the run. The first is the one you saw:

ERROR: ArgumentError: Trying to perform the eigendecomposition of a matrix containing NaNs or Infs

And the second is:

ERROR: MethodError: no method matching eigen!(::LinearAlgebra.Hermitian{MultiFloat{Float64, 2}, Matrix{MultiFloat{Float64, 2}}}; sortby::Nothing)

I think the real issue is that dense linear algebra (eigen, SVD) is simply not defined for this number type. Consistent with that, the broadcasted operation Οˆβ‚€ .+ Οˆβ‚€ does work: it only adds site tensors elementwise and never calls a decomposition.

Also, to disentangle the problem from ITensors, I tried this code:

using LinearAlgebra, MultiFloats
m = rand(Float64x2, (10, 10))
svd(m)

and see the error

ERROR: MethodError: no method matching svd!(::Matrix{MultiFloat{Float64, 2}}; full::Bool, alg::LinearAlgebra.DivideAndConquer)

Thanks for investigating this @kmp5.

Julia calls out to LAPACK for SVD and eigendecomposition, which only supports BLAS types (Float32, Float64, and their complex versions), so I’m not surprised those fail. Packages like GenericLinearAlgebra.jl (https://github.com/JuliaLinearAlgebra/GenericLinearAlgebra.jl) could be used for more general number types like those in MultiFloats.jl. It would be a nice project to make a package extension that uses GenericLinearAlgebra.jl for matrix decompositions of tensors with non-BLAS number types. We already have experimental support for using Octavian.jl as a backend for matrix multiplication (and therefore tensor contraction) involving non-BLAS number types.
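As a quick check of that route, loading GenericLinearAlgebra.jl should be enough to get a working SVD for MultiFloat matrices. A minimal sketch (the reconstruction norm at the end just confirms the factorization):

using LinearAlgebra, GenericLinearAlgebra, MultiFloats
m = rand(Float64x2, 10, 10)
F = svd(m)  # now dispatches to a generic, pure-Julia SVD
@show eltype(F.S)  # MultiFloat{Float64, 2}
@show norm(m - F.U * Diagonal(F.S) * F.Vt)  # should be tiny at Float64x2 precision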

I’m curious why some operations give NaN errors, however. I would have thought they would just error based on calling non-existent methods like eigen! and svd! as @kmp5 showed in his last two examples.

@abhinavd I would also be curious to hear where the element types are not being preserved. If you could share some examples of that it would be helpful.

Indeed, thanks for your responses @mtfishman and @kmp5. I have in fact already been using GenericLinearAlgebra.jl, so this minimal example now works:

using LinearAlgebra
using GenericLinearAlgebra
using ITensors, MultiFloats
N = 3
sites = siteinds("S=1", N)
Οˆβ‚€ = randomMPS(Float64x2, sites)
Οˆβ‚€ + Οˆβ‚€

I think the errors are coming from taking inner products, even though the tensors involved don’t really have large entries. That makes me wonder whether Octavian.jl would help. How do I use Octavian as a backend? Is it as simple as adding using Octavian at the top of the code? If it helps, I can also print the tensors whose contraction occasionally produces NaNs.

Please share the most minimal examples you can of the failures you see (ideally individual tensor operations).

using Octavian should ideally work; I forget the details since we set it up a while ago. I’m not sure if Octavian works with MultiFloats, so it may be good to check that separately.
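One way to check it separately might be a direct matrix multiplication comparison, something like the following sketch (Octavian.matmul is Octavian’s public entry point; whether it accepts MultiFloat element types is exactly what this tests):

using Octavian, MultiFloats
A = rand(Float64x2, 8, 8)
B = rand(Float64x2, 8, 8)
C = Octavian.matmul(A, B)  # may throw if Octavian doesn't support this eltype
@show C ≈ A * B  # compare against Julia's generic matmul fallback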


So I tried another example

using GenericLinearAlgebra, MultiFloats, ITensors
N = 3
sites = siteinds("S=1", N)
Οˆβ‚€ = randomMPS(Float64x2, sites)
for i in 1:100
  print("$i\t")
  @show inner(Οˆβ‚€, Οˆβ‚€)
end

and I find that it fails intermittently:

85      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999999950311007434809315
86      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999999950311007434809315
87      β”Œ Warning: The inner product (or normΒ²) you are computing is very large (NaN). You should consider using `lognorm` or `loginner` instead, which will help avoid floating point errors. For example if you are trying to normalize your MPS/MPO `A`, the normalized MPS/MPO `B` would be given by `B = A ./ z` where `z = exp(lognorm(A) / length(A))`.
β”” @ ITensors ~/.julia/dev/ITensors/src/mps/abstractmps.jl:1131
inner(Οˆβ‚€, Οˆβ‚€) = NaN
88      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999999950311007434809315
89      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999999950311007434809315
90      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999999950311007434809315

And it looks like how often this operation fails is correlated with the number of sites: when N = 101 it fails every time:

1       β”Œ Warning: The inner product (or normΒ²) you are computing is very large (NaN). You should consider using `lognorm` or `loginner` instead, which will help avoid floating point errors. For example if you are trying to normalize your MPS/MPO `A`, the normalized MPS/MPO `B` would be given by `B = A ./ z` where `z = exp(lognorm(A) / length(A))`.
β”” @ ITensors ~/.julia/dev/ITensors/src/mps/abstractmps.jl:1131
inner(Οˆβ‚€, Οˆβ‚€) = NaN

(the identical warning and inner(Οˆβ‚€, Οˆβ‚€) = NaN repeat for every iteration)

Though it doesn’t fail every time at N=100.

**UPDATE**
I added Octavian and ran the same code with N = 101. The majority of calls no longer fail with NaN, but it still occasionally does. Here is a result where it fails:

20      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999998798219714702364828
21      β”Œ Warning: The inner product (or normΒ²) you are computing is very large (NaN). You should consider using `lognorm` or `loginner` instead, which will help avoid floating point errors. For example if you are trying to normalize your MPS/MPO `A`, the normalized MPS/MPO `B` would be given by `B = A ./ z` where `z = exp(lognorm(A) / length(A))`.
β”” @ ITensors ~/.julia/dev/ITensors/src/mps/abstractmps.jl:1131
inner(Οˆβ‚€, Οˆβ‚€) = NaN
22      inner(Οˆβ‚€, Οˆβ‚€) = 0.999999999999999999999999999998798219714702364828

Do you see the same issue with Float64 in that case?

There can be issues with taking inner products of large MPSs; it may help to use loginner.
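For reference, a minimal sketch of that workaround (loginner returns the logarithm of the inner product, so the value itself is recovered with exp):

using ITensors
N = 100
sites = siteinds("S=1", N)
ψ = randomMPS(sites)
logz = loginner(ψ, ψ)  # stays finite even when inner would under- or overflow
@show logz, exp(logz)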

@mtfishman I do not see this issue with Float64. I took the same code as in my previous post and replaced Float64x2 with Float64, increased the number of sites to 700, and ran 1000 iterations of inner with no NaN returns.

Thanks again for looking into this. I see roughly the same results.
Why is it that inner sometimes works and sometimes fails for the same MPS? Does it have to do with a changing orthogonality center? I don’t believe there’s any randomness anywhere downstream of inner, is there?
I do think loginner can be a workaround if one patches MultiFloats to handle logarithms of small numbers properly, but for now I’d like to understand why inner fails, especially because in the example provided the number of sites is just 3 and none of the component tensors have large entries.

I modified ITensors’ inner product code in abstractmps.jl to simply retry a number of times when it gets a NaN answer, and it then works most of the time. This looks like a bug, but I have no idea where it’s coming from. The behaviour is also present with DoubleFloats.jl, so it is likely an issue with ITensors rather than MultiFloats.
Here’s how I changed abstractmps.jl:

  if !isfinite(dot_M1_M2) || isnan(dot_M1_M2)
    # Retry the contraction a few times before giving up
    for i in 2:11
      O = M1dag[1] * M2[1]
      for j in eachindex(M1)[2:end]
        O = (O * M1dag[j]) * M2[j]
      end
      dot_M1_M2 = O[]
      if isfinite(dot_M1_M2) && !isnan(dot_M1_M2)
        println("Worked on try ", i)
        return dot_M1_M2
      end
    end
    error(
      "The inner product (or normΒ²) you are computing is very large " *
      "($dot_M1_M2). You should consider using `lognorm` or `loginner` instead, " *
      "which will help avoid floating point errors. For example if you are trying " *
      "to normalize your MPS/MPO `A`, the normalized MPS/MPO `B` would be given by " *
      "`B = A ./ z` where `z = exp(lognorm(A) / length(A))`.",
    )
  end

A minimal example file drawing on @kmp5’s example:

# using Octavian
using GenericLinearAlgebra, MultiFloats, ITensors, DoubleFloats
MultiFloats.use_clean_multifloat_arithmetic()
N = 10
sites = siteinds("S=1", N)
# T = Double64
# T = Float64x1
T = Float64x2

Οˆβ‚€ = randomMPS(T, sites)
println(Οˆβ‚€.data)
for i in 1:100
  print("$i\t")
  try
    @show inner(Οˆβ‚€, Οˆβ‚€)
  catch
    @show loginner(Οˆβ‚€, Οˆβ‚€)
  end
end

gives the output

ITensor[ITensor ord=2
Dim 1: (dim=3|id=955|"S=1,Site,n=1")
Dim 2: (dim=1|id=475|"Link,l=1")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 3Γ—1
  0.9024682454819287533157501021796001
 -0.3696171004913222062250941721199386
 -0.2212109059724610504166110400916367, ITensor ord=3
Dim 1: (dim=1|id=475|"Link,l=1")
Dim 2: (dim=3|id=655|"S=1,Site,n=2")
Dim 3: (dim=1|id=445|"Link,l=2")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.1214899423332925510789074311245403  0.8073751098711058074946163233476233  -0.5773955540809723590037545389624751, ITensor ord=3
Dim 1: (dim=1|id=445|"Link,l=2")
Dim 2: (dim=3|id=498|"S=1,Site,n=3")
Dim 3: (dim=1|id=927|"Link,l=3")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.5064928114790571647843660904146737  0.8431578131772069103199533635408137  0.1804160026113828622999509294773879, ITensor ord=3
Dim 1: (dim=1|id=927|"Link,l=3")
Dim 2: (dim=3|id=293|"S=1,Site,n=4")
Dim 3: (dim=1|id=174|"Link,l=4")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.3191126839212917625819365200272996  -0.946777071476780535655682514790659  -0.04219326825929627416190420498243662, ITensor ord=3
Dim 1: (dim=1|id=174|"Link,l=4")
Dim 2: (dim=3|id=993|"S=1,Site,n=5")
Dim 3: (dim=1|id=18|"Link,l=5")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.8031895842734982090707786359839693  0.1815289084819520257431632178941978  -0.5673920576637605739629228029228936, ITensor ord=3
Dim 1: (dim=1|id=18|"Link,l=5")
Dim 2: (dim=3|id=223|"S=1,Site,n=6")
Dim 3: (dim=1|id=601|"Link,l=6")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.86778737294072372026144189447951304  -0.4929202929815074806905301272856098  0.06304490567573332661272029639758355, ITensor ord=3
Dim 1: (dim=1|id=601|"Link,l=6")
Dim 2: (dim=3|id=92|"S=1,Site,n=7")
Dim 3: (dim=1|id=641|"Link,l=7")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 -0.9177125536684423210132744144358787  -0.25395896544991142559992191895038103  -0.3054644213438891928395291360024312, ITensor ord=3
Dim 1: (dim=1|id=641|"Link,l=7")
Dim 2: (dim=3|id=134|"S=1,Site,n=8")
Dim 3: (dim=1|id=809|"Link,l=8")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 0.50909567473115157572078113943343113  -0.6482621284662864778043189450994564  0.5661959084684323308770663982549032, ITensor ord=3
Dim 1: (dim=1|id=809|"Link,l=8")
Dim 2: (dim=3|id=944|"S=1,Site,n=9")
Dim 3: (dim=1|id=277|"Link,l=9")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3Γ—1
[:, :, 1] =
 0.6085265473196872972354489811738419  -0.672543287340768222824019589446403  -0.4211661998072177280194636967728686, ITensor ord=2
Dim 1: (dim=1|id=277|"Link,l=9")
Dim 2: (dim=3|id=16|"S=1,Site,n=10")
NDTensors.Dense{MultiFloat{Float64, 2}, Vector{MultiFloat{Float64, 2}}}
 1Γ—3
 -0.6876961251864609565734127202335213  0.4556496089484765675765650908914224  0.5652056911148611240955093601848548]
1       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
2       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
3       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
4       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
5       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
6       Worked on try 2
inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
7       inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358
8       Worked on try 2
inner(Οˆβ‚€, Οˆβ‚€) = 0.9999999999999999999999999999999815110725338825358

(elided: iterations 9 through 100 all return the same value; the retry patch reported "Worked on try 2" on iterations 10, 12, 14, 16, 18, 39, 48, 57, 71, 73, 75, 77, 79, and 81, and "Worked on try 3" on iterations 13 and 76)

Is it possible to reduce it to a minimal example involving just ITensor contractions?
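Something along these lines might serve as a probe (a hypothetical diagnostic, not an ITensors API: it repeats one fixed contraction and checks for nondeterminism or NaNs):

using ITensors, MultiFloats
i, j, k = Index(10), Index(10), Index(10)
A = ITensor(rand(Float64x2, 10, 10), i, j)
B = ITensor(rand(Float64x2, 10, 10), j, k)
ref = A * B  # reference result for fixed inputs
for n in 1:1000
  C = A * B
  # deterministic inputs should give identical outputs on every repeat
  C ≈ ref || @warn "contraction differed from reference on repeat $n"
end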