Revert to an older version of ITensors.jl

Hi all,

Is there a straightforward way to revert to (or do a fresh install of) an older version of ITensors.jl, say v0.3.43? I have tried ] add ITensors@0.3.43, but then it is not compatible with the other packages that are downloaded with ITensors.jl, mainly NDTensors.jl, which has gone through a huge revamp. So performing a fresh install, or replacing the new version with ITensors@0.3.43 by hand, throws lots of errors due to incompatibility. I also tried to ] dev the downloaded v0.3.43, but it was still downloading the latest NDTensors.

Actually, I was working with a long-range 2D Hamiltonian, where a hand-written version of DMRG was working as expected (it matched analytical results, where available). But then I updated ITensors.jl, and with the new version (v0.3.52) the same code gives totally unexpected results. I wanted to revert to the old version to understand the origin of the problem.

Thanks in advance!

Regards,
Titas

Sorry to hear you had an issue with the latest version of ITensors.jl.

I would start by making a new local environment (see 4. Working with Environments · Pkg.jl) that has nothing installed, and then try ] add ITensors@0.3.43. That should install all of the dependencies from scratch and avoid conflicts.
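For reference, here is a minimal sketch of that workflow using the Pkg API (the environment directory name is just an example):

```julia
using Pkg

# Create and activate a fresh local environment
# (the directory name "itensors-0.3.43" is arbitrary)
Pkg.activate("itensors-0.3.43")

# Pin ITensors to the old version; Pkg then resolves compatible
# versions of all dependencies (including NDTensors) from scratch
Pkg.add(name="ITensors", version="0.3.43")

# Check which NDTensors version was actually resolved
Pkg.status()
```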

I tried that. I also tried removing the entire Julia installation and reinstalling it (I removed the hidden cache files by hand). But even such a fresh installation was downloading the latest NDTensors.jl.

I see, I didn’t realize you also wanted a specific version of NDTensors. Can’t you then just add NDTensors at a specific version as well?

1 Like

Actually, the older version of ITensors.jl seems incompatible with the newer NDTensors.jl.

Let me try ] dev'ing the older NDTensors as well.

Sorry, it did not work. I performed ] dev on NDTensors@0.2.11, the version that comes with v0.3.43, and I am getting the following error. It seems there is an incompatibility with another package.

ERROR: LoadError: MethodError: no method matching sort(::Tuple{Int64, Int64, Int64})

Closest candidates are:
  sort(::AbstractUnitRange)
   @ Base range.jl:1397
  sort(::AbstractRange)
   @ Base range.jl:1400
  sort(::Dictionaries.AbstractIndices{I}; kwargs...) where I
   @ Dictionaries ~/.julia/packages/Dictionaries/7aBxp/src/AbstractIndices.jl:376
  ...

Stacktrace:
  [1] permutedims_combine_output(T::NDTensors.BlockSparseTensor{Float64, 5, NTuple{5, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 5}}, is::Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, perm::NTuple{5, Int64}, combdims::Tuple{Int64, Int64, Int64}, blockperm::Vector{Int64}, blockcomb::Vector{Int64})
    @ NDTensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/NDTensors/src/blocksparse/blocksparsetensor.jl:412
  [2] permutedims_combine(T::NDTensors.BlockSparseTensor{Float64, 5, NTuple{5, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 5}}, is::Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, perm::NTuple{5, Int64}, combdims::Tuple{Int64, Int64, Int64}, blockperm::Vector{Int64}, blockcomb::Vector{Int64})
    @ NDTensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/NDTensors/src/blocksparse/blocksparsetensor.jl:440
  [3] contract(tensor::NDTensors.BlockSparseTensor{Float64, 5, NTuple{5, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 5}}, tensor_labels::NTuple{5, Int64}, combiner_tensor::NDTensors.Tensor{Number, 4, NDTensors.Combiner, NTuple{4, Index{Vector{Pair{QN, Int64}}}}}, combiner_tensor_labels::NTuple{4, Int64})
    @ NDTensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/NDTensors/src/blocksparse/combiner.jl:72
  [4] _contract(A::NDTensors.BlockSparseTensor{Float64, 5, NTuple{5, Index{Vector{Pair{QN, Int64}}}}, NDTensors.BlockSparse{Float64, Vector{Float64}, 5}}, B::NDTensors.Tensor{Number, 4, NDTensors.Combiner, NTuple{4, Index{Vector{Pair{QN, Int64}}}}})
    @ ITensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/src/tensor_operations/tensor_algebra.jl:3
  [5] _contract(A::ITensor, B::ITensor)
    @ ITensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/src/tensor_operations/tensor_algebra.jl:9
  [6] contract(A::ITensor, B::ITensor)
    @ ITensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/src/tensor_operations/tensor_algebra.jl:104
  [7] _contract(As::Tuple{ITensor, ITensor, ITensor}, sequence::Vector{Int64}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ ITensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/src/tensor_operations/tensor_algebra.jl:207
  [8] _contract(As::Tuple{ITensor, ITensor, ITensor}, sequence::Vector{Int64})
    @ ITensors ~/MEGA/projects/FQH_cavity/julia_test/0.3.43/ITensors.jl-0.3.43/src/tensor_operations/tensor_algebra.jl:206

That’s an issue where Julia temporarily defined sort(::Tuple) but then removed it. You could try ] add Compat@4.10.0.
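For anyone hitting the same MethodError on a Julia version without sort(::Tuple), a version-independent workaround (just a sketch of the idea, not what NDTensors itself does) is to convert to a Vector and back:

```julia
t = (3, 1, 2)

# On Julia versions where sort(::Tuple) is missing, sort(t) throws the
# MethodError shown above; going through a Vector always works:
sorted_t = Tuple(sort(collect(t)))   # (1, 2, 3)
```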

Great!!! It worked like a charm !!! Thanks !!!

1 Like

Glad that worked. Please keep us updated about any issues you find in more recent versions of NDTensors/ITensors.

Hi again,

The problem I am facing starts from v0.3.47; up until v0.3.46 everything was working well.

In my case, I have a long-range Hamiltonian, and I am running DMRG to get the ground state. In the solvable limit, I know the exact energy is zero. Since the Hamiltonian is long-range, DMRG has a tendency to get stuck in local minima (there are lots of them). Up to v0.3.46, a noise level of 1e-8 was enough to bring the energy down to E~1e-7 (this is the limit I can reach given all the approximations used in constructing the MPO). But from v0.3.47, with a small noise like 1e-8 it gets stuck at E~1.1…, and with a larger noise > 1e-3 it gets stuck at E~1e-4. There is no way I could reduce the energy to E~1e-7 using noise alone. Only by doing a global subspace expansion (à la TDVP) could I get E~1e-7 again.
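For context, this is roughly how a noise schedule enters a DMRG run in ITensors v0.3 (a minimal sketch with a short-range stand-in Hamiltonian; the sizes, schedules, and model here are illustrative assumptions, not the actual long-range Hamiltonian from this thread):

```julia
using ITensors

N = 20
sites = siteinds("S=1/2", N)

# Stand-in nearest-neighbor Hamiltonian, just to show the API
os = OpSum()
for j in 1:(N - 1)
  os += "Sz", j, "Sz", j + 1
end
H = MPO(os, sites)

psi0 = randomMPS(sites; linkdims=10)

energy, psi = dmrg(H, psi0;
  nsweeps=10,
  maxdim=[10, 20, 100, 200],
  cutoff=1e-12,
  noise=[1e-8, 1e-8, 0.0],  # perturbation that helps DMRG escape local minima
)
```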

I will try to figure out the exact cause of this behavior.

Perhaps related to this change: Fix behavior of factorize mindim by ryanlevy · Pull Request #1214 · ITensor/ITensors.jl · GitHub? That was included in the v0.3.47 release.

Hi Matt,

Sorry for the late reply. I deleted the previous post, as I was wrong.
You are right, the cause of the problem is that:

  Lis = indices(Linds...)
  dL, dR = dim(Lis), dim(indices(setdiff(inds(A), Lis)))
  maxdim = get(kwargs, :maxdim, min(dL, dR))

was replaced by

  Lis = commoninds(A, indices(Linds...))
  Ris = uniqueinds(A, Lis)
  dL, dR = dim(Lis), dim(Ris)
  # maxdim is forced to be at most the max given SVD
  if isnothing(maxdim)
    maxdim = min(dL, dR)
  end
  maxdim = min(maxdim, min(dL, dR))

in src/tensor_operations/matrix_decomposition.jl.

While the change in PR #1214 is justified for situations where the noise is zero, in the presence of perturbations the restriction of maxdim in factorize_eigen strongly affects the convergence of DMRG.

For example, even in the presence of noise, the bond dimension in the bulk gets bounded by 16 after one full sweep (left-to-right and right-to-left) for spin-1/2 systems.
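One way the numbers can work out (the link dimension here is an assumption, purely for illustration): if one side of the tensor being factorized carries a spin-1/2 site of dimension 2 and a link of dimension 8, the cap min(dL, dR) clips any requested maxdim to 16, whether or not noise is present:

```julia
d = 2                     # local dimension for spin-1/2
chi = 8                   # assumed link dimension entering the factorization
dL = d * chi              # dimension of the "left" side: 16
requested_maxdim = 200
effective_maxdim = min(requested_maxdim, dL)   # clipped to 16
```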

I guess that makes sense, since the noise term can make those eigenvalues nonzero.

Maybe we can just remove that maxdim restriction if !isnothing(eigen_perturbation), or set it to max(dL, dR). @ryanlevy, any comments on this?
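A sketch of what the first option could look like in matrix_decomposition.jl (hypothetical, not a tested patch; eigen_perturbation stands for the noise tensor passed down from DMRG):

```julia
Lis = commoninds(A, indices(Linds...))
Ris = uniqueinds(A, Lis)
dL, dR = dim(Lis), dim(Ris)
if isnothing(maxdim)
  maxdim = min(dL, dR)
end
# Only clamp when there is no noise term, since the perturbation
# can legitimately grow the rank beyond min(dL, dR):
if isnothing(eigen_perturbation)
  maxdim = min(maxdim, min(dL, dR))
end
```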

Probably the noise term should not be handled inside of factorize since it doesn’t seem like the right place for it, but that’s a separate issue.

@titaschanda feel free to make a PR to fix this, not sure I’ll be able to get to it soon.

Sorry about my delayed reply. max(dL, dR) is still a bit strange to me, since I would imagine that factorize, as a standalone operation, shouldn't be allowed to grow the rank. But for just eigen, I think the max restriction is reasonable for the moment.

Right, but when you pass a noise term, you are actually factorizing a sum of two tensors (one built from the original tensor and one from the noise term), so I think the issue is that the rank of that sum can exceed the maximum dimension set in Fix behavior of factorize mindim by ryanlevy · Pull Request #1214 · ITensor/ITensors.jl · GitHub.
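A tiny standalone illustration of that rank-growth point (plain matrices, not ITensors):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]      # rank 1, plays the role of the original tensor
B = [0.0 0.0; 0.0 1e-8]     # rank 1, a tiny "noise" perturbation

rank(A)       # 1
rank(A + B)   # 2: the sum's rank exceeds the rank of A alone
```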

@titaschanda I proposed a fix in [ITensors] Fix `eigen_perturbation` rank by mtfishman · Pull Request #1302 · ITensor/ITensors.jl · GitHub, could you test it out and see if it fixes the issue you are seeing?

Hi @mtfishman, great! I will test this over the weekend and get back to you. Enormously grateful, as always!

Hi Matt, it works!! :smiley:

1 Like

Glad to hear it, I merged the PR and it will be included in the next release.