Subspace expansion "noise" in DMRG

Does anyone have an implementation of subspace-expansion DMRG, as an alternative to the density matrix “noise” perturbation available directly from ITensors.dmrg? I am having convergence trouble in a large DMRG calculation, and not only does the density matrix perturbation not appear to help, it quickly dominates the run time at even modest bond dimensions.
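For reference, this is roughly how I am invoking the density matrix perturbation (a minimal sketch; `H` and `psi0` are my Hamiltonian MPO and initial MPS, and the noise schedule shown is just illustrative, not a recommendation):

```julia
using ITensors

# Illustrative DMRG run with the density matrix "noise" perturbation.
nsweeps = 10
maxdim  = [128, 256, 512, 1024]        # bond dimension schedule
cutoff  = 1e-10
noise   = [1e-5, 1e-6, 1e-7, 0.0]      # perturbation strength per sweep

energy, psi = dmrg(H, psi0; nsweeps, maxdim, cutoff, noise)
```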

Strictly single-site DMRG algorithm with subspace expansion

Thanks!
Ben

There was some work implementing 2-site expansion at 1-site cost here: Subspace expansion by b-kloss · Pull Request #23 · ITensor/ITensorTDVP.jl · GitHub and here: Subspace expansion by wladiKrin · Pull Request #160 · ITensor/ITensorNetworks.jl · GitHub (essentially an implementation of the approach discussed in [2403.00562] Comment on "Controlled Bond Expansion for Density Matrix Renormalization Group Ground State Search at Single-Site Costs" (Extended Version)), but that has stalled for now. It is definitely a high priority for us, but we haven’t had a chance to get back to it and finish it up. It requires adding some basic tooling, like a generic randomized SVD that supports general linear operators and QN-conserving ITensors, re-implementing the expansion based on that, and integrating it in a nice way into the solvers like DMRG and TDVP.

I was going to suggest the global subspace expansion introduced in GitHub - ITensor/ITensorTDVP.jl: Time dependent variational principle (TDVP) of MPS based on ITensors.jl if the density matrix perturbation is failing, but since you say cost is also an issue, global expansion will likely cost even more than the density matrix perturbation.

Excellent, thanks for the info! I’m glad it’s on your radar, as I would be very interested in trying it out. Although perhaps not interested enough to go digging around in those PRs…

I’ve tried using ITensorTDVP.expand as well as simply diagonalizing in the Krylov subspace. Both are better than no perturbation, but I’m undecided whether they are better than the density matrix perturbation. Neither fixes the underlying problem: the relative error in the energy remains stuck at about 1% as I increase the bond dimension from 128 to 1024.
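In case it helps anyone reading later, this is roughly how I’m calling the global expansion before a sweep (a sketch only; the keyword names are from memory and may not match the current ITensorTDVP interface exactly, so check its source):

```julia
using ITensors, ITensorTDVP

# Globally expand the MPS basis with Krylov vectors of H, then run
# DMRG on the expanded state. Keywords are approximate, not authoritative.
psi_expanded = expand(psi, H; alg="global_krylov", krylovdim=3, cutoff=1e-8)
energy, psi = dmrg(H, psi_expanded; nsweeps=1, maxdim=1024, cutoff=1e-10)
```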

As for how I do the global expansion: I use the “zip-up” algorithm (with a slight change it can support MPO–MPS products as well, and it appears to be faster and use less memory than the density matrix approach) to get \ket{r} \approx (H - E) \ket{\psi_\text{truncated}}, and then use the “fit” algorithm from ITensorTDVP to variationally converge toward \ket{r} \approx (H - E) \ket{\psi}. This is reasonably efficient and appears to work reasonably well, although in general I can’t calculate \braket{\psi | H^\dagger H | \psi} to check exactly how good the approximation is.
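Sketched in code, the workflow looks something like the following (hedged: the `alg="zipup"` backend for MPO–MPS products is my local modification, not stock ITensors, and the exact `contract`/`init` keywords for the fit step may differ in ITensorTDVP):

```julia
using ITensors, ITensorTDVP

# Cheap first pass: zip-up MPO-MPS product (my modified backend;
# stock alternatives are alg="densitymatrix" or alg="naive").
Hpsi0 = apply(H, psi; alg="zipup", cutoff=1e-8)

# Variationally refine H|psi> with the "fit" algorithm, seeded by the
# zip-up result so only a few sweeps are needed.
Hpsi = ITensorTDVP.contract(H, psi; alg="fit", init=Hpsi0)

# Subtract the energy expectation value to form |r> ≈ (H - E)|psi>.
E = inner(psi', H, psi)
r = Hpsi - E * psi
```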

That’s a good idea to use the zip-up algorithm for that since it doesn’t require high accuracy, we should add a zip-up backend for MPO*MPS…

I wish I could take credit for it, but it was outlined in Time-evolution methods for matrix-product states:

Our experience: The zip-up method (Section 2.8.3) is typically sufficiently accurate and fast. In some cases, it is worthwhile to follow up on it with some additional sweeps of the variational optimization (Section 2.8.2) to increase accuracy.

I can submit a PR if you want, I think it was really just a few lines.


That sounds like a good idea for a PR, though ideally it would be written generically so that a single version handles both MPO*MPO and MPO*MPS, say by writing it in terms of AbstractMPS.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.