Anyone have an implementation of subspace expansion DMRG, as an alternative to the density matrix “noise” perturbation available directly from ITensors.dmrg? I am having convergence trouble in a large DMRG calculation, and not only does the density matrix perturbation not appear to help, it also quickly dominates the run time even at modest bond dimensions.
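For concreteness, by the density matrix perturbation I mean the noise keyword of the stock interface, roughly like the sketch below (the sweep schedule and bond dimensions are placeholder values, not my actual settings):

```julia
using ITensors

# H::MPO and psi0::MPS are assumed to be defined already.
# Placeholder sweep schedule; the noise entries turn the density matrix
# ("noise") perturbation on and then off over the sweeps.
energy, psi = dmrg(H, psi0;
                   nsweeps=20,
                   maxdim=[128, 256, 512, 1024],
                   cutoff=1e-10,
                   noise=[1e-5, 1e-6, 1e-7, 0.0])
```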
Excellent, thanks for the info! I’m glad it’s on your radar, as I would be very interested in trying it out. Although perhaps not interested enough to go digging around in those PRs…
I’ve tried using ITensorTDVP.expand as well as simply diagonalizing in the Krylov subspace. Both are better than no perturbation at all, but I’m undecided on whether they are better than the density matrix perturbation. Neither fixes the underlying problem: the relative error in the energy remains stuck at 1% as I increase the bond dimension from 128 to 1024.
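To be concrete about the Krylov option, what I mean is roughly the following sketch (not my exact code; the cutoff/maxdim values are placeholders, and the generalized eigenproblem is handled with LinearAlgebra):

```julia
using ITensors, LinearAlgebra

# Build a small Krylov basis {|psi>, H|psi>, H^2|psi>}; cutoff/maxdim are placeholders.
basis = MPS[psi]
for _ in 1:2
  push!(basis, apply(H, basis[end]; cutoff=1e-10, maxdim=512))
end

n = length(basis)
S  = [inner(basis[i], basis[j]) for i in 1:n, j in 1:n]      # overlap (Gram) matrix
Hk = [inner(basis[i]', H, basis[j]) for i in 1:n, j in 1:n]  # projected Hamiltonian

# Symmetrize to guard against roundoff, then solve the generalized eigenproblem.
F = eigen(Hermitian((Hk + Hk') / 2), Hermitian((S + S') / 2))
E_krylov = F.values[1]   # lowest energy within the Krylov subspace
```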
As for how I do the global expansion: I use the “zipup” algorithm (with a slight change it can support MPO-MPS products as well, and it appears to be faster and use less memory than the density matrix approach) to get $\ket{r} \approx (H - E) \ket{\psi_\text{truncated}}$, and then I use the “fit” algorithm from ITensorTDVP to variationally converge to $\ket{r} \approx (H - E) \ket{\psi}$. This is reasonably efficient and appears to work reasonably well, although in general I can’t calculate $\braket{\psi | H^\dagger H | \psi}$ to see exactly how good the approximation is.
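In code the two steps look roughly like this (a sketch rather than the exact implementation: alg="zipup" for an MPO-MPS product is the patched behaviour I described, since stock ITensors only accepts it for MPO-MPO, and the keyword names for ITensorTDVP’s “fit” contraction, e.g. init and nsweeps, may differ between versions):

```julia
using ITensors, ITensorTDVP

E = real(inner(psi', H, psi))   # current energy estimate <psi|H|psi>

# Step 1: fast zip-up pass (truncating as it goes), giving H|psi> up to the
# zip-up truncation error. alg="zipup" for MPO*MPS is the patched behaviour;
# cutoff/maxdim are placeholder values.
Hpsi = apply(H, psi; alg="zipup", cutoff=1e-10, maxdim=512)

# Step 2: a few sweeps of the variational "fit" contraction from ITensorTDVP,
# seeded with the zip-up result, to tighten the approximation. The keyword
# names (init, nsweeps) are from memory and may differ between versions.
Hpsi = apply(H, psi; alg="fit", init=Hpsi, nsweeps=2)

# Finally subtract E|psi> to get |r> ≈ (H - E)|psi> for the expansion.
r = Hpsi - E * psi
```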
> In our experience, the zip-up method (Section 2.8.3) is typically sufficiently accurate and fast. In some cases, it is worthwhile to follow up on it with some additional sweeps of the variational optimization (Section 2.8.2) to increase accuracy.
I can submit a PR if you want; I think it was really just a few lines.
That sounds like a good idea for a PR, though ideally it would be written generically so that there is a single version that handles both MPO*MPO and MPO*MPS, say written in terms of AbstractMPS.
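Something roughly along these lines is what I have in mind, just as an untested sketch of the dispatch idea (AbstractMPS isn’t exported, and the prime-level conventions and edge cases would need to match the existing contract methods):

```julia
using ITensors

# Sketch of a single zip-up contraction written against AbstractMPS, so that
# MPO*MPO and MPO*MPS share one method. Prime handling is left to the caller,
# mirroring contract (e.g. pass prime(A) and B for an MPO*MPO product).
function zipup(A::MPO, B::ITensors.AbstractMPS;
               cutoff=1e-13, maxdim=typemax(Int))
  N = length(A)
  N == length(B) || throw(DimensionMismatch("A and B must have the same length"))
  Cs = Vector{ITensor}(undef, N)
  R = ITensor(1.0)                       # the "zipper" carried along the chain
  for j in 1:(N - 1)
    M = R * A[j] * B[j]                  # absorb both site tensors at once
    # Indices not shared with the next pair of site tensors stay in C[j].
    left_inds = uniqueinds(M, A[j + 1], B[j + 1])
    Cs[j], R = factorize(M, left_inds; ortho="left", tags="Link,l=$j",
                         cutoff=cutoff, maxdim=maxdim)
  end
  Cs[N] = R * A[N] * B[N]
  return typeof(B)(Cs)                   # MPS in -> MPS out, MPO in -> MPO out
end
```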