Disorder in DMRG

Hi there,

So I’ve been running some DMRG calculations on a 1-D chain, implemented as

  ampo = OpSum()
  for j in 1:L-1
    ampo += f(j, j+1), "C", j, "Cdag", j+1
    ampo += f(j, j+1), "C", j+1, "Cdag", j
  end
  for j in 1:L
    for k in 1:j-1
      ampo += g(j, k), "N", j, "N", k
    end
  end

where f ~ exp(r) and g ~ 1/r are functions that depend on the distance r between each pair of sites.
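In equation form, the Hamiltonian built above (as I read my own code) is

H = \sum_{j=1}^{L-1} f(j,j+1)\left( c_j c^\dagger_{j+1} + c_{j+1} c^\dagger_j \right) + \sum_{j=1}^{L} \sum_{k<j} g(j,k)\, n_j n_k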

Now the algorithm gives good results for the perfect chain, i.e. with no disorder in the system (by that I mean the energies match ED for shorter chains, and the excited states come out ordered by their energies for the longer chains).

But I’ve noticed that once I start to add disorder, i.e. randomly varying the location of each site, the ITensor DMRG starts to give wrong results for long chains (say ~60 sites). By wrong results I mean that when I search for excited states, states with lower energies appear after states with higher energies, i.e. the excited states are not ordered by their energy.

Also, interestingly, the disordered DMRG calculation seems to get worse the longer the chain is. The short disordered-chain DMRG results are identical to the ED ones, but for longer chains the energy ordering gets scrambled.

I’m wondering if this has something to do with how long-range interactions are coded in ITensor (maybe by an approximation with exponentials?) and, if that’s the case, whether there are ways to explicitly write them out.
Or is this something physical? By my understanding even a 1D critical system should be solvable by DMRG without the cost blowing up exponentially, but I wonder if something deeper is going on.

Thanks for the question. What you are observing is almost certainly the effect of DMRG getting “stuck”, i.e. becoming trapped in a local optimum rather than finding the true ground state. It is a well-known shortcoming of DMRG that the results can depend on the state used to initialize DMRG and on details of how the algorithm is run.

The sticking problem is known, or at least expected, to be hardest to deal with in disordered systems.

The main tools you can use to combat this “sticking” issue are:

  • choosing an initial state that is “close” to the true ground state or otherwise some good initial state
  • using the “noise term” feature of DMRG

Are you using a non-zero noise value when setting your sweep parameters?
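If not, here is a sketch of how one might set a nonzero noise in the sweep parameters (this builds a simple Heisenberg chain just as a stand-in for your Hamiltonian; substitute your own MPO and initial state):

```julia
using ITensors

L = 20
sites = siteinds("S=1/2", L)

# Stand-in Hamiltonian (Heisenberg chain) just to show the sweep setup
os = OpSum()
for j in 1:L-1
  os += "Sz", j, "Sz", j+1
  os += 0.5, "S+", j, "S-", j+1
  os += 0.5, "S-", j, "S+", j+1
end
H = MPO(os, sites)
psi0 = randomMPS(sites; linkdims=10)

# Ramp up the bond dimension while decaying the noise term to zero;
# the last value of each schedule is repeated for the remaining sweeps
sweeps = Sweeps(10)
setmaxdim!(sweeps, 10, 20, 100, 200, 400)
setcutoff!(sweeps, 1e-10)
setnoise!(sweeps, 1e-5, 1e-6, 1e-7, 1e-8, 0.0)

energy, psi = dmrg(H, psi0, sweeps)
```

The idea is to use a relatively large noise in the early sweeps to kick the state out of local minima, then turn it off so the final sweeps converge cleanly.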

One other idea for you is that, since you are studying a fermionic system, then there is a very nice technology to take solutions of non-interacting systems, including disordered ones, and compute a very accurate MPS approximation of their ground state. We have implemented this method in the “ITensorGaussianMPS” package which you can find here:
which is based on this paper: [1504.07701] Compression of Correlation Matrices and an Efficient Method for Forming Matrix Product States of Fermionic Gaussian States

Using that technique based on just the non-interacting part of your Hamiltonian, or using a Hartree-Fock solution of your interacting Hamiltonian, might be an excellent way to prepare initial states.

Thank you very much for the reply.

The tests so far with relatively small noise seem to have no effect on convergence to the global minimum, and I’m looking into obtaining better initial states.

In the meantime, would it be possible for you to give an intuitive (or rigorous) explanation of the sticking problem in DMRG, i.e. why disorder makes it so much worse? Heuristically, if the perfect system were translationally invariant, I could see that disorder disrupts the local symmetry as the sweeps proceed from site to site. But my system has open boundary conditions and long-range interactions, so even for the perfect system the local Hamiltonian at each site is already not identical, and yet DMRG almost always converges to the global minimum there. I guess the answer has to depend on the details of the interactions within the system.

Regardless, I thought this was a really interesting phenomenon.

Glad you’re making some progress. The reason for the sticking problem in DMRG is that the main step of DMRG, before it does any optimization, is to project the Hamiltonian into a basis defined by the current MPS being optimized. If this MPS is too “poor” (an extreme example would be a product state), the projection of the Hamiltonian can be very drastic: the optimization is then given only part of the Hamiltonian and will not succeed at improving the MPS enough. The two-site algorithm helps a lot with this, because at least correlations between neighboring sites can be built up correctly (and other correlations too, if the MPS basis is good). But sticking can happen even for longer-ranged, non-disordered interactions, and it is quite common when conserving quantum numbers and having further-neighbor interactions at the same time.

Whether sticking is worse with disorder could depend on many details and on the choice of initial state etc., so I was mostly just intuitively guessing that disorder could make it worse. Especially when you mentioned you have longer-ranged interactions and when you mentioned not being able to find certain states I thought of the sticking problem.

Thank you very much for the reply.

I implemented the free fermion initial state calculation in my algorithm. One calculation of a randomized disorder chain yields the following results

  1 -192.19705345671255
  2 -192.11510616708222
  3 -192.10806337523604
  4 -192.03542473618216
  5 -191.98463069104702
  6 -191.96270323016614
  7 -191.95696474068114
  8 -191.94202885660698
  9 -191.92460672291062
 10 -191.8851185150214
 11 -191.8238240712801
 12 -191.68626366161303
 13 -192.45435358732954
 14 -192.412292737249
 15 -192.3426191881551
 16 -192.32189451919524
 17 -192.31932997391883
 18 -192.30128656654645
 19 -192.29134759316125
 20 -192.27212439851596
 21 -192.26749375941571
 22 -192.24459738083758

So the energies are still out of order, but there’s more structure compared to before.

I’m now wondering if the current strategy could work: if we are confident that DMRG is bounded from below by the true ground state (i.e., it will never go below the true GS energy), then every time a lower energy is obtained we are at least ‘closer’ to the true GS. We can then discard all the previous states with higher energies, since they are not eigenstates of the system, and continue the search from there.

I suppose there’s also no guarantee that this iterative process will give the true GS rather than just a lucky list of increasing energies above the GS.

I will also implement a search with states with small disorder as initial states.

Anyway, I appreciate the answers so far; they’ve been very helpful!

Good idea to use the free fermion initialization. Did you use the ITensorGaussianMPS package?

Yes you are correct that DMRG is guaranteed to never go below the true ground state energy. (Actually this is true for any wavefunction that is normalized to 1, so it includes the wavefunction computed by DMRG.)
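For completeness, the reason is the variational principle: expanding a normalized |\psi\rangle in eigenstates of H with coefficients c_n,

\langle\psi| H |\psi\rangle = \sum_n |c_n|^2 E_n \ge E_0 \sum_n |c_n|^2 = E_0

so no normalized wavefunction, DMRG-computed or otherwise, can have an energy expectation value below the true ground state energy.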

Unfortunately the possibility of “skipping over” excited states is just a drawback of the orthogonalizing method for finding excited states. However I don’t know of any better method to find excited states with DMRG – trust me, if I did we would add it to ITensor very quickly!

One other thing you can try to improve things is to try adapting the method I showed in some code here:
to your system and calculations. I believe I explain the idea there also, but basically it is to compute a “mini Hamiltonian” by projecting H into all the states you currently have, so like

h_{ij} = \langle\psi_i| H |\psi_j\rangle

and compute a similar matrix n_{ij} for the overlaps of the states, which importantly is not guaranteed to be the identity matrix, so you must include it. Then by solving a generalized eigenvalue problem you can find linear combinations of the MPS you currently have which have smaller mutual overlaps than before and energies closer to the exact ones.
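As a sketch of that idea in ITensor terms (the function name is my own invention; `inner(psi', H, phi)` and `inner(psi, phi)` are the standard ITensors.jl expectation-value and overlap functions):

```julia
using ITensors
using LinearAlgebra

# Given a Hamiltonian MPO `H` and a Vector of MPS `psis` (the states found
# so far), diagonalize H in the subspace they span by solving the
# generalized eigenvalue problem  h v = E n v.
function subspace_diagonalize(H::MPO, psis::Vector{MPS})
  M = length(psis)
  h = [inner(psis[i]', H, psis[j]) for i in 1:M, j in 1:M]  # h_ij = <psi_i|H|psi_j>
  n = [inner(psis[i], psis[j]) for i in 1:M, j in 1:M]      # n_ij = <psi_i|psi_j>
  # n is not the identity, since the MPS are only approximately orthogonal
  F = eigen(Hermitian(h), Hermitian(n))
  return F.values, F.vectors  # improved energies and mixing coefficients
end
```

The returned eigenvalues are the improved energy estimates, and the columns of the eigenvector matrix give the linear combinations of your MPS that realize them.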

Thank you very much for the reply, I’m currently looking into several different approaches.

One last quick question: currently I’m using the ‘standard deviation’ of the Hamiltonian, i.e. \sigma = \sqrt{ |\langle H^2 \rangle - \langle H\rangle ^ 2|}, to check whether the calculated state is close to a true eigenstate. Do you know if this is sufficient, and/or if there is a better approach to measuring how ‘good’ an eigenstate is?

About whether there is a better approach, is there a specific sense of “better” that you mean? Like more sensitive to deviations, or else faster to compute?

The only better approach I have heard about is an approach to approximate the variance using a method that is less exact but cheaper. I could send you a reference if you want, but it’s only better in the sense of being faster. If you can get the variance in a reasonable amount of time it’s the best measurement I am aware of.
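For reference, here is a sketch of how one might compute that variance in ITensors.jl (the function name is mine; `inner(H, psi, H, psi)` computes the overlap of H|psi> with itself):

```julia
using ITensors

# Energy variance of an MPS `psi` with respect to an MPO `H`:
#   sigma^2 = <psi|H^2|psi> - <psi|H|psi>^2
# assuming psi is normalized.
function energy_variance(H::MPO, psi::MPS)
  E  = inner(psi', H, psi)    # <psi|H|psi>
  H2 = inner(H, psi, H, psi)  # <H psi|H psi> = <psi|H^2|psi>
  return abs(H2 - E^2)
end
```

A variance near zero (relative to the energy scale) indicates the state is close to a true eigenstate.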