I am trying to set up a finite temperature MPS calculation, using METTS. The system I am interested in is the Fermi-Hubbard model on a square lattice. I am doing the imaginary time evolution using the TDVP algorithm, and sampling the wave function in the spin X-basis, similar to the publication https://journals.aps.org/prx/pdf/10.1103/PhysRevX.11.031007.
The code works fine for small onsite interaction, I think up to around U/t ~ 10. However, I am having problems going to larger U, where one would expect a transition to a ferromagnetic state at finite doping. Ideally, I would like to go up to U/t ~ 100.
The observable I am interested in is the nearest neighbor spin correlation function.
However, at larger U the autocorrelation time between subsequent METTS steps becomes very large (up to hundreds of steps), compared to the order-one autocorrelation times for smaller U (as also observed in the publication above). In some instances I even observe a kind of “trapping” of the METTS calculation at specific classical product states, i.e. the probability of sampling the same product state as in the previous step is very large (>90 percent).
Do you have experience with the parameter regime I am interested in (very large U), or is there a conceptual issue with the METTS algorithm in this regime that I am missing? I would very much appreciate your opinion on this!
Hi Johannes,
That’s interesting to hear you are trying METTS on other Hubbard-type models. The autocorrelation issue is a reasonable question, but it is unclear to me exactly why it is happening in your case. There could be a variety of reasons.
Here are some suggestions for you:
Can you write down the same model on just two sites (“Hubbard dimer”) and study the structure of the METTS in the bases you are using, for the case of very large U? That could be revealing and educational. I’m not sure what it will show, but it might be useful.
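To make the dimer suggestion concrete, here is one way such a study could look (a plain-NumPy sketch; the parameter values and variable names are purely illustrative, not from any package): diagonalize the two-site Hubbard model exactly, imaginary-time evolve a classical product state, and inspect the collapse probabilities METTS would sample from in the Sz product basis.

```python
import numpy as np

t, U, beta = 1.0, 100.0, 4.0  # illustrative values: U/t = 100, temperature T = t/4

# On-site basis ordering: |0>, |up>, |dn>, |updn>, with |updn> = c_up^+ c_dn^+ |0>
c_up = np.zeros((4, 4)); c_up[0, 1] = 1.0; c_up[2, 3] = 1.0
c_dn = np.zeros((4, 4)); c_dn[0, 2] = 1.0; c_dn[1, 3] = -1.0  # fermionic sign
F = np.diag([1.0, -1.0, -1.0, 1.0])  # Jordan-Wigner parity (-1)^(n_up + n_dn)
I4 = np.eye(4)
n_up, n_dn = c_up.T @ c_up, c_dn.T @ c_dn

# H = -t sum_s (c1s^+ c2s + h.c.) + U (n1up*n1dn + n2up*n2dn)
hop = sum(np.kron(c.T @ F, c) for c in (c_up, c_dn))
H = -t * (hop + hop.T) + U * (np.kron(n_up @ n_dn, I4) + np.kron(I4, n_up @ n_dn))

# Imaginary-time evolve the classical product state |up, dn> (index 1*4 + 2 = 6)
E, V = np.linalg.eigh(H)
psi0 = np.zeros(16); psi0[6] = 1.0
phi = V @ (np.exp(-0.5 * beta * E) * (V.T @ psi0))
phi /= np.linalg.norm(phi)

# Collapse probabilities for sampling in the Sz product basis
probs = phi**2
print("P(re-sample |up, dn>) =", probs[6])
```

At these (assumed) values the probability of re-collapsing onto the initial product state comes out close to 1, i.e. exactly the kind of trapping described above, and repeating the exercise in other local bases shows how strongly that depends on the sampling basis.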
What sort of physical temperature range are you in when you observe the autocorrelation issue? If you are (physically) at a high temperature, then all sorts of autocorrelation issues can happen if the choice of basis is not thought through carefully. At low temperatures, on the other hand, things usually go better for METTS.
If you are at low temperatures, then most of the METTS resemble the ground state, plus fluctuations. So you could just prepare the ground state using DMRG, then study taking samples of it in different bases to gauge whether you are getting widely different samples or essentially the same sample each time. For example, if your state is ordering, maybe you will get the same sample almost all the time (all spins up, say). Of course, you’ll need to be careful about one subtlety: DMRG doesn’t usually break symmetries unless you “pin” them at the boundary, whereas METTS generically does break symmetries, because the initial product states used in the time evolution effectively “choose” which way to break the symmetry.
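As a sketch of that sampling step (a hypothetical helper in plain NumPy, not a library routine): given any state vector, you can collapse it site by site onto a product state in a chosen local basis, which is the same projective measurement METTS performs after the imaginary-time evolution.

```python
import numpy as np

def sample_product_state(psi, d, n, basis=None, rng=None):
    """Collapse the state vector psi (length d**n) site by site onto a product
    state; the local measurement basis is given by the columns of the unitary
    `basis` (computational basis if None). Returns the list of outcomes."""
    rng = np.random.default_rng() if rng is None else rng
    B = np.eye(d) if basis is None else np.asarray(basis)
    psi = np.asarray(psi, dtype=complex).reshape((d,) * n)
    outcome = []
    for _ in range(n):
        psi = np.tensordot(B.conj().T, psi, axes=(1, 0))  # rotate current site
        p = (np.abs(psi.reshape(d, -1)) ** 2).sum(axis=1)
        p = p / p.sum()
        k = int(rng.choice(d, p=p))
        outcome.append(k)
        psi = psi[k] / np.sqrt(p[k])  # project and renormalize the remainder
    return outcome

# Example: a two-spin singlet gives anti-aligned outcomes both in the Z basis
# and in the X basis (Hadamard rotation), whatever the random seed
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)  # (|01> - |10>)/sqrt(2)
Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
s_z = sample_product_state(singlet, 2, 2, rng=np.random.default_rng(1))
s_x = sample_product_state(singlet, 2, 2, basis=Hd, rng=np.random.default_rng(1))
```

Running this repeatedly on a DMRG ground state (reshaped to a dense vector for small clusters) and tallying how often the same outcome recurs is one way to gauge the basis dependence discussed above.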
Finally, from what you’re saying about repeatedly sampling the same product state, it does sound like the time evolution is not really building up much entanglement: you might effectively be at a high temperature relative to the physical energy scales, or the ground state might itself be a very simple state, nearly a product state (e.g. a ferromagnet). Relatedly, you might be in a symmetry-broken phase, and that could be the culprit.
Hi Miles,
Thanks a lot for the useful tips! It is true that I observe the issues at relatively high temperatures, so it sounds reasonable that the problem lies there. I also realized that I may not have the projection error of the TDVP time evolution well enough under control. It is probably quite sizeable, since the starting bond dimension for the time evolution is of course only 1. I suspect that this could, in particular at small imaginary times, spoil the time evolution and lead to an underestimated entanglement of the calculated METTS.
I was wondering whether it makes sense to benchmark the TDVP algorithm against time-evolving block decimation (TEBD), to see how severe the projection error actually is. However, for a 2D model (in my case Hubbard), TEBD seems non-trivial to implement due to the swap gates needed in the Trotter decomposition.
Is there an example code on one of the GitHub pages for TEBD with a 2D model? I couldn’t find one, so probably not, right?
Hi Johannes, glad you’re making progress. It would be good if we had a 2D example. But fortunately, the apply function actually accepts long-range gates (meaning gates acting on any pair of sites you want) and will automatically “swap” the sites of the MPS internally so that these gates can be applied. So as a user you can just focus on inputting the gates in a sensible order, one that hopefully keeps the number of required swaps minimal.
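To illustrate the gate-ordering point, here is a small stand-alone sketch (hypothetical helper functions, not part of any package) that maps an Lx x Ly square lattice onto a 1D “snake” path and lists the bonds as MPS site pairs. Along the snake, vertical bonds act on adjacent MPS sites, while horizontal bonds have range up to 2*Ly - 1 and are the gates that trigger internal swapping.

```python
def snake_index(x, y, Ly):
    """Map a 2D coordinate to its 1D MPS site along a column 'snake' path."""
    return x * Ly + (y if x % 2 == 0 else Ly - 1 - y)

def lattice_bonds(Lx, Ly):
    """All nearest-neighbor bonds of an Lx x Ly open square lattice."""
    bonds = []
    for x in range(Lx):
        for y in range(Ly):
            if x + 1 < Lx:
                bonds.append(((x, y), (x + 1, y)))
            if y + 1 < Ly:
                bonds.append(((x, y), (x, y + 1)))
    return bonds

def mps_bonds(Lx, Ly):
    """Bonds as (i, j) MPS site pairs, sorted so that successive gates act on
    nearby regions of the chain (less swapping back and forth)."""
    pairs = [tuple(sorted((snake_index(*a, Ly), snake_index(*b, Ly))))
             for a, b in lattice_bonds(Lx, Ly)]
    return sorted(pairs)
```

For a 3 x 3 lattice this gives 12 bonds, with vertical bonds at range 1 and horizontal bonds at range up to 5; feeding the Trotter gates to apply in an order like this should keep the swap overhead modest.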
For an initial seed state (product state) of a METTS, the swap overhead should be negligible for the first few time steps, so using TEBD through our apply function could be a very good way to take the initial time steps and build up the bond dimension.
Then for the next range of imaginary times (say roughly from tau = 0.5 to 1, or maybe 1 to 2), you could try two-site TDVP, before finally switching to one-site TDVP. Or you might want to use two-site TDVP for all of the remaining time.
I would recommend doing tests on small clusters of sites where you can also use exact diagonalization / full state calculations of the METTS as a comparison. Or you could test against the U=0 noninteracting limit and so on.
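As one concrete version of the U = 0 check (a sketch assuming open boundaries; the function names are made up for illustration): the noninteracting model is a free tight-binding model, so finite-temperature expectation values follow directly from the single-particle spectrum and the Fermi function, e.g. the grand-canonical energy:

```python
import math

def tb_spectrum(Lx, Ly, t=1.0):
    """Single-particle eigenvalues of the open-boundary tight-binding model on
    an Lx x Ly square lattice (the spectrum is separable in x and y)."""
    return [-2.0 * t * (math.cos(math.pi * kx / (Lx + 1))
                        + math.cos(math.pi * ky / (Ly + 1)))
            for kx in range(1, Lx + 1) for ky in range(1, Ly + 1)]

def thermal_energy(Lx, Ly, beta, mu=0.0, t=1.0):
    """Grand-canonical energy at inverse temperature beta; factor 2 for spin."""
    return sum(2.0 * e / (1.0 + math.exp(beta * (e - mu)))
               for e in tb_spectrum(Lx, Ly, t))
```

For a 2 x 2 cluster at mu = 0, for instance, this tends to -4t as T goes to zero and to 0 at infinite temperature, which gives simple limits to compare a METTS run against.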