convergence issue with Hubbard chain


I am trying to perform some calculations for the Hubbard model (hopping t and repulsive +U) on a chain away from half filling. In a previous question I asked about sweep parameters for the clean case, and your suggestion of starting with a very small bond dimension (10, 20, etc.) for a few sweeps does help the convergence. However, the same sweep schedule does not help the convergence once I introduce bond disorder into the problem. I have tried a couple of sweep schedules, and in each case the entanglement entropy first increases, then after a few sweeps starts to decrease, but never really converges.

My colleague, who uses a conventional-style DMRG code, performed the same calculation (with disorder) using very high bond dimensions, alternating between 4000 and 1000, and achieved convergence in fewer than 20 sweeps. Is there a fundamental difference between the DMRG algorithm as implemented in ITensor and conventional DMRG?

He also achieved convergence with lower max_D = min_D (say 400), at slightly reduced accuracy. One difference is that I always start from a product state, whereas he starts from a random initial state. I set up the initial state with the holes spread evenly throughout the chain, and the electrons in between arranged so that the total spin of the state is zero. I do this because, to my understanding, if QN conservation is turned on (the default configuration), the system will remain in the symmetry sector fixed by the initial state. I want to check whether my convergence issue can be resolved by using a random MPS. Can you suggest possible ways to do so?

I tried to perform a calculation with max_D = min_D = 4000 and cutoff = 0 or 10^-16. In some of the calculations, even though the sweeps input is read correctly, the bond dimension is truncated to 16 in sweep 1 and 256 in sweep 2. Can you guess why that might be happening?

I would also like to speed up my calculation by using multiple processors. Can you guide me on how this can be done?

Hi, there is no fundamental difference between ITensor DMRG and other DMRG codes. The main differences are in which features they offer (such as the noise term or other subspace expansions), how fast the underlying linear algebra library is, whether they support multithreading, etc.

Regarding maxdim, cutoff, and the other sweep settings, it really requires trying different things out on smaller systems, because every Hamiltonian is different and the physics can vary quite a lot. So I would encourage you to experiment.
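As one concrete starting point for that experimentation, here is a sketch of how a sweep schedule is set up in the C++ interface (assuming ITensor v3; the particular maxdim ramp and noise values below are only illustrative, not a recommendation for your model — a nonzero noise term during the early sweeps often helps with disordered systems):

```cpp
#include "itensor/all.h"
using namespace itensor;

int main()
{
    int N = 100;
    auto sites = Electron(N, {"ConserveQNs=", true});

    // ... build H with AutoMPO and an initial product state psi0 here ...

    // Ramp maxdim up gradually and keep a decaying noise term on
    // during the early sweeps to help escape local minima.
    auto sweeps = Sweeps(20);
    sweeps.maxdim() = 10, 20, 50, 100, 200, 400, 800;
    sweeps.cutoff() = 1E-10;
    sweeps.noise() = 1E-5, 1E-6, 1E-7, 1E-8, 0.0;

    // auto [energy, psi] = dmrg(H, psi0, sweeps, {"Quiet=", true});
    return 0;
}
```

Values not listed for later sweeps just repeat the last one given, so the final sweeps here all run at maxdim 800 with zero noise.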

Yes, we do have a function that makes a random MPS, but right now the C++ version of ITensor only handles the non-quantum-number case. The Julia version does have a randomMPS function that supports quantum numbers.
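For reference, the non-QN C++ call is just the following (a minimal sketch, again assuming ITensor v3):

```cpp
#include "itensor/all.h"
using namespace itensor;

int main()
{
    int N = 100;
    // Note: ConserveQNs must be off here, since the C++ randomMPS
    // currently only supports dense (non-QN) site sets.
    auto sites = Electron(N, {"ConserveQNs=", false});
    auto psi0 = randomMPS(sites);
    // psi0 can then be passed to dmrg(...) as the initial state
    return 0;
}
```

Of course, without QN conservation you lose the guarantee of staying in a fixed particle-number and spin sector, which is why the QN-supporting Julia version is the better option for your use case.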

For multithreading we also have more support in the Julia version, along with extensive documentation on it. The Julia DMRG code is also faster overall, so if you are concerned with these performance questions I would encourage you to look into it.

That said, here is how to enable multithreading in the C++ DMRG code:

  • When using conserved quantum numbers, you can set the environment variable OMP_NUM_THREADS to control the number of threads used to parallelize over the separate non-zero blocks within the tensors.
  • You can set an environment variable such as MKL_NUM_THREADS or OPENBLAS_NUM_THREADS (whichever matches the BLAS you are using) to separately control the number of threads used by your BLAS.
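As a concrete sketch, these variables are set in the shell before launching your DMRG executable (the thread count and program name below are just placeholders):

```shell
# Parallelize over QN blocks with 4 OpenMP threads
export OMP_NUM_THREADS=4

# Turn off the BLAS library's own multithreading (MKL shown here)
export MKL_NUM_THREADS=1

# then run the DMRG program, e.g.:
# ./my_dmrg_program   (hypothetical executable name)
```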

In practice, if you are using MKL, say, it is usually best to turn off the BLAS multithreading and turn on the OMP (block) multithreading. For more information about the kind of performance gains possible with these choices, see the benchmarks in the ITensor paper: