DMRG 2D Hubbard slow convergence

Hello everyone,

In my PhD research on the 2D Hubbard model for cuprates I use the C++ DMRG code of ITensor.

I use a cluster model with one d- and four p-orbitals.
Currently I am trying to calculate the cluster as a ladder of 2x8 d-orbitals, where every d-orbital has four p-orbital neighbors.
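
For reference, here is a minimal sketch of how a Hubbard-type Hamiltonian can be set up with AutoMPO in the C++ ITensor library. This is not my actual cluster code: the single-band nearest-neighbor hopping and the values of t and U are placeholders standing in for the d-p cluster terms.

#include "itensor/all.h"
using namespace itensor;

int main()
{
    int N = 16;                 // number of orbitals along the 1D MPS path
    auto sites = Electron(N);   // Hubbard-type site set with quantum numbers
    auto ampo = AutoMPO(sites);
    Real t = 1.0, U = 8.0;      // placeholder hopping and interaction values
    for(int j = 1; j < N; ++j)
    {
        ampo += -t, "Cdagup", j, "Cup", j+1;
        ampo += -t, "Cdagup", j+1, "Cup", j;
        ampo += -t, "Cdagdn", j, "Cdn", j+1;
        ampo += -t, "Cdagdn", j+1, "Cdn", j;
    }
    for(int j = 1; j <= N; ++j) ampo += U, "Nupdn", j;
    auto H = toMPO(ampo);
    // ... build an initial state, define the sweeps below, and call dmrg(H, psi0, sweeps) ...
    return 0;
}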

My settings are:
auto sweeps = Sweeps(10);
sweeps.maxdim() = 100,200,400,800,1600,2000,3000,4000,5000,6000;
sweeps.cutoff() = 1E-5,1E-6,1E-7,1E-8,1E-8,1E-8,1E-8,1E-8,1E-8,1E-8;
sweeps.niter() = 10,8,6,4,4,4,4,4,4,4;
sweeps.noise() = 1E-5,1E-6,1E-7,1E-8,1E-10,1E-10,1E-10,1E-10,1E-10,1E-10;

I have found that with the way I use DMRG the convergence is very slow, and the difference between the energies at maxdim=5000 and maxdim=6000 is too large.
I compared the results with the DMRG code of my professor and saw that his DMRG converges much faster, so with his code I could use a lower maxdim to get good results. However, his DMRG code is very old and does not use MPI. That's why I want to try ITensor DMRG, to get results faster for bigger systems.

I think I am using ITensor DMRG incorrectly and want to ask you for tips: what could my mistake be, and how can I use the full potential of ITensor DMRG?

The URL for my attachments:

Thanks and best regards,

Polat

Hi Polat,
Thanks for your question.

Your sweeps definition looks reasonable, so based on that information alone it doesn’t seem like you are using ITensor DMRG incorrectly.

It’s hard for us to immediately say why you are getting different results compared to your advisor's code.

Here are some possibilities to explore:

  1. What are the actual bond dimensions that your advisor's code reaches for the state during its sweeps, and what bond dimensions does ITensor reach? Note that when you set a maxdim of, say, 8000, but also a cutoff of 1E-8 (which is a very reasonable setting), ITensor might not actually reach a bond dimension of 8000 during that sweep: it starts from the lower bond dimension of the previous sweep, and even though the bond dimension grows over the sweep, it might only reach, say, 7500 because of the cutoff. (One way to check the realized bond dimension is shown in the sketch after this list.)

  2. Are both codes using the same “BLAS” backend for doing the matrix or tensor contractions? Which BLAS are you using?

  3. Are both codes using conserved quantum numbers, and using the same set of quantum numbers?

  4. Are both codes using the same MPS path through the system, in terms of how the orbitals are ordered in a 1D fashion?

  5. Is your advisor’s code using 2-site DMRG or 1-site DMRG?
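
To follow up on point 1, here is a minimal sketch of how to see the bond dimension ITensor actually reaches, assuming you already have the Hamiltonian H, an initial state psi0, and the sweeps object from your post. The default per-sweep printout of dmrg() reports the largest bond dimension used in each sweep, and maxLinkDim() returns it for the final MPS:

// Sketch only: H, psi0, and sweeps are assumed to be defined already.
auto [energy, psi] = dmrg(H, psi0, sweeps); // per-sweep output shows the maxdim actually reached
printfln("Largest bond dimension of the final MPS = %d", maxLinkDim(psi));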

There could be other possibilities. It’s hard to say or guess without more information.

Best regards,
Miles

Hi Miles,

thank you for your tips.
The solution to my problem was your tip 4.
I used a lattice mesh code where I numbered the sites horizontally. I have now changed it to number them vertically, so that I get the snake path used in the DMRG literature.
This solved my problem and I now get very fast convergence.
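
In case it helps anyone later, here is a minimal sketch (a simplified version of the idea, not my full p-d cluster code) of the snake numbering: sites are visited column by column, with the vertical direction reversed on every other column, so that consecutive MPS sites stay nearest neighbors on the lattice.

#include <cstdio>

// Map a lattice site (x, y) of an Ny x Nx cluster to a 1-based MPS index
// along a snake (zigzag) path.
int snakeIndex(int x, int y, int Ny)
{
    int yy = (x % 2 == 0) ? y : (Ny - 1 - y);  // reverse every other column
    return x * Ny + yy + 1;
}

int main()
{
    const int Nx = 8, Ny = 2;  // e.g. the 2x8 ladder of d-orbitals
    for(int y = Ny - 1; y >= 0; --y)
    {
        for(int x = 0; x < Nx; ++x) std::printf("%3d ", snakeIndex(x, y, Ny));
        std::printf("\n");
    }
    return 0;
}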

Thank you very much for the tip :slight_smile:

Next question: I can request several CPUs from our computing center for each computing job. Does ITensor DMRG automatically use these CPUs for the calculations, so that I get faster results?
If not, how can I tell ITensor to use N CPUs for the calculations?

BR

Polat

Glad your issue got fixed!

We have a guide to using multithreading with ITensor here <https://itensor.github.io/ITensors.jl/stable/Multithreading.html>.

Basically, if you are using quantum numbers, ITensor can parallelize over the sparse blocks if you enable it properly, as discussed in the link above. You can check your CPU usage and do some test timings to see if it’s working (it rarely gives perfect parallelism but can still be faster).
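
Note that the linked guide describes the block-sparse multithreading of the Julia version (ITensors.jl). For the C++ version, a rough sketch of the usual approach: the dense tensor contractions are handed off to BLAS/LAPACK, so you typically get a multicore speedup by linking against a threaded BLAS such as MKL or OpenBLAS and setting its thread count in your job script, e.g. with the environment variables OMP_NUM_THREADS or MKL_NUM_THREADS. The small program below only illustrates checking what was requested; it is not an ITensor API.

#include <cstdlib>
#include <iostream>

int main()
{
    // The number of BLAS threads is normally set before the program starts,
    // e.g. in the batch script:
    //   export OMP_NUM_THREADS=8    (OpenMP-threaded BLAS such as OpenBLAS)
    //   export MKL_NUM_THREADS=8    (Intel MKL)
    const char* n = std::getenv("OMP_NUM_THREADS");
    std::cout << "OMP_NUM_THREADS = " << (n ? n : "(not set)") << "\n";
    // ... then run the usual ITensor DMRG calculation ...
    return 0;
}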