Near factor of two memory improvement for DMRG?

When you make an MPO for some common terms in a Hamiltonian, such as a hopping operator cdag_i c_j, you find that the terms with i < j occupy rows/columns of the MPO matrices that are quite distinct from those of the i > j terms. It is natural, and just as fast or faster, I think, to make two separate MPOs for these two cases, each with about half the rows/columns, and treat them as a sum in DMRG. Other terms, such as Sz_i Sz_j, are already Hermitian, so one might make three MPOs: Hermitian terms, non-Hermitian i < j terms, and non-Hermitian i > j terms.
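For concreteness, here is a minimal sketch of the splitting in ITensors.jl/ITensorMPS.jl for a small Hubbard chain; the system size, parameters, and sweep settings are just placeholders. The `dmrg` method that accepts a `Vector{MPO}` treats the MPOs as an implicit sum:

```julia
using ITensors, ITensorMPS

# Toy Hubbard chain: split the hopping into i<j and i>j MPOs plus a
# separate MPO for the Hermitian on-site interaction, then hand the
# pieces to dmrg as a sum.
N = 20
t, U = 1.0, 4.0
sites = siteinds("Electron", N; conserve_qns=true)

os_lower = OpSum()  # hopping terms with i < j
os_upper = OpSum()  # hopping terms with i > j (Hermitian conjugates of the above)
os_diag = OpSum()   # Hermitian on-site terms
for i in 1:(N - 1)
  os_lower += -t, "Cdagup", i, "Cup", i + 1
  os_lower += -t, "Cdagdn", i, "Cdn", i + 1
  os_upper += -t, "Cdagup", i + 1, "Cup", i
  os_upper += -t, "Cdagdn", i + 1, "Cdn", i
end
for i in 1:N
  os_diag += U, "Nupdn", i
end

H_lower = MPO(os_lower, sites)
H_upper = MPO(os_upper, sites)
H_diag = MPO(os_diag, sites)

psi0 = random_mps(sites, [isodd(n) ? "Up" : "Dn" for n in 1:N]; linkdims=10)

# dmrg accepts a Vector{MPO} and treats it as the sum of the MPOs.
energy, psi = dmrg([H_lower, H_upper, H_diag], psi0; nsweeps=5, maxdim=200, cutoff=1e-10)
```

Each of the two hopping MPOs has roughly half the bond dimension of the combined hopping MPO, which is where the splitting pays off.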

Now one of the non-Hermitian MPOs is redundant, since the two are Hermitian conjugates of each other. So there is no need to create and store one of them. The edge tensors (ProjMPOs) for the missing one are just the conjugates of the ones you have. This could save you about a factor of two in storage in a typical case like the Hubbard model, where the MPOs for the hopping are bigger than those for diagonal Hermitian terms like nup-ndn.
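To illustrate the redundancy (not the storage trick itself): continuing the sketch above, the i > j MPO should agree with the Hermitian conjugate of the i < j one, which in ITensor terms is a `dag` plus a swap of the prime levels:

```julia
# Sanity check: H_upper is the Hermitian conjugate of H_lower, so it never
# needs to be built or stored explicitly.
H_upper_check = swapprime(dag(H_lower), 0, 1)

phi = random_mps(sites, [isodd(n) ? "Up" : "Dn" for n in 1:N]; linkdims=10)
@show inner(psi0', H_upper, phi) ≈ inner(psi0', H_upper_check, phi)
```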

Within the DMRG solver, you would need to apply both the term and its conjugate, so there would be no speedup for the solver itself. You would, however, get a modest overall speedup from not having to update some of the edge tensors.

Hi Steve,

That’s an interesting suggestion. My initial impression is that it would be best to handle this through a custom projected MPO type, since it seems a bit too specialized to put into the main library. Have you tried that? ITensorParallel.jl should provide some good examples for implementing custom projected MPO types.
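Just to make the idea concrete, here is a rough, untested sketch of what such a custom projected MPO type could look like. It represents H_lower + H_lower† while only storing the ProjMPO environments for H_lower, and applies the adjoint by contracting the primed local tensor against the conjugated environments and MPO tensors. The names `ProjMPO`, `position!`, `product`, `lproj`, `rproj`, and `site_range` come from the internal AbstractProjMPO code in ITensorMPS.jl, so this is not a stable API and the names may differ between versions; `noiseterm` and the rest of the interface would still need to be filled in before handing it to the lower-level `dmrg` method:

```julia
using ITensors, ITensorMPS
# Internal, unexported names -- these may change between versions:
using ITensorMPS: ProjMPO, position!, product, lproj, rproj, site_range

# Represents H + H^dagger using only the ProjMPO (and its environments) for H.
struct ProjMPOPlusAdjoint
  P::ProjMPO
end
ProjMPOPlusAdjoint(H::MPO) = ProjMPOPlusAdjoint(ProjMPO(H))

Base.length(A::ProjMPOPlusAdjoint) = length(A.P)
Base.eltype(A::ProjMPOPlusAdjoint) = eltype(A.P)
Base.size(A::ProjMPOPlusAdjoint) = size(A.P)

# Move the projection center; only one set of edge tensors gets updated.
function ITensorMPS.position!(A::ProjMPOPlusAdjoint, psi::MPS, pos::Int)
  position!(A.P, psi, pos)
  return A
end

# Apply the adjoint of the projected operator: K^dagger * v == dag(K) * prime(v),
# built factor by factor from the stored environments and MPO tensors.
function adjoint_product(P::ProjMPO, v::ITensor)
  Hv = prime(v)
  L = lproj(P)
  L isa ITensor && (Hv *= dag(L))  # skip the identity placeholder at the edge
  for j in site_range(P)
    Hv *= dag(P.H[j])
  end
  R = rproj(P)
  R isa ITensor && (Hv *= dag(R))
  return Hv  # comes out with the same (unprimed) indices as v
end

# The solver applies both the term and its conjugate.
function ITensorMPS.product(A::ProjMPOPlusAdjoint, v::ITensor)
  return product(A.P, v) + adjoint_product(A.P, v)
end
(A::ProjMPOPlusAdjoint)(v::ITensor) = product(A, v)
```

You would build this from the i < j hopping MPO and combine it with ordinary projected MPOs for the Hermitian pieces through a small wrapper that sums their products.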

Cheers,
Matt