Can you please explain how to calculate the memory required by the DMRG algorithm when maxdim is set to a fixed value D? When submitting a job to a cluster, one could then specify this rough approximation as the memory request and avoid having the job killed by an out-of-memory error.

For example, I would assume the effective Hamiltonian is the largest object in the DMRG algorithm. When implemented as a linear map instead of a full matrix, it requires roughly (D x D x 2 x 2 x 16)/1e6 MB (16 bytes per complex number), whereas the full matrix would need 2 x (D x D x D x D x 2 x 2 x 16)/1e6 MB, where the factor of 2 accounts for permuting indices, which requires an out-of-place data copy.
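To make the comparison concrete, here is a small sketch (a hypothetical helper, not ITensor code) that evaluates both of the estimates above for a given D:

```python
def effham_mb(D, bytes_per_elem=16):
    """Rough memory estimates (MB) for the two-site effective Hamiltonian.

    bytes_per_elem=16 corresponds to complex double precision.
    """
    # Linear-map implementation: storage on the order of one two-site
    # wavefunction block of shape (D, 2, 2, D).
    linear_map = D * D * 2 * 2 * bytes_per_elem / 1e6
    # Dense-matrix implementation: D^2 * 2 * 2 entries squared in total,
    # doubled for the out-of-place copy needed when permuting indices.
    full_matrix = 2 * (D**4 * 2 * 2 * bytes_per_elem) / 1e6
    return linear_map, full_matrix

# Example: D = 100 gives about 0.64 MB as a linear map versus
# about 12800 MB (12.8 GB) as an explicit dense matrix.
print(effham_mb(100))
```

This illustrates why the linear-map formulation is essential at large bond dimension: the dense form grows as D^4 rather than D^2.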

Thanks in advance for the help, and I also have to give credit to the ITensors team for the impressive speed of the Julia DMRG code!

Hi, I think only a rather rough estimate is possible, since the exact memory usage can depend on subtle details, not least of which is how the Julia "garbage collector" behaves (i.e., how long it lets memory pile up before it is freed by the Julia runtime).

That being said, a good order-of-magnitude estimate can be made this way: say the bond dimension of your Hamiltonian MPO is k and the bond dimension of your MPS is D. You are right that the effective Hamiltonian is the largest object made by the DMRG algorithm. Each tensor making up this object (an environment tensor) has two indices of size D and one of size k, for a total memory size proportional to D^2 k. Finally, there are about N of these tensors, where N is the number of sites.
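Putting those counts together, a rough estimate (again a hypothetical helper, assuming 16-byte complex entries and ignoring any prefactor) would look like:

```python
def environment_mb(N, D, k, bytes_per_elem=16):
    """Order-of-magnitude memory estimate (MB) for the DMRG environments.

    N: number of sites, D: MPS bond dimension, k: MPO bond dimension.
    Each of the ~N environment tensors has two indices of size D and
    one of size k, i.e. about D^2 * k elements.
    """
    return N * D**2 * k * bytes_per_elem / 1e6

# Example: N = 100 sites, D = 1000, k = 5 gives about 8000 MB (8 GB).
print(environment_mb(100, 1000, 5))
```

Note this is only the leading-order term; the true usage carries a prefactor and transient allocations on top.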

So I would estimate the memory usage will scale as N D^2 k. There will be some prefactor, though, so you could measure your actual memory usage for a few moderate values of D, fit those measurements to a curve, and extrapolate to estimate what the usage might reach for larger values of D.
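The fitting step above can be sketched as follows (a hypothetical helper; with N and k fixed, the leading term is quadratic in D, so a one-parameter least-squares fit through the origin suffices):

```python
def extrapolate_memory(measured, D_target):
    """Fit memory ~ a * D^2 to measured (D, memory_MB) pairs and
    evaluate the fit at D_target.

    measured: list of (D, memory_MB) tuples from actual runs at
    moderate bond dimensions.
    """
    # Least-squares fit of memory = a * D^2 through the origin:
    # a = sum(m_i * D_i^2) / sum(D_i^4).
    num = sum(mem * D**2 for D, mem in measured)
    den = sum(D**4 for D, _ in measured)
    a = num / den
    return a * D_target**2

# Example: runs at D = 100 and D = 200 used 200 MB and 800 MB,
# so the fit predicts about 20000 MB (20 GB) at D = 1000.
print(extrapolate_memory([(100, 200.0), (200, 800.0)], 1000))
```

In practice you might add a constant offset to the fit to capture D-independent overhead, but for a cluster memory request the quadratic term is the part that matters at large D.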