Hi,
I’m studying quantum-number (QN) conserving DMRG calculations. In ordinary DMRG it is sufficient to optimize only the MPS elements, but in a QN-conserving calculation I think the representations of the QNs on each link should also be optimized variationally. arXiv:1710.03733 shows one method; does ITensor adopt a similar method?
I looked at the paper but it’s quite long. Could you point me to the part that discusses the SVD?
Also, I’d be happy to explain how the SVD works for QN-conserving ITensors. The input tensor being SVD’d has a set of non-zero blocks, and each of these blocks is SVD’d separately. The squared singular values from all of the block SVDs are collected into a single array and sorted. Then the truncation logic, a combination of the maxdim (maximum rank) and the truncation error cutoff, is used to determine a threshold: any singular value larger than this threshold is kept, and the rest are discarded. Finally, this same threshold is used to truncate each of the block SVDs.
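The steps above can be sketched in plain NumPy. This is not ITensor’s actual implementation, just a minimal illustration of the idea, assuming blocks are stored as a dict from a QN label to a dense matrix, and assuming `cutoff` means the allowed discarded squared-singular-value weight relative to the total (modeled on how such cutoffs typically work):

```python
import numpy as np

def blocksparse_svd(blocks, maxdim=10, cutoff=1e-8):
    """Sketch of a QN-block-sparse SVD with global truncation.

    `blocks` maps a QN label to a dense matrix (one non-zero block).
    Returns a dict of per-block (U, s, Vh) factors after truncation.
    """
    # 1. SVD each non-zero block independently.
    factors = {qn: np.linalg.svd(b, full_matrices=False)
               for qn, b in blocks.items()}

    # 2. Collect squared singular values from all blocks, sorted descending.
    all_sq = np.sort(np.concatenate([s**2 for _, s, _ in factors.values()]))[::-1]

    # 3. Truncation logic: keep at most maxdim values, then keep discarding
    #    the smallest ones while their summed relative weight stays below
    #    `cutoff`. The smallest kept value sets a single global threshold.
    total = all_sq.sum()
    keep = min(maxdim, len(all_sq))
    while keep > 1 and all_sq[keep - 1:].sum() / total <= cutoff:
        keep -= 1
    threshold = all_sq[keep - 1]

    # 4. Apply the same threshold to every block; different blocks may keep
    #    different numbers of singular values, possibly none (sector removed).
    truncated = {}
    for qn, (U, s, Vh) in factors.items():
        n = int(np.sum(s**2 >= threshold))
        if n > 0:
            truncated[qn] = (U[:, :n], s[:n], Vh[:n, :])
    return truncated
```

For example, with two diagonal blocks whose second singular values are tiny, each block independently keeps only its dominant singular value, even though the threshold was computed globally.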
One other important detail: in this blocked SVD there is, in principle, some freedom in how the blocks are arranged among the factor tensors U, S, and V. We choose to arrange them so that the non-zero blocks of U and of V give these tensors zero quantum number “flux”, meaning that contracting with them does not change the total quantum number of any tensor. This is the proper behavior for isometries, since they just enact a change of basis plus a projection.
Also, yes, our SVD does optimize the QN representations on each link, if I take your meaning. Because of the global thresholding approach described above, after the SVD new QNs can appear on the link (the new index connecting U to S and to V), QN subspaces can be truncated by different amounts, and subspaces can even be dynamically removed.
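A toy illustration of that last point: if every singular value in some QN sector falls below the global threshold, that sector keeps zero states and simply disappears from the new link. The sector labels and values below are made up for illustration, not taken from ITensor:

```python
import numpy as np

# Hypothetical per-sector singular values on the new link.
sector_svals = {
    "Sz=0": np.array([0.9, 0.3]),
    "Sz=1": np.array([0.2]),
    "Sz=2": np.array([1e-9]),  # weight far below the cutoff
}

# Global threshold on squared singular values (assumed value, for the sketch).
threshold_sq = 1e-8

# Each sector keeps only the singular values above the threshold; sectors
# that keep none are removed from the link entirely.
new_link = {qn: s[s**2 >= threshold_sq] for qn, s in sector_svals.items()}
new_link = {qn: s for qn, s in new_link.items() if len(s) > 0}

print(sorted(new_link))  # the "Sz=2" sector has been dynamically removed
```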