implement full QR/SVD with QNs preserved

I am aware that full QR and SVD factorizations are not currently supported in ITensors.jl for block-sparse tensors. However, I need this functionality, and would like to tackle the implementation on my own. More specifically, I would like to compute the nullspace of a given tensor (relative to the given choice of left and right indices), so only QR is required.

Are there any pointers, warnings, etc. that you can give me as I attempt this? Mainly, I am not very familiar with how the blocks are handled internally or how to iterate over them. Is each block a dense tensor? I would like to have some loop like:

```julia
for block in T
    n = nullspace(block)
    # ...
end
```

where `nullspace(block)` is the function `nullspace` called on the dense tensor `block`. This part would be easy, as it simply calls the LinearAlgebra.jl package to do the QR of the dense block treated as a matrix.
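For concreteness, here is the kind of dense per-block step I have in mind, using only LinearAlgebra (the function name `nullspace_qr` and the tolerance `atol` are just placeholders of mine):

```julia
using LinearAlgebra

# Nullspace of A via a pivoted QR of its transpose: Aᵀ P = Q R, so the
# trailing columns of the full Q are orthogonal to every row of A and
# hence span null(A).
function nullspace_qr(A::AbstractMatrix; atol=1e-12)
    n = size(A, 2)
    F = qr(Matrix(A'), ColumnNorm())
    r = count(x -> abs(x) > atol, diag(F.R))  # numerical rank
    Q = F.Q * Matrix(I, n, n)                 # materialize the full Q
    return Q[:, r+1:n]                        # orthonormal nullspace basis
end

A = [1.0 2.0 3.0; 2.0 4.0 6.0]  # rank 1, so the nullspace has dimension 2
N = nullspace_qr(A)
@show norm(A * N)  # ≈ 0
```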

Thanks for any help!

Hey,
I came across the same problem, and I had to implement this on my own. I have a semi-organized Jupyter notebook which has the required functions implemented. It provides a full QR decomposition for the dense and QN-sparse cases, together with an implementation of the thin QR decomposition on QN-sparse ITensors.
By full I mean that you also get back a second isometry parametrizing the orthogonal space if your sparse matrix was not square. I think all routines are working, but I have not done extensive testing :D
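To make "full" concrete, here is a small dense sketch of the idea (the name `full_qr` is just a placeholder):

```julia
using LinearAlgebra

# "Full" QR of an m×n matrix A with m > n: A = [Q1 Q2] * [R; 0], where
# Q1 is the usual thin isometry and Q2 parametrizes the orthogonal
# complement of the column space of A.
function full_qr(A::AbstractMatrix)
    m, n = size(A)
    F = qr(A)
    Q = F.Q * Matrix(I, m, m)  # materialize the full m×m orthogonal matrix
    return Q[:, 1:n], F.R, Q[:, n+1:m]
end

A = randn(5, 3)
Q1, R, Q2 = full_qr(A)
@show norm(A - Q1 * R), norm(Q1' * Q2)  # both ≈ 0
```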

If you are interested, I have uploaded that notebook to my GitHub.

It’s a good question how to get the nullspace of an ITensor. We are actually working on a function to let you do this, called `nullspace`, which is already in the library but just hasn’t been advertised or documented yet, though it will be in the next version to be released. More information about it here:
https://itensor.github.io/ITensors.jl/dev/ITensorType.html#LinearAlgebra.nullspace-Tuple{ITensor,%20Vararg{Any}}
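If it helps, basic usage looks roughly like this (here `i` is taken as the row index of `A`; see the linked docstring for the exact keyword arguments):

```julia
using ITensors, LinearAlgebra

i = Index(3, "i")
j = Index(4, "j")
A = randomITensor(i, j)

# N spans the nullspace of A viewed as a matrix from `j` to `i`:
N = nullspace(A, i)
@show norm(A * N)  # ≈ 0
```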

To the other parts of your question:

  • actually the SVD factorization is supported for block-sparse tensors (unless you meant something specific by the term “full SVD”?) - see the short example after this list
  • correct that QR is not yet implemented for block-sparse tensors - it’s something we’ve been meaning to add for a while
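For example, something along these lines should already work (the index and QN choices are just for illustration):

```julia
using ITensors, LinearAlgebra

# SVD of a QN-conserving (block-sparse) ITensor:
i = Index([QN(0) => 2, QN(1) => 2], "i")
j = Index([QN(0) => 2, QN(1) => 2], "j")
A = randomITensor(QN(0), i, dag(j))  # random ITensor with zero flux
U, S, V = svd(A, i)                  # `i` becomes the row index
@show norm(A - U * S * V)  # ≈ 0
```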

So please let us know whether you still think implementing QR for block sparse is your best route, or whether you can use the nullspace function. If you want to try implementing QR, the best thing would be to study how we implement SVD, which is here:
https://github.com/ITensor/ITensors.jl/blob/1f47c7397abc2e1124a5df313cebaa088667924b/NDTensors/src/blocksparse/linearalgebra.jl#L35
inside of NDTensors, which is called by the ITensor library from the file `src/decomp.jl`.
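The core of that code is a dense factorization applied to each nonzero block. A QR version of just that inner loop might look something like this (only a sketch of the blockwise step, using the same NDTensors helpers the SVD code uses; assembling Q and R back into block-sparse tensors with the right QN index structure is the main remaining work):

```julia
using NDTensors, LinearAlgebra

# Sketch: each nonzero block of a block-sparse matrix is dense, so
# LAPACK's QR can be applied block by block.
function blockwise_qr(T)  # T: a matrix-like block-sparse Tensor
    Qs = Vector{Matrix}(undef, nnzblocks(T))
    Rs = Vector{Matrix}(undef, nnzblocks(T))
    for (n, b) in enumerate(eachnzblock(T))
        blockT = Matrix(blockview(T, b))  # dense copy of block b
        F = qr(blockT)                    # dense QR of that block
        Qs[n] = Matrix(F.Q)               # thin Q factor
        Rs[n] = Matrix(F.R)
    end
    return Qs, Rs
end
```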