Implementing block sparse MPOs and their relation to quantum numbers

It’s a good question. The QN ITensors in ITensors.jl are in fact built on top of a more general block sparse tensor type in NDTensors.jl; however, right now that is more of an internal detail that isn’t easily exposed to users. Having said that, you can use a slight hack to construct block sparse tensors that don’t have specified quantum numbers:

using ITensors: ITensor, Index, QN, Block

# Indices with two blocks of dimension 2 each; the trivial QN()
# sectors carry no symmetry information and only define the block
# structure:
i = Index([QN() => 2, QN() => 2])
j = Index([QN() => 2, QN() => 2])

# Start from an ITensor with no blocks allocated, then set the two
# diagonal blocks explicitly:
a = ITensor(Float64, i, j)
a[Block(1, 1)] = randn(2, 2)
a[Block(2, 2)] = randn(2, 2)

which constructs a block sparse tensor with two nonzero blocks:

julia> show(a)
ITensor ord=2
Dim 1: (dim=4|id=613) <Out>
 1: QN() => 2
 2: QN() => 2
Dim 2: (dim=4|id=939) <Out>
 1: QN() => 2
 2: QN() => 2
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 4×4
Block(2, 2)
 [3:4, 3:4]
  0.8563781437612569   -0.3262202190043517
 -0.17888014351124132  -0.5242686688221203

Block(1, 1)
 [1:2, 1:2]
  0.18128185576868744  -1.5151649967961565
 -0.7430343642573164   -0.3957748521149144
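
Operations like tensor contraction preserve this kind of block structure. As a quick check, here is a minimal sketch continuing from the tensor a above (it assumes the dag, prime, and nzblocks functions exported by ITensors.jl):

using ITensors: dag, nzblocks, prime

# Contract a with a primed, daggered copy of itself over the index j.
# The result again has only the diagonal blocks (1, 1) and (2, 2):
b = a * prime(dag(a), i)
@show nzblocks(b)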

In our ongoing rewrite of ITensors.jl, non-QN block sparse tensors will be directly supported through the BlockSparseArrays.jl package (https://github.com/ITensor/BlockSparseArrays.jl), which will provide the basis for our new symmetric tensor types but will also be usable as a standalone array backend for ITensor.

However, one thing to keep in mind is that some operations do not preserve block sparsity. With nontrivial quantum numbers, the symmetry guarantees that an SVD can be performed block by block, so the factors inherit the block structure; for a generic block sparse tensor there is no such guarantee, and the singular vectors will generally mix blocks. So the usefulness of a general block sparse tensor may be limited depending on the algorithm.
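
To see why, here is a small plain-Julia sketch, independent of ITensor: take a matrix with a block upper triangular sparsity pattern, so blocks (1, 1), (1, 2), and (2, 2) are nonzero. Its SVD factors are generically dense, so they do not inherit the zero block:

using LinearAlgebra: svd

# Block upper triangular matrix with one zero block:
A = [randn(2, 2) randn(2, 2);
     zeros(2, 2) randn(2, 2)]
F = svd(A)

# Generically every entry of U and Vt is nonzero, so the
# factorization does not share the block sparsity of A:
@show count(!iszero, F.U) count(!iszero, F.Vt)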