Slicing tensor with QN

Hi there,

Suppose I have the following block-sparse tensor:

using ITensors

let
  A = [1.0 0.0 0.0 0.0;
       0.0 1e-9 0.0 0.0;
       0.0 0.0 2.0 3.0;
       0.0 0.0 1e-10 4.0]

  i = Index([QN(0)=>2, QN(1)=>2], "i")

  T = ITensor(A, i', dag(i))
end

Is it possible to slice T such that I can @show/manipulate the blocks corresponding to QN(0) or QN(1) individually?

Something like

B = [1.0 0.0;
     0.0 1e-9];

C = [2.0 3.0;
     1e-10 4.0];

T[(slice of block QN(0))] = B;
T[(slice of block QN(1))] = C;

# or slicing to show

@show T[(slice of block QN(0))]

## Output: block B

Hi qumaciel, good question. To work with QN blocks directly like this, you sometimes need to drop down to the NDTensors level, the library underneath ITensors. There may be a better way to do what you want, but here is one way:

using ITensors
import NDTensors: blockview, tensor, nzblocks

let
    B = [1.0 0.0; 0.0 1e-9]
    C = [2.0 3.0; 1e-10 4.0]
    i = Index([QN(0)=>2, QN(1)=>2], "i")
    T = ITensor(0.0, i', dag(i))
    blockview(tensor(T), nzblocks(T)[1]) .= B
    blockview(tensor(T), nzblocks(T)[2]) .= C
    @show T
end

In real code you cannot rely on the ordering of nzblocks(T), so you will need to add code that inspects the indices of each block to make sure the data gets inserted into the correct blocks.
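For example, here is one possible sketch of that kind of check. It assumes `Block` can be imported from NDTensors and that block coordinates follow the order of the QN sectors in the index definition (QN(0) listed first, QN(1) second), so the QN(0) sector of both indices is block (1, 1):

```julia
using ITensors
import NDTensors: blockview, tensor, nzblocks, Block

B = [1.0 0.0; 0.0 1e-9]
C = [2.0 3.0; 1e-10 4.0]
i = Index([QN(0)=>2, QN(1)=>2], "i")
T = ITensor(0.0, i', dag(i))

# Match each nonzero block by its block coordinates instead of by its
# position in nzblocks(T). Block(1, 1) is the QN(0) sector of both
# indices, Block(2, 2) the QN(1) sector, because QN(0) comes first in
# the definition of the index i above.
for b in nzblocks(T)
    if b == Block(1, 1)
        blockview(tensor(T), b) .= B
    elseif b == Block(2, 2)
        blockview(tensor(T), b) .= C
    end
end
@show T
```

This way the data lands in the right sector even if nzblocks(T) returns the blocks in a different order.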

I hope this helps.
JR

Here is an alternate method working at the ITensors level:

using ITensors

let
    iB = Index([QN(0)=>2], "iB")
    iC = Index([QN(1)=>2], "iC")
    B = ITensor([1.0 0.0; 0.0 1e-9], prime(iB), dag(iB))
    C = ITensor([2.0 3.0; 1e-10 4.0], prime(iC), dag(iC))
    T, Tinds = directsum(B => inds(B), C => inds(C))
    @show dense(T)
end

Thanks @JanReimers! I’m transitioning from C++ ITensor to Julia ITensor and I’m still learning how to approach things in Julia.

Do you know what makes nzblocks(T) ordering unreliable?

Just a side note: look at the difference between the two approaches in terms of runtime and memory allocation:

# blockview approach
0.472095 seconds (774.64 k allocations: 51.870 MiB, 2.75% gc time, 99.64% compilation time)

# directsum approach
3.412558 seconds (10.92 M allocations: 712.118 MiB, 6.12% gc time, 99.88% compilation time)

I am also transitioning from C++ to Julia, and I believe the effort is worth it. Regarding nzblocks ordering: I think it is arbitrary by design, in the interest of efficiency. At any point new allowed blocks might be inserted under the hood, and blocks can also get combined or removed (if they are all zeros). It should be possible to optimize the algorithm behind directsum to get better performance.