How to replace SVD in ITensors by random SVD

Hello everyone!

I would like to ask how I can construct my own SVD algorithm (for example, a randomized singular value decomposition) that works on ITensors with QNs, and then use this rsvd algorithm in TDVP or anywhere else truncation is needed.

A typical randomized singular value decomposition algorithm is given below.
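For reference, a typical randomized SVD (range sampling, QR, then an exact SVD of the small compressed matrix) can be sketched in plain Julia. The function name `rsvd_dense` and its keyword names are illustrative, not part of ITensors:

```julia
using LinearAlgebra, Random

# Typical randomized SVD: sample the range of A with a Gaussian test
# matrix, orthonormalize, then take an exact SVD of the small projected
# matrix. k = target rank, p = oversampling, q = power iterations.
function rsvd_dense(A::AbstractMatrix, k::Int; p::Int=10, q::Int=0)
    m, n = size(A)
    l = min(k + p, min(m, n))
    Y = A * randn(n, l)                  # sample the column space of A
    for _ in 1:q                         # optional power iterations
        Y = A * (A' * Matrix(qr(Y).Q))
    end
    Q = Matrix(qr(Y).Q)                  # orthonormal basis for the range
    F = svd(Q' * A)                      # exact SVD of the l×n compression
    return (Q * F.U)[:, 1:k], F.S[1:k], F.V[:, 1:k]
end
```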


I have already implemented a simple rsvd function to factorize ITensors, and it partially works. In this method, one needs to build a Gaussian-distributed random matrix with a predetermined dimension k (the target rank of the factorized matrix), so one has to define an index of dimension k to generate the random matrix.

But here is the problem: most systems have a conserved quantity, so the tensor indices are QN indices and the tensors have BlockSparse storage. The random matrix, however, is of type NDTensors.Dense, with one QN index and one non-QN index. The only way to contract an NDTensors.Dense tensor with an NDTensors.BlockSparse tensor is to dense() the BlockSparse tensor. As a result, the factorized tensors are Dense, and an error occurs when calling orthogonalize!(psi,i) for the same reason: BlockSparse and Dense tensors cannot be contracted.

Here is my code for the random SVD:

using ITensors
using LinearAlgebra

# Power iteration: alternately project through A and its conjugate with QR
# re-orthogonalization to sharpen the sampled range before extraction.
function power_iteration(A::ITensor, omega::ITensor, q::Int)

    min_i=commonind(A,omega)
    Y=A*omega
    max_i=commonind(A,Y)
    l_i=commonind(Y,omega)

    for i=1:q
        Q_1,_=qr(Y,max_i)
        replaceind!(Q_1,inds(Q_1)[2],l_i)
        Q_1,_=qr(permute(A,(uniqueind(A,Q_1),commonind(A,Q_1)))*Q_1,min_i)
        replaceind!(Q_1,inds(Q_1)[2],l_i)
        Y=A*Q_1
    end

    return Y
end

function rsvd(A::ITensor, Linds...; kwargs...)

    maxdim = get(kwargs, :maxdim, nothing)   # target rank k
    q = get(kwargs, :iter, 0)                # number of power iterations
    p = get(kwargs, :oversam, 0)             # oversampling parameter

    Lis=commoninds(A,Linds...)
    Ris=uniqueinds(A,Lis...)

    # Combine the left/right index groups into single indices, then densify.
    # Note: the dense() call here is what discards the QN block structure.
    CL = combiner(Lis)
    CR = combiner(Ris)

    AC = dense(A * CR * CL)

    cL=combinedind(CL)
    cR=combinedind(CR)

    if inds(AC) != (cL, cR)
        AC = permute(AC, cL, cR)
    end

    if isnothing(maxdim) || maxdim >= min(dim(cL), dim(cR))
        maxdim = min(dim(cL), dim(cR))
    end

    k=maxdim

    l = k + p   # sample k + p columns, keep k after the final SVD

    l_i = Index(l, "omega_l")   # a plain dense index, carrying no QNs

    if dim(cL)>=dim(cR)
        omega=ITensor(randn(dim(cR),dim(l_i)),(dag(cR),l_i))
        
        if q==0
            Y=AC*omega
        else
            Y=power_iteration(AC,omega,q)
        end

        if inds(Y)!=(cL,l_i)
            Y=permute(Y,(cL,l_i))
        end

        Q,_=qr(Y,cL)
        replaceind!(Q,inds(Q)[2],l_i)

        U_t,S,V=svd(AC*Q,l_i;maxdim=k)
        U=dag(Q*U_t)*CL
        V=dag(V)*CR

        return (U,S,V)

    elseif dim(cL)<dim(cR)
        omega=ITensor(randn(dim(l_i),dim(cL)),(l_i,dag(cL)))

        if q==0
            Y=omega*AC
        else
            Y=power_iteration(AC,omega,q)
        end

        if inds(Y)!=(l_i,cR)
            Y=permute(Y,(l_i,cR))
        end

        Q,_=qr(Y,cR)
        replaceind!(Q,inds(Q)[2],l_i)

        U,S,V_t=svd(AC*Q,cL;maxdim=k)
        Q=permute(Q,(l_i,cR))
        V=dag(V_t*Q)*CR
        U=dag(U)*CL

        return (U,S,V)
    end
end

It works well for ordinary ITensors, but errors occur for ITensors with QNs because of the random matrix omega=ITensor(randn(dim(cR),dim(l_i)),(dag(cR),l_i)).

Any help would be appreciated.

Hello Wang,

I am familiar with this approach and with these kinds of problems in ITensor. It looks like the problem is that you are constructing l_i as a dense index, so when the input is a BlockSparse tensor you are mixing index types:

cR = (dim=2|id=315|"CMB,Link") <Out>
 1: QN() => 2
l_i = (dim=2|id=11|"omega_l")
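In plain linear-algebra terms, the issue is that a QN-conserving tensor is block-diagonal in the symmetry basis, and a fully dense Gaussian test matrix mixes the sectors. If each column of the test matrix is assigned to a sector (which is what giving l_i QN sectors would accomplish), the product stays block-sparse. A small illustration in plain Julia (not ITensors code):

```julia
using LinearAlgebra, Random

Random.seed!(0)
# A QN-conserving matrix is block-diagonal in the symmetry basis:
# two sectors here, of sizes 3 and 2.
A = zeros(5, 5)
A[1:3, 1:3] = randn(3, 3)            # QN-0 block
A[4:5, 4:5] = randn(2, 2)            # QN-1 block

# A fully dense Gaussian test matrix mixes the sectors...
Ω_dense = randn(5, 4)

# ...while a block-structured one (each column assigned to one sector,
# as a QN-carrying l_i would give) keeps products block-sparse.
Ω_blocked = zeros(5, 4)
Ω_blocked[1:3, 1:2] = randn(3, 2)    # columns sampling sector 0
Ω_blocked[4:5, 3:4] = randn(2, 2)    # columns sampling sector 1

Y = A * Ω_blocked    # sector-0 columns stay zero on sector-1 rows
```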

Linear and multilinear algebra is one of my particular research interests, so I am currently looking into making randomized linear algebra more available in the backend of ITensors. In principle, this will eliminate these kinds of issues.

Karl

Hi Wang & Karl,
recently I have also been considering using randomized SVD, and I have benefited from your posts, thanks. I also wonder whether you have experience using randomized SVD for tasks like factorizing two-site tensors in two-site DMRG and calculating entropy. If so, do you think randomized SVD is stable and suitable for them?
Best,
Zhou

Hi Zhou,
It’s a good question. While I don’t have much direct experience putting randomized SVD into algorithms like DMRG, from my experience writing and testing randomized SVD algorithms, they are extremely stable. Many people first encountering the RSVD expect it to behave like Monte Carlo, where the randomness brings fluctuations and things are “noisy”. But RSVD is not like that. In RSVD, the way to think about the randomness is that the random vectors are just “not special / not adversarial”; in other words, they are “typical”. The real heart of RSVD is a stable QR calculation to capture the “U” of the SVD. All of the fluctuations go into the R matrix, which gets thrown away. Then the U is applied back to the matrix (without any randomness) to compute the S and V deterministically.

Another important point is that the RSVD does get some of the singular values wrong, but they are always the last 5-10 singular values computed. The “leading” singular values are very accurate. So a common feature in a good RSVD code is to compute k+p left singular vectors and values, then only keep k of them and discard the other p. The parameter p is known as the “oversampling” parameter and can be chosen to be about 10 in many cases. (Basically you can think of the quality of the k singular values you keep as becoming exponentially more accurate as a function of p getting larger. This may not be a precise statement - I’d have to check - but you can think of it as a kind of intuition.)
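The oversampling effect described above can be checked numerically. In this sketch (plain Julia, illustrative function names), we draw k + p sample vectors, keep only k singular values, and compare the error with and without oversampling on a matrix with a decaying spectrum:

```julia
using LinearAlgebra, Random

# Illustrative sketch: estimate the leading k singular values from
# k + p random samples, then discard the last p of them.
function rsvd_vals(A::AbstractMatrix, k::Int, p::Int)
    n = size(A, 2)
    Q = Matrix(qr(A * randn(n, k + p)).Q)   # basis for the sampled range
    return svdvals(Q' * A)[1:k]             # keep k, throw away p
end

Random.seed!(1)
m, n, k = 100, 80, 10
s_exact = [0.5^j for j in 0:n-1]            # geometrically decaying spectrum
A = Matrix(qr(randn(m, m)).Q)[:, 1:n] * Diagonal(s_exact) *
    Matrix(qr(randn(n, n)).Q)'

relerr(p) = maximum(abs.(rsvd_vals(A, k, p) .- s_exact[1:k]) ./ s_exact[1:k])
# relerr(10) should come out much smaller than relerr(0)
```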


Hi Miles,

For RSVD with quantum number (QN) conservation, the random matrix Ω should also carry QNs. The left index of Ω corresponds to the right index of A, and the right index of Ω can be obtained as the dagger of the left one.

In this case, however, how should one properly choose the dimension of each QN sector?

Best regards,
ZH L

It’s a good question about how to choose the dimension of each QN sector. I think there must be an automated (or “adaptive”) way to do this. One way to develop such an approach would be to think about forming the ‘density matrix’ (\rho = M^\dagger M) that is used to define the matrix U resulting from the SVD (\rho = U S^2 U^\dagger). Probably one can analyze \rho or think about its properties to develop a way to obtain or estimate the different QN sectors and their sizes automatically. But this analysis hasn’t been done as far as I know.

More practically, McCulloch and Osborne describe some of their choices for this issue in the following paper:
https://arxiv.org/abs/2403.00562
I think they do something like looking at all the indices in the “row space”, combined into a single index, and find all of the quantum number blocks or sectors in that row space and the corresponding sizes. Then they multiply these sizes by a fixed multiple, if I remember correctly. I’d definitely recommend reading through their paper in detail.

Hi Miles,

Thanks for sharing your thoughts. I also noticed this paper and have actually implemented their idea of using RSVD in the DMRG3S method.

My approach to RSVD is similar to what you described: I scale the dimension of the QN sectors of the combined index by a fixed multiple. I also set a minimum dimension for each QN sector, which I keep at 2. So far, this method seems to work well, suggesting that a highly accurate SVD may not be necessary in this context.
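For concreteness, one way to read this heuristic (my interpretation, with hypothetical helper names, not the code from the paper or from ITensors) is: scale each sector's expected kept dimension by a fixed multiple, enforce a small per-sector floor, and never exceed the sector's full dimension in the combined index:

```julia
# Hedged sketch of the sector-size heuristic: scale each sector's expected
# kept dimension by a fixed multiple, with a floor of `minrank` per sector,
# capped at the sector's full dimension in the combined index.
function sector_sample_dims(kept_dims::Vector{Int}, full_dims::Vector{Int};
                            multiple::Int=2, minrank::Int=2)
    return [clamp(multiple * kd, min(minrank, fd), fd)
            for (kd, fd) in zip(kept_dims, full_dims)]
end

sector_sample_dims([4, 2, 1], [12, 6, 2])   # → [8, 4, 2]
```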

Best,
Zhaochen

Great. Also if it’s really important for downstream applications to have the optimal bond dimension / index size and QN block sizes, then you could follow up the randomized SVD with a regular SVD of either (US) or (SV). If I’m thinking about it correctly, it should have similar scaling to randomized SVD since the low-rank compression has already been found.
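This follow-up step can be illustrated in plain Julia (illustrative, not ITensors code): after a randomized factorization A ≈ U S V', one exact SVD of the thin m×k factor U*S costs only O(m k²), comparable to the RSVD itself:

```julia
using LinearAlgebra, Random

Random.seed!(3)
m, n, k = 200, 150, 12
A = randn(m, k) * randn(k, n)             # exactly rank-k test matrix

# A crude randomized factorization A ≈ U * Diagonal(S) * V'
Q = Matrix(qr(A * randn(n, k + 8)).Q)     # sampled range basis
F = svd(Q' * A)
U, S, V = Q * F.U, F.S, F.V

# Follow-up: one exact SVD of the thin factor U*Diagonal(S).
# This costs O(m k^2), comparable to the RSVD itself, and is where the
# optimal singular vectors (and, with QNs, block sizes) get pinned down.
G = svd(U * Diagonal(S))
U2, S2, V2 = G.U, G.S, V * G.V
```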

I’ll be uploading my implementation to GitHub later this week, so anyone interested can take a look.