I’ve been running some timing tests to check when MPS/MPO “outspeeds” ordinary matrix multiplication. My end goal is to use MPS to represent my states and run quantum jumps, and I think my speed bottleneck is the product(MPO, MPS) call.
I did some quick timing of applying S_- = \sum_{i=1}^N \sigma_-^i to the all-spin-up initial state. My MPS/MPO run is faster than dense matrices, but much slower than sparse. Is there something inefficient in how I’m writing this that can be sped up?
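(For concreteness, the operation being timed is S_- |\uparrow\cdots\uparrow\rangle = \sum_{i=1}^N |\uparrow\cdots\downarrow_i\cdots\uparrow\rangle, i.e. an equal-weight superposition of single spin flips.)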
For reference, I’m running the following code for N = 12 (timing just the matrix-vector multiplication vs. the product() call in the MPS/MPO case), where my product() beats a dense matrix code:
# MPS/MPO Timing
using ITensors   # provides siteinds, MPS, MPO, OpSum, product
                 # (on newer ITensors versions, MPS/MPO functionality may also require: using ITensorMPS)

let
    # Setting some parameters + site indices
    N = 12
    state = ["Up" for n = 1:N]                   # define the state to be all spin up (|e>)
    s = siteinds("S=1/2", N; conserve_qns=true)  # all the atoms are spin 1/2
    psi = MPS(s, state)                          # creating the state as an MPS

    Smin = OpSum()
    for i in 1:N
        Smin += 1.0, "S-", i   # first argument is the coefficient; we can alter this as need be for WG, for example.
    end
    SminMPO = MPO(Smin, s)     # creating the MPO of S-

    @time product(SminMPO, psi)   # timing just the MPO-MPS product
end
0.002528 seconds (20.08 k allocations: 4.358 MiB)
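A note on the measurement itself: @time on a single call can include compilation overhead and allocation noise, so a steadier comparison is BenchmarkTools’ @btime with the variables interpolated. A minimal sketch, assuming SminMPO and psi from the let block above are in scope (e.g. placed inside that block):

    using BenchmarkTools
    # Benchmark the MPO-MPS product; $-interpolation keeps setup out of the timed expression
    @btime product($SminMPO, $psi)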
using SparseArrays   # provides spzeros and sparse kron

function tensor_lower_sp(i, N)
    # Builds the single term sigma_-^i acting on site i of an N-site chain,
    # i.e. id ⊗ ... ⊗ lower ⊗ ... ⊗ id, as a sparse 2^N x 2^N matrix.
    id = spzeros(2, 2); lower = spzeros(2, 2)   # lowering operator and identity
    lower[2, 1] = 1.0
    id[1, 1] = 1.0; id[2, 2] = 1.0
    op = 1   # kron with the scalar 1 leaves the other factor unchanged; renamed from prod to avoid shadowing Base.prod
    for x in 1:N
        if x == i
            op = kron(op, lower)
        else
            op = kron(op, id)
        end
    end
    return op
end
let
    # Sparse mult timing
    N = 12
    psiMat = spzeros(2^N); psiMat[1] = 1.0   # initial all-up state as a 2^N-dimensional basis vector
    SminMat = spzeros(2^N, 2^N)              # lowering operator S-, a 2^N x 2^N sparse matrix
    for i in 1:N
        SminMat = SminMat + tensor_lower_sp(i, N)
    end
    @time res1 = SminMat * psiMat            # timing just the sparse matrix-vector product
end
0.000009 seconds (9 allocations: 96.188 KiB)
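The dense-matrix comparison mentioned above isn’t shown here; a minimal sketch of such a comparison, assuming the same SminMat and psiMat as in the block above (e.g. placed inside that let block), just converts them to dense arrays:

    SminDense = Matrix(SminMat)   # 2^N x 2^N dense matrix
    psiDense  = Vector(psiMat)    # 2^N dense vector
    @time SminDense * psiDense    # dense matrix-vector product for comparison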