Doubled / product SiteTypes and matrix product super operators

Hi guys, once again, thanks for the great package!

We are currently looking into using ITensors (Julia) for some 1D open quantum systems work and trying to figure out the best way forward for defining superoperators and how they act on density matrices.

We are currently working on this with the aim of open-sourcing the results, so we would be interested in developing it externally and potentially merging parts in the future. Either way, your input would be greatly appreciated!

There seem to be two options: defining “matrix product superoperators” (MPSOs) and how they act on MPOs, or defining doubled Hilbert spaces so that the current MPS/MPO code can be reused for superoperator–operator calculations (with the density matrix as an MPS and the superoperator as an MPO).

Firstly, I’m curious which of these would be recommended performance-wise. I suppose the MPSO-MPO setup may introduce some reshaping overhead in order to perform what are fundamentally the same operations, but since the internal data and the indices are fairly well separated, with appropriate initial shaping of the data it could probably be made equivalent to the MPO-MPS interaction in the doubled Hilbert space (perhaps with some minor additional index manipulation)?

Secondly, regarding doubled Hilbert spaces, I’m curious whether you have any ideas for how to go about programmatically defining these by reusing the current SiteTypes and ops, or indeed for defining product spaces in general in terms of the current SiteTypes, as I imagine these could be useful more broadly. Perhaps some sort of predefined symbol in the strings, so that “S=1/2 x S=1/2” or “S=1/2 \otimes S=1/2” is interpreted as a product space, and “Sz x Sz” or “Sz \otimes Sz” is interpreted as a tensor product of Sz with itself acting on that product Hilbert space and constructed accordingly?

We have defined a doubled S=1/2 space manually, which works great, but is not ideal in terms of duplicated work for defining other spaces in the future.
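To give a concrete picture of the kind of thing I mean, here is a rough sketch of how such a doubled site type can be defined by hand (the site type name "S=1/2x2" and the operator names are purely illustrative, and the matrices are just kron's of the usual spin-1/2 operators, which is exactly the duplication I would like to avoid):

using ITensors

# Sketch of a manually doubled S=1/2 site: a 4-dimensional local space
# representing S=1/2 ⊗ S=1/2. The name "S=1/2x2" and the operator names
# are illustrative, not existing ITensors definitions.
ITensors.space(::SiteType"S=1/2x2") = 4

# Doubled operators built by kron-ing the standard spin-1/2 matrices.
# (Ideally these would be pulled from the existing "S=1/2" op definitions
# rather than repeated by hand.)
const sz = [0.5 0.0; 0.0 -0.5]
const id2 = [1.0 0.0; 0.0 1.0]
ITensors.op(::OpName"Sz⊗Sz", ::SiteType"S=1/2x2") = kron(sz, sz)
ITensors.op(::OpName"Sz⊗Id", ::SiteType"S=1/2x2") = kron(sz, id2)
ITensors.op(::OpName"Id⊗Sz", ::SiteType"S=1/2x2") = kron(id2, sz)

# Usage: sites of the doubled space and an operator acting on one of them.
s = siteinds("S=1/2x2", 10)
SzSz1 = op("Sz⊗Sz", s, 1)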

Perhaps both of the above options may be beneficial to have for distinct reasons, i.e. MPSOs for effective abstraction and doubled/product spaces for more complex local hilbert spaces.

We look forward to your input!


Hi Dominic,

Good question, thanks for the kind words and also your great contributions to ITensor!

In PastaQ.jl we’ve gone with the strategy of using MPOs to represent mixed states/density matrices, and representing superoperators as Kraus gates/operators. Evolution is performed by conjugating the MPO by the Kraus operators, and the Kraus dimension is just an extra index on the gate that gets contracted over. This is how we represent noisy gates and how we currently plan to represent open system dynamics (see the pull request by Giacomo here).
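To illustrate the basic idea in plain ITensors (just a sketch, not PastaQ's internal API; the amplitude damping channel, the value of γ, and the site choice are only examples), a single-site channel can be applied to an MPO density matrix by conjugating with each Kraus operator and summing:

using ITensors, LinearAlgebra

# Sketch: amplitude damping on one site of an MPO density matrix,
# ρ → K0 ρ K0† + K1 ρ K1†. Values and the choice of channel are illustrative.
N = 4
s = siteinds("S=1/2", N)
ρ = (1 / 2^N) * MPO(s, "Id")   # maximally mixed state as an MPO
γ = 0.1                        # example damping strength
j = 2                          # site the channel acts on
K0 = ITensor([1.0 0.0; 0.0 sqrt(1 - γ)], s[j]', s[j])
K1 = ITensor([0.0 sqrt(γ); 0.0 0.0], s[j]', s[j])
ρ_out = apply(K0, ρ; apply_dag=true) + apply(K1, ρ; apply_dag=true)
@show tr(ρ_out)                # the channel is trace preserving, so ≈ 1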

Does that fit what you have in mind? If so, it would be great to collaborate on that so we are not doubling our efforts; we are definitely interested in having more general support for open system dynamics.

You could go with the strategy of using doubled Hilbert spaces; we just felt the above approach was simpler to program and let us use the more natural MPO representation directly, without having to switch back and forth between representations.

If you do go with a doubled Hilbert space, I might recommend using an MPS with two "S=1/2" site indices per tensor, since then you don’t have to define a new set of site types and operators. For example:

n = 10
s = siteinds("S=1/2", 2n)          # 2n physical indices for the doubled space
ψ = randomMPS(s)
# Pair up neighboring tensors so each MPS tensor carries two site indices:
ψ = MPS([ψ[j] * ψ[j + 1] for j in 1:2:(2n - 1)])
SzSz1 = op("Sz", s, 1) * op("Sz", s, 2)   # Sz ⊗ Sz on the first doubled site
apply(SzSz1, ψ)
SzI1 = op("Sz", s, 1)                     # Sz ⊗ I on the first doubled site
apply(SzI1, ψ)
ISz1 = op("Sz", s, 2)                     # I ⊗ Sz on the first doubled site
apply(ISz1, ψ)

Just a thought, but it may end up being easier to define doubled site types if you go with an MPS representation of the density matrix.

Cheers,
Matt

Hi Matt.

For the PastaQ package, how do I act Kraus operators on a density operator according to the Born rule? Suppose I have Kraus operators K_{\uparrow} and K_{\downarrow}, and I want to do a weak measurement where the outcome (either \uparrow or \downarrow) is determined by the Born rule. My naive thinking is that I first calculate p = \text{tr}(K_{\uparrow}\rho K_{\uparrow}^{\dagger}) and then generate a random number. If it is smaller than p, my final state is K_{\uparrow}\rho K_{\uparrow}^{\dagger}/p; if it is larger than p, my final state is K_{\downarrow}\rho K_{\downarrow}^{\dagger}/(1-p). But when I calculate p using tr(apply(K, rho; apply_dag=true)), it seems that my MPO rho has been changed. Is there a way to fix this problem? Thanks

Here is the code I would write for the procedure you outlined (just translating your post directly; there may be a more efficient way to do it):

Kup::ITensor  # Kraus operator for the ↑ outcome
Kdn::ITensor  # Kraus operator for the ↓ outcome
rho::MPO      # density matrix
rho_up = apply(Kup, rho; apply_dag=true)  # Kup ρ Kup†
p = tr(rho_up)
if rand() < p
  rho_f = rho_up / p
else
  rho_f = apply(Kdn, rho; apply_dag=true) / (1 - p)
end

Oh, so rho_up = apply(Kup, rho; apply_dag=true) will not change rho, right? Sorry, I thought it would also change rho. Thanks!

No problem. The convention in Julia is that a function will not modify its inputs unless its name ends with !, and that is the convention we follow in ITensor and PastaQ.

For example:

julia> x = [3, 2, 1]
3-element Vector{Int64}:
 3
 2
 1

julia> sort(x)
3-element Vector{Int64}:
 1
 2
 3

julia> x
3-element Vector{Int64}:
 3
 2
 1

julia> sort!(x)
3-element Vector{Int64}:
 1
 2
 3

julia> x
3-element Vector{Int64}:
 1
 2
 3

If it modified its inputs we would call it apply!. We could provide that, but I prefer to make high-level functions non-modifying unless there is a noticeable performance advantage, which is not the case for apply.

Hi Matt,

Apologies for the slow response! This is great, thanks for the pointers! I didn’t realise the MPS machinery was generic enough to handle doubled Hilbert spaces like this (I have since also seen the product space example in the docs), nor that you were doing this in PastaQ.jl.

We will take a closer look at what you’re doing in PastaQ. We are also looking at implementing a method which minimizes the distance between a new MPS and the circuit acting on the old MPS (i.e. variationally fits it), rather than truncating, so it could be good to compare.

Ultimately we are also interested in implementing MPS-based quantum jump Monte Carlo (QJMC), which I see is on your list of issues in PastaQ. Perhaps we could also combine forces there eventually.

Cheers,
Dom