Dear all,
Is there a reason why one can do `MPS(sites; linkdims=2)` but not `MPO(sites; linkdims=2)`?
Thank you very much for your work
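For concreteness, here is a minimal sketch of the asymmetry being asked about, assuming ITensors.jl and its `siteinds` helper (the commented-out line is the call that fails):

```julia
using ITensors

sites = siteinds("S=1/2", 10)

psi = MPS(sites; linkdims=2)  # works: MPS with uniform link dimension 2
# H = MPO(sites; linkdims=2)  # fails: the MPO constructor has no linkdims keyword
```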
I don’t think there’s a very good reason for this inconsistency. I think it points to some improvements we need to make, such as making both of these work, or else discouraging use of these constructors in favor of `random_mps` and `random_mpo` if your goal was to make randomized versions of each network.
We do offer `random_mps` and `random_mpo`, which would be preferable to use here, but awkwardly `random_mpo` also does not accept the `linkdims` argument right now. It’s a missing feature of `random_mpo` that I should work on adding.
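To illustrate the current state described above, a short sketch assuming the snake_case names `random_mps` and `random_mpo` from recent ITensors.jl versions (older versions call these `randomMPS` and `randomMPO`):

```julia
using ITensors

sites = siteinds("S=1/2", 10)

# random_mps accepts a linkdims keyword:
psi = random_mps(sites; linkdims=2)

# random_mpo has no linkdims keyword at the time of this thread,
# so there is no direct way to request a target bond dimension:
H = random_mpo(sites)
```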
I should ask, though: was your goal to make a random MPO, or just to initialize an MPO in some way that you were then going to overwrite or modify anyway?
I think there is a fairly unambiguous way of constructing a random MPS (start from a product state and apply a random circuit), while it is less obvious how to make a random MPO. There may be something similar, where a user is expected to pass a product of operators and then a random circuit is applied to one or both sides of the MPO. I think the story becomes even more complicated when thinking about quantum number conservation.
Hi Miles,
Thanks a lot for your reply. I was trying to initialize an MPO to modify later, so I did not really need a random MPO, and as you say even `random_mpo` does not accept `linkdims`. Actually, I was trying to build an MPO directly from the form of its local tensor (such as the one you get for simple models from the finite-state-machine description) without using `OpSum()`. I think a feature that builds a translation-invariant MPO from a local tensor and a list of sites would also be very useful for writing one's own MPOs.
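As a rough illustration of the kind of construction being requested (this is not an ITensors.jl feature, just a hand-rolled sketch), one can fill an empty MPO site by site from the standard 3×3 finite-state-machine local tensor of the transverse-field Ising model, H = −J Σ Z_i Z_{i+1} − h Σ X_i. The index names, the use of `onehot` to place operators into slots of the link indices, and the boundary row/column conventions are all choices made for this example:

```julia
using ITensors

N = 6
J, h = 1.0, 0.5
sites = siteinds("S=1/2", N)
links = [Index(3, "Link,l=$n") for n in 1:N-1]

H = MPO(N)
for n in 1:N
  s = sites[n]
  Id, Z, X = op("Id", s), op("Z", s), op("X", s)
  if n == 1
    r = links[1]
    # left boundary: the bottom row (-h X, -J Z, Id) of the FSM tensor
    H[n] = onehot(r => 1) * (-h) * X + onehot(r => 2) * (-J) * Z + onehot(r => 3) * Id
  elseif n == N
    l = dag(links[N-1])
    # right boundary: the first column (Id, Z, -h X)
    H[n] = onehot(l => 1) * Id + onehot(l => 2) * Z + onehot(l => 3) * (-h) * X
  else
    l, r = dag(links[n-1]), links[n]
    # bulk finite-state-machine tensor, lower-triangular form
    H[n] = onehot(l => 1, r => 1) * Id +
           onehot(l => 2, r => 1) * Z +
           onehot(l => 3, r => 1) * (-h) * X +
           onehot(l => 3, r => 2) * (-J) * Z +
           onehot(l => 3, r => 3) * Id
  end
end
```

A built-in version of this would presumably take the bulk tensor plus boundary vectors and the site list, and do the index bookkeeping internally.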
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.