Hello everyone, I have encountered some problems while learning the METTS method, and I hope to get your help.
1. When I read articles about METTS, probability is described as very important: the transition probability, the probability used when taking the average, etc. However, these probabilities do not appear in the METTS code:
```julia
samp = sample!(psi)
new_state = [samp[j] == 1 ? "Z+" : "Z-" for j in 1:N]
```
Is there no need for a transition probability here?
2. The code computes the energy of each METTS, but why doesn't it compute their average?
Thank you very much in advance
Shuaifeng
Hi Shuaifeng,
Are you referring to this code?
https://github.com/ITensor/ITensors.jl/blob/main/src/lib/ITensorMPS/examples/finite_temperature/metts.jl
For that code, the answers to your questions are:
- The `sample!` function provided by ITensorMPS does behave in a probabilistic way. For a quantum state |\psi\rangle, it returns a product state |j\rangle in the computational basis (represented as an array of integers) such that the state |j\rangle is chosen with probability p_j = |\langle j|\psi\rangle|^2, which is also the usual probability of sampling from a wavefunction in quantum mechanics (i.e. using the Born rule). So the output of `sample!` will be random and different each time it is called.
- In the code, each METTS energy is `push!`-ed into an array called `energies`. Then a function defined earlier in the code called `avg_err` is called on the `energies` array, which computes both the average and the standard error of the energy data. So the variable `a_E` is the average energy and is what is printed out on the line starting with "Estimated energy".
Hi miles,
Thank you very much for your answer. But I still have some problems.
- What do you mean by "the `sample!` function provided by ITensorMPS does behave in a probabilistic way"? My understanding of `sample!` is that it picks the number 1 or 2 randomly, so that |j⟩ can be chosen randomly to get the new CPS. But I do not understand where the transition probability is.
- The average energy is `a_E`, but the calculated weight seems to be 1; every METTS seems to have the same weight. There is no P(i)/Z. Is this setting correct?
```julia
N = length(v)
avg = v[1] / N
for j in 2:N
    avg += v[j] / N
end
```
Thank you for your reply again.
For question 1, I would encourage you to call `sample!` multiple times in a row on the same MPS input and then print (using `@show` or `println`) the output that you get. You will see that it does not just output the number 1 or 2, but an entire array of integers representing a configuration or product state of the sites in the computational ("z") basis.
You’re right that each element of the returned array is either the number 1 or 2, randomly chosen. The transition probability is implicit in the algorithm (inside of `sample!`) that chooses these numbers at random. There are probability weights and so on inside that algorithm, but they are implicit and not explicitly revealed to the outer code calling the `sample!` function. If you reread my earlier answer to question 1 above, you will see that by definition the output of `sample!` is distributed according to p_j = |\langle j|\psi\rangle|^2, which meets the criterion for sampling required by the METTS algorithm.
Also, there is a more detailed page here going through the mathematics used to sample a product state from an MPS, which is the same way the `sample!` function operates internally:
http://tensornetwork.org/mps/algorithms/sampling/
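The core idea on that page (sample the first site from its marginal probability, then later sites from conditional probabilities given the earlier outcomes) can be sketched for a tiny two-site wavefunction in plain Python. This is an illustration of the math with a made-up state, not the MPS implementation:

```python
import numpy as np

def sample_two_site(psi, rng):
    """Sequentially sample (s1, s2) from a 2x2 wavefunction psi[s1, s2]
    so that the joint probability is |psi[s1, s2]|^2."""
    # Marginal probability of site 1: p(s1) = sum over s2 of |psi[s1, s2]|^2.
    p1 = np.sum(np.abs(psi) ** 2, axis=1)
    s1 = rng.choice(2, p=p1)
    # Conditional probability of site 2 given the outcome s1.
    p2 = np.abs(psi[s1]) ** 2 / p1[s1]
    s2 = rng.choice(2, p=p2)
    return (s1, s2)

# Toy singlet-like state: only the configurations (0,1) and (1,0) occur.
psi = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2)

rng = np.random.default_rng(1)
counts = {}
for _ in range(20_000):
    s = sample_two_site(psi, rng)
    counts[s] = counts.get(s, 0) + 1
print(counts)  # only (0, 1) and (1, 0) appear, each about half the time
```

For an MPS the marginals and conditionals are computed efficiently from the tensors rather than from the full wavefunction, but the sampling logic is the same.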
About question 2, you are right that there is no explicitly calculated weight P(i). If you go back through the METTS algorithm as explained in the 2010 paper by Stoudenmire and White, you will see that in the actual steps of the algorithm one never needs to use the weights P(i) explicitly. All one does is normalize each METTS to have norm 1.0. Then the only probabilistic step is sampling the next product state, and in that step the weight P(i) does not appear. It is kind of remarkable and a bit magical, so you are right that this is non-trivial. What’s going on is that the sampling procedure implements a "Markov chain", and the weights P(i) are automatically reproduced by this Markov-chain process, so you do not actually need to compute them. It is a subtle thing indeed.
Finally, the reason the energies can be directly averaged and not weighted by P(i)/Z is also surprising, but it is a consequence of the theory of Markov chains and, more broadly, the theory of sampling. If the sampling procedure already chooses each energy E(i) with the correct probability weight, then the weights are already accounted for in the sampling process itself and do not need to be accounted for a second time in the averaging. This is also a subtle fact and is not specific to the METTS algorithm; it is the same procedure used in many Monte Carlo algorithms such as Monte Carlo integration or quantum Monte Carlo. Again, it is not obvious, but it is the correct procedure.
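As a self-contained check of this point (with made-up toy numbers, not METTS data): if samples of j are drawn with probability p_j, then the plain, unweighted average of E_j over those samples converges to the weighted expectation value sum_j p_j E_j.

```python
import numpy as np

# Toy distribution p_j and "energies" E_j.
p = np.array([0.5, 0.3, 0.2])
E = np.array([-1.0, 0.5, 2.0])
exact = np.dot(p, E)  # the weighted expectation value, here 0.05

rng = np.random.default_rng(42)
samples = rng.choice(len(p), size=200_000, p=p)

# Plain unweighted average: the weights p_j are already accounted for
# by how often each j appears among the samples.
estimate = np.mean(E[samples])
print(exact, estimate)  # estimate converges to exact as the sample count grows
```

Weighting each sampled E_j again by p_j here would actually give the wrong answer, since the weights would then be counted twice.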
If you are still feeling unsure about all these things, I would encourage you to write a small METTS code completely from scratch yourself, not using any ITensor functions or even maybe not using ITensor at all but just using Julia matrix and vector operations. You can code METTS for a 2-spin system and compute all of the steps and convince yourself about how each step works and how the weights should be handled etc. I’ve done this before and it’s a very good way to learn all of the details and not too hard to do. Hope that helps.
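In that spirit, here is one possible from-scratch sketch in Python for a two-spin Heisenberg model (my own toy implementation, not ITensor code or the code from the paper): build a product state, apply e^{-beta H / 2} and normalize to get the METTS, record its energy, then collapse back to a product state with Born-rule probabilities, alternating between Z and X collapse bases to keep the Markov chain ergodic. Note that no weight P(i) appears anywhere, and the energies are averaged without reweighting.

```python
import numpy as np

# Spin-1/2 operators.
Sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
Sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

# Two-site Heisenberg Hamiltonian H = S1 . S2.
H = np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz)

beta = 2.0
# e^{-beta H / 2} from the eigendecomposition of the Hermitian H.
w, V = np.linalg.eigh(H)
expH = V @ np.diag(np.exp(-beta * w / 2)) @ V.conj().T

# Single-site collapse bases: Z eigenstates and X eigenstates.
z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

def collapse(phi, basis, rng):
    """Sample a 2-spin product state from |phi> with Born-rule weights."""
    prods = [np.kron(basis[s1], basis[s2]) for s1 in (0, 1) for s2 in (0, 1)]
    probs = np.array([abs(np.vdot(pr, phi)) ** 2 for pr in prods])
    probs /= probs.sum()  # guard against floating-point rounding
    return prods[rng.choice(4, p=probs)]

rng = np.random.default_rng(7)
state = np.kron(z_basis[0], z_basis[1])  # initial product state |up, down>
energies = []
n_steps, n_warmup = 10_000, 100
for step in range(n_steps):
    phi = expH @ state
    phi /= np.linalg.norm(phi)  # the METTS |phi(i)>, normalized to 1.0
    if step >= n_warmup:
        energies.append(np.real(np.vdot(phi, H @ phi)))
    basis = x_basis if step % 2 == 0 else z_basis  # alternate collapse bases
    state = collapse(phi, basis, rng)

E_avg = np.mean(energies)
boltz = np.exp(-beta * w)
E_exact = np.dot(w, boltz) / boltz.sum()  # exact thermal energy for comparison
print(E_avg, E_exact)  # the plain METTS average matches the exact value
```

Comparing `E_avg` against the exactly computed thermal energy is a good way to convince yourself that the unweighted average is correct.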
Thank you very much!! I learned a lot and have found my direction. I will try to write a small METTS code and learn some more about Markov chains.
Thank you! Wishing you a happy life!