Comparing BP and MPS computational bitstring probability amplitudes

Hi,

We have been comparing computational bitstring probabilities from a BP-evolved state against those from an MPS-evolved state. When using `inner` with `alg="bp"` (and equivalent functions), some probabilities don't match. They do match when we contract the tensor network exactly to get the full bitstring probability distribution. This is shown in the figures below for 100 computational bitstrings sampled from the MPS. The state is evolved using the `apply` method from @JoeyT1994, which includes updating the `bp_cache`, incorporated into the code here: ITensorNetworks inner function with alg="bp" issue. Could you let me know why the probability amplitudes would not match?


Julia version 1.10.4
ITensors version 0.6.23
ITensorNetworks version 0.11.21
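
For reference, here is schematic pseudocode (in Julia syntax) of the comparison described above. The `inner` calls with `alg="bp"` and `alg="exact"` follow the usage mentioned in this thread; the bitstring-to-product-state construction is an assumption and may differ across ITensorNetworks versions:

```julia
using ITensors, ITensorNetworks

# Schematic only: `g` is the graph underlying the network, `psi` the evolved
# state, and `bits` a sampled bitstring. The product-state constructor below
# is an assumed helper, not necessarily the exact ITensorNetworks API.
s = siteinds("S=1/2", g)
phi_x = ITensorNetwork(v -> bits[v] == 0 ? "Up" : "Dn", s)  # product state |x>

amp_bp    = inner(phi_x, psi; alg = "bp")     # BP-approximated amplitude <x|psi>
amp_exact = inner(phi_x, psi; alg = "exact")  # exact contraction
# On a tree these should agree; on a loopy graph they need not.
```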

In your comparisons, or your expectations about the results, are you taking into account that BP is an uncontrolled method? I.e., it is not guaranteed to give the same results one would get from exactly contracting a network. (It may also be especially different for non-local properties like wavefunction amplitudes.)


Hi @jsaroni ,

As Miles said, BP is an uncontrolled method, so if the underlying tensor network you have here contains a loop, there is no guarantee the amplitudes you get are the same as if you exactly contracted the network.

What is the topology of your tensor network here? To get more controlled contractions, you would have to consider corrections to belief propagation, such as boundary MPS or loop corrections, which we are planning to support in the coming months.
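
To make the loop issue concrete, here is a minimal self-contained sketch (plain Julia, not ITensorNetworks, with toy factors chosen for illustration): loopy BP on a three-variable cycle converges, but its marginals differ from the exact brute-force result. This is the same mechanism behind mismatched amplitudes on a loopy network.

```julia
# Loopy belief propagation vs. exact marginals on a 3-variable cycle.
# All factors here are made-up toy values, chosen only to break symmetry.

const PSI = [2.0 1.0; 1.0 2.0]                 # pairwise factor: favors agreement
const PHI = Dict(1 => [1.0, 1.0],              # single-site factors; variable 2
                 2 => [1.5, 1.0],              # is biased toward state 1
                 3 => [1.0, 1.0])
const EDGES = [(1, 2), (2, 3), (3, 1)]         # a single loop (triangle)

neighbors(i) = [j == i ? k : j for (j, k) in EDGES if i == j || i == k]

# Exact marginal of variable 1 by brute-force summation over all 8 states.
function exact_marginal1()
    p = zeros(2)
    for x1 in 1:2, x2 in 1:2, x3 in 1:2
        x = (x1, x2, x3)
        w = PHI[1][x1] * PHI[2][x2] * PHI[3][x3]
        for (i, j) in EDGES
            w *= PSI[x[i], x[j]]
        end
        p[x1] += w
    end
    p ./ sum(p)
end

# Loopy BP: flat initial messages, parallel updates, normalized each sweep.
function bp_marginal1(; iters = 200)
    msg = Dict((i, j) => [0.5, 0.5] for (a, b) in EDGES for (i, j) in ((a, b), (b, a)))
    for _ in 1:iters
        new = Dict{Tuple{Int,Int},Vector{Float64}}()
        for (i, j) in keys(msg)
            m = zeros(2)
            for xj in 1:2, xi in 1:2
                inc = prod(msg[(k, i)][xi] for k in neighbors(i) if k != j)
                m[xj] += PHI[i][xi] * PSI[xi, xj] * inc
            end
            new[(i, j)] = m ./ sum(m)
        end
        msg = new
    end
    b = [PHI[1][x] * prod(msg[(k, 1)][x] for k in neighbors(1)) for x in 1:2]
    b ./ sum(b)
end
```

On this example BP converges (a single loop is the friendliest case for loopy BP), yet its belief at variable 1 deviates from the exact marginal by a small but nonzero amount; on larger loopy networks the deviation can be much bigger.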


Thank you for the quick response; the comments are useful. We are using the following connectivity for the topology.

The loop structure is similar to Fig. 1(b) of the loop-expansion paper here: https://arxiv.org/pdf/2409.03108.

Ok, those sound like great features to have!

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.