We have been comparing computational-basis bitstring probabilities from a BP-evolved state against those from an MPS-evolved state. When using `inner` with `alg="bp"` and equivalent functions, some probabilities don't match. They do match when we contract the full tensor network to obtain the complete bitstring probability distribution. This is shown in the figures below for 100 computational bitstrings sampled from the MPS. The state is evolved using the `apply` method from @JoeyT1994, which includes updating the `bp_cache`, incorporated into the code here: ITensorNetworks inner function with alg="bp" issue. Can you explain why the probability amplitudes would not match?
In your comparisons, or in your expectations about the results, are you taking into account that BP is an uncontrolled method? That is, it is not guaranteed to give the same results one would get from exactly contracting the network. (It may also be especially inaccurate for non-local properties like wavefunction amplitudes.)
As Miles said, BP is an uncontrolled method, so if the underlying tensor network contains a loop, there is no guarantee that the amplitudes you get are the same as if you had exactly contracted the network.
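To make this concrete outside of ITensorNetworks.jl, here is a small self-contained sketch (a toy example I made up, not code from this thread) showing BP's loop error in the simplest possible setting: a 3-spin ferromagnetic Ising triangle with a field on spin 0. BP is exact on trees, but on this single loop its marginal for spin 0 visibly disagrees with the exact brute-force answer. All names (`weight`, `msgs`, the couplings) are hypothetical choices for illustration.

```python
import itertools
import math

# Hypothetical toy model: a 3-spin ferromagnetic Ising triangle
# (the smallest loopy graph) with a local field on spin 0.
J, h0 = 1.0, 0.5
edges = [(0, 1), (1, 2), (0, 2)]
spins = (+1, -1)

def weight(s):
    """Unnormalized Boltzmann weight exp(J * sum s_i s_j + h0 * s_0)."""
    return math.exp(sum(J * s[i] * s[j] for i, j in edges) + h0 * s[0])

# Exact marginal P(s0 = +1) by brute-force enumeration of all 8 configs.
Z = sum(weight(s) for s in itertools.product(spins, repeat=3))
exact = sum(weight(s) for s in itertools.product(spins, repeat=3)
            if s[0] == +1) / Z

# Belief propagation: one normalized message per directed edge, m[(i, j)][s_j].
field = {0: h0, 1: 0.0, 2: 0.0}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
msgs = {(i, j): {+1: 0.5, -1: 0.5} for i in nbrs for j in nbrs[i]}
for _ in range(200):  # parallel message updates; converges on a single cycle
    new = {}
    for i, j in msgs:
        out = {}
        for sj in spins:
            out[sj] = sum(
                math.exp(field[i] * si + J * si * sj)
                * math.prod(msgs[(k, i)][si] for k in nbrs[i] if k != j)
                for si in spins
            )
        z = out[+1] + out[-1]
        new[(i, j)] = {s: out[s] / z for s in spins}
    msgs = new

# BP belief at spin 0: local field times incoming messages.
b = {s: math.exp(field[0] * s) * math.prod(msgs[(k, 0)][s] for k in nbrs[0])
     for s in spins}
bp = b[+1] / (b[+1] + b[-1])

print(f"exact P(s0=+1) = {exact:.4f}")  # ~0.731
print(f"BP    P(s0=+1) = {bp:.4f}")     # ~0.901: BP overcounts the loop
```

BP effectively treats the cycle as if it were a tree, so the field on spin 0 gets "counted twice" as it circulates around the loop, biasing the marginal. The same mechanism underlies the amplitude mismatch on a loopy tensor network, and it disappears when the network is contracted exactly.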
What is the topology of your tensor network here? To get more controlled contractions you would have to consider corrections to belief propagation, such as boundary MPS or loop corrections, which we are planning to support in the coming months.