I am working on a project involving Quantum Fourier Transforms (QFT) on Matrix Product States (MPS) for large systems (N ≈ 128–256 sites). My goal is to resolve extremely fine interference patterns in the phase information of the state. I have a few questions, and I would truly appreciate any help.
Precision of the sample function: My application requires BigFloat (extended precision) to resolve phase differences that are well below the machine epsilon of Float64.
– If I construct my MPS using BigFloat types, does the native sample(psi) function maintain this precision during the probability calculation (P(x) = |⟨x|ψ⟩|²)?
– I am concerned that sample might internally convert to Float64 for random number generation, which would wash out the interference signals I am looking for. Is there a way to enforce high precision during the sampling step?
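To make the washout concern concrete, here is a minimal Python sketch of the effect I am worried about (using the standard-library decimal module as a stand-in for BigFloat; the specific numbers are illustrative only):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits, standing in for BigFloat

p = Decimal(1) / Decimal(3)   # a reference probability
q = p + Decimal("1e-25")      # the same probability with a tiny interference shift

print(q - p)                # 1E-25 — resolved at extended precision
print(float(q) - float(p))  # 0.0 — the shift vanishes after conversion to double
```

Any internal round-trip through Float64 during sampling would erase shifts of this size, which is exactly the signal I need to keep.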
Efficient Sampling Strategy: How efficiently can bit strings be extracted from an N=256 MPS? (I previously considered manually splitting the tensor network via SVD to sample blocks independently, but I assume this would destroy the global phase correlations/entanglement required for the interference pattern.)
– I understand that sample utilizes the chain rule of conditional probabilities (orthogonalizing site-by-site). Is this the most efficient approach for a system of this size, or is there a more computationally efficient way to do it?
QFT MPO Reference Implementation: I am following the approach described in “Quantum Fourier Transform Has Small Entanglement”. Is there a reference implementation of the QFT MPO construction available in the ITensors.jl/ITensorMPS documentation, or is it best to construct it manually using OpSum or an MPO gate decomposition?
Since you’re doing a pretty specialized application and ‘stack’ of methods, I would recommend the following:
1a. First of all, I’d consider whether your application really requires high precision in a fundamental way, or whether that’s an artifact of your algorithm. For example, one could say that computing the inverse of an ill-conditioned but non-singular matrix “requires” high precision to succeed, but there are approaches to solving problems involving such matrices that never form the inverse at all, and with these approaches normal precision works extremely well.
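As a minimal NumPy illustration of that point (the Hilbert-matrix example is my own choice here, not something specific to your application): solving the linear system by factorization gets a far better residual than going through the explicit inverse, all in ordinary double precision.

```python
import numpy as np

# Hilbert matrix: notoriously ill-conditioned but non-singular
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

x_inv = np.linalg.inv(H) @ b     # explicit inverse: amplifies roundoff
x_solve = np.linalg.solve(H, b)  # factorization-based solve: no inverse formed

# residuals ||H x - b||: the solve-based answer is typically far more accurate
print(np.linalg.norm(H @ x_inv - b))
print(np.linalg.norm(H @ x_solve - b))
```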
1b. If you do proceed with high-precision numbers, I’d recommend implementing the sampling algorithm yourself. It’s not too complicated an algorithm, and then you could be sure each step is implemented according to your needs. For example, instead of forming density matrices to get the probabilities, which involves squaring matrices and can reduce precision / lead to roundoff, you could find the probabilities another way, such as through high-precision SVD methods. (Also, the standard LAPACK SVD can give poor precision, so we use a custom SVD for the ITensor svd routine.) To implement the sampling, there is a detailed article about it here: https://tensornetwork.org/mps/algorithms/sampling/
and you can look at the source code for the sample! function within ITensorMPS.
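If it helps, here is a rough NumPy sketch of that chain-rule (“perfect”) sampling algorithm, just to show the structure. This is plain float64 and plain arrays for illustration only — the names (right_canonicalize, sample_mps) are mine, not the library’s API, and for your use case you would mirror this site-by-site structure with BigFloat-valued tensors in Julia:

```python
import numpy as np

def right_canonicalize(tensors):
    # Bring an MPS (list of (Dl, d, Dr) arrays) into right-canonical form
    # by sweeping LQ decompositions from right to left, then normalize.
    tensors = [t.copy() for t in tensors]
    for i in range(len(tensors) - 1, 0, -1):
        Dl, d, Dr = tensors[i].shape
        M = tensors[i].reshape(Dl, d * Dr)
        Q, R = np.linalg.qr(M.conj().T)          # LQ of M via QR of M†
        tensors[i] = Q.conj().T.reshape(-1, d, Dr)
        # absorb the remaining factor into the left neighbor
        tensors[i - 1] = np.einsum('ldr,rk->ldk', tensors[i - 1], R.conj().T)
    tensors[0] = tensors[0] / np.linalg.norm(tensors[0])
    return tensors

def sample_mps(tensors, rng):
    # Draw one bit string from |psi|^2 by sampling each site conditioned
    # on the sites already fixed (chain rule of conditional probabilities).
    tensors = right_canonicalize(tensors)
    L = np.ones(1)              # environment of the already-sampled sites
    bits = []
    for A in tensors:
        v = np.einsum('l,lsr->sr', L, A)          # amplitudes for each outcome s
        p = np.einsum('sr,sr->s', v, v.conj()).real
        p /= p.sum()                              # guard against roundoff
        s = rng.choice(len(p), p=p)
        bits.append(int(s))
        L = v[s] / np.sqrt(p[s])                  # renormalized conditional env.
    return bits

rng = np.random.default_rng(0)
# product state |0>|1>: sampling is deterministic
A0 = np.zeros((1, 2, 1)); A0[0, 0, 0] = 1.0
A1 = np.zeros((1, 2, 1)); A1[0, 1, 0] = 1.0
print(sample_mps([A0, A1], rng))  # [0, 1]
```

The per-site cost is the contraction of the length-χ environment with a (χ, d, χ) tensor, which is where the N χ² total scaling comes from, and all correlations between sites are kept exactly because each site is sampled conditioned on the earlier outcomes.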
The sampling algorithm scales as N χ², where N is the number of sites and χ is the bond dimension. It is an extremely efficient algorithm, and N=256 is not an especially large MPS. So I’d go with the usual algorithm unless you find it is actually too slow for what you need. But if it is, I don’t know of a faster one…
Chen and Lindsey give an explicit construction of the QFT MPO in this paper: https://arxiv.org/abs/2404.03182
I.e. an explicit formula for all of the tensors in terms of a family of polynomials. I have some personal code implementing the QFT MPO this way and can share it with you if you email me.
I can’t thank you enough for taking the time to write such a thoughtful and detailed response.
To give you a bit of context, this is for a research project in applied mathematics. The extended precision is indeed a fundamental mathematical requirement for our specific problem, so I am already getting to work on implementing the custom sampling routine you suggested. Since my background is in math rather than physics, tensor networks are still somewhat new territory for me.
Thank you also for sharing the Chen and Lindsey paper; it is exactly the sort of thing I was looking for.