Hello,
I am looking for a way to implement a custom updater/solver that replaces the eigensolver step in DMRG.
I saw Miles suggest in the Correction vector thread that there may now be a function where one can switch out the solver. The closest thing I can find is dmrg_x in ITensorTDVP (ITensorTDVP.jl/src/dmrg_x.jl at main · ITensor/ITensorTDVP.jl · GitHub), where I think one can modify the eigen_updater. Is this the best way to implement a customized updater?
A minor question: what is the difference between dmrg_x and dmrg in ITensorTDVP.jl/src at main · ITensor/ITensorTDVP.jl · GitHub? Do they just have different default updaters?
Hi Shengtao,
To your last question: yes, the dmrg_x and dmrg functions defined in the ITensorTDVP package differ just in which updater they use, and also a little bit in the “setup” part of their code, which you can see at the bottom of each of those files.
Please note that the dmrg code there is totally different from the dmrg function defined in the ITensorMPS package (which is the standard DMRG code that one gets when doing using ITensors, ITensorMPS). The codes in ITensorTDVP share a backend called alternating_update which was a prototype for a new design that turned into the more general codes being developed in the ITensorNetworks package right now. So other than for calling the tdvp function, you can think of the other code in ITensorTDVP such as dmrg_x as experimental in nature.
Finally, about implementing an updater: I’m not totally sure what you mean by “the best way.” The job of the updater in those codes is to take the local tensor (in DMRG this is the contraction of two site tensors of the MPS) and then use that tensor and the operator together to make a new two-site tensor that is “updated” in some way specific to the algorithm. For the dmrg algorithm the updater calls the KrylovKit.eigsolve function, which performs a few steps of iterative eigenvector finding using a Krylov algorithm like Lanczos. For tdvp the updater uses a different Krylov algorithm that time evolves the tensor by some amount. So for the case you are seeking, the updater should just make whatever change to the local wavefunction tensor your algorithm requires. The optimality comes down to making sure that the updating is done in an efficient way, but since the updater can otherwise be totally general, that part is really up to you.
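For concreteness, here is a rough sketch of what a DMRG-style updater body does. The function name my_eigen_updater, its signature, and the return convention are my own illustration (check the eigen_updater definition in the ITensorTDVP source for the interface it actually expects), but the KrylovKit.eigsolve call is the kind the dmrg updater makes:

```julia
using KrylovKit

# Sketch of a DMRG-style updater (name, signature, and return convention
# are illustrative; see the updater definitions in ITensorTDVP for the
# actual interface they follow).
function my_eigen_updater(operator, init; kwargs...)
  # `operator` is the projected Hamiltonian acting on the local two-site
  # tensor; eigsolve runs a few Lanczos steps starting from `init` to find
  # the eigenvector with smallest real eigenvalue (:SR).
  vals, vecs, info = KrylovKit.eigsolve(operator, init, 1, :SR; ishermitian=true)
  # Return the updated local tensor (plus, illustratively, the eigenvalue).
  return vecs[1], (; eigenvalue=vals[1])
end
```

A custom updater would replace the eigsolve call with whatever transformation of the local tensor your algorithm needs.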
Hi Miles,
Thanks a lot for the detailed explanation! Now I understand that dmrg_x should be the function to look at for implementing a customized updater. I was not totally sure whether there was a better function to use in the ITensor package family, since I had thought ITensorTDVP.jl (where dmrg_x currently lives) would be deprecated in favor of ITensorMPS.jl.
The alternating_update sounds very interesting and I will take a look at how it works.
It’s not really that dmrg_x should be called to implement a custom updater. It’s more that dmrg_x is an example of a function that is implemented through a custom updater. The way it is implemented is by passing this updater to the alternating_update function, which is the real “generic engine” underlying dmrg_x and the experimental new version of dmrg (and tdvp and linsolve – all of those codes are just different ways of calling alternating_update).
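Schematically, that means dmrg_x is just a thin wrapper along these lines. This is an illustration only, not the actual signature or keyword names (see the bottom of dmrg_x.jl for the real setup code):

```julia
# Illustrative only: the real argument order and keyword names in
# alternating_update may differ from what is shown here.
function my_dmrg_x(H, psi0; kwargs...)
  # The algorithm-specific part lives entirely in the updater;
  # the sweeping machinery is shared.
  return alternating_update(H, psi0; updater=my_custom_updater, kwargs...)
end
```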
We aren’t necessarily planning to deprecate ITensorTDVP in favor of ITensorMPS. If anything it may be the other way around, though the exact plans are still to be determined. Mainly, we will eventually ask people to start migrating over to ITensorNetworks for those algorithms.
Yes, please do take a look at alternating_update, which should clarify my first point above. It is a generic algorithm that sweeps over an MPS and, on each pair of sites, merges two MPS tensors together, passes the result to an updater, then factorizes it back to restore the MPS form. A lot of algorithms can be written in terms of that basic pattern.
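In pseudocode, that sweep pattern is roughly the following (all names here are illustrative, and this is not runnable as-is; factorize is the ITensors function that splits a tensor back into two via SVD/QR):

```julia
# Pseudocode for the alternating_update sweep pattern (illustrative names).
for sweep in 1:nsweeps
  for (i, j) in sweep_path(psi)                  # e.g. (1,2), (2,3), ..., then back
    phi = psi[i] * psi[j]                        # merge two neighboring MPS tensors
    phi, info = updater(projected_operator, phi) # algorithm-specific local update
    psi[i], psi[j] = factorize(phi, inds(psi[i])) # factorize back to MPS form
  end
end
```

Swapping in a different updater while keeping this loop fixed is what turns the same engine into dmrg, dmrg_x, tdvp, or linsolve.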