I have a question about tensor storage in the Julia version of ITensor.
In the C++ version of ITensor, each tensor is normalized with a prefactor; the benefit is to avoid overflow and underflow. In the Julia version, I didn't find the same design. I wonder if there is a particular reason this design was dropped?
Hi Jing,
It’s a good question. I think the main reason was that this design was supposed to help with certain issues in older algorithms where very large or small numbers were occurring, but we eventually learned other ways to deal with those issues. Also, at the time I thought keeping the prefactor separate might give a speed benefit, but I never did a systematic investigation of that. This design does make the code a good bit more complicated, so if I remember correctly we just left it out for simplicity when doing the port to Julia.
Also, that was removed in more recent versions of C++ ITensor. As Miles said, basically it didn’t solve many problems in practice and made the library code more complicated and harder to maintain. There were many bugs that popped up which were caused by not accounting for that normalization properly, and additionally there were some more fundamental questions about how to track the normalization of tensors through various operations properly and efficiently (for example in tensor addition, if I remember correctly). You can always manually store a normalization of the tensor along with the tensor if needed, which we do in some parts of the code when we contract large tensor networks.
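For example, when contracting a sequence of tensors you can keep the running result at unit norm and accumulate the factored-out scale in log form. Here is a minimal sketch using ITensors.jl (the function name `contract_with_lognorm` is just made up for illustration, and depending on your version of the package `randomITensor` may be spelled `random_itensor`):

```julia
using ITensors
using LinearAlgebra: norm

# Contract a list of ITensors, keeping the running result at unit norm
# and accumulating the factored-out scale as a log to avoid overflow/underflow.
function contract_with_lognorm(tensors)
  result = tensors[1]
  lognorm = 0.0
  for T in tensors[2:end]
    result *= T            # ordinary ITensor contraction
    nrm = norm(result)     # Frobenius norm of the intermediate result
    result /= nrm          # keep the stored data O(1)
    lognorm += log(nrm)    # track the scale separately
  end
  return result, lognorm   # full result is exp(lognorm) * result
end

# Toy usage: a chain of random matrices sharing bond indices.
inds = [Index(2) for _ in 1:5]
chain = [randomITensor(inds[k], inds[k + 1]) for k in 1:4]
T, lognorm = contract_with_lognorm(chain)
```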
Hi Miles and Matt, thanks very much for the background on why the design was removed. It’s simpler to work without it and let the user maintain the prefactor themselves, which is often what happens in RG algorithms such as TRG.
And @mtfishman, may I ask what kinds of bugs can appear with automatic storage of the log norm? Do you mean that during addition, extra time is spent bringing the tensors to the same scale before adding, or something else?
That was one issue. Also it was easy to forget to account for the scale factor in various lower-level functions that acted on the tensor data. Some of that could have been hidden more with better code abstractions (so maybe easier in Julia), but still it was something that we had to account for by hand throughout the library even if it was relatively hidden from users. It also didn’t cause any issues when we removed it, which indicated it wasn’t needed or used for very much.
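For concreteness, the bookkeeping for addition might look roughly like the following sketch, where a tensor is stored as unit-norm data plus a log-scale (the `ScaledTensor` type here is hypothetical, not something from the library):

```julia
using ITensors
using LinearAlgebra: norm

# Hypothetical bookkeeping for adding tensors stored as (unit-norm data, log-scale).
struct ScaledTensor
  data::ITensor      # kept at unit norm
  logscale::Float64  # scale factored out of the data, in log form
end

function Base.:+(A::ScaledTensor, B::ScaledTensor)
  # Factor out the larger scale so neither term overflows when rescaled.
  s = max(A.logscale, B.logscale)
  data = exp(A.logscale - s) * A.data + exp(B.logscale - s) * B.data
  nrm = norm(data)   # caveat: could be ~0 if the terms nearly cancel
  return ScaledTensor(data / nrm, s + log(nrm))
end
```

Every lower-level operation on the data would need similar care, which is the kind of accounting that had to be done by hand throughout the library.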
Yes, agreed with all that Matt said. And if we did put this idea back, I think a better place to put it might be in the actual storage of an ITensor rather than at the top level of the ITensor type itself. It would also be important to do detailed timings of calculations with and without this feature, to see if there is a clearly identifiable case where it speeds things up and, most importantly, whether that case actually occurs often enough in a common algorithm that we use a lot.
Yes. I agree that the additional prefactor could cause more problems when implementing things, such as forgetting to take the coefficient into account. And it’s more natural to use the existing array operations directly without having to consider an additional coefficient.
And I agree with Miles that even if we did use this design, TensorStorage would be a more reasonable place to store the coefficient, since it’s related to the storage implementation rather than being an intrinsic property of the tensor.
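For illustration, a storage-level scale factor might look roughly like this purely hypothetical sketch (not the actual TensorStorage / NDTensors implementation):

```julia
# Purely hypothetical sketch of a storage-level scale factor.
struct ScaledStorage{T,A<:AbstractArray{T}}
  data::A            # raw data, kept at O(1) magnitude
  logscale::Float64  # scale factored out of the data, stored in log form
end

# The full array is only materialized on demand, so intermediate
# operations can stay at O(1) magnitude.
materialize(s::ScaledStorage) = exp(s.logscale) * s.data

# Scaling by a real number: the magnitude goes into the log-scale,
# the sign stays in the data, so the stored data remains O(1).
scale(s::ScaledStorage, α::Real) = ScaledStorage(sign(α) * s.data, s.logscale + log(abs(α)))

# Toy usage
s = ScaledStorage(randn(2, 2), 0.0)
big = scale(s, 1e300)                   # no overflow: the log-scale absorbs the factor
arr = materialize(scale(big, 1e-300))   # recover an ordinary array
```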