+1 vote
asked by (1.1k points)
retagged by

To my understanding, DMRG works by optimizing every consecutive pair of sites in turn. So it goes something like this: optimize sites 1-2 while holding the rest fixed, then optimize sites 2-3 while holding the rest fixed, then optimize sites 3-4 while holding the rest fixed, and so on. Eventually it reaches the end of the chain and turns around, and once it returns to the beginning, that's one sweep. I know that a version of DMRG exists where only one site is modified at a time, but the 2-site DMRG is considered superior for various reasons, which is why it's the one that is used.

My question is the following: how hard would it be to perform a modified version of this where the number of sites one optimizes at a time is some number other than 2 (such as 3), and where one does not necessarily optimize over every grouping of adjacent sites? In particular:

(1) How hard would it be to write code for this in Julia, given that we can reference the original DMRG code at https://github.com/ITensor/ITensors.jl/blob/3b44ec3039a43870d6a3299f49ea79462febc174/src/mps/dmrg.jl?
(2) What might one expect from the runtime?
(3) What other complications might arise?

I ask this in particular with respect to the lattice Schwinger model, which has alternating spin-1/2 sites (for the fermions and antifermions) and link sites (for the electric flux values), as shown in equation (2) of https://arxiv.org/abs/1803.03326. Basically, this setup has local constraints enforced by Gauss's Law (this was also mentioned in http://itensor.org/support/2140/implementing-local-gauge-symmetries). As a result, one cannot modify one or even two sites in isolation. The "most elementary" modification one can make to a state is to add or remove an adjacent fermion-antifermion pair and change the value of the link between them by +/- 1. Notice that this modifies three sites.

Because of this, it seems to me that isolating two adjacent sites and optimizing them while holding the rest fixed will lead to problems when trying to perform DMRG on such a Hamiltonian. However, if one were to optimize three sites at a time while holding the rest fixed, and only do every other consecutive triple, then it might work. This would look something like the following: optimize sites 1-2-3 while holding the rest fixed, then optimize sites 3-4-5 while holding the rest fixed, then optimize sites 5-6-7 while holding the rest fixed, and so on.

1 Answer

+1 vote
answered by (70.1k points)

Hi Sujay,
The short answer to your question is that one can certainly write a 3-site (or even 4- or 5-site) DMRG algorithm, and it can be a powerful way to handle cases where 2-site DMRG has difficulty reaching convergence. One project I was involved in was greatly helped by switching to 3-site DMRG. It would be especially appropriate if your model has a unit cell larger than 2 sites (though that's not to say that 2-site DMRG can't work well for larger unit cell sizes).

The main technical step needed to implement 3-site DMRG is factoring the 3-site wavefunction "center" tensor back into 3 separate MPS tensors. Say one is currently sweeping to the right, and that after optimization this tensor has 3 site indices s1, s2, s3 and two bond indices bl and br. First do an SVD to split the indices (bl,s1) from the rest. Then multiply the resulting S and V to get the remainder R, and grab the index "u" connecting U to S. Now SVD R, splitting the indices (u,s2) from the rest. The result is that the U from the first SVD becomes the first MPS tensor, the U from the second SVD becomes the second MPS tensor, and the SV from the second SVD becomes the third MPS tensor.
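For concreteness, here is a minimal sketch of that right-moving factorization in ITensors.jl-style Julia. The function name, the truncation keywords, and the assumption that the tensor is not at the very left edge of the chain (so a left bond index `bl` exists) are illustrative choices, not part of the actual dmrg.jl code:

    # Hypothetical sketch: split an optimized 3-site tensor phi (with indices
    # bl, s1, s2, s3, br) into three MPS tensors while sweeping right.
    using ITensors

    function factorize_three_site(phi::ITensor, bl::Index, s1::Index, s2::Index;
                                  maxdim = 100, cutoff = 1e-10)
      # First SVD: split (bl, s1) from the rest
      U1, S1, V1 = svd(phi, (bl, s1); maxdim = maxdim, cutoff = cutoff)
      u1 = commonind(U1, S1)   # new bond between the first and second MPS tensors
      R = S1 * V1              # remainder carrying (u1, s2, s3, br)

      # Second SVD: split (u1, s2) from the rest
      U2, S2, V2 = svd(R, (u1, s2); maxdim = maxdim, cutoff = cutoff)

      A1 = U1                  # first MPS tensor (left-orthogonal)
      A2 = U2                  # second MPS tensor (left-orthogonal)
      A3 = S2 * V2             # third MPS tensor; carries the orthogonality center
      return A1, A2, A3
    end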

You just do the steps in reverse, essentially, when sweeping back to the left. The key is to make sure that the orthogonality center is pushed rightward when sweeping right and leftward when sweeping left. It helps a lot to draw diagrams of these things.

The two other considerations needed are:
1. making sure the outer loop using the iterator `sweepnext` is called as `sweepnext(N, 3)` to set the number of center sites to 3
2. modifying the `ProjMPO` class to stop building the Hamiltonian environment tensors earlier, so that they are compatible with the 3-site wavefunction tensor (see the sketch below)
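A rough sketch of how those two changes might fit into the outer sweep loop, modeled loosely on the structure of the linked dmrg.jl file. The variable names (`PH`, `psi`) and the use of `position!` here are assumptions about how one would wire this up after modifying `ProjMPO`, not the actual implementation:

    # Hypothetical sketch only: the 3-site version of the sweep loop.
    ncenter = 3
    for (b, ha) in sweepnext(N, ncenter)
      # Position the (modified) projected Hamiltonian so its left/right
      # environment tensors leave room for ncenter center sites starting at b
      position!(PH, psi, b)

      # Contract the three center-site tensors into one wavefunction tensor
      phi = psi[b] * psi[b + 1] * psi[b + 2]

      # ... optimize phi against PH (e.g. with a Davidson solver), then
      # factorize it back into psi[b], psi[b+1], psi[b+2] as described above ...
    end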

Hope that helps - it would be a bit of work, so do consider carefully whether you need it.

Miles

commented by (14.1k points)
Hi Sujay,

A small correction: `commonind` returns the first common Index found, but doesn't error if there is more than one. From the code you posted, you are setting `M[b+j] = L`, which looks right. However, it also seems like you should be setting `M[b+j+1] = R` within your loop; could that be part of the problem you are seeing?

In the original `replacebond(...)` function, the point of the line `tags = tags(linkind(M, b))` is to ensure that the new link index introduced by `factorize` has the same tags as the original link index. Likely you would want this feature in your own function, but it isn't absolutely necessary. However, if the call to `tags(linkind(M, b+j))` is failing as you say, then `linkind(M, b+j)` is not finding a link index between your MPS tensors `b+j` and `b+j+1`. This would indicate that something is off with your MPS, since it should have one common Index between tensors `M[b+j]` and `M[b+j+1]`. As Miles recommended, you should print the indices of those tensors to try to debug what is going wrong.

The `setprime` lines in the `factorize` function are there so that you can set the prime level of the new index being introduced, if you want. `replacebond` doesn't set this, but other applications might want to. You should think of `factorize` as more general functionality used in a wide variety of applications, with `replacebond` being one specific application. The final return of the Index `l` from `factorize` is just for convenience, in case you want a different way to "grab" the new Index that is introduced (it returns the Index `l == commonind(L, R)`). For the use in `replacebond` it is just discarded.

Also, this week I will be working on adding functionality to ITensors.jl for a generalization of `replacebond` to a general number of sites. Feel free to continue working on your own version, but hopefully there will be built-in functionality for that soon!

-Matt
commented by (1.1k points)
edited by
Thank you so much Matt! I definitely am using plenty of print statements to understand and locate the various indices. I will keep working on my version, although having that built-in functionality would be great. Do you know where I can find the code for the `linkind` function? I can't seem to find it in the GitHub repo, even when I search for it.

Also, the reason I am not doing `M[b+j+1] = R` in my loop is that the factorization needs to be done iteratively for `ncenter >= 3`: R needs to be factored again, then you set `M[b+j+1] = L`, and repeat until you set `M[b+ncenter-2] = L` and `M[b+ncenter-1] = R` at the very end. So if you set `M[b+j+1] = R` along the way, you won't be able to do the rest of the factorization process properly (although please let me know if you see a good way of doing this). Right now I am using a temporary variable to store these tensors and then putting them into M one by one at the end. Somehow `linkind(M, b+j)` fails to recognize the new common index once I transfer the values from temp into M; I'm pretty sure looking at the code for the `linkind` function would help me understand why.
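For reference, here is a minimal sketch of the kind of iterative factorization being described, written in terms of plain `svd` calls following Miles's description above. The function name, the `left_bond` argument, and the handling of the chain's left edge are assumptions made for illustration, not part of ITensors.jl:

    # Hypothetical sketch: split an optimized ncenter-site tensor back into
    # MPS tensors while sweeping rightward. Not the actual ITensors.jl code.
    function split_center_tensor(phi::ITensor, left_bond, sites; kwargs...)
      tensors = ITensor[]
      R = phi
      lind = left_bond   # bond shared with the tensor to the left, or nothing at the left edge
      for j in 1:(length(sites) - 1)
        linds = isnothing(lind) ? (sites[j],) : (lind, sites[j])
        U, S, V = svd(R, linds; kwargs...)
        push!(tensors, U)        # left-orthogonal tensor for site j
        lind = commonind(U, S)   # new link index used in the next split
        R = S * V                # remainder still to be factorized
      end
      push!(tensors, R)          # last tensor carries the orthogonality center
      return tensors
    end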
commented by (14.1k points)
The GitHub search is pretty bad; I generally clone the repository and use `grep` on the command line. Here is the `linkind` function: https://github.com/ITensor/ITensors.jl/blob/2c22f8ba7fa7ea787693359b7036591888ac4bcd/src/mps/abstractmps.jl#L354

It basically is just a call to `commonind`, which returns the first common Index that is found or `nothing` if none are found.
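In other words, something roughly like the following (a paraphrase of the linked code based on the description above, not a verbatim copy):

    # Roughly what linkind does: return the Index shared by neighboring
    # MPS tensors j and j+1, or nothing if they share no Index.
    linkind_sketch(M::MPS, j::Int) = commonind(M[j], M[j + 1])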
commented by (14.1k points)
Perhaps you need to use `inds(phi_new[1])` instead of `inds(M[b+j])` in the call to `factorize`?

If you don't update `M[b+j+1]`, then the call to `linkind(M, b+j)` will fail, since the neighboring MPS tensors won't share an index. I don't see any harm in setting it; as you say, it will get overwritten in the next step anyway.
commented by (1.1k points)
edited by
OK, so it turns out that my bug was really silly. My modified DMRG code seems to work now. I tried it on several of my old programs using `ncenter` values of 2 (the default), 3, 4, and 5, and in almost every case it runs really smoothly! There does seem to be one case where it runs quite slowly, but even then it's still clearly working. Thank you so much to both of you for all of your help in this really long comment thread.

I have some other stuff that will probably keep me busy for a while, but I should be able to submit a pull request for this by the end of today.