0 votes
asked by (250 points)

Hi,

When I enable the "ConserveQNs" parameter in /sample/hubbard_2d.cc or dmrg.cc to turn on block-sparse tensor computation, how can I evaluate the "sparsity" of the tensor computation? (By "sparsity" I mean the ratio of the number of nonzero values to the total number of values. For example, a tensor with one nonzero value and ten thousand zero values has a sparsity of 1/10000 = 0.0001.)

Any comments are welcome!

Thanks!

1 Answer

+1 vote
answered by (14.1k points)

Hello,

For a single ITensor with QNs, you can use the function nnz to get the number of nonzero elements. For example:

    auto i = Index(QN(0), 2, QN(1), 2);
    auto A = randomITensor(QN(), i, prime(dag(i)));
    PrintData(nnz(A));        // 8: two nonzero 2x2 blocks
    PrintData(dim(inds(A)));  // 16: total elements if dense

Here, dim(inds(A)) gives the total number of elements the ITensor would have if it were dense, so the sparsity is nnz(A)/dim(inds(A)).
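If you want the ratio directly, just divide the two (casting to double to avoid integer division):

    auto sparsity = double(nnz(A)) / double(dim(inds(A)));
    PrintData(sparsity);  // 0.5 for the A above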

You can use these on individual ITensors of the MPO or MPS in the DMRG calculation.

-Matt

commented by (250 points)
Hello Matt,

Thank you for your response!

I tried to use nnz(ITensorA)/dim(inds(ITensorA)) to get the sparsity within DMRG or the Hubbard model. However, the data in those applications are not exposed as individual ITensors. May I know where specifically I can apply nnz(ITensorA)/dim(inds(ITensorA)) in those two applications, for the MPO or MPS? Or is there another interface for doing so within DMRG or Hubbard?

Thanks!
commented by (14.1k points)
You could get the sparsity of the individual ITensors of an MPS or MPO, for example:

    auto N = 4;
    auto sites = SpinHalf(N);  // conserves QNs by default

    // Initialize a Neel state |Up,Dn,Up,Dn>
    auto state = InitState(sites);
    for(auto i : range1(N))
        {
        if(i%2 == 1) state.set(i, "Up");
        else         state.set(i, "Dn");
        }
    auto psi = MPS(state);

    // Accumulate the nonzero and total element counts over all MPS tensors
    auto nnz_tot = 0L;
    auto dim_tot = 0L;
    for(auto i : range1(N))
        {
        nnz_tot += nnz(psi(i));
        dim_tot += dim(inds(psi(i)));
        }
    PrintData(nnz_tot);
    PrintData(dim_tot);

The same thing would work for an MPO. Possibly we could make a definition for `nnz(MPS)` that is like the above code (the sum of the `nnz` of the ITensors of the MPS), but it is easy enough to calculate yourself.
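Something like this would do it (not built into ITensor, just wrapping the loop above in a helper):

    // Hypothetical helper: total number of nonzero elements of an MPS
    long
    nnz(MPS const& psi)
        {
        auto tot = 0L;
        for(auto i : range1(length(psi))) tot += nnz(psi(i));
        return tot;
        }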

You could calculate the sparsity of the MPS in this way before and after your DMRG run. I would guess that you would mostly want the sparsity of the optimized MPS that is returned by DMRG.

If you are interested in the sparsity at intermediate steps of DMRG, you could stop your calculation at certain sweeps, save the wavefunction, calculate the sparsity, and then restart your calculation. Alternatively, you could make an Observer object that is passed to DMRG and used to perform custom measurements of your wavefunction within DMRG. Please refer to a previous answer by Miles here: http://itensor.org/support/761/how-to-determine-the-convergence?show=761#q761 about how that works.
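Roughly, such an observer could look like the sketch below. The class name SparsityObserver is made up, and I am assuming the DMRGObserver interface and the "Sweep"/"HalfSweep"/"AtBond" args that dmrg passes to measure, so treat it as a starting point rather than tested code:

    // Sketch of an observer that prints the MPS sparsity once per sweep.
    // Assumes DMRGObserver's constructor and virtual measure method.
    class SparsityObserver : public DMRGObserver
        {
        MPS const& psi_;  // reference to the MPS being optimized
        public:
        SparsityObserver(MPS const& psi, Args const& args = Args::global())
          : DMRGObserver(psi, args), psi_(psi) { }

        void
        measure(Args const& args = Args::global()) override
            {
            DMRGObserver::measure(args);
            // Measure once per sweep: last bond of the second half-sweep
            if(args.getInt("HalfSweep",0) == 2 && args.getInt("AtBond",1) == 1)
                {
                auto nnz_tot = 0L;
                auto dim_tot = 0L;
                for(auto i : range1(length(psi_)))
                    {
                    nnz_tot += nnz(psi_(i));
                    dim_tot += dim(inds(psi_(i)));
                    }
                printfln("Sweep %d: sparsity = %.8f", args.getInt("Sweep",0),
                         double(nnz_tot)/double(dim_tot));
                }
            }
        };

You would then pass it to the version of dmrg that accepts a DMRGObserver, making sure the observer holds a reference to the MPS that dmrg is actually updating (see the linked answer for the details of passing an observer).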
commented by (250 points)
Thank you! It really helps.
commented by (250 points)
Hello Matt,

I checked the sparsity of those applications and the values are between 6% and 40%. Are there any physics applications/models in ITensor whose sparsity is very low, e.g., less than 0.0001%?

Any comments are welcome!

Thanks!
commented by (14.1k points)
I don't often look at the sparsity, so I'm not sure, but none that I am aware of. However, the sparsity depends strongly on the model you are looking at (particularly what kind of symmetries it has) and what type of algorithm you are using.

For example, if you are using DMRG to find the ground state of the Hubbard model, you can choose to conserve only the fermionic parity, which would lead to a sparsity of 50%, or you can conserve particle number and spin, which would lead to a lower sparsity (and therefore a lower runtime). Additionally, if you use DMRG to study the 2D Hubbard model on a cylinder, you could also conserve momentum around the cylinder, which would lead to an even lower sparsity.
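For concreteness, the choice of conserved quantities is made when constructing the site set. A sketch, assuming the "ConserveNf" and "ConserveSz" option names of the Electron (formerly Hubbard) site set:

    auto N = 100;

    // Conserve only fermion parity: fewer, larger blocks (higher sparsity ratio)
    auto sites_parity = Electron(N, {"ConserveNf=", false, "ConserveSz=", false});

    // Conserve particle number and Sz: more, smaller blocks (lower sparsity ratio)
    auto sites_full = Electron(N, {"ConserveNf=", true, "ConserveSz=", true});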

Using non-abelian symmetries like SU(2) symmetry, which is not currently available in ITensor, would lead to even lower sparsity. Additionally, higher dimensional tensor network algorithms (like PEPS) may have lower sparsity than MPS algorithms like DMRG (but I have not compared, so that is just speculation).

Another factor in the sparsity measure is that we are only using block sparsity right now in ITensor. Hamiltonians are quite sparse beyond just block sparsity (i.e. likely the blocks themselves are sparse), so it would be nice to take advantage of that as well, but we have not tried that yet.
commented by (250 points)
Matt, thank you so much for your comments!