Hi,

I am having some difficulties using multithreading for my calculations, which involve tensor networks (in Julia). Just to clarify, I don't want to use multiple threads to speed up the contraction of large tensors; I want to evaluate a function for multiple configurations of parameters in parallel, with each function call using ITensor for tensor contraction.

So far I have tried the usual Threads.@threads at the beginning of a loop, but on each try I get a different error: sometimes one of the tensors ends up with too many indices, which happens randomly at multiple places in the code, and sometimes one of the threads consumes all the memory of the cluster (in which case I couldn't pinpoint the exact place in the code causing it).
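To illustrate, the pattern I'm using looks roughly like this (a minimal sketch; `evaluate_energy` and `params` are placeholders, not my actual code):

```julia
using ITensors
using LinearAlgebra

BLAS.set_num_threads(1)  # also tried disabling BLAS threading, see below

params = [0.1, 0.2, 0.3, 0.4]  # placeholder parameter configurations
results = Vector{Float64}(undef, length(params))

Threads.@threads for i in eachindex(params)
    # each call builds and contracts an ITensor network internally
    results[i] = evaluate_energy(params[i])
end
```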

If I don't use multithreading and just evaluate the function one call at a time in a regular for loop, I never have these problems. It seems as if the different threads interfere with each other and modify memory that the other threads are using. I also tried to explicitly disable multithreading at the level of contractions with BLAS.set_num_threads(1), but it didn't change the results.

Am I doing something wrong? Does anyone know if there might be some potential problems with multithreading functions that perform ITensor calculations?

Thanks,
Matan.

commented by (70.1k points)
Hi Matan,
Good question. My first guess is that you are running into a subtle aspect of Julia, and therefore of ITensor: because of the reference semantics of variables in Julia and the way Julia arrays work, a lot of ITensor code does not make distinct copies of things like ITensor data, only views. So your multithreaded code may need to make more explicit copies of things, both before the loop begins and inside its body, to avoid different threads accessing the same memory (so-called "race conditions").

To be any more specific, it would be helpful if you could provide a small example of some code that errors or doesn’t work as expected.

Thanks,
Miles