How do I iterate over an array with multiple threads?

How can several threads correctly read items from a list of values, so that a lagging thread doesn't overwrite the pointer with a stale value and the same item isn't processed a hundred times over?
July 12th 19 at 16:52
2 answers
July 12th 19 at 16:54
In Python, multi-threaded iteration over an array will be slower than a single thread because of the GIL.

You can spread the work across processes, but then there is no shared array: each process gets its own copy. Alternatively, combine processes with Redis or a similar store, or use C-extension libraries, which are not affected by the GIL.
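If you do need state that all processes can see, the standard library offers `multiprocessing.Array`, a ctypes buffer backed by shared memory. A minimal sketch (the function names `square_in_parallel` and `_worker` are illustrative, not from the question):

```python
import multiprocessing

def _worker(shared, start, stop):
    # Each process squares its own slice of the shared buffer in place.
    for i in range(start, stop):
        shared[i] = shared[i] * shared[i]

def square_in_parallel(values):
    # 'i' = C signed int; the buffer lives in shared memory,
    # so child processes mutate the same data, not private copies.
    shared = multiprocessing.Array('i', values)
    mid = len(values) // 2
    procs = [multiprocessing.Process(target=_worker, args=(shared, 0, mid)),
             multiprocessing.Process(target=_worker, args=(shared, mid, len(values)))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(shared)

if __name__ == '__main__':
    print(square_in_parallel(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that `multiprocessing.Array` serializes access through a lock by default, so it suits occasional shared state, not tight inner loops.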
July 12th 19 at 16:56
For example, summing the elements of an array across multiple processes (without C extensions):

# -*- coding: utf-8 -*-

import numpy
import multiprocessing

def sub_sum(z):
    # Partial sum, computed inside a worker process.
    return numpy.sum(z)

def parallel_sum(values, cpuz):
    # Chunk boundaries: roughly equal slices, one per process.
    boundaries = [i for i in range(0, len(values), len(values) // cpuz)]
    boundaries[-1] = len(values)  # make sure the last chunk reaches the end

    with multiprocessing.Pool(cpuz) as pool:
        rc = pool.starmap(sub_sum, [(values[c1:c2],) for c1, c2 in zip(boundaries[:-1], boundaries[1:])])

    return sum(rc)

if __name__ == '__main__':
    cpuz = multiprocessing.cpu_count()
    n = 999
    values = numpy.array([i for i in range(n)])

    print('cpuz =', cpuz)
    print('sum =', parallel_sum(values, cpuz))
    print('sum =', n * (n - 1) // 2)  # closed-form check: 0 + 1 + ... + (n-1)
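The manual boundary arithmetic can be delegated to `numpy.array_split`, which splits an array into a given number of chunks and tolerates lengths that don't divide evenly. A variant sketch of the same idea:

```python
import numpy
import multiprocessing

def sub_sum(z):
    # Partial sum, computed inside a worker process.
    return numpy.sum(z)

def parallel_sum(values, cpuz):
    # array_split produces cpuz nearly-equal chunks, no boundary math needed.
    chunks = numpy.array_split(values, cpuz)
    with multiprocessing.Pool(cpuz) as pool:
        partial = pool.map(sub_sum, chunks)
    return sum(partial)

if __name__ == '__main__':
    values = numpy.arange(999)
    print('sum =', parallel_sum(values, multiprocessing.cpu_count()))
```

Since each task takes a single argument here, `pool.map` suffices in place of `starmap`.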
