How do I distribute the workload in my program?

Hello, I'm not sure how to proceed.
I have an app that parses data from two sites. Everything is done synchronously, step by step, which takes 20-25 seconds in total. For one of the sites the grequests library (gevent-based) is used. I have an idea for how to speed up the process, and I've read a lot about it. In the end I found three options for Python: threading, multiprocessing, and asynchronous requests.
What would be the best architecture for my parser? Here is how I see it (a sketch follows below).
There is a main process that ties together the parser for the first site and the parser for the second site.
Each parser gets its own process, so in total we have 3 processes (the main one, parser1, parser2).
Inside the parser1 and parser2 processes, asynchronous requests are used.
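A minimal sketch of what I mean, with grequests for the async part (the URLs, the result handling, and the parse functions are placeholders, not my actual parsers):

    import multiprocessing

    def parse_site1(queue):
        import grequests  # imported in the child so gevent patches only that process
        urls = ["https://site1.example/page1", "https://site1.example/page2"]
        responses = grequests.map(grequests.get(u) for u in urls)
        queue.put(("parser1", [r.status_code for r in responses if r is not None]))

    def parse_site2(queue):
        import grequests
        urls = ["https://site2.example/page1"]
        responses = grequests.map(grequests.get(u) for u in urls)
        queue.put(("parser2", [r.status_code for r in responses if r is not None]))

    if __name__ == "__main__":
        q = multiprocessing.Queue()
        workers = [multiprocessing.Process(target=parse_site1, args=(q,)),
                   multiprocessing.Process(target=parse_site2, args=(q,))]
        for w in workers:
            w.start()
        results = [q.get() for _ in workers]  # one result per parser
        for w in workers:
            w.join()
        print(results)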

Am I thinking about this correctly? Or should I be beaten with a shovel? :)

And another small question. Is the difference between asynchronous requests and threads that with async the socket is not closed after each request, while a thread just runs in parallel, using all the system's resources? Is that right?
July 2nd 19 at 17:49
2 answers
July 2nd 19 at 17:51
Solution
You've got asynchronous requests wrong; too lazy to look for illustrative pictures right now.

In general, take a look at multicurl if anything - cheap and cheerful.
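Roughly, a minimal sketch of the multicurl approach via pycurl's multi interface (the URLs are placeholders):

    import pycurl
    from io import BytesIO

    urls = ["https://site1.example/", "https://site2.example/"]

    multi = pycurl.CurlMulti()
    handles = []
    for url in urls:
        buf = BytesIO()
        c = pycurl.Curl()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEFUNCTION, buf.write)
        c.buf = buf  # keep the buffer reachable for later
        multi.add_handle(c)
        handles.append(c)

    # Drive all transfers concurrently until every handle is finished.
    num_active = len(handles)
    while num_active:
        while True:
            ret, num_active = multi.perform()
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                break
        if num_active:
            multi.select(1.0)  # wait for socket activity

    for c in handles:
        print(c.getinfo(pycurl.RESPONSE_CODE), len(c.buf.getvalue()))
        multi.remove_handle(c)
        c.close()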

If we're being serious, there are Scrapy and Grab, which cover scraping needs from beginner to advanced (there was even a topic here about a Scrapy-based scraping project with a $10,000 budget).

More seriously, look at the ready-made components: the Downloader there combines all of this, and you can either use it as-is or rework it to fit your needs right away.
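To give an idea, a minimal Scrapy spider that hits both sites (the URLs, selectors, and field names are placeholders); Scrapy's Downloader fetches everything concurrently out of the box:

    import scrapy

    class TwoSitesSpider(scrapy.Spider):
        name = "two_sites"
        start_urls = [
            "https://site1.example/",
            "https://site2.example/",
        ]

        def parse(self, response):
            # parse() is called once per fetched response.
            yield {
                "url": response.url,
                "title": response.css("title::text").get(),
            }

Run it with "scrapy runspider two_sites_spider.py -o items.json" and the items land in one file.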
July 2nd 19 at 17:53
Solution
Write the parser for each site in a separate file, then use https://www.gnu.org/software/parallel/ to run them (a sketch of such a file is below).
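For example, each file might look something like this hypothetical parser1.py, which prints its result as JSON to stdout so that whoever launches it can collect the output:

    # parser1.py (hypothetical) - prints its result as one JSON line
    import json
    import requests

    def main():
        resp = requests.get("https://site1.example/")  # placeholder URL
        print(json.dumps({"site": "site1",
                          "status": resp.status_code,
                          "bytes": len(resp.content)}))

    if __name__ == "__main__":
        main()

Both can then be launched at once with, e.g., "parallel python ::: parser1.py parser2.py".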
How do I get the data from the different processes? - Afton commented on July 2nd 19 at 17:56
I'm a little confused. So, using this tool, I run the first parser, then the second, then a third, and after that I basically just start collecting the data from the parsers. Yes? - Afton commented on July 2nd 19 at 17:59
: why do you want to share data between processes?

You launch as many parsers as you want simultaneously, and then parallel itself keeps that many jobs running until they're all done - marilyne_Roh commented on July 2nd 19 at 18:02
That is, for example, I need to poll the sites every 5 seconds. So in the main thread, every 5 seconds, I run the parsers using parallel. They finish their work and produce output - that's the data I have to deal with. Then I start processing this data in the main thread. How do I pull the data out once a parser has finished? - Afton commented on July 2nd 19 at 18:05
: the real problem isn't what your question asks - edit your main question.

How you get the data depends on how you wrote the code - marilyne_Roh commented on July 2nd 19 at 18:08
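For illustration, one common way to do it, assuming the parsers print JSON to stdout as in the parser1.py sketch above (the file names are hypothetical): launch them as child processes every 5 seconds and read what they write:

    import json
    import subprocess
    import time

    PARSERS = ["parser1.py", "parser2.py"]  # hypothetical file names

    while True:
        # Start both parsers concurrently.
        procs = [subprocess.Popen(["python", p], stdout=subprocess.PIPE)
                 for p in PARSERS]
        # Read each parser's stdout once it finishes.
        results = [json.loads(proc.communicate()[0]) for proc in procs]
        print(results)  # process the combined data in the main thread here
        time.sleep(5)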
