Why does the site hang with 30-40 concurrent requests?

There is a script that sends requests to a site using curl multi; when the requests are processed, the site sends callbacks in response (each request is sent separately and gets its own response).

When 30-35 callbacks arrive at the same time, the script runs for each of them, makes a database call, and then starts executing a curl request through a proxy; naturally, that request can hang for a long time precisely because it goes through a proxy.

I have heard about queue systems, but I don't quite understand how they work and whether one would help in my situation.

If a queue system simply defers execution of a request until the previous requests have finished, I'm afraid it won't suit me, because the lifetime of what the site gives me via the callback is 2 minutes, and I must make the request within that window.

=====
When a lot of callbacks are being handled (each of them runs a curl request), trying to open the phpMyAdmin page takes minutes... although FileZilla, etc. still work quickly.
Here is what the load looks like:
5df93b7ea262a878591165.png

The question, ultimately: can anything be done here, or should I simply not load a large number of new requests until the old ones have finished? Am I just short on resources? Would a queue system help in my case?
April 3rd 20 at 18:27
4 answers
April 3rd 20 at 18:29
Solution
From what I understand, inside your callback handler the request is made with curl to somewhere else (via a proxy), so a worker from the web server pool is held until that request finishes. Your server quickly exhausts its pool of web workers, even if you raise it to 100 and have enough memory. This is, of course, a bad design.
Yes, basically you need to implement a queue system.
You need to accept the incoming callback request and free the pool as quickly as possible. To do that, limit its work to putting the necessary data into some storage, or to handing a message to a queue manager, delegating the actual sending to a separate pool of workers: for example, 1-4 workers/daemons on ReactPHP or Node.js that pick up the stack of tasks and send the requests asynchronously.
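A rough sketch of what such a "thin" callback endpoint could look like with php-amqplib (the queue name 'callbacks' and the payload fields are assumptions, not your actual code):

<?php
// callback.php - accept the incoming callback, enqueue it, and return immediately (sketch).
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Whatever the callback actually carries (token, task id, ...) - assumed fields.
$payload = json_encode([
    'token'       => $_POST['token'] ?? '',
    'received_at' => time(),
]);

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();

// Durable queue so pending tasks survive a broker restart.
$channel->queue_declare('callbacks', false, true, false, false);

$channel->basic_publish(
    new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]),
    '',
    'callbacks'
);

$channel->close();
$connection->close();

// Respond right away - the web worker is free again within milliseconds.
http_response_code(200);
echo 'OK';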
@Rosanna77, please tell me, how can this be done in my case if the token I receive from the site (from solving the captcha) is only valid for 120 seconds? If I set up a queue system, it won't help in this case, as I understand it? In theory it goes like this: a callback arrives and I put it in the queue, whereas on the contrary I need to send the request right away so as not to get a stale reply. - Fay_Prosacco commented on April 3rd 20 at 18:32
120 seconds is more than enough. If you build the queue with RabbitMQ or Gearman, the overhead of delegating will be under a second.
You should be more worried about how to write the queue worker. - Rosanna77 commented on April 3rd 20 at 18:35
@Rosanna77, 120 seconds? Did you take into account that the proxies through which the requests are made are loaded to capacity and often don't work? I've set 20-second connect timeouts.)
The picture I imagine is this: a callback arrives, it goes into the queue, it leaves the queue only when a slot frees up, and a slot frees up only when the old requests finish; and since the proxies are loaded to capacity, the old requests will hang for a very long time.) - Fay_Prosacco commented on April 3rd 20 at 18:38
@Fay_Prosacco, and how many requests do you think one of your workers will be able to take on? - Rosanna77 commented on April 3rd 20 at 18:41
@Rosanna77, honestly, I don't know, so I can't say. I probably just need advice from people who have been in similar situations and know what can be done so that it works without freezing.) - Fay_Prosacco commented on April 3rd 20 at 18:44
@Fay_Prosacco, OK, here's a hint: at least 1000.
https://github.com/php-api-clients/rabbitmq-manage...
https://sergeyzhuk.me/2017/07/26/reactphp-http-client/ - Rosanna77 commented on April 3rd 20 at 18:47
@Rosanna77, well, to summarize: I need to put the requests into a queue using RabbitMQ or Gearman, and that way I'll avoid the lockups the server experiences now when 20-30 callbacks arrive at once, since a queue will be used.

The question that immediately arises: so RabbitMQ just puts the requests into a queue, like 1, 2, 3, 4, and when the server finishes a task it simply takes the next one from the queue.
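A minimal sketch of what that one-task-at-a-time consumer could look like with php-amqplib (the queue name 'callbacks' is assumed; the actual proxied request is left as a comment):

<?php
// worker.php - take one task from the queue, process it, then take the next (sketch).
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();
$channel->queue_declare('callbacks', false, true, false, false);

// Hand this worker only one unacknowledged message at a time.
$channel->basic_qos(null, 1, null);

$channel->basic_consume('callbacks', '', false, false, false, false, function ($msg) {
    $task = json_decode($msg->body, true);
    // ... here the slow proxied request would be made ...
    echo 'processed: ', $msg->body, PHP_EOL;
    $msg->ack(); // acknowledged - the broker hands over the next task
});

while ($channel->is_consuming()) {
    $channel->wait();
}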

So it turns out this is just a cap on the number of requests being processed at once; in that case maybe it's easier for me to simply make fewer requests to the site that sends the callbacks. For example, set a limit of 10 requests, and as soon as they have executed and their status has changed, load 10 more, and so on - that's much simpler.)
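That "at most 10 in flight" idea could be sketched with the same curl_multi roughly like this (the URL list, proxy address and timeouts below are placeholders):

<?php
// Rolling window over curl_multi: never more than $maxConcurrent requests in flight (sketch).
$urls = ['https://example.com/task/1', 'https://example.com/task/2' /* ... */];
$maxConcurrent = 10;

$mh = curl_multi_init();
$inFlight = 0;

$add = function ($url) use ($mh, &$inFlight) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_PROXY          => 'proxy.example:3128', // placeholder proxy
        CURLOPT_CONNECTTIMEOUT => 20,
        CURLOPT_TIMEOUT        => 60,
    ]);
    curl_multi_add_handle($mh, $ch);
    $inFlight++;
};

while ($urls || $inFlight > 0) {
    // Top the window up to the limit.
    while ($urls && $inFlight < $maxConcurrent) {
        $add(array_shift($urls));
    }

    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 1.0);

    // Collect whatever has finished and free its slot.
    while ($info = curl_multi_info_read($mh)) {
        $ch   = $info['handle'];
        $body = curl_multi_getcontent($ch); // handle the response here
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
        $inFlight--;
    }
}
curl_multi_close($mh);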

It seems strange to me that the server chokes on 20-30 concurrent curl requests, doesn't it?(( - Fay_Prosacco commented on April 3rd 20 at 18:50
@Fay_Prosacco, you haven't understood what is happening. Your web server has roughly 30 workers. One worker handles exactly one running PHP script at a time. In your current setup a worker accepts the incoming callback request from the third-party server and, without finishing it, you fire a new outgoing request, which then waits a long time for a response from that third-party service.
When 30-35 callbacks arrive in quick succession, your entire pool/queue of workers is exhausted, because they are released slowly due to the slow requests.
So one more worker to serve your phpMyAdmin page gets allocated only after a delay (after some callback script finishes). And the worst part of this situation is not that, but the fact that at that moment you cannot accept new callbacks at all, and that is exactly where you lose the allotted 120 seconds. You probably didn't even know that was happening?

By creating a queue you will gain a good margin of fault tolerance, because the web worker will finish much faster. You can, of course, go the other way: spin up, say, 200 web workers and hope that pool is never exhausted, but I am not sure you have enough server resources for that. If it is possible to limit the number of callbacks, do that as well. But I don't know how well that solution will work; I think it is so-so. - Rosanna77 commented on April 3rd 20 at 18:53
@Rosanna77, only one question remains: why will the workers be released faster?

Will the proxies suddenly start working faster, or what?)
I have one file where I both accept the callback and make the request to the third-party service: a callback arrives, I just make the request - what's wrong with that? Are you saying I should first record the callback's data and only then let the queue pick it up?) Is that it?) And how is that faster, if it is still one worker per one job? Or am I missing something? :(

Right now the workers run out because a lot of curl requests are running at the same time and they all hang for a long time because of the proxies. Just because I add a queue, I don't see why things should get better: sure, the queue itself won't hang, but in fact I can't process any more than before, right? Then why go through this whole epic for nothing, spending time studying and rewriting the code, when I could just set a limit on outgoing requests so that fewer callbacks arrive and get almost the same result, no?) What I probably really need to figure out is how to run 50-100 curl requests simultaneously without waiting. - Fay_Prosacco commented on April 3rd 20 at 18:56
@Fay_Prosacco,
why will the workers be released faster?

Because the web worker will only deal with receiving the callback and handing it off to the queue.
Let me sketch a timeline:

WEB WORKER WITH A QUEUE
|======= 5 ms ========|==== 1 ms ======|======>
| receive callback request | write to queue | done

WEB WORKER, BLOCKING (CURRENT)
|======= 5 ms ========|==== 15000 ms =======================|======>
| receive callback request | proxied request | done


Yes, the proxied request still has to be executed. But it will be executed by ANOTHER process (or processes) that has nothing to do with the web server. That is why the web worker is released sooner and can accept more callbacks instead of returning a 502 error.

Now, about executing the requests through a proxy. I gave two links above showing how to write a simple PHP daemon on ReactPHP that will asynchronously execute THOUSANDS of requests through a proxy, because these operations are non-blocking and use almost no CPU (only memory is needed).
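Very roughly, the sending part of such a daemon could look like this with react/http's Browser (the URLs are placeholders; in a real daemon they would come from the queue, and a proxy could be plugged in via a connector such as clue/reactphp-http-proxy):

<?php
// daemon.php - fire many HTTP requests concurrently without blocking (sketch).
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ResponseInterface;
use React\Http\Browser;

$browser = new Browser();
// Roughly, a proxy would be wired in as: new Browser(new React\Socket\Connector(['tcp' => $proxyConnector]));

// In a real daemon these would be taken from the queue, not hard-coded.
$urls = [
    'https://example.com/task/1',
    'https://example.com/task/2',
    'https://example.com/task/3',
];

foreach ($urls as $url) {
    $browser->get($url)->then(
        function (ResponseInterface $response) use ($url) {
            echo $url, ' -> ', $response->getStatusCode(), PHP_EOL;
        },
        function (Exception $e) use ($url) {
            echo $url, ' failed: ', $e->getMessage(), PHP_EOL;
        }
    );
}
// The ReactPHP event loop runs automatically when the script ends.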
Thus the pool of web workers stays almost always free, while the requests through the proxy run in parallel in a separate process. - Rosanna77 commented on April 3rd 20 at 18:59
@Rosanna77, thank you, I will try it. - Fay_Prosacco commented on April 3rd 20 at 19:02
@Fay_Prosacco, you're welcome. Alternatively, you could even teach the daemon both to accept the callbacks and to send the requests, but in my opinion that is less reliable. You could even move the daemon to a separate server altogether. - Rosanna77 commented on April 3rd 20 at 19:05
April 3rd 20 at 18:31
1) If PHP runs through FPM, increase the number of processes as long as there is enough memory/CPU
2) If it runs via Apache, throw out Apache and switch to FPM, then repeat step 1
3) If you really want a queue: feed the requests to a queue server, give the client the task ID, and have the client poll every N seconds whether the task has finished or not
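For option 3, a minimal sketch of the client side (the /enqueue and /status endpoints and the JSON fields are made up):

<?php
// Client side of option 3: submit a task, then poll for its status every N seconds (sketch).

// 1. Hand the request to the queue server and receive a task id back.
$response = file_get_contents('https://example.com/enqueue?url=' . urlencode('https://example.com/job'));
$taskId   = json_decode($response, true)['id'];

// 2. Poll every N seconds until the task is reported as finished.
$intervalSeconds = 5;
do {
    sleep($intervalSeconds);
    $status = json_decode(file_get_contents('https://example.com/status?id=' . urlencode($taskId)), true);
} while (($status['state'] ?? '') !== 'done');

echo 'Result: ', $status['result'] ?? '', PHP_EOL;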
Memory, as I understand it, is at 79% usage, i.e. already a lot.

So a queue system simply receives a task and executes it once a free slot opens up, or what? The result returned to me by the callback has a lifetime of 2 minutes, so I can't just park it in a queue... - Fay_Prosacco commented on April 3rd 20 at 18:34
@Fay_Prosacco, the queue receives tasks and hands them to workers as they become free; you tune the number of workers to the desired load/processing speed. At that point decide what is more important to you: fast processing with a hung server, or a free server but longer processing time. - kane66 commented on April 3rd 20 at 18:37
@kane66, Apache usually has nothing to do with it. - christop_Larkin32 commented on April 3rd 20 at 18:40
@christop_Larkin32, Apache usually guzzles resources, since it doesn't parallelize well on its own. - kane66 commented on April 3rd 20 at 18:43
@kane66, that depends on how it is configured and used. - christop_Larkin32 commented on April 3rd 20 at 18:46
April 3rd 20 at 18:33
For a general understanding of what is happening: "How can my server survive?"
April 3rd 20 at 18:35
Most likely it is due to blocking I/O operations.
Can you elaborate? What I/O? - Fay_Prosacco commented on April 3rd 20 at 18:38
@Fay_Prosacco, operations that receive/send data over the network, to disk, or to a database. - christop_Larkin32 commented on April 3rd 20 at 18:41
@Fay_Prosacco, https://tproger.ru/translations/diversity-of-input... - christop_Larkin32 commented on April 3rd 20 at 18:44

Find more questions by tags: PHP, Nginx, Apache, Ubuntu, Web Development