Why does MongoDB retrieve documents so slowly?

Hi all.

Here's the situation: Mongo runs the query very quickly, but iterating over the result in a foreach takes a very long time. I measured the time with microtime(true); the output is shown in the comments below.
// 1464330248.2163
$MongoCollection = (new \MongoClient())->selectCollection('db', 'coll');
$MongoCursor = $MongoCollection->find($where, $fields)->sort($sort)->limit(50)->skip($this->offset);
// 1464330248.2165

foreach($MongoCursor as $doc) {

}

// 1464330253.6667


As the timestamps show, the loop alone takes more than five seconds, and this happens even when nothing is done inside the loop body.
I'm using MongoDB v3.2.6.
July 9th 19 at 11:14
2 answers
July 9th 19 at 11:16
Solution
Mongo runs the query very quickly, but iterating over the data in a foreach takes a very long time.

1) The query is actually executed at the moment you start pulling the data; the find command only prepares the request.
2) The query runs slowly because you don't have the index it needs. Mongo has nothing to do with it: any database is slow without indexes, because all the data is scanned and sorted at query time.
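Point 1 can be illustrated without MongoDB at all: a plain PHP generator behaves the same way as a cursor, doing no work until it is iterated. This is a sketch only; slowQuery() and the usleep() delay are stand-ins for the server-side scan, not real driver calls:

```php
<?php
// Stand-in for a MongoCursor: a generator runs no code until iterated.
function slowQuery(): Generator
{
    usleep(200000); // pretend the server scans and sorts the collection here
    foreach ([1, 2, 3] as $doc) {
        yield $doc;
    }
}

$t0 = microtime(true);
$cursor = slowQuery();                // returns immediately; nothing ran yet
$prepared = microtime(true) - $t0;

$docs = [];
foreach ($cursor as $doc) {           // the real work happens here
    $docs[] = $doc;
}
$iterated = microtime(true) - $t0;

printf("prepare: %.4fs, iterate: %.4fs\n", $prepared, $iterated);
```

So the timestamps in the question measure how long find() takes to prepare the cursor, not how long the query takes; the query itself runs inside the foreach.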
I added indexes. For example, db.createIndex({"price": 1, "price": -1}), and it is still slow - Urban66 commented on July 9th 19 at 11:19
: That's why I wrote that you need to add the right index, not just an index.
The index has to be built for the query.

Post your full query: the filter (condition), the sort, skip and limit - Bianka0 commented on July 9th 19 at 11:22
: db.items.find({"name": {$regex: userInput}, "price" : {$gte: 0}}, {"_id":0, "name":1, "price":1}).sort({"price": -1}).limit(50).skip(0) - Urban66 commented on July 9th 19 at 11:25
: OK, the problem is $regex - don't use it on large collections. The index is not used, and it turns into a full scan.
If you need text search, use a full-text index. If you want to search by a few keywords, you can store them in an array (with an index on it). - Bianka0 commented on July 9th 19 at 11:28
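With the legacy driver from the question, the full-text variant could look roughly like this. This is a sketch only, assuming a collection named 'items' (as in the query above) and a running server; it is not a verified implementation:

```php
<?php
// Sketch only: requires a running MongoDB server and the legacy mongo extension.
$coll = (new \MongoClient())->selectCollection('db', 'items');

// A full-text index on "name"; unlike an unanchored $regex, $text can use it.
$coll->createIndex(['name' => 'text']);

// Text search plus the price filter, still sorted and limited as before.
$cursor = $coll->find(
    ['$text' => ['$search' => $userInput], 'price' => ['$gte' => 0]],
    ['_id' => 0, 'name' => 1, 'price' => 1]
)->sort(['price' => -1])->limit(50);
```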
What about using Sphinx? - Urban66 commented on July 9th 19 at 11:31
: Sphinx is even better, but it requires integration - Bianka0 commented on July 9th 19 at 11:34
July 9th 19 at 11:18
Well, for starters, look at
https://docs.mongodb.com/manual/reference/method/d...

Secondly - why do you need cursors? In every DBMS they have always been slower than direct queries.

Their use is justified for tricky, complex processing on the server - which is not the case here.
And how should I pull the data then? - Urban66 commented on July 9th 19 at 11:21
:
Start by reading the official documentation

php.net/manual/ru/class.mongocursor.php

There are 2 methods there.
Why did you choose the second one?

Just because you understood it?

The cursor method lets you retrieve data from the server in parts, but for a normal web application this makes no sense. It should be used only when the data in the response (the result of find()) is too large. - Bianka0 commented on July 9th 19 at 11:24
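In plain PHP the two styles look like this (a sketch with ArrayIterator standing in for a MongoCursor; either way the whole result set ends up being read):

```php
<?php
// ArrayIterator stands in for a MongoCursor over three documents.
$cursor = new ArrayIterator([
    ['name' => 'a', 'price' => 1],
    ['name' => 'b', 'price' => 2],
    ['name' => 'c', 'price' => 3],
]);

// Method 1: materialize the whole result set into an array at once.
$all = iterator_to_array($cursor);

// Method 2: stream document by document (foreach rewinds the iterator).
$streamed = [];
foreach ($cursor as $doc) {
    $streamed[] = $doc;
}

// Both end up reading every document.
var_dump($all === $streamed); // prints bool(true)
```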
Both the first and the second option take exactly as long as in my example! - Urban66 commented on July 9th 19 at 11:27
:
the second is what you have.
the first is not.
the first one has comments about the rare cases where you cannot use it - read them carefully. - Bianka0 commented on July 9th 19 at 11:30
: A simple iteration over the array from the 2nd option:
foreach run 1464335383.7088
end foreach 1464335386.0473

And a simple call to iterator_to_array():
iterator run 1464335386.0473
iterator end 1464335388.3665 - Urban66 commented on July 9th 19 at 11:33
: Updated. That cannot be.
Look carefully at the query plan - maybe that is where it gets stuck. - Bianka0 commented on July 9th 19 at 11:36
There are 4M products in the DB, by the way - Urban66 commented on July 9th 19 at 11:39
: And the database weighs 6 GB. It is one solid database! Without shards - Weston98 commented on July 9th 19 at 11:42
: That makes me laugh. I have a few billion entities in a database, without shards, and everything flies.
Study the query plan and the indexes.

Is the database local, or only reachable over the network? - Urban66 commented on July 9th 19 at 11:45
: locally - Weston98 commented on July 9th 19 at 11:48
: For modern databases on modern hardware, 6 GB and 4M are laughable numbers. That is even less than the amount of RAM in a typical server.

Learn indexes and the query plan.
Incidentally, how much RAM does Mongo eat? If it has very little memory, that could also be the problem. - Urban66 commented on July 9th 19 at 11:51
: And how can I see that? - Weston98 commented on July 9th 19 at 11:54
:
In the Windows Task Manager.
On *nix: losst.ru/ispolzovanie-operativnoj-pamyati-linux - Urban66 commented on July 9th 19 at 11:57
How to see the query plan - the first link in my first reply. - Urban66 commented on July 9th 19 at 12:00
: Mongo eats 12 GB - Weston98 commented on July 9th 19 at 12:03
: I noticed that if I remove the sort, the query completes in 0.3000 milliseconds - Urban66 commented on July 9th 19 at 12:06
Without sorting, limit/offset will not work as it should, because without a sort the order of the returned data is not guaranteed.

One query might return 1, 3, 5, 2, 5 for a limit/offset covering the first through fifth elements,
while the next might return 14, 2, 1, 22, 15 for the sixth through tenth. - Weston98 commented on July 9th 19 at 12:09
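The effect is easy to reproduce in plain PHP (a sketch; the two hard-coded orders simulate a server that is free to return unsorted rows in any order):

```php
<?php
// Five rows, no sort: the server may return them in any order each time.
$firstQuery  = [5, 1, 4, 2, 3]; // order seen by the "page 1" request
$secondQuery = [2, 4, 1, 3, 5]; // order seen by the "page 2" request

$pageSize = 2;
$page1 = array_slice($firstQuery, 0, $pageSize);          // skip(0)->limit(2)
$page2 = array_slice($secondQuery, $pageSize, $pageSize); // skip(2)->limit(2)

// Row 1 shows up on both pages, while rows 2 and 4 never appear at all.
printf("page1: %s; page2: %s\n", implode(',', $page1), implode(',', $page2));
```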
:
if it flies without the sort, then the indexes needed for the sorting are missing. 100%. - Urban66 commented on July 9th 19 at 12:12
: But I have already added indexes for the sorting - Weston98 commented on July 9th 19 at 12:15
: Then either the index does not cover the query, or it was not actually added.
Study the query plan - my first link, about explain. - Weston98 commented on July 9th 19 at 12:18
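With the legacy driver, the plan can be inspected straight from PHP. This is a sketch only, assuming a live server and the collection name 'items' from the query above:

```php
<?php
// Sketch only: requires a running MongoDB server and the legacy mongo extension.
$coll = (new \MongoClient())->selectCollection('db', 'items');

$plan = $coll->find(['price' => ['$gte' => 0]])
             ->sort(['price' => -1])
             ->limit(50)
             ->explain();

// "IXSCAN" in the winning plan means an index is used; "COLLSCAN" means a
// full collection scan, i.e. no suitable index covers this query.
print_r($plan);
```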
: Okay, thank you - Urban66 commented on July 9th 19 at 12:21

Find more questions by tags MongoDB, PHP