The situation is as follows: I receive around a million rows (maybe more) and write them to a List<MyClass> Followers.
I also have another list, List<MyClass> Likes, with the same large number of elements, about a million (maybe more).
Next, I need to compute Likes.Intersect(Followers).
Is storing the data in a regular List<T> the correct approach, or should I do it differently? And won't I run into the 2 GB limit on List<T>?
What is the fastest way to filter (find the intersection, essentially an INNER JOIN) these two data sets?
Use a database. That is exactly what they are for.
As for whether you will hit the limit or not: that is easy to estimate once you know the average size of MyClass.
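The "easy to estimate" part can be sketched like this. This is a minimal back-of-the-envelope sketch, not a precise measurement: the 64-byte average object size and the 8-byte reference size are assumed placeholders you would replace with values for your real MyClass.

```csharp
using System;

class MemoryEstimate
{
    static void Main()
    {
        // Assumed inputs: adjust to your real data.
        long rowCount = 1_000_000;  // elements per list
        long avgObjectSize = 64;    // assumed average size of a MyClass instance in bytes
        long referenceSize = 8;     // per-element reference stored inside the List<T> on x64

        // Two lists (Followers and Likes), each holding rowCount references
        // plus the objects those references point to.
        long perList = rowCount * (avgObjectSize + referenceSize);
        long total = 2 * perList;

        Console.WriteLine($"Approximate memory for both lists: {total / (1024.0 * 1024.0):F1} MB");
    }
}
```

With these assumed numbers the two lists need on the order of 140 MB, far from 2 GB; the estimate only becomes worrying when MyClass is large or the row count grows by an order of magnitude.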
cory.Hammes68 answered on June 27th 19 at 15:10
List<T> itself has no such limitation. On a 32-bit system you will run into the 2 GB address-space limit; on a 64-bit system the process will simply consume memory until it is exhausted. The approach is wrong because you do not know how much memory the end user has.
The database advice is correct and is the simplest approach. Use SQLite as a simple, portable embedded database.
A more difficult approach is to keep the records in a file and then do the Intersect in portions, algorithmically.
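The file-based approach can be sketched as follows. This is a minimal sketch under stated assumptions: the records are reduced to one string key per line in two hypothetical files (written out by the sketch itself so it is self-contained), one side's keys are loaded into a HashSet, and the other side is streamed line by line, so only one data set has to fit in memory at a time.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ChunkedIntersect
{
    static void Main()
    {
        // Hypothetical input files; in a real scenario these would be
        // the dumped keys of the Followers and Likes records.
        File.WriteAllLines("followers.txt", new[] { "a", "b", "c", "d" });
        File.WriteAllLines("likes.txt", new[] { "b", "d", "e" });

        // Load one set of keys into memory once (pick the smaller side).
        var followerKeys = new HashSet<string>(File.ReadLines("followers.txt"));

        // Stream the second file; File.ReadLines reads lazily,
        // so the whole file is never held in memory at once.
        var intersection = new List<string>();
        foreach (var key in File.ReadLines("likes.txt"))
        {
            if (followerKeys.Contains(key))
                intersection.Add(key);
        }

        Console.WriteLine(string.Join(",", intersection)); // b,d
    }
}
```

Note the sketch keeps duplicates from the streamed side; add the matches to a second HashSet instead of a List if you need set semantics on the output too.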
Aliya_OConner31 answered on June 27th 19 at 15:12
For large volumes you should use a HashSet<T>.
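A minimal sketch of that advice, with small stand-in lists instead of the real million-element ones: building a HashSet<T> from one list and calling IntersectWith makes each membership check an O(1) hash lookup, so the whole intersection costs O(n + m) and runs in place without allocating intermediate results.

```csharp
using System;
using System.Collections.Generic;

class HashSetIntersect
{
    static void Main()
    {
        // Small stand-ins for the real million-element lists.
        var followers = new List<int> { 1, 2, 3, 4, 5 };
        var likes = new List<int> { 4, 5, 6, 7 };

        // Build a set from one list, then intersect in place with the other.
        var result = new HashSet<int>(likes);
        result.IntersectWith(followers);

        // HashSet iteration order is unspecified, so sort before printing.
        var sorted = new List<int>(result);
        sorted.Sort();
        Console.WriteLine(string.Join(",", sorted)); // 4,5
    }
}
```

Note that a HashSet deduplicates its elements; if the lists can contain repeated records that must survive the join, key the records and use the streaming approach instead.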