How to handle large amounts of data using C#?

The situation is as follows: I get around a million rows (maybe more) and write them to a List<MyClass> Followers.
I also have another list, List<MyClass> Likes, with the same large number of elements, about a million (maybe more).
Next, I need to compute Likes.Intersect(Followers).
Is storing the data in a regular List<T> the right approach, or should I do it differently? And won't I run into the 2 GB limit on a List<T>?
What is the fastest way to filter (find the intersection, essentially an INNER JOIN) of these two data sets?
June 27th 19 at 15:06
3 answers
June 27th 19 at 15:08
Use a database. That is exactly what they are for.
As for whether you will hit the limit or not, that is easy to calculate once you know the average size of MyClass.
The application is built without using a database. - maiya commented on June 27th 19 at 15:11
: Well, that is the wrong approach. - nicklaus commented on June 27th 19 at 15:14
: Have you considered that using a database may not be justified for this task? - maiya commented on June 27th 19 at 15:17
: A million records with a requirement for fast filtering is justification enough. - nicklaus commented on June 27th 19 at 15:20
June 27th 19 at 15:10
List<T> has no explicit limit on the number of elements, but a single object cannot exceed 2 GB by default (see the gcAllowVeryLargeObjects setting). On a 32-bit system you will run up against the 2 GB process address space; on a 64-bit system you will simply fill up the RAM. The approach is risky, because you don't know how much memory the end user has.

The advice about a database is correct, and it is the simplest approach. Use SQLite as a simple embedded database.
A more difficult approach is to keep the records in files and then do the Intersect in portions, algorithmically.
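A minimal sketch of the file-based approach (the file names, one-id-per-line format, and string keys here are assumptions for illustration): only one of the two sets is loaded fully into a HashSet, and the other is streamed line by line, so you never hold both collections in memory at once.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ChunkedIntersect
{
    // followersPath and likesPath are hypothetical files with one id per line.
    static IEnumerable<string> IntersectFiles(string followersPath, string likesPath)
    {
        // Load the (preferably smaller) set fully; only this one must fit in memory.
        var followers = new HashSet<string>(File.ReadLines(followersPath));

        // Stream the second file lazily; each lookup against the set is O(1).
        foreach (var like in File.ReadLines(likesPath))
            if (followers.Contains(like))
                yield return like;
    }

    static void Main()
    {
        File.WriteAllLines("followers.txt", new[] { "a", "b", "c" });
        File.WriteAllLines("likes.txt", new[] { "b", "c", "d" });
        foreach (var id in IntersectFiles("followers.txt", "likes.txt"))
            Console.WriteLine(id); // prints b, then c
    }
}
```

If even one set does not fit in memory, the same idea extends to partitioning both files into buckets by hash of the key and intersecting bucket by bucket.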
June 27th 19 at 15:12
For large volumes you should use a HashSet.
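For example, a sketch of the in-memory variant (the long ids and sample values are assumptions; for MyClass elements you would also need to override Equals/GetHashCode or intersect on a key field): build a HashSet from one list, then shrink it in place with IntersectWith, which runs in roughly O(n + m) with no sorting.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class HashSetIntersect
{
    static void Main()
    {
        // Hypothetical sample data; in the real task each list holds ~1M ids.
        var followers = new List<long> { 1, 2, 3, 4 };
        var likes = new List<long> { 3, 4, 5 };

        // Build a set from one list, then intersect in place:
        // each membership check is O(1), no quadratic scan over lists.
        var common = new HashSet<long>(likes);
        common.IntersectWith(followers);

        Console.WriteLine(string.Join(",", common.OrderBy(x => x))); // 3,4
    }
}
```

Note that LINQ's Enumerable.Intersect already builds a hash set internally, so Likes.Intersect(Followers) on two Lists is not quadratic either; the explicit HashSet mainly buys you control over allocations and reuse of the set across queries.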
