Is there a way to optimize a large table with sparse data?
The gist: there is a MySQL table with 200 million rows and four columns (char(32), char(32), char(32), char(64)). The first three fields are UIDs; the last one is a text ID. Queries select on all fields, and the key length is 127.
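Schematically it looks something like this (table and column names are placeholders; one way to arrive at a key length of 127 is three full CHAR(32) columns plus a 31-character prefix on the last, but that part is only a guess):

```sql
-- Hypothetical schema for illustration; the real names differ.
CREATE TABLE big_sparse (
    uid1    CHAR(32) NOT NULL,  -- UID
    uid2    CHAR(32) NOT NULL,  -- UID
    uid3    CHAR(32) NOT NULL,  -- UID
    text_id CHAR(64) NOT NULL,  -- text ID
    -- 32 + 32 + 32 + 31 = 127 characters of key
    KEY k_all (uid1, uid2, uid3, text_id(31))
);
```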
The catch is that selects against this table go through a WHERE EXISTS for about 13 thousand rows.
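The access pattern is roughly the following (the driving table and its columns are invented for illustration):

```sql
-- Sketch of the query shape: the outer table supplies ~13k rows,
-- and each one probes the 200M-row table via EXISTS.
SELECT d.id
FROM driver d  -- ~13 thousand rows
WHERE EXISTS (
    SELECT 1
    FROM big_sparse b
    WHERE b.uid1 = d.uid1
      AND b.uid2 = d.uid2
      AND b.uid3 = d.uid3
      AND b.text_id = d.text_id
);
```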
You can imagine the size of an index with a key like that. MySQL has no analogue of PostgreSQL's hstore, nor of jsonb; partial indexes and partitioning are ruled out as well. I've racked my brain trying to find a way around this.