How to choose a DB for large data volumes?

Hello.
There is a large amount of data: ~40-45 GB per day, 25-30K rows per second, and writing is continuous.
The total volume may reach 50-70 TB.
Data format:
int timestamp
int value1
int value2
int value3

Queries mostly have the form `timestamp > date_1 && timestamp < date_2 && data == value*` (a small sketch of this record layout and query shape is given below).
What DB would you recommend, and what response time can I expect?
A very nice bonus would be if the database can compress the data.

I'll add that a second option would also work: a compression method that allows so-called free seeking within the compressed file.
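For concreteness, a minimal Python sketch of such a record and query, assuming each field fits in a little-endian 32-bit int and that `data == value*` means an exact match against any of the three value columns (both are my assumptions, not stated above):

```python
import struct

# Hypothetical fixed-size record: four little-endian 32-bit ints = 16 bytes.
RECORD = struct.Struct("<iiii")  # timestamp, value1, value2, value3

def scan(path, date_1, date_2, value):
    """Linear scan: time-range filter plus exact match on any value column."""
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):   # empty read -> end of file
            ts, v1, v2, v3 = RECORD.unpack(chunk)
            if date_1 < ts < date_2 and value in (v1, v2, v3):
                yield ts, v1, v2, v3
```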
July 9th 19 at 13:15
4 answers
July 9th 19 at 13:17
Solution
Write to one file per hour (for example): new hour, new file. Then compress the files.
For the timestamp you can use 2 bytes (as an offset within the hour). See whether the values can be shrunk as well.
Even if you write 16 bytes per record, a modern HDD (150 MB/s) can store ~9 million records per second (against your 30K).
All that's left is to write a tool that fetches the data according to your queries.
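A minimal sketch of that layout; the file-naming scheme and the packing are my assumptions, and a real writer would keep the current hour's file open instead of reopening it per record:

```python
import struct, time

# 2-byte second offset within the hour + three 32-bit values = 14 bytes/record.
REC = struct.Struct("<Hiii")

def append_record(ts, v1, v2, v3):
    hour_start = ts - ts % 3600                        # start of the hour, unix time
    name = time.strftime("%Y-%m-%d_%H.bin", time.gmtime(hour_start))
    with open(name, "ab") as f:                        # one file per hour
        f.write(REC.pack(ts - hour_start, v1, v2, v3))
```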

The files can be kept on disk, in a file-oriented DB, or in GridFS, which will distribute them across the cluster.
At the moment the data is stored in daily files; the value can't be made smaller than an int, because it holds an IP address. - moises.Wehner commented on July 9th 19 at 13:20
That tool still has to be written, and selecting by `value*` is demanding. You could build some kind of search structure in parallel (a binary tree?) on spare drives. - Ron.Fu commented on July 9th 19 at 13:23
: No trees are needed here. It's simpler to slice the data in the same style by IP: 1 file = 1 IP × period (a month). - Bianka0 commented on July 9th 19 at 13:26
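As an illustration of that slicing, the writer would just route each record to a per-IP, per-month file; the directory layout below is purely hypothetical:

```python
import time

def record_path(ip, ts, base="data"):
    # e.g. data/10.0.0.1/2019-07.bin -- one file per IP per month
    return f"{base}/{ip}/{time.strftime('%Y-%m', time.gmtime(ts))}.bin"
```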
July 9th 19 at 13:19
Look at sophia.systems; Tarantool uses it as one of its engines.
July 9th 19 at 13:21
InfluxDB is specialized for exactly this kind of task.
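For illustration, a hedged sketch of pushing one sample into InfluxDB 1.x over its HTTP line protocol; the database name, measurement name, and field names are assumptions:

```python
import requests  # third-party HTTP client

def write_point(ts, v1, v2, v3, host="http://localhost:8086"):
    # InfluxDB 1.x line protocol: "<measurement> <fields> <timestamp>",
    # with integer fields marked by a trailing "i" and second precision.
    line = f"samples value1={v1}i,value2={v2}i,value3={v3}i {ts}"
    resp = requests.post(f"{host}/write",
                         params={"db": "metrics", "precision": "s"},
                         data=line)
    resp.raise_for_status()
```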

Yandex Elliptics (at the moment it builds easily only under Ubuntu 14.04 and the corresponding generation of Debian) is not a database but a distributed DHT storage. It does, however, know how to scale, replicate, and recover; all you have to do is attach new servers to it (or new drives to a server).
July 9th 19 at 13:23
You need to deploy a Hadoop cluster, design the architecture around message delivery to the database through a broker, and keep all your terabytes either gzipped in logs or in a Hive cluster. In short, this is not really a question for Toster; you need a freelance platform to find DevOps people with experience who have dealt with these issues.
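If the "gzipped logs" route is taken, the writing side can be as simple as this sketch; the file name and TSV layout are assumptions:

```python
import gzip

def dump_rows(rows, name="2019-07-09.tsv.gz"):
    # Write tab-separated rows through gzip compression (one file per day, say).
    with gzip.open(name, "wt") as f:
        for ts, v1, v2, v3 in rows:
            f.write(f"{ts}\t{v1}\t{v2}\t{v3}\n")
```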
