How to store a large number of images on the server?

I'm working on a project where every day each user will upload about 10 images; there are roughly 50-70 users. The uploaded images need to be kept for at least one month. What is the best way to organize this on the server? Store them on my own hosting, or offload them elsewhere and store only links? Or are there other options?
June 10th 19 at 16:25
2 answers
June 10th 19 at 16:27
Solution
Since we are talking about Node.js, I can share my implementation.

In general, files are stored by their hash. For example, if you have a file with the hash
ddcab4080b10682d9384b04f41ddbf3e907165a5

then you save it at a path like
upload/ddc/ab4/080/b10682d9384b04f41ddbf3e907165a5/file.jpg

and record in the database what the file is and what it is for.
Resized or other derived versions of the file can be stored like this:
upload/ddc/ab4/080/b10682d9384b04f41ddbf3e907165a5/250x140.jpg

This method avoids duplicates and lets you store an effectively unlimited number of files without dumping them all into one folder (with a large number of files in a single folder, file-system operations slow down).

This is the foundation; next come multi-server support and various cache servers for the files.
Can you recommend a hosting provider? - christelle_Feil commented on June 10th 19 at 16:30
, I use DigitalOcean; it's neither the worst nor the best, and extra disk space can be purchased. - Finn_Emmerich commented on June 10th 19 at 16:33
, prntscr.com/hlm3wi for example, on the smallest plan can I store 1TB of images? And why would I need an SSD then? - christelle_Feil commented on June 10th 19 at 16:36
,

1TB of images

The 1TB refers to transfer, meaning you can serve that much traffic; the disk on the minimal plan is 20GB.

and why would I need an SSD then?

The SSD is needed for fast reading and writing of files; the problem with the number of files in one folder lies at the level of the file system's algorithms, not the hardware. - Finn_Emmerich commented on June 10th 19 at 16:39
It actually says 20GB there. The 1TB is the monthly traffic limit. - glennie_Mer commented on June 10th 19 at 16:42
The point is that the FS has a tree structure. When there are many files in one folder, the leaves on that branch become imbalanced, and you have to load all the information about the branch (folder) rather than just the part you need, unlike when everything is split into subfolders. - Nettie.McClu commented on June 10th 19 at 16:45
Should I reduce the image size before creating the hash? Does it affect the resulting hash, or the performance of computing it? - christelle_Feil commented on June 10th 19 at 16:48
Thanks, it's clear now. And if you look at how VK works, for example, you can see the same pattern: https://pp.userapi.com/c840326/v840326292/300c7/Zx... - christelle_Feil commented on June 10th 19 at 16:51
Yes, the size affects the number of computations when calculating the hash; the hash should be computed as a stream, simultaneously with the upload of the file itself. - Finn_Emmerich commented on June 10th 19 at 16:54
Yes, that is exactly the feature we were discussing. - Finn_Emmerich commented on June 10th 19 at 16:57
Before saving I will crop the image, so should I compute the hash before or after cropping? - christelle_Feil commented on June 10th 19 at 17:00
You can crop it right on the front end and send the already cropped image to the server. - Finn_Emmerich commented on June 10th 19 at 17:03
And what is the best way to clean up images that have been stored on the server for more than a month? - christelle_Feil commented on June 10th 19 at 17:06
I keep the original and generate two derived versions: a large one for full HD and a small one for previews.

In the database I save the creation date of the image, then query for expired records on a schedule or manually, and delete the files/folders. - Nettie.McClu commented on June 10th 19 at 17:09
It depends on the architecture. My main application has a service mode, and there is in fact a similar process there: delayed deletion of files. For me it was convenient to add files to a deletion list at upload time, and to be able to change the deletion date (for example, when a file is requested).

But instead of a separate list you can simply iterate over the database records that correspond to the files. Based on the creation date, you can build the list of files to delete each time.

A separate list is faster to process, which is convenient for me since the service runs in the background quite often, so large amounts of data never pile up there, and scanning the entire database each time would be expensive. Besides, this mechanism serves more than just file deletion, which spares me from writing narrowly specialized functions.

But if you run it once a day and have only one task of this kind, I think 2-3 minutes is enough to build the list by iterating over the database, even with a very large number of files; the slowest part will be the deletion itself. - Finn_Emmerich commented on June 10th 19 at 17:12
, you can also do it when the URL is requested: a link is requested for the file with such-and-such ID in the fullhd format, and you create the link and generate the required file at the same time. That way no extra files pile up, and there is no need to think about which sizes are needed for which files. - Finn_Emmerich commented on June 10th 19 at 17:15
I agree with everything, and the hash-based solution is correct, but this part:

with a large number of files in one folder, file-system operations slow down

raises strong doubts.
Why would the file system slow down? Perhaps on some old FS.

Besides, as I understand it, the question's author has about 10*70*30 = 21 thousand images.
That is very few.
"A lot" starts at a million, at the very least. - earl.Weissnat commented on June 10th 19 at 17:18
It was already explained above why.
As for the quantity, yes, it is small. But ever since I wrote a proper upload module, I stopped worrying about how many files there are. - Finn_Emmerich commented on June 10th 19 at 17:21
Well, about the tree: not all FSs use trees, do they? ntfs, btrfs, and probably all the popular ones do.
In ext3, I believe, simple indexing is used, or am I wrong?

As for the solution being a competent one, I am not arguing with that at all: it provides uniform distribution and convenient lookup, and as a result scales very nicely, since you can spread the paths across different drives. - earl.Weissnat commented on June 10th 19 at 17:24
That is true, but you never know in advance what will be needed in the future. - Finn_Emmerich commented on June 10th 19 at 17:27
June 10th 19 at 16:29
I will add one point from my own experience.
I was dealing with exactly this just recently.
A website on Bitrix: the owner uploaded photos into a single folder, and it had grown to 500 thousand files. Because of this the whole site was slow, even though it runs on a dedicated and very powerful server. The task was to make the site work faster.
The solution was the following:
the folder that stores all the images has to be split into subfolders, with the pictures distributed among them.
Done; the site came back to life and started running faster. The point is that file operations were loading the server heavily, because it had to pick out only the needed images from that whole heap.
From the FS point of view it is typically the same.
The speed of file access does not depend on the number of files in the folder.
You can put a million files into a thousand folders, or cram everything into one, with identical success.
That slowdown was mainly observed on old FSs.
Besides, ext3 has indexing, and ntfs uses trees throughout.

The slowdowns come from the implementation: someone does a listing of the files in code, or something like that. - christelle_Feil commented on June 10th 19 at 16:32
I cannot say for sure which FS it was, but it helped us!
So I decided to write about it! - Finn_Emmerich commented on June 10th 19 at 16:35
