We need to set up Ceph on a cluster. There are five servers, each with four HDDs attached through a hardware RAID controller. Obviously, fault tolerance is handled in software by Ceph itself; that is what it is designed for. The question is really about performance: which RAID configuration performs better, both in normal operation and during rebalancing?
1. Four single-disk RAID 0 arrays. That gives 4 OSDs per node, 20 OSDs in total.
2. One RAID 1+0 array. Accordingly, 1 OSD per node, 5 OSDs in total.
3. One RAID 0 array across all four disks. Again 1 OSD per node, 5 OSDs in total.
4. Two RAID 1 arrays. ...
And so on...
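For context, option 1 (one OSD per physical disk) is the layout the Ceph documentation generally recommends. A minimal sketch of how it would be provisioned, assuming the cluster is already bootstrapped and the controller exposes each single-disk RAID 0 volume as its own block device (the device names here are illustrative):

```shell
# Assumes: Ceph cluster already bootstrapped on the node, and the RAID
# controller presents each single-disk RAID 0 as a separate block device.
# /dev/sdb../dev/sde are placeholder names for the four HDDs.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    ceph-volume lvm create --data "$dev"   # one OSD per disk
done

# Verify the layout: expect 4 OSDs on this node, 20 across the cluster.
ceph osd tree
```

With one OSD per disk, a single disk failure rebalances only that disk's worth of data, whereas losing a whole-node RAID 0 (option 3) forces Ceph to re-replicate the entire node's contents.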