What RAID configuration is best for Ceph?

Good day!
We need to set up a Ceph cluster. There are five servers, each with four HDDs. The disks are attached through a hardware RAID controller. I understand that fault tolerance is handled in software by Ceph itself; that is the whole point of it. The question is more about performance: which RAID configuration is better from a performance standpoint, both in normal operation and during rebalancing?
Options:
1. Four single-disk RAID 0 volumes: 4 OSDs per node, 20 OSDs in total.
2. One RAID 1+0: accordingly, 1 OSD per node, 5 OSDs in total.
3. One RAID 0 across all four disks: accordingly, 1 OSD per node, 5 OSDs in total.
4. Two RAID 1 volumes. ...
Well, and so on...
March 19th 20 at 08:54
1 answer
March 19th 20 at 08:56
Solution
No, Ceph does not need RAID. When a disk dies, you just replace it and that's all. RAID is unnecessary here.
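As a rough illustration of why per-disk OSDs keep recovery cheap, here is a back-of-the-envelope sketch. The 4 TB disk size and 60% fill level are made-up assumptions, not from the question; it only compares how much data has to be re-replicated after a single disk failure under options 1 and 3:

```python
# Back-of-the-envelope estimate of recovery traffic after a single-disk
# failure. Disk size and fill level are hypothetical assumptions.
DISK_TB = 4.0        # assumed capacity of one HDD
FILL = 0.6           # assumed average OSD fill level
DISKS_PER_NODE = 4

# Option 1: one OSD per disk (no real RAID). Losing a disk loses one OSD,
# so roughly one disk's worth of objects must be re-replicated.
recovery_per_disk_osd = DISK_TB * FILL

# Option 3: one RAID 0 OSD spanning all four disks. A single disk failure
# destroys the whole RAID 0 volume, i.e. the node's only OSD, so roughly
# four disks' worth of objects must be re-replicated.
recovery_raid0_osd = DISK_TB * DISKS_PER_NODE * FILL

print(f"Option 1 (per-disk OSDs):  ~{recovery_per_disk_osd:.1f} TB to recover")
print(f"Option 3 (single RAID 0):  ~{recovery_raid0_osd:.1f} TB to recover")
```

In other words, striping the whole node into one RAID 0 turns every single-disk failure into a whole-node recovery.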
Suppose I connect the disks bypassing the RAID controller (essentially the same thing as the first option). What happens if a whole node dies? A rebalance of four OSDs at once? Won't the cluster choke? With a replica count (size) of 3, couldn't all three copies of some PG end up on OSDs of that failed node? - moses86 commented on March 19th 20 at 08:59
@moses86, no, the cluster places the replicas on different nodes. A rebalance will happen either way: the PG copies lost with the node have to be rebuilt on the surviving OSDs. - cindy.Schinn commented on March 19th 20 at 09:02
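To make the placement point concrete, here is a simplified sketch; it is not real CRUSH, just an illustration of the default "one replica per host" failure-domain behaviour, with a hypothetical PG count. With size 3 no placement group keeps all of its copies on a single node, so losing one node still leaves at least two live copies of every PG:

```python
import random

NODES = 5                 # servers in the cluster
OSDS_PER_NODE = 4         # one OSD per HDD (option 1 / no RAID)
SIZE = 3                  # replica count
PGS = 128                 # hypothetical number of placement groups

def place_pg(rng):
    """Pick SIZE OSDs on SIZE distinct nodes, mimicking the default
    replicated rule with 'host' as the failure domain (simplified)."""
    nodes = rng.sample(range(NODES), SIZE)          # distinct hosts
    return [(n, rng.randrange(OSDS_PER_NODE)) for n in nodes]

rng = random.Random(42)
pg_map = [place_pg(rng) for _ in range(PGS)]

# Simulate losing node 0: count how many copies each PG still has.
failed_node = 0
survivors = [sum(1 for node, _ in pg if node != failed_node) for pg in pg_map]
print(f"Minimum surviving copies per PG after losing node {failed_node}:",
      min(survivors))   # always >= 2, because replicas sit on distinct nodes
```

The rebalance after a node failure therefore only has to rebuild the third copy of the affected PGs on the remaining nodes, which is the recovery the comment above refers to.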
