No, you don’t need to do that.
It might be ‘state after G3’
pathological
I’m afraid this one is already taken, friend.
Motorcycle, actually.
It definitely isn’t necessary, but if you search for ‘3.5" SAS lot’ on eBay, you might find all the drives you need to reach 50TB for the price of a couple of new SATA drives.
Yeah, you don’t want a surveillance drive. They’re optimized for continuous sequential writes, not random I/O.
It’s probably worth familiarizing yourself with the difference between CMR and SMR drives.
If you expect this to keep growing, it might make sense to switch to SAS now - then you can find some really cheap enterprise-class drives on eBay that will perform a bit better in this type of configuration. You’d just need a cheap HBA (like a 9211-8i) and a couple of breakout cables. You can use SATA drives with a SAS HBA, but not the other way around.
SSD RAID is actually very common outside of home use! And yeah, clustered filesystems help overcome many of these limitations, but tend to be extremely demanding (expensive hardware for comparable performance). Network almost immediately becomes the bottleneck. Even forgetting about latency and other network efficiency concerns, 100 Gbps isn’t that fast when you have individual devices approaching 16 Gbps.
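To put numbers on that last point, here’s a back-of-the-envelope sketch (the helper name is made up for illustration; it ignores latency and protocol overhead entirely):

```python
def devices_to_saturate(link_gbps: float, device_gbps: float) -> float:
    """How many full-throttle devices it takes to fill a network link,
    ignoring latency and protocol overhead."""
    return link_gbps / device_gbps

# A 100 Gbps link vs. individual devices approaching 16 Gbps each:
print(devices_to_saturate(100, 16))  # 6.25 - barely six devices fill the pipe
```

In practice overhead makes it worse, so the network bottleneck shows up even sooner than this suggests.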
“Mid-range systems” is not referring to personal computers. “8-inch drives” is another clue.
I think you might be off by at least a few years; a 40MB drive in 1982 would’ve been incredibly uncommon.
In a technical sense, it doesn’t have to have four drives
Please explain how you think you can distribute two sets of parity data across a three-drive array.
You can’t have a three-drive RAID 6 array.
Please just stop.
You are not ready to be lecturing on this topic.
Bits of what you wrote are reasonable, but your premise is incorrect.
Consider a scenario with a degraded RAID 1 array made up of two 1.6 TB disks capable of transferring data at a sustained rate of 6 Gbps: you should be able to recover from a single disk failure in just over half an hour.
Repeat the same scenario with 32 TB members and now we’re looking at a twelve-hour recovery - twelve hours of intensive activity that could push either of your drives over the edge. Increasing data density actually increases the risk of data loss.
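Those figures fall out of simple arithmetic. A rough sketch (the helper name is hypothetical; it assumes decimal TB and the best case, with no competing I/O during the rebuild):

```python
def rebuild_hours(member_tb: float, rate_gbps: float) -> float:
    """Best-case mirror rebuild time: stream every bit of one member
    at the sustained transfer rate, with nothing else touching the disks."""
    bits = member_tb * 1e12 * 8           # decimal TB -> bits
    return bits / (rate_gbps * 1e9) / 3600

print(round(rebuild_hours(1.6, 6), 2))   # ~0.59 h: just over half an hour
print(round(rebuild_hours(32, 6), 2))    # ~11.85 h: roughly twelve hours
```

Real rebuilds are slower still, since the array usually keeps serving requests while it resilvers.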
Finally, we say you shouldn’t think of RAID as a backup because the entire array could fail, not for the excruciatingly literal reasons you are attempting to convey. If you lose half of a two disk mirror set, you haven’t lost any data.
You should probably attempt to understand the topic / post before diving in.
Thank you for sharing your opinion and your brilliant advice on how to be constructive. I especially enjoyed the part where you said I shouted my comment with anger—that was really good!
This community needs moderation. :(
Let’s flip it, then: what about this post is useful?
Lots of different hosts, multiple load balancers / ingress controllers.
This is fucking useless. Please stop.
So will I.