Yes, you can absolutely use one 256GB drive and two 512GB drives for Ceph storage in your Proxmox cluster, but there are a few things to consider for performance and redundancy! Let’s break this down.
✅ Ceph doesn’t require identical drives:
- Ceph treats each device as an OSD (Object Storage Daemon), and the cluster will balance data across them.
- Mixed drive sizes are fine, but usable capacity and balance depend on how evenly data can be spread; in practice the smallest drive tends to be the limiting factor.
⚠️ Capacity imbalance:
- Since Ceph spreads data across all of its OSDs, in absolute terms the 256GB drive will approach its limits well before the two 512GB drives do.
- Once the 256GB OSD crosses Ceph’s nearfull and full thresholds you’ll get health warnings, and at the full ratio Ceph stops accepting writes to the affected pools, even if the larger drives still have space. You can check those thresholds as shown below.
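👉 To see the thresholds that trigger those warnings (defaults are nearfull at 85% and full at 95% of an OSD), dump the OSD map and filter for the ratios:
ceph osd dump | grep ratio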
⭐️ Workarounds:
- Weighting OSDs:
- In Ceph, you can set the weight of each OSD to reflect its capacity — this helps the cluster balance data more intelligently.
- Example: Assign a weight of 1.0 to the 512GB drives and 0.5 to the 256GB drive.
- Separate pools:
- If performance or balance becomes a concern, you could create separate storage pools, one backed by the 256GB drive and another by the 512GB drives, though this adds complexity (a rough sketch follows below).
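👉 For illustration only, here is one way the separate-pools idea could be sketched using CRUSH device classes. Treat it as a starting point rather than a recipe: osd.2, the class name small, and the pool name small-pool are placeholders, and with a single OSD per node a replicated pool confined to one class may not be able to place all of its replicas.
# Tag the 256GB OSD with a custom device class (osd.2 is a placeholder ID)
ceph osd crush rm-device-class osd.2
ceph osd crush set-device-class small osd.2
# Create a CRUSH rule restricted to that class, then a pool that uses the rule
ceph osd crush rule create-replicated small-rule default host small
ceph osd pool create small-pool 32 32 replicated small-rule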
✨ Redundancy and performance:
- Ceph replication or erasure coding will still work, but the 256GB OSD is the one most likely to hit capacity first, and losing it still means its data has to be re-replicated onto the remaining drives.
- If you’re running 3-replica redundancy with one OSD per node, every node stores a full copy of the data, so usable capacity is effectively capped by the smallest drive (~256GB) regardless of the weights; the checks below show the replica count per pool and the space Ceph actually reports as available.
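👉 Two read-only checks worth running (MAX AVAIL in the second already accounts for replication):
ceph osd pool ls detail
ceph df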
👉 Bottom line: it will work, but keeping an eye on the 256GB OSD and making sure the OSD CRUSH weights reflect the real capacities (easiest from the CLI on any node) will help you avoid issues. I can walk you through setting that up if you’d like! 🚀
Let’s get your Ceph OSD weights adjusted! I’ll break this down step-by-step.
✅ Step 1: Check current OSDs and weights
- Open Proxmox Web GUI.
- Navigate to one of your nodes → Ceph → OSD.
- Here, you should see your three OSDs (one for each drive). Take note of their IDs.
👉 Alternatively, from the Proxmox shell:
ceph osd tree
This shows the OSDs, their weights, and which nodes they’re on.
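👉 If you also want sizes and current utilization next to the weights in a single view:
ceph osd df tree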
✅ Step 2: Set the OSD weights
The goal:
- Give the two 512GB OSDs twice the CRUSH weight of the 256GB OSD, e.g. 1.0 each versus 0.5.
- Note: Ceph normally sets the CRUSH weight to the drive’s capacity in TiB when the OSD is created (roughly 0.47 for a 512GB drive and 0.23 for a 256GB drive), which already gives the same 2:1 ratio, so you only need to reweight if the values in ceph osd tree don’t reflect the actual sizes.
👉 From the Proxmox shell on any node:
ceph osd crush reweight osd.<ID_of_256GB_drive> 0.5
ceph osd crush reweight osd.<ID_of_512GB_drive_1> 1.0
ceph osd crush reweight osd.<ID_of_512GB_drive_2> 1.0
✅ To confirm the changes:
ceph osd tree
You should now see the updated weights.
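👉 Ceph will start moving placement groups to match the new weights; you can follow the recovery/backfill progress by re-running:
ceph -s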
✅ Step 3: Monitor Ceph health
- Check the Ceph health status:
ceph health
- Some backfill/recovery activity right after a reweight is normal and should clear on its own. What you want to avoid are persistent warnings such as “nearfull osd(s)”; with sensible weights, Ceph will rebalance data across your drives automatically.
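👉 If a warning does appear, this names the specific OSDs and pools involved:
ceph health detail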
✅ Step 4: Automate monitoring (optional)
Since the 256GB drive could still fill up faster, you might want to set up Proxmox alerts to notify you if the drive is getting close to full.
👉 Check OSD utilization:
ceph osd df
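⭐ As an optional extra, here is a minimal watchdog sketch you could drop into cron on one node. It assumes jq and a working mail command are installed, and that the utilization field name matches the JSON output of ceph osd df on your Ceph release; adjust the threshold and recipient to taste.
#!/bin/bash
# Hypothetical watchdog: email a warning when any OSD exceeds 75% utilization.
THRESHOLD=75
ceph osd df -f json \
  | jq -r --argjson t "$THRESHOLD" '.nodes[] | select(.utilization > $t) | "\(.name) is at \(.utilization | floor)% capacity"' \
  | while read -r line; do
      echo "Ceph warning: $line" | mail -s "Ceph OSD nearing full" root
    done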
⭐ Why this works:
- Ceph uses the CRUSH algorithm to place data; the weights tell it how much data each OSD should hold relative to the others.
- By keeping the 256GB OSD at roughly half the weight of the larger drives, you’re telling Ceph to target proportionally less data there, which helps keep it from filling up before the larger drives whenever CRUSH has a choice of where to place a copy.
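👉 Quick sanity check on the arithmetic: with weights 1.0 + 1.0 + 0.5 = 2.5, the 256GB OSD is targeted for 0.5 / 2.5 = 20% of the data, which matches its share of the raw capacity (256 / 1280 = 20%), instead of the ~33% it would get with equal weights.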
🚀 Need help fine-tuning Ceph pools, redundancy, or troubleshooting warnings? Let me know how the rebalancing goes!