Ceph converged cluster on Clear?

Hi All,

I’m looking at cranking up a home lab (three hosts/nodes) using Clear, Ceph and KVM/QEMU. I have some questions that I’m hoping someone can provide insight into.

  1. When installing Clear Linux onto the bare-metal hosts' SSDs, can Clear's own partitions (including root) be set up on Ceph directly?
    (I have a fourth host that I can use to set up Ceph on the three primary hosts for the cluster in advance.)

1b. If not, does the Clear OS need to sit on its own RAID 1 SSDs (with xfs, ext4, etc.) so that it's isolated from the disks of the Ceph pool running within the same hosts?

  2. Is there a recommended orchestration tool (or suggestions for one) for VM management that supports live migration (ideally automated HA failover; full lockstep fault tolerance isn't needed) of VMs running on the Clear Linux hosts under KVM/QEMU?
    (Assume the storage layer is equally available to all the Clear hosts via the Ceph storage pool they're running.)
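For what it's worth, even without a higher-level orchestrator, plain libvirt can live-migrate a guest between two KVM hosts that share the same Ceph RBD storage. A minimal sketch (the VM name `myvm` and destination host `node2` are placeholders, not from this thread):

```shell
# Live-migrate a running guest to node2, keep its definition persistent
# there, and remove the definition from the source host afterwards.
# Both hosts must see the same shared (Ceph RBD) storage pool.
virsh migrate --live --persistent --undefinesource \
    myvm qemu+ssh://node2/system
```

Automated HA failover on top of that is what tools like Pacemaker or a platform such as Proxmox VE/oVirt add; bare libvirt only gives you the manual migration primitive.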

I’m coming at this as a learning project, from a VMware and oVirt background. I’m not expecting anything as polished as those solutions.

Thanks

Conceptually, I think this video answers question one around the 47-minute mark:

Still a lot of detail I need to grasp, but I think Clear has the Ceph kernel drivers bundled by default, so if I cranked up the Ceph pool with RBD from a separate (possibly temporary) Ceph admin host, then I should be able to deploy Clear straight onto the Ceph pool…I think :slight_smile:
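The admin-host side of that would look something like this (a sketch only; pool name, image name, sizes, and PG count are illustrative, not tested on Clear):

```shell
# On the (possibly temporary) Ceph admin host:
ceph osd pool create rbd 128          # create a pool with 128 placement groups
rbd pool init rbd                     # initialize it for RBD use
rbd create rbd/clear-root --size 40G  # image intended to hold the Clear root fs

# On a Clear host, using the in-kernel rbd driver:
modprobe rbd                          # load the kernel RBD client
rbd map rbd/clear-root                # exposes the image as e.g. /dev/rbd0
```

Whether the Clear installer can then target that mapped device directly is exactly the open question here.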

This is something we’d like to figure out whether we can support, but there’s been very little time, since it takes persistence (and time) to test all the wheels that have to turn to make it work.

If you do figure it all out, please write down your actual experiences so that we can see whether there are improvements we can pull into Clear Linux. My assumption is that you’ll need an initrd, and that /boot has to remain on a local drive.
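If that assumption holds, the disk layout would look roughly like this (a hypothetical sketch, not a tested configuration; device names are placeholders):

```
# Local boot chain, root on Ceph:
/dev/sda1   /boot/efi   vfat   (local ESP: bootloader + kernels, no Ceph needed)
/dev/sda2   /boot       ext4   (local, so early boot works before networking)
/dev/rbd0   /           ext4   (mapped by the initrd via the rbd module
                                before switching root)
```

The tricky part is the initrd: it has to bring up networking, load the rbd module, and map the image before the root pivot, which is the kind of wheel-turning the reply above alludes to.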