Hacker News

Can anybody share their experiences with running applications that use persistent volumes on bare metal kubernetes?

I mean without cloud services like Google cloud persistent disks.




Disclosure: I work on IBM Cloud.

IBM has been running Kubernetes on bare metal internally for over a year and just recently announced it as a product: https://www.ibm.com/blogs/cloud-computing/2018/03/managed-ku....

I will admit that it isn't as straightforward to get working as you might imagine at first. But once you've got the automation humming, it's been surprisingly easy to maintain. I would highly recommend that route! (But of course I'm biased, since I've worked on it.)

One interesting use case is that it's straightforward to access the underlying hardware on bare metal machines with `securityContext: privileged` (you can apply much more fine-grained security permissions; I'm just giving an example). For instance, you can access GPUs or TPMs (trusted platform modules) this way.
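Roughly what that looks like in a pod spec (a minimal sketch; the names, image, and device paths are placeholders, and a real deployment would prefer narrower capabilities over full privileged mode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hw-access-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: ubuntu:18.04         # placeholder image
    securityContext:
      privileged: true          # grants access to host devices, e.g. /dev/tpm0 or GPU device nodes
    volumeMounts:
    - name: host-dev
      mountPath: /dev
  volumes:
  - name: host-dev
    hostPath:
      path: /dev                # expose host device nodes to the container
```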


+1 for "regular" Ceph. Don't bother with that Rook stuff. Just set up a regular Ceph cluster and go. Kubernetes handles its stuff, and a (much more reliable and stable) Ceph cluster handles blocks and files.


Can you explain what's wrong with Rook? I thought it was supposed to make life easier when running Ceph.


NIO, the self-driving car company, is doing this. They did a pretty detailed interview on their use case, which includes a 120 PB data lake, plus Cassandra, Kafka, TensorFlow, and HDFS. You can read it here: https://portworx.com/architects-corner-kubernetes-satya-koma... . (Disclosure: I work for Portworx, the solution they use for container storage, but hopefully the content speaks for itself.)


Red Hat probably has the best production-quality deployment for self-hosted Kubernetes. They support running GlusterFS: https://github.com/openshift/openshift-ansible/blob/master/i...

Personally I wouldn't do it unless you have a Red Hat contract (or already have a team that manages GlusterFS), but it's worth looking at.


1200-core k8s cluster + 1.5 PB Ceph (nodes shared to some degree). No issues with persistent disks etc.; the only "annoying" thing is figuring out RBAC initially.

You just use a StorageClass with a Ceph provisioner, then it's no work whatsoever; Ceph does the rest.
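For example, a StorageClass backed by Ceph RBD might look like this (a sketch using the in-tree `kubernetes.io/rbd` provisioner; the monitor address, pool, and secret names are placeholders for your own cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd   # in-tree RBD provisioner
parameters:
  monitors: 10.0.0.1:6789        # placeholder Ceph monitor address
  pool: kube                     # placeholder RBD pool
  adminId: admin
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretName: ceph-user-secret
```

With that in place, any PVC that names this class gets an RBD image provisioned automatically.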


I've recently been experimenting with getting Lustre usable with Kubernetes, and needed some way to natively integrate it into the cluster I had.

https://github.com/torchbox/k8s-hostpath-provisioner proved to be useful. It lets you use any mounted path on a node (or all nodes) via the hostPath method to satisfy a persistent volume claim, returning a persistent volume backed by the mounted file system.
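Consuming it then looks like any other dynamically provisioned claim (a sketch; the class name here is hypothetical and depends on how you deployed the provisioner):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-data
spec:
  storageClassName: hostpath     # whatever class name the provisioner was registered under
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```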

A similar setup could work on bare metal. Are you using something like OpenStack Ironic?


For Kubernetes 1.10+ I'd recommend the local volume provisioner instead: https://github.com/kubernetes-incubator/external-storage/tre...
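With local volumes, you pre-create a PersistentVolume per disk, pinned to its node via node affinity (a sketch; the path, node name, and class name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # placeholder mount point on the node
  nodeAffinity:                  # pins the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1               # placeholder node name
```

Unlike hostPath, the scheduler understands the node constraint, so pods using the claim land on the right machine.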


OpenEBS looks promising, but I haven't given it a try yet. Ref: https://www.openebs.io/


Commercial: Portworx, StorageOS, Quobyte

Open source: OpenEBS, Rook (based on Ceph), Rancher's Longhorn

The commercial options are highly recommended if you want safety and support, as storage is hard, although people seem to be running all of these options well enough. Portworx is probably the most highly developed, with Quobyte a good option if you want shared storage with non-Kubernetes servers.


We experimented with both CephFS via Rook and GlusterFS via Heketi, and ran into enough operational issues and speed bumps that we're just using hostPath volumes for now.

IOW, they're not production ready yet.




