DeepOps - project requirements


Project requirements

Entry template:
 

  <li>
    <ul>
      <li>Priority: </li>
      <li>Stakeholder: </li>
      <li>Requirement: </li>
      <li>More details, discussion, etc. (don't indent!)</li>
    </ul>
  </li>



    • Priority: high
    • Stakeholder: User
    • Requirement: I want to launch a Kubernetes cluster with a specified number of master and worker nodes.
    • This can be done using Magnum cluster templates. Administrators of the platform create templates specifying, among other things, the COE and the flavour used by instances. When creating a cluster from a template, users can specify the number of master and worker nodes, override the template flavours and add their keypair (among other options). See http://wiki.bostonlabs.co.uk/w/index.php?title=DeepOps_on_OpenStack_POC#DeepOps_on_vScaler_POC. For the Ubuntu-based Magnum driver the only setup tested so far is a single master node and a single worker node (it should work fine with more than one worker node). A client-side sketch follows below.
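A minimal sketch of this workflow using python-magnumclient; the auth endpoint, credentials, template name, flavour and keypair are placeholder values for illustration, not names defined by this project:

 # Placeholder credentials/endpoint; substitute real ones from the platform.
 from keystoneauth1 import loading, session
 from magnumclient.client import Client

 loader = loading.get_plugin_loader('password')
 auth = loader.load_from_options(
     auth_url='https://keystone.example.com:5000/v3',  # placeholder
     username='demo', password='secret',
     project_name='demo',
     user_domain_id='default', project_domain_id='default')
 magnum = Client(version='1', session=session.Session(auth=auth))

 # The user picks an admin-provided template and overrides node counts,
 # flavour and keypair when creating the cluster.
 cluster = magnum.clusters.create(
     name='deepops-k8s',
     cluster_template_id='k8s-ubuntu-template',  # placeholder template
     master_count=1,   # only one master is tested with the Ubuntu driver
     node_count=2,     # more than one worker should work fine
     keypair='my-keypair',
     flavor_id='gpu.large')  # per-cluster flavour override
 print(cluster.uuid)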
    • Priority: low
    • Stakeholder: User
    • Requirement: I want some extra components (packages, device plugins, etc.) to be installed automatically right after the cluster is up and running.
    • Currently, the process of adding components not installed by kubeadm (for example NVIDIA's device plugin) is manual. This can be automated by running extra playbooks/scripts after the cluster enters the CREATE_COMPLETE status. This functionality could make use of a separate instance hosting all the extra playbooks/scripts and configured as a client of the newly deployed Kubernetes cluster. A polling sketch follows below.
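A sketch of the proposed automation, reusing the magnum client and cluster from the sketch above; the playbook name is hypothetical:

 import subprocess
 import time

 def wait_for_cluster(magnum, uuid, timeout=3600, interval=30):
     # Poll the cluster status until Heat finishes creating it.
     deadline = time.time() + timeout
     while time.time() < deadline:
         status = magnum.clusters.get(uuid).status
         if status == 'CREATE_COMPLETE':
             return
         if status.endswith('FAILED'):
             raise RuntimeError('cluster creation failed: ' + status)
         time.sleep(interval)
     raise TimeoutError('cluster not ready within %s seconds' % timeout)

 wait_for_cluster(magnum, cluster.uuid)
 # Run a post-install playbook from the helper instance, e.g. one that
 # deploys NVIDIA's device plugin on the new cluster (name hypothetical).
 subprocess.run(['ansible-playbook', 'extras/nvidia-device-plugin.yml'],
                check=True)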
    • Priority: medium
    • Stakeholder: User
    • Requirement: I want to be able to see what persistent storage backends are available on the platform and select one of them.
    • Initially, Cinder can be supported; this includes installing Ceph, GlusterFS or something similar on top of Cinder volumes. For backends like Quobyte there would be a question of how to isolate the shares of various tenants whilst maintaining self-service (some sort of API for the backend that creates volumes when requested by the user?). A discovery sketch follows below.
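Assuming backends are surfaced as Cinder volume types, one way to let users discover them is via openstacksdk; the cloud name 'deepops' (a clouds.yaml entry) is a placeholder:

 import openstack

 conn = openstack.connect(cloud='deepops')  # placeholder cloud entry
 # Each volume type corresponds to a backend the user can select.
 for vtype in conn.block_storage.types():
     print(vtype.name)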
    • Priority: high
    • Stakeholder: User
    • Requirement: I want to have a clear, tested path of upgrade for my Kubernetes cluster using Magnum.
    • Tests + user docs. A scripted upgrade sketch follows below.
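A sketch of a scripted upgrade step that such tests could wrap, assuming Magnum's rolling-upgrade action ('openstack coe cluster upgrade', available in recent Magnum releases) works with the custom driver; the cluster and template names are placeholders:

 import subprocess

 def upgrade_cluster(name, new_template):
     # Ask Magnum to roll the cluster onto a newer template.
     subprocess.run(['openstack', 'coe', 'cluster', 'upgrade',
                     name, new_template], check=True)

 upgrade_cluster('deepops-k8s', 'k8s-ubuntu-template-v2')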
    • Priority: high
    • Stakeholder: User
    • Requirement: I want to be able to sign and revoke users' certificates.
    • This should be supported by Magnum, but needs testing when using a custom driver. A certificate-handling sketch follows below.
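A sketch of the certificate workflow, reusing the magnum client and cluster from the first sketch. It assumes Magnum's certificates API (signing via CSR, with revocation handled coarsely by rotating the cluster CA, as 'openstack coe ca rotate' does) behaves with the custom driver as with the stock drivers; the CSR path is a placeholder:

 # Sign a user's CSR against the cluster CA.
 with open('client.csr') as f:  # placeholder path
     csr = f.read()
 signed = magnum.certificates.create(cluster_uuid=cluster.uuid, csr=csr)
 print(signed.pem)  # the signed client certificate

 # 'Revoke' by rotating the cluster CA, which invalidates previously
 # signed client certificates; this is the part that needs testing.
 magnum.certificates.rotate_ca(cluster_uuid=cluster.uuid)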

Questions

  1. Do users want to maintain their own Kubernetes cluster, or are they looking for a fully-managed solution?
     The latter wouldn't really need Magnum and would be closer to what GCP offers, where users don't need to worry about things like upgrading Kubernetes because this is all done for them automatically by Google. The former requires one person from the User's team to be designated as the administrator of the cluster (responsible for access to instances, issuing certificates and so on).