Virtualization

The HyperCloud virtualization layer is a tenant-first framework for instantiating and managing virtual machines within namespaces.

Compute Layer

Within a tenant, templates can be created from scratch, or inherited from the operator namespace and modified. Operators can build standard VM templates, network templates, and images either across the entire cloud or individually for each tenant.

Virtual machine templates

All virtual machines are instantiated from virtual machine templates.

Templates consist of:

  • Capacity: Physical and virtual resource allocation.
  • NICs: Set of virtual network cards, their respective virtual networks, and any additional configuration.
  • Images: One or more images to attach when the virtual machine boots.
  • Image configuration: Boot order and context such as hostname and SSH keys.

[Image: VM templates (/assets/hc/vm-templates.png)]

Advanced configuration settings allow for granular management of every VM attribute, which can be pasted in directly or supplied via the API.
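
For instance, since HyperCloud exposes the OpenNebula XML-RPC API (see Orchestration below), a template can also be instantiated from a script. The following is a minimal sketch, not a definitive recipe: the endpoint, credentials, and template ID 42 are placeholder assumptions.

import xmlrpc.client

# Placeholder endpoint and credentials; HyperCloud serves the OpenNebula
# XML-RPC API (port 2634 in the orchestration examples below).
ENDPOINT = "https://hypercloud.example.com:2634/RPC2"
SESSION = "username:password"

proxy = xmlrpc.client.ServerProxy(ENDPOINT)

# one.template.instantiate(session, template_id, vm_name, hold,
# extra_template, persistent) returns [success, vm_id_or_error, errcode].
rc = proxy.one.template.instantiate(SESSION, 42, "web-01", False, "", False)
if rc[0]:
    print("New VM ID:", rc[1])
else:
    print("Error:", rc[1])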

Image configuration

Image configuration templates provide comprehensive control over the full set of hypervisor attributes available to tenants.

The most important attributes are:

  • Capacity configuration such as physical CPU and memory.
  • OS and boot options including CPU architecture, kernel, initrd, bootloader, and firmware.
  • Disk options including caching, mapping drivers, IO throttling, and trim.
  • Network options including IP, MAC, physical bridge, network filtering, security groups, bandwidth throttling, and custom network configuration shell scripting.
  • User inputs, which prompt for custom variable values when a VM template is instantiated.
  • Schedule actions, which perform VM operations at scheduled times.
  • Placement options.
  • Special VM features including Physical Address Extension (PAE), HyperV Extensions, IO threads, and ACPI.

A comprehensive reference of all image configuration options is available in the API documentation.
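
Many of the attributes above correspond to standard OpenNebula template attributes, so they can also be managed programmatically. The following is a minimal sketch, assuming the same XML-RPC access as above; the endpoint, credentials, template ID 42, and attribute values are all illustrative.

import xmlrpc.client

# Placeholder endpoint and credentials (see the orchestration examples below).
proxy = xmlrpc.client.ServerProxy("https://hypercloud.example.com:2634/RPC2")
SESSION = "username:password"

# OS/boot options, special VM features, and disk options expressed as
# OpenNebula template attributes; the values are illustrative only.
attrs = """
OS       = [ ARCH = "x86_64", BOOT = "disk0" ]
FEATURES = [ ACPI = "yes", PAE = "yes" ]
DISK     = [ CACHE = "writeback", DISCARD = "unmap" ]
"""

# one.template.update(session, template_id, contents, update_type);
# update_type 1 merges the new attributes into template 42 rather than
# replacing its existing contents.
rc = proxy.one.template.update(SESSION, 42, attrs, 1)
print("Updated template:" if rc[0] else "Error:", rc[1])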

Service definitions

As opposed to creating single instances, service definitions allow tenants to create a deployment consisting of many disparate resources that connect to form a single service.

In the example below, a service definition for message-app describes a set of stateless services atop a resilient database architecture, fronted by an SSL-terminating load balancer tier and an external network.

[Image: Service definitions (img/service-definitions.png)]

This example service consists of:

  • 3 PostgreSQL LXC containers, backed by an image,
  • a persistent volume store for the containers,
  • a set of stateless message-daemons,
  • 3 HAProxy load balancers,
  • an internal network connecting all of the above, and
  • an external network serving requests to the load balancers alone.
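
A definition like this corresponds to a OneFlow-style service template (the flow_endpoint in the Terraform configuration below points at this API). Below is a minimal sketch, assuming OneFlow semantics; the role names, vm_template IDs, and cardinalities are hypothetical.

import json

# A OneFlow-style service template for the message-app example above.
# Role names, vm_template IDs, and cardinalities are hypothetical.
service_template = {
    "name": "message-app",
    "deployment": "straight",  # deploy roles in dependency (parent) order
    "roles": [
        {"name": "postgresql", "cardinality": 3, "vm_template": 10},
        {"name": "message-daemon", "cardinality": 2, "vm_template": 11,
         "parents": ["postgresql"]},
        {"name": "haproxy", "cardinality": 3, "vm_template": 12,
         "parents": ["message-daemon"]},
    ],
}

# The resulting JSON document would be submitted to the flow endpoint.
print(json.dumps(service_template, indent=2))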

Orchestration

Terraform

HyperCloud fully embraces the open-source OpenNebula standard, exposing its XML-RPC API through the OpenNebula Terraform Provider so cloud resources can be automated and managed as infrastructure-as-code. Configure the provider with your HyperCloud credentials and endpoint, as shown below, and you can deploy and manage resources such as virtual machines and networks with Terraform.

provider "opennebula" {
  endpoint      = "https://hypercloud.softiron.com:2634/RPC2"
  flow_endpoint = "https://hypercloud.softiron.com:2475/RPC2"
  username      = var.one_username
  password      = var.one_password
}

Resources can then be added, for example:

resource "opennebula_virtual_machine" "example" {
  count       = 2
  name        = "virtual-machine-${count.index}"
  description = "VM"
  cpu         = 1
  vcpu        = 1
  memory      = 1024
  group       = "terraform"
  permissions = "660"
  template_id = opennebula_template.example.id

  nic {
    model           = "virtio"
    network_id      = var.vnetid
    security_groups = [opennebula_security_group.example.id]
  }

  tags = {
    environment = "example"
  }
}

The Terraform provider is documented here, and orchestration examples can be found in the hypercloud-examples repository.

Ansible

HyperCloud has an Ansible module which can be used to deploy, manage, and terminate instances. This module is included in the community.general collection, so it can be installed with:

ansible-galaxy collection install community.general

With this module, you can create a playbook such as the following to deploy an instance:

---
- name: deploy a vm
  hosts: localhost
  remote_user: root

  tasks:
  - name: Deploy a new VM named 'foo', using network 0 and security group 101
    community.general.one_vm:
      template_name: 'Debian 10'
      vcpu: 4
      attributes:
        name: foo
      networks:
        - NETWORK_ID: 0
          SECURITY_GROUPS: "101"
      state: present
      api_username: "hypercloud_username"
      api_password: "hypercloud_password"
      api_url: "https://<hypercloud_url>:2634/RPC2"

Documentation and examples can be found under the community.general.one_vm module.

Salt

Salt is a data-driven automation, orchestration, and infrastructure management platform. To configure your Salt master to communicate with and manage HyperCloud, add a few lines to a configuration file (any file ending in .conf) in the /etc/salt/cloud.providers.d/ directory on the master. An example configuration file is below:

hypercloud:
  xml_rpc: https://<HyperCloud-URL>:2634/RPC2
  user: <HyperCloud Username>
  password: <HyperCloud Password>
  driver: opennebula
  private_key: <Path-to-Private-Key>

Once this file is in place and populated with the requisite values, you can verify connectivity to HyperCloud from your Salt master with the following command, which returns a list of all running virtual machines and containers:

root@salt:~# salt-cloud -f list_nodes hypercloud
hypercloud:
    ----------
    opennebula:
        ----------
        k3s-agent-1:
            ----------
            id:
                3581
            name:
                k3s-agent-1
            private_ips:
                - 10.127.4.75
            public_ips:
            size:
                ----------
                cpu:
                    1
                memory:
                    1024
            state:
                3