Technical Questions
Can AMD and Intel CPUs be mixed on VM and Container hosts?
Yes, servers with different CPU brands can be mixed, but certain rules must be followed. Servers are grouped into pools, also called clusters. A cluster should ideally contain only a single manufacturer's CPUs. This guarantees that virtual machines can migrate between servers without causing problems for client applications in production.
Can you mix CPU family types, such as Intel Xeon E5 and E7, with a different number of cores?
Yes; however, the pool of hosts should contain servers with CPU architectures as similar as possible, and the ideal case is identical hosts in a single pool. The number of cores only determines the maximum vCores for virtual machines. The CPU architectures in a pool are normalized: the least common denominator of all architectures present, i.e. the common subset of their instruction sets, is used as the virtualization model. As a result, some instruction sets from the more advanced CPU architectures are not virtualized.
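The normalization described above can be sketched as a simple set intersection. This is an illustrative sketch, not Warren's actual code; the host names and feature lists are hypothetical examples.

```python
# Illustrative sketch: derive a pool's virtual CPU model as the common
# subset ("least common denominator") of instruction-set features across
# all hosts. Host names and feature lists are hypothetical.

def baseline_cpu_features(hosts: dict[str, set[str]]) -> set[str]:
    """Return only the CPU features supported by every host in the pool."""
    feature_sets = iter(hosts.values())
    baseline = set(next(feature_sets))
    for features in feature_sets:
        baseline &= features  # drop anything not present on every host
    return baseline

pool = {
    "host-e5": {"sse4_2", "avx", "aes"},
    "host-e7": {"sse4_2", "avx", "avx2", "aes"},  # newer CPU with AVX2
}

# AVX2 exists only on host-e7, so it is dropped from the pool baseline.
print(sorted(baseline_cpu_features(pool)))
```

This is why a VM scheduled on the newer host still cannot use AVX2: the pool-wide model must run unchanged on every host it might migrate to.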
Are GPUs supported?
Physical GPUs are supported in virtualization hosts via PCIe pass-through. Virtualization of GPU resources is supported by the library used for virtualization, but this feature is not yet available via the Warren platform and will be integrated in the near future.
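With PCIe pass-through, the hypervisor hands a physical PCI device directly to one guest. Under libvirt/KVM (a common virtualization stack; the source does not name Warren's library), this is expressed as a `<hostdev>` element in the domain XML. The sketch below only builds that XML; the PCI address is a hypothetical example, and real addresses come from `lspci`.

```python
# Sketch of the libvirt <hostdev> XML used for PCIe pass-through of a GPU.
# The PCI address (0000:3b:00.0) is a hypothetical example. This builds
# the XML string only; it does not attach any device.

def gpu_passthrough_xml(domain: int, bus: int, slot: int, function: int) -> str:
    return (
        '<hostdev mode="subsystem" type="pci" managed="yes">\n'
        '  <source>\n'
        f'    <address domain="0x{domain:04x}" bus="0x{bus:02x}" '
        f'slot="0x{slot:02x}" function="0x{function:x}"/>\n'
        '  </source>\n'
        '</hostdev>'
    )

print(gpu_passthrough_xml(0x0000, 0x3b, 0x00, 0x0))
```

Because the device is owned exclusively by one guest, pass-through gives near-native GPU performance but no sharing, which is exactly the gap that GPU virtualization is meant to close.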
What is the scale in number of nodes for VMs and Containers (other solutions have been seen to limit this to 8 or 16 nodes)?
There is an option for a provider to limit the maximum scaling factor if needed. Currently, it is not used for virtual machines; for containers, the default value is 10.
Can Warren stack software updates be done seamlessly for Control Nodes and VM and Container Hosts?
Yes. Virtually all Warren updates can be done seamlessly and without special preparation. Some more complex host OS updates (such as kernel updates) require rebooting the host. In such cases, the live migration feature is used to migrate containers and virtual machines to another host before the updates are applied, so the whole process remains seamless for all users.
Besides the Warren software, there is also 'software' on the hardware, such as BIOS updates and firmware updates on controllers. How is this managed?
All tasks involving pre-Warren and embedded software, such as new server detection and hardware component monitoring and configuration, including firmware updates for BIOS, BMC, NIC, RAID controllers, etc., are carried out using DMTF's Redfish® API standard interface. For these operations, a dedicated NIC on a dedicated management LAN is used.
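A firmware update over Redfish is a plain HTTPS call against the BMC. The sketch below builds the request for the standard `UpdateService.SimpleUpdate` action defined by the DMTF Redfish specification; the BMC address and image URL are hypothetical, and the request is deliberately not sent.

```python
# Sketch of a Redfish firmware-update request (DMTF-standardized
# UpdateService.SimpleUpdate action). BMC address and image URL are
# hypothetical examples; the request is built but not sent here.

import json

def simple_update_request(bmc: str, image_uri: str) -> tuple[str, str]:
    """Return (url, json_body) for a Redfish SimpleUpdate POST."""
    url = f"https://{bmc}/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate"
    body = json.dumps({"ImageURI": image_uri, "TransferProtocol": "HTTP"})
    return url, body

url, body = simple_update_request("10.0.0.42", "http://repo.local/bios.bin")
print(url)
print(body)
```

Because Redfish is a vendor-neutral standard, the same call shape works against BMCs from different server manufacturers, which is what makes mixed hardware fleets manageable from one place.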
How are the platform and hardware monitoring done (is there a central console or does the platform generate tickets)?
Hardware monitoring can be roughly divided into three verticals:
1. SDN (network) monitoring, including logs, observability data, and security-related events for both the software and the hardware portion of the network. This vertical is carried out entirely by Tungsten Fabric, the component the Warren SDN solution is based on.
2. Server component health and performance monitoring, and server power-management monitoring. This vertical is central to the automated recovery and resource-balancing functionality and is built on DMTF's Redfish® API.
3. Host and Warren component monitoring. This part monitors the performance of Warren itself and provides statistics to the administrative GUI meant for infrastructure providers (Warren clients).
Are resources like CPU, Memory, Drives shared between different tenants (the noisy neighbor effect)?
There are different levels of hardware sharing available with Warren:
1. Dedicated hosts - The end-user has a special agreement with the provider, who prepares a host connected to the Warren cluster. It can be accessed from the API and is visible in the GUI, so it acts like any other location, separate from the main shared resource pool. No sharing; single-tenant mode.
2. Bare-metal servers - dedicated or shared hosts with no pre-configured virtualization and direct access to hardware. The likelihood of the noisy-neighbor effect is excluded with dedicated cores and RAM ranges/channels.
3. Shared hosts - The end-user cannot directly see or access the virtualization server, only the virtual machine or container created from a general resource pool defined as a location. The likelihood of the CPU and RAM noisy-neighbor effect is excluded with static resource amounts that cannot be exceeded. Disk and network resources are protected with per-tenant throughput limitations.
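Per-tenant throughput limits of the kind mentioned for shared hosts are commonly enforced with a token-bucket mechanism. The toy model below illustrates the idea only; it is not Warren's implementation, and the tenant names and rates are made up.

```python
# Toy token-bucket rate limiter illustrating per-tenant throughput caps.
# Each tenant gets its own bucket, so one tenant's burst cannot consume
# another tenant's disk or network budget. Not Warren's actual code.

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens (e.g. MB) refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity

    def refill(self, elapsed: float) -> None:
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def try_consume(self, amount: float) -> bool:
        if amount <= self.tokens:
            self.tokens -= amount
            return True
        return False  # request exceeds the tenant's current budget

buckets = {"tenant-a": TokenBucket(rate=100, capacity=200)}
b = buckets["tenant-a"]
print(b.try_consume(150))  # fits within the burst capacity
print(b.try_consume(100))  # rejected: only 50 tokens remain
b.refill(1.0)              # one second later, 100 tokens are restored
print(b.try_consume(100))
```

Because each tenant draws from an isolated budget, a bursty neighbor only ever exhausts its own bucket, which is the essence of noisy-neighbor protection for disk and network I/O.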
Do we need a specific network controller (brand, model) to arrange Micro-segmentation (that is what I understand is done)?
Micro-segmentation is done with L3 overlay networking, which in turn is controlled by the Warren SDN. Requirements for NICs can be found at the following link: Warren Hardware Requirements
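In an L3 overlay, each tenant network maps to its own virtual network identifier (a VNI, as in VXLAN), and traffic is only delivered between endpoints on the same segment. The toy model below is purely illustrative; the tenant-network names and VNIs are hypothetical, and the real enforcement is done by the SDN (Tungsten Fabric), not by application code.

```python
# Toy model of L3 overlay micro-segmentation: each tenant network maps to
# a virtual network identifier (VNI, as in VXLAN); traffic is only allowed
# between endpoints that share a VNI. Hypothetical names and VNIs.

vni_of = {
    "tenant-a/web": 5001,
    "tenant-a/db":  5002,
    "tenant-b/web": 6001,
}

def may_communicate(net1: str, net2: str) -> bool:
    """Same overlay segment (same VNI) -> traffic allowed by default."""
    return vni_of[net1] == vni_of[net2]

print(may_communicate("tenant-a/web", "tenant-a/web"))  # same segment
print(may_communicate("tenant-a/web", "tenant-b/web"))  # isolated
```

This is why no special switch features are needed for tenant isolation: segmentation lives in the overlay encapsulation, not in the physical network.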
Do we need a specific network switch (brand and model)?
There are no strict requirements for switch and router models. However, if networking hardware already exists before Warren adoption, the exact models should be provided to the Warren team so that all SDN functionality can be validated to work with the particular devices.
Can the network setup from the hardware perspective be done redundant (2 or more switches connected to the VM and Container hosts)?
Yes, and it is not only possible but highly advisable; otherwise, Warren's high-reliability targets cannot be guaranteed. All networking devices should be redundant if the particular Warren deployment is used for public service provisioning.
Are Controller Nodes needed in every physical location (when you have multiple DC or on-premises locations where you run VM and Container hosts)?
In general, yes. Exceptions can be considered when a particular location has a specific purpose and therefore runs, for example, a reduced set of Warren components. Even then, for the sake of reliability, at least one control node should be present.
What are the options for storage systems other than CEPH?
At the moment, the only production-grade option is CEPH. Local storage (inside hosts) with ZFS and iSCSI-connected shared enterprise storage options are in development. Analysis and preparation for integrating the StorPool distributed storage system have been completed, and development can start when demand arises for a commercial, more stable and feature-rich CEPH counterpart.
Are there migration tools available from, for example, VMware or Hyper-V to the Warren.io platform?
There are specific procedures for such migrations, broadly described here: https://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack/
OpenStack is quite similar to Warren in terms of virtualization tooling. The process is by no means trivial, and such a migration has a significant chance of failing when end-user applications rely on platform-specific features of the current platform.
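The disk-conversion step common to such migrations uses `qemu-img` to convert VMware VMDK or Hyper-V VHDX images to QCOW2 or raw. The sketch below only builds the command line; the filenames are hypothetical, and the command is not executed here.

```python
# Sketch of the disk-conversion step: convert VMware VMDK / Hyper-V VHDX
# images with qemu-img ("-p" shows progress, "-f" is the source format,
# "-O" the target format). Builds the argv only; nothing is executed.

def qemu_img_convert_cmd(src: str, dst: str, src_fmt: str,
                         dst_fmt: str = "qcow2") -> list[str]:
    return ["qemu-img", "convert", "-p", "-f", src_fmt, "-O", dst_fmt, src, dst]

# Hypothetical example images from a VMware and a Hyper-V source.
print(" ".join(qemu_img_convert_cmd("app01.vmdk", "app01.qcow2", "vmdk")))
print(" ".join(qemu_img_convert_cmd("db01.vhdx", "db01.qcow2", "vhdx")))
```

Converting the disks is only part of the work; guest drivers (e.g. VirtIO), network configuration, and any platform-specific integrations still have to be handled separately, which is where such migrations most often fail.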