Goals of this case study
- To specify the information that needs to be exchanged between Warren and the client Data Center (DC) in order to decide whether the DC's requirements can be fulfilled with the current state of the Warren feature set
- To extract that data while taking into account the wide variety of possible setups that can ultimately satisfy the DC's main goal
- For the Warren side, to group the required features by demand and development effort, delivering the best business value to the maximum number of clients
Introduction
All the information Warren needs from the DC side can and should be gathered as a set of simple, unambiguous questions. These questions can, in turn, be divided roughly into three subsets, based on the goal they are meant to achieve. Each subset can be expressed as a more general "umbrella question":
- Which infrastructure and software components are currently in use, and what role do they play in the plans for the future?
- What services does the DC offer, and what expectations does it have of Warren?
- How committed is the DC to cooperating with Warren, and which 3rd-party software systems will run alongside Warren?
If these topics are clarified by both sides, the ambiguity and misunderstanding of decisive factors, which form the backbone of successful cooperation, should be minimized.
These questions are vital for gathering the information that affects the following topics in Warren development:
- Architectural decisions:
  - Which external libraries, components, and standards to use to cope with the requirements of the majority of DCs in the target group?
  - How to design the functionality across component systems so that we provide the value we claim to offer, without decreasing the quality of services and processes that existed before Warren adoption?
- Marketing content and business value:
  - Can we actually offer the functionality we are claiming to offer?
  - Can we offer the functionality at a sufficient level of reliability in a particular service domain of a DC?
  - Are we doing it in a sensible way, i.e. is the development effort comparable to the actual value the development result provides?
  - Are all the features and functionality we are or will be providing actually in line with the real requirements?
Conflicting nature of service requirements
To enhance the analysis result and make it directly usable as input to the development process, let's partition the hypothetical DC stack (hardware, firmware, software) into functional domains whose properties map to corresponding Warren components. Two domains of the DC functional stack that exist before Warren adoption are more influential than the others, both for future development and for the adoption process: Network and Storage. They are also tightly coupled, as decisions in one domain depend heavily on the properties of the other. The connection between these two domains is best expressed in the decision-making process as two fundamental trade-offs:
Availability vs Locality
This is the biggest trade-off in multi-site computing (distributed cache and storage are thus a blessing and a curse at the same time).
- Availability in this context denotes:
  - Spatial availability - data or a service is concurrently available to recipients/consumers in multiple locations rather than just one (e.g. many Virtual Machines using the same database that resides in distributed storage)
  - Temporal continuity - data or a service is kept available even in the case of software or hardware failures ("High Availability")
Both of these aspects may seem very desirable, especially in cloud computing, but the downside is delivery speed in various forms. For example, distributed storage without high-end hardware may not offer sufficiently low latency for storage-sensitive applications. Also, keeping the application availability rate high involves software redundancy and several levels of hardware redundancy, which means buying additional devices and keeping them constantly running.
- Locality denotes the physical distance of a functional domain from the compute resource (local storage vs distributed storage)
While High Availability metrics are achieved by involving distributed and redundant resources, locality is also not free from redundancy cost; however, that redundancy is usually at the sub-server level, so definitely less expensive. Local storage also has much lower latency, but its total capacity is very limited and, as data on this storage is not available to other devices without additional control and services, it introduces a need for data duplication in addition to the duplication meant for High Availability.
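To make the availability side of this trade-off more concrete, the following minimal sketch (Python, with hypothetical per-copy availability figures that are illustrative assumptions, not measurements) shows the standard way of estimating what redundancy buys:

```python
# Hypothetical, illustrative availability figures -- not measurements.
def combined_availability(single_copy_availability: float, replicas: int) -> float:
    """Availability of a service that stays up as long as at least one of
    `replicas` independent copies is up (failure independence is assumed)."""
    return 1.0 - (1.0 - single_copy_availability) ** replicas

local = combined_availability(0.99, 1)       # single local resource
replicated = combined_availability(0.99, 3)  # three independent replicas

print(f"local:      {local:.4%}")       # ~99.0000%
print(f"replicated: {replicated:.4%}")  # ~99.9999%
```

This gain in availability is what the extra hardware, energy, and latency are traded for; the same arithmetic can be turned around to estimate how much redundancy a given availability target actually requires.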
Software- vs hardware-defined control of domains
This can mostly be described as:
- SD* - slower, but flexible, automatically reconfigurable, and easily portable
- HD* - high speed, but low portability; automatic configuration is limited or impossible
The general tendency is towards the concept of a "software-defined DC", largely because of the automation and management benefits it offers. The exception to this tendency is the popularity of bare-metal provisioning, which can be explained by the still-existing demand for direct control over hardware (required by some types of applications), independence from general software-system failures, and speed.
Considerations in the network domain
There are several factors in a DC's network setup that dictate what we need to think through in the Warren application development process. Such factors include:
Network topology (tree, Clos, fat-tree, torus, etc.)
This aspect defines network traffic between components, servers, and racks, as well as between the DC and the internet. It also sets the DC's physical extendability properties, so we need to consider:
- How the automated discovery process will be handled
- What deployment schema to use when implementing new nodes
- Which components are involved in such processes
- How failures in such processes will be handled
Obviously, we cannot fine-tune our setup for every topology type: topology is not a standalone factor, so the set of variables in such an analysis is large and the analysis too costly compared to the business value of the outcome. We can, however, target a solution that covers the topologies most commonly used in DCs with a sufficient degree of quality. Metrics for service reliability and availability standards cannot be calculated purely theoretically for a platform that is under heavy development; they will rather be deduced from the DC adoption process. The current assumption is that the most widely used topologies in the probable target DC group are fat-tree and various forms of Clos, so most optimizations are made for these two topology types.
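As a small illustration of how strongly topology choice drives extendability planning, the sketch below uses the textbook sizing formulas for a k-ary fat-tree (a generic illustration in Python, not a description of any particular client DC):

```python
# Textbook k-ary fat-tree sizing, where k is the switch port count (k even).
# Illustrative only; real DC topologies often deviate from the pure model.
def fat_tree_capacity(k: int) -> dict:
    assert k % 2 == 0, "k must be even"
    hosts = k ** 3 // 4            # k pods * (k/2 edge switches) * (k/2 hosts)
    edge_and_agg = k * k           # k switches per pod, k pods
    core = (k // 2) ** 2
    return {"hosts": hosts, "switches": edge_and_agg + core}

for k in (4, 8, 16, 48):
    print(k, fat_tree_capacity(k))
# e.g. k=48 -> 27648 hosts, 2880 switches
```

The fact that host and switch counts follow directly from a single parameter is one reason the fat-tree/Clos family lends itself well to automated discovery and deployment planning.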
Nature of applications and services offered by DC
Although both this and the next point may seem trivial compared to real problem magnets like network topology, adopting an SDN solution, or, even harder, consolidating different SDN solutions, this has become a major issue in public clouds (and presumably also in private ones, where such issues usually do not materialize as a series of scientific papers). Like almost all network-related considerations (except perhaps SDN), this one is quantity-dependent.
Amount of in-DC traffic between racks
The larger the data flow between hardware devices, the bigger a problem it tends to be. This traffic (and also in-DC traffic between silos, if a larger DC is under consideration) is a direct measure of the service system's (Warren's) efficiency. It is a two-fold problem: first the traffic generated by the clients, and second the traffic generated by Warren as a management system. Warren's goal is to reallocate resources to minimize in-DC traffic, and in rare cases doing so can destabilize network flow for a short period of time. Management flow must always take precedence when client flow is causing problems, even if this decreases client throughput further, because its purpose is to restore the previous state, or at least maximize efficiency with the currently limited amount of available resources.
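A minimal sketch of the kind of reallocation heuristic described above, assuming a hypothetical traffic matrix between VM pairs and a simple greedy co-location rule (an illustration of the idea, not Warren's actual scheduler):

```python
from collections import defaultdict

# Hypothetical input: observed traffic (GB/day) between VM pairs,
# and the rack each VM currently runs in.
traffic = {("vm-a", "vm-b"): 120, ("vm-a", "vm-c"): 5, ("vm-b", "vm-d"): 80}
rack_of = {"vm-a": "rack-1", "vm-b": "rack-2", "vm-c": "rack-1", "vm-d": "rack-2"}

def inter_rack_traffic(placement: dict) -> int:
    """Total traffic that crosses rack boundaries under a given placement."""
    return sum(vol for (a, b), vol in traffic.items() if placement[a] != placement[b])

def greedy_colocate(placement: dict, capacity_per_rack: int = 3) -> dict:
    """Try to move one VM of each heavy cross-rack pair next to its peer;
    keep a move only if it lowers total inter-rack traffic."""
    placement = dict(placement)
    load = defaultdict(int)
    for rack in placement.values():
        load[rack] += 1
    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        src, dst = placement[a], placement[b]
        if src == dst or load[dst] >= capacity_per_rack:
            continue
        before = inter_rack_traffic(placement)
        placement[a] = dst
        if inter_rack_traffic(placement) < before:
            load[src] -= 1
            load[dst] += 1
        else:
            placement[a] = src   # revert moves that do not help
    return placement

print("before:", inter_rack_traffic(rack_of))                   # 120
print("after: ", inter_rack_traffic(greedy_colocate(rack_of)))  # 5
```

In practice the decision would also weigh migration cost and the short-term destabilization mentioned above, which is exactly why management flow needs precedence while such reshuffling is in progress.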
Existing SDN solution
In general, all SDN systems are based on the same principles and are, for the most part, derived from two prevalent SDN frameworks. There are several protocols for network device configuration, among which OpenFlow is still the dominant one. Almost all of the routing protocols we need are also supported by all major SDN solutions.
To conclude the above: no drastic problems should arise at the connectivity level (which doesn't mean it is a trivial task!). However, there is one exception to that hypothetical balance - the security domain. All SDN systems implement one or more security domains, whether client-level or system-wide. Configuring two or more SDN systems to cooperate simultaneously in that domain may be more time-consuming than configuring the whole system to adopt a new one.
Considerations in the storage domain
The Warren storage domain consists of three options:
- Distributed storage
- Shared storage
- Local storage
To determine the right solution, one must consider the factors required to implement each particular storage type. As storage holds the most valuable asset - client data - its impact on reliability and QoS is the greatest. After all, a network outage only affects the availability of data, whereas storage problems may lead to permanent data loss.
Distributed - Expensive, but reliable, multi-functional, and scalable
The cost of distributed storage (which may also be shared-distributed) comes from the fact that it is usually (though not always - one exception is HCI) implemented as one or more separate clusters. So there are three main types of cost and an additional, optional one:
- Upfront cost - the devices themselves, including a dedicated storage network (fixed cost)
- Repair/management cost + cost of floor space (fixed over a long period of time)
- Energy cost (usually fixed over a long period of time, with seasonal spikes)
- Optional license cost when a commercial distributed storage system is applied
When these are summed up and divided into monthly payments over a period equal to the service life of the server units, the largest item by far is the energy cost. So although it may seem wiser to make use of old server hardware for storage clusters, it is actually not so at all. It is much wiser to buy new, specially configured, low-power-consumption hardware, which may even come with a distributed storage system pre-installed and pre-configured. Such specially configured devices offer another benefit - fast cluster extendability.
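A minimal sketch of the amortization arithmetic described above, using purely hypothetical price and power figures (they should be replaced with the DC's own numbers; the point is the breakdown, not the absolute values):

```python
# All figures below are hypothetical placeholders, not vendor quotes.
SERVICE_LIFE_MONTHS = 5 * 12        # assumed 5-year service life

upfront = 40_000                    # devices + dedicated storage network
repair_and_space = 150              # per month
license_fee = 0                     # per month; 0 for an open-source system

nodes = 10
watts_per_node = 500                # older, repurposed servers tend to draw more
price_per_kwh = 0.25
hours_per_month = 24 * 30

energy = nodes * watts_per_node / 1000 * hours_per_month * price_per_kwh

monthly_total = upfront / SERVICE_LIFE_MONTHS + repair_and_space + license_fee + energy
print(f"energy per month:  {energy:.0f}")                          # 900
print(f"upfront per month: {upfront / SERVICE_LIFE_MONTHS:.0f}")   # 667
print(f"total per month:   {monthly_total:.0f}")                   # 1717
```

Plugging in figures for old, repurposed servers (higher wattage, lower upfront cost) versus new low-power nodes is exactly the comparison that leads to the conclusion above.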
A typical distributed storage solution implements both object and block storage, which makes it possible (when they are implemented as separate clusters) to use the object storage also as a base for backup or disaster-recovery implementations, in addition to its main purpose.
The reliability advantage compared to shared or local storage should be obvious.
Warren component placement in the DC
Based on the network and storage requirements, several issues can be predicted when the location of Warren control-plane components is poorly planned, such as:
Network congestion or TCP incast in top-of-rack switches.
One possible solution (in the case of 2 physical control nodes) is to host them in different racks. This leaves the ability, if the problems above occur in one rack, to recover all control components on the node in the other rack. Hypervisors are also wise to keep in a separate rack, both due to the type and nature of their network traffic and because of the danger of out-of-control virtualized resources, whether caused by poor configuration or by the applications they host.
Low availability markers
Depending on the DC's network topology, the reasons in section 1, but also hardware failures, may cause unsatisfactory MTBF and MTTR. For example, if both control nodes are placed in one rack, a rack-level hardware failure causes at least a short total downtime during the diagnostics/repair process. That is, of course, not the case when Warren is used more heavily than a mere 3-node tryout.
On the other hand, keeping such a level of separation between nodes certainly increases in-DC traffic between racks. So there are no absolute rules for component placement; rather, it depends on the already existing setup, the nature of the provided services, and the median/peak traffic levels in the racks.
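To illustrate the MTBF/MTTR reasoning behind splitting the two control nodes across racks, here is a small sketch with a hypothetical rack-failure probability and an (optimistic) independence assumption:

```python
# Hypothetical probability that a given rack suffers a disabling failure
# (power, top-of-rack switch, ...) within some observation window.
p_rack_failure = 0.02

# Both control nodes in the same rack: one rack failure takes control down.
p_control_down_same_rack = p_rack_failure

# Control nodes split across two racks: both racks must fail
# (assuming independent failures, which is optimistic).
p_control_down_split = p_rack_failure ** 2

print(f"same rack: {p_control_down_same_rack:.4f}")  # 0.0200
print(f"split:     {p_control_down_split:.4f}")      # 0.0004
```

The extra inter-rack traffic discussed above is the price paid for that reduction, which is why the final placement decision depends on the specific DC's traffic profile.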