Goals of the case study
To specify the information that needs to be exchanged between Warren and the client Data Center (DC) in order to decide whether the DC's requirements can be met with Warren's current feature set.
To gather all necessary data, taking into account the wide variety of technical possibilities that can satisfy the DC's service model and commercial goals.
To help Warren discover the features the DC requires, together with business-value vs development-effort estimates, so that development provides maximum business value for the DC.
Introduction
Necessary information from the Data Centre can be gathered with a set of simple, unambiguous questions, divided into three subsets based on the goal they are meant to achieve. Once the questions in every subset are answered, any ambiguity and misunderstanding over technical and determining factors will be cleared. The technical clarity achieved is the fundamental premise for laying the groundwork for a successful cooperation.
What are the current infra- and software components in use and what role do they play in the plans for the future?
What services is the DC offering and what expectations does it have of Warren?
What is the DC's level of commitment to the cooperation with Warren, and which 3rd party software systems will run alongside Warren?
Answers to the above questions are vital for gathering information that affects the following areas of Warren's development:
- Architectural Decisions:
- Which external libraries, components and standards to use to meet the requirements of the majority of DCs in Warren's client target groups.
- How to architect features across components so as to provide the value Warren aims to offer, while maintaining quality of service and the necessary processes the DC had before adopting Warren.
- Business Value and Marketing:
- Can Warren guarantee to offer the functionality we are claiming to offer?
- Can Warren provide the functionality at a sufficient level of reliability in a particular domain of service for the DC?
- Will development effort for Warren be in proportion to the business value the developed functionality is forecasted to provide?
- Do (and will) Warren's features and functionality align with the actual requirements of the DC?
Architectural Decisions
Trade-Offs between Availability vs Locality and Software vs Hardware Defined Control
There are two fundamental tradeoffs that a DC needs to decide on:
- Availability vs Locality
- Software vs Hardware-Defined control
To analyze the possible trade-off decisions for a DC and to make the input directly usable in Warren's development process, the hypothetical DC stack is divided into the following functional domains: hardware, firmware, software. All three domains have common properties that correlate with Warren's software components. The Network and Storage stack that the DC has previously adopted has the biggest influence on future development and technical on-boarding.
Network and Storage are tightly coupled, as decisions in one domain are influenced by the properties of the other. Once the connection between the Network and Storage domains is analysed in full, decision-making in the following two trade-offs becomes possible.
Availability vs Locality
The biggest trade-off decision is in multi-location computing (distributed cache and storage are simultaneously good and evil).
- Availability in this context denotes:
  - Spatial continuity - data or service is concurrently available to recipients/consumers in different locations rather than just one. Example: many Virtual Machines using the same database that resides in distributed storage.
  - Temporal continuity - data or service is kept available even in case of soft- or hardware failures ("High Availability")

In cloud computing both spatial and temporal continuity may seem desirable, but the downside is lower delivery speed and higher latency. For example, spatial availability via distributed storage without high-end hardware may not offer optimal latency for storage-sensitive applications. Assuring application high availability involves several software and hardware redundancies, which means buying and maintaining additional hardware and therefore a higher Total Cost of Ownership.
- Locality denotes the physical distance of a functional domain from compute (CPU, RAM) resources (local storage vs distributed storage)
Locality is also not free from redundancy costs, as High Availability metrics are achieved by involving both distributed and redundant resources. Luckily that usually happens at the sub-server level and is therefore less expensive. Local storage has lower latency, which is desirable, but total storage capacity is limited in contrast. In addition, as data on local storage is not available to other devices without additional control and services, extra data-duplication demands are introduced on top of the High Availability requirements. This means extra development work and more costs involved.
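To make the trade-off concrete, here is a minimal sketch of the reasoning; all numbers are invented assumptions for illustration, not Warren's measured metrics:

```python
# Minimal model of the Availability vs Locality trade-off.
# All numbers are illustrative assumptions, not measured values.

def availability(replicas: int, node_availability: float = 0.99) -> float:
    """Probability that at least one replica is reachable,
    assuming independent node failures."""
    return 1 - (1 - node_availability) ** replicas

def read_latency_ms(local: bool, network_hops: int = 2,
                    hop_cost_ms: float = 0.5, disk_ms: float = 0.1) -> float:
    """Local reads pay only the disk cost; distributed reads add
    a round trip over every network hop."""
    return disk_ms if local else disk_ms + 2 * network_hops * hop_cost_ms

for replicas in (1, 2, 3):
    print(f"{replicas} replica(s): "
          f"availability={availability(replicas):.4%}, "
          f"read latency={read_latency_ms(local=(replicas == 1)):.2f} ms")
```

The model is only directional: each added replica multiplies the failure probability down, while each network hop adds latency that local storage never pays.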
Software- vs Hardware-Defined Domains of Control
This can mostly be described as:
- Software-Defined - slow, yet flexible, automatically reconfigurable and easily portable
- Hardware-Defined - high speed, low portability, automatic configuration limited or impossible
The general tendency is towards the concept of a “solely software-defined DC”, largely because of the automation and management benefits it offers. An exception to this tendency is the popularity of bare-metal provisioning, as demand for direct control over hardware still exists. Some types of applications require it, both for speed and for independence from general software-level system failures.
Clearly Defined Roles of DC System Administration and Warren System Support
The DC system administrator's role depends on the size of the DC, the nature and complexity of the infrastructure, regional peculiarities, job descriptions and other factors. In the cooperation between the DC and Warren, a strict distinction can be made between the DC system administrator role and the Warren system support role. It is important to note that Warren's system support role differs from third-party software support. Where exactly that border lies needs to be discussed for each DC and its installation requirements.
The roles and responsibilities must be addressed thoroughly before a final decision on adopting Warren. It is a grave error to assume that supportability is a second-grade matter that can be addressed after production-grade systems are set up. In fact, it is one of the most important parts of service provisioning, and it deserves a chapter of its own in the development documentation! Complex software systems must be developed with efficient observability and support in mind.
Considerations and Goals in the Network Domain
There are several factors in the DC's network setup that dictate what needs to be thought through in Warren's application development process. Such factors include:
Network topology (tree, Clos, fat-tree, torus, etc.)
This aspect defines network traffic between components, servers and racks, as well as network traffic between the DC and the internet. Network topology sets the physical extendability properties of the DC, thus we need to consider:
How will the automated hardware discovery process be handled?
What deployment schema should be used when implementing new nodes?
Which system components are involved in discovery and deployment processes?
How will failures and other non-positive results of discovery and deployment be handled?
We are obviously not able to fine-tune Warren's setup for every type of topology. Topology is not a standalone factor, and the number of variables in pre-analysis makes it too costly compared to the business value of the expected outcome. However, we can target the discovery and deployment solutions most widely used by DCs with a sufficient degree of quality. Service reliability and availability metrics cannot be calculated theoretically for a platform under continuous heavy development; these metrics will instead be defined during the DC adoption and on-boarding process.
The presumption is that the most widely used topologies in Warren's target DC client group are fat-tree and various forms of Clos. Based on that, most optimizations are made for these topology types.
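To make the discovery and deployment questions above concrete, the following sketch outlines how such a pipeline could be structured. All names (Node, on_discovery, deploy) are hypothetical illustrations, not Warren's actual interfaces:

```python
# Hypothetical sketch of an automated discovery/deployment pipeline.
# Node, on_discovery and deploy are illustrative names, not Warren's API.
from dataclasses import dataclass, field

@dataclass
class Node:
    mac: str
    state: str = "discovered"        # discovered -> deploying -> ready | failed
    errors: list = field(default_factory=list)

inventory: dict[str, Node] = {}

def on_discovery(mac: str) -> Node:
    """Register a node that announced itself (e.g. via PXE/DHCP)
    on the management network."""
    return inventory.setdefault(mac, Node(mac=mac))

def deploy(node: Node) -> None:
    """Apply the deployment schema; failures are recorded, not swallowed,
    so they can be surfaced to the DC operator."""
    node.state = "deploying"
    try:
        # ... image the node, configure networking, join the cluster ...
        node.state = "ready"
    except Exception as exc:         # a non-positive result of deployment
        node.state = "failed"
        node.errors.append(str(exc))

deploy(on_discovery("52:54:00:ab:cd:ef"))
print(inventory)
```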
Nature of applications and services offered by the DC
Although both this point and the next seem trivial compared to real problem magnets like network topology, adopting an SDN solution, or, better yet, consolidating different SDN solutions, this has become a major issue in public clouds (and presumably also in private ones, where such issues usually do not materialize as a series of scientific papers). Like almost all network-related considerations (except perhaps SDN), this one is quantity-dependent.
In-DC traffic amount between racks
The bigger the amount of data flowing between hardware devices, the bigger a problem it tends to be. This traffic (including in-DC traffic between silos, if a larger DC is under consideration) is what measures the efficiency of the service system (Warren). It is a two-fold problem: first the traffic generated by the clients, and second the traffic generated by Warren as a management system. Warren's goal is to reallocate resources to minimize in-DC traffic, and in rare cases doing so can destabilize the network flow for a short period of time. Management flow must always take precedence when client flow is causing problems, even if that decreases client throughput further, because its purpose is to restore the previous state, or at least to maximize efficiency with the currently limited amount of available resources.
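As an illustration of the kind of reallocation meant here, the following sketch co-locates the VM pairs with the heaviest mutual traffic onto the same rack. The traffic figures, rack capacities and the deliberately naive greedy strategy are invented for illustration, not Warren's actual scheduler:

```python
# Naive greedy co-location of chatty VM pairs to reduce inter-rack traffic.
# All traffic figures and rack capacities are invented for illustration.

# (vm_a, vm_b) -> observed traffic between them, in GB/day
traffic = {("db", "web"): 120, ("web", "cache"): 90, ("db", "batch"): 5}

rack_of = {"db": "rack1", "web": "rack2", "cache": "rack2", "batch": "rack1"}
capacity = {"rack1": 3, "rack2": 3}      # max VMs per rack
load = {r: sum(1 for v in rack_of.values() if v == r) for r in capacity}

def inter_rack(traffic, rack_of):
    """Total traffic that has to cross rack boundaries."""
    return sum(t for (a, b), t in traffic.items() if rack_of[a] != rack_of[b])

# Visit the heaviest flows first and pull one endpoint over if space allows.
for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
    if rack_of[a] != rack_of[b] and load[rack_of[a]] < capacity[rack_of[a]]:
        load[rack_of[b]] -= 1
        rack_of[b] = rack_of[a]
        load[rack_of[a]] += 1

print(inter_rack(traffic, rack_of), "GB/day crosses racks after placement")
```

A real scheduler would weigh such moves against the destabilization risk noted above, since every migration itself generates management traffic.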
Existing SDN solution
In general, all SDN systems are based on the same principles and are, in major part, derived from two prevalent SDN frameworks. There are several types of protocols for network device configuration, among which OpenFlow is still the dominant one. Almost all needed routing protocols are also supported by all major SDN solutions.
To conclude the above, no drastic problems should arise at the connectivity level (which doesn't mean it's a trivial task!). However, there is an exception to that hypothetical balance: the security domain. All SDN systems implement one or more security domains, whether client-level or system-wide. Configuring two or more SDN systems to cooperate simultaneously in that domain might be more time-consuming than configuring the whole system to adopt a new one.
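To illustrate why the security domain is the hard part, the sketch below shows how rules from two SDN systems can silently contradict each other when run side by side. The rule formats are entirely hypothetical, not taken from any real SDN product:

```python
# Hypothetical illustration: two SDN systems export security rules in
# their own formats; normalizing them exposes direct contradictions.

sdn_a_rules = [{"src": "10.0.1.0/24", "dst": "10.0.2.0/24", "action": "allow"}]
sdn_b_rules = [{"match": "10.0.1.0/24->10.0.2.0/24", "verdict": "drop"}]

def normalize_b(rule):
    """Map system B's rule shape onto system A's vocabulary."""
    src, dst = rule["match"].split("->")
    return {"src": src, "dst": dst,
            "action": "allow" if rule["verdict"] == "accept" else "deny"}

merged = sdn_a_rules + [normalize_b(r) for r in sdn_b_rules]
for src, dst in {(r["src"], r["dst"]) for r in merged}:
    actions = {r["action"] for r in merged if (r["src"], r["dst"]) == (src, dst)}
    if len(actions) > 1:
        print(f"conflict on {src} -> {dst}: {sorted(actions)}")
```

Multiply this by client-level vs system-wide scopes and by differing rule semantics, and replacing one system can indeed be cheaper than reconciling two.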
Considerations in the Storage Domain
Warren's storage domain consists of three options:
Distributed storage
Shared storage
Local storage
To determine the right solution, one must consider several factors required to implement a particular storage type. As storage holds the most valuable asset, client data, it has the biggest impact on reliability and QoS. After all, a network outage only affects the availability of data, whereas storage problems may lead to permanent data loss.
Distributed - Expensive, but reliable, multi-functional, and scalable
The cost of distributed storage (which may also be shared-distributed) comes from the fact that it is usually (not always; one exception is HCI) implemented as one or more separate clusters. So there are three main types of costs and an additional, optional one:
Upfront cost - the devices themselves, including a dedicated storage network (fixed cost)
Repair/management costs + cost of space (fixed over a long period of time)
Energy cost (usually fixed over a long period of time with its seasonal spikes)
Optional license cost when a commercial distributed storage system is applied
When summed up and divided into monthly payments over a period equal to the server units' service life, by far the highest is the energy cost. So although it may seem wiser to reuse old server hardware for storage clusters, it is actually not. Much wiser is to buy new, specially configured, low-power hardware that may even come with a distributed storage system installed and configured. Such specially configured devices offer another benefit: fast cluster extendability.
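A back-of-the-envelope version of that comparison, with every figure an invented placeholder to show the shape of the calculation rather than real prices:

```python
# Back-of-the-envelope monthly TCO per storage cluster.
# Every number below is an invented placeholder, not a real quote.

SERVICE_LIFE_MONTHS = 60                       # ~5 years

def monthly_tco(upfront, repair_per_month, watts,
                license_per_month=0, eur_per_kwh=0.25):
    energy = watts / 1000 * 24 * 30 * eur_per_kwh   # kWh per month * price
    return (upfront / SERVICE_LIFE_MONTHS + repair_per_month
            + energy + license_per_month)

old_servers = monthly_tco(upfront=2_000, repair_per_month=150, watts=3_000)
new_low_power = monthly_tco(upfront=15_000, repair_per_month=50, watts=800)

print(f"repurposed old servers: {old_servers:7.2f} EUR/month")
print(f"new low-power hardware: {new_low_power:7.2f} EUR/month")
```

With these assumptions the energy line dominates the repurposed-hardware column, which is exactly the argument for buying new, low-power, pre-configured devices.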
A typical distributed storage solution implements both object and block storage, which gives the opportunity (when these are implemented as separate clusters) to use the object storage also as a base for backup or disaster recovery, in addition to its main purpose.
The reliability advantage compared to shared or local storage should be obvious.
Shared - cheaper, faster, half-baked reliability
If the infrastructure includes a direct-attached storage unit used as a shared storage solution, there is a high chance that the vendor has bundled software that operates the device. There may even be a distributed solution working inside this unit, but it must be kept in mind that this kind of storage is distributed within the device itself. If the storage device fails, all the data becomes unreachable: the data protection works only at the disk level.
To raise the protected sphere to the rack level, several such storage units must be placed in one rack. Now, if the infrastructure contains more than two racks (which should be the normal case for a DC), why not separate the storage units from the compute units to form an autonomous distributed storage cluster? One answer might be performance. As off-the-shelf storage units usually include “real RAID” controllers (with a dedicated CPU and cache) and their connection to compute units is direct (not over the network), the performance may be significantly higher than distributed storage could offer.
Local - cheapest, but not necessarily fastest
Nowadays, the cost per TB of a single disk is very low compared to the same capacity implemented in the form of an advanced storage device. However fast a single disk might be, it cannot compare with a direct-attached, performance-tuned shared storage system.
Arguments that local storage is less expensive in network resources than the two other options above are not exactly correct if you value the data on those disks. To be prepared for hardware failures, one has to back up the data constantly, and a backup is meaningless if not kept outside the machine/cluster. This does not mean that local disks placed in servers cannot be used: plenty of workloads need caching or swapping, and local storage is a perfect fit for such needs.