|
Necessary information from the Data Centre (DC) can be gathered with a set of simple, unambiguous questions divided into three subsets, each based on the goal it is meant to achieve. Once all subset questions are answered, ambiguity and misunderstanding over the technical and determining factors are resolved. This technical clarity is the fundamental premise for laying the groundwork for successful cooperation.
What are the current infrastructure and software components in use, and what role do they play in plans for the future?
What services is the DC offering, and what expectations does it have of Warren?
What is the DC's level of commitment to cooperation with Warren, and which third-party software systems will run alongside Warren?
Answers to the above questions are vital because they provide information that affects the following areas of Warren's development:
There are two fundamental trade-offs that a DC needs to decide on:
To analyze the possible trade-off decisions for a DC and to make the input directly usable in Warren's development process, the hypothetical DC stack is divided into the following functional domains: hardware, firmware, software. All three domains have common properties that correlate to Warren's software components. The network and storage stacks that the DC has previously adopted have the biggest influence on future development and technical on-boarding.
Network and Storage are tightly coupled, as decisions in one domain are influenced by the properties of the other. Once the connection between the Network and Storage domains is analysed in full, decision-making on the following two trade-offs becomes possible.
The biggest trade-off decision is in multi-location computing (distributed cache and storage are simultaneously good and evil).
This can mostly be described as:
The general tendency is towards the concept of a "solely software-defined DC", largely because of the automation and management benefits it offers. An exception to this tendency is the popularity of bare-metal provisioning, as demand for direct control over hardware still exists. Some types of applications require it, both for speed and for independence from general software-level system failures.
The DC system administrator's role depends on the size of the DC, the nature and complexity of the infrastructure, regional peculiarities, job descriptions, and other factors. In cooperation between the DC and Warren, a strict distinction can be made between the DC system administrator role and the Warren system support role. It is important to note that Warren's system support role differs from third-party software support. Where exactly the border lies needs to be discussed, depending on the DC and the installation requirements.
Please pay particular attention to the following:
There are several factors in the DC's network setup that dictate what needs to be thought through in Warren's application development process. Such factors include:
The DC's network topology type defines the traffic between components, servers and racks, as well as the traffic between the DC and the internet. The topology also defines the physical extendability properties of the DC, so we need to consider:
How will the automated hardware discovery process be handled?
What deployment scheme should be used when adding new nodes?
Which system components are involved in discovery and deployment processes?
How will failures or partial results of the discovery and deployment processes be handled? (See the sketch after these questions.)
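As an illustration only, the sketch below shows one hedged way such discovery and deployment outcomes could be modelled and handled. The node states, the retry limit, and the function names are hypothetical assumptions, not part of Warren's actual design.

```python
# Hypothetical sketch: handling discovery/deployment outcomes for new nodes.
# States, names and the retry policy are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class NodeState(Enum):
    DISCOVERED = "discovered"
    DEPLOYING = "deploying"
    ACTIVE = "active"
    FAILED = "failed"            # non-positive result, needs operator attention
    QUARANTINED = "quarantined"  # repeated failures, excluded from scheduling


@dataclass
class Node:
    serial: str
    state: NodeState = NodeState.DISCOVERED
    attempts: int = 0


MAX_ATTEMPTS = 3  # assumed retry limit before quarantining a node


def handle_deploy_result(node: Node, success: bool) -> Node:
    """Move a node to ACTIVE on success; retry or quarantine it on failure."""
    if success:
        node.state = NodeState.ACTIVE
        return node
    node.attempts += 1
    node.state = NodeState.FAILED if node.attempts < MAX_ATTEMPTS else NodeState.QUARANTINED
    return node
```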
Topology types: tree, Clos, fat-tree, torus, etc.
We are obviously not able to fine-tune Warren's setup for every type of topology. Topology is not a standalone factor, and the number of variables involved in pre-analysis makes it too costly compared to the business value of the expected outcome. However, we can target the discovery and deployment solutions most widely used by DCs with a sufficient degree of quality. Service reliability and availability metrics cannot be calculated theoretically for a platform that is under continuous heavy development; instead, these metrics will be defined during the DC adoption and on-boarding process.
The presumption is that the most widely used topologies in Warren's target DC client group are fat-tree and various forms of Clos. Based on that, most optimizations are made for these topology types.
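For reference, the standard k-ary fat-tree arithmetic below shows why this topology scales so predictably. The function is a plain textbook calculation, not part of Warren's codebase.

```python
# Standard k-ary fat-tree arithmetic (textbook formulae, not Warren-specific):
# k pods, each with k/2 edge and k/2 aggregation switches, (k/2)^2 core
# switches, supporting k^3 / 4 hosts in total.
def fat_tree_capacity(k: int) -> dict:
    assert k % 2 == 0, "port count k must be even"
    return {
        "pods": k,
        "edge_switches": k * (k // 2),
        "aggregation_switches": k * (k // 2),
        "core_switches": (k // 2) ** 2,
        "hosts": k ** 3 // 4,
    }


# Example: 48-port switches give a fat tree of 27,648 hosts.
print(fat_tree_capacity(48))
```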
The services offered by the DC (with emphasis on RAM vs CPU vs local or distributed storage), and consequently the end-user application resource needs, have to be discussed and understood. Like the network-related questions, this point depends on quantity and volume, and it deserves careful consideration and planning so that suggestions on hardware needs can be made (a minimal sizing sketch follows).
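As a hedged illustration of that planning exercise, the sketch below estimates how many compute nodes a given aggregate demand would require. The node specification and the overcommit ratios are placeholder assumptions, not recommendations.

```python
# Hypothetical capacity-planning sketch: all figures below are placeholder
# assumptions for illustration, not Warren's actual sizing rules.
import math

NODE_SPEC = {"vcpus": 64, "ram_gb": 512}  # assumed node model
CPU_OVERCOMMIT = 4.0   # assumed vCPU overcommit ratio
RAM_OVERCOMMIT = 1.0   # RAM is typically not overcommitted


def nodes_needed(demand_vcpus: int, demand_ram_gb: int) -> int:
    """Return the node count that satisfies both the CPU and the RAM demand."""
    by_cpu = demand_vcpus / (NODE_SPEC["vcpus"] * CPU_OVERCOMMIT)
    by_ram = demand_ram_gb / (NODE_SPEC["ram_gb"] * RAM_OVERCOMMIT)
    return math.ceil(max(by_cpu, by_ram))


# Example: 2,000 vCPUs and 12 TB of RAM -> RAM, not CPU, dictates the node count.
print(nodes_needed(2000, 12_000))
```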
Data traffic volumes between hardware devices are measured to determine the efficiency of Warren's service system. For larger DC installations, traffic between silos is measured as well. The larger the data flow between hardware devices, the bigger the problem it creates. The problem is two-fold: first, the traffic generated by applications and end users, and second, the traffic generated by Warren as a management system.
Warren's goal is to reallocate resources so that internal DC traffic is minimised; in rare cases this can destabilise the network flow for a short period of time. Where client-application data flow is causing problems, system-management data flow must always take priority, even if client throughput decreases further as a result. The purpose is to restore the system's previous state or to maximise the efficiency of the limited resources available at a given point in time (a simplified reallocation sketch follows).
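The sketch below gives one hedged, simplified picture of that reallocation idea: given measured traffic between workload pairs and their current host placement, it reports the cross-host traffic that a migration should try to reduce. The data structures are illustrative assumptions, not Warren's internal model.

```python
# Illustrative sketch only: measuring the cross-host traffic that resource
# reallocation should try to minimise. Structures are assumptions, not
# Warren's internal model.
from collections import defaultdict

# Measured traffic (GB/day) between workload pairs, and their current hosts.
traffic = {("vm-a", "vm-b"): 120, ("vm-a", "vm-c"): 5, ("vm-b", "vm-d"): 40}
placement = {"vm-a": "host-1", "vm-b": "host-2", "vm-c": "host-1", "vm-d": "host-2"}


def cross_host_traffic(traffic, placement):
    """Sum the traffic that crosses host boundaries, the quantity to minimise."""
    total = 0
    per_link = defaultdict(int)
    for (src, dst), volume in traffic.items():
        if placement[src] != placement[dst]:
            total += volume
            per_link[(placement[src], placement[dst])] += volume
    return total, dict(per_link)


# Moving vm-b next to vm-a would remove the largest cross-host flow (120 GB/day).
print(cross_host_traffic(traffic, placement))
```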
In general, all SDN systems are based on the same principles and are, for the most part, derived from two prevalent SDN frameworks. There are several protocols for network device configuration (OpenFlow is still the dominant one), and almost all required routing protocols are supported by all major SDN solutions. This means that any pre-existing SDN solution the DC might already use should not cause drastic adoption or installation problems at the connectivity level, which does not mean integration is a trivial task.
However, there is an exception to that balance: the security domain. All SDN systems implement one or more security domains, whether at the client level or system-wide. Configuring two or more SDN systems to cooperate on that domain simultaneously might be more time-consuming than configuring the whole system to adopt a new one.
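As a neutral illustration of why connectivity-level integration is usually manageable, the sketch below models an OpenFlow-style match/action rule as plain data. It does not call any real controller API, and the field names are generic assumptions; reconciling many overlapping tenant or security rules across two controllers is the hard part described above.

```python
# Vendor-neutral sketch of an OpenFlow-style match/action rule. This is plain
# data for illustration; it does not use any real SDN controller API.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FlowRule:
    priority: int          # higher value wins on overlapping matches
    match: Dict[str, str]  # header fields to match, e.g. VLAN ID, destination IP
    actions: List[str]     # what to do with matching packets


# Example: a tenant-isolation (security-domain style) rule and a plain
# forwarding rule.
rules = [
    FlowRule(priority=200, match={"vlan_id": "1042", "ip_dst": "10.40.0.0/16"},
             actions=["drop"]),
    FlowRule(priority=100, match={"ip_dst": "10.20.1.5"},
             actions=["output:port-7"]),
]
```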
Warren's storage domain consists of three options:
Distributed storage
Shared storage
Local storage
To determine the most desirable solution, the DC needs to consider the factors required to implement each particular storage type. Storage holds the most valuable information, the client application data, so its impact on reliability and Quality of Service is immense. By comparison, a network outage only affects the availability of data, whereas storage problems can lead to permanent data loss.
Pros: reliable, multi-functional, scalable
Cons: expensive
The cost of distributed storage (which may also be shared-distributed) comes from the fact that it is usually (though not always; one exception is HCI) implemented as one or more separate clusters. There are three main types of cost and one additional, optional one:
Upfront cost: the devices themselves, including a dedicated storage network (fixed cost)
Repair/management costs plus the cost of space (fixed over a long period of time)
Energy cost (usually fixed over a long period of time, with seasonal spikes)
Optional license cost when a commercial distributed storage system is used
When these costs are summed up and divided into monthly payments over a period equal to the server units' service life, the energy cost is by far the highest (see the amortisation sketch below). Therefore, although it may seem wise to reuse old server hardware for storage clusters, it is actually not. It is much wiser to buy new, specially configured, low-power hardware that may even come with a distributed storage system pre-installed and pre-configured. Such specially configured devices offer another benefit: fast cluster extendability.
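A rough, hedged amortisation sketch of that claim is shown below. Every figure is a hypothetical placeholder chosen only to illustrate how the monthly cost shares compare, not measured data.

```python
# Hypothetical amortisation sketch comparing reused old hardware with new
# low-power hardware for a storage cluster. Every figure is a placeholder
# assumption for illustration, not measured data.
SERVICE_LIFE_MONTHS = 60      # assumed 5-year service life
ENERGY_PRICE_PER_KWH = 0.25   # assumed electricity price
NODES = 12                    # assumed cluster size


def monthly_cost(upfront, repair_per_month, watts_per_node, license_fee=0):
    """Amortise one-off costs over the service life and add recurring costs."""
    energy = NODES * (watts_per_node / 1000) * 24 * 30 * ENERGY_PRICE_PER_KWH
    return (upfront + license_fee) / SERVICE_LIFE_MONTHS + repair_per_month + energy


# Reused old servers: almost no upfront cost, but power-hungry (~500 W/node);
# here the energy share dominates, which is what motivates buying new nodes.
old = monthly_cost(upfront=5_000, repair_per_month=300, watts_per_node=500)
# New low-power storage nodes (~120 W/node) with a higher purchase price.
new = monthly_cost(upfront=45_000, repair_per_month=150, watts_per_node=120)

print(f"old hardware: {old:.0f}/month, new hardware: {new:.0f}/month")
```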
A typical distributed storage solution implements both object and block storage, which (when they are implemented as separate clusters) makes it possible to use object storage not only for its main purpose but also as a base for backup or disaster recovery.
The reliability advantage compared to shared or local storage should be obvious.
Pros: cheaper, faster
Cons: half-baked reliability
If the infrastructure includes a direct-attached storage unit used as a shared storage solution, there is a high chance that the vendor has bundled software that operates the device. There may even be a distributed solution running inside this unit, but it must be kept in mind that such storage is distributed only within the device itself. If the storage device fails, all the data is still unreachable: the data protection works at the disk level, not at the device level.
To raise the protection domain to the rack level, several such storage units must be placed in one rack. If the infrastructure contains more than two racks (which should be the normal case for a DC), why not separate these units from the compute units to form an autonomous distributed storage cluster? One answer might be performance: since off-the-shelf storage units usually include "real RAID" controllers (with a dedicated CPU and cache) and the connection to the compute units is direct (not over the network), the performance may be significantly higher than what distributed storage could offer.
Pros: cheapest
Cons: not fastest
Nowadays, the cost per TB of a single disk is very low compared to the same capacity implemented in an advanced storage device. However fast a single disk might be, it cannot compare with a direct-attached, performance-tuned shared storage system.
The argument that local storage is less expensive in terms of network resources than the two other options above is not exactly correct if you value the data on those disks. To be prepared for hardware failures, the data has to be backed up constantly, and that is meaningless unless it is done outside the machine/cluster. This does not mean that local disks placed in servers cannot be used: many workloads need caching or swapping, and local storage is a perfect fit for such needs (a minimal placement sketch follows).
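The sketch below illustrates that division of labour in a hedged way: ephemeral data (cache, swap, scratch) may live on local disks, while anything that must survive a host failure goes to distributed or shared storage. The volume roles and the function are illustrative assumptions only, not Warren's API.

```python
# Illustrative placement policy only: ephemeral data may live on local disks,
# durable data must not. Roles and names are assumptions, not Warren's API.
EPHEMERAL_ROLES = {"cache", "swap", "scratch", "tmp"}


def pick_backend(role: str, has_distributed: bool) -> str:
    """Return a storage backend for a volume role under the stated assumption."""
    if role in EPHEMERAL_ROLES:
        return "local"  # loss on host failure is acceptable for these roles
    return "distributed" if has_distributed else "shared"


# Example: client database volumes never land on unprotected local disks.
print(pick_backend("cache", has_distributed=True))      # -> local
print(pick_backend("database", has_distributed=True))   # -> distributed
```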