...
- Network topology (tree, Clos, fat-tree, torus, etc.)
This aspect constrains network traffic between components, servers, and racks, and between the DC and the internet. It also determines the DC's physical extensibility, so we need to think through how automated discovery and deployment of new nodes will be handled, which components are involved in that process, and how failures in it are dealt with.
Obviously, we cannot fine-tune our setup for every topology type: topology is not a standalone factor, so the set of variables in such an analysis is large and the cost is too high compared to the business value of the outcome. But we can target a solution that covers the topologies most commonly used in the target DCs with a sufficient degree of quality (service reliability and availability metrics cannot be calculated purely theoretically for a platform under heavy development; they will rather be deduced during the DC adoption process). The current assumption is that the most widely used topologies in the probable target DC group are fat-tree and various forms of Clos, and we design based on that.
- Application/Service types
Although both this and the next point may seem trivial compared to real problem magnets like network topology, adopting an SDN solution, or, better yet, consolidating different SDN solutions, this has become a major issue in public clouds (and presumably also in private ones, where such issues usually do not materialize as a series of scientific papers). Like almost all network-related considerations (except perhaps SDN), this one is quantity-dependent: the larger the data flows between hardware devices, the bigger a problem this tends to be.
- In-DC traffic amount between racks
- Existing SDN solution
...