Many companies today decide to build their own data centers, but doing it well takes guidance. The process involves several steps, from the building shell to the hardware inside, each of which deserves careful study. This article offers practical tips for building a data center.
Focus on Infrastructure
Building a data center is a major undertaking, and each design decision commits you early. First of all, the building itself should meet the conditions needed to achieve the best energy parameters.
On the one hand, the data center can be built from scratch. In that case it can more easily achieve higher certification levels and good energy parameters. It should sit in a seismically stable region, outside of flood zones. High-quality access to electrical power and internet connectivity also helps, and two independent fiber-optic routes should already be in place, since laying a second fiber later requires considerably more construction time and budget.
On the other hand, it can be a renovation, where the center is integrated into an existing building. This approach allows costs to be reduced significantly, but it is essential to analyze the current state of the building. The floors must be structurally sound, with a minimum load-bearing capacity of 200 kg per square meter. The roof must also be strong enough, since it will carry some of the equipment and other components.
As we already know, a reliable power supply is the foundation of any data center, so it needs to be fed by two independent power paths. Small businesses can get by with a single substation, but each machine must still be powered by two independent paths. Data centers that plan to draw more than 2 megawatts can opt for battery-free power solutions such as dynamic or rotary UPS units, which combine a transformer, an inverter and a motor-generator.
It is well worth investing in a high-quality UPS up front, as it avoids high maintenance costs later.
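To make the dual-path requirement concrete, here is a minimal sketch of a 2N redundancy check. The capacity figures, headroom margin and function names are illustrative assumptions, not from the article:

```python
# Illustrative sketch: with 2N power, each machine is dual-fed, so either
# path alone must be able to carry the full IT load if the other fails.

def path_can_carry_load(path_capacity_kw, it_load_kw, headroom=0.2):
    """True if a single path covers the load plus a 20% safety headroom."""
    return path_capacity_kw >= it_load_kw * (1 + headroom)

def is_2n_redundant(path_a_kw, path_b_kw, it_load_kw):
    """Both independent paths must each survive a full failover on their own."""
    return (path_can_carry_load(path_a_kw, it_load_kw)
            and path_can_carry_load(path_b_kw, it_load_kw))

print(is_2n_redundant(600, 600, 450))  # True: either path covers 450 kW * 1.2
print(is_2n_redundant(500, 500, 450))  # False: 450 * 1.2 = 540 kW > 500 kW
```

The same check generalizes to N+1 designs by summing surviving capacity instead of testing each path alone.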
Cooling, a necessary part of any data center, comes down to managing cold and hot aisles. The preferred method today is free cooling, which works best in a cold climate: cold outside air cools the equipment while saving electrical energy.
There is also conventional direct-expansion (DX) air conditioning. DX units are cheaper to buy but more expensive to run in the long term, and they are best suited to dissipating small heat loads.
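The cooling overhead of either approach is commonly quantified by Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. A short sketch, with purely hypothetical figures rather than measured ones:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 would mean zero overhead for cooling, lighting and power losses."""
    return total_facility_kw / it_load_kw

# Hypothetical comparison of a free-cooling site and a DX-cooled site:
print(round(pue(1150, 1000), 2))  # 1.15 -- free cooling, low overhead
print(round(pue(1600, 1000), 2))  # 1.6  -- DX cooling, higher overhead
```

A lower PUE means more of the electricity bill goes into the servers themselves rather than into the cooling plant.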
However robust the infrastructure, the data must always be protected from fire and unauthorized access. Even the smallest server rooms should have at least a fire alarm and extinguishers rated for electrical equipment. Automatic fire and heat detection systems are standard, but there are also cheaper suppression alternatives such as demineralized water or sodium-based agents.
In addition, surveillance cameras, coded door locks and security guards are very important, as is monitoring of temperature and cooling.
What about the hardware?
Having defined the basic criteria of the data center infrastructure, we now move on to the hardware. Given how rapidly technology is evolving, modern applications place heavy demands on the network.
The choice of the network
Here there are two choices: the traditional three-tier approach (access, aggregation and core) or the newer leaf-spine topology. For many data centers the three-tier architecture is entirely sufficient, but because it relies on the Spanning Tree Protocol (STP), it is now considered heavyweight and not very efficient. The leaf-spine design instead connects backbone (spine) and endpoint (leaf) switches in a full mesh topology, providing higher total throughput and lower latency than the three-tier architecture at a comparable cost.
The leaf-spine architecture is therefore the most commonly recommended today. Since every important component in it is duplicated, a single network failure will not make an endpoint inaccessible.
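The full-mesh property described above is easy to see in a short sketch; the switch names and counts are illustrative only:

```python
from itertools import product

def leaf_spine_links(leaves, spines):
    """In a leaf-spine fabric, every leaf switch links to every spine switch."""
    return [(f"leaf{l}", f"spine{s}")
            for l, s in product(range(leaves), range(spines))]

links = leaf_spine_links(4, 2)
print(len(links))  # 4 leaves x 2 spines = 8 links

# Any leaf-to-leaf path crosses exactly one spine, so latency is uniform,
# and losing one spine still leaves every endpoint reachable via the other.
```

This is why adding capacity is simple in leaf-spine: a new spine adds uplink bandwidth to every leaf at once.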
When building a data center, it is essential to know which servers to adopt. Servers differ in shape and size, namely tower, rack and blade. The tower and rack formats are fully self-contained servers, while blade servers require a special chassis that provides shared power, cooling, connectivity and management to all of them.
With blade servers, the entire chassis can fail, which then requires a complete replacement chassis; this represents a higher cost than a rack or tower server. Blade servers are therefore better suited to large deployments, where they offer cheaper maintenance and easier management.
There is also another interesting trend on the Czech market. Equipment manufacturers have developed high-density servers with efficient power supplies and a small failure domain. They come in compact chassis holding two to eight servers with shared power and management.
Keep in mind that higher server density means a higher concentration of waste heat, which in turn requires a more powerful cooling solution.
Data storage in a data center can be handled directly by local disks in the servers or via a more complex structure of disk arrays.
DAS: Direct Attached Storage
With DAS, all disks are connected directly to a server. The data is made available through special software running on the hosting server, and clients connect to that host. If the host server fails, the data becomes inaccessible.
NAS: Network Attached Storage
A NAS is storage connected to the network with file-level access, which allows simultaneous access by multiple clients. In terms of speed, reliability and sophistication, NAS is now comparable to a SAN solution.
SAN: Storage Area Network
A SAN, or storage area network, is a dedicated high-speed network for data transfer between servers and block storage (disk arrays). The most common protocols are iSCSI and Fibre Channel. SANs are not only reliable and fast but also deliver very high performance; however, they are also the most expensive and the most difficult to set up.
Although DAS remains the most common form of storage, a data center can also use the other two approaches. The fastest, thanks to its virtualization capabilities, is the SAN.
How the data center works
A data center is not complete without virtualization, the cloud and the mantra of high availability. Bear in mind that the disk array still represents a significant cost.
The advantages of virtualization
It is easier to see the benefits of virtualization by looking at the problems of traditional physical servers. These fail at the server level, and the problem often cannot be solved without a technician, who may be far from the center. Repairs are therefore delayed for a long time, especially if backups need to be restored onto different hardware.
Server virtualization offers a simpler solution: it separates the operating system from the physical hardware. In a virtual machine, the hardware presented to the system is almost completely independent of the physical machine, so a backup can be restored on another host into an identical environment.
In addition, virtualization eases hardware upgrades and extends the life of the operating system beyond that of the hardware it was originally installed on.
How to choose a hypervisor
Once you use virtualization, you still need to choose the right hypervisor. There are two types of full virtualization, which differ in where the hypervisor runs. The first is the bare-metal hypervisor, installed in place of an operating system and replacing it entirely. One of the best known of this model is VMware ESXi. Its small size and simplicity are real advantages: the server concentrates on a single task and does it well, can run for long periods without restarting, and updates are infrequent. The main drawback is its limited hardware support.
The second type, the hosted hypervisor, runs inside a normal operating system. Oracle VirtualBox and VMware Workstation are among the best known. With these, you can use any hardware that can run the chosen operating system.
The benefits of the cloud
Even though the cloud and virtualization are different things, they are closely related: virtualization provides the computing resources, while the cloud lets the end user easily consume those resources as a service.
The end user of a cloud does not rent a specific server but resources in general: usually CPU performance, RAM, storage space and network access. This model suits companies with several organizational units. Virtualization then comes into play by isolating these units from one another through suitable software, while a central IT division oversees the whole, making sure no part of the infrastructure is overloaded and resources go where they are needed. This arrangement is called a private cloud.
The public cloud works in a similar way, only the tenants are different companies rather than divisions of a single one. The central IT division is replaced by the cloud service provider, which supervises the use of the infrastructure; that metered usage then forms the basis for calculating the monthly cloud rental fee.
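The metered billing described above can be sketched very simply; the resource names, rates and usage figures are all invented for illustration and do not reflect any real provider's pricing:

```python
def monthly_fee(usage, rates):
    """Sum metered resource usage times per-unit rates (all values invented)."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical per-unit rates and one month of metered consumption:
rates = {"cpu_hours": 0.25, "ram_gb_hours": 0.125, "storage_gb": 0.5}
usage = {"cpu_hours": 2000, "ram_gb_hours": 800, "storage_gb": 100}

print(monthly_fee(usage, rates))  # 500 + 100 + 50 = 650.0
```

The key point is that the customer pays for aggregate resource consumption, not for any particular physical server.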
In general, even across multiple platforms, migrating virtual servers between private and public clouds is fairly straightforward. There are open standards available, such as OVA and OVF, as well as conversion tools, which make moving easier.
Running applications in the cloud
For data centers built from scratch, the choice of platform and operating system is wide open. For data centers with existing infrastructure, it must be planned in advance.
Modern operating systems usually ship with support for virtual hardware. This makes it possible to move a physical server into a virtual environment as-is, or after installing a few drivers. Special converters, a simple copy, or a backup restore are sufficient for this.
When to avoid migration to the cloud?
In some cases, it is better not to migrate to the cloud at all, because migration would be financially unviable or outright impossible. This applies mainly to applications that demand very large amounts of resources, to applications that require extremely low network latency, and finally to systems tied to specific physical hardware, such as a USB key or a smart-card reader, which are much better off without migration.
Focus on technology experts
No matter how well everything is planned, building a data center will fail without technology experts, whether seasoned or new. They will be of great help in bringing the project to life.
Source: masterdc