19.05.2020

Organizing the process of building a data center: practical notes. The key to a successful data center is competent design


In the modern sense, a data center, or data processing center (DPC), is a comprehensive organizational and technical solution designed to create a high-performance and fault-tolerant information infrastructure. In a narrower sense, a data center is a room designed to house equipment for processing and storing data and to provide connections to fast communication channels. To understand the concept of the data center more fully, let us start with the history of its emergence.

In a sense, the computing centers familiar to many from the ES EVM mainframes, which became widespread in our country 30 years ago, are the progenitors of modern data centers. What the old computing centers and today's data centers have in common is the idea of resource consolidation. At the same time, computing centers had rather complex subsystems for providing the environment required by the computing equipment - cooling, power supply, security and so on - many of which are also used in modern data centers.

With the spread of the PC in the mid-1980s, there was a tendency to disperse computing resources: desktop computers did not require special conditions, so less and less attention was paid to providing a special environment for computing. However, with the development of the client-server architecture in the late 1990s, it became necessary to install servers in special rooms - server rooms. It often happened that the servers were placed on the premises of the old computing center. Around this time the term "data center" was coined, applied to specially designed computer rooms.

The heyday of data centers came during the dot-com boom. Companies that needed fast Internet access and business continuity began to design special premises providing increased security for processing and transmitting data - Internet Data Centers. Since all modern data centers provide access to the Internet, the first word in the name was eventually dropped. Over time, a separate discipline emerged that deals with optimizing the construction and operation of data centers.

In the early 21st century, many large companies both abroad and in our country came to the need to implement a data center: for some, business continuity became paramount; for others, data center solutions turned out to be very effective thanks to savings in operating costs. Many large companies have found that a centralized computing model provides the best TCO.

Over the past decade, many large IT companies have built up entire networks of data centers. For example, the oldest global operator, Cable & Wireless, bought the American company Digital Island, owner of 40 data centers around the world, in 2002, and the European operator Interoute acquired the operator and hosting provider PSINet in 2005, connecting 24 data centers to its pan-European network.

The practice of applying risk-based approaches to business stimulates the use of data centers. Companies have begun to realize that for many businesses, the investment in keeping mission-critical IT systems up and running is far smaller than the potential losses from data lost due to a failure. The spread of data centers is also facilitated by laws requiring mandatory redundancy of IT systems, the emergence of recommendations on using an IT infrastructure outsourcing model, and the need to protect businesses from natural and man-made disasters.

Individual data centers have begun to occupy ever larger areas. For example, it was recently reported that Google intends to build a large data center in Iowa with an area of 22.3 hectares, spending 600 million dollars on it; the facility is expected to start operating in the spring of 2009.

In Russia, the construction of data centers (in the modern sense of the term) began at the turn of the century. One of the first large Russian data centers was the Sberbank Center. Today, many commercial organizations (primarily financial institutions and major carriers) have their own data centers.

At the same time, major Russian Internet companies already have several data centers each. For example, in September of this year it was announced that Yandex had opened a new (fourth) data center for 3,000 servers (occupied area - 2,000 sq.m, supplied power - 2 MW). The new complex is equipped with precision cooling systems capable of removing up to 10 kW per rack, uninterruptible power supplies and diesel generators. The data center is connected to Yandex's Moscow optical ring, which links its other data centers and offices, as well as to M9 and M10, the traditional traffic exchange points with providers.

At the same time, the Russian operator Synterra announced the start of one of the largest projects (by European as well as Russian standards): the construction of a national network of its own data centers. The project was named "40x40". By creating large data centers at the nodes of its broadband network in most regions of Russia, the operator intends to turn them into points of customer localization and sales of its entire range of services.

By mid-2009, the newly created data centers are to open in 44 regional capitals of the Federation. The first will be Moscow, St. Petersburg, Kazan, Samara and Chelyabinsk. The operator plans to put the first 20 sites into operation by the end of 2008 and the rest by mid-2009. The project integrators are Croc, Technoserv A/S and Integrated Service Group (ISG).

The area of each data center, depending on the needs of the region, will vary from 500 to 1000 sq.m of raised floor and accommodate 200-300 technology racks. Each data center is to be connected to two rings of the network with a total channel bandwidth of 4x10 Gbit/s, which will give customers a high level of redundancy and service availability.

The 40x40 project is aimed at a wide range of clients who need to outsource IT infrastructure throughout the country - telecom operators, "network" corporate clients, content and application developers, IP-TV operators and television companies, as well as government agencies responsible for the implementation of national ICT programs.

In our country, not only commercial but also government structures, such as the Ministry of Internal Affairs, the Ministry of Emergency Situations and the Federal Tax Service, have their own data centers.

According to IDC, the number of data centers in the US will reach 7,000 by 2009 as companies move from distributed computing systems to centralized ones.

Along with the construction of new data centers, the problem of modernizing old ones is on the agenda. According to Gartner, by 2009, 70% of data center equipment will no longer meet operational and performance requirements unless appropriate upgrades are made. The average refresh cycle for computer equipment in a data center is approximately three years, while the data center infrastructure is designed for a service life of about 15 years.

Purpose and structure of the data center

Depending on the purpose, modern data centers can be divided into corporate ones that operate within a particular company, and data centers that provide services to third-party users.

For example, a bank may have a data center where information on its users' transactions is stored; usually it does not provide services to third-party users. Even if a data center does not provide such services, it can be spun off into a separate organizational unit of the company that provides access to information services under an SLA. Many large companies have a data center of one kind or another, and international companies may have dozens of them.

A data center can also be used to provide professional IT outsourcing services on a commercial basis.

All data center systems consist of the IT infrastructure proper and the engineering infrastructure, which is responsible for maintaining optimal operating conditions.

IT infrastructure

A modern data processing center (DPC) includes a server complex, a data storage system, an operations system and an information security system, which are integrated with each other and united by a high-performance LAN (Fig. 1).

Fig. 1. IT infrastructure of a modern data center

Consider the organization of the server complex and data storage systems.

Data center server complex

The most promising model of a server complex is a model with a multi-level architecture, in which several groups of servers are distinguished (see Fig. 1):

  • resource servers (information resource servers) are responsible for storing data and providing it to application servers; for example, file servers;
  • application servers process data in accordance with the business logic of the system; for example, servers running SAP R/3 modules;
  • presentation servers provide the interface between users and application servers; for example, web servers;
  • service servers support the operation of other data center subsystems; for example, backup system management servers.

Different requirements are imposed on the servers of different groups, depending on their operating conditions. In particular, presentation servers handle a large stream of short user requests, so they must scale well horizontally (by increasing the number of servers) to ensure load distribution.

For application servers, the requirement for horizontal scalability remains, but it is less critical. They need sufficient vertical scalability (the ability to increase the number of processors, the amount of RAM and the number of I/O channels) to process multiplexed user requests and execute the business logic of the tasks being solved.

Storage systems

The most promising approach to organizing a storage system is SAN (Storage Area Network) technology, which provides fault-tolerant server access to storage resources and reduces the total cost of ownership of the IT infrastructure thanks to flexible online management of server access to storage resources.

The storage system consists of information storage devices, servers, a management system and a communication infrastructure that provides physical communication between the elements of a storage network (Fig. 2).

Fig. 2. Storage system based on SAN technology

This architecture allows for uninterrupted and secure data storage and data exchange between SAN elements.

The SAN concept is based on the ability to connect any server to any storage device over the Fibre Channel (FC) protocol. The technical basis of a SAN is fiber-optic links, FC host bus adapters (FC-HBAs) and FC switches, which currently provide transfer rates of up to 200 MB/s.

Using a SAN as the transport basis of the storage system enables dynamic reconfiguration (adding new devices, changing the configuration of existing ones and servicing them) without stopping the system, allows devices to be quickly regrouped in line with changing requirements, and makes rational use of floor space.

The high data transfer rate over the SAN (200 MB/s) allows real-time replication of changing data to a backup center or remote storage. Convenient SAN administration tools make it possible to reduce the headcount of service personnel, which lowers the cost of maintaining the storage subsystem.

Adaptive engineering infrastructure of the data center

In addition to the hardware and software complex itself, the data center must be provided with external conditions for its operation. The equipment located in the data center must operate around the clock within certain environmental parameters, and maintaining these parameters requires a number of reliable support systems.

A modern data center has more than a dozen different subsystems, including main and backup power, low-voltage, power and other types of cabling, climate control, fire safety, physical security, etc.

Maintaining optimal climatic conditions for the equipment is quite difficult. The large amount of heat generated by computer equipment must be removed, and this heat load grows as systems become more powerful and more densely packed. All this requires optimization of air flows as well as the use of cooling equipment. According to IDC, already this year the cost of supplying data centers with electricity and cooling will exceed the cost of the computer equipment itself.

The listed systems are interconnected, so an optimal solution can be found only if the infrastructure is considered as a whole rather than as individual components.

Designing, building and operating a data center is a very complex and labor-intensive process. There are many companies offering the necessary equipment, both computing and auxiliary, but an individual solution cannot be built without the help of integrators. A number of large domestic system integrators, such as IBS, Croc and Open Technologies, as well as specialized companies such as DataDome and IntelinePro, create data centers in Russia.

Data center and IT outsourcing

According to IDC, the global market for hosted data center services alone is growing very fast and will reach $22-23 billion by 2009.

The most comprehensive IT outsourcing service is the outsourcing of information systems. It is provided under a long-term agreement under which the service provider receives full control over the client's entire IT infrastructure or a significant part of it, including the equipment and the software installed on it. These are projects with deep involvement of the contractor, which entail responsibility for the systems, network and individual applications that make up the IT infrastructure. Typically, outsourcing of IT infrastructure is formalized by long-term contracts lasting more than a year.

Creating their own IT infrastructure from scratch requires companies to spend large sums and hire highly paid specialists. Renting data center infrastructure reduces TCO by sharing resources between clients, provides access to the latest technologies, and makes it possible to deploy offices quickly with the ability to scale up resources. For many companies, the reliability and uninterrupted operation of equipment and network infrastructure is now becoming a critical factor for the functioning of the business. IT infrastructure outsourcing makes it possible to provide a high level of data reliability at a limited cost, giving customers the opportunity to rent server racks and rack space to accommodate their equipment (co-location), rent a dedicated server, licensed software and data transmission channels, and also receive technical support.

The customer is freed from many procedures: technical support and administration of equipment, organization of round-the-clock security of premises, monitoring of network connections, data backup, anti-virus scanning of software, etc.

The data center can also provide an outsourced application management service. This allows customers to rely on certified professionals, which guarantees a high level of service for software products, and makes it possible to move from one software product to another at minimal financial cost.

In the application outsourcing mode, data center customers can receive outsourcing of mail systems, Internet resources, storage systems or databases.

By outsourcing the redundancy of their corporate systems, customers reduce the risk of losing critical information thanks to professional IT system recovery facilities and, in the event of an accident, gain the opportunity to insure their information risks.

Typically, data center customers are offered several levels of business continuity. In the simplest case, this is the placement of backup systems in a data center with proper protection. In addition, there may be an option in which the client is also provided with rental of software and hardware systems for redundancy. The most complete version of the service involves the development of a full-scale system recovery plan in the event of a disaster (Disaster Recovery Plan, DRP), which includes an audit of the customer's information systems, risk analysis, development of a disaster recovery plan, creation and maintenance of a system backup, as well as the provision of equipped office space to continue working in the event of an accident in the main office.

Examples of commercial data centers

Data Centers Stack Data Network

The Stack Data Network unites three data centers built on the basis of foreign experience.

Two of them (the Stack data center and the M1 data center) with a total capacity of 700 racks are located in Moscow, and the third (PSN data center) with a capacity of 100 racks is located 100 km from the capital.

There are partnership agreements with a number of European data centers on the possibility of using their resources through the Stack Data Network.

Stack Data Network data centers provide a business continuity (disaster recovery) service as well as high-quality hosting: a colocation service - server placement (Fig. 3) - and a dedicated server service (Fig. 4).

Fig. 3. Stack data center: server placement (colocation)

Fig. 4. Stack data center: dedicated server rental

The data centers have autonomous power supply systems with uninterruptible power supplies and powerful diesel generator sets (Fig. 5), climate control and air conditioning systems (Fig. 6), systems for round-the-clock monitoring of infrastructure elements, and gas fire extinguishing systems. To ensure the reliability of the life support systems, all of them are redundant according to the N+1 scheme. A special security regime is achieved through several access perimeters using individual plastic magnetic cards, a biometric access control system, a video surveillance system and motion sensors.
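
To make the N+1 principle mentioned above concrete: for every N units needed to carry the load, one extra unit is installed, so that the failure of any single unit does not degrade the system. A minimal sketch of the sizing arithmetic in Python (the 350 kW load and the 100 kW unit capacity are hypothetical figures chosen for illustration, not taken from the article):

import math

def units_n_plus_1(total_load_kw: float, unit_capacity_kw: float) -> int:
    """Number of units to install under an N+1 scheme: N units cover the load,
    plus one redundant unit that can take over if any single unit fails."""
    n = math.ceil(total_load_kw / unit_capacity_kw)
    return n + 1

# Hypothetical example: a 350 kW heat load served by 100 kW air conditioners
# requires N = 4 working units, so 5 are installed (4 + 1).
print(units_n_plus_1(350, 100))  # -> 5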

Fig. 5. Stack data center: diesel generator

Fig. 6. Stack data center: Liebert air conditioner

A round-the-clock operations service (duty operators and specialists) is organized across the Stack Data Network data centers, including for the life support systems. There are systems for round-the-clock monitoring of the life support systems, the telecommunications and server equipment, the networks and the state of the communication channels. The data centers are connected to the main telecommunications hubs of Moscow and interconnected by their own redundant fiber-optic communication lines.

Sun Microsystems offers a new concept of "data center in a box"

The process of creating traditional data centers is very costly and lengthy. To speed it up, Sun Microsystems has come up with a solution called Blackbox.

The Blackbox system is mounted in a standard-length shipping container, which can hold up to 120 SunFire T2000 servers, or up to 250 SunFire T1000 servers (2,000 cores in total), or up to 250 SunFire x64 servers (a thousand cores), plus storage systems whose capacity can reach 1.5 PB on hard drives and up to 2 PB on tape. Up to 30,000 Sun Ray terminals can be connected to a container.

The system is running Solaris 10.

The equipment is packed into the container so tightly that there is simply no room for air circulation. Air cooling is therefore extremely inefficient, so water cooling is used.

According to Sun, placing equipment inside a shipping container can reduce the cost per unit area of computing power by a factor of five compared to a conventional data center.

The Blackbox solution is at least an order of magnitude cheaper than a traditional data center, while speeding up deployment severalfold.

It should be noted that such a center cannot be deployed everywhere, since such a container cannot be brought into every building. Sales of the Blackbox solution started this year.

Data center IBS DataFort

In 2001, IBS and Cable & Wireless announced that they would begin providing Russian and foreign companies with integrated services under the ASP scheme within the joint DATA FORT project, based on a data center. A little later DATA FORT began operating on its own, and in 2003 IBS announced the launch of its own data center, which belongs to an IBS subsidiary, IBS DataFort. The IBS DataFort data center is focused on serving clients with critical requirements for confidentiality and data protection; it provides a high degree of data availability, modern hardware and software, reliable power supply, high-speed data transmission channels and a high level of technical support. The perimeter is heavily guarded (Fig. 7).

Fig. 7. Protected area of the IBS DataFort data center

Inside the building there is a technical module with an area of more than 130 sq.m, a two-story backup office with an area of about 150 sq.m, and an operator's post. To mitigate the risks of flooding and fire, the technical module of the data center is built of steel sandwich panels and raised half a meter above the floor (Fig. 8).

Fig. 8. Technical module of the IBS DataFort data center

The technical module is a fireproof, earthquake-resistant structure equipped with a high-strength raised floor, waterproofing and grounding systems. The module is designed for 1,500 rack servers placed in 19-inch APC industrial racks.

The data center has an automatic gas fire extinguishing complex consisting of Fire Eater, Shrak and Inergen gas fire suppression equipment, light and sound alarms (which warn of a gas release and require personnel to leave the data center premises), as well as an effective smoke exhaust system (Fig. 9).

Fig. 9. IBS DataFort data center fire extinguishing systems

The climate control system (Fig. 10) consists of industrial air conditioners that automatically maintain the set temperature at around 22 ± 0.5 °C and humidity at 50 ± 5%, connected according to the N+1 scheme (if one of the air conditioners fails, the design parameters of the whole system are not violated). Fresh air from the street is supplied through a special unit that prevents dust from entering the data center.

Fig. 10. IBS DataFort data center climate control system

IBS DataFort specializes in comprehensive IT outsourcing services, taking over all the functions of the customer's IT departments, and offers the following types of services:

  • outsourcing of IT infrastructure - placement of customer equipment or leasing of data center infrastructure, ensuring the operability of corporate information systems;
  • application management - skilled administration and management of various applications;
  • outsourcing of IT staff - providing qualified specialists to solve various IT tasks;
  • ensuring business continuity - organizing fault-tolerant solutions for restoring information systems after accidents and failures;
  • IT consulting and audit - audit and inventory services in the field of IT, as well as building industrial-grade processes for operating IT systems;
  • functional outsourcing - management of individual IT functions according to agreed standards and approved service levels.

To begin with, let us assume that the idea of building a data center has not merely occurred to the client but has taken hold, and the CIO, who must ensure the reliable operation of business applications, understands that zero hour has come. The business understands that the risk of lost profit is very high, that reliable IT operation is required, and that it is necessary to invest in a proper data center. Therefore, we will discuss the process of creating a data center not from a technical point of view but from an organizational one.

So where to start? With an idea. Why not? The integrator here must play the role of a psychologist: come to the client and talk about what the client ultimately wants to get from the data center. Two things are important here: do not over-formalize the process and do not dig into the details. Formalization of the process usually comes down to sending the client a pile of questionnaires and tables. Of course this has to be done, but not at the first or second meeting. It is better to break a large questionnaire into several smaller ones and hand them to the client company's specialized specialists. Capacious questionnaires with a lot of technical detail usually simply do not get filled out, and the human factor is to blame. Alas, facts are stubborn things, and from my own experience I can say that the universal questionnaire that includes absolutely everything is filled out by no more than 1-3% of clients. Usually it goes like this: you send it, lose a week or two, then come and start talking anyway. Live communication saves a great deal of time, which is usually in short supply as it is: for some reason the decision to build a data center is made "for yesterday", and the client typically spends a year thinking about how to build a data center in two months :).

Preliminary work with the client

The kick-off meeting has taken place, and now the work of the presale specialists and the client's key specialists begins. It is important that the presale specialist can speak two languages - the language of financiers and the language of technical staff. A presale specialist is a kind of translator who is able to understand the client's needs and assess how much the project will cost and whether it will benefit the client. Moreover, such a specialist resolves the dilemma by helping the client's working group defend the budget of the calculated solution in front of the financial director. Calculating indicators such as return on investment (ROI), total cost of ownership (TCO), internal rate of return (IRR) and payback period (PP) will allow top managers to justify why exactly 1 million is needed and 200 thousand is not enough "for the IT people to calm down and play enough", and also to take into account the specifics of financing and the stages of investment. Ideally, at this point the client needs to be given at least a rough investment plan in order to understand the final cost of the data center and choose a model for its use: its own data center, a rented commercial one, or a "cloud" service. For a corporate client this will usually be a hybrid combining all three types. In this case, the top manager understands the point of the whole undertaking, the financial director understands the upper bound of the costs, and the IT director understands how well the data center will meet the needs of the business.
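
To make the financial side of this conversation concrete, here is a minimal Python sketch of how the listed indicators can be computed from a projected cash-flow series. All figures are hypothetical; in a real presale the cash flows would come from the investment plan agreed with the client, and TCO would be built up separately from capital and operating cost items.

def payback_period(cash_flows):
    """Payback period (PP): the year in which cumulative cash flow turns positive."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None  # the project does not pay back within the horizon

def roi(cash_flows):
    """Return on investment: net gain divided by the initial outlay."""
    invested = -cash_flows[0]
    return (sum(cash_flows[1:]) - invested) / invested

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return (IRR), found by bisection on the NPV function."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 1 million invested now, followed by annual savings/revenue.
flows = [-1_000_000, 300_000, 350_000, 400_000, 400_000, 400_000]
print(payback_period(flows))   # -> 3 (cumulative flow turns positive in year 3)
print(round(roi(flows), 2))    # -> 0.85 (85% over the five-year horizon)
print(round(irr(flows), 3))    # -> about 0.23 (23% per year)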

Another important point at this stage: do not get carried away with redundancy and reliability factors. It often happens like this: the construction department chooses to duplicate the power supply and air conditioning sources, the IT department duplicates communication lines and IT equipment, and the business application specialists set up synchronous replication and "hot" duplication of working nodes... As a result, the price goes off the scale, even though each department individually did the right thing and covered its own area of responsibility as thoroughly as possible. Therefore, when comparing budgeting options and describing the concept, it is necessary to compare turnkey options. In addition, one should not forget that, other things being equal, two Tier III sites will always be more reliable than one Tier IV site, because the cost of external risks decreases.
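
A rough illustration of why two sites can beat one higher-tier site, in Python. The sketch treats the sites as statistically independent and uses the commonly cited Uptime Institute availability targets of 99.982% for Tier III and 99.995% for Tier IV; real behaviour also depends on failover quality and on risks common to both sites.

HOURS_PER_YEAR = 8760

def downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

tier3, tier4 = 0.99982, 0.99995

# A single site of each tier.
print(round(downtime_hours(tier3), 2))  # -> ~1.58 hours/year
print(round(downtime_hours(tier4), 2))  # -> ~0.44 hours/year

# Two independent Tier III sites: the service is down only if both are down at once.
both_down = (1 - tier3) ** 2
print(round(downtime_hours(1 - both_down), 4))  # -> ~0.0003 hours/year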

Site selection

It would be good for the client to understand a simple truth: at this stage, the help of an integrator is needed perhaps even more than during design. After all, it is often the architecture of the building and the site that forces designers into technical solutions that are not always economically justified and that lead to budget overruns.

Particular attention should be paid to the following issues:

1. Supply of communications.

Here it is worth paying special attention not just to the possibility of bringing in utilities, but to the turnkey cost of doing so. Very often there is a technical possibility, but the cost is another matter: either cable ducts for low-current lines have to be dug or rented, or a transformer substation (TS) has to be installed and tied into a 10 kV feed. Therefore, do not be tempted by talk and promises until valid technical conditions for connection are in hand. At the very least they guarantee that costs will not increase within a certain period (usually one year). Separately, it is advisable to verify the independence of the power inputs. As practice shows, this procedure is highly likely to bring an unpleasant surprise: you may find out, for example, that both inputs are fed from the same substation. As for communication channels, keep in mind that there should be several of them, and it is desirable to run them along different routes. Dark fiber is ideal, but extremely expensive. Switched communication lines that pass through different providers are cheap but risky: in the event of an accident, downtime can be unpredictable. The golden mean, therefore, is a communication channel from a single provider that owns the entire network from point A to point B (this is especially true for the link between the main and backup data centers).

2. Location of the data center.

There are several important points:

a) A rigging plan for bringing in large equipment. It is important to understand that the equipment must not only be brought in (and, of course, fit through the doorways), but also turned around, loaded and taken off the trolley.

b) Sufficient load capacity of the floor and wall structures. Needless to say, the equipment must not simply be brought in and set down - it must also be carried along the route. So pay attention to the load capacity of the floors along the transportation routes (a collapsed raised floor, crushed tiles and the like will not add any convenience once the data center is already built). Finally, a structural survey of the building "as of today" is required. Often one exists, but it is 15 years old. Of course, you can extrapolate and read coffee grounds, but the cost of the risks is far higher than the cost of such a survey. As for the walls, what matters is how much load they can carry, because cable racks and wall-mounted panels are easier to hang on the wall when designing to save space.

c) Room height. Do not forget that a data center must accommodate not only cabinets but also multi-tier cable routes for power and low-current cabling, so there is no such thing as excess height. This is especially true if cabinet air conditioners with raised-floor air delivery are planned - a simple, inexpensive solution that, with a correct calculation, allows up to 15 kW to be removed from a cabinet (in theory more is possible, but the room volume becomes the limiting factor). So what height should you aim for? Experience suggests 4.5 meters, and certainly not less than three.

d) Proximity of a security post. This will at least reduce the cost of organizing round-the-clock security and simplify prompt response to emergencies (paradoxically, no automation can match a human in responsiveness unless the event triggers and procedures for the automation systems are spelled out). Otherwise, you will have to equip not just a room for people but also think about life support systems for them (ventilation, sewerage, heating, lighting, etc.).

e) Organization of access to the data center. Can a road train drive right up to the gates of the data center? Can unloading be organized directly from the trailer into the building?

f) If there is already a transformer substation and its capacity is to be increased, special attention should be paid to the condition and cross-section of the connected high-voltage cable and to the presence of transit connections to other high-power neighbors. Otherwise, the client risks discovering that unscheduled repairs to a high-voltage line, complete with excavation, may be carried out on its territory. The same applies to other transit utilities, so an up-to-date master plan is required; otherwise you may run into unpleasant situations at the implementation stage, when an excavator, true to the classics of the genre, cuts a communication cable (or, even worse, a 10 kV line).

g) Soil and its composition. This information will help you understand how to organize grounding (how deep to drive the rods and how many will be needed) and how to protect fuel and lubricant storage tanks from corrosion. Particular attention should be paid to this point if the site is located in an industrial area where the soil may be chemically aggressive.

Interaction between integrator and client teams

We will not duplicate the PMBOK here - let us focus on the main points. The appointment of the people allocated by the integrator and the client must be secured by orders with clear supporting documentation spelling out their areas of influence and responsibility. It is extremely useful for the integrator's project manager to create a project charter and fix the responsibility matrix at the very first meeting (otherwise it often happens that some people make decisions and others are held responsible for them). This matrix should also list the contact details of each participant so that they can be reached easily. One often hears the argument that "we need to hurry, there is no time to waste on nonsense", but the project charter is a genuinely useful thing. It fixes the format of the documents the participants will exchange, the frequency of meetings, and the procedure for approving decisions. A good project charter is a document you can hand to a new team member, and after reading it he will be able to engage fully in the work, knowing his role in the process.

The main points to focus on:

1. Find the right people and define the real roles in the project. How do you find the right person? The answer is simple: try mentally removing him - will anything change? If not, this person is not needed. A data center project does not require a large permanent team, so it is better not to include more than 5-7 people from each side in the core team.

2. A person resists change until he feels safe. It should therefore come as no surprise that the team on the client's side is usually more conservative. This is normal, because their task is not only to build the data center but also to live with it afterwards, so any revolutionary solutions are met with caution: the product is raw, there are no failure statistics, and what it is like in operation is unclear... This does not mean that new products are not worth implementing; it simply means that their implementation will require more detailed study, together with the client.

3. Open channels for delivering bad news. A controversial point, but a necessary one, especially in the integrator's team. Often the project manager learns such news only when nothing can be changed and what happened can only be accepted as a fact, although many knew about it but kept it hidden, hoping it would somehow resolve itself.

4. Formal programs aimed at improving the existing process (all kinds of specialist certifications, staff assessments) will cost the team dearly in both time and money, and even if there is an improvement, it is unlikely to cover the costs. A data center project is organizationally quite complex because of the large number of milestones and the key interfaces between them. Therefore training, the development of configurators, templates and the like is the kind of work that can tie up the team's key specialists at the most inopportune moment and delay work on the critical path.

5. The more complex the project, the more time is spent on design and the less on commissioning. Unfortunately, design in the post-Soviet space is a study in contrasts: either it is the old Soviet school, where design is carried out thoroughly, slowly and without reference to financial costs, or it is the complete opposite - a commercial approach that puts the bill of materials first: a quick purchase followed by installation at the site ("the guys will somehow figure it out on the spot later"). Given domestic realities, the client is always in a hurry and needs the data center "yesterday", so at the design stage everyone always wants to cut the schedule. One can talk at length about how wrong this approach is, but the reasonable maximum for speeding up the process is to develop the concept and the preliminary design in the case of two-stage design (or the approvable part in the case of one-stage design), coordinate it with the customer and purchase the equipment with long delivery times. Materials and smaller components can be detailed in the working drawings that go to installation. The design period remains the same, but materials are procured earlier, so the overall time to implement the data center is significantly reduced.

6. People will not think faster if management starts putting pressure on them. The more overtime, the lower the productivity (overtime works only in short bursts), so it should not be resorted to constantly. The most difficult stages are the initial and final ones. At the beginning, it is important to quickly decompose the tasks, give all the specialists their initial data and get the work going; at the end, to synchronize everything before the processes wrap up and complete the project on time. Constant overtime indicates either an imbalance in the team, or poor project management, or unrealistic deadlines.

7. A bill of materials that does not come with a list of incoming and outgoing information should not even be considered. It is therefore not worth working from someone else's project or a specification priced by "a certain well-known company" if there is no understanding of the terms of reference it was based on. General advice for the project manager on the client's side: before ordering a design from a third-party company and then holding a tender for its implementation, ask yourself who will act as the quality auditor of such a design - would that not be a waste of time and money? General advice for the integrator's project manager: do not take comfort in the idea that you will simply put prices into someone else's project specification and the risk will fall on the client. It will not, because the client will most likely want the contract to deliver not the installation of a list of materials but a working system with a defined set of parameters. This means that the integrator performing the implementation will still bear the responsibility.

8. Involving a large team in the project reduces the effectiveness of the most critical part of the work - defining the data center concept (after all, everyone needs to be given work quickly) - and leads to a loss of independence within the team and a growing number of meetings. Therefore, stick to the following principle: first a small team, and only after the concept is ready, bring in new players. Very often one could observe 30-40 people coming to the first meeting, trying to discuss everything at once, breaking into groups, discussing something and... leaving. Then, while the common minutes are being drawn up, disagreements arise - and everyone gathers again, and so on ad infinitum. Identify the key people (on the client side this is usually the CIO, on the integrator side the project manager) and involve the rest as needed. Empirically, the largest group with which issues can still be resolved is about 10 people.

9. The project should have two deadlines - a planned one and a desired one - and they do not have to coincide. Both the customer's data center project manager and the integrator's representative should understand this. As a rule, the date reported upwards is the desired one, and since the data center is the foundation for deploying business services, it may turn out that entirely independent projects become tied to it without either project team knowing about it. For example, equipment is ordered and a business trip for the foreign specialists who will commission it is arranged, while the data center has not yet been launched and the equipment has not been put into operation; the specialists arrive for commissioning, the company incurs losses, and as a result deadlines slip...

Design

Before starting the design, you need to settle on the terms of reference (TOR) and agree on it with the client. Although this is formally the client's job, I will venture my humble opinion: the integrator should still do it together with the client. Why? Because the integrator is the more competent party: it has more experience and sometimes a better understanding of what the client needs. The TOR must fix not general phrases (like "the air conditioning system must maintain the temperature in the server room within the limits recommended by the IT equipment manufacturer") but specifics, for example: ensure an air temperature at the air intakes of the IT equipment in the cabinet within +20...24 °C around the clock at any time of year.

It is important to fix two key values in the TOR: the number of construction stages and the IT equipment capacity of the first stage. I have repeatedly seen the PUE estimated at 1.4-1.7 come out at two or three precisely because the IT load of the first stage was one tenth of that specified in the TOR. If there is a specification of the IT equipment to be installed and an understanding of the stages in which it will be purchased, do not be lazy - lay it out in the racks. Then you will understand how much power per rack is actually required, how many structured cabling system (SCS) ports of which types will be needed, and what power connectors will be required on the power distribution units in the cabinets.
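
A back-of-the-envelope sketch of why the PUE balloons when only a fraction of the designed IT load is installed: a large part of the infrastructure overhead (transformer and UPS losses, ventilation, lighting and so on) is effectively fixed, so it is spread over a much smaller IT load. The split between fixed and load-proportional overhead below is an assumption made purely for illustration.

def pue(it_load_kw, fixed_overhead_kw=150.0, proportional_overhead=0.45):
    """PUE = total facility power / IT power, with the non-IT overhead modelled
    as a fixed part plus a part proportional to the installed IT load."""
    overhead = fixed_overhead_kw + proportional_overhead * it_load_kw
    return (it_load_kw + overhead) / it_load_kw

print(round(pue(1000), 2))  # full design IT load of 1000 kW -> 1.6
print(round(pue(100), 2))   # only 1/10 of the IT load installed -> 2.95, i.e. about 3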

Next, the general concept is worked out: the layout of the premises, the placement of the SKD equipment, walkways and equipment service areas. A typical task arises: there is 1000 kW of power available from the transformer substation (TS) - how should it be divided? We divide it simply: PUE is usually 1.6-1.8, so at the initial stage we assume a PUE of 1.6, allocating 625 kW to the IT equipment and 375 kW to everything else (mainly air conditioning). Next, we lay out the IT cabinet zones. Do not forget that it is highly desirable to physically separate the entrance room, the server zones, the Hi-End racks (which often need a specific cooling arrangement that may differ from the typical one in the server room) and the switching zone (whose power draw is usually much lower than that of the server zones).
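
The same division of the 1000 kW feed can be written out explicitly, treating PUE as the ratio of total facility power to IT power (a small sketch of the arithmetic rather than a binding design rule):

def split_feed(total_kw: float, target_pue: float):
    """Divide the power available from the substation between the IT equipment
    and everything else (mainly cooling), assuming a target PUE."""
    it_kw = total_kw / target_pue
    return it_kw, total_kw - it_kw

it_kw, other_kw = split_feed(1000, 1.6)
print(it_kw, other_kw)  # -> 625.0 kW for IT, 375.0 kW for cooling and other loads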

Once the air conditioning concept is understood, we bring in the electrical engineers, giving them a basic list of power consumers for the IT, ventilation and air conditioning systems. Now the single-line diagram can be drawn and the diesel generator plant sized.

If the air conditioning design calls for a raised floor, its height must be calculated, not chosen arbitrarily. Raised floor tiles serve as a coordinate grid, so all cabinets must be aligned to that grid - this is necessary so that tiles in the aisles between equipment can be lifted freely and the underfloor space remains accessible. It is also worth providing a normal aisle width not only between cabinets, switchboards, air conditioners and other equipment, but also, in line with the requirements of the regulatory documents, between open doors and a wall or another cabinet, thereby ensuring safety and convenient installation and dismantling of equipment.

Access control, alarm and fire extinguishing systems are designed once the required number of rooms is known. For an automatic fire extinguishing system, the presence of a raised floor, the isolation of corridors and false ceilings also matter. Once the arrangement of cabinet rows and the aisles between them is known, the video surveillance system can be designed. After that, taking into account the visual task category and the requirements of the preceding systems, the lighting is designed.

Monitoring and automation systems are designed at the final stage, since the initial data for these systems are calculated based on the design solutions of all previous systems.

Implementation

The implementation process can be very different depending on the architecture, but in general the sequence of the process is approximately the following:

1. Reconstruction of premises, relocation/construction of walls, making openings for the passage of communications, expanding doorways, performing outdoor work on laying cable channels, pouring the foundation for diesel generators and fuel storage tanks, laying fuel lines, laying external communications to the data center building, performing grounding and lightning protection. If necessary, perform electromagnetic shielding of the server room.

2. Installation of fasteners for outdoor units, ventilation ducts, unloading frames of chillers, laying of internal cable channels, laying of heat carrier routes, cables, marking and installation of raised floor pedestals, cladding of the external facade.

3. Installation of raised floor slabs, installation of internal air conditioning units, installation of cabinets, installation of switchboards and UPS in the control room, installation of diesel generator sets.

4. Installation of SCS, installation of AGP, ACS, video surveillance, lighting, monitoring and automation equipment.

5. Carrying out measurements and tests, starting up the engineering system equipment, commissioning.

Commissioning and service

Upon commissioning, the integrator must hand over to the customer all documentation, operating manuals and instructions for the personnel. It is also important to train the personnel so that the client company's employees can operate the installed systems on their own. On the client's side, it is important to appoint the people responsible - otherwise the training will have to be repeated many times and its usefulness will tend to zero.

As for service, there are several options. The first is self-maintenance of the data center by the client's own staff; the drawback is obvious - the need to keep specialists of the required qualifications on the payroll. The second option is maintenance of each system by a specialized company. In this case, the client only needs to follow the service schedule, but will have to resolve cross-system issues independently: if a problem spans several systems, the client will have to mediate between several organizations, which also requires a certain amount of experience from the person overseeing the service contracts, or the involvement of an external auditor. The third option is the easiest for the client but can be more expensive: outsourcing service support to an integrator with the necessary expertise. In this case, all risks on cross-system issues fall on the shoulders of a single organization.

The question of whether an SLA is needed is not so clear-cut: if the data center is designed correctly and the systems have reserves, an SLA is a cost that is far from always justified. The only case that justifies it is full outsourcing of data center maintenance, when the client does not involve its own specialists at all in monitoring the health of the data center.

In conclusion, the following can be said. A data center is a prime example of a complex project that must be implemented as quickly as possible, with no room for errors or rework. Spend more time on design and risk assessment, and it will pay off handsomely during handover and commissioning, and save a great deal of nerves and money during implementation. Remember that a data center is not just a cost: it is a way to avoid business losses and an increase in the company's capitalization. A reasonable approach to budgeting when designing the data center, and defending that budget, will allow the money to be recouped after a certain period. With the right approach and an understanding of the road map of a data center construction project, this is easy to do.

Konstantin Kovalenko


Long-standing data centers (DPCs) no longer meet the requirements of the organizations that own them. This is the main conclusion that follows from the results of our survey. Approximately half of the respondents said they would definitely, or expected to, upgrade the power supply and cooling systems of their data center equipment in the next 12 months. Most organizations face a puzzle: how to carry out the necessary modernization? On the one hand, the infrastructure of the old data center is no longer suitable for the normal operation of modern servers, storage systems and network equipment, which consume more and more electricity per unit of data center area; on the other hand, building a new data center can be very expensive.

Most organizations simply have no choice. The fact is that they have to store and process (in new and more complex ways) ever-increasing volumes of data, and there is no sign of this trend abating. And as useful as server consolidation and improved storage management efficiency are, sooner or later you will still have to remodel your old data center or build a new one.

Data center power and cooling issues remain the biggest concern for IT professionals. But our question about whether organizations are experiencing a shortage of free space in their data centers revealed two interesting facts. First, almost half of the respondents said that their data center has enough space for the further development of its IT infrastructure. Such a response is to be expected, as the performance of servers and storage systems is constantly improving while their dimensions stay the same or shrink. As a result, additional space is by no means always needed to increase the capacity of the data center. The second fact came as quite a surprise to us: almost a quarter of the respondents said that they house their data centers in ordinary premises not intended for the purpose.

We did not look deeply into the reasons for this, but it can be assumed that the respondents who answered this way work in small companies or in remote branches of large organizations. In any case, data-center-in-a-box solutions - refrigerator-sized systems with the cooling and uninterruptible power supplies (UPS) required to run up to 30 or so 1U servers (or other network devices) - should be in high demand.

The survey results point to the driving forces behind data center consolidation. To solve a number of problems, including compliance with legal requirements for information security and improving the reliability of data storage, system management must be improved, and for this it is desirable to move the servers of the company's divisions and remote branches into the data center.

What is actually better?

Clearly, businesses need improved data centers. But what does "improved" mean? Should servers, for example, be specialized or as general-purpose as possible? Should you buy blade systems, or is that just another way to tie the customer to a specific manufacturer? Will powering equipment with direct current (DC) help reduce energy costs, or will this type of power be more trouble than it is worth? Are raised floors necessary, or are they already obsolete? The list of questions seems endless, but it is quite clear that building the same kind of facilities (for housing data centers) as were built just five years ago is a doomed undertaking. Perhaps the biggest problem for organizations planning to build a data center is its high cost.

In the past, most power systems for data center equipment were rated at a power density of 50 W/sq. ft (1 sq. ft = 0.0929 sq. m), but now many analysts recommend a system with a power density of 500 W/sq. ft. A tenfold increase in the power density of the power supply systems (including UPS and generators), together with the need for correspondingly larger cooling systems, makes these systems the largest expense item in the creation of a data center. If you want to build a Tier IV data center with "five nines" reliability (as defined by the Uptime Institute), the cost of acquiring and installing its power and cooling systems can be as much as 50 times the cost of constructing the data center building itself ($165 million versus $3.3 million for a data center with an area of 15,000 square feet). And the annual electricity bill for a data center of that size used, as they say, "to the fullest" is $13 million (in California), which is almost five times the cost of constructing the building.
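
A quick sanity check of the $13 million figure, assuming the full 500 W/sq. ft design density is drawn around the clock and an electricity price of roughly $0.20 per kWh (the tariff is our assumption, chosen as a plausible Californian rate; the article itself does not state one):

area_sqft = 15_000
density_w_per_sqft = 500
price_per_kwh = 0.20                              # assumed tariff, USD/kWh

power_kw = area_sqft * density_w_per_sqft / 1000  # 7,500 kW drawn continuously
annual_kwh = power_kw * 8760                      # ~65.7 million kWh per year
annual_cost = annual_kwh * price_per_kwh
print(round(annual_cost / 1e6, 1))                # -> ~13.1 (million dollars per year)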

Given such significant costs for the engineering infrastructure of the data center and for electricity, it makes sense to critically evaluate every detail of the structural design of its premises. One of the controversial points in data center design is the raised floor, which becomes useless with high-density placement of equipment in mounting racks. Just six years ago, the average power consumption of an equipment rack was about 3 kW. Today this figure is almost 7 kW, and when a rack is filled with blade servers, its power consumption can rise to 30 kW or more. No existing raised-floor cooling system is designed for such a concentration of heat load. But it is not only cooling problems that cast doubt on the need for a raised floor.
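
To see why raised-floor air delivery runs out of steam, it helps to estimate the airflow a rack needs. The sketch below uses the standard sensible-heat relation (airflow = heat load / (air density x specific heat x temperature rise)) with an assumed 12 °C rise across the rack; the exact numbers depend on the supply and return temperatures actually chosen.

RHO_AIR = 1.2    # kg/m^3, density of air at room conditions
CP_AIR = 1005.0  # J/(kg*K), specific heat of air
DELTA_T = 12.0   # K, assumed temperature rise across the rack

def airflow_m3_per_hour(heat_load_kw: float) -> float:
    """Volumetric airflow needed to remove a given sensible heat load."""
    m3_per_s = heat_load_kw * 1000 / (RHO_AIR * CP_AIR * DELTA_T)
    return m3_per_s * 3600

for kw in (3, 7, 30):
    print(kw, "kW ->", round(airflow_m3_per_hour(kw)), "m^3/h per rack")
# roughly 750, 1740 and 7460 m^3/h respectively - the last figure is far beyond
# what perforated raised-floor tiles in front of a single rack can deliver.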

As the power consumption of equipment racks grows, so does their weight. A fully loaded rack can weigh from a quarter of a ton to a ton or more. For the raised floor to support such heavy racks, it must be reinforced with additional props. Instead of relying on raised-floor cooling, consider rack-level or row-level cooling as an addition to, or a complete replacement for, that system. These solutions are good in that they are modular and deliver cooling exactly where it is needed, which makes them more efficient.

DC or AC?

By analyzing the best approaches to building data centers, you will undoubtedly benefit greatly. But one approach we would not advise you to take is the use of DC power systems instead of the alternating current (AC) power systems that currently dominate data centers.

DC power systems are needed by organizations whose network infrastructure must comply with NEBS (Network-Equipment Building System) requirements. As a rule, these are telecommunications operators who have been using such systems for many years, mainly because of their high reliability and ease of connecting alternative power sources (batteries and generators).

Other industries are showing interest in DC power as well, so every major server manufacturer has at least a few DC-powered models. One of the manufacturers specializing in the production of such servers is Rackable Systems.

There are several factors holding back the use of DC power in data centers. First, the choice of IT products with this type of power supply is relatively small. Second, building a DC power system on a data center scale requires special knowledge and skills: a single insufficiently tightened bolt on a DC busbar can be enough to make it overheat and melt. Third, the power conversion equipment has to be placed outside the machine room so that the heat it generates does not add to the load on the cooling of the equipment installed there.

Fourth, DC power systems typically cost more than AC systems (although, thanks to higher efficiency, a DC system loses less energy in conversion, which can translate into a lower total cost of ownership).

Fifth and finally, while DC-powered servers, switches, routers and storage systems are available on the market, there are no DC-powered air conditioners for data centers. This means you would have to run two power systems in the data center, DC and AC, whereas it is obviously much easier to deal with only one. For the reasons listed above, DC power has not caught on in data centers aimed at a broad range of IT applications.
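
Returning to the fourth point, here is a rough, purely illustrative comparison of conversion losses; all efficiencies, the load and the tariff are assumptions rather than vendor data:

    # Illustrative energy-loss comparison of AC and DC distribution; all inputs are assumed.
    IT_LOAD_KW = 1_000           # assumed IT load
    AC_CHAIN_EFFICIENCY = 0.88   # assumed: double-conversion UPS plus distribution losses
    DC_CHAIN_EFFICIENCY = 0.93   # assumed: rectifier plus DC distribution losses
    TARIFF_USD_PER_KWH = 0.15    # assumed tariff
    HOURS_PER_YEAR = 8_760

    def annual_input_kwh(efficiency):
        return IT_LOAD_KW / efficiency * HOURS_PER_YEAR

    savings_kwh = annual_input_kwh(AC_CHAIN_EFFICIENCY) - annual_input_kwh(DC_CHAIN_EFFICIENCY)
    print(f"Annual savings: {savings_kwh:,.0f} kWh "
          f"(~${savings_kwh * TARIFF_USD_PER_KWH:,.0f})")

Whether savings of this order offset the higher capital cost and the operational drawbacks listed above is a calculation each organization has to make for itself.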


The rapid development of modern technologies leads to the inevitable growth of automation of most processes, not only in enterprises of all sizes but also in everyday life. As the level of technology rises, data exchange between different parties accelerates. This creates a need for specialized automated centers where all information can be stored and processed reliably and in an orderly manner. The best solution to this problem is the data center (data processing center), which is gradually becoming an integral part of the infrastructure of any enterprise.

Purpose and types of data centers

A data processing center is a complex system that combines a whole range of IT solutions, high-tech equipment and engineering structures. The main task of such a center is to quickly process any volume of data, store information and deliver it to the user in a standardized form. The core of the center consists of powerful server installations equipped with the necessary software, cooling and security systems.

There are two main types of such centers:

  • commercial. These are entire complexes built for the subsequent rental of computing power. They offer high performance and maximum data exchange speed. A user of this service receives a virtual data center that may physically be located in another city or even another country. The advantages of this option are obvious: the user can independently configure the system to the required parameters, management is carried out remotely, and there is no need to spend money on building and maintaining an in-house data center;

  • internal (corporate). These are centers built for a specific enterprise and intended for internal corporate use only. Despite the costs of creating and commissioning such a center, its main advantage is direct control over the entire system. The company does not depend on a third-party equipment owner (service provider) and can more effectively ensure data security and the safety of trade secrets. If certain systems fail, their recovery is much faster. Nor should one forget that an enterprise that owns its own data center can keep equipment running autonomously during power outages and respond as quickly as possible to various emergencies.

Mobile and modular data centers

The mobile data center deserves separate consideration: it is a turnkey solution for small companies. The advantages are obvious: there is no need to design a special room for a stationary data center or to account for the specifics of the equipment, temperature conditions and other factors. Several world-class manufacturers have taken up the development of such centers and achieved considerable success.

Structurally, the system is a set of unified modules that can easily be combined into a fully autonomous complex and quickly configured. For many mid-sized companies, a modular data center is the most rational and straightforward option. Another advantage is that it takes significantly less time to build and commission than a stationary facility.

Such integrated solutions are widely used by enterprises in various fields. First of all, these are organizations for which fast data exchange is central to the business: banking systems, government and commercial telecommunications organizations, IT companies, call centers, and emergency dispatch services.

Correctly processed and promptly delivered information plays a paramount role in the work of such companies. Another important criterion is the potential loss of profit caused by incorrect or overly slow data processing. That is why large organizations equip such centers with the latest hardware and software.

What is important to know when choosing a data center?

In many companies, the organization of the workflow and the level of service provided to customers depend on the reliability and speed of information processing.

The key requirements for the data center are as follows:

  • autonomous operation;
  • a high level of reliability;
  • data protection;
  • fault tolerance;
  • high performance, regardless of whether it is a cloud (virtual) data center or an internal corporate one;
  • large information storage capacity;
  • room for expansion and modernization, planned 5-7 years ahead with the projected growth of the company and the development of technology in mind (a simple capacity projection is sketched below).
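
A minimal sketch of that last point: projecting capacity needs over the planning horizon mentioned above. The starting load and the growth rate are assumed values:

    # Simple compound-growth projection of IT load; inputs are assumed.
    current_load_kw = 200     # assumed present IT load
    annual_growth = 0.25      # assumed 25% growth per year
    horizon_years = 7         # planning horizon from the text (5-7 years)

    load = current_load_kw
    for year in range(1, horizon_years + 1):
        load *= 1 + annual_growth
        print(f"Year {year}: {load:,.0f} kW")

Even modest growth compounds quickly: at 25% per year the assumed load almost quintuples over seven years, which is why the planning horizon matters at the design stage.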

Data processing centers in Russia that meet these criteria are the best solution for domestic companies in their core business. For most customers, the overall ratio of requirements to cost is very important. Owing to the economic downturn, data centers in the Russian Federation are no longer in as much demand as they were 10 years ago, and this trend can be traced all over the world. Large companies understand the importance of such centers, but when choosing equipment they are guided primarily by cost rather than by the newest technologies and advanced features.

Key elements of data centers

The structure of data processing centers includes a number of components and subsystems, without which the operational processing of information and data storage is impossible.

The system includes the following blocks:

  • IT infrastructure of the data center;
  • engineering systems of various levels of complexity;
  • an integrated approach to security;
  • management and monitoring.

To understand the structure in more detail, you should consider the main blocks more closely.

IT component of the data center

The first block is a complex of high-tech equipment, which is integrated into a common system.

This is a kind of core of any modern data center, which consists of:

  • server hardware;
  • transmission, processing and storage systems.

To improve server efficiency, many well-known companies are developing new products: software, virtualization technologies and ready-made integrated solutions based on blade servers. This is one of the most in-demand areas of the IT industry. New developments can significantly increase equipment performance, improve energy efficiency, reduce maintenance costs and streamline work processes.

Engineering solutions in the data center

To ensure the smooth operation of powerful high-tech equipment, it is necessary to create an effective engineering system.

The main engineering systems of the data center fall into two categories:

  • power supply. Special equipment must not only ensure an uninterrupted supply of electricity to the equipment but also, in the event of accidents on the power lines, switch to autonomous power. For this, uninterruptible power supplies and backup generators are used. It is very important that the voltage and frequency match the required parameters and that there are no interruptions or sharp surges in the network: such fluctuations adversely affect server hardware and can lead to its failure;
  • cooling. Powerful servers give off a great deal of heat during operation, which is removed by special built-in heat sinks. This does not solve the problem completely, since the server installations are located in separate closed rooms. To ensure reliable cooling, various air-conditioning systems are used that operate automatically, maintaining the optimum room temperature and preventing the equipment from overheating. By applying new air-conditioning technologies (inverter compressors, high-precision temperature sensors), the cost of consumed electricity can be reduced by 10-15% (a rough estimate of what that means in money is sketched after this list).
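
As promised above, a rough estimate of the value of a 10-15% saving on cooling energy; the facility load, the cooling overhead and the tariff are all assumptions:

    # Rough value of a 10-15% cooling-energy saving; all inputs are assumed.
    IT_LOAD_KW = 500             # assumed IT load of the facility
    COOLING_SHARE = 0.40         # assumed: cooling draws ~40% as much power as the IT load
    TARIFF_USD_PER_KWH = 0.10    # assumed tariff
    HOURS_PER_YEAR = 8_760

    cooling_kwh = IT_LOAD_KW * COOLING_SHARE * HOURS_PER_YEAR
    for saving in (0.10, 0.15):
        print(f"{saving:.0%} saving: ${cooling_kwh * saving * TARIFF_USD_PER_KWH:,.0f} per year")

Even for a modest facility, under these assumptions the saving comes to roughly $18,000-26,000 per year.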

Data center security

One of the most important prerequisites for the healthy operation of any data center is a properly designed security system. To keep data secure and prevent third parties from accessing it over the Internet, companies use up-to-date anti-virus software and other data protection tools. To keep unauthorized persons out, an access control system admitting only certain categories of employees is developed and deployed, along with measures against hacking and physical access to equipment, video surveillance, and a fire protection system. An integrated approach to security ensures the safety of important information.

Data center monitoring

An integrated approach to management and monitoring is an important component of any modern data center. Such systems automatically track the health of all equipment and the environmental parameters (temperature, humidity, voltage and frequency). An integral part of monitoring systems are subsystems for predicting probable equipment failures and providing early warning.
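
As a minimal illustration, here is a sketch that checks the parameters named above against permissible ranges; all limits and the example readings are hypothetical:

    # Hypothetical threshold check for the monitored parameters; limits are assumed.
    LIMITS = {
        "temperature_c": (18.0, 27.0),   # assumed permissible range
        "humidity_pct":  (40.0, 60.0),
        "voltage_v":     (207.0, 253.0),
        "frequency_hz":  (49.5, 50.5),
    }

    def out_of_range(readings):
        """Return the names of parameters outside their permitted range."""
        return [name for name, value in readings.items()
                if not (LIMITS[name][0] <= value <= LIMITS[name][1])]

    # Example poll of hypothetical sensor values
    print(out_of_range({"temperature_c": 29.3, "humidity_pct": 45.0,
                        "voltage_v": 230.0, "frequency_hz": 50.0}))   # ['temperature_c']

In a real system these readings would come from the monitoring hardware, and a prediction module would watch their trends rather than single values.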

Dispatching deserves separate mention. It is an important part of the workflow that organizes the notification of staff about an emergency using modern communication channels (sending data to an e-mail address, automatically dialing a defined group of subscribers, or short SMS messages). An integrated approach significantly reduces the risk of emergencies and, in the event of force majeure, provides the most effective way to respond and restore the damaged parts of the system.
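
A minimal sketch of such a notification chain; the channel functions below are hypothetical placeholders rather than a real API:

    # Hypothetical escalation chain; send_email, send_sms and dial_group are placeholders.
    def send_email(group, message):
        print(f"e-mail to {group}: {message}")

    def send_sms(group, message):
        print(f"SMS to {group}: {message}")

    def dial_group(group, message):
        print(f"auto-dial {group}: {message}")

    def dispatch(alarm, acknowledged=False):
        send_email("duty-engineers", alarm)        # cheapest channel first
        send_sms("duty-engineers", alarm)          # duplicate over SMS
        if not acknowledged:                       # escalate if nobody reacts
            dial_group("shift-supervisors", alarm)

    dispatch("UPS-2: battery discharge, room temperature rising")

In practice the escalation logic would be driven by acknowledgement timeouts and duty schedules rather than a single flag.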

Data processing centers at the exhibition "Communication"

An important industry event, the Communication exhibition, will be held at the Expocentre Central Exhibition Complex. A large part of the exposition will be devoted to fast data processing and secure information storage. This is an international event at which the world's leading manufacturers of specialized equipment and software will showcase their latest developments in the field.

Visitors will be able to learn the latest data center news and get acquainted with design systems and new mobile and stationary complexes. Special attention will be paid to security and to the equipment needed for cooling, ventilation and uninterrupted power supply of data processing centers. This is an important event for all industry professionals and business leaders who want to modernize their enterprises with modern technologies and up-to-date equipment for storing, transmitting and processing data.

