The reliance on high-performance computing (HPC) is increasing at a dramatic rate, as it penetrates more and more aspects of business and society. Until recently, it was the domain of large industrial or scientific research institutes. However, such is the growth in demand and in data volumes that HPC has now come within reach of small- and medium-sized enterprises, especially those with a focus on digital manufacturing (sometimes called the 'missing middle'), as developments in software, networking solutions and server technology now make its power accessible to them.
The demands on HPC are also changing, as data crunching becomes as important a topic as number crunching. Fast, precise results are now required by a full spectrum of organisations, whether for advanced modelling in manufacturing, analysis for oil production, computational chemistry, financial analytics or publishing. Organisations of all sizes are demanding access to large amounts of data and the ability to analyse and process it quickly.
Whereas HPC used to be reliant on grid computing – often with thousands of desktops sharing the same resources in a distributed architecture – there appears to be a paradigm shift away from that older technology, which was so reliant on low-power, high-footprint infrastructure. Next-generation HPC platforms are now moving to a virtualised environment that integrates virtualised networks, computing and storage. It is even suggested that the barriers to HPC as a Service are slowly being dismantled, which presents a whole new opportunity.
The issue many organisations face is that legacy data centres were not built to accommodate this evolution, and adapting an existing facility from a low average power density over a large space to a higher average power density over a small space is simply not cost-effective. This raises the question of whether it is time to move to a new data centre that can accommodate this paradigm shift.
Many HPC projects require only a large number of processor cores and access to complex data structures over a defined timescale, perhaps a week or a month. A flexible yet powerful data centre is therefore required to support these activities. Data centres need to be able to transform into 'service centres' that deliver applications on demand and respond to changing customer requirements with speed and agility.
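For projects of this kind, the sizing question is essentially a back-of-envelope calculation: how many cores must be provisioned to complete a known amount of work within the defined window? The sketch below illustrates the arithmetic; the core-hour total, window length and utilisation figure are illustrative assumptions, not figures from any particular provider.

```python
import math

def cores_needed(total_core_hours: float, window_days: float,
                 utilisation: float = 0.85) -> int:
    """Cores required to finish `total_core_hours` of work within
    `window_days`, assuming an average `utilisation` (jobs rarely
    keep every core busy 100% of the time)."""
    available_hours_per_core = window_days * 24 * utilisation
    # Round up: a fraction of a core cannot be provisioned.
    return math.ceil(total_core_hours / available_hours_per_core)

# Illustrative example: a one-month campaign needing 500,000 core-hours.
print(cores_needed(500_000, window_days=30))  # -> 817
```

A calculation like this is what makes the on-demand 'service centre' model attractive: the 817 cores are needed for one month, not for the life of a capital purchase.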
Virtualisation also used to be off limits for high-performance computing because the added overhead and complexity would only reduce performance. However, there have been major technological advances in virtualisation and the development of virtual stacking via blade server technology, which provide the agility, flexibility and fast deployment required to support the crunching of the terabytes of data generated by HPC.
A data centre designed for this next generation of HPC infrastructure, such as Aegis One, will provide an optimised environment that makes better use of power and space, lowering total cost of ownership while coping with the demands and stresses of virtualisation. As the use of HPC proliferates within organisations, the demand for processing power will continue to grow and the amount of unstructured data that needs to be analysed will keep doubling. Data centres will need a strong, dedicated power supply with the flexibility and scalability to match IT load to power, and the capability to deliver power headroom for future growth.
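Matching IT load to power while preserving headroom is itself simple arithmetic. The sketch below shows one way to frame it; the 1 MW feed, 12 kW-per-rack draw and 20% headroom reserve are illustrative assumptions, not a description of any specific facility.

```python
def racks_supported(site_power_kw: float, rack_draw_kw: float,
                    headroom: float = 0.20) -> int:
    """Number of racks a site can power today while reserving
    `headroom` (a fraction of total capacity) for future growth."""
    usable_kw = site_power_kw * (1.0 - headroom)
    # Whole racks only: take the integer floor of the division.
    return int(usable_kw // rack_draw_kw)

# Illustrative example: a 1 MW feed and 12 kW virtualised HPC racks,
# holding back 20% of capacity as growth headroom.
print(racks_supported(1000, 12))  # -> 66
```

The point of the headroom term is the one the article makes: a facility sized only for today's load leaves no room for the doubling of data and compute demand it should expect.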
Tags: Applications, Security, Storage Networking