Jan 5, 2011
PlanetHPC: what's next for high-performance computing?
What are the challenges and priorities for future European research into high-performance computing (HPC)? EPCC, the supercomputing centre at the UK's University of Edinburgh, has launched a two-year consultation that aims to answer this question. And, with medical physicists among the prime candidates for exploiting advanced computational resources, the project wants to hear your views.
The European Commission-funded initiative – PlanetHPC – provides a unique forum for discussing how to ensure that Europe's HPC capacity continues to grow and underpin socio-economic development in the region. The medical sector is important for the European economy, as well as for the wellbeing of citizens, and its future computing needs will be a key aspect of the project's considerations.
HPC has advanced remarkably in the last 20 years, with the computational capability of the world's fastest machines increasing by a factor of a million. Even the standard machines used today in academia and industry are many times more powerful than the fastest machine of a decade ago. Few, if any, other technologies can claim this kind of progress.
Development has been driven by the ever-increasing performance of the basic commodity components (processors, memory, interconnect and storage), coupled with the adoption of standards at all levels, including programming models. Year-on-year, users of HPC have been able to re-use their software on machines with ever larger numbers of faster processors.
From a scientific researcher's point of view, HPC has become an essential tool, with computer simulation accepted as the third research methodology, complementing theory and experiment. Within medical physics, it has enabled great advances in areas ranging from diagnostic imaging and treatment planning to disease analysis and drug development.
Modern functional and structural imaging methods, for example, generate large amounts of data that require processing and analysis, while multimodality imaging necessitates vast processing capabilities to fuse high-resolution three-dimensional images. Time-of-flight PET is another application that calls for huge computing resources to manipulate the timing data. Meanwhile, powerful image analysis algorithms are being designed that offer faster disease detection and increased accuracy.
In the search for greater understanding of disease progression, computer simulations have been employed, for example, to mimic complex neurological processes and to reveal the relative strength of human bone structures. In-silico simulations have also been used ahead of clinical trials in the development of potential new drugs.
The availability of increased computing power should also prove invaluable for radiotherapy planning and optimization, enabling beam modelling and Monte Carlo-based radiotherapy dose calculations. Computational grids have been designed for complex intensity-modulated radiotherapy (IMRT) planning. Here, high-end computational capabilities generate a large number of quasi-optimal radiotherapy plans, from which an optimal plan can be selected.
In common with other areas of science, the medical physics field exerts a continuing demand for greater computing power as researchers and clinicians employ increasingly data-intensive techniques.
Approaching the limits
But this is a time of change in the HPC industry, with a number of factors threatening the continued exponential growth of computing. Firstly, there will come a point at which Moore's Law no longer applies. This "law", named after Intel co-founder Gordon Moore, predicts that the number of transistors that can be placed on an integrated circuit will double every 18 months. The accompanying increases in clock speed have delivered huge performance gains over four decades. However, the speed at which individual components can operate will soon level off as engineers reach the limits of miniaturization.
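To put rough numbers on that doubling (an illustrative calculation rather than a PlanetHPC figure): with a doubling time of 18 months, the transistor count after t years grows as

    N(t) \approx N_0 \times 2^{t/1.5}

so two decades of scaling multiplies the count by roughly a factor of ten thousand. The factor-of-a-million improvement in the fastest systems over the same period, noted earlier, is larger still because machines have also grown to use ever more processors, not just faster ones.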
Secondly, the electricity consumption of the most powerful systems is now measured in tens of megawatts; scaling this up implies that the systems planned for the next decade will require a dedicated energy source equivalent to the largest power station in Europe today. Clearly this would be infeasible, and components with much lower energy consumption must be used.
Thirdly, many applications are reaching the limit of scalability on today's large systems, which have tens or hundreds of thousands of processors. Simply using more processors does not make the algorithms run faster: beyond a certain point, the serial portions of a code and the cost of communication between processors come to dominate the run-time. These factors are already forcing changes in the wider IT industry that will affect HPC and all users of the technology.
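One standard way of quantifying this limit is Amdahl's law (included here as an illustration of the scalability problem, not anything specific to PlanetHPC). If a fraction s of a program's run-time is inherently serial, then on P processors the best possible speed-up is

    \text{speed-up}(P) = \frac{1}{s + (1 - s)/P} \leq \frac{1}{s}

so a code that is 99% parallel can never run more than 100 times faster than the serial version, no matter how many thousands of processors it is given.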
In the mid-2000s, it became commonplace to boost computing performance by incorporating two or four cores (individual units that execute software instructions) on a single processor, allowing multiple instructions to be carried out simultaneously. This trend is set to continue, and many industry analysts expect processors with hundreds or even thousands of cores to be the norm by 2020.
Alongside this, we are seeing the use of specialized processing units, such as graphics processing units (GPUs), as accelerators. The principle here is that a conventional processor controls the program flow, while the computationally intensive part, the number crunching, is offloaded to the accelerator.
GPUs deliver high-performance computing that is affordable in a clinical setting and, as such, are already being deployed to ramp up the power of radiotherapy planning software. The emerging area of image-guided radiotherapy requires online replanning in near real-time. GPUs, each containing hundreds of simple cores designed for floating-point arithmetic, can perform key replanning tasks such as segmentation, dose calculation and plan optimization in a matter of seconds.
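To give a flavour of the offload model described above, the sketch below is a minimal, hypothetical CUDA example (not taken from any real planning system): the host CPU sets up the data and controls the program flow, while a GPU kernel spreads a simple element-wise dose accumulation across thousands of lightweight threads.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Hypothetical element-wise kernel: each GPU thread adds one beam segment's
       contribution to one voxel of a dose grid. Real dose engines are far more
       involved; this only illustrates the host/accelerator division of labour. */
    __global__ void accumulate_dose(const float *contribution, float *dose, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            dose[i] += contribution[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                 /* ~1 million voxels (illustrative) */
        const size_t bytes = n * sizeof(float);

        /* Host (CPU) side: allocate and initialise the data. */
        float *h_contribution = (float *)malloc(bytes);
        float *h_dose = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) {
            h_contribution[i] = 0.001f;
            h_dose[i] = 0.0f;
        }

        /* Device (GPU) side: allocate memory and copy the inputs across. */
        float *d_contribution, *d_dose;
        cudaMalloc((void **)&d_contribution, bytes);
        cudaMalloc((void **)&d_dose, bytes);
        cudaMemcpy(d_contribution, h_contribution, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_dose, h_dose, bytes, cudaMemcpyHostToDevice);

        /* The CPU controls the program flow; the number crunching runs on the
           GPU, spread across many lightweight threads. */
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        accumulate_dose<<<blocks, threads>>>(d_contribution, d_dose, n);
        cudaDeviceSynchronize();

        /* Copy the result back and inspect one value. */
        cudaMemcpy(h_dose, d_dose, bytes, cudaMemcpyDeviceToHost);
        printf("dose[0] = %f\n", h_dose[0]);

        cudaFree(d_contribution);
        cudaFree(d_dose);
        free(h_contribution);
        free(h_dose);
        return 0;
    }

The point is the division of labour: one conventional processor orchestrates the run, while the accelerator's many cores each handle a small, independent piece of the number crunching.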
The evolution of GPUs was driven by the computer gaming market, but it has been realized for some time that their parallel processing capability can be harnessed for general-purpose computation. GPUs offer higher floating-point capability than the typical cores found on a "normal" processor, as well as higher memory bandwidth, an important characteristic for many calculations. Furthermore, GPU-accelerated systems offer characteristics, notably a high ratio of performance to power consumption, that will be important when approaching the next landmark in HPC system performance, the exascale: a million trillion calculations per second.
This may paint an optimistic picture for HPC in the future. Indeed, the use of multi-core and specialized accelerators provides a promising approach for the next generation of HPC machines. However, such systems present significant challenges, since traditional programming languages are not sufficient to realize the available performance.
Placing more processing cores on a single integrated circuit will allow manufacturers to continue to increase processor performance; but in reality, peak performance will be hard to achieve. For example, although a processor with 200 cores, each capable of 5 Gflop/s, can be claimed as a Tflop/s processor, this level of performance will never be attained by a single application unless all of the cores are used efficiently. The actual performance might in fact be much lower. A similar challenge exists for accelerator technologies: the higher floating-point performance of the accelerator can only be realized if all of its cores are used efficiently.
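To make that arithmetic explicit:

    200~\text{cores} \times 5~\text{Gflop/s per core} = 1000~\text{Gflop/s} = 1~\text{Tflop/s (peak)}

Sustained performance is this peak multiplied by an efficiency factor that reflects how well the application keeps every core busy; by the Amdahl's-law argument above, even a small serial fraction or a handful of idle cores pulls that factor well below one.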
Software developers will feel these changes the most. They will need to write new applications, or port existing ones, to the new programming models that will support the emerging architectures. This will require training and education, and a radical rethink of current programming models and standards. A whole new approach to simulation techniques and algorithms may be required.
This is where PlanetHPC comes in. The initiative is consulting academics and industrialists in order to produce a roadmap of research priorities and recommendations for EC funding, driven by business and academic needs and taking account of the technology shift that is happening.
Application users should also engage with PlanetHPC, as ultimately they are the ones who will use HPC to advance research and development in their fields. The medical physics community has an important role to play here. The breakthroughs that HPC may enable, such as the discovery of new diagnostic techniques, treatment optimization, the design of medical equipment and the development of new drugs, could have profound benefits for the public.
The changes that the next generation of HPC will bring should be seen as an opportunity as well as a challenge; its arrival will surely enable great things to be achieved.
• PlanetHPC runs a website where HPC users can discuss their requirements, exchange ideas and suggest areas for research. You can also find out more about our workshops, which aim to identify current best practice and possible future use of HPC. We want to hear from you – do get in touch.
About the author
Mark Sawyer is project manager for PlanetHPC, and business development and project manager at EPCC. Contact: email@example.com