Jul 14, 2008
High-performance computing: shaping medicine's future
Image-guided radiation therapy with real-time optimization before and during treatment; drug development using "in-silico" simulation to precede clinical trials; in vivo visualization of cellular and genetic processes via advanced molecular imaging. All of these developments, and many more like them, have the potential to profoundly shape the future of healthcare and biomedical research. But these applications have one other thing in common: they generate huge amounts of data and require powerful computer processing facilities to cope with subsequent data analysis and manipulation.
One promising possibility is grid computing, a networking technology that provides shared access to the processing capabilities of distributed computing resources around the world. Several grid networking projects are already underway. HealthGRID, for example, promotes the development of biomedical grid initiatives in Europe. In the US, the Biomedical Informatics Research Network is building a network of high-performance computers to advance biomedical and healthcare research. Research testbeds include Function BIRN, which uses functional neuroimaging to understand the causes of schizophrenia and develop new treatments, and Morphometry BIRN, a testbed for examining MR images of the brain to investigate how its structures and functions relate to neurological diseases.
Another example is the IBM-sponsored World Community Grid, which has the rather grand mission of creating "the world's largest public computing grid to tackle projects that benefit humanity". The World Community Grid uses donated processing time from around one million PCs and boasts a combined computing power that ranks among the top five supercomputers worldwide. Associated projects include the analysis of protein X-ray crystallography to help understand cancer progression and treatment; the use of computational methods to design new anti-HIV drugs; and investigation of protein-protein interactions to improve treatments for muscular dystrophy.
Last month also saw the launch of ACTION-Grid, a consortium of seven European and South American partners (including HealthGRID) working to analyse and link biomedical informatics and grid technologies. Key focus areas of this million-Euro initiative are nanoinformatics (e.g. the study of nanoparticle-based drugs and targeted drug-delivery systems, implantable materials and nanodevices, and nanotechnology-based diagnostic imaging schemes) and personalized medicine (in which a patient's treatment is chosen on the basis of their genetic information or other biological markers).
Alongside the emergence of powerful grid networks, supercomputers themselves now offer overwhelming levels of processing power. Last month, IBM announced record-smashing performance from its Roadrunner supercomputer, built for the Los Alamos National Laboratory (LANL) in the US. Roadrunner is the first system to boast a peak performance of more than 1 petaflop/s. This means that it can process 10¹⁵ calculations every second (that's roughly twenty-five thousand times more powerful than today's top-of-the-range PC processors).
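The "twenty-five thousand times" comparison is easy to sanity-check. As a rough sketch (the ~40 gigaflop/s figure for a high-end 2008 desktop processor is an illustrative assumption, not a number from the article):

```python
# Back-of-the-envelope check of the petaflop comparison.
# The desktop figure below is an assumed 2008-era estimate.
PETAFLOP = 1e15        # floating-point operations per second
pc_flops = 4e10        # ~40 gigaflop/s, rough high-end desktop CPU (assumed)

ratio = PETAFLOP / pc_flops
print(f"Roadrunner is roughly {ratio:,.0f}x a top desktop processor")
# → Roadrunner is roughly 25,000x a top desktop processor
```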
Roadrunner was installed at LANL last month and has already been employed to mimic complex neurological processes. As part of the verification process, the machine was tested using a code dubbed PetaVision, which models the human visual system. Based on initial results, in which PetaVision mimicked over one billion visual neurons and trillions of synapses, the LANL researchers reckon that they should be able to study the entire human visual cortex in real time. "We are already doing computational tasks that existed only in the realm of imagination a year ago," noted Terry Wallace, LANL's associate director for science, technology and engineering.
Roadrunner supersedes the reigning IBM BlueGene/L supercomputer, which boasts a maximum performance of 478.2 teraflop/s and is also employed in a range of medical modelling studies. Last week, for example, medicalphysicsweb reported on a Swiss research project in which BlueGene/L was used to perform high-definition simulations to reveal the relative strength of human bone structures (see Supercomputing vs osteoporosis). Here, massively parallel simulations on a 5x5 mm specimen of real bone generated 90 gigabytes of output data in just 20 minutes of computing time.
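Those bone-simulation figures imply a striking sustained output rate. A quick sketch of the arithmetic, using only the numbers quoted above:

```python
# Output rate implied by the BlueGene/L bone simulation:
# 90 gigabytes of results in 20 minutes of computing time.
output_gb = 90
minutes = 20

mb_per_second = output_gb * 1024 / (minutes * 60)
print(f"~{mb_per_second:.0f} MB of output per second")
# → ~77 MB of output per second
```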
Elsewhere, researchers at the University of Edinburgh in the UK recently embarked upon a five-year research project combining BlueGene/L simulations and laboratory experiments to speed the design of drugs that inhibit infection by the HIV virus. Over in Canada, the Ontario Cancer Institute just got its hands on an IBM System Cluster 1350 supercomputer, which runs at up to 12.5 teraflop/s. The machine will be used to analyse millions of images of disease-related proteins in the search for more effective cancer treatments - a research project that was added to the World Community Grid last year.
Looking closer to home, it seems inevitable that high-performance computing will find its way into all areas of medical physics. Just look at multimodality imaging, which requires vast processing capabilities to fuse high-resolution 3D images from more than one modality (see Image registration just got faster). And with advances in image-guided radiotherapy ramping up the amount of data that treatment-planning systems have to deal with, access to powerful processing capabilities could become an invaluable tool.
About the author
Tami Freeman is Editor of medicalphysicsweb.