For more than 100 years, physicists have made remarkable contributions to our ability to look inside the body non-invasively, initially using X-rays but more recently using other energy forms such as ultrasound and electromagnetic fields. Today, biomedical imaging has developed from early, simple uses of radiographs for diagnosing bone fractures and detecting foreign bodies into a compendium of powerful techniques, not only for patient care, but also for fundamental studies of biological structure and function.

Imaging developments have built on scientific discoveries and technological innovations in other fields, such as materials science, fundamental physics and digital computing. Advances in digital radiography, X-ray computed tomography (CT), magnetic resonance imaging (MRI), and other nuclear, ultrasound and optical-imaging techniques have produced an array of modern methods for interrogating intact 3D bodies non-invasively and deriving uniquely valuable information about tissue composition, morphology and function. Continuing developments in imaging technology, as well as in companion sciences such as molecular biology and nanotechnology, have expanded the applications of imaging to major new areas of high impact, such as the study of gene expression or the functional organization of the brain.

Today, imaging science is a dynamic, evolving field of multi- and inter-disciplinary activity that has grown enormously in scope and importance in recent years. Major technical advances continue to be made in all modalities, while the development of faster, more powerful computers has led to advanced image-analysis methods and processing algorithms that can be used to enhance images and extract novel, often quantitative, information. Large image databases are now being established and mined using informatics approaches to relate multidimensional data such as genotype and structure. A variety of molecular probes and contrast agents have also been developed, propelling cellular and molecular imaging to the forefront of biological research. The ability to detect, diagnose and monitor pathological, physiological and molecular changes using imaging is of fundamental importance for disease management and personalized interventions, as well as in basic biological research.

In the past, innovation in imaging was largely driven by technical advances such as the development of new materials for radiation detection, superconducting magnets for MRI or digital technologies for signal processing. Imaging scientists traditionally sought to improve image quality by exploiting new technologies and producing new instruments or novel techniques to achieve images with higher spatial resolution or signal-to-noise ratio. Such technical advances in imaging continue to be made: the development of higher-field MRI, volumetric CT scanning and hybrid techniques such as photo-acoustic imaging, for example. However, it can be argued that current imaging technology is often so good that further efforts to improve image quality are less compelling than in the past. While predicting the future is speculative, we can appreciate what currently limits the performance of imaging systems, and – based on that understanding – predict what improvements can be expected in the future.

X-ray evolution

X-ray radiography has remained essentially unchanged since its inception, except for the replacement of X-ray film by solid-state digital detectors. The primary limitation on image quality is the quantum noise associated with detecting a finite number of photons, a number constrained by the radiation exposure of the subject, because the energy deposited in tissue may cause damage. More efficient detectors that count photons and discriminate their energies will certainly increase image quality by reducing noise and improving image contrast, but the nature of the interactions of X-rays with the atoms in tissue fundamentally limits the differential absorption between structures and thus the potential image contrast.
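To put rough numbers on this quantum limit, the short sketch below (in Python, with purely illustrative photon counts rather than values from any real system) simulates Poisson-limited detection in a uniform region: the signal-to-noise ratio grows only as the square root of the number of detected photons, so quadrupling the dose buys just a factor of two in noise.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def measured_snr(mean_counts, n_pixels=100_000):
    """Simulate Poisson-limited detection in a uniform region and
    return the measured signal-to-noise ratio (mean / standard deviation)."""
    counts = rng.poisson(lam=mean_counts, size=n_pixels)
    return counts.mean() / counts.std()

# Purely illustrative exposure levels: mean detected photons per pixel
for n in (100, 400, 1600):
    print(f"{n:5d} photons/pixel -> SNR ~ {measured_snr(n):5.1f} "
          f"(sqrt(N) = {np.sqrt(n):5.1f})")
```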

The invention of X-ray CT relied on the advent of digital computers that could solve the mathematical problem of reconstructing objects from their projections. This field has subsequently advanced dramatically, with the implementation of multiple parallel detectors that enable 3D scanning of large volumes in very short times. Again, however, the primary limitation on image quality arises from the need to minimize the amount of radiation that a patient is exposed to. Indeed, recent reports cast doubt on the prospect of continuing to use current levels of radiation in routine practice. For example, David Brenner and Eric Hall of Columbia University Medical Center (N. Engl. J. Med. 357 2277) stated that “when organ-specific cancer risk was adjusted for current levels of CT usage, it was determined that 1.5–2% of cancers may eventually be caused by the ionizing radiation used in CT”, confirming previous speculation that medical radiology can cause harmful side effects when used excessively on large populations. This has led to an emphasis on using radiation dose more efficiently, by tailoring CT and X-ray procedures more carefully and by improving detectors and reconstruction methods.
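The dose-versus-quality trade-off can be illustrated with a minimal sketch using the radon and iradon routines from the scikit-image library on a synthetic phantom (the phantom, the noise model and the photon budgets below are illustrative assumptions, not taken from this article or from any clinical protocol): the same filtered back-projection reconstruction visibly degrades as the simulated dose falls.

```python
import numpy as np
from skimage.transform import radon, iradon

rng = np.random.default_rng(1)

# Synthetic phantom: a uniform disc with a denser circular insert
n = 128
y, x = np.mgrid[:n, :n] - n / 2
phantom = (x**2 + y**2 < (0.45 * n)**2).astype(float)
phantom += 0.5 * ((x - 15)**2 + (y + 10)**2 < 8**2)

# Forward projection: the set of line integrals a CT scanner measures
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)

def reconstruct(sino, photons_per_ray):
    """Filtered back-projection after adding noise whose level is a crude
    stand-in for Poisson statistics at the given photon budget per ray."""
    noisy = sino + rng.normal(scale=sino.max() / np.sqrt(photons_per_ray),
                              size=sino.shape)
    return iradon(noisy, theta=angles)

high_dose = reconstruct(sinogram, photons_per_ray=1e5)
low_dose = reconstruct(sinogram, photons_per_ray=1e3)

# Same algorithm, same object: only the simulated dose differs
for name, recon in (("high dose", high_dose), ("low dose ", low_dose)):
    rms = np.sqrt(np.mean((recon - phantom)**2))
    print(f"{name}: RMS reconstruction error = {rms:.3f}")
```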

Even though technology will develop further, the trade-off between image quality and dose will remain, with any technical improvements likely to be mainly used to reduce patient exposures rather than increase image quality. Indeed, similar trade-offs are likely to arise in other modalities too, simply because image quality has improved so dramatically that any further increments may prove to be of only marginal value. Instead, other considerations such as safety and cost will take precedence.

Within ultrasound imaging, for example, the physical performance limits are dictated by conflicting requirements: shorter acoustic wavelengths resolve finer structures, but the fraction of the acoustic energy absorbed by tissue rises as the wavelength falls (the sketch after this paragraph puts rough numbers on this conflict). The necessary compromise between image quality and sound intensity is already near the limit of what can be achieved safely, so dramatic improvements in image quality are unlikely. On the other hand, developments in materials, microelectronics and signal processing will help to make ultrasound systems much smaller and less complex. Indeed, high-resolution imaging on an iPhone or similar device is already a reality. This will accelerate the dissemination of ultrasonography out of traditional hospital-based departments and into physicians' offices, emergency first-response settings, the battlefield and the bedside, where portable, simpler systems may be adequate.
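Returning to that resolution-versus-penetration conflict: assuming textbook ballpark figures of about 1540 m/s for the speed of sound in soft tissue and roughly 0.5 dB cm⁻¹ MHz⁻¹ for attenuation (neither value comes from this article), the sketch below estimates the wavelength, and hence the achievable resolution, alongside the depth at which a nominal 60 dB round-trip loss budget is used up.

```python
# Rough numbers for the resolution-versus-penetration trade-off in
# soft-tissue ultrasound. All constants are assumed textbook values.
C_TISSUE_M_PER_S = 1540.0        # speed of sound in soft tissue
ATTEN_DB_PER_CM_PER_MHZ = 0.5    # typical attenuation coefficient
BUDGET_DB = 60.0                 # assumed tolerable round-trip loss

for f_mhz in (2.0, 5.0, 10.0, 15.0):
    wavelength_mm = C_TISSUE_M_PER_S / (f_mhz * 1e6) * 1e3
    # round trip: the pulse travels to the reflector and back
    max_depth_cm = BUDGET_DB / (2 * ATTEN_DB_PER_CM_PER_MHZ * f_mhz)
    print(f"{f_mhz:4.1f} MHz: wavelength ~ {wavelength_mm:.2f} mm, "
          f"usable depth ~ {max_depth_cm:.0f} cm")
```

Higher frequencies sharpen the picture but shrink the accessible depth, which is why different clinical applications settle on very different operating points.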

MRI and nuclear medicine

MRI remains an expensive, capital-intensive technique, but the advances made over the past 30 years have been truly dramatic. The remarkable improvements in image quality and speed have stemmed directly from technological advances in the design and construction of superconducting magnets, in radio-frequency (RF) and digital technologies, and in the engineering of the coils and amplifiers that control magnetic field gradients. These developments have been accompanied by inventive conceptual advances such as parallel imaging, compressed sensing, and new acquisition and reconstruction techniques.
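To see why such acquisition tricks matter, consider the toy example below (a deliberately simplified sketch, not any scanner's actual reconstruction): keeping only every other phase-encode line of k-space would halve the scan time, but a naive zero-filled reconstruction folds the image over on itself. Parallel imaging and compressed sensing are, in essence, principled ways of recovering the missing information from such incomplete data.

```python
import numpy as np

# Toy 2D "image": a few rectangles of different intensity
img = np.zeros((128, 128))
img[30:60, 40:90] = 1.0
img[70:100, 20:60] = 0.6

# Fully sampled k-space (the raw MRI data) and its reconstruction
kspace = np.fft.fftshift(np.fft.fft2(img))
full_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Keep only every other phase-encode line (acceleration factor R = 2):
# acquisition time halves, but zero-filled reconstruction shows aliasing.
undersampled = np.zeros_like(kspace)
undersampled[::2, :] = kspace[::2, :]
naive_recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

print("fully sampled, max error:   ", np.abs(full_recon - img).max())
print("R = 2 zero-filled, max error:", np.abs(naive_recon - img).max())
```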

Current state-of-the-art MRI systems operate at far higher fields than early scanners, but they run up against limitations set by patient safety (especially the risk of heating from absorbed RF energy) and by the fundamental behaviour of high-frequency electromagnetic fields in complex dielectric and conducting media; they are also much more expensive. The spatial resolution achieved routinely in clinical practice is now so good that significant increases will provide diminishing returns on the effort and expense required to achieve them. Future advances in MRI will therefore focus more on simplifying imaging protocols so that data can be gathered faster and patients can be seen more quickly. These developments will not demand more advanced hardware or technological breakthroughs, but rather refinements to how data are acquired and processed using hardware available today.

Nuclear-imaging methods, such as single photon emission computed tomography (SPECT) and positron emission tomography (PET), have advanced from simple camera systems that captured projections of 3D distributions of radiolabelled compounds into tomographic scanners that can accurately resolve the distribution and kinetics of an array of either gamma-ray or positron emitters. The sensitivity of nuclear imaging is limited by the number of radioactive events detected, which in turn depends on the activity within the body, as well as the efficiency of the detectors and acquisition. The information obtained depends on the target-to-background ratio of activities, which is an indicator of the radiotracer’s specificity in localizing a target of interest, such as a tumour.
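A crude worked example (assuming pure Poisson counting statistics and hypothetical count levels, as a simplification of real emission imaging) shows how both the number of detected events and the target-to-background ratio determine whether a hot spot stands out from the noise.

```python
import numpy as np

def detectability(target_counts, background_counts):
    """Simple contrast-to-noise ratio for a 'hot spot' against background,
    assuming pure Poisson counting statistics (a deliberate simplification)."""
    return (target_counts - background_counts) / np.sqrt(target_counts + background_counts)

# Hypothetical counts collected in a lesion-sized region of interest
background = 200.0
for ratio in (1.5, 2.0, 4.0):
    target = ratio * background          # target-to-background ratio
    print(f"T/B = {ratio:.1f}: CNR ~ {detectability(target, background):.1f}")
```

More activity, more efficient detectors or a more specific tracer all improve the same figure of merit, which is why chemistry and instrumentation pull in the same direction here.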

Both PET and SPECT are ultimately limited by considerations of the radiation dose absorbed within tissues and the specificity of the radiotracer. There are certainly improvements possible in system design, image acquisition and reconstruction, but in reality, major increases in spatial resolution are unlikely and most efforts are focused on reducing dose and designing new compounds that report more specifically on molecular events in tissue. In future, the impact of chemistry on nuclear imaging is likely to be greater than that of physics or engineering.

A look to the future

For almost all modalities, we are very close to the practical limits on spatial resolution. The speed of image acquisitions may increase, but sensitivity for detecting pathologies is limited by radiation dose or other safety considerations. From this, we may predict that, in contrast to the imperatives of the last 50 or so years, increases in image quality will not be major drivers of imaging technology in the near future. Instead, technical innovation will be used to reduce cost, scanning time and radiation exposures. We will no longer strive simply for the “best” images possible, but will make more prudent judgements about what compromises (in dose, speed, cost, resolution, sensitivity, patient tolerance) to make.

If striving to improve imaging technology is destined to be less important, the role of the imaging physicist will need to be reassessed. One major goal should be to get a better understanding of what affects the signals used to construct images – the physical and physiological factors that modulate the behaviour of different energy forms in the body – to help interpret images better and derive more information. Remarkably, this has been a somewhat under-emphasized sub-discipline of imaging, but one that should drive the development of new techniques.

Many of the biological events associated with pathological disorders have a physical signature that can be measured. This could be a change in tissue composition caused by fibrosis, which makes tissues stiffer; a physiological parameter such as reduced blood flow in arteries; or perhaps a change in an electromagnetic property such as conductivity or magnetic susceptibility. Physicists have excelled at devising ways to measure and map these properties, but a detailed relationship between the underlying events and the imaging measurements is rarely available. Major gaps in our understanding of important contrast mechanisms remain in every modality and every application.

This state of ignorance stops us from making the most of the data within images and impedes the development of more quantitative tissue characterization. So while functional MRI, for example, has revolutionized our ability to study the functional architecture of the brain, the precise relationship between the MRI signals used to map neural activity and the underlying neurochemical and electrophysiological processes is poorly understood. Similarly, although MRI differentiates tissues based on parameters such as spin relaxation times, we have no quantitative models of the causes of variations in relaxation that accurately predict the values actually measured. Bridging such knowledge gaps should be a major area of development for imaging scientists in the future.

One result of a better understanding of the factors that affect image contrast should be the development of quantitative imaging biomarkers. This term has only recently entered the lexicon of imaging science but has a much longer history in, for example, the pharmaceutical industry. Formally, a biomarker is a characteristic that can be objectively measured and evaluated as an indicator of a specific biological process or a measure of a response to a stimulus, intervention or perturbation. A quantitative imaging biomarker is then a measurement of a characteristic that needs to be localized and/or mapped spatially.

Imaging reveals not only where processes occur; it can also report their spatial heterogeneity, and the relationships and variations between different regions. Clinical imaging today is still largely qualitative: most images do not represent “absolute” properties (although CT is an exception). It is therefore hard to interpret measurements and to compare data acquired at different times or on different scanners. Quantitative imaging has grown considerably in recent years, allowing parametric maps of intrinsic tissue properties to be derived that reflect particular physiological phenomena or biophysical properties (a simplified example follows below). But significant practical obstacles to the adoption of quantitative imaging remain.
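As an example of the kind of parametric mapping mentioned above: in a multi-echo MRI acquisition the signal in each voxel decays roughly as S(TE) = S0 exp(−TE/T2), and fitting that model voxel-by-voxel turns arbitrary signal intensities into a T2 map in milliseconds. The sketch below does this with synthetic data and a simple log-linear fit; real mapping pipelines are far more careful about noise, motion and model validity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ground truth: two "tissues" with different T2 values (in ms)
true_t2 = np.full((64, 64), 80.0)
true_t2[20:44, 20:44] = 40.0
s0 = 1000.0

# Simulate a multi-echo acquisition: S(TE) = S0 * exp(-TE / T2) + noise
echo_times = np.array([10.0, 30.0, 50.0, 70.0, 90.0])        # ms
signals = s0 * np.exp(-echo_times[:, None, None] / true_t2)
signals += rng.normal(scale=5.0, size=signals.shape)

# Voxel-wise log-linear fit: log S = log S0 - TE / T2
log_s = np.log(np.clip(signals, 1e-6, None))
slope, intercept = np.polyfit(echo_times, log_s.reshape(len(echo_times), -1), deg=1)
t2_map = (-1.0 / slope).reshape(64, 64)

print("mean fitted T2 inside insert:", t2_map[20:44, 20:44].mean())   # ~40 ms
print("mean fitted T2 outside:     ", t2_map[:10, :10].mean())        # ~80 ms
```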

In summary, imaging will continue to advance and to provide an essential set of tools for broad use in basic research, drug development, treatment selection, assessment of treatment response and more. Quantitative, robust and reliable imaging biomarkers will continue to contribute unique information for research and may be adopted for routine clinical applications. But in the near future, there is likely to be less emphasis on pushing the limits of image quality and more on understanding how to use images to their full capacity.

• This article originally appeared in the Physics World Focus on Medical Imaging.