Artificial Intelligence and Health

The mention of artificial intelligence (AI) often conjures the figures of industry consultants and tech moguls overhyping it, dystopian images of superintelligences surpassing our own, or the very real headlines about issues arising during the early adoption of certain systems, e.g., facial recognition.

Some AI systems are already ubiquitous; many more are currently either underdeveloped or undertested. Therefore, while the latter hold great potential, not all AI-based technologies have a massive impact on our daily lives yet, particularly in health care.

However, many AI systems will mature, so it remains crucial to exercise caution, take the time to build robust legal and ethical safeguards, and inform their development. The risks will likely lurk in our interactions with AI, our use and misuse of the technology, and our perception of what it can or cannot do, as well as what we think it should or should not do.

But what is AI?

AI is an umbrella term that encompasses the different ways in which machines and systems “display human-like capabilities such as reasoning, learning, planning, and creativity.”[1] There are many types of intelligent systems, including software-based ones (e.g., algorithms) and embodied ones (i.e., robots).

The former can be further classified according to the systems’ reasoning and learning processes. For example, the Max Planck Institute for Intelligent Systems researches Empirical Inference, an approach that could improve algorithms used for predictive modelling, where correlations are often mistaken for causation.
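
To make this pitfall concrete, here is a minimal, hypothetical Python sketch (our own illustration, not an example drawn from the institute’s research). An unobserved confounder drives both a measured marker and the outcome, so a purely predictive model assigns the marker a sizeable weight even though the marker has no causal effect:

```python
# Illustrative only: a spurious correlation created by an unobserved confounder.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
n = 5000

confounder = rng.normal(size=n)                   # unobserved common cause
marker = confounder + rng.normal(size=n)          # measured feature with no causal effect
outcome = 2.0 * confounder + rng.normal(size=n)   # outcome driven by the confounder only

# A predictive model still finds the marker "useful" because it correlates with
# the outcome through the confounder; intervening on the marker would change nothing.
model = LinearRegression().fit(marker.reshape(-1, 1), outcome)
print(f"Estimated effect of marker on outcome: {model.coef_[0]:.2f}")  # about 1.0, not 0.0
```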

At the Centre for Research and Technology Hellas (CERTH), one of TeNDER’s partners, researchers are also exploring embodied intelligent systems, focusing on human-robot interaction and robotic vision, among other things.

To read more about developments in AI, you can consult various sources, such as the websites of the Max Planck Institute and CERTH above, as well as Nature, the Stanford Social Innovation Review, and the European Commission’s page on AI.

AI-powered health care

The World Health Organization (WHO) has helped identify areas where the application of AI holds a lot of promise. In clinical settings, for example, AI can support diagnostics, particularly in radiology, pathology, and medical imaging. Recent studies have shown that when it comes to diagnosing and predicting the risk of developing cancer, AI can match and sometimes even surpass human medical judgement.

Yet these results have not been widely replicated, nor have they been validated outside the contexts in which the algorithms were trained. Many of the results pertain to specific types of cancer, and even as accuracy improves, researchers themselves caution that these AI systems should not replace pathologists or physicians.[2]

While such advances will become increasingly important in supporting medical decision-making, some risks need to be acknowledged so that they can be addressed. For instance, the WHO notes that in regions of the world facing chronic medical staff shortages, these types of predictive technologies could become the norm. This could result in a problematic trade-off, where healthcare providers in lower-income countries invest in the digital infrastructure needed to support such AI systems rather than in much-needed healthcare personnel.

Other areas that hold great potential for AI include health research and drug development. Specially trained algorithms can comb through vast amounts of data to identify clinical practices that yield better results and help optimise approaches to care and other clinical protocols. In drug development, AI has been used to identify potential treatments for Ebola and Covid-19. It should be noted again that this does not mean we can cut corners: drugs still need to undergo testing to ensure their safety and efficacy, and the entire process requires human oversight.[3]

The pandemic forced a drastic shift from hospital to home care and accelerated the adoption of telemedicine solutions where this could be done safely. Beyond the pandemic, remote health monitoring may become more common as societies age, a trend that is often accompanied by a rise in co-occurring chronic conditions. One of the stronger arguments in favour of assistive technologies powered by AI lies in their potential to improve the quality of life of elderly patients, who may enjoy greater autonomy in their own homes.

Such approaches could help patients, caregivers, and physicians keep track of health markers beyond clinical settings. Exacerbations often happen outside the doctor’s office; therefore, another benefit may be the ability to catch markers of deterioration that could otherwise result in severe complications.
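
As a loose illustration of what catching such markers might look like in practice, the short Python sketch below applies a simple baseline-and-deviation rule to daily home readings. The marker, window size, and threshold are illustrative assumptions on our part, not a description of TeNDER’s actual system:

```python
# Hypothetical, simplified home-monitoring logic: flag a patient for follow-up when
# the latest reading drifts well outside their own recent baseline.
from statistics import mean, stdev

def flag_deterioration(daily_values, baseline_days=14, z_threshold=2.0):
    """Return True if the latest reading deviates strongly from the patient's baseline."""
    if len(daily_values) <= baseline_days:
        return False  # not enough history yet to establish a baseline
    baseline = daily_values[:baseline_days]
    mu, sigma = mean(baseline), stdev(baseline)
    latest = daily_values[-1]
    return sigma > 0 and abs(latest - mu) / sigma > z_threshold

# Example: a resting heart rate that creeps upward over the last few days
readings = [62, 63, 61, 64, 62, 63, 62, 61, 63, 64, 62, 63, 62, 63, 70, 74, 79]
print(flag_deterioration(readings))  # True -> worth a closer look by a clinician
```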

Because of these and other potential benefits, the EU and other institutions across the world are increasingly funding multidisciplinary research in the health sector (among others) that integrates AI or builds care approaches around it.

(Mitigating) Risks

Some considerations should remain at the forefront of all developments in the field:

  • Taking into consideration the unseen environmental costs of AI. For example, storing the expanding datasets considered essential “to train machine learning algorithms” requires acres and acres of land for data centres, as well as hundreds of megawatts of electricity to keep servers running continuously.[4]
  • Communicating clearly about what AI can and cannot do. AI can often be perceived as neutral or infallible, but it is neither. It is informed by our world and made by humans, which is why, as Michael O’Flaherty [5] has stated, “people need to be aware when AI is used, how it works and how to challenge automated decisions.”
  • Understanding bias. This links to the previous point: because humans build and oversee AI (and should do so), our biases become embedded in such systems. Here, we are not merely talking about ‘neutral’ bias, in the sense that AI systems are built to perform specific tasks, but rather the type of bias that can be harmful. For instance, numerous studies have found that many seemingly ubiquitous algorithms are biased in such a way that they reproduce racist stereotypes and other forms of prejudice (see here, here, and here). A minimal sketch of one way such disparities can be surfaced follows this list.
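
To make the bias point a little more tangible, here is a minimal, hypothetical sketch of one common check: comparing a system’s positive-prediction rates across demographic groups. The metric, data, and group labels are illustrative assumptions, not the methodology of any specific study cited above:

```python
# Illustrative sketch with made-up data: a "demographic parity" check. A large gap
# is not proof of harm on its own, but it signals that the system deserves scrutiny.
def demographic_parity_gap(predictions, groups, positive=1):
    """Return the gap between the highest and lowest positive-prediction rates per group."""
    counts = {}
    for pred, group in zip(predictions, groups):
        hits, total = counts.get(group, (0, 0))
        counts[group] = (hits + (pred == positive), total + 1)
    rates = {g: hits / total for g, (hits, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening decisions for applicants from two groups
preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                                 # {'A': 0.8, 'B': 0.2}
print(f"Demographic parity gap: {gap:.1f}")  # 0.6 -> worth investigating
```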

So, how are researchers across disciplines, sectors, and institutions around Europe promoting the potential of AI whilst ensuring that they don’t lose sight of the considerations listed above and other associated risks?

Foremost, the European Union, which also governs the Framework Programme for Research and Innovation that funds projects like TeNDER, has formed a High-Level Expert Group on Artificial Intelligence to provide advice on its artificial intelligence strategy. In April 2019, the Expert Group published the Ethics Guidelines for Trustworthy AI to guide initial policy on AI at the EU level.

It sets out three key principles: AI should be (1) Lawful, (2) Ethical, and (3) Robust. It then lays out the different domains these three principles cover and how trustworthy AI will initially be assessed. For example, the first principle, linked to law, covers things such as the way data is gathered, processed, and used, as well as the protection of fundamental rights. The ethical dimension is built on four primary principles:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

Respect for human autonomy and explicability are crucial, as they link strongly to a core condition for AI development: it should always allow for human oversight. This latter point is especially relevant in the context of deep-learning algorithms, where researchers are not always privy to the process by which an algorithm arrives at a decision or a prediction. This lack of transparency violates the principle of explicability.
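
Purely as an illustration of what probing such opacity can look like in practice (our own example, not a technique named in the guidelines), the sketch below uses permutation importance, a common model-agnostic check, to estimate how much each input contributes to a model’s predictions:

```python
# Illustrative only: permutation importance measures how much a model's accuracy
# drops when each input feature is shuffled, giving a rough view into its behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```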

Finally, the robustness dimension addresses, among other issues, the safety of these systems. For instance, they should be as resilient as possible to cyberattacks. It also describes the ecosystem necessary to avoid bias and discrimination, as well as to ensure accountability.

This guidance, coupled with the European Commission’s regulation proposal released to the European Parliament and the Council in April 2021, represents an important step towards the future governance of AI.

The proposal attempts to harmonise the rules that will govern AI across the EU. The guidance integrates technical and non-technical elements necessary to harness the potential of artificial intelligence while mitigating the risks associated with it.

AI research and development are highly interdisciplinary endeavours; therefore, it makes sense that governance models are informed by engineers, developers, philosophers, mathematicians, lawyers, ethicists, social scientists, and the full range of people whose domains contribute to the creation of such systems.

REFERENCES

[1] European Parliament (2021, March 29). What is artificial intelligence and how is it used? European Parliament News: https://www.europarl.europa.eu/news/en/headlines/society/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used

[2] Savage, N. (2020, March 25). How AI is improving cancer diagnostics. Nature. https://www.nature.com/articles/d41586-020-00847-2

[3] World Health Organization (2021). Ethics and Governance of Artificial Intelligence for Health, WHO Guidance. Geneva: World Health Organization Press.

[4] Halpern, S., quoting Kate Crawford (2021, October 21). The Human Costs of AI. The New York Review of Books, p. 29.

[5] Michael O’Flaherty is the Director of the European Union Agency for Fundamental Rights. See the full statement: https://fra.europa.eu/en/news/2020/now-time-ensure-artificial-intelligence-works-europeans
