Artificial intelligence (AI) is one of the most popular topics in the tech industry, along with IoT, cloud and blockchain, to mention just a few. Although it is a very promising technology, it also carries very high expectations, quite often beyond the capabilities AI provides today.
As with every new technology, a lot of questions are raised about the benefits, as well as the risks that stem from AI misuse, whether intentional or not. This is where ethical AI comes into the picture. From a philosophical point of view, ethics tries to define what is good and what is evil. Ethical AI tries to define a set of guidelines, both for the humans who design, create and use AI systems and for the expected behavior of the algorithms that machines execute.
When we look at the health care industry, we see multiple problems: increasing costs, a lack of medical staff and growing demand for services. Technology can help us tackle all of those challenges, and AI will play a vital role in transforming the health care industry as we know it. You can watch the Forbes conference Artificial Intelligence and Ethics Mandate and hear field experts share their experience addressing bias, algorithmic aspects, regulations and privacy protection. When I think about ethical AI, three key aspects come to mind: governance, fairness and explainability. Let me elaborate on that.
AI governance
Data is the oil of the 21st century, and it fuels artificial intelligence. However, as we generate and collect more data, privacy concerns arise. The Cambridge Analytica data scandal, for example, proved that given enough information, one can predict people's behavior and influence individuals' decisions for political gain, all without the knowledge or consent of the targeted people. Similar concerns arise in health care, so let us focus on that space.
There are companies that provide genetic testing and collect databases of people’s digitalized DNA. This is extremely sensitive information that can do harm if it gets into the wrong hands. At the same time, these genetic data repositories may be invaluable to medical research. We need to protect privacy, but at the same time enable scientific progress.
Therefore, when it comes to using medical data, we need to provide appropriate governance, oversight and security measures. We must make sure medical data is only used within an agreed scope and accessed by authorized personnel and algorithms. We must also ensure that every use of this data goes through objective validation to confirm that it adds value and has no potential to harm.
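As a toy illustration of these governance measures, the sketch below checks every data request against an agreed scope and an authorization list, and writes an audit trail. All names, roles and scopes here are invented for illustration; a real system would rely on the organization's identity and consent-management infrastructure.

```python
# Toy sketch: scope-limited, audited access to medical records.
# Scopes, users and roles are purely illustrative assumptions.
from datetime import datetime, timezone

AGREED_SCOPES = {"oncology_research", "treatment_planning"}  # hypothetical
AUTHORIZED = {
    "dr_jansen": {"treatment_planning"},    # hypothetical clinician
    "research_bot": {"oncology_research"},  # hypothetical algorithm
}

audit_log: list[dict] = []

def access_record(user: str, purpose: str, record_id: str) -> bool:
    """Allow access only for an agreed purpose, by an authorized party."""
    allowed = purpose in AGREED_SCOPES and purpose in AUTHORIZED.get(user, set())
    # Every attempt, allowed or not, is logged for later objective review.
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user, "purpose": purpose,
        "record": record_id, "allowed": allowed,
    })
    return allowed

print(access_record("dr_jansen", "treatment_planning", "patient-42"))  # True
print(access_record("dr_jansen", "marketing", "patient-42"))           # False
```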
AI fairness
The learning part of machine learning can only be as effective as the quality, quantity and representativeness of the data used to train the models. One common challenge is the underrepresentation of certain population groups in the training data, often called AI bias. As a result, the AI model will often produce worse results for the less-represented groups.
Data with a gender imbalance will result in worse model accuracy for the underrepresented gender. Skin-cancer detection models trained mostly on light-skinned patients will perform worse on individuals with darker skin. In health care, poor model performance for a particular group may yield unreliable information, leading to an incorrect diagnosis or suboptimal treatment.
Feeding AI models biased data introduces a systematic bias that we usually want to overcome. Given the sensitive nature of AI outcomes in health care, it is vitally important that models are trained on diversified data that has been verified against potential underrepresentation of certain population groups.
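A simple first check for this kind of bias is to break model metrics down per subgroup instead of looking only at overall accuracy. Below is a minimal sketch of that idea; the column names, the toy data and the accuracy gap it shows are all illustrative assumptions, not results from a real model.

```python
# Minimal sketch: compare model accuracy per demographic subgroup.
# Columns "sex", "y_true" and "y_pred" are hypothetical names.
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return the model's accuracy separately for each subgroup."""
    return df.groupby(group_col).apply(
        lambda g: (g["y_true"] == g["y_pred"]).mean()
    )

# Toy labels and predictions: errors are concentrated in the smaller group.
df = pd.DataFrame({
    "sex":    ["F"] * 20 + ["M"] * 80,
    "y_true": [1, 0] * 10 + [1, 0] * 40,
    "y_pred": [1, 1] * 10 + [1, 0] * 40,
})

print(accuracy_by_group(df, "sex"))  # F: 0.5, M: 1.0
# A large accuracy gap between groups is a red flag that one group is
# underrepresented (or mislabeled) in the training data.
```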
AI explainability
Some of the algorithms typically used in AI systems (such as neural networks) are considered “black boxes.” This means that we provide the algorithm with training data and ask it to learn to recognize specific patterns in that data. We can parametrize the algorithms and change the input data by adding new characteristics. However, we have no direct insight into why the algorithm classified a particular observation the way it did.
As a result, we may have a very accurate model that produces unexpected results. In a well-known example, researchers trained an algorithm to distinguish between dogs and wolves. As it turned out, the algorithm tended to base its decision on the background of the picture rather than on the animal's silhouette or fur color: in the training data, all the wolves had trees or forest in the background.
We can use explainability techniques to understand what black-box models base their decisions on. In health care, a misclassification may result in health- and even life-threatening situations. Therefore, it is very important to understand and verify which aspects of the data the algorithms use.
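One common model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's score drops. The sketch below applies scikit-learn's permutation_importance to a black-box model trained on synthetic data; the dataset and the choice of model are assumptions for illustration only.

```python
# Minimal sketch: probing what a "black box" model relies on.
# Synthetic data stands in for real clinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an important feature should noticeably hurt the test score.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
# If the top feature is something a clinician would consider irrelevant
# (the "forest in the background"), the model deserves closer scrutiny.
```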
Sometimes less complex models provide better explainability. Linear regression or decision trees might offer enough accuracy while giving good visibility into which variables and factors are key for the model. When we use a more complex model, we should use tools that support explanation. In both cases, subject matter experts should verify the explanation, looking for potential errors and doubts in the choice of variables and characteristics the model uses.
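To illustrate the simpler-model route, the sketch below fits a shallow decision tree and prints its decision rules in plain text, so a subject matter expert can read and challenge them directly. The data is synthetic and the feature names are hypothetical.

```python
# Minimal sketch: an inherently interpretable model.
# Feature names are invented; real ones would come from the clinical data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
feature_names = ["age", "blood_pressure", "bmi", "glucose"]

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

# export_text renders the tree as readable if/else rules for expert review.
print(export_text(tree, feature_names=feature_names))
```

If such a shallow tree reaches acceptable accuracy, its transparency may outweigh the small performance gain of a more complex model.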
AI at Amsterdam UMC
AI algorithms in health care can support diagnosis and augment doctors in treatment. One example is the cooperation between SAS and Amsterdam University Medical Center (UMC) to evaluate computed tomography (CT) scans. Computer vision deep learning algorithms evaluate the scans to increase the speed and accuracy of chemotherapy response assessments. The algorithms evaluate total tumor volume (compared with the typically two-dimensional measurement radiologists perform), which helps doctors determine more accurately which treatment strategy to choose.
But, as Dr. Geert Kazemier from Amsterdam UMC notes, AI technology must be transparent and open if it is going to revolutionize health care. “If you create algorithms to help doctors make decisions, it should be explainable what that algorithm is actually doing,” he says. “Imagine if an algorithm came up with something bad for the patient and the doctor follows it. What’s the effect of that? To err is not only human.”
Piotr Kramek is Data Science team leader at SAS.