All in all, I believe that the best way to develop AI technology
ethically is through better engagement with the public, both in terms
of explaining how advanced AI algorithms work and in terms of gathering opinion
on what it is acceptable for an AI system to do. It may also involve developing
AI differently for cultures around the world where public attitudes
differ; for example, “Japanese culture … has a very different and much
more optimistic view about artificial life than we have in our Western culture”
[5]. The issue of ethical AI is a complex one, and one
which must not be underestimated if we wish the benefits of AI to reach
all aspects of our lives.

The issue of mistrust is also present in autonomous vehicles.
While there are arguments both for and against autonomous vehicles, one issue
which cannot be overlooked is that some members of the public simply would not
trust a vehicle with absolutely no human involvement. As the technology
advances and sensors and AI become better at avoiding accidents, trust in these
systems will improve; however, there may still be a problem inherent
in the programming of the car itself. For example, consider a situation where a child
suddenly runs out into the road and the car’s choice is either to crash into the child
or to swerve and crash elsewhere, injuring the driver and possibly other
bystanders. A human driver would most likely do all they could to
save the child, since, as a society, we would prioritise the safety of the child over
that of the driver. However, in this scenario it may be difficult
for the AI to distinguish between a child and an adult, so it may produce (to our eyes)
ethically wrong decisions some of the time. The programming of the AI is also a
problem, as the programmer would have to explicitly tell the AI what to do in
these sorts of situations, which results in the car being told to prioritise
either the life of the driver or the lives of others. In a study by Jean-Francois Bonnefon, participants were presented
with ethical dilemmas an autonomous car might face and asked for their opinion. The
results showed that people would generally choose the course of action which minimised
the death toll; however, they “wished others to cruise in utilitarian autonomous
vehicles, more than they wanted to buy utilitarian autonomous vehicles
themselves” [3]. Of course, there is no correct answer to these dilemmas,
but the findings highlight that if AI systems are to be integrated into society,
then public opinion must be gathered on what actions the systems may acceptably
take. Without public backing there will always be a level of
mistrust around these systems, and they will not reach their full potential.
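To make concrete what “explicitly telling the AI what to do” would look like, the sketch below (in Python, with entirely hypothetical names; no real vehicle exposes its ethics this way) hard-codes the utilitarian policy the survey participants endorsed. The point is that someone must write down, in code, whose harm counts and by how much.

```python
# Purely illustrative: a hard-coded collision policy of the kind discussed
# above. All names are hypothetical; this is not how any real autonomous
# vehicle is programmed.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its predicted casualties."""
    action: str
    harm_to_occupants: int
    harm_to_others: int

def choose_manoeuvre(outcomes: list[Outcome]) -> Outcome:
    # A "utilitarian" rule: minimise total expected harm. Weighting the two
    # terms differently would prioritise the driver or the bystanders --
    # exactly the choice the programmer is forced to make.
    return min(outcomes, key=lambda o: o.harm_to_occupants + o.harm_to_others)

dilemma = [
    Outcome("brake in lane", harm_to_occupants=0, harm_to_others=1),
    Outcome("swerve off road", harm_to_occupants=1, harm_to_others=0),
]
print(choose_manoeuvre(dilemma).action)  # tie on total harm -> first option wins
```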

One successful application of deep learning algorithms is
medical diagnosis. There is great hope that AI can spot patterns in medical
data and make diagnoses much more accurately than a doctor; for example, Oxford
University recently announced that it has developed an AI which can predict heart
disease and which “greatly outperformed his fellow heart specialists” [2].
However, even with these success rates, if the programmer cannot explain to a
doctor why the AI makes a diagnosis, then the doctor is being asked to trust it blindly.
Taking the AI’s word at face value may prove ineffective in practice:
when patients go to a doctor, they expect a diagnosis based on
conclusions the doctor draws from their symptoms. Whereas a doctor can
explain their reasoning to the patient, the AI would be a black box with no explanation
as to why it believes a patient has a condition. The public would therefore be
unlikely to accept its conclusions, especially if they required any invasive or
complex treatment, as they would not believe the results or be willing to
undergo new treatments without the science to back up the diagnosis. Of
course, this problem could be overcome: if the inner workings of these AI systems
could be exposed and explained, then both doctors and patients would be able to
trust them.
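One family of techniques for exposing those inner workings is model-agnostic explanation. The sketch below is a minimal, hypothetical illustration (plain NumPy, with a toy stand-in for the trained diagnostic model, not the Oxford system): permutation importance asks how much accuracy drops when each input feature is scrambled, giving a doctor at least a crude, checkable signal of what the model relies on.

```python
# Minimal sketch of permutation importance for a black-box classifier.
# `predict` can be any function from a feature matrix to labels; the
# features and the toy model below are hypothetical.
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Accuracy drop when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])            # destroy feature j's information
        drops.append(baseline - np.mean(predict(X_perm) == y))
    return drops

# Toy stand-in for a trained model: predicts disease from blood pressure only.
X = np.array([[142, 1], [118, 0], [160, 1], [111, 0], [155, 0], [125, 1]], float)
y = np.array([1, 0, 1, 0, 1, 0])             # columns: [blood pressure, smoker]
predict = lambda X: (X[:, 0] > 130).astype(int)
print(permutation_importance(predict, X, y))  # drop for BP; zero for smoker,
                                              # since the model ignores it
```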

A deep neural network, one of the most powerful AI
programming techniques, works by using an extensive network of interconnected
“neurons” arranged into several layers. Each neuron takes an input, performs
a calculation, then passes the result to the neurons in the next layer down,
and this continues until the network produces a final output. On top
of this, the network can tweak the calculations performed by individual neurons
in order to teach itself to produce the correct output [1].
As you can imagine, the huge scale of such a network makes it incredibly
hard for a programmer to interpret why it produces a certain output, and
nigh on impossible to explain its reasoning.
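As a deliberately tiny illustration of this description, the sketch below (plain NumPy; the layer sizes, learning rate, and XOR task are illustrative assumptions, not anything from the essay’s sources) builds a two-layer network of such neurons, passes inputs forward through the layers, and “tweaks” the weights by gradient descent until the network teaches itself the right outputs.

```python
# A toy feedforward neural network: each "neuron" computes a weighted sum of
# its inputs followed by a nonlinearity, and the weights are repeatedly
# tweaked so the network learns XOR. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input layer  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden layer -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer takes an input, calculates, passes it down.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: the "tweaking" -- nudge every weight to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden;    b1 -= 0.5 * d_hidden.sum(axis=0)

print(out.round(2).ravel())  # should end up close to [0, 1, 1, 0]
```

Even this toy has dozens of weights with no individual meaning; a production network has millions, which is why tracing a single output back to a human-readable reason is so difficult.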

With the advancement of new techniques such as deep learning,
AI systems are becoming much more powerful and successful at tackling the
problems they were designed to face. However, as Will Knight highlights [1],
the most sophisticated AI systems rely on algorithms which are so complex that
it is impossible to know the reasoning behind the decisions they make. This
raises huge questions about how we integrate these systems into day-to-day
life and how much we trust the results they provide.