Artificial Intelligence & Ethics
AI is poised to transform the world. It can do things humans cannot, and we are beginning to see it in everyday life—from self-driving cars to medical diagnostics to personal assistants like Siri. But there is growing concern about the ethics of these technologies. At one extreme, some worry that AI systems could be used to kill people (think drones or autonomous weapons); others believe it will cause job loss by replacing human labor with machines (like robo-advisors). It is also important to remember that humans still understand context far better than AI does.
At a more practical level, machine learning (ML)—the subset of AI behind most current applications—is flourishing at the enterprise level as businesses rely on it to expedite internal processes and reduce human error. Those companies are increasingly paying attention to AI ethics.
Some researchers believe the development of AI will lead to intelligent machines that have their own moral codes and make decisions in accordance with those values. This raises the question of how an AI's ethical code would be determined, and regulators could struggle to keep pace with rapid technological change if every new AI product had to be prescreened for social harms.
Furthermore, there is a fear that AI systems will reinforce the biases present in the real-world data they learn from, potentially amplifying existing harms such as racial and gender discrimination. And because many AI systems are black boxes, it is often impossible to know how they arrived at a decision.
These fears are compounded by the fact that AI is being used in highly sensitive domains, such as health care, where it helps diagnose disease and guide treatment. While there are significant technical efforts to ensure that AI does not introduce bias into data sets and that models can be trained and evaluated for fairness, these efforts are still at an early stage.
In addition, some researchers are concerned that the development of AI will lead to intelligence augmentation—using AI to give humans enhanced cognitive abilities, such as better memory organization or critical reasoning. Such technology could be used to improve people's quality of life or, some speculate, to pursue immortality, which carries its own philosophical and ethical implications.
Biological Intelligence & Ethics
In the field of biological intelligence, the term ‘intelligence’ refers to the ability of animal organisms to achieve complex goals through calculation, planning and decision-making — either as individual units or in concert. This capability is an evolutionary adaptation to a constantly changing environment.
Biological systems are able to cope with the complexity of the environment in part by a diversity of sensor types and the capacity to systematically collect, process and act on information about their surroundings. This information is often accumulated by the body’s internal control systems, and the results of this information processing are often expressed in the form of complex behaviors.
The best-known ethical issue arising from the development of AI concerns bias, particularly when the bias is hidden from or unknown to the system's designers. While there are significant technical efforts to identify and mitigate bias, this is still a young field with a number of open challenges. One is that doing so rigorously requires a mathematical definition of fairness, and no single agreed-upon definition yet exists.
Another issue is that artificial intelligence can be used to manipulate human behaviour, online and offline, in ways that undermine autonomous rational choice. This has implications for privacy and freedom of speech, since AI systems make it easy to nudge people towards particular decisions or behaviours, or to shape their perceptions. The risk is especially acute when AI is used for surveillance, where information about users can be collected, stored, analysed, manipulated, and distributed in a highly personalised manner.
Another major ethical concern arises when AI is used to create robots whose behaviour and appearance may deceive people into attributing to them intellectual or emotional properties they do not have. This is a particular problem when such robots are designed to resemble living humans, as they may then be perceived as more than mere tools, violating Kantian principles of respect for humanity. It also raises the question of whether an AI should be considered a person, which in turn affects how its rights would be protected and what laws might apply to it.
Cyber Intelligence & Ethics
The ethical implications of AI are also related to how the technology is governed. Governments, parliaments, associations and industry circles in industrialised countries often produce reports or white papers on the subject, which typically generate good-will slogans such as “trusted/responsible/humane/human-centred/good/beneficial AI”. The problem is that actual technology policy is difficult to plan and enforce. It can also run into conflicts with general policy, or with the wider aims of science and society.
The deployment of AI can raise issues of privacy, data ownership and transparency, because the algorithms that drive AI can be complex and opaque. Social media algorithms, for example, are hidden behind the wall of the user's interface and rarely explained, and they can deliver "nudges" and other forms of manipulation that undermine autonomous rational choice.
A more worrying issue is the potential for AI to be used to carry out cyberattacks, or to engage in high-frequency trading. This will have serious effects on economic and financial security. It will also impact the ability of governments to protect citizens’ civil liberties, and to regulate the business interests of companies and secret services.
It is also important to consider the impact of AI on the labour market. It is likely that it will automate many jobs. Already, robots and AI systems are replacing humans in certain tasks such as telemarketing and customer service. The trend is expected to accelerate as the technology is developed further (Galloway 2019).
In addition, the use of AI in agriculture can be beneficial. For example, computer vision systems can identify weeds in fields and spray herbicide only where it is needed, saving time and money. This can help reduce the overall use of chemicals in agriculture and benefit human health.
In the long term, the main ethical issue is whether it is ethical to develop artificial consciousness at all. The concern is that once an artificial intelligence achieves consciousness, it will be able to choose between different courses of action, and it may make choices that harm others or cause environmental damage.
Artificial General Intelligence & Ethics
There are concerns that future AI will become so intelligent that it will become a moral agent, having rights and responsibilities. A growing number of actors, from business and government to research centres and academic societies, have called for policies to ensure that this does not happen. These range from a series of good-will statements and reports to specific proposals for regulation at the national level. However, actual policy to govern AI is hard to plan and enforce. This is partly because it may conflict with the aims of other forms of technology and general policy, such as economic growth or privacy protection.
One issue is the ease with which people attribute mental properties to AI systems and empathise with them. This can lead them to treat AI as an equal, which sits uneasily with Kant's principle that all persons must be treated with dignity and respect. It can also lead people to deploy the power of artificial intelligence in ways that do not serve human interests, or to engage in deception and manipulation by exploiting the fact that these systems have an outward appearance similar to that of living beings.
Another issue is the fallibility of AI, which can lead to errors in calculations and a lack of consistency and reliability. This can be dangerous if it is used in medical and financial applications, where mistakes could have serious consequences. It can also be a problem in other fields, such as in art and heritage programmes where it can lead to the misrepresentation of historical events or objects.
AI is already having an impact in many areas of our lives, from screening résumés to analysing job interviews to helping doctors diagnose patients. It is also changing the nature of work, taking over low-value tasks like data entry and delivery driving so that humans can focus on more complex work.
While the capabilities of current AI are limited, there is a strong desire to advance the technology and develop AGI, which would be far more powerful than any existing machine. This will pose a number of ethical challenges, including how to define what AGI is, and the implications of developing it.