A question of ethics

September 24, 2023

The increasing use of AI technologies is raising questions about their ethical application

Artificial intelligence, or the theory and development of computer systems able to perform tasks normally requiring human intelligence, is widely heralded as an ongoing revolution transforming science and society.

Approaches to AI such as machine learning, deep learning and artificial neural networks are reshaping data processing and analysis. Autonomous and semi-autonomous systems are increasingly used in a variety of sectors, including healthcare, transportation and manufacturing. In some narrow domains, AI already possesses ‘superhuman’ capabilities: in 1997, IBM’s Deep Blue supercomputer beat Garry Kasparov, then the world’s best chess player.

AI learns from the data we provide, so we can guide it in whatever direction we choose. An artificial intelligence tool trained on roughly a million screening mammography images can identify breast cancer with approximately 90 per cent accuracy when combined with radiologist analysis. AI can transform medicine in disease detection, disease classification, prediction of patient response to medication, and treatment planning. Furthermore, smart watches and smart phones can measure important features of a person’s health in real time, which could be very helpful to patients with serious illnesses. In some applications, AI’s main purpose is to improve the hospital room experience and simplify the process of preparing patients to continue their recovery at home. Virtual nurses can also reduce patient anxiety, improve safety, keep patients entertained and increase satisfaction with medical services.

As artificial intelligence technologies enter many areas of daily life, the problem of ethical decision-making, long a major challenge for AI, has caught public attention.

A major source of public anxiety about AI relates to artificial general intelligence (AGI) research, which aims to develop AI systems with capabilities matching and eventually exceeding those of humans. The age of AGI is likely still decades away, which gives us an opportunity to develop suitable standards for its practical implementation before it arrives.

The AI research community recognises that machine ethics will be a determining factor in the extent to which autonomous systems are permitted to interact with humans. Research has therefore emerged on technical approaches for enabling these systems to respect human rights and perform only actions that follow acceptable ethical principles.

To survive and grow, an industry must evolve continuously. Concerns that AI might jeopardise jobs for human workers, be misused by malevolent actors, elude accountability or inadvertently disseminate bias and thereby undermine fairness have been at the forefront of recent scientific literature and media coverage.

Governments in the US and Europe have started to introduce legislation on the commercial use of data, which in turn affects how AI can be applied to those data sets. However, given the global nature of online businesses, international agreements will be needed to regulate the ethical use of AI.

Where the preservation and promotion of justice are addressed, they are proposed to be pursued through: (1) technical solutions, such as standards or explicit normative encoding; (2) transparency, notably by providing information and raising public awareness of existing rights and regulations; (3) testing, monitoring and auditing, the preferred solution of data protection offices in particular; (4) developing or strengthening the rule of law and the right to appeal, recourse, redress or remedy; and (5) systemic changes and processes, such as governmental action and oversight, a more interdisciplinary or otherwise diverse workforce, greater inclusion of civil society and other relevant stakeholders, and increased attention to the distribution of benefits.
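To make the first of these mechanisms concrete, here is a minimal sketch of what explicit normative encoding might look like in practice: ethical constraints written down as machine-checkable rules that a system must satisfy before acting. The rule names, the loan-decision example and the compliance check below are hypothetical illustrations, not an established standard or any regulator's actual requirements.

```python
# A minimal sketch of "explicit normative encoding": ethical constraints are
# written as explicit, machine-checkable rules an AI system must satisfy
# before acting. All rule names and the example decision are hypothetical.
from typing import Callable, Dict, List

# Each rule inspects a proposed decision (a plain dict here) and returns
# True if the decision complies with that rule.
Rule = Callable[[Dict], bool]

RULES: Dict[str, Rule] = {
    "no_protected_attributes": lambda d: not d.get("uses_protected_attributes", False),
    "consent_obtained": lambda d: d.get("user_consent", False),
    "decision_is_explainable": lambda d: "explanation" in d,
}

def check_compliance(decision: Dict) -> List[str]:
    """Return the names of any rules the proposed decision violates;
    an empty list means the decision may proceed."""
    return [name for name, rule in RULES.items() if not rule(decision)]

# Example: an automated loan decision that is explainable but lacks consent.
proposed = {
    "uses_protected_attributes": False,
    "user_consent": False,
    "explanation": "declined: debt-to-income ratio above threshold",
}
print("violations:", check_compliance(proposed))  # ['consent_obtained']
```

Because each violation is named explicitly, the same rule set could also feed the testing, monitoring and auditing mechanisms mentioned in point (3).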

As we implement AI across various industries, we have to settle the rules of business for its deployment. Ethical business practices can be built into an AI system’s setup and monitored as it learns and improves. The social values a society agrees on can provide the ethical standards that an AI-based system must meet. We can further expand on this matter by examining (1) how ethical principles are interpreted; (2) why they are deemed important; (3) what issues, domains or actors they pertain to; and (4) how they should be implemented.

These considerations have implications for public policy, technology governance and research ethics. At the policy level, greater inter-stakeholder cooperation is needed to align different AI ethics agendas and to seek procedural convergence not only on ethical principles but also on their implementation. While global consensus might be desirable, it should not come at the cost of obliterating cultural and moral pluralism, and it may require the development of deliberative mechanisms to adjudicate disagreement among stakeholders from various regions.

Despite widespread references to responsible AI, responsibility and accountability are rarely defined. Nonetheless, specific recommendations include acting with integrity and clarifying the attribution of responsibility and legal liability, if possible upfront in contracts or, alternatively, by centring on remedy. Other sources suggest focusing instead on the underlying reasons and processes that may jeopardise freedom and autonomy. Whereas some sources specifically refer to freedom of expression or to informational self-determination and privacy-protecting user controls, others promote freedom, empowerment or autonomy in general terms. Some documents treat autonomy as a positive freedom: the freedom to flourish, to self-determination through democratic means, the right to establish and develop relationships with other human beings, the freedom to withdraw consent, or the freedom to use a preferred platform or technology. Ethical AI sees privacy both as a value to uphold and as a right to be protected. While often undefined, privacy is frequently presented in relation to data protection and data security.

GenEth provides a graphical user interface for discussing ethical dilemmas in a given scenario and applies inductive logic programming to infer principles of ethical action. Its authors proposed a set of representation schemas for framing discussions on AI ethics (a rough code sketch follows the list below). The schema includes:

1. Features: denoting the presence or absence of factors (e.g., harm, benefit) with integer values;

2. Duties: denoting the responsibility of an agent to minimise or maximise a given feature;

3. Actions: denoting whether an action satisfies or violates certain duties as an integer tuple;

4. Cases: used to compare pairs of actions on their collective ethical impact; and

5. Principles: denoting the ethical preference among different actions as a tuple of integer tuples.

Ethics requirements are often exogenous to AI agents. There is thus a need for ways to reconcile ethics requirements with an agent’s endogenous, subjective preferences in order to make ethically aligned decisions.
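As a rough illustration of how the schema above might be represented in code, the following Python sketch encodes features, duties and actions as integers and integer tuples, and compares a pair of actions in a single case. The class names, scoring and the clinical example are illustrative assumptions only; they do not reproduce GenEth’s actual data structures or its inductive logic programming.

```python
# A rough, illustrative sketch of the representation schema described above.
# All names, scores and the clinical example are assumptions for illustration;
# they are not GenEth's actual data structures or output.
from dataclasses import dataclass
from typing import Tuple

# Features: ethically relevant factors, here just harm and benefit,
# measured as integers (presence/absence or degree).
FEATURES = ("harm", "benefit")

# Duties: the agent's responsibility to minimise (-1) or maximise (+1)
# each feature.
DUTIES = {"harm": -1, "benefit": +1}

def duty_scores(feature_values: Tuple[int, ...]) -> Tuple[int, ...]:
    """Convert raw feature values into duty satisfaction scores:
    positive means the duty is satisfied, negative means it is violated."""
    return tuple(DUTIES[f] * v for f, v in zip(FEATURES, feature_values))

@dataclass
class Action:
    """An action represented as an integer tuple of duty scores."""
    name: str
    scores: Tuple[int, ...]

def compare_case(a: Action, b: Action) -> int:
    """A case compares a pair of actions on their collective ethical impact.
    Here a simple sum of duty scores stands in for a learned principle."""
    total_a, total_b = sum(a.scores), sum(b.scores)
    return (total_a > total_b) - (total_a < total_b)

# Example: warning a patient causes minor distress (harm=1) but yields a
# large benefit (benefit=2); staying silent does neither.
warn = Action("warn_patient", duty_scores((1, 2)))        # -> (-1, 2)
stay_silent = Action("stay_silent", duty_scores((0, 0)))  # -> (0, 0)

preferred = warn if compare_case(warn, stay_silent) >= 0 else stay_silent
print("Ethically preferred action:", preferred.name)
```

In GenEth, the principle itself is not a hand-written sum but is inferred from cases via inductive logic programming; the sum here merely illustrates how a principle expresses an ethical preference among actions.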

AI is a powerful tool: it can help mankind achieve great things and, if misused, it can cause serious harm. From the perspective of business and governance, the AI industry will have to be tightly regulated.

AI will make data processing and decision-making more efficient and faster. We need to be careful about what data sets we provide, because AI can learn patterns from them better than we can and then act on them. It will become what we feed it.


The writer is an environmental engineer and visiting scientist at the University of Cambridge. He may be reached at Zarakbabar-santaclara@outlook.com
