AI: Challenges, Solutions, and Moratorium

Annadi Muhammad Alkaf


With the recent rapid development of Artificial Intelligence (AI) technologies, many experts and scholars have highlighted not only its benefits but also its potential risks to the future of humanity. Hence, many of them call for the regulation of AI. However, insufficient attention has been paid to the fundamental concepts behind such regulation, as well as to the socio-technical solutions that may be required. This article addresses these aspects and provides a detailed discussion of the issue.

Two notions are currently the subject of discussion among experts with regard to AI problems. The first is Explainable Artificial Intelligence (XAI), a rapidly growing field of AI development that aims to create a set of machine learning techniques capable of producing more explainable models while maintaining accuracy, enabling human users to understand how the AI works or how it arrives at certain results.

In this way, how an AI system works and the results it provides, along with its strengths and weaknesses, can be evaluated. The final decision thus remains with the human user rather than resting solely on the AI's output. Accordingly, XAI is considered one of the socio-technical solutions to the black-box phenomenon.

For example, XAI can be used to diagnose patients and explain the diagnosis at the same time. It can help doctors explain a diagnosis to patients and show how a treatment plan is expected to help. Hence, in medical treatment that uses AI, it can increase trust between doctors and patients while mitigating potential ethical concerns around transparency and accountability.
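
To make the idea concrete, the sketch below trains a simple, inherently interpretable diagnostic model and decomposes one patient's prediction into per-feature contributions, the kind of explanation a doctor could walk a patient through. It is a minimal illustration rather than any particular XAI system, and it uses scikit-learn's bundled breast-cancer dataset as a stand-in for real clinical data.

```python
# A minimal sketch of the XAI idea, not any specific production system:
# an inherently interpretable (linear) diagnostic model whose prediction
# for a single patient can be decomposed into per-feature contributions.
# scikit-learn's bundled breast-cancer dataset stands in for clinical data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target  # y: 0 = malignant, 1 = benign

# Standardize features so the learned coefficients are comparable in scale.
scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)
model = LogisticRegression(max_iter=1000).fit(Xs, y)

# Explain one patient's prediction: in a linear model, each feature's
# contribution to the decision score is just coefficient * feature value.
patient = Xs[0]
proba = model.predict_proba(patient.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * patient

# Report the five features that pushed this prediction hardest, either way.
top = np.argsort(np.abs(contributions))[::-1][:5]
print(f"Predicted probability of a benign tumour: {proba:.2f}")
for i in top:
    print(f"  {data.feature_names[i]:<25} contribution {contributions[i]:+.2f}")
```

The same question, which factors drove this result and by how much, is what XAI techniques try to answer for far more opaque models as well.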

However, XAI is still far from perfect in its current state of development. For this reason, political will from governments, universities, and other stakeholders across the globe is needed to encourage better XAI, especially now that the risks and dangers of AI are real in our daily lives.

The Indonesian government’s ongoing commitment to advancing the discourse on AI through the National Strategy for Artificial Intelligence (Stranas KA), led by the National Research and Innovation Agency (BRIN) in collaboration with universities and the private sector, exemplifies this. However, Indonesia, along with the many other countries that already have national strategies for AI development, should form a transnational consortium on AI, while acknowledging and respecting each country’s national efforts, grounded in its own national and cultural context, to manage AI development.

Secondly, from a broader point of view, we also need to address the systemic dimension of developing technologies, including AI: no matter how innovative a technology is, we must be wary of a technological determinism that obscures its socio-technical aspects. In this context, the notion of Responsible Innovation is relevant.

According to de Sio et al. (2021), Responsible Innovation is an approach in which decisions about the design, development, introduction, and governance of AI technologies are based on deliberate societal judgment, taking into account a wider set of individual and societal values. By adopting this approach, the development and use of AI are expected to align with human, social, and ethical values.

To put it differently, it is crucial to anticipate and prevent the adverse social impact of technology. It is also essential that designers, engineers, and policymakers proactively embed the relevant human and social values in the development of technological systems, as opposed to merely regulating the use of technology or devising post-hoc policies to govern its ethical and societal impact. In this context, preventive action must be prioritized over curative action, underscoring the importance of active and responsible technology design.

Therefore, the implementation of XAI and the broader concept of Responsible Innovation are important. To address this issue and ensure a more favorable future with AI, collaborative efforts are imperative among national governments worldwide, major technology corporations, academic institutions, and non-governmental organizations. Just as the climate crisis transcends national borders, so too must the solutions to AI’s potential problems be sought at the global level rather than country by country.

However, it is worth acknowledging the intricate nature of developing technologies like XAI and the complexities inherent in formulating regulations or policies on a global scale, especially given the rapid progress of conventional AI technologies. Consequently, it may be prudent to consider an AI moratorium until we have better instruments and regulations to anticipate AI’s detrimental impacts. Given the rapid pace of development and the uncertainties surrounding its potential impact, it is crucial to exercise caution and prevent the adverse consequences that may arise in this unprecedented era.

Elon Musk and other experts have already raised this problem and the need to pause AI development temporarily. An open letter issued by the Future of Life Institute states that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” In other words, the letter calls for an AI moratorium until there are shared safety protocols to protect people and society as a whole from misuse or bias.

For example, one problem with generative AI tools such as ChatGPT concerns privacy and cybersecurity, including indications of confidential data leaks. ChatGPT’s privacy policy states that user prompts can be used to train its models unless the user opts out. Because of this concern, many big tech companies, such as Apple, Samsung, and Verizon, have reportedly restricted the use of ChatGPT.

Moreover, Yuval Noah Harari, a prominent global historian, argues in a recent article published in The Economist that AI tools such as ChatGPT may come to surpass human capabilities in linguistic communication. Since language is a distinctively human tool and a product of culture that profoundly shapes humanity and civilization, one can argue that the advancement of AI could bring human history, or more precisely, human-centered history, to an end. The emergence of AI may engender new cultural ideas distinct from those shaped by human intellect.

In light of these considerations, it is not unfounded to consider an AI moratorium as a means of averting a future in which humanity is overshadowed by AI dominance. Such a moratorium would help safeguard our human essence from a world governed primarily by AI. In the meantime, it is also imperative to promote further research in XAI to minimize the potential biases of conventional AI. Moreover, Responsible Innovation in general must be actively embraced and established as the prevailing approach to technology development.

*****

Editor: Moch Aldy MA

