In recent years, governments across the world—including China, the United States, Germany, the United Kingdom, and others—have prioritized the development of national strategies on artificial intelligence (AI) and made investments in technologies associated with AI. While many of these measures are aimed at maintaining a competitive advantage in business, warfare, or espionage, there are also widespread initiatives to creatively apply the enhanced computation of artificial intelligence in fields like medicine and humanitarian relief.
The work of the United Nations in development or peacekeeping is not often associated with AI, but the UN has been actively engaged in debates on weapons sparked by advances in AI technology, and Secretary-General António Guterres recently established the High-Level Panel on Digital Cooperation to help ensure safe and inclusive applications of technology, including AI. On the sidelines of an all-day symposium on artificial intelligence held at the International Peace Institute (IPI) in June, Eleonore Pauwels, a Research Fellow at UN University, and David Li, the founder of the Shenzhen Open Innovation Lab, discussed the broad applications and risks of AI and its relevance to the work of the UN with Global Observatory editor Samir Ashraf.
This interview has been edited for clarity and length.
In many of the situations where the UN is involved, there are a host of actors operating in very complex environments with different intentions. How can or should AI be employed in these contexts?
Mr. Li: First, I would try to figure out the aim of the [UN] mission and then try to move the tool as close to the end-user as possible. We have the ability now to make the beneficial, productive applications of AI available as close to the end-user as needed, so it is a question of rethinking how open data or open source can change the mandates and functioning of UN missions. For example, rather than going in with a view of temporary relief, how can AI help build a future around innovation in a local community? An end-user can take advantage of AI applications to build something cheaply, transfer knowledge, or digitize what needs to be digitized. These applications can increase innovation in addressing a community’s challenges, and whatever knowledge is generated remains local, and in that way the community can grow and change.
Ms. Pauwels: This switch from top-down to bottom-up is a crucial and big change for the UN. Developing innovation ecosystems that are very close to people who are facing a specific problem allows them to be involved in the innovation process and better respond to their own situation. This is a movement away from the commodification of AI toward its democratization.
What do you see as AI’s role in helping countries achieve the Sustainable Development Goals?
Mr. Li: There is one example with malaria detection. Usually, a large sum of money is appropriated and devices are bought in one place and moved to the area where detection is needed. The last time I was in Geneva, I saw how malaria and disease detection is now happening at the local level in a way that doesn’t require training in how to use a microscope or how to identify the disease. Basically, it is a fully open source device that channels water through itself and detects the disease. So the local entrepreneur who wants to make it can do so relatively easily through the use of “makerspaces” and thus change the way their community interacts with the disease. Makerspaces are small-scale, local spaces that give an average person the ability to fabricate devices or tools. With open source knowledge and AI computation they can empower far more people.
Ms. Pauwels: The beauty of this is that it is based on human creativity, not on AI replacing humans. You change the narrative and focus on the knowledge of humans in a community with the added support of AI.
There is a perception that AI will replace humans in many sectors. Is this true?
Mr. Li: AI replacing humans is a very Western-centric narrative. Sixty percent of the global population is in the informal economy, which means that they don’t have regular jobs. If you don’t have a regular job, how is AI going to replace you? This needs to be communicated to people in New York or Geneva, where the majority of people have a regular job. Three years ago, Switzerland held a ballot on a universal basic income, and I was with one of the representatives from Parliament talking about the need for a universal basic income because AI was coming to take jobs. Political decisions are being made based on this assumption, which hasn’t been borne out in reality. Take the ATM, which never really decreased banking jobs, or automation in factories, which never really decreased employment.
Ms. Pauwels: This, of course, doesn’t mean that tasks cannot be automated with AI. You can automate recognition of anthrax or specific tumors in radiology, but it is possible to do it in a way where the human creativity at the core is still what matters most.
We should have a society that is less rigid, a little more free in terms of how you can change your activity and make money out of it so you could indeed go to a makerspace, create something and make money out of it. We need to teach the next generation to be able to do that instead of only being tied to a particular job. That requires more democratization and an approach where human intelligence is still valued so that what you get is innovation.
Can you share some examples of using AI computation toward positive ends and its advantages in those contexts?
Ms. Pauwels: Many of the potential applications of this kind are at the idea stage or very early in their use. One area at the idea stage is medicine. If we could use AI computation to progressively identify current diseases, new diseases, and epidemics through knowing what biomarkers to search for, this would be a tremendous opportunity. The problem is that there are obvious risks in connecting bio labs with each other. There is also the risk of unintended consequences when a tool like this is introduced in a way that causes clashes in communities with different health practices and cultures.
Mr. Li: I think right now it is more important to shift orientations toward AI. There is often a lack of awareness of what AI actually is—it is described like it’s a nuclear bomb or something you can contain, monitor, and supervise. It’s not really a question of containment, it is just a tool and there is a range of what constitutes AI. When we talk about AI as if it is in a box and needs regulation, then we get into conversations around who manages the box. If we hand management to researchers or some other group with a stake, the risk is treating people like data, or trying to do “X” at any human cost. Ultimately, we are better served thinking through its uses depending on who needs it.
What would you say are the risks associated with developing artificial intelligence technology and its applications?
Ms. Pauwels: What is most interesting about AI is its convergence with other domains and technologies, since in itself it is just a form of enhanced computation. The computation, of course, is dependent on how the algorithms are designed and how much thought is given to inclusivity and equity, and this is why a framework is needed, based on human rights, that informs how algorithms are designed. When the computation becomes automated and we have to determine where humans fit into the loop, then we face a larger set of questions.
Take the use of biometric data as an example. AI computation could be applied to databases of biometrics, but there are legitimate concerns over identity theft and the difficulty of securing the data itself. Determining how to use AI thus depends on the goals we have in mind. For example, AI can be used to automate cyber hacking campaigns or for recognizing and preventing the patterns of those very hacks. We should be helping designers and policymakers weigh their goals and the questions that are raised in applying AI.