Artificial intelligence can be harnessed for social good, contends Hannah Miller
Machine learning and artificial intelligence are not getting good press at the moment. Uber has just pulled all of its self-driving test cars from the road after one tragically hit and killed a pedestrian in Arizona. A whistleblower has exposed Cambridge Analytica for illegally harvesting data on Facebook, and interfering in the 2016 United States presidential election in the process. By some calculations, robots are set to steal 800 million jobs worldwide by 2030.
There are very real reasons to be concerned about the rise of AI and the coming of the so-called ‘fourth industrial revolution’. But, among the horror stories and the headlines, we should not lose sight of this: AI, like most technology, is not inherently bad. Artificial intelligence is a set of approaches and algorithms that enable machines to simulate human-like intelligence.
AI, as it currently stands, is simply a tool. While this technology is not inherently bad, it is not inherently fair, either. It runs on the data that feeds it, and that data often reflects the patterns of prejudice built into our daily lives. It follows the algorithms that instruct it, which can replicate the biases of those who program them. This is why who programs, owns, and operates AI systems is so important. In the right hands, AI has the potential to be a tool for enormous good.
International development is one notable sector where AI is being used to find novel and socially useful solutions to problems, and to improve those that already exist.
Mcrops, a Ugandan company, has created a system which allows real-time crop disease surveillance and modelling on smartphones. Basil Leaf Technologies has designed a prototype ‘tricorder’ which uses AI to quickly and non-invasively diagnose up to 13 diseases. NIRAMAI, an Indian startup, has come up with a low-cost, portable, automated system for detecting breast cancer, using thermal image processing and machine learning algorithms.
US-based logistics company Zipline has established a drone delivery system which enables the emergency restocking of urgently-needed medical supplies such as blood and high-priority medication. Doctors send orders to Zipline by text or online messaging services; the supplies are then packaged at distribution centres and dispatched via drone. The drones use autonomous navigation systems, allowing them to find the delivery locations without pilots. This AI-based solution helps to overcome emergency access problems associated with remote communities by allowing high-priority supplies to be delivered within 30 minutes. The company has worked with the Rwandan government since 2016, and is beginning a partnership with the Tanzanian ministry of health over the first quarter of 2018. Once established, the delivery service will be the largest of its kind in the world, and is expected to make up to 2,000 life-saving deliveries per day.
While these applications of AI-based technologies highlight its potential, the United Nations secretary general Antonio Guterres warned at the 2017 AI for Global Good Summit that ‘[d]eveloping countries can gain from the benefits of AI, but they also face the highest risk of being left behind’. Unequal access to AI runs the risk of deepening the digital divide, reinforcing existing global and demographic inequalities.
To share the positive impact of AI, we must ensure that there is collaboration and diversity among those who design, operate and own the technology. Companies developing solutions also need to take an interdisciplinary approach to ensure they do not replicate biases and exacerbate inequalities. We should not allow exciting new drone technology to take precedence over continued investment in more routine infrastructure, such as improving roads and public health services. Instead, we must limit potential tradeoffs by complementing traditional approaches with novel solutions.
Moreover, it is not enough to export solutions predominantly developed and tested by powerful companies in the world’s richest nations to lower- and middle-income nations. Global companies need to develop and test solutions using local data sets, or they will run into similar problems as IBM Watson for Oncology, which has been found to give advice biased toward American patients and treatment options. Building international networks between developers, and encouraging investment in lower- and middle-income countries can help ensure that AI solutions take into account local complexities and the interests of marginalised groups.
For example, Eddi Ü is an educational technology project in Mexico that helps distribute resources to high school students. Originally designed as a peer-to-peer system where students could share educational videos made by students, for students, it now incorporates video resources from a range of online sources such as Coursera and MexicoX. Although the system is available online and on a mobile app, chief science officer Ricardo Michel Reyes explained they needed to find a way to enable those living in rural communities without internet access to use the service. They designed a device which plugs into televisions, using voice-powered AI to give students quick access to relevant videos and articles, without needing to be online.
This simple solution avoided introducing more components, each a potential point of failure. The team identified this risk after observing problems with a previous scheme, in which the Mexican government issued computers to all schools in the country. A lack of technical support for problems or breakages has led many to fall into disrepair.
The Harvard Berkman Klein Centre and the Institute for Technology and Society of Rio last year organised an international conference, attended by experts and researchers from over 20 countries, to discuss the global democratisation of AI. There is clearly potential for AI to work for everyone if we can create good quality open data in emerging economies, implement ethical guidelines, ensure the interests of low- and middle-income countries are represented in decision-making, and ensure that tech skills are a priority for governments around the world.
AI could work for many people in many countries. For this to happen, innovation must be inclusive from beginning to end.
Hannah Miller is a consultant at Oxford Insights, where she advises organisations on the strategic, cultural, and leadership opportunities stemming from artificial intelligence