One of OpenAI’s stated claims is that artificial intelligence should be used for the broader good of society. So far, the most visible good is companies selling their systems for productivity. Still, there is growing interest in understanding how AI can support human development, especially in reducing inequality. The report “AI for Impact: The Role of AI in Social Innovation”, commissioned by the World Economic Forum, aims to give a contextual view of AI implementation across dimensions such as social impact, gender, geography and global well-being.
The white paper, which can be downloaded here, resulted from the work of the World Economic Forum’s AI Governance Alliance (AIGA) Presidio Framework. The report, developed in collaboration with Microsoft and EY, draws on insights from more than 300 case studies on social innovation and interviews with experts. It introduces PRISM (Principles for Responsible Implementation, Scale and Management of AI), a guide to the progressive and responsible adoption of AI for positive impact, and highlights the risks and gaps to be addressed for equitable implementation.
Geographic boundaries
Starting from pathways and case studies, the goal is to show real examples of social innovation with AI while also sharing a roadmap for integrating and implementing artificial intelligence internally. This series of reports results from the AI for Social Innovation workstream of the Global Alliance for Social Entrepreneurship, which aims to raise awareness and promote the ethical adoption of artificial intelligence by social innovators and impact enterprises.
More than half (54 per cent) of social innovators are currently leveraging artificial intelligence to improve basic products or services, and nearly 30 per cent are using it to develop entirely new solutions. However, according to the new report, critical gaps hinder its wider adoption. Only 13 per cent of initiatives focus on educational tools, mainly in the Global North. Gender disparities persist, with only 25 per cent of women-led social enterprises using AI compared to half of the sector overall. Equity challenges have also emerged: for example, most commercially available models are trained on data from high-income countries, yielding inferior results for low- and middle-income countries.
Pyramid shape
The organization envisions the AI implementation process as a pyramid. At the bottom are capabilities and risks, in the middle are adoption paths, and at the top are impacts and strategies. “Considering ethics, capabilities and risks ensures that AI systems are smart and fair at the same time,” the study authors explain. “Because social innovators often work with vulnerable and marginalized groups, this is one of the key considerations. The enigmatic nature of AI decision-making is often compared to a ‘black box’ in which inputs and outputs are visible, but processing remains opaque. For organizations dedicated to social impact, peeling back the layers of this black box can be imperative. AI decision-making should be clear whenever possible, especially in complex systems such as predictive analytics and recommendation engines.”
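To make the black-box point concrete, here is a minimal, hypothetical sketch in Python (the feature names and data are invented, not drawn from the report). It contrasts using a model purely through its inputs and outputs with inspecting the weights that show how each input drives a decision, which is one simple way an organization can make its AI decision-making less opaque.

```python
# Minimal sketch (illustrative only): a "black box" view of a model versus a
# transparent view that exposes which inputs drive a prediction.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["attendance", "household_income", "distance_to_clinic"]

# Synthetic data: the outcome depends mostly on the first two features.
X = rng.normal(size=(500, 3))
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Black-box use: only inputs and outputs are visible.
print("prediction:", model.predict(X[:1]))

# Transparent use: expose how each input contributes to the decision.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```

An interpretable model like this is only one option; for more complex systems, the same principle applies through post-hoc explanation tools, but the goal is the same: the reasoning behind a decision should be inspectable, not just its output.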
Data security
Another essential aspect concerns anonymization techniques, which are used to prevent personal data from being traced back to individuals. This is especially important for datasets used to train artificial intelligence, where removing identifiers can help protect privacy while still enabling valuable analysis. It is important to note that a surprisingly small amount of data is enough to identify a person. In a landmark study, MIT researcher Latanya Sweeney showed that 87 per cent of U.S. citizens could be uniquely identified using only their gender, ZIP code and date of birth, information readily available in health data. Privacy-by-design addresses these issues by building privacy and security into the development of new products, systems or processes.
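As an illustration of what anonymization by generalization can look like in practice, the following sketch (hypothetical records and column names, not taken from the report) coarsens the quasi-identifiers Sweeney highlighted, gender, ZIP code and date of birth, and then checks how many records share each combination, a basic k-anonymity measure.

```python
# Minimal sketch: generalizing quasi-identifiers (gender, ZIP code, date of
# birth) before sharing a dataset. Records and column names are hypothetical.
from collections import Counter

records = [
    {"gender": "F", "zip": "02139", "dob": "1984-03-12", "diagnosis": "A"},
    {"gender": "F", "zip": "02139", "dob": "1984-07-01", "diagnosis": "B"},
    {"gender": "M", "zip": "02141", "dob": "1979-11-23", "diagnosis": "A"},
]

def generalize(record):
    """Coarsen quasi-identifiers: truncate ZIP to 3 digits, keep only birth year."""
    return {
        "gender": record["gender"],
        "zip": record["zip"][:3] + "**",
        "birth_year": record["dob"][:4],
        "diagnosis": record["diagnosis"],  # the useful signal is retained
    }

def k_anonymity(rows, quasi_ids=("gender", "zip", "birth_year")):
    """Smallest group size sharing the same quasi-identifier combination.
    A dataset is k-anonymous if every combination appears at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

anonymized = [generalize(r) for r in records]
print(anonymized)
print("k =", k_anonymity(anonymized))  # k = 1 here: still re-identifiable
```

Even after generalization, a k of 1 means at least one person remains uniquely identifiable, which is why privacy-by-design treats such checks as an ongoing requirement rather than a one-off step.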
Case study
SAS Brazil is developing an artificial intelligence diagnostic tool designed to predict cervical cancer risk and accelerate diagnosis in local communities in Brazil. The priority is to address the ethical implications of AI applications in health care, so SAS formed an internal ethical framework with its partners to guide the tool’s development. It also set out to address a major data-bias challenge: the available datasets are mostly collected from sick patients in the Global North, leaving few references for training AI globally.
More broadly, social innovators are leading the AI revolution in areas such as health care, education, and environmental conservation to significantly increase their impact on complex social challenges. Their approach, outlined in the PRISM Framework, sets a standard for the ethical integration of AI across sectors, emphasizing the balance between organizational preparedness, ethical considerations, and potential benefits. These examples demonstrate that the reach of AI can extend beyond commercial uses, improving the way organizations address societal problems when aligned with a clear social mission. The PRISM Framework captures these practices, offering an iterative approach to AI implementation so that the technology adapts to humans and their challenges, not vice versa.