
By: Juan Mateos-García & George Richardson (Nesta)
There’s a flurry of excitement about modern developments in artificial intelligence (AI).
The arrival of powerful image generators, AI agents able to perform multiple tasks, and chatbots that seem (to some) sentient is an exciting prospect for data scientists who use machine learning to tackle big societal challenges in areas such as health, education and the environment.
One of the unofficial remits of AI is to “solve intelligence and then solve everything else”. We have to assume that “solving” would include reducing inequalities in education, tackling obesity and decarbonising our homes. Are we about to get AI systems that could help us solve these problems?
Industrialised AI
The dominant model for AI is an industrial one: training deep artificial neural networks on large volumes of web and social media data. These networks learn predictive patterns and can be useful for perception tasks such as identifying a face in a photo. They work well for tasks that don’t require human input, or where we are not interested in understanding why someone made a choice, such as liking a social media post.
The large technology companies developing these systems use them to predict relevant search engine results, identify which social media content is most engaging, and make recommendations that could result in a purchase. This helps these companies build more engaging and profitable websites and apps.
But when it comes to social impact sectors, data is scarce, explanation is more important than prediction and making mistakes could cost lives.
An absence of data in the social sector
Our societies face big challenges: a widening outcome gap between the poorest children and the rest; growing obesity rates; the need to urgently reduce household emissions. AI systems such as those described above could help tackle these challenges by mapping the problems and targeting solutions. Unfortunately, this is easier said than done.
Search engines and social networking sites generate vast amounts of standardised data that is highly predictive of relevant outcomes. By contrast, social impact sectors such as education and health comprise hundreds or thousands of organisations (local authorities, hospitals or schools), each collecting small, incomplete and disconnected datasets. Social media or search engine data that could be relevant for improving outcomes in health (the food adverts different groups are exposed to, for example) or education (which social network structures increase community resilience and social mobility) is expensive or impossible to access. Even where access is possible, such data would likely provide a biased view of the situation, excluding or underrepresenting some vulnerable groups…