3 Comments
Shefton Parker:

Great blog post! It touches on many of my concerns around the lack of 'human-centred' evaluation of AI, i.e. evaluation of the direct social changes that generative AI is having on the people using it and the indirect changes it has on our society (good and bad). It is those 'bad' or 'harmful' changes that I hope we pay close attention to and address sooner rather than later. We have seen the harmful impacts of social media; imagine if we could go back in time and put in place measures to protect against such social harms whilst also amplifying the positive, more beneficial aspects of the tech.

Anthony Clairmont:

A classic idea in the evaluation theory of Michael Scriven is that we need to look beyond whether interventions fulfill their own stated goals: in practice, it is very easy for interventions to meet the objectives set by the people who run them, as long as those people take a narrow view of what the program is supposed to do. Scriven strongly advocated actively looking for negative unintended consequences as a routine part of evaluation. I think this is the perspective we need to adopt if we are to evaluate technology before being blindsided by its social effects. This might have saved us a lot of grief with social media, as you say.

Shefton Parker:

FYI, check out the response when I asked ChatGPT: Q. What are the social costs from the rise in the use of generative AI, which is proposed to be more efficient?

The rise of generative AI, while promising greater efficiency, comes with significant social costs across various domains. Here are some key concerns:

1. Job Displacement & Labor Market Disruptions
- Automation of white-collar jobs (e.g., writing, design, legal analysis, customer service) can reduce demand for skilled workers, leading to job losses and economic insecurity.
- Lower-skilled workers may struggle to reskill fast enough, increasing income inequality.
- AI-driven productivity gains might benefit corporations more than workers, worsening wage stagnation.

2. Misinformation & Erosion of Trust
- AI can generate deepfakes and misinformation at scale, making it harder to distinguish truth from fiction.
- The spread of AI-generated propaganda and fake news could undermine democracy and social cohesion.
- Over-reliance on AI-generated information may reduce critical thinking skills among the public.

3. Bias, Fairness, & Discrimination
- AI models inherit biases from their training data, leading to unfair outcomes in hiring, lending, healthcare, and policing.
- Unequal access to AI tools can widen digital and economic divides.
- Over-reliance on AI decision-making could reinforce systemic biases rather than mitigate them.

4. Intellectual & Creative Erosion
- AI-generated content may lead to devaluation of human creativity in writing, music, art, and entertainment.
- Plagiarism and copyright issues may escalate as AI repurposes existing works.
- If organizations prioritize AI-generated efficiency over human insight, cultural and artistic expression could suffer.

5. Privacy & Surveillance Concerns
- AI-enhanced surveillance can increase corporate and government monitoring, threatening civil liberties.
- Companies may use AI to track and analyze worker productivity, leading to excessive workplace monitoring.
- AI-generated personal data profiling can be exploited for targeted manipulation (e.g., in politics and advertising).

6. Energy & Environmental Impact
- Training and running large AI models consume massive amounts of energy, contributing to carbon emissions.
- The demand for AI hardware (chips, GPUs) increases resource extraction, worsening environmental degradation.

7. Human-AI Dependence & De-skilling
- Over-reliance on AI for communication, problem-solving, and decision-making could erode human expertise.
- Workers and students relying on AI tools (e.g., ChatGPT for writing or coding) may lose fundamental skills over time.
- AI-driven personalization (e.g., automated recommendations) could limit exposure to diverse ideas and reinforce intellectual bubbles.

8. Power Concentration & Economic Inequality
- The AI boom benefits tech giants (Google, OpenAI, Microsoft, etc.), increasing their monopoly power.
- Smaller companies, independent workers, and freelancers may struggle to compete with AI-driven automation.
- If AI-driven productivity gains are not fairly distributed, wealth concentration among AI owners could exacerbate economic inequality.

Potential Mitigations

To address these social costs, policymakers and businesses need to:
- Develop ethical AI regulations (e.g., transparency, accountability, and bias audits).
- Ensure fair distribution of AI-driven productivity gains (e.g., worker protections, reskilling programs).
- Promote AI literacy to help people navigate misinformation and over-reliance on automation.
- Implement stronger privacy protections against AI-driven surveillance and data misuse.
- Support human-AI collaboration rather than full automation in creative and professional fields.

While AI efficiency is undeniable, without safeguards, it could exacerbate inequality, misinformation, and social disruption rather than benefiting society as a whole.
