
As artificial intelligence becomes embedded in British healthcare, policing, and public services, the question of which values should govern its development has never been more urgent. Where should the moral line be drawn?

Human Oversight First
No AI system should be able to make consequential decisions, from benefit assessments to criminal sentencing, without meaningful human review. Accountability must rest with people, not algorithms.

Transparency & Explainability
If an AI system affects your life, you should be entitled to a clear explanation of why. Black-box decision-making erodes public trust and undermines democratic accountability.

Do No Harm
Borrowed from medical ethics, this principle insists that the first obligation of any AI system is to avoid causing physical, psychological, or social damage to individuals and communities.

Fairness & Non-Discrimination
Biased training data has already led to discriminatory outcomes in hiring tools and facial recognition. Ensuring equitable treatment across race, gender, and class must be a foundational constraint.

Long-Term Public Benefit
Short-term commercial gains should never override the broader social good. AI policy must prioritise outcomes for the many, not just the tech firms and investors who profit from deployment.