
As artificial intelligence becomes embedded in British healthcare, policing, and public services, the question of which values should govern its development has never been more urgent. Where should the moral line be drawn?

Human Oversight First
No AI system should be able to make consequential decisions, from benefit assessments to criminal sentencing, without meaningful human review. Accountability must rest with people, not algorithms.

Transparency & Explainability
If an AI system affects your life, you should be entitled to a clear explanation of why. Black-box decision-making erodes public trust and undermines democratic accountability.

Do No Harm
Borrowed from medical ethics, this principle insists that the first obligation of any AI system is to avoid causing damage, whether physical, psychological, or social, to individuals and communities.

Fairness & Non-Discrimination
Biased training data has already led to discriminatory outcomes in hiring tools and facial recognition. Ensuring equitable treatment across race, gender, and class must be a foundational constraint.

Long-Term Public Benefit
Short-term commercial gains should never override the broader social good. AI policy must prioritise outcomes for the many, not just the tech firms and investors who profit from deployment.