Artificial intelligence raises ethical questions that philosophy, law, and politics are only beginning to grapple with. Algorithmic bias has produced documented harms: hiring systems that disadvantage women, credit scoring that disadvantages minorities, facial recognition that misidentifies people of colour at far higher rates. Black-box systems make accountability nearly impossible — when an AI denies a loan or a visa, there may be no accessible explanation of why. The alignment problem asks how we can ensure that increasingly powerful AI systems pursue goals that are actually good for humanity. Meanwhile, the concentration of AI development in a handful of companies and states raises profound questions about who controls the technology, in whose interests, and with what oversight. These are not questions that technologists can resolve alone.
💡 Did you know? Amazon developed an experimental AI hiring tool that it scrapped in 2018 after discovering it systematically downgraded CVs from women. The system had been trained on ten years of the company's own hiring data — data that reflected the existing gender imbalance in its workforce, so the model learned to reproduce it.

