There is one class of AI risk that is generally knowable in advance: risks stemming from misalignment between a company’s economic incentive to profit from its proprietary AI model in a particular way and society’s interest in how that model should be monetised and deployed. The surest way to overlook such misalignment is to focus exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and from the profit motives that shape their design.