After dozens of board-level AI briefings, the pattern is consistent. Three questions come up every time, usually in the first fifteen minutes.
First: what's the actual risk? Not the theoretical risk. The specific, quantifiable exposure this organisation faces if AI goes wrong - reputational, regulatory, operational. Boards don't want a lecture on AI ethics. They want to know what could end up on the front page.
Second: who owns this? Not which team is building it. Who is accountable when something breaks, when a decision is wrong, when a customer complains. The accountability question is where most AI presentations fall apart, because the honest answer is usually 'nobody, yet.'
Third: what does this cost us if we don't do it? This is the question most presenters miss entirely. They focus on the upside of adoption. Boards are more moved by the cost of inaction - the competitive gap, the talent drain, the operational inefficiency that compounds quarter over quarter.