The Boardroom Crisis: Managing AI Risk Before the Incident Happens

Boards that wait for an AI incident to define their posture have already lost the narrative.

Every board I sit with eventually asks the same question about AI: 'Are we exposed?' It is the wrong question. The right question is: 'When an incident happens, will the public, the regulator, and our customers conclude that we were in control?'

That conclusion is not formed in the press release. It is formed in the months before the incident, by the artefacts the organisation has already produced — the policies, the disclosures, the training, the escalation paths, the accountable executive. If those artefacts do not exist when the incident lands, no communications strategy can manufacture them in time.

The three failures I see most

First, AI risk is treated as an IT or legal issue. It is a reputational issue with technical and legal dependencies. The accountable owner has to be at the executive table, not two layers down.

Second, there is no agreed definition of an 'AI incident' inside the company. Without a shared definition, frontline teams will under-report, executives will be surprised, and the first time the board hears about a problem will be from a journalist.

Third, the crisis playbook has not been rehearsed against an AI scenario. The shape of an AI incident — opaque cause, ambiguous accountability, public confusion about how the model works — does not match the muscle memory of a traditional crisis drill.

What a defensible posture looks like

A defensible posture is boring on purpose. A named accountable executive. A short, plain-language statement of how the organisation uses AI and what it will not do with it. A standing review cadence. A pre-approved holding statement for the first 60 minutes of an incident. A clear line on customer notification.

Boring is the point. The artefacts that protect a board in a crisis are the ones that look unremarkable in calm weather: a one-page policy that has been signed, a register of AI use cases that has been kept current, a quarterly review that produced minutes, a tabletop exercise the executive team actually attended. None of these are difficult to produce. The difficulty is producing them before they are needed.

The board's specific job

The board does not need to become technical. It needs to ask three questions, on the record, every quarter: who owns AI risk in this organisation, what has changed since we last asked, and what would the first hour of a serious incident look like. The answers form a paper trail that demonstrates oversight — the single most important factor in how regulators and the public judge the response.

None of this is glamorous. All of it is what separates the organisations that emerge with their reputation intact from the ones that spend the next year explaining themselves.

The boards that move now still have the option of leading the conversation. The boards that wait will only get to react to it — and reaction, in an AI incident, is almost always more expensive than it needed to be.

By Nichole Brackett Walters

Caribbean CMO and advisor on marketing transformation, AI leadership, and reputation strategy. Writing from the field.

Publication

The ACTIVE Leadership Brief

A bi-weekly strategic dispatch on AI disruption, executive reputation, and the future of trust — read by global marketing and communications leaders.