Is your "AI strategy" actually just a story you're telling yourself?

Shadow AI is not a tooling problem. It's a strategy gap. Why a Microsoft Copilot licence is not an AI strategy, and what mid-market firms should do instead.

The CEO told us they had an AI strategy. They had a Microsoft licence and a Notion page. That sentence describes more boardroom conversations in 2026 than anyone wants to admit. The pattern repeats across SMEs, PE-backed scale-ups, and regulated mid-market firms. Buy one or two enterprise AI products. Send an internal announcement. Assume the matter is handled. Meanwhile, real AI use is happening somewhere else: personal accounts, browsers, no logging, no governance.

The scale of the problem

IBM's Cost of a Data Breach Report 2025, based on more than 600 organisations studied with the Ponemon Institute, linked 20% of breaches to shadow AI use. 97% of organisations reporting an AI-related security incident lacked proper AI access controls. 63% had no AI governance policy at all. A high level of shadow AI added an average of USD 670,000 to the cost of a breach. [1] A 2025 TELUS Digital survey of US enterprise employees found 68% of generative AI users at work were using personal accounts, and 57% had entered sensitive information into them. 29% knew their employer's policy explicitly prohibited this and did it anyway. [2] BlackFog's 2026 survey of 2,000 employees at companies of 500+ staff put unsanctioned AI use at 49%, with senior leaders the most relaxed about it. [3] This is not an edge case. It's the workforce.

The Samsung lesson, three years on

In April 2023, within three weeks of letting engineers use ChatGPT, Samsung recorded three separate incidents of staff pasting confidential information into the public tool, including source code and an internal meeting transcript. The company banned generative AI on its devices the following month. [4][5] The engineers were senior and technically literate. They weren't negligent. They were using a productivity tool the way it was designed to be used. The failure was structural: a category of tool with no precedent in corporate IT, deployed faster than any enterprise software in history, into an organisation whose governance frameworks did not yet recognise it. Most mid-market firms today operate roughly the way Samsung operated in March 2023. The models are now far more capable, and the attack surface is larger.

Why a licence isn't a strategy

A licence tells employees which tool the company paid for. It doesn't classify the data they're about to type. It doesn't log what was sent. It doesn't check vendor terms. It doesn't audit how often this happens, by whom, or with which categories of data. It doesn't produce evidence the board can show regulators, insurers, or acquirers.

Regulators and insurers are already moving

EU AI Act obligations on general-purpose AI providers came into force on 2 August 2025. Enforcement powers, including fines of up to €35m or 7% of global turnover, apply from 2 August 2026. [6][7] UK firms with EU customers, EU residents in their data, or products supplied into the EU are in scope. The ICO published its AI and Biometrics Strategy in 2025 and committed to a statutory code of practice on AI and automated decision-making. [8] The Data (Use and Access) Act became law on 19 June 2025. [9] A dedicated UK AI Bill isn't expected before late 2026. [10] None of this is a reprieve: UK GDPR, the Equality Act, FCA expectations, and ICO enforcement already apply to AI today. Insurers are moving faster than parliaments. AIG, Great American, and W.R. Berkley have filed for AI-related exclusions in liability and cyber lines; others have introduced affirmative AI endorsements conditional on the insured's governance posture. [11][12] Underwriter questions have shifted from "do you use AI?" to "which models, who decided, what controls, what verification process?" [13] Most UK mid-market firms will hit these questions in writing for the first time at their 2026 cyber renewal.

Anchor on real frameworks

You don't need to invent governance from scratch. The NIST AI Risk Management Framework (AI RMF 1.0) is free, voluntary, and organises the work into four functions: govern, map, measure, manage. A generative AI profile was added in July 2024. [14][15] ISO/IEC 42001:2023 is the first certifiable international standard for AI management systems. It uses the same Plan-Do-Check-Act structure as ISO 27001 and integrates with existing security and privacy programmes. [16] Borrow NIST's vocabulary for internal work. Treat ISO/IEC 42001 as the destination. Both are referenced in EU AI Act compliance guidance, in major insurers' underwriting questions, and increasingly in PE buy-side technical due diligence. What follows fits inside one calendar year for most £5m to £100m firms.

Phase 1: Discover (weeks 1 to 6)

You can't govern what you can't see. The IBM finding that 97% of breached firms lacked proper AI access controls is a direct consequence of organisations not knowing what they had. Surveys understate shadow AI by a factor of three or four because employees self-censor. An audit looks at evidence:

- Network and DNS logs for traffic to AI provider domains
- Browser extension inventories
- OAuth grants on Google Workspace and Microsoft 365
- Expense data for AI subscriptions paid on personal cards
- Code repository activity
- Meeting recording tools
- Customer-facing chatbots and integrations
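As a minimal sketch of the first signal, assume your resolver can export queried hostnames one per line. The file path and the domain list below are illustrative assumptions, not a maintained blocklist or a specific product's output:

```python
# Sketch: flag DNS queries to known AI provider domains in an exported
# query log. Domain list and log format are illustrative assumptions;
# substitute your resolver's export and a maintained domain list.
from collections import Counter

AI_DOMAINS = (
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "midjourney.com",
)

def scan_dns_log(path: str) -> Counter:
    """Count queries whose hostname matches a known AI provider domain."""
    hits: Counter = Counter()
    with open(path) as log:
        for line in log:
            host = line.strip().lower()
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_dns_log("dns_queries.txt").most_common():
        print(f"{domain}: {count} queries")
```

Even this crude count is usually enough to show leadership the gap between the tools they bought and the tools in use.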

A typical 60-to-80-person professional services firm has 30 to 80 distinct AI tools in active use. Three to ten of those are unknown to leadership. At least one will have permissive default terms allowing the vendor to use customer data for training. Senior staff are usually more exposed than junior staff, not less. Output: a written register of every tool, owner, department, data categories observed, vendor terms, and a preliminary risk rating.
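A minimal sketch of one register entry, with field names that are assumptions rather than a prescribed schema:

```python
# Sketch: one entry in the AI tool register described above. Field names
# and the risk scale are assumptions; keep whatever your auditors expect.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    tool: str                     # e.g. "ChatGPT (personal account)"
    owner: str                    # a named individual, not a team
    department: str
    data_categories: list[str] = field(default_factory=list)
    vendor_trains_on_data: bool = True   # assume worst case until terms are read
    vendor_terms_reviewed: bool = False
    risk_rating: str = "unrated"  # e.g. unrated / low / medium / high

example = AIToolRecord(
    tool="Otter.ai (personal account)",
    owner="J. Smith",
    department="Sales",
    data_categories=["confidential", "personal"],
)
```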

Phase 2: Classify and decide (weeks 6 to 12)

Two parallel tracks.

Data classification

Most mid-market firms have no classification scheme, or one written in 2014 and ignored. A workable scheme has six tiers: public, internal, confidential, highly confidential, regulated, personal. Each tier maps to permitted AI tools and permitted use modes. This work is hated and skipped. It's also what determines whether everything that follows survives contact with reality.

Use-case risk tiering

A junior copywriter drafting blog posts is in a different risk tier from a paralegal pasting client matter notes into a free consumer tool, which is different again from an autonomous agent with write access to production CRM. Borrow the EU AI Act's tiering or NIST's profiles. The principle is constant: high-volume, low-risk uses get light-touch approval; low-volume, high-risk uses get heavyweight approval. For most firms in this size band, the sequence is:

- Buy first. Enterprise tenants of reputable providers with no-training contractual terms cover most legitimate use cases.
- RAG second, when grounding against your own documents matters.
- Fine-tune rarely and late.
- Build from scratch, almost never.
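Returning to the classification track: the tier-to-tool mapping can be written down directly. A minimal sketch, assuming normalised tier keys and illustrative tool names (nothing here names a real product policy); deny by default is the point:

```python
# Sketch: map each data classification tier to the AI tools permitted
# to touch it. Tool names are placeholders; anything unmapped is denied.
PERMITTED_TOOLS = {
    "public":              {"enterprise-llm", "code-assistant", "consumer-llm"},
    "internal":            {"enterprise-llm", "code-assistant"},
    "confidential":        {"enterprise-llm"},
    "highly_confidential": set(),   # human-only until an exception is approved
    "regulated":           set(),
    "personal":            set(),
}

def is_permitted(tier: str, tool: str) -> bool:
    """Deny anything not explicitly mapped, including unknown tiers."""
    return tool in PERMITTED_TOOLS.get(tier, set())

assert is_permitted("internal", "enterprise-llm")
assert not is_permitted("regulated", "enterprise-llm")
```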

Vendor evaluation criteria worth writing once and reusing: data residency, training-on-customer-data terms, sub-processor list, model versioning, deletion and audit logs, SOC 2, ISO 27001, ISO/IEC 42001 if available.

Phase 3: Architect (weeks 12 to 20)

Three components must exist by the end of this phase.

A sanctioned tooling stack

Enterprise tenants for the two or three foundation-model providers your staff actually want, with SSO, no-training terms, and DLP integration. Enterprise meeting recording. Enterprise code assistants for engineering teams. Optionally, an internal RAG assistant against your document repository.

Identity, access, and logging

Every sanctioned tool authenticates via SSO. Every tool logs usage, with retention long enough to satisfy your longest applicable regulator (six years in UK financial services). AI agents with write access are treated as identities in their own right, with the same operational controls as human accounts.

An exception path

The most overlooked piece. The fastest way to feed shadow AI is to ban tools your staff need without offering a way to add new ones. The path is: a one-page form, a 48-hour SLA, a named approver (not a committee), a standard set of vendor questions, and a public list of approved tools updated weekly.
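A minimal sketch of the exception path's moving parts, with the 48-hour SLA and the named approver taken from the description above and every field name an assumption:

```python
# Sketch: the one-page exception request as a record, with the 48-hour
# SLA made checkable. The named approver and the deadline are the parts
# that matter; field names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

SLA = timedelta(hours=48)

@dataclass
class ExceptionRequest:
    requester: str
    tool: str
    use_case: str
    data_tier: str            # from the classification scheme in Phase 2
    approver: str             # a named person, not a committee
    submitted: datetime
    decided: datetime | None = None

    def sla_breached(self, now: datetime) -> bool:
        """True if no decision was recorded within the SLA window."""
        deadline = self.submitted + SLA
        return (self.decided or now) > deadline
```

Anything that makes this queue visible, even a weekly export, keeps the approver honest and the approved-tools list current.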

Phase 4: Govern (months 5 to 9)

Mid-market firms over-build governance: three new committees, an AI Council, an Ethics Board. Rarely the right answer at this size. Use existing structures. The existing risk committee picks up AI as a standing item. The existing DPO absorbs AI data handling. The existing CISO or head of IT owns technical controls. One named accountable executive, usually the COO, CTO, or General Counsel, owns the integrated picture and reports to the board.

Policy that people read

One page, two at most. Long policies get acknowledged and ignored. Cover what's allowed, what's banned, where the sanctioned stack lives, how to request an exception, what gets logged, and the consequences. Republish every six months with the date stamp visible.

The executive shadow problem

BlackFog's data shows senior leaders are the most relaxed about unsanctioned use. [3] In practice, the people writing the policy are the most likely to ignore it, pasting board materials and HR investigations into personal accounts because the sanctioned tenant feels slow. Two practical fixes:

- The accountable executive reviews their own usage logs quarterly with the CISO, on the record.
- The sanctioned tooling for the executive team is genuinely best-in-class, not the cheapest tier.
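A minimal sketch of the first fix, assuming usage logs export as (user, tool, timestamp) rows; the row shape and the executive list are assumptions about your tenant's export:

```python
# Sketch: quarterly summary of the executive team's own sanctioned-tool
# usage, for the on-the-record review with the CISO. Row shape and the
# executive list are assumptions.
from collections import Counter

EXECUTIVES = {"coo@example.com", "cto@example.com", "gc@example.com"}

def executive_usage(rows: list[tuple[str, str, str]]) -> dict[str, Counter]:
    """rows: (user, tool, iso_timestamp). Returns per-executive tool counts."""
    summary: dict[str, Counter] = {user: Counter() for user in EXECUTIVES}
    for user, tool, _ts in rows:
        if user in EXECUTIVES:
            summary[user][tool] += 1
    return summary
```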

Phase 5: Operate (month 6 onwards)

Quarterly board reporting

One page, the same shape every time: tools sanctioned, tools in exception, incidents in the period, training completion, audit findings outstanding. If you can't produce that page, the strategy doesn't exist yet.

Useful metrics

- Coverage: percentage of identified shadow tools retired, replaced, or formally approved
- Logging completeness
- Time to approve on exceptions
- Incident count and severity
- Renewal posture, refreshed quarterly so the cyber renewal isn't a scramble
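A minimal sketch of the first and third metrics, assuming the Phase 1 register carries a status field and the exception queue records decision times (both field shapes are assumptions):

```python
# Sketch: board-page metrics from the Phase 1 register and the exception
# queue. Status values are assumptions; what matters is that the numbers
# come from live records, not a slide written from memory.
from statistics import median

RESOLVED = {"retired", "replaced", "approved"}

def coverage(statuses: list[str]) -> float:
    """Share of identified shadow tools retired, replaced, or approved."""
    if not statuses:
        return 1.0  # nothing identified means nothing outstanding
    return sum(s in RESOLVED for s in statuses) / len(statuses)

def median_hours_to_approve(durations_hours: list[float]) -> float:
    """Median time to decision on exception requests, in hours."""
    return median(durations_hours) if durations_hours else 0.0

print(f"Coverage: {coverage(['retired', 'approved', 'open']):.0%}")  # 67%
```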

Incident response

Update the incident response playbook to recognise prompt injection, RAG vector-store poisoning, AI-generated phishing trained on the firm's public content, and synthetic-identity attacks on procurement. Test it annually with at least one AI-themed scenario. Re-run a smaller version of Phase 1 every six months: new tools appear constantly.

Where rollouts die

Most governance programmes don't fail outright. They die quietly between months four and six. Four traps account for most of the deaths.

The legal-review trap

Months one to three feel productive. Around month four, legal raises objections for the first time and the work stalls awaiting alignment. Put legal in the room from week one.

The "we banned ChatGPT" failure

Heavy prohibition without sanctioned alternatives produces an immediate spike in shadow use. Samsung's 2023 ban worked only because Samsung also accelerated an internal alternative.

The compliance-as-checklist trap

Frameworks are useful only when they structure live operations, not annual audits.

The proof-of-concept graveyard

Most internal projects ship a PoC in month two, demo it to the board, win applause, then die because no production budget, owner, or operational support was planned. Every PoC needs a named owner committed to either production deployment or decommissioning by a fixed date.

The window is closing

The ICO has signalled enforcement intent. The EU is one quarter away from full enforcement powers. Insurers are pricing AI governance into renewals. PE buyers are commissioning pre-deal AI assessments alongside conventional technical due diligence. Any business considering a sale, raise, or material customer audit in the next twenty-four months should expect AI questions in writing. The firms that get ahead in 2026 won't be the ones that bought the most licences. They'll be the ones that can answer one question, in writing, on demand:

What AI is operating inside this organisation, who decided, and where is the evidence?

Bibliography

[1] IBM & Ponemon Institute. Cost of a Data Breach Report 2025. 30 July 2025. https://www.ibm.com/reports/data-breach

[2] TELUS Digital. “AI at Work Survey.” 26 February 2025. https://www.telusdigital.com/about/newsroom/telus-digital-survey-reveals-enterprise-employees-use-of-shadow-ai

[3] BlackFog, reported in CIO. “Roughly half of employees are using unsanctioned AI tools.” 30 January 2026. https://www.cio.com/article/4124760/

[4] TechCrunch. “Samsung bans use of generative AI tools after April internal data leak.” 2 May 2023. https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/

[5] Bloomberg News. “Samsung Bans Generative AI Use by Staff After ChatGPT Data Leak.” 2 May 2023.

[6] European Commission. AI Act Service Desk — Implementation Timeline. https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act

[7] DLA Piper. “Latest wave of obligations under the EU AI Act take effect.” 7 August 2025.

[8] Information Commissioner’s Office (ICO). AI and Biometrics Strategy 2025. https://ico.org.uk/about-the-ico/our-information/our-strategies-and-plans/artificial-intelligence-and-biometrics-strategy/

[9] Information Commissioner’s Office (ICO). Guidance on AI and Data Protection. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

[10] King & Spalding. “EU & UK AI Round-up — July 2025.”

[11] CSO Online. “Insurance carriers quietly back away from covering AI outputs.” 2026.

[12] Business Insurance. “Insurers, brokers adjust as AI exclusions emerge.” 2026.

[13] Wheelhouse Advisors. “Why Generative AI Is Breaking Cyber Insurance.” 2026.

[14] National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0). 26 January 2023. https://www.nist.gov/itl/ai-risk-management-framework

[15] National Institute of Standards and Technology (NIST). AI RMF Generative AI Profile. 26 July 2024.

[16] ISO/IEC 42001:2023. Information Technology — Artificial Intelligence — Management System. https://www.iso.org/standard/42001
