AI FOI rules give UK public right to chatbot logs

The UK regulator says AI work outputs—and the prompts behind them—fall under FOI rules, potentially expanding access to ministers’ chatbot interactions.
A major shift in government transparency is taking shape in the UK, with new regulator guidance indicating the public may be able to access how officials use AI chatbots.
The Information Commissioner’s Office (ICO), the UK’s data-protection authority, has issued fresh guidance stating that when staff at a public body use AI for work purposes, the information produced is covered by Freedom of Information Act rules. Crucially, the regulator also says this can include the prompts used to generate the resulting information.
The ruling matters because it directly challenges a barrier that has limited similar requests in the past. Last year, a successful FOI bid obtained the ChatGPT logs of then–tech secretary Peter Kyle, a move widely seen as a first of its kind. After that, other news organisations attempted to replicate the request, but several were rejected, either because authorities argued the cost of complying would be too high or because the requests were characterised as “vexatious”, a legal label that allows public bodies to refuse certain applications.
The ICO’s clarification, however, could make it harder for public authorities to argue that AI-related requests fall outside FOIA altogether. Jon Baines, of the London law firm Mishcon de Reya, said it would be “very difficult” for agencies to maintain that such information is not subject to FOI obligations. In his view, the key issue is whether the authority holds information in a recorded form—whether that material sits on a server, in a log, or elsewhere inside its systems—and whether it includes what went into and what came out of AI tools.
Baines’s explanation fits with a broader practical understanding of record-keeping: if information is captured and stored while a public servant is doing their job, then it sits within the logic of FOI. Tim Turner, a data-protection expert based in Manchester, said the principle should be straightforward, adding that the scope should apply to AI interactions the same way it would to other work artefacts, such as notes written on paper.
One of the most significant possibilities emerging from the guidance is access to prompts—the text officials enter into AI systems. The ICO’s position suggests that such inputs may also qualify as “information generated” in the context of FOI, potentially widening what the public can request beyond final outputs or summaries. That change could shift the focus of scrutiny from only what AI produced to how it was asked to produce it.
There is another practical implication for how FOI responses might be handled. The ICO has indicated that public bodies may be expected to use AI to summarise large documents or datasets when responding to FOI requests. That suggestion matters because it could reduce reliance on cost-based refusals, particularly where authorities have previously dismissed requests as too burdensome to answer in full.
Not everyone views the prospect of FOI access to AI chat logs positively. Matt Clifford, chair of the UK’s Advanced Research and Invention Agency (ARIA), criticised the earlier decision that led to the release of Kyle’s ChatGPT interactions. He argued publicly that the outcome was “absurd” and that it could discourage ministers from using AI at all, reflecting a concern that transparency requirements could deter officials from adopting the tools even for routine work.
Asked whether the updated guidance was prompted by the earlier successful request, the watchdog did not confirm any link. A spokesperson said the ICO “regularly attend[s] events and seek[s] feedback” from both public authorities and requestors, adding that the organisation’s latest AI-and-FOI guidance reflected feedback the ICO had been hearing and that the content was tested with external stakeholders to ensure it was clear and useful.
For those following the governance of AI in public services, the timing and wording of the guidance now place the spotlight on how governments document their AI workflows. With the regulator signalling that recorded AI inputs and outputs can be treated as FOI-covered information, the threshold for what can be requested—and how officials might need to retain records—could be shifting fast.