AI Disclosure Policies in PR: Why Transparency Matters in 2025
*The above image was AI-generated.*
In the grand tapestry of technological advancement, artificial intelligence (AI) emerges as both a marvel and a quandary. Its potential to revolutionize industries, enhance efficiency, and elevate human capabilities is undeniable.
Hold up a second – quick question. Could you tell that ChatGPT wrote the lede above?
If not, would you want to know? (Spoiler alert: it was.)
That’s the central question facing organizations of all stripes that use generative and other AI systems (at this point, nearly every organization in some capacity) and that now need to develop policies governing how the technology is used and disclosed.
Generative AI is popping up in all the wrong places:
- Sports Illustrated saw its reputation tarnished when it failed to disclose that some of its “writers” were really AI.
- A peer-reviewed science journal published, and later retracted, wildly inaccurate AI-generated images of rat anatomy.
- And the horror film “Late Night With the Devil” was roasted by some viewers after they noticed it had used AI-generated imagery without disclosure.

In each case, the reputational damage is clear.
AI Disclosure, Labeling, and Usage Policies in Public Relations
Some organizations in the PR and communications space have already released AI disclosure and labeling policies, including the Institute for Public Relations (IPR), which released its policy in early March.
IPR President and CEO Tina McCorkindale told Fullintel that the policy’s rollout was primarily about protecting the integrity of research, and was partially in response to seeing obvious AI-generated opening paragraphs in some research submissions, similar to the one that opens this article.
“We were seeing instances of ‘AI speak,’ where you can tell something is obviously written by AI,” she explains, adding that other obvious AI use cropped up in translated documents.
The IPR policy was partly inspired by an article back in January by PR professor Cayce Myers, McCorkindale explains, and articulates a set of best practices on how to use generative AI responsibly and transparently to preserve research integrity. It’s a living document that includes considerations to keep in mind when using gen AI, such as:
- Acknowledging that gen AI is prone to hallucinations (making things up).
- Acknowledging that AI models can be biased.
The policy also includes guidelines for use in IPR-published work, such as:
- A requirement that authors disclose how gen AI is used in published work.
- A stipulation that gen AI be used primarily for editorial assistance.
It also includes rules around when the use of gen AI should be disclosed (and when it doesn’t need to be), and how to disclose, including:
- Gen AI should be disclosed when used in the research process, such as collecting or analyzing data or creating new content.
- Gen AI doesn’t need to be disclosed for topic generation, brainstorming, or minor editing.
- Disclosure should include the AI program, the prompt, and the section to which it was applied.
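Applied in practice, a disclosure line following the three-part format above (program, prompt, section) might be assembled like this. A minimal sketch: the function name and wording are illustrative, not part of the IPR policy.

```python
def format_disclosure(program: str, prompt: str, section: str) -> str:
    """Build a simple AI-use disclosure naming the tool, the prompt, and where the output appears."""
    return (f"AI disclosure: {program} was used in the section "
            f"\"{section}\" with the prompt \"{prompt}\".")

print(format_disclosure(
    "ChatGPT",
    "Summarize the survey results in two sentences",
    "Findings",
))
```

The point is less the code than the habit: a fixed template makes disclosures consistent and easy to audit across a publication.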
But it’s not just IPR that has recognized the need for an AI policy. The Public Relations Society of America (PRSA) released its own guidance for the ethical use of AI back in November of 2023.
In the government arena, New York City, Maryland, and Illinois all have laws on the books requiring disclosure when AI is used in employee screening.
What Are the Dangers of Not Disclosing AI?
While proactive disclosure of AI builds trust with your audience and stakeholders, the dangers of uncontrolled and undisclosed generative AI are considerable:
- Misinformation, disinformation (whether deliberate or driven by AI hallucinations), and narrative attacks are among the most serious. IPR research shows that Americans are extremely wary of misinformation and disinformation, and the proliferation of large language models (LLMs) is likely to worsen the problem.
- Having proprietary company information absorbed as training data for an AI model. Input sensitive data into ChatGPT, for example, without explicitly opting out of model training, and that data may be used to train future versions of the model.
- Ethical and reputational risks if audiences react badly to undisclosed or careless generative AI use.
- And then there’s the temptation to simply mail in your next project and have a bot write the whole thing with little to no human supervision. That’s typically a recipe for disaster, for the reasons mentioned above (and it’s also pretty lazy).
“There’s an astronomical level of risk (around misinformation and disinformation),” says Chris Hackney, the founder and CEO of AI Guardian, an Atlanta-based company that offers AI governance and compliance software. “I think you’re going to see extreme situations of generative AI used at scale to persuade large numbers of people with manipulated content.”
The sheer scale and rapidity of potential narrative attacks against an organization, for example, means PR professionals will need real-time monitoring programs to keep on top of and mitigate these threats before they explode into trending topics.
Building Your AI Policy: Four Essential Steps
If your organization hasn’t formalized an AI policy yet, you’re behind. Here’s how to catch up, based on Hackney’s recommendations:
1. Establish an AI Committee
Create a cross-functional AI center of excellence. Pull in stakeholders from legal, communications, IT, HR, and operations. This group becomes your centralized discussion forum for AI decisions, ensuring transparency and consistent standards across departments. One team using AI one way while another team follows different rules creates compliance nightmares.
2. Develop a Practical AI Policy
Your policy shouldn’t be a list of prohibitions. It should educate users on what they can do with AI, not just what they can’t. Specify which AI tools are approved, what tasks are appropriate, when disclosure is required, and how to disclose properly. Make it easy to follow, not easy to ignore.
3. Map Your AI Projects
Many CEOs have zero visibility into how their organizations actually use AI. Different teams adopt different tools for different purposes, creating a sprawling ecosystem no one fully understands. Map this landscape. Document which tools are in use, which projects involve AI, and which data sources feed AI systems. Use centralized software, not spreadsheets that no one updates.
4. Define AI Approval Criteria
Articulate clear criteria for approving new AI projects. What security standards must tools meet? What disclosure requirements apply? What training do users need? Without explicit criteria, every AI decision becomes a negotiation instead of following a playbook.
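Steps 3 and 4 can be sketched as a tiny project registry: map each AI project’s tool, purpose, and data sources, then check it against explicit approval criteria. This is a minimal illustration under assumed criteria, not AI Guardian’s software; every field name and rule here is a placeholder for what your own committee defines.

```python
# Hypothetical approval criteria an AI committee might set (step 4).
APPROVED_TOOLS = {"ChatGPT", "Claude"}
REQUIRED_FIELDS = {"tool", "purpose", "data_sources", "disclosure_plan"}

def approve(project: dict) -> tuple[bool, list[str]]:
    """Check a mapped AI project (step 3) against the criteria; return (approved, problems)."""
    problems = []
    missing = REQUIRED_FIELDS - project.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if project.get("tool") not in APPROVED_TOOLS:
        problems.append(f"tool not on approved list: {project.get('tool')}")
    if not project.get("disclosure_plan"):
        problems.append("no disclosure plan")
    return (not problems, problems)

ok, reasons = approve({
    "tool": "ChatGPT",
    "purpose": "draft social copy",
    "data_sources": ["public press releases"],
    "disclosure_plan": "label posts as AI-assisted",
})
print(ok, reasons)  # True []
```

The value of an explicit check like this is exactly what Hackney describes: approval becomes a playbook anyone can run, not a negotiation.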
As AI tools become ubiquitous—embedded in software we already use, powering features we take for granted—disclosure policies may eventually feel outdated. McCorkindale acknowledges this possibility: “At this stage in the game, people don’t expect everything to be written by AI. But if there is a time where it becomes more ubiquitous and it just becomes part of our cultural norm, then maybe we won’t need these policies anymore because people just expect it. But we’re not there yet.”
We’re probably headed there, though. AI-generated video has improved dramatically in just one year. Text generation is increasingly indistinguishable from human writing. Voice synthesis can replicate anyone’s speech patterns. The technology is racing ahead while ethical frameworks and regulatory structures lag behind.
Until we reach that hypothetical future where everyone assumes AI involvement, disclosure policies protect organizations and their audiences. They build trust by demonstrating transparency. They provide legal protection when compliance requirements tighten. They establish clear standards that employees can follow without guessing.
For PR professionals especially, these policies aren’t bureaucratic overhead. They’re practical tools for managing reputational risk in an environment where brand monitoring and crisis response increasingly involve AI-generated content. Your clients and stakeholders need to know you’re using AI responsibly. Your team needs clear guidelines for daily work. Your organization needs protection from the reputational and legal risks that come with undisclosed AI use.
The question isn’t whether to develop an AI disclosure policy. The question is whether you’re willing to risk operating without one while your competitors, industry organizations, and regulators all move toward mandatory transparency.
That opening paragraph that fooled you? It took ChatGPT three seconds to generate. But it would take months to rebuild the trust you’d lose if audiences discovered you’d been passing off AI content as human work without disclosure. That’s not a gamble worth taking.
Ted Skinner is the VP of Marketing at Fullintel with extensive experience in AI implementation for public relations and media monitoring. A recognized expert in crisis communication strategy and competitive intelligence, Ted specializes in developing practical applications for AI in PR workflows. His thought leadership focuses on helping PR professionals leverage technology to enhance strategic communications while maintaining the human insight that drives successful media relations.