
AI Disclosure Policies in PR: Why Transparency Matters in 2025

April 18, 2025 · Ted Skinner
*The header image for this post was AI-generated.

In the grand tapestry of technological advancement, artificial intelligence (AI) emerges as both a marvel and a quandary. Its potential to revolutionize industries, enhance efficiency, and elevate human capabilities is undeniable.

Hold up a second – quick question. Could you tell that ChatGPT wrote the lede above? 

If not, would you want to know? (Spoiler alert: it did.)

That’s the central question facing organizations of all stripes that use generative and other AI systems (which, at this point, is almost every organization in some capacity) and that now must develop usage and disclosure policies around the technology.

Generative AI is popping up in all the wrong places: 

  • Sports Illustrated saw its reputation tarnished when it failed to disclose that some of its bylined writers were actually AI personas.
  • A peer-reviewed science journal published wildly inaccurate AI-generated images of rat physiology.
  • And the horror film “Late Night With the Devil” was roasted by some viewers after they noticed it had used AI-generated imagery without disclosure.

In each case, the reputational damage is clear.

AI Disclosure, Labeling, and Usage Policies in Public Relations

Some organizations in the PR and communications space have already released AI disclosure and labeling policies, including the Institute for Public Relations (IPR), which published its policy in early March.

IPR President and CEO Tina McCorkindale told Fullintel that the policy’s rollout was primarily about protecting the integrity of research, and came partly in response to obvious AI-generated opening paragraphs in some research submissions, similar to the one that opens this article.

“We were seeing instances of ‘AI speak’, where you can tell something is obviously written by AI,” she explains, adding that other obvious AI use cropped up in translated documents.

The IPR policy was partly inspired by a January article by PR professor Cayce Myers, McCorkindale explains, and articulates a set of best practices for using generative AI responsibly and transparently to preserve research integrity. It’s a living document that includes considerations to keep in mind when using gen AI, such as:

  • Acknowledging that gen AI is prone to hallucinations (making things up).
  • Acknowledging that AI models can be biased.

The policy also includes guidelines for use in IPR-published work, such as:

  • A requirement that authors disclose how gen AI was used in published work.
  • A stipulation that gen AI be used primarily for editorial assistance.

It also includes rules around when the use of gen AI should be disclosed (and when it doesn’t need to be), and how to disclose it, including:

  • Gen AI should be disclosed when used in the research process, such as collecting or analyzing data or creating new content.
  • Gen AI doesn’t need to be disclosed for topic generation, brainstorming, or minor editing.
  • Disclosure should include the AI program, the prompt, and the section to which it was applied.
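
To make this concrete, a hypothetical disclosure along those lines might read: “The literature summary in Section 2 was drafted with ChatGPT using the prompt ‘Summarize recent research on AI disclosure in public relations,’ then reviewed and edited by the authors.” (That wording is illustrative, not taken from the IPR policy itself.)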

But it’s not just IPR that has recognized the need for an AI policy. The Public Relations Society of America (PRSA) released its own guidance on the ethical use of AI in November 2023.

In the government arena, New York City, Maryland, and Illinois all have laws on the books requiring disclosure when AI is used in employee screening processes.

What Are the Dangers of Not Disclosing AI?

While proactive disclosure of AI builds trust with your audience and stakeholders, the dangers of uncontrolled and undisclosed generative AI are considerable:

  • Misinformation, disinformation (either on purpose or driven by AI hallucinations), and narrative attacks are some of the most serious. IPR research shows that Americans are extremely wary of misinformation and disinformation. And there’s no doubt that the proliferation of large language models (LLMs) for generative AI will worsen the problem.
  • Having proprietary company information used as training data for an AI model. This can happen when, for example, an employee inputs sensitive data into ChatGPT without explicitly opting out of having that data used for training.
  • Ethical and reputational risks stemming from how the public reacts to an organization’s use of generative AI.
  • And then there’s the temptation to simply mail in your next project and have a bot write the whole thing with little to no human supervision. That’s typically a recipe for disaster for the reasons mentioned above (and it’s also pretty lazy).

“There’s an astronomical level of risk (around misinformation and disinformation),” says Chris Hackney, the founder and CEO of AI Guardian, an Atlanta-based company that offers AI governance and compliance software. “I think you’re going to see extreme situations of generative AI used at scale to persuade large numbers of people with manipulated content.”

The sheer scale and rapidity of potential narrative attacks against an organization, for example, means PR professionals will need real-time monitoring programs to keep on top of and mitigate these threats before they explode into trending topics.

Building Your AI Policy: Four Essential Steps

If your organization hasn’t formalized an AI policy yet, you’re behind. Here’s how to catch up, based on Hackney’s recommendations:

1. Establish an AI Committee

Create a cross-functional AI center of excellence. Pull in stakeholders from legal, communications, IT, HR, and operations. This group becomes your centralized discussion forum for AI decisions, ensuring transparency and consistent standards across departments. One team using AI one way while another team follows different rules creates compliance nightmares.

2. Develop a Practical AI Policy

Your policy shouldn’t be a list of prohibitions. It should educate users on what they can do with AI, not just what they can’t. Specify which AI tools are approved, what tasks are appropriate, when disclosure is required, and how to disclose properly. Make it easy to follow, not easy to ignore.
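
For illustration only (hypothetical wording, not drawn from any specific published policy), a single entry might look like: “Approved tool: ChatGPT Enterprise. Appropriate tasks: brainstorming, first drafts, summarization. Disclosure required: whenever AI-generated text appears in client-facing or published material. How to disclose: name the tool and the sections it touched in a note accompanying the deliverable.”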

3. Map Your AI Projects

Many CEOs have zero visibility into how their organizations actually use AI. Different teams adopt different tools for different purposes, creating a sprawling ecosystem no one fully understands. Map this landscape. Document which tools are in use, which projects involve AI, and which data sources feed AI systems. Use centralized software, not spreadsheets that no one updates.

4. Define AI Approval Criteria

Articulate clear criteria for approving new AI projects. What security standards must tools meet? What disclosure requirements apply? What training do users need? Without explicit criteria, every AI decision becomes a negotiation instead of following a playbook.

As AI tools become ubiquitous—embedded in software we already use, powering features we take for granted—disclosure policies may eventually feel outdated. McCorkindale acknowledges this possibility: “At this stage in the game, people don’t expect everything to be written by AI. But if there is a time where it becomes more ubiquitous and it just becomes part of our cultural norm, then maybe we won’t need these policies anymore because people just expect it. But we’re not there yet.”

We’re probably headed there, though. AI-generated video has improved dramatically in just one year. Text generation is increasingly indistinguishable from human writing. Voice synthesis can replicate anyone’s speech patterns. The technology is racing ahead while ethical frameworks and regulatory structures lag behind.

Until we reach that hypothetical future where everyone assumes AI involvement, disclosure policies protect organizations and their audiences. They build trust by demonstrating transparency. They provide legal protection when compliance requirements tighten. They establish clear standards that employees can follow without guessing.

For PR professionals especially, these policies aren’t bureaucratic overhead. They’re practical tools for managing reputational risk in an environment where brand monitoring and crisis response increasingly involve AI-generated content. Your clients and stakeholders need to know you’re using AI responsibly. Your team needs clear guidelines for daily work. Your organization needs protection from the reputational and legal risks that come with undisclosed AI use.

The question isn’t whether to develop an AI disclosure policy. The question is whether you’re willing to risk operating without one while your competitors, industry organizations, and regulators all move toward mandatory transparency.

That opening paragraph that fooled you? It took ChatGPT three seconds to generate. But it would take months to rebuild the trust you’d lose if audiences discovered you’d been passing off AI content as human work without disclosure. That’s not a gamble worth taking.

-Ted Skinner


Ted Skinner is the VP of Marketing at Fullintel with extensive experience in AI implementation for public relations and media monitoring. A recognized expert in crisis communication strategy and competitive intelligence, Ted specializes in developing practical applications for AI in PR workflows. His thought leadership focuses on helping PR professionals leverage technology to enhance strategic communications while maintaining the human insight that drives successful media relations.

Read more of Ted’s insights on AI-powered PR strategies and follow his latest thinking on modern measurement approaches.
