
Why AI Disclosure and Usage Policies Are Necessary in PR (and Other Industries)

April 18, 2024 Jim Donnelly
*The header image for this article was AI-generated.

In the grand tapestry of technological advancement, artificial intelligence (AI) emerges as both a marvel and a quandary. Its potential to revolutionize industries, enhance efficiency, and elevate human capabilities is undeniable.

Hold up a second – quick question. Could you tell that ChatGPT wrote the lede above? 

If not, would you want to know? (Spoiler alert: it was.)

That’s the central question facing organizations of all stripes that use generative and other AI systems (at this point, almost every organization in some capacity) and now recognize the need to develop usage and disclosure policies around the technology.

Generative AI is popping up in all the wrong places: 

  • Sports Illustrated saw its reputation tarnished when it failed to disclose that some of its “writers” were actually AI personas.
  • A peer-reviewed science journal published wildly inaccurate AI-generated images of rat physiology.
  • And the horror film “Late Night With the Devil” was roasted by some viewers after they noticed it had used AI-generated imagery without disclosure.

In each case, the reputational damage is clear.

AI Disclosure, Labeling, and Usage Policies in PR

Some organizations in the PR and communications space have already released AI disclosure and labeling policies, including the Institute for Public Relations (IPR), which released its policy in early March. 

IPR President and CEO Tina McCorkindale told Fullintel that the policy’s rollout was primarily about protecting the integrity of research, and was partly a response to seeing obviously AI-generated opening paragraphs, similar to the one that opens this article, in some research submissions. 

“We were seeing instances of ‘AI speak’, where you can tell something is obviously written by AI,” she explains, adding that other obvious AI use cropped up in translated documents.

The IPR policy was partly inspired by an article back in January by PR professor Cayce Myers, McCorkindale explains, and articulates a set of best practices on how to use generative AI responsibly and transparently to preserve research integrity. It’s a living document that includes considerations to keep in mind when using gen AI, such as:

  • Acknowledging that gen AI is prone to hallucinations (making things up).
  • Acknowledging that AI models can be biased.

The policy also includes guidelines for use in IPR-published work, such as:

  • A requirement that authors disclose how gen AI is used in published work.
  • A stipulation that gen AI be used primarily for editorial assistance.

It also includes rules around when the use of gen AI should be disclosed (and when it doesn’t need to be), and how to disclose, including:

  • Gen AI should be disclosed when used in the research process, such as collecting or analyzing data or creating new content.
  • Gen AI doesn’t need to be disclosed for topic generation, brainstorming, or minor editing.
  • Disclosure should include the AI program, the prompt, and the section to which it was applied.
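For teams that want to log such disclosures systematically, the rules above can be captured in a simple record. The sketch below is purely illustrative: the field and category names are assumptions, not part of the IPR policy itself.

```python
from dataclasses import dataclass

# Uses exempt from disclosure, per the IPR guidelines described above.
EXEMPT_USES = {"topic generation", "brainstorming", "minor editing"}

@dataclass
class AIDisclosure:
    """The three elements the policy says a disclosure should include."""
    program: str   # which generative AI tool was used
    prompt: str    # the prompt given to the tool
    section: str   # the section of the work it was applied to

def disclosure_required(use: str) -> bool:
    """Research-process uses (collecting or analyzing data, creating new
    content) must be disclosed; topic generation, brainstorming, and
    minor editing need not be."""
    return use.lower() not in EXEMPT_USES

print(disclosure_required("minor editing"))        # False
print(disclosure_required("creating new content")) # True
```

Even a lightweight record like this makes it trivial to append a consistent disclosure statement to published work.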

But it’s not just IPR that has recognized the need for an AI policy. The Public Relations Society of America (PRSA) released its own guidance for the ethical use of AI back in November of 2023.

In the government arena, New York City, Maryland, and Illinois all have laws on the books requiring AI disclosure when used in employee screening processes.

What Are the Dangers of Not Disclosing AI?

While proactive disclosure of AI builds trust with your audience and stakeholders, the dangers of uncontrolled and undisclosed generative AI are considerable:

  • Misinformation, disinformation (either on purpose or driven by AI hallucinations), and narrative attacks are some of the most serious. IPR research shows that Americans are extremely wary of misinformation and disinformation. And there’s no doubt that the proliferation of large language models (LLMs) for generative AI will worsen the problem.
  • Having proprietary company information used as training data for an AI model. This can happen if you input sensitive data into ChatGPT, for example, without explicitly telling the app not to use your data for training. 
  • Ethical and reputational risks associated with using generative AI and the public’s reaction.
  • And then there’s the temptation of simply mailing in your next project and having a bot write the whole thing for you with little to no human supervision. That’s typically a recipe for disaster for many reasons we’ve mentioned above (and also pretty lazy).

“There’s an astronomical level of risk (around misinformation and disinformation),” says Chris Hackney, the founder and CEO of AI Guardian, an Atlanta-based company that offers AI governance and compliance software. “I think you’re going to see extreme situations of generative AI used at scale to persuade large numbers of people with manipulated content.”

The sheer scale and rapidity of potential narrative attacks against an organization, for example, means PR professionals will need real-time monitoring programs to keep on top of and mitigate these threats before they explode into trending topics.

What Should Companies Be Doing on AI Policy?

Hackney’s advice for organizations that use generative or other forms of AI includes:

1. Set up an AI committee. Companies should have a centralized AI center of excellence composed of a diverse cross-section of stakeholders to ensure transparent, organization-wide discussions around AI.

2. Develop a sensible AI policy: This means organizations should develop a policy that doesn’t just tell people what they shouldn’t do—the policy should also educate users on what they can do with generative and other forms of AI.

3. Map your AI projects: He says many CEOs have no visibility into AI projects or use cases at their organizations, which invites a level of risk for the organization. Ideally, this is done via a centralized software platform, not spreadsheets or email chains.

4. Establish AI approval criteria: Articulate the criteria by which your organization decides to proceed with an AI project (or not).
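Steps 3 and 4 above amount to keeping a central inventory of AI projects and their approval status. A minimal sketch of such a registry follows; every name in it (the classes, fields, and example projects) is hypothetical, not a reference to AI Guardian’s actual product.

```python
from dataclasses import dataclass, field

@dataclass
class AIProject:
    """One entry in the organization's AI inventory."""
    name: str
    owner: str            # team responsible for the project
    model: str            # which AI system the project uses
    approved: bool = False
    risks: list = field(default_factory=list)

class AIRegistry:
    """Central inventory so leadership has visibility into AI use cases."""
    def __init__(self):
        self._projects = {}

    def register(self, project: AIProject):
        self._projects[project.name] = project

    def unapproved(self):
        """Projects that have not yet passed the approval criteria."""
        return [p.name for p in self._projects.values() if not p.approved]

reg = AIRegistry()
reg.register(AIProject("press-release-drafts", "comms", "gpt-4", approved=True))
reg.register(AIProject("resume-screening", "hr", "internal-model"))
print(reg.unapproved())  # ['resume-screening']
```

The point is less the code than the practice: a single queryable record, rather than scattered spreadsheets or email chains, is what gives executives the visibility Hackney describes.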

Conclusion: Generative AI Policies Are Here to Stay (For Now)

As AI programs (either used directly or embedded in other software) become more ubiquitous, it’s a near certainty that more organizations will develop similar AI use and disclosure policies. Doing so protects the organization and its employees.

However, whether that will always be the case is a matter for debate. 

“At this stage in the game, people don’t expect everything to be written by AI,” explains McCorkindale. “But if there is a time where it becomes more ubiquitous and it just becomes part of our cultural norm, then maybe we won’t need these policies anymore because people just expect it. But we’re not (there yet).”

No doubt that’s where we are headed (or close to it). And if the intro to this article is any indication, it may be a while until we get there. But then again, maybe not: Check out how far AI-generated video has come in the past year.

Jim Donnelly

Jim Donnelly is a former journalist and content marketing and communications consultant who works with clients across a range of industries, including mobile technology, IT security, enterprise IT consulting, and media monitoring and intelligence. His previous roles have included Chief Media Officer at Canada’s largest IT and technology association, a director at a major media intelligence firm and, prior to that, editor-in-chief at a regional business publication. 
