Opinions expressed by Entrepreneur contributors are their own.
The technological advances we’ve seen over the past few decades have transformed how businesses communicate and market to consumers. From the emergence of the internet to the rise of social media, the landscape has continuously evolved. Now, AI is driving another major shift in the industry. As a Gen Xer, I’ve witnessed this transformation firsthand.
Initially, I was skeptical that ChatGPT, as a generative language model, could replace human creativity in content creation. After trying it out for myself, however, I was amazed by the quality of the content it produced, and I'm not the only one. In a CNN interview, media theorist Douglas Rushkoff, author of the book Program or Be Programmed, acknowledged that ChatGPT can write better than his students. That ability to write well makes it a valuable tool for marketers looking to streamline their content creation process.
As a result, 61.4% of marketers have already adopted AI or plan to use it, with 41.4% specifically using it for content marketing, according to the 2023 AI Marketing Benchmarking Report. While the time-saving and efficiency benefits of AI are clear, it’s crucial to consider the potential risks and ethical implications. With AI being relatively new and lacking clear guidelines and regulations, it’s currently the “wild west” of technology.
Why marketers must be wary of AI
When it comes to marketing, AI can be a game-changer. However, as with any new technology, there’s a learning curve, and we must be aware of the potential risks involved. Unfortunately, some marketers may prioritize the benefits of AI over these potential risks, leading to a lack of awareness and education on ethical considerations related to AI.
One example of AI being used unethically in marketing is the creation of fake online reviews or social media posts. In 2019, researchers from the University of Chicago and the University of California, San Diego, created an AI system capable of generating fake Yelp reviews that were almost impossible to distinguish from real reviews. This practice can deceive consumers and harm them by leading them to make purchasing decisions based on false information.
It’s crucial for marketers to recognize the ethical considerations surrounding the use of AI in marketing and to take steps to ensure they’re using it responsibly. By doing so, we can harness the benefits of AI without sacrificing the trust and goodwill of our customers. As AI continues to shape the marketing landscape, it’s up to us to ensure it’s used in a way that’s transparent, fair and beneficial for everyone involved.
Ethical missteps with AI within marketing typically fall within the following areas:
Inaccurate information
While generative AI like ChatGPT is an impressive language model, it's important to recognize that it's not infallible. As with any technology, its capabilities have limitations that marketers need to be aware of. The AI is trained on a fixed dataset, which means it may not know about developments or events that have occurred since its training cutoff. Additionally, natural language is often ambiguous, and the meaning of a statement can depend on contextual factors the model may misinterpret, leading to inaccurate responses.
To test ChatGPT's accuracy, I asked it a couple of questions. First, I asked, "What was the first animated film?" ChatGPT responded with "Fantasmagorie," a short animated film created by French animator Émile Cohl in 1908. However, when I reworded the question and asked, "What was the first animated cartoon?" ChatGPT responded with "Gertie the Dinosaur," a short film created by American cartoonist Winsor McCay in 1914. So, which one is correct? This is just a fun example, but it highlights the potential for inaccuracies in AI-generated content.
As marketers and PR professionals, we often work closely with media outlets, and the content we produce isn’t always fact-checked. The use of AI-generated content may increase the potential for inaccuracies and unintentional misinformation. This underscores the importance of verifying and fact-checking all content, regardless of its origin. While AI can be a valuable tool, it’s crucial to exercise caution and not rely solely on AI-generated content without human oversight.
Disclosure and transparency
As the use of AI in content creation becomes more common, the question arises: Should the public be made aware when a content piece was produced by AI? While there are no specific laws or regulations that require disclosure of the use of AI-generated content, there are existing laws and regulations that may apply in certain contexts.
For example, the FTC has issued guidelines for advertising and marketing that require disclosure of material connections between advertisers and endorsers. These guidelines also apply to AI-generated content in advertising or marketing, if the content is being used to promote a product or service. However, the issue of transparency in AI-generated content goes beyond legal requirements.
For marketers and journalists, transparency is crucial to maintaining trust with their audience. In January 2023, CNET paused AI-generated stories after The Verge reported that AI tools had been used for months without transparency or full disclosure. The lack of transparency was a problem not only for readers but also for CNET staff, who were sometimes left in the dark about how the company was using AI.
As AI technology advances, it’s possible that new regulations will be developed to address transparency concerns. In the meantime, being transparent about the use of AI in content creation is a best practice to maintain trust and integrity with the audience. With AI-generated content being used more frequently, it’s important to consider the implications of its use and ensure that it’s used in a way that’s transparent, ethical and responsible.
Copyright infringement
If AI-generated content incorporates copyrighted material, the marketer could be infringing on the exclusive rights of the copyright holder. AI systems are also typically trained on large datasets of text, images and other content; if a marketer uses copyrighted materials as part of the training data for an AI system without permission, they could be infringing on a copyright. To avoid copyright infringement when using AI-generated content, marketers should ensure they have the necessary rights and permissions for any copyrighted materials that may be included in the content. This may involve obtaining permission from the copyright holder or using only content that is in the public domain.
Racial and gender bias
In 2016, Persado, a marketing technology company, made headlines when it used AI to generate marketing messages for Hillary Clinton’s presidential campaign. While the messages were designed to appeal to different demographic groups, an analysis found they contained gender biases. Specifically, the messages targeted toward women focused on emotions and relationships, while messages for men were focused on achievement and power.
As marketers, it’s our responsibility to ensure AI systems are trained on diverse and representative datasets and audited regularly for bias. We must design AI systems with fairness and transparency in mind and ensure they reflect the ethics and values of our organization. Without proper oversight, AI-generated content may unintentionally perpetuate biases and stereotypes that could harm our reputation and relationships with our audience.
To combat this, it’s crucial to have human oversight in the creation and deployment of AI-generated content. We must ensure the content created is free from bias and aligns with the values of our organization. By doing so, we can use AI as a tool to improve our marketing efforts and create more inclusive and ethical content.
User privacy and data security
As marketers, we must respect the privacy and security of our audience's personal data. This means that if AI-generated content involves the collection, processing or use of personal data, we must obtain user consent in accordance with data protection laws and regulations.
To obtain this consent, we need to provide clear and transparent information on how data is being collected and used in the AI-generated content. This not only ensures compliance with legal requirements but also builds trust with our audience by demonstrating we value their privacy.
However, data security is also a critical consideration. Marketers must take appropriate measures to ensure the security of the personal data that is collected and used in the AI-generated content. This may involve implementing technical measures to prevent unauthorized use, access or disclosure of the data.
By prioritizing user privacy and data security in the creation and deployment of AI-generated content, we can foster trust with our audience and avoid the risks of potential data breaches or privacy violations. As we continue to integrate AI into our marketing strategies, it’s essential to remain vigilant and uphold ethical and legal standards to safeguard our audience’s personal information.
Misleading information and manipulation
Marketers must stay alert to the potential risks of using AI-generated content, especially when it comes to chatbots or virtual assistants. These tools could be programmed to provide misleading information, intentionally steering customers toward particular products or services or even deceiving them outright.
Furthermore, AI-generated social media posts or ads could be designed to manipulate customer behavior by eliciting emotional responses or creating a false sense of urgency to encourage purchases. These tactics are unethical and could damage the trust and reputation of a brand.
Therefore, we must prioritize fairness and transparency when using AI-generated content. We must ensure these technologies are not used to deceive or manipulate customers, but rather to enhance their experience and provide them with accurate information. By using AI-generated content ethically and responsibly, we can build trust with our audience and achieve long-term success for our brand. Ultimately, it’s crucial to prioritize ethical considerations and avoid any tactics that could harm our customers or our reputation.
How marketers can ensure they (or their companies) utilize AI ethically
The use of AI in marketing and public relations has sparked important discussions around ethics and responsibility. However, it’s important to recognize that AI is a powerful tool that can be used for good. At my own marketing and public relations firm, we held a team meeting to discuss the ethical ramifications of AI and how we could utilize it to improve our client work while ensuring we remain ethical in our usage.
As professionals in the field, we must assess the benefits of AI and determine how to utilize it with the lowest possible risk. We can develop written guidelines the team agrees on for how we will, and will not, use AI, taking into account factors such as accuracy and potential bias. It's crucial to decide how the use of AI will be communicated or disclosed to clients or customers and to ensure cybersecurity measures are in place to protect personal data.
Additionally, we must provide fact-checking for accuracy and monitor for bias, staying updated on the latest AI-related regulations and laws. By prioritizing transparency, accountability and responsibility, we can use AI as a tool to enhance our work and provide our clients with exceptional service.
It’s important to recognize AI is not a one-size-fits-all solution and may not be appropriate for all marketing or public relations activities. However, with clear communication, guidelines and accountability, we can ensure we approach AI usage ethically and responsibly, aligned with our values and the best interests of our clients. Let’s embrace the benefits of AI while upholding the highest ethical standards in our work.
But let's get to the bottom line: You're probably wondering if this article was written using AI. The answer is yes and no. I wrote this article based on my own, very human insights and experience; however, once it was written, AI was used for enhancement. I used AI to research examples, which I then fact-checked, and to edit the article for spelling, grammar and readability. Just to be transparent ... and ethical.