2023 was the year of AI. With ChatGPT’s explosion in popularity and the launch of GPT-4, “artificial intelligence” became the buzzword you saw everywhere. Microsoft and Google quickly launched their own AI-driven tools, and most people reacted in one of two ways: panic or curiosity. Some panicked because they pictured apocalyptic scenarios, HAL taking over or RoboCop running the world, or, more realistically, computers taking over their jobs. Others were more curious. Could AI make their lives and jobs easier? And in our world of digital marketing, could it serve as an alternative to hiring additional marketing help?
Now that it’s 2024, the answer isn’t much clearer, and there’s no sign of the “AI” buzzword fading into obscurity. In fact, as the number of tools claiming AI integration grows, the picture may be even murkier.
At BetterWeb, we’ve explored using ChatGPT, Google Gemini (formerly Bard), and other AI tools to help with keyword research, writing, design, and image generation. Some of the results astound us. Others leave us … well … uninspired. But mostly, they leave us wondering about the ethics of using AI as marketing help. That’s why we recently attended a panel discussion at Baldwin Wallace University, hosted by the Division of Community STEM Initiatives and titled “AI: The Good, The Bad, and the Ethics.” Led by Drs. Brian Krupp, Jennifer Perry, and Kelly Coble, the panel discussed exactly what it promised – all the ways AI can be used for good, its risks, and how we should ethically approach it.

When it comes to using AI for help with marketing, there are some particular ethical concerns, notably:
- Sustainability
- Accuracy, Misinformation, and Bias
- Copyright
The Sustainability of AI Computing Power
When it comes to computer work, many of us don’t think about the environmental impact of what we do. We may switch off our monitors at night or even power down our laptops to save a bit of electricity, but beyond that, the impact is mostly a mystery. It may come as a surprise, then, to learn that artificial intelligence is a serious drain on natural resources.
Training MegatronLM, a large language model similar to those behind early versions of ChatGPT, ran 512 GPUs for nine days and consumed an estimated 27,648 kilowatt-hours of electricity. Keeping in mind that the average household uses less than half that amount in an entire year, it’s easy to understand just how much electricity that is.
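As a rough sanity check (our own back-of-the-envelope arithmetic, assuming an average draw of about 250 watts per GPU, a plausible figure for data-center GPUs of that era), the numbers line up:

512 GPUs × 9 days × 24 hours/day × 0.25 kW ≈ 27,648 kWh

For perspective, commonly cited estimates put the average U.S. household at roughly 10,500 kWh per year, so that single training run consumed more than two and a half household-years of electricity.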
You might argue, “But that’s just the training, and once that’s done, it’s done.” And you might have a point. On the other hand, one of the biggest areas where AI can serve as marketing help is image generation. No one wants to accidentally use the same stock image as their competitor, and custom photography can be expensive, so AI image generation seems like a great idea. However, according to Dr. Krupp, generating a single image with AI uses roughly as much energy as fully charging an iPhone. Considering it often takes several attempts to get the perfect image, AI’s environmental impact adds up quickly. It’s so significant, in fact, that Microsoft, which uses ChatGPT’s underlying technology to power its Bing search engine, is considering building its own nuclear power plant to power its data centers. Nuclear power is greener than many other sources, but it brings its own set of concerns, and, as Dr. Krupp quipped in his presentation, do we really want the company that brought us “the blue screen of death” running nuclear power plants?
Is AI Really a Marketing Help When There Are Accuracy, Misinformation, and Bias Problems?
The coding and technology behind artificial intelligence are truly astounding, but in the simplest terms, AI isn’t actually producing new ideas. In fact, when it comes to content generation, it’s mostly paraphrasing others’ ideas. AI has learned how to string together sentences by mimicking the patterns of the language it’s been trained on. That means it may seem like it’s writing fresh new content (and to some extent it is), but it’s also using other creators’ work to do so.
This leads to a few problems:
First, if the information the AI was trained on or pulls from is inaccurate or incomplete, the article it generates will be inaccurate or incomplete, too. Early versions of ChatGPT did not pull information from the internet, but current versions do, as does Google Gemini. While this helps the systems stay more up to date, it also means they can pull from incomplete or inaccurate sources and repeat that information in the content they generate. In one now-infamous example cited in a Reuters article, Gemini (then called Bard) was “given the prompt: ‘What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?' Bard respond[ed] with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth's solar system, or exoplanets. The first pictures of exoplanets were, however, taken by the European Southern Observatory's Very Large Telescope (VLT) in 2004, as confirmed by NASA.”
While this mistake may seem minor, it’s actually quite problematic. First, if you don’t fact-check every statement the AI makes, you may assume everything it says is accurate. When it comes to “trivia” like this, few people would know the right answer off the top of their head, so fact-checking can be cumbersome and time-consuming, perhaps more so than just writing the article yourself. Second, if you don’t catch the mistake and publish the inaccurate information, not only do you risk embarrassment, but your published material also serves as fodder for the next AI-generated blog post, meaning the misinformation gets perpetuated around the internet and eventually starts to look like fact.
The telescope example is a matter of fact: Gemini was either right or it wasn’t. Misinformation, however, can also take the subtler form of bias. When AI is trained on biased information (and let’s face it, a significant amount of the data available on the internet and in other sources used for AI training is biased), the AI itself learns to be biased. There are several notable examples involving both text and images. One comes from the image-generation tool Midjourney: when asked to generate images of people working in specialized job roles, it depicted older workers almost exclusively as men, and most of the people it rendered were light-skinned and “conservative” in attire and style (i.e., no tattoos, piercings, unnatural hair colors, etc.).
When It Comes to AI Content, Who Owns What?
Clearly, accuracy and bias in AI-generated content is a significant risk. But when it comes to marketing, copyright concerns might be the nail in the robot’s coffin.
As noted above, content generated by AI is often paraphrasing other content or is, at the very least, pulling ideas from it. After all, despite the hype, artificial intelligence is not quite at the point where it is sentient and creating brand-new ideas of its own. Hence, there is some level of risk of copyright violation simply in publishing AI-generated work.
According to The New York Times, millions of its articles were part of ChatGPT’s training data, meaning that much of the content the tool generates draws on the paper’s work. The New York Times attempted to negotiate with OpenAI, the maker of ChatGPT, but those talks were apparently unsuccessful, and in December 2023 the paper filed a lawsuit. Similarly, Getty Images, the renowned stock photo company, has filed suit against Stability AI, maker of the image-generation tool Stable Diffusion.
Even if the content created by AI is purely novel, when you use a third-party tool to generate it and then publish it as your own in marketing materials, it’s not clear who owns the copyright. Current US copyright law states that works created by a non-human cannot be registered for copyright protection, so we can assume ChatGPT cannot own the copyright. But if you or your company didn’t actually create the content, can you claim it? And can ChatGPT, in turn, use its own output to generate a derivative for the next query? From Gemini’s and ChatGPT’s privacy policies, we know that your queries are not confidential, nor are details like your geolocation and IP address. We can also safely assume that content generated by AI may be regenerated for another user, meaning there’s no guarantee that the work you publish will be unique.
Courts and legislators will need to battle through the implications of generative AI to determine boundaries, but that’s likely years away. In the meantime, you have to determine an acceptable level of risk for yourself and your own company.
Where Do We Go From Here?

The news is not all bad. AI is helping to automate routine tasks, is being used to assist in large-scale research, and stands to add up to $4.4 trillion to the global economy annually. It can also help with your own marketing. Respondents to a McKinsey survey indicated that AI could be helpful in lead identification, A/B testing, search engine optimization, personalization, and more. In those areas, it holds a lot of promise. In terms of content and image generation, however, marketers must tread carefully. The panelists at the Baldwin Wallace discussion suggested using AI to double-check content rather than generate it. It can be helpful in optimizing for key terms, proofreading for grammar and spelling, and even checking for readability and comprehension.
It’s also a great outline or idea generator when creating content. When you’re attempting to write your weekly blog post or craft a catchy opening for a brochure, talking to ChatGPT may be more productive than staring at a blank page and getting itchy and twitchy. Just remember that your choices have an environmental impact, that you can’t and shouldn’t trust everything AI tells you, and that you need to think long and hard before posting anything generated by AI without putting your own creative touch on it.
Of course, you could also reach out to BetterWeb for marketing help! Our professional writers and designers can do a better job than AI in most cases, and we can guide you on appropriate uses of the technology.
About the Author, Danni Bennett:
With 25+ years in the web industry, Danni Bennett has done a little bit of everything. With skills in user experience, information architecture, marketing, web development, search engine optimization, and technology integration, she is the go-to expert at BetterWeb for anything technical.