AI Can Lie Like Humans. Maybe Better!

Artificial Intelligence (AI) tools have burst onto the scene with unprecedented business-altering capacity. The buzz around them is becoming deafening, and their promises and threats should be understood and capitalized on, because this genie will not go back into the bottle. What's seen cannot be unseen.

Two classes of AI tools, graphic-arts generators and large language models (LLMs) that can be prompted with plain-language instructions, have been released to the masses, unleashing a historic boost in creativity and cost savings for savvy business owners. Those who choose to ignore them will likely feel the impact on the bottom line.

Our focal point in this post will be the large language models, Google's Bard and OpenAI/Microsoft's ChatGPT, that are being deployed in marketing/advertising, research, legal, and communications departments in businesses around the globe.

As promising as these tools are, they need to be tightly controlled, or they can create problems ranging from mildly embarrassing to legally and financially devastating.

Trust But Verify

To use an LLM (large language model), you type in a prompt. This can be a simple question, like a search, or a complex set of instructions. The model draws on its training data and can produce a detailed response almost before you blink twice. But it's incumbent on the user to confirm the accuracy of that response.
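For readers who want to script this rather than type into the chat window, here is a minimal sketch of the prompt-and-response loop using OpenAI's Python client. The model name and the prompt are illustrative placeholders, and the sketch assumes an API key is set in the OPENAI_API_KEY environment variable.

```python
# Minimal prompt/response loop with OpenAI's Python client (openai >= 1.0).
# Assumes OPENAI_API_KEY is set in the environment; the model name and
# prompt below are illustrative placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompt = "Summarize the key benefits of drip irrigation for a home gardener."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

answer = response.choices[0].message.content
print(answer)

# The response arrives in seconds, but nothing here verifies it.
# That step, checking the answer against sources you trust, is still yours.
```

With that workflow in mind, consider a couple of examples: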

ChatGPT

I recently asked ChatGPT to provide a detailed description of a product, its characteristics, and how it could benefit me. I already knew much of what the product could do, but was looking for more information to decide whether it was worth spending time with.

In seconds, ChatGPT produced multiple paragraphs of convincing detail with a recommendation to give it a go. It was masterful, except for one thing: it had absolutely nothing to do with the product I thought I'd asked about!

Since you can question these tools and probe for more information, I suggested ChatGPT had given me misleading information. ChatGPT doubled down and assured me all the information was completely accurate.

To be fair to ChatGPT, I shouldn't have even put the query to the program, because the product is newer than the program's training data, so it had no way to research it. Nonetheless, it had the audacity to brazenly and convincingly assert its accuracy.

Google’s Bard

Bard has some advantages and challenges that ChatGPT does not. Advantages include access to the massive proprietary databases Google has accumulated over the years. One such source that Bard scans when producing responses is the transcripts of YouTube videos.

To demonstrate the potential challenge this presents, the hosts of the All-In Podcast put Bard through a light test. Hosts David Sacks and Jason Calacanis queried Bard about publications and positions Sacks had taken on a couple of topics. Bard misattributed several quotes to Sacks that did not accurately represent his position on the issues. It effectively put words in Sacks' mouth that were completely wrong. Sacks described Bard as 'hallucinating.' This is problematic for Google and potential Bard users and, although unlikely in this case, could create major legal issues for a user attempting to claim such statements as fact.

Why did Bard make this mistake? What Sacks and Calacanis realized is that the quotes had come from a podcast they had recorded a few weeks earlier. Bard had read the transcript of that podcast and assumed all of the text to be the words of Sacks. This is an issue with YouTube transcripts: they are a running narrative that does not indicate when one speaker stops and another carries on. If you read a transcript while watching the video, you hear the change in voice; if you read the transcript alone, it's not always clear who is speaking. That was Bard's problem in this case.

Much of what Bard had attributed to Sacks was actually said by Calacanis, who held a completely different opinion on the subject under discussion. The podcast hosts suggested that Google has further product refinement ahead, but that the promise and potential will be life-changing.

So how should you use these tools? Or should you at all?

Use the tools, but be very specific in the instructions you give a large language model. Don't ask it open-ended questions that leave it to its own devices to produce a response. Give it the content you want it to produce and ask it to do it better. Then confirm that what it produces is what you want. A sketch of this pattern follows below.
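As a concrete illustration, here is one hedged sketch of that advice using the same Python client: instead of asking an open-ended question, you supply the facts yourself and ask the model only to improve the wording, then verify the result by hand. The draft text, model name, and instructions are all hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical draft: you supply every fact, so the model has nothing to invent.
draft = (
    "Our spring sale runs May 1-15. All garden tools are 20% off. "
    "Use code SPRING20 at checkout."
)

# Narrow instructions: improve the writing only; forbid changing the facts.
prompt = (
    "Rewrite the announcement below in a friendlier, more engaging tone. "
    "Do not add, remove, or alter any facts, dates, prices, or codes.\n\n"
    + draft
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# The last step stays human: check every date, price, and code against the draft.
```

The pattern generalizes: the tighter the instructions and the more of the source material you provide, the less room the model has to hallucinate.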

Sonet Dynamics helps businesses leverage these tools in their marketing, advertising, sales, public relations, and communications campaigns in ways that increase efficiency and save money. Call to discuss how we can help you.
