Can AI implementation be risk-free? A pragmatic look at ChatGPT

Generative AI is developing by leaps and bounds. In the last two years, the number of commercial and open Large Language Model (LLM) providers has exploded, all offering something unique in the sphere of AI. Whilst ChatGPT’s ability to hold a conversation and generate a huge range of convincing content is undoubtedly impressive, Dr. Caroline Chibelushi, Artificial Intelligence expert at Innovate UK KTN, explains what’s holding ChatGPT back from becoming an essential tool for a business and how to make it part of your day-to-day operations with minimal risk.
By now, we all at least recognise the term “ChatGPT”. Whether you’ve familiarised yourself with the generative AI model itself or have simply seen the mass of headlines surrounding it, this fast-moving, ever-changing tool is truly dominating the thoughts of forward-looking business owners. And it won’t be slowing down anytime soon. An unprecedented number of people have tested the chatbot – over 100 million globally, to be specific. This speaks to its immense popularity and the effectiveness of its word-of-mouth marketing.
Despite this global uptake, few understand how Large Language Models (LLMs), such as ChatGPT, actually work, or the inaccuracies the technology can produce. In short, LLMs are algorithms trained on vast amounts of textual data – mainly sourced from the internet, and in some cases supplemented with scientific research, books and social media posts – to predict plausible continuations of text.
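To see the basic idea, consider the toy sketch below. It is nothing like how ChatGPT is actually built – real LLMs are neural networks with billions of parameters – but it illustrates the underlying principle: learn statistical patterns from training text, then generate whatever continuation those patterns make most plausible.

```python
from collections import Counter, defaultdict
import random

# Toy illustration only: a bigram "language model" that learns which word
# tends to follow which in its training text. Real LLMs are far more
# sophisticated, but the core idea is the same: learn statistical patterns
# from text, then generate a plausible continuation.
training_text = "the cat sat on the mat and the cat ate the fish".split()

next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1  # count word-to-word transitions

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        candidates = next_words[words[-1]]
        if not candidates:
            break  # no known continuation for this word
        choices, weights = zip(*candidates.items())
        # sample the next word in proportion to its training frequency
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

Even in this toy, the output can only ever be as good as the text it was trained on.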
And with such a large amount of data, one of the most challenging aspects of the technology is filtering out fake, offensive, inaccurate and biased information.
The problem, of course, is that LLMs like ChatGPT sound extraordinarily intelligent and eerily human. The tool is certainly not sentient, but it is convincing nonetheless – and being convincing carries both benefits and risks, especially for businesses. With any new business system, the risks of implementation would normally be weighed up front, yet ChatGPT seems to be treated as an exception.
Organisations are flocking to utilise the tool, and even creating their own in-house AI systems off the back of ChatGPT’s popularity.
“There’s no doubt that ChatGPT’s capabilities are impressive when it comes to generating convincing content, but businesses need to be aware that even this branch of Artificial Intelligence can still get things wrong, hallucinate facts, and be easily coaxed into producing toxic content.”

– Dr. Caroline Chibelushi
Implementing the tool wholesale, then, requires a healthy dose of caution. Even large companies that you might assume would act with more care have slipped up and felt the consequences. I’m referring to how Google’s AI ‘Bard’ wiped more than $100 billion off the market value of its parent company, Alphabet, after making a factual error in its first demo.
Other limitations that create risk – openly reported by OpenAI but not widely known – concern the data the tool is fed. ChatGPT is trained on data up to 2021 only, so its answers will occasionally be out of date. While it may generate reams of text about things that happened more recently, the tool is only guessing at what occurred after 2021 and is likely to be wrong.
Likewise, ChatGPT cannot access the internet or external links. And because AI is created and trained on data generated by humans, it can easily reflect the real and very human biases implicit in society. Recently, ChatGPT produced code to predict how senior someone is likely to be in their career, and it factored age, race and gender into its computations.
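One crude but practical precaution is to screen any model-generated code for explicit references to protected characteristics before it goes anywhere near a real decision. The sketch below is my own illustrative assumption rather than an established tool, and it is no substitute for a proper fairness review:

```python
import re

# Illustrative sketch only (not an established tool): scan model-generated
# code for explicit references to protected attributes before it goes
# anywhere near a hiring or scoring pipeline.
PROTECTED_ATTRIBUTES = ["age", "race", "gender", "ethnicity", "religion", "disability"]

def flag_protected_attributes(generated_code):
    """Return the protected attributes mentioned in the code, if any."""
    found = []
    for attribute in PROTECTED_ATTRIBUTES:
        # match whole words only, case-insensitively
        if re.search(rf"\b{attribute}\b", generated_code, re.IGNORECASE):
            found.append(attribute)
    return found

snippet = "score = years_experience * 1.5 + (0.5 * age)  # generated by ChatGPT"
print(flag_protected_attributes(snippet))  # ['age'] - flag for human review
```

A check like this only catches explicit references: bias can just as easily hide in proxy variables such as postcode or school, which is why human review remains essential.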
For those of us removed from the AI industry, it’s important to be wary of our own biases and naivety. To our ears – or eyes – ChatGPT sounds cool, confident and authoritative, so we’re prone to believe what it says. If we get ahead of ourselves without truly understanding the tool we are using, we risk embarrassing blunders that could adversely affect a business.
Overreliance on such AI models could also expose companies to legal consequences. Sam Altman, Chief Executive of OpenAI, has actively warned against relying on ChatGPT “for anything important right now”.
Not to mention, ChatGPT suffered a major outage recently, less than a week after OpenAI released its new, more powerful model, GPT-4. Businesses become extremely vulnerable if they rely on tools such as ChatGPT, which require frequent maintenance, upgrades and general monitoring to ensure they stay up and running.
The terms of use and privacy policy surrounding ChatGPT should also be looked into thoroughly by businesses before implementation. User prompts may be incorporated into the model for training purposes, and the model may later offer that information to other users querying the related subject matter.
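A sensible precaution, sketched below, is to redact obviously identifiable details before a prompt ever leaves the business. This is a minimal illustration under my own assumptions – the patterns are deliberately simple, and a production system would need far more than this:

```python
import re

# Minimal, illustrative sketch: redact obviously identifiable details from
# a prompt before it is sent to any third-party LLM service. The patterns
# below are deliberately simple and will not catch everything.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"), "[PHONE]"),    # UK-style phone numbers
    (re.compile(r"\b\d{2}-\d{2}-\d{2}\b"), "[SORT_CODE]"),         # bank sort codes
]

def redact(prompt):
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

prompt = "Email jane.doe@example.com about the payment from sort code 12-34-56."
print(redact(prompt))
# Email [EMAIL] about the payment from sort code [SORT_CODE].
```

Pattern-matching of this kind will inevitably miss things, so the safer rule remains: never put anything into a prompt that you would not be comfortable seeing made public.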
Businesses need to handle sensitive topics with caution. The more organisations that produce LLMs, the higher the risk of stored queries being hacked, leaked or, more likely, accidentally made publicly accessible – and these could include potentially sensitive or user-identifiable information.
Ultimately, then, the way we perceive and approach AI is what’s holding it back from becoming a powerful and valuable contributor to a business. As long as those wishing to implement AI are prepared and well versed in its limitations, there’s no end to what generative AI such as ChatGPT can be used for.
