The Inevitability of Artificial Intelligence
By Andrew Flynn
When the artificial intelligence-powered chatbot ChatGPT became accessible to the public in 2022, Adam Zeldin, CPA, CA, CIRP, LIT, recalls “having something of an existential crisis about what my future as an insolvency professional looks like.”
Like many others, the Richter vice-president could see the vast potential for an AI tool that responds to questions in such a human way that it’s hard not to imagine it doing much of the work a human does.
Zeldin found the AI chatbot very helpful in proofreading even long samples of text. Pushing the technology a little further, he attempted to get it to help in drafting sections of a report that would require it to draw on prior court decisions and decide which were relevant.
“It was helpful in summarizing some things, but I found I couldn't trust that it had the full database and was able to summarize all of the information. It was essentially a completeness and accuracy issue, and I just wasn't able to garner that trust.”
Zeldin had run up against an experience very common among lawyers, accountants, and other business professionals who have tried to harness chatbots: it was pretty good at some things and dreadful at others. But he also found it quite chilling — he could see the huge potential of even this one relatively limited AI tool.
“The immediate thought was trepidation and fear: am I not going to be needed to draft reports anymore? Am I no longer going to need any expertise in financial modeling or court reporting?” Zeldin says.
“What does this look like a year from now, two years from now, five years from now? Can I simply just write a couple words into it and it spits out a final product? This is where the fear comes back into my head.”
AI doesn’t threaten jobs – yet
Bill Syrros, BDO’s exponential labs leader for innovation and change, doesn’t see AI becoming a threat to any jobs in the insolvency profession anytime soon. An expert in AI systems, Syrros moved to BDO when his startup Lixar IT was acquired by the company in 2020. He sees nothing but opportunity for all professionals, including those in insolvency, in the use of AI.
“I see it as sort of a new arms race for people to decide how effective they want to be,” he says. “Do they want to be the superpower, or do they want the status quo?”
He equates the current generative AI boom to the rise of Google 20 years ago. “You know, for those people who wanted to look a phone number up in the phonebook, they still had that power to do that. But for those who used Google, it was 10 times faster and you didn't have to have this paper brick on the corner of your desk.”
Generative AI is what insolvency professionals should be looking at when they want to understand the latest jump forward in AI technology. Many software and cloud platforms and almost all cybersecurity services have been using embedded AI components for years. Google, the Microsoft Azure cloud platform and countless others are already harnessing the power of AI in products on the market and more are coming every day.
But generative AI is different because it’s publicly available and appears to behave in eerily human ways. Ask ChatGPT or Google’s Bard a question and they respond in perfect English. They can be creative: they can write you a fictional story, a poem, or even an original joke. They can write a letter to your grandmother or correspond in business language with your client, drawing on information you provide.
Large language models and deep learning drive AI today
Much of what the tech industry refers to as “AI” is in truth a set of programs and applications that employ large language models (LLMs). These are built to emulate human intelligence using machine learning, a branch of AI that employs algorithms to evaluate data and learn not just how a language works but how to improve its understanding of that language.
New technologies such as OpenAI’s ChatGPT employ these strategies to create programs that can react to prompts from humans in a very human way: answering questions and even holding conversations that can feel all too real.
It’s not thinking like a human at all. What it is doing is using deep learning, a subset of machine learning, to create content by predicting text. In other words, it produces a response by drawing on the data it has access to and predicting the next word, then the next sentence, then the next paragraph it should write.
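For readers who want a concrete picture of what “predicting the next word” means, the toy Python sketch below does the same thing on a tiny, invented probability table. Every word, probability, and function name here is made up for illustration; a real large language model works the same way, but over tens of thousands of possible tokens, with probabilities learned from its training data rather than typed in by hand.

```python
import random

# A tiny, invented table of "which word tends to follow which" probabilities.
# A real model learns billions of such patterns from its training data.
NEXT_WORD_PROBS = {
    "a": {"licensed": 0.6, "consumer": 0.4},
    "licensed": {"insolvency": 0.9, "professional": 0.1},
    "insolvency": {"trustee": 0.8, "proceeding": 0.2},
    "consumer": {"proposal": 1.0},
}

def pick_next_word(current_word):
    """Sample the next word in proportion to its probability, as a generative model does."""
    options = NEXT_WORD_PROBS.get(current_word)
    if not options:
        return None  # the toy model has learned nothing about what follows this word
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt_word, max_words=5):
    """Build a response one predicted word at a time."""
    words = [prompt_word]
    while len(words) < max_words:
        next_word = pick_next_word(words[-1])
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(generate("a"))  # e.g. "a licensed insolvency trustee"
```

Because each word is picked with some randomness, two runs can produce different answers, which also hints at why, as Hollister explains later in this piece, a model can talk itself into confident-sounding errors.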
For example, when asked to define the role of a Licensed Insolvency Trustee (LIT), ChatGPT sounds like it knows what it’s talking about:
A Licensed Insolvency Trustee (LIT), formerly known as a Bankruptcy Trustee in Canada, is a professional who is authorized and licensed by the Office of the Superintendent of Bankruptcy (OSB) to administer various insolvency proceedings, including bankruptcies and consumer proposals. This role is crucial in helping individuals, businesses, and corporations manage their financial difficulties and address insolvency situations.
It also gives the inquirer some pretty solid advice:
If you are facing financial difficulties and considering options like bankruptcy or a consumer proposal, consulting with a Licensed Insolvency Trustee can provide you with expert guidance on how to best navigate your situation.
And (as is sure to have many insolvency professionals sighing with relief) ChatGPT is quite candid about the fact that it is not able to replace an LIT: “While I can provide basic information and help users understand concepts related to insolvency, the role of a Licensed Insolvency Trustee involves a level of expertise and legal standing that an AI like me cannot replicate.”
Chatbots such as ChatGPT, Microsoft’s Bing AI, and Google Bard can sound very human and give very cogent, helpful advice. They are being used by companies around the world to create marketing materials, sift through research studies, collate reams of data that would otherwise take a human many hours of work, and much more.
Could an AI ultimately do what an LIT does?
Many see the usefulness of such a powerful tool, but others feel that their livelihoods could be threatened. Could a chatbot, armed with the details of an insolvency file, simply take over for an LIT?
Probably not for decades to come, if ever, says Matissa Hollister, an expert on AI use in business and an assistant professor of organizational behaviour at McGill University’s Desautels Faculty of Management. But its value could lie in its ability to automate the most mundane tasks in any field where there is an abundance of data, such as the financial industry at large.
“With generative AI, the system is only going to help with some part of the job,” Hollister says. “So, the questions you have to think about are ‘How am I going to reconfigure this job? What does that mean? What does it mean to have part of the job taken away? What are the implications for that in terms of job quality?’”
Even with custom-built AI systems that are more task-based and work only from data, a human needs to check the output to get the details right, she says.
“That's a bigger issue with generative AI than it is for these custom-built systems. There's a need to be really straightforward and explain to the human who’s using it what the system is doing, what its limitations are, how much human input is still valued.”
“People are more likely to use it if they're not scared of it. They're more likely to use it well if they understand that the system isn't perfect and what its limitations are.”
It’s extremely important that a human stays in the generative AI loop to fine-tune what a large language model produces. And that human needs to understand where the AI is getting its data and how to refine the way it responds, something referred to as “prompt engineering.”
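As a rough illustration of what prompt engineering looks like in practice, the sketch below assembles a structured prompt that hands the model its source documents, spells out its limits, and reserves the final word for a human reviewer. The send_to_chatbot function is a stand-in stub invented for this example, not any real chatbot API; the point is the shape of the prompt, not the plumbing.

```python
# A minimal sketch of prompt engineering with a human in the loop.
# send_to_chatbot() is a stand-in stub, not a real chatbot API.

PROMPT_TEMPLATE = """You are assisting a Licensed Insolvency Trustee.
Use ONLY the source documents provided below. If the answer is not in them, say so.
Cite the numbered document each statement comes from.

Source documents:
{documents}

Task: {task}
"""

def send_to_chatbot(prompt):
    """Stand-in for a call to whichever chatbot service a firm actually uses."""
    return "(model response would appear here)"

def build_prompt(documents, task):
    """Assemble the engineered prompt: instructions first, then sources, then the task."""
    numbered = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return PROMPT_TEMPLATE.format(documents=numbered, task=task)

def draft_for_review(documents, task):
    """Get a draft from the chatbot and flag it for review by the responsible professional."""
    draft = send_to_chatbot(build_prompt(documents, task))
    return "DRAFT - requires human review\n\n" + draft

print(draft_for_review(["Statement of affairs filed June 2023..."], "Summarize the debtor's assets."))
```

Telling the model to stick to supplied documents and to cite them does not make hallucination impossible, but it makes the output far easier for a professional to verify.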
Syrros points to a recent case where a colleague was worried that ChatGPT might have used copyrighted material in its responses. “I said, ‘Well, have you asked ChatGPT for the sources of the material it provided for you?’ She was surprised it could do that. It's spitting out references to things – ask it no differently than a human being would talking to another person. If it doesn't respond and the book doesn't exist there, or the source doesn't exist, then that's a pretty good tell, right?”
A cautionary tale for financial and legal professionals
There is no question that generative AI has the potential to enhance productivity, reduce time sifting through complex data, build more accurate financial profiles, and simplify thousands of complex tasks from marketing to customer service to security.
But as ground-breaking as this new technology may be, it’s crucial to note that it is still in development and still has some very significant drawbacks. The recent story of two New York personal injury lawyers should serve as a cautionary tale to any practitioner working in the insolvency field. Steven A. Schwartz and Peter LoDuca were fined $5,000 after they used ChatGPT to supplement legal research in a case. Unfortunately, the chatbot (which, incidentally, was trained on data that ends in 2021) made up a number of the cases cited as precedent before the court. Such blips, referred to as hallucinations, are not uncommon; they are one of the technology’s real dangers and reinforce the need for human guidance.
“The training data is basically everything that humans have ever written and made public, though at its base, this is still a system that was just predicting with some randomness what the next word would be,” Hollister says.
“Part of the reason why it hallucinates is because it is sort of randomly predicting the next word, but once it picks that word, it kind of gets itself stuck in a rut. Once it goes in one direction, it's forced to try to stay consistent with that choice. And that's why it will start making these super weird, strong arguments and that kind of thing.”
The usefulness of AI
But that doesn’t mean it won’t be extremely useful, possibly game-changing for the industry in the decades to come.
“We're sort of in round one, you know,” Syrros says. “Fast-forward five years from now, and people will create their own personas like who they are, and there will be a large language model of everyone: you, me, anyone who wants one, and it'll sound like us. It'll respond like us. It'll even speak like us if we asked it to because I'd imagine video and voice will be part of this eventually.”
Such virtual personas could actually represent us in the virtual world, interfacing with clients using data unique to us, our personalities, and our qualifications: data that we ourselves are custodians of and choose to input into the model.
“We'll actually use them as part of our job profile,” says Syrros. “My LinkedIn profile might not be me the human, it'll be my bot that runs it. It knows how I speak, it knows my intonation, it knows what I'm interested in. It might be connected to my email and other things.”
“So, in terms of how does this affect an insolvency practitioner? I would answer: ‘How does this affect any professional in general?’”
What should professionals do now to take advantage of AI?
“I've had many discussions with people of all different vintages, older and younger, about AI,” says Zeldin. “The older generation, the way they sort of talk about it, is that we've seen this before, where a new technology is introduced and everyone starts to overreact. Then it finds its way in, we adopt a glass-half-full kind of attitude, and adapt and innovate accordingly.”
“I'm hoping that's the way this plays itself out,” Zeldin says. “But I don't know. Something about this seems different.”
The consensus among experts is that, just as with any other new technology, professionals will need to learn how to use generative AI properly and effectively. That is the only way they can assess how it can hurt or help their business.
“When I’m asked about how someone can make the right use of generative AI, I respond ‘educate yourself,’” says Syrros. “Embrace new learning. Develop complementary skillsets. Learn how to prompt engineer. You know what's crazy about prompt engineering? It's like setting the table before dinner.
“Developing those skills to understand and use a technology like generative AI is super important. Expanding your network to include people who know how to use AI, that can speak to AI. In the end, just a hands-on experience: use it, start getting familiar with it.”
“The thing is that the practical applications you're going to be taught can be applied across all industries. Across all professions.”
To read the ChatGPT response to a query on the impact of AI on the insolvency industry in Canada, please click here.