BROOKLYN, NEW YORK — Experts experimenting with artificial intelligence at Goldman Sachs Group Inc. and Morgan Stanley say the technology could help detect fraud and reduce errors in algorithmic trading, but it still has significant limitations in its current form.
“A lot of work needs to be done to translate (AI) advancements into benefits for finance,” said Ambika Sukla, executive director of machine learning and AI at Morgan Stanley, at an AI conference Tuesday. “As we work on some of these new models, it’s important to proceed carefully and have a human in the loop.”
Mr. Sukla and others spoke about how artificial intelligence could impact the finance industry at a conference hosted by Ai4 Media. The conference was held amid the rise of automation, software and computer-driven “quant” funds in finance. Today, snippets of code do much of the job of a trader, as WSJ has previously reported.
At Morgan Stanley, AI could help deliver better trading and investment ideas for clients by doing more research and analyzing hundreds of documents and data sources that humans wouldn’t be able to digest, Mr. Sukla said. The technology could also be used to spot anomalies that indicate credit card and wire fraud, and to reduce errors in algorithmic trading. It also underlies virtual assistants, which could perform simple tasks and answer simple questions for employees and customers, he said.
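The firms haven’t detailed their fraud-detection systems. As a minimal sketch, anomaly detection can be as simple as flagging transactions that sit far from a robust center of recent activity; the amounts and threshold below are invented for illustration:

```python
import statistics

# Hypothetical transaction amounts for one account; the last is an outlier.
amounts = [25.0, 40.0, 32.0, 28.0, 35.0, 30.0, 27.0, 5000.0]

def flag_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` robust z-scores from the median."""
    med = statistics.median(values)
    # Median absolute deviation: less distorted by the outlier itself
    # than the standard deviation would be.
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if mad and abs(v - med) / (1.4826 * mad) > threshold]

print(flag_anomalies(amounts))  # [5000.0] -- the large wire stands out
```

Production systems use far richer features and models, but the principle is the same: score each transaction against a baseline and route the unusual ones to a human reviewer.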
Limits of current AI applications
But when it comes to answering questions that require contextual understanding and financial acumen, AI algorithms are easily fooled, making them difficult to rely on. “It’s not clear that these models are learning or just memorizing the data,” Mr. Sukla said, referring to algorithms that are trained to recognize patterns.
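The learning-versus-memorizing distinction Mr. Sukla raises is usually probed by comparing performance on the training data with performance on data the model has never seen. A toy illustration, with invented points, using a one-nearest-neighbor “model” that literally memorizes its training set:

```python
# A 1-nearest-neighbor model memorizes training data: it recalls points
# it has seen perfectly, but held-out test data reveals whether any
# generalizable pattern was learned. Toy data, purely illustrative.

train = [(1.0, 'A'), (2.0, 'A'), (8.0, 'B'), (9.0, 'B')]
test = [(1.5, 'A'), (8.5, 'B'), (5.5, 'A')]  # 5.5 sits between clusters

def predict(x, memory):
    # Return the label of the closest memorized point.
    return min(memory, key=lambda p: abs(p[0] - x))[1]

def accuracy(data, memory):
    return sum(predict(x, memory) == y for x, y in data) / len(data)

print(accuracy(train, train))  # 1.0 -- memorization looks perfect
print(accuracy(test, train))   # ~0.67 -- the gap hints at memorizing
```

A large gap between the two numbers is a standard warning sign that a model has memorized rather than learned.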
We’re a long way from so-called general purpose artificial intelligence, which acquires knowledge, learns by observation, solves problems and generates ideas, he and others said at the conference.
“To me, I think the fundamental issue is what I call deep understanding versus shallow understanding,” said Charles Elkan, managing director and global head of machine learning at Goldman Sachs. Shallow understanding is the ability to answer a limited range of questions that are similar to each other, he said. Deep understanding, he said, implies broad context and broad knowledge, including knowledge about who is asking the question, which is out of reach of today’s AI algorithms. For example, using a chatbot to do preliminary screening of job candidates is “impossible” because it requires deep understanding, he said.
There’s much speculation about whether AI systems will become as intelligent or more intelligent than humans, Mr. Elkan said. “The answer is there’s no law of nature that says super-intelligence is impossible. But the entire spectrum of current algorithms that we know for AI are not going to scale to human intelligence, let alone super-intelligence.”
Another limitation of advanced AI systems is that they cannot explain how they reach their decisions.
Several companies, private institutions and researchers are interested in building a greater level of trust between humans and machines through transparency in artificial intelligence.
Capital One Financial, for example, is researching ways that machine-learning algorithms could explain the rationale behind their answers, which could have far-reaching impacts in guarding against potential ethical and regulatory breaches as the firm uses more artificial intelligence in banking.
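Capital One hasn’t published the details of that research. The simplest form of such an explanation is decomposing a linear model’s score into per-feature contributions; the feature names and weights below are invented for illustration:

```python
# Hypothetical weights from a linear credit-scoring model and one
# applicant's feature values. All numbers are made up.
weights = {'utilization': -2.0, 'on_time_payments': 1.5, 'account_age': 0.5}
applicant = {'utilization': 0.9, 'on_time_payments': 0.95, 'account_age': 0.3}

# Each contribution says how much a feature pushed the score up or down --
# a per-decision rationale a reviewer or regulator could inspect.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {c:+.2f}")
print(f"total score: {score:+.2f}")
```

More complex models need approximation techniques to produce comparable per-feature attributions, which is what makes explainability an active research problem rather than a bookkeeping exercise.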
Machine learning enables computers to learn from data with minimal programming, and is a large part of artificial intelligence, a term that encompasses the techniques used to teach computers how to learn, reason, perceive, infer, communicate and make decisions like humans do.
Agus Sudjianto, head of corporate model risk at Wells Fargo, said his team has developed ways to make machine learning models as transparent as statistical models.
“One of the biggest barriers in applying machine learning or AI is really the explainability,” he said.