The challenges of GenAI in fintech
Under cybersecurity disclosure rules adopted by the Securities and Exchange Commission (SEC) in 2023, public companies in the United States are required to disclose any material cybersecurity incidents. Going forward, these organizations will need in-depth knowledge of the nature, scope, timing and impact of any security incident. In the era of generative artificial intelligence (GenAI), that is even more complicated.
The financial services industry has historically been slow to adopt new technologies into its offerings, given the incredibly sensitive nature of the personally identifiable information (PII) it handles daily. But GenAI’s rapid spread across industries and its ease of public access make it difficult to ignore. Public fintech organizations are among those already grappling with SEC reporting requirements, and GenAI adds a new layer of uncertainty.
GenAI in fintech
Fintech is just one of many sectors weighing how best to approach GenAI. Its capabilities can boost productivity and efficiency, freeing employees to focus on higher-priority work. Specifically, GenAI can accelerate critical processes such as fraud detection, customer service and in-depth analysis of massive collections of PII and other data.
To do this well, GenAI must be trained on accurate, domain-specific data for each use case; otherwise, the model may hallucinate or reproduce underlying biases.
GenAI has already made companies the subject of unfavorable headlines. Most infamously, Air Canada’s chatbot caused problems when a passenger purchased a plane ticket after the AI assured him he could later claim a refund of the inflated last-minute fare under the airline’s bereavement policy. When the passenger went to collect the refund, Air Canada told him the chatbot had provided incorrect information about the policy and that no refund would be given. A Canadian tribunal ruled otherwise, holding that AI-based chatbots are extensions of the companies that deploy them.
No one wants to become the next big news story because of an AI malfunction, and given SEC reporting requirements, fintech companies may need to pay particular attention to staying ahead of such scenarios.
The security implications of GenAI
While some organizations and their boards of directors have an “all-in” mindset on GenAI, others are watching and waiting. Fintech companies that have already begun harnessing the power of GenAI will need to lay the groundwork to ensure they have full visibility into its use across their networks. And those taking a slower approach will need the ability to verify that shadow AI has not infiltrated their workflows.
As threat actors continue to aggressively pursue data exfiltration and ransomware attacks, industries holding valuable PII also have to worry about AI-based attack capabilities in the hands of cybercriminals, including the use of AI to find vulnerabilities that could lead to severe data breaches. Threat actors have already experimented with AI-generated spear-phishing campaigns that use realistic deepfakes and other content to exploit human employees, and we are seeing evidence of AI-written malware.
Organizations must be prepared for the worst. To meet the transparency requirements set by the SEC and ensure that GenAI does not become an overall security risk, laying the foundation for AI infrastructure must be a top priority for organizational leaders and their boards of directors.
The basics of AI infrastructure
Boards and executives pursuing solutions that align with SEC rules, and that account for the public availability of GenAI, should emphasize infrastructure tailored for holistic visibility and education: forensic analysis, auditability, AI governance and employee training.
You can’t manage what you can’t see, meaning risks like shadow AI will become rampant until organizations can gain insight into how, if at all, GenAI is being leveraged in internal processes. Any AI activity on internal networks should be easily viewable and monitored for anomalous or unwanted uses.
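One way to bootstrap that visibility is to scan egress or proxy logs for traffic to known GenAI endpoints. The Python sketch below is a minimal illustration rather than a definitive implementation: the domain inventory, the log format and the column names are all assumptions that would need to match a real environment.

```python
import csv
from collections import Counter

# Hypothetical inventory of domains associated with public GenAI
# services; a real deployment would maintain this list centrally.
GENAI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "copilot.microsoft.com",
    "gemini.google.com",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count GenAI requests per user in a proxy log.

    Assumes a CSV export with 'user' and 'host' columns; adjust the
    field names to match the actual proxy's log format.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in find_shadow_ai("proxy.csv").most_common():
        print(f"{user}: {count} GenAI request(s)")
```

Even a crude report like this gives security teams a starting point for deciding which GenAI usage is sanctioned and which is shadow AI.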
Additionally, the ability to log and monitor GenAI usage across internal networks as part of an AI forensics program enables fintech companies to automatically identify, track and mitigate potential security risks arising from GenAI. Since the SEC’s requirements include providing comprehensive details on security incidents, monitoring AI activity through AI forensics on internal networks will be a critical capability going forward.
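To make this concrete, the sketch below wraps an arbitrary LLM call so that every prompt and response lands in an append-only audit trail. Here, llm_call is a hypothetical stand-in for whatever client an organization actually uses; the wrapper assumes only that it maps a prompt string to a response string.

```python
import json
import time
import uuid
from typing import Callable

AUDIT_LOG = "genai_audit.jsonl"  # append-only, prompt-level history

def audited_completion(llm_call: Callable[[str], str],
                       user_id: str, prompt: str) -> str:
    """Send a prompt to an LLM while recording it for forensics.

    llm_call is a hypothetical stand-in for the organization's real
    client; the wrapper only assumes it maps a prompt string to a
    response string.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user_id,
        "prompt": prompt,
        "response": None,
    }
    try:
        record["response"] = llm_call(prompt)
        return record["response"]
    finally:
        # The record is written even if the call fails, so the audit
        # trail captures attempted as well as successful usage.
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
```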
Another aspect of GenAI forensics and auditability that will prove critical is the ability to provide forensic information down to individual prompts. Today, most companies lack the infrastructure to track and monitor AI usage at that level. In cases where employees accidentally or intentionally feed sensitive information to an AI model in the form of prompts, having a GenAI history on file showing every prompt used internally will be invaluable for reporting purposes.
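Once such a trail exists, assembling incident detail becomes a query rather than a scramble. A minimal sketch, assuming the JSONL format written by the wrapper above:

```python
import json
from datetime import datetime, timezone

def prompts_in_window(log_path: str, start: datetime,
                      end: datetime) -> list[dict]:
    """Collect every logged prompt inside an incident window.

    Reads the JSONL audit trail produced by audited_completion above,
    yielding the kind of prompt-level detail an SEC disclosure may
    require.
    """
    records = []
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            ts = datetime.fromtimestamp(rec["timestamp"], tz=timezone.utc)
            if start <= ts <= end:
                records.append(rec)
    return records
```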
Educating and training employees on the use of GenAI, and on how to exploit its benefits responsibly, is another key factor in complying with SEC regulations. Many popular large language models (LLMs), such as ChatGPT and Copilot, are publicly hosted services that may retain the data fed into them, meaning any PII accidentally entered into a prompt can constitute a data leak. With proper education and training, employees will better understand how to use GenAI appropriately and minimize the risk of data breaches caused by improper use.
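Training can be reinforced with technical guardrails. As one illustration, a thin redaction layer can strip obvious PII patterns before a prompt ever leaves the network; the regexes below are illustrative only, and a production system would lean on a dedicated PII-detection service rather than a handful of patterns.

```python
import re

# Illustrative patterns only; real PII detection is far broader.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace likely PII in a prompt with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact_pii("Refund jane.doe@example.com, SSN 123-45-6789"))
# -> Refund [REDACTED-EMAIL], SSN [REDACTED-SSN]
```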
As boards and organizational leaders continue to consider the implications of GenAI in fintech and whether they should accelerate its adoption or wait, the SEC’s impacts on GenAI adoption are clear. The onus is now on public companies to better monitor and mitigate security risks, forcing high-value industries to reconsider their security and AI strategies.
By laying the foundation for GenAI governance and auditability, fintech companies can better prepare for the risks that come with GenAI adoption, whether they pause it or push ahead. In fact, it’s the next logical step.