The Consumer Financial Protection Bureau’s recent issue spotlight took a close look at the growing use of chatbots in the banking industry, discussing the limits and risks of using the technology for customer service.
According to the CFPB, an estimated 37% of the U.S. population interacted with a bank’s chatbot in 2022, and that number is expected to rise. As chatbot use increases, it becomes important for banks to ensure they remain compliant and provide efficient services to their customers.
Chatbot Red Flags
Banks use chatbots for a variety of tasks, including retrieving account balances and processing bill payments. In its analysis, the CFPB points out that not all chatbots are created equal: some use simple rule-based systems, while others are trained on real customer conversations.
Among these chatbots, the CFPB identified several red flags:
- Violations of consumer protection laws
- Diminished customer service and trust
- Potential harm to consumers
Chatbot Issues Identified by the CFPB
Based on consumer complaints and the CFPB’s examination of chatbots across the industry, the Bureau identified several issues with the technology that contribute to the risks listed above.
Chatbots may struggle with complex questions. Some chatbots labeled as artificial intelligence may only be capable of answering basic questions, which limits their ability to:
- handle disputes when they don’t recognize trigger words or lack the functionality to respond to inquiries outside their parameters
- provide accurate, reliable, or sufficient information, particularly when the underlying datasets have inaccurate information
- resolve customer issues, for instance when chatbots don’t have the ability to address specific issues and customers can’t reach a human representative
Consumers often have a hard time reaching a customer service representative after a frustrating experience with a chatbot.
Companies may prioritize revenue-generating features rather than improving user experience.
From a security standpoint, chatbots can be phished by other chatbots when they’re not programmed to detect suspicious patterns or impersonation attempts. Customers may also have a false sense of confidence in chatbot technology and overshare personal information.
Improving Chatbot Experience
So, how can we navigate these risks? Here are a few strategies:
Expand the triggering criteria for disputes beyond specific words or syntax so chatbots recognize when customers raise concerns. Implementing natural language processing (NLP) techniques can help chatbots better interpret customer input. They should be able to analyze and understand the context of the dispute, offer relevant solutions, and guide customers through the resolution process.
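As a rough illustration of broadening trigger criteria beyond exact keywords, the sketch below fuzzy-matches a customer message against known dispute phrasings. The phrase list, threshold, and `looks_like_dispute` helper are hypothetical stand-ins; a production system would use a trained intent classifier rather than string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical phrasings customers use to raise a dispute; a real
# system would learn these from labeled conversation data.
DISPUTE_PHRASES = [
    "i want to dispute this charge",
    "this transaction is wrong",
    "i did not authorize this payment",
    "report an unauthorized charge",
]

def looks_like_dispute(message: str, threshold: float = 0.7) -> bool:
    """Flag a message as a possible dispute even without an exact
    trigger word, by fuzzy-matching against known phrasings."""
    text = message.lower()
    return any(
        SequenceMatcher(None, text, phrase).ratio() >= threshold
        for phrase in DISPUTE_PHRASES
    )

print(looks_like_dispute("I didn't authorize this payment!"))  # True
print(looks_like_dispute("What time do you open tomorrow?"))   # False
```

Note that the message with a contraction and different punctuation still matches, which is exactly what a rigid keyword trigger would miss.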
Make sure customers can easily access human customer service representatives when chatbots are unable to address their issues effectively. This can involve implementing clear escalation pathways and minimizing wait times to connect with a human representative.
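A minimal sketch of such an escalation pathway follows; the `Session` class, the failure threshold, and the request phrases are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical escalation policy: after a fixed number of failed bot
# answers, or on an explicit request, hand the session to a human.
MAX_FAILED_ANSWERS = 2
HUMAN_REQUEST_WORDS = ("human", "agent", "representative")

@dataclass
class Session:
    failed_answers: int = 0

def next_step(session: Session, message: str, bot_understood: bool) -> str:
    """Decide whether the bot answers, retries, or escalates."""
    if any(word in message.lower() for word in HUMAN_REQUEST_WORDS):
        return "escalate"  # the customer asked for a person
    if not bot_understood:
        session.failed_answers += 1
        if session.failed_answers >= MAX_FAILED_ANSWERS:
            return "escalate"  # the bot is stuck; stop looping
        return "retry"
    return "answer"

s = Session()
print(next_step(s, "where is my refund", bot_understood=False))   # retry
print(next_step(s, "where is my refund??", bot_understood=False)) # escalate
```

The key design choice is that escalation is unconditional once the customer asks for a person, so the bot never traps a frustrated customer in a retry loop.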
Address vulnerabilities in chatbot systems to prevent unauthorized access, breaches, or phishing attempts. Develop mechanisms to detect and identify suspicious patterns or behaviors exhibited by chatbots interacting with each other. Implement secure web protocols and robust privacy protections to safeguard customer information.
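One small piece of this, guarding against oversharing, can be illustrated as a redaction pass over chat transcripts before they are stored or logged. The patterns below are simplified examples; a real card-number check would also validate the digits with the Luhn algorithm.

```python
import re

# Simplified, hypothetical redaction pass: mask text that looks like
# an SSN or a payment card number in a chat transcript.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("My SSN is 123-45-6789 and card 4111 1111 1111 1111"))
# My SSN is [SSN REDACTED] and card [CARD REDACTED]
```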
Improve chatbots’ ability to handle diverse dialects and language nuances to cater to customers with limited English skills or specific language needs. This can involve training chatbots on a wider range of dialects and implementing language models with increased linguistic flexibility.
Regularly update and refine chatbot systems based on user feedback and evolving customer needs. Conduct user testing and incorporate customer input to identify pain points and enhance the chatbot experience.
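That feedback loop can be sketched as a simple per-intent rating log; the `record` and `worst_intents` helpers below are hypothetical, showing one way to surface the topics customers rate worst so those areas get fixed first.

```python
from collections import defaultdict

# Hypothetical feedback log: thumbs-up/down counts per chatbot intent.
feedback = defaultdict(lambda: {"up": 0, "down": 0})

def record(intent: str, helpful: bool) -> None:
    feedback[intent]["up" if helpful else "down"] += 1

def worst_intents(min_votes: int = 5) -> list:
    """Intents sorted by share of negative feedback, worst first."""
    scored = [
        (intent, counts["down"] / (counts["up"] + counts["down"]))
        for intent, counts in feedback.items()
        if counts["up"] + counts["down"] >= min_votes
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

record("card_dispute", helpful=False)
record("card_dispute", helpful=False)
record("card_dispute", helpful=True)
print(worst_intents(min_votes=3))  # card_dispute tops the list
```

A production system would persist these ratings and feed the worst-scoring intents directly into user testing and release planning.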
While AI and chatbots can help your business become more efficient, it’s crucial to be aware of the potential risks. Banks and other businesses have a responsibility to ensure that their chatbots are not just efficient, but also reliable, respectful, and compliant with the law.