Risk and reward: How AI is driving the US debt collection industry

The use of AI in the sector looks set only to grow: although just over 10% of US collections firms currently employ AI-driven tools, around 60% are considering implementing them.

The benefits are clear: AI tools can predict which debtors are likely to pay, negotiate payments virtually, and segment and profile customers. Experts say this is good news for financial services providers, because systems can distinguish debtors who are likely to pay from those less able to honour demands. Another positive is that many consumers are more at ease discussing sensitive financial issues with a non-judgemental chatbot than with a human.
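To make the idea of propensity prediction and segmentation concrete, here is a deliberately simplified sketch. The field names, weights, and tier thresholds are illustrative assumptions only; they do not reflect any vendor's actual model, which would typically be trained on historical repayment data rather than hand-set.

```python
# Hypothetical sketch: scoring a debtor's propensity to pay and
# mapping the score to an outreach tier. All features and weights
# below are invented for illustration.

def propensity_score(debtor: dict) -> float:
    """Toy linear score clamped to [0, 1] from a few account features."""
    score = 0.5
    score += 0.3 * debtor.get("past_payment_rate", 0.0)          # repayment history helps
    score -= 0.2 * min(debtor.get("days_delinquent", 0) / 180, 1.0)  # long delinquency hurts
    score -= 0.1 * min(debtor.get("open_accounts", 0) / 10, 1.0)     # many open accounts hurt
    return max(0.0, min(1.0, score))

def segment(debtor: dict) -> str:
    """Map the score to one of three illustrative outreach tiers."""
    s = propensity_score(debtor)
    if s >= 0.6:
        return "self-serve reminder"    # likely to pay with a nudge
    elif s >= 0.4:
        return "chatbot negotiation"    # may settle via a payment plan
    return "hardship review"            # route to human follow-up

print(segment({"past_payment_rate": 0.9, "days_delinquent": 10}))
# → self-serve reminder
```

A production system would replace the hand-set weights with a trained model, but the shape of the pipeline (score, then segment, then choose a contact channel) is the same.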

But what are the ethical concerns?

The data on which AI tools base their decisions carries an undeniable risk of bias. In the debt collection sector alone, this could increase debtor harassment and flood court systems with new legal cases. If software learns and perpetuates racial disparities in debt collection, different groups and regions could easily be unfairly targeted.

In the US in 2021, the Consumer Financial Protection Bureau (CFPB) finalised a rule prohibiting the use of threats or harassment in debt collection, but the body has not yet addressed the use of AI or machine learning specifically. A spokesperson for the CFPB recently stated that the agency is monitoring the use of AI in collections, emphasising the need for companies using AI to comply with consumer finance laws.

Several firms are already succeeding with AI within the collections industry. For example, Skit.ai provides AI-driven voice tools that automate consumer conversations, handling inquiries, providing guidance about due balances and payment pathways, and assisting with dispute handling and settlement negotiations. 

The company’s CEO and founder, Sourabh Gupta, has stressed the importance of using AI ethically and responsibly, advocating the implementation of rigorous filters to ensure compliance with regulations and mitigate risk of bias.

Speaking to GRC World Forums recently, behavioural data science pioneer Ganna Pogrebna advised:

“By capturing a wide range of perspectives and experiences, organizations can reduce the likelihood of bias being encoded into the AI models. Regular audits of AI systems for bias and fairness are crucial in maintaining the integrity of these technologies. These audits can help identify any biases that may have crept into the system, allowing for timely corrections and adjustments.

“Transparency and explainability are also vital in mitigating bias. By making AI systems more transparent and their decision-making processes more explainable, organizations can build trust and facilitate a better understanding of how these technologies operate. Ethical guidelines play a pivotal role in steering the development and deployment of AI technologies,” Ganna adds.
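One simple form the audits Pogrebna describes can take is comparing how often a model flags accounts for aggressive outreach across demographic groups. The sketch below is a hypothetical illustration of that idea; the groups, decisions, and the choice of a disparate-impact ratio as the metric are assumptions for the example, not a prescribed audit standard.

```python
# Hypothetical bias-audit sketch: compute per-group flag rates and the
# ratio of each group's rate to a reference group. Ratios far below 1.0
# would prompt further investigation. Data is invented for illustration.

from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group, flagged: bool) pairs -> rate per group."""
    counts = defaultdict(lambda: [0, 0])       # group -> [flagged, total]
    for group, flagged in decisions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def disparate_impact(rates, reference):
    """Each group's flag rate divided by the reference group's rate."""
    base = rates[reference]
    return {g: r / base for g, r in rates.items()}

decisions = [("A", True), ("A", False), ("A", True), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = flag_rates(decisions)                  # A: 0.75, B: 0.25
print(disparate_impact(rates, reference="A"))  # B at one third of A's rate
```

A disparity like the one above would not by itself prove discrimination, but it is exactly the kind of signal a regular audit surfaces so the model can be corrected, as the quote suggests.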

Know the risks

Few industries will remain untouched by AI as firms increasingly seek to harness technology’s power to speed up processes and drive efficiency. But what associated threats are on the horizon, and how can they be tackled?

Get to the heart of the key issues next month at PrivSec & GRC Connect Chicago, where global experts explore the prospects and pitfalls of machine learning.
