The financial services industry needs a fair degree of checks and balances before deploying ChatGPT and other generative AI tools in advisory roles. Image Credit: Shutterstock

Generative AI has shown the potential to deeply impact financial services, taking over tasks that are not limited to support or research but extend into advisory and financial planning.

For such a system to manage users' wealth and create investment portfolios tailored to their needs, confidential and sensitive data must be shared with it; without that data, the output lacks personalization and effectiveness.

Financial planning requires a deep understanding of the investor's actual wealth composition, the identification of their future financial flows and their sources, and the identification of the short-, medium- and long-term goals of the family as a whole and of each of its members.

To do so, the information required may extend to highly sensitive personal data, including health details. The extent of such disclosure raises concerns about the security and confidentiality of the data provided, and about the uses to which it can be put.

The issue is widely felt, but no real action has been taken to address it. While the best-known AI application, ChatGPT, has been blocked in a number of countries, mainly for political reasons, the only country to do so over privacy concerns was Italy.

Italy takes a stance

The Italian privacy and personal data watchdog blocked access because the way the service is delivered raised serious concerns about its compliance with local and European law. We are talking about one of the most advanced data and privacy regulations worldwide, noncompliance with which could carry consequences across the entire territory of the EU.

The Italian regulator thereafter readmitted ChatGPT amid assurances that access rules and the handling of user data would change. But serious concerns remain.

A few aspects will be relatively easy to fix: a detailed disclaimer and agreement on how personal data is collected, stored and used; filters to block access by minors; and the right to access, modify and delete, at any point in time, any data pertaining to the user.

Opening up to manipulation

The real issue is control over the collection of data for purposes not directly linked to the service rendered to the individual, and over the sharing of such data (or analyses of it) with third parties.

Here the threats can be multiple and quite serious, given the sensitivity of the type and volume of data that must be shared to use generative AI in financial services.

The risk is real: such data could end up in the hands of parties who would use it unfairly, leveraging the great power of the new technology and the strong dependency its users may develop. Add the risk of outputs being manipulated to unfairly steer decisions, and extreme caution is clearly warranted.

One thing is for sure: we are at the very beginning of the rollout of services built on generative AI. For the time being, it is wise to refrain from providing sensitive data to systems that have yet to undergo an appropriate audit, and to limit our use to pure research. Better to wait for more precise regulation, management and control of these immensely powerful systems before handing over our most vital data.