
IGCF 2024: Deepfakes will cost the world over $10 trillion by 2025

Governments must ensure mandatory digital safety programs are taught in schools

Sharjah: The fear of deepfakes — images, videos, or audio edited or generated using artificial intelligence (AI) tools — has limited people’s confidence in the authenticity of content.

According to recent reports, deepfakes will cost the world over $10 trillion by 2025, creating challenges for several institutions, including hospitals and governments.

On day two of the 13th International Government Communication Forum (IGCF 2024), a power-packed session on the topic ‘Why Resilient Governments are Building Protective Shields with Artificial Intelligence’ highlighted what governments can do to raise awareness and train people to avoid deepfakes.

The panel included Hector Monsegur, founder of a cybersecurity startup, security researcher and Director of Research; Nader Al Gazal, academic and expert on AI and Digital Transformation; Alan Smithson, Co-Founder of Metaverse (Facebook); and Dr Inhyok Cha, Professor at Gwangju Institute of Science and Technology and Deputy President for Global Cooperation (South Korea).

Paradox of technology

The panellists stressed that deepfakes are an interesting phenomenon as anyone can go online and build a deepfake for free. Smithson said: “The paradox of technology is that it is neither good nor bad. It is how you use it. Just because we can create a tool doesn’t mean that it should be used for nefarious purposes. Governments are responsible for ensuring that we use this technology for good.”

Telling stories is one of the defining characteristics of humanity, shared Dr Cha. “Every individual wants to create stories and wants them to be heard, to have influence and to be believable. Technology like deepfakes enables this, and because of its accessibility it allows millions of people to create their own stories in ways that have never been done before. This is where the question of ethics comes into play. Governments must invest heavily in raising awareness and training people to avoid deepfakes.”

Some of the risks of deepfakes include the loss of trust in media and legitimate news sources, which can break down societies; the erosion of public trust in government and democratic institutions; and the use of deepfakes to influence elections, among others.

Al Gazal stressed: “Deepfakes are a very serious matter and must be dealt with effectively. But we can also leverage AI and virtual assistants to help regulate deepfakes despite their rapid growth.”

Monsegur highlighted that, currently, there are no tools available that can detect and mitigate a deepfake attack in real time. “At some point, each and every one of us will be breached by deepfakes. But if you set mitigating controls in place prior to the breach, you’re going to limit the damage,” he said.

Multi-factor authentication

He suggested developing a multi-factor authentication tool for social media channels such as WhatsApp: while on a call with a friend, an individual could tap the friend’s name and send an authentication request to confirm that the person on the other end is real.
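No implementation was described at the forum, but the idea resembles a standard out-of-band challenge-response check. The sketch below is a hypothetical Python illustration, assuming the two contacts established a shared secret when they first connected; the function names and flow are assumptions, not an existing WhatsApp or platform API.

```python
import hashlib
import hmac
import os
import secrets

# Hypothetical sketch of the in-call verification idea: the caller issues a
# one-time challenge, and the contact's device proves possession of a shared
# secret. A deepfaked voice alone cannot produce a valid response.

def issue_challenge() -> bytes:
    """Caller's app generates a one-time random challenge."""
    return secrets.token_bytes(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Contact's device answers with an HMAC over the challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Caller's app checks the response against the expected value."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    secret = os.urandom(32)        # assumed to be set up when the contact was first added
    challenge = issue_challenge()  # the "authentication request" sent during the call
    print(verify(secret, challenge, respond(secret, challenge)))  # True: genuine contact
    print(verify(secret, challenge, os.urandom(32)))              # False: impostor
```

The design choice here is that verification relies on something the impostor cannot synthesise (a key held by the real contact's device), rather than on how the person looks or sounds.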

Dr Cha further explained that while governments focus on protecting the identities of the national leaders from deepfakes, they should also put their efforts towards protecting their most vulnerable citizens, as that is where real damage can occur.

“One of the biggest problems of deepfakes is the use of this technology for sexually nefarious purposes,” he said.

“In some instances, women may not have stringent laws for their rights. One fear women have of the metaverse is that people will behave badly, thinking that the technological freedom they acquire with this new tool will exempt them from behaving well. From a government point of view, every time a new technology emerges, they have the task of educating people that with great freedom comes great responsibility.”

Focus on regulations

Smithson said that the US, EU and China have started to regulate AI. Recently, California lawmakers approved legislation to ban deepfakes in order to protect workers and regulate AI. The law bans election-related deepfakes and requires large social media platforms to remove deceptive material from 120 days before an election until 60 days after it. Campaigns would also be required to disclose publicly whether they are running ads or other materials altered by AI, and AI-generated content would have to be clearly labelled.

Smithson concluded: “The US is also working on the Defending Democracy from Deepfake Deception Act, which would require online platforms to be responsible for the content. An educated public is a powerful force for good. Governments must ensure that mandatory digital safety programs, like workplace safety, are taught in schools and offices.”