
Enforce regulations against abuse of deepfake technology

Mr Kwadjo Nyante is the Chief Information Security Officer (CISO) of Aignostics GmbH in Germany

An Artificial Intelligence (AI) and data protection professional, Mr Kwadjo Nyante, has called for strict enforcement of regulations against the abuse of deepfake technology and greater public awareness of it.


According to him, deepfake technology, sometimes likened to ‘Photoshop’, was used to alter or create an image, text or video in the likeness and mannerisms of a person or an idea.

This phenomenon, he explained, tended to propagate false information, cause fear, distort human reality and possibly aid in acts of terrorism and political espionage.

He said that because Ghana’s technical infrastructure (research, technology, AI robotics) and physical infrastructure (organisations, security experts and surveillance cameras) for combating the phenomenon were limited, it was imperative to practise prevention rather than cure.

Mr Kwadjo Nyante made these revelations in a recent interview with The Mirror in Accra on the impact of deepfakes on cyber security, data protection and mis/disinformation, and on how to detect them and protect ourselves.

According to him, technological advancement that rode on AI had made it possible to use powerful techniques in generating deepfakes that might not be noticeable to the naked eye.

He said the phenomenon had gained attention for its major use in creating comic content (memes, satire, parodies), fake news, political propaganda, child or celebrity sexual materials, revenge porn, hoaxes and financial fraud. 

 “Deepfaking is achieved through deep learning, a branch of AI that imitates the patterns of human nerve movements, as well as face and speech recognition,” he explained. 

Speaking on the ethics of the practice, Mr Nyante, who is also the Chief Information Security Officer (CISO) of Aignostics GmbH, an AI-driven pathology research firm in Germany, said generating deepfakes could only be ethical if permission was sought and legal protocols followed with the person or subject of the deepfake, as in the case of harmless entertainment in computer games, movies and ceremonies.

“Also, if the intention was to endear oneself to a loved one or to provide comic relief, it could be considered ethical, provided consent was again sought,” Mr Nyante added.

However, according to Mr Nyante, any form of deepfake material could carry legal consequences such as defamation and criminal lawsuits and, at worst, treason in cases of terrorism, political espionage and other unsanctioned acts.

He added that the main motivators of such acts were money and power. “For some, the sheer intention is to harm, generate controversy or cause civil unrest; a typical example is the ongoing Russia-Ukraine war,” he said.

Touching on data privacy breaches, he said deepfakes were highly invasive because they preyed on data already available on the internet. For example, pictures and videos posted on social media could easily be picked up and manipulated to suit an agenda.

Detection and Protection
In terms of detection, the cyber security expert said deepfake had been refined to the extent that it was almost undetectable except with special tools and software or by a trained eye.

“It is great news that some technology giants and researchers are now developing ways to use other AI software to fight this. The question is, how effective will this software be in curbing the phenomenon?” he asked.

In terms of protection, he explained that there were reverse image search tools that could trace the origin of an image or video to determine whether it was fake or real; InVID and RevEye were among the tools he mentioned.

“There is also what is called forensic analysis of audio, images and videos. This is where such materials are tested for frequency, speed, landmarks, metadata and background as a way to trace their possible location, owner and originality.”
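By way of illustration, one small part of such a forensic check, reading the metadata embedded in a picture, might look like the short Python sketch below; this is not from Mr Nyante's remarks, it assumes the Pillow imaging library is installed, and it uses a hypothetical file name.

    # Illustrative sketch only: dump the EXIF metadata of an image file.
    # Camera model, capture date, editing software and GPS tags are the
    # kinds of clues forensic analysts look at; missing or inconsistent
    # metadata does not prove a fake on its own, but it invites scrutiny.
    from PIL import Image, ExifTags

    def print_exif(path: str) -> None:
        image = Image.open(path)
        exif = image.getexif()
        if not exif:
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # fall back to the numeric id
            print(f"{tag_name}: {value}")

    if __name__ == "__main__":
        print_exif("suspect_photo.jpg")  # hypothetical file name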

Mr Nyante said one way was to extend and strictly enforce the regulations and punishments under Section 208 of the Criminal Code of Ghana, which states that a person who publishes “any statement, rumour or report which is likely to cause fear and alarm to the public or to disturb the public peace, knowing or having reason to believe that the statement, rumour or report is false” commits an offence.

He added that across the European Union (EU), for example, citizens had rights such as the “right to be forgotten”, under which a citizen could, through the proper protocols, request that any information a company held on them be deleted.

“To the layman, a few tips for protection are to activate two-factor authentication (ideally with Google Authenticator) on all social media handles and even emails. This is essential to protect passwords and data from being hacked. Also, do not go around clicking every link, because most of them just steal your data,” he said.

Speaking on the future of Artificial Intelligence, Mr Nyante said it was very bright in terms of advancement of health care, transportation, telecommunication, cyber security, social life and even transnational trade. 

“However, this is just a tool and in the right hands it can make humanity better; in the wrong hands, it is an extremely dangerous weapon,” he said.
Writer’s email: [email protected]
