Generative AI Threatens Digital Identity Verification, Says Former CTO of Aadhaar

Srikanth Nadhamuni, who served as chief technology officer (CTO) of Aadhaar from 2009 to 2012, believes that the rapid advances in artificial intelligence, particularly generative AI, pose a clear and present danger to digital identity verification. He co-founded the Bangalore-based incubator Khosla Labs with Vinod Khosla and serves as its CEO.

Deepfakes, synthetic media that convincingly mimic real human speech, behaviour, and appearance, pose a serious threat to the trust mechanisms that have been painstakingly built into identification systems over the years. In an increasingly likely future where AI-generated impersonations sow chaos and erode trust in the system, a "proof-of-personhood" verification capability, probably built on a person's biometrics, becomes paramount, Nadhamuni wrote in a LinkedIn post titled "The Future of Digital Identity Verification: In the era of AI Deep Fakes."

Generative AI is giving disinformation a whole new dimension. Text-to-image models such as DALL-E 2, Midjourney, and Stable Diffusion can produce visuals realistic enough to be mistaken for genuine photographs, making it easy to create misleading visual content and further blurring the line between truth and fiction.

Even though the Indian government has stated that it will not regulate artificial intelligence (AI), it has indicated that the upcoming Digital India Act (DIA) will include provisions to address AI-generated disinformation.

“We are not going to regulate AI but we will create guardrails. There will be no separate legislation but a part of DIA will address threats related to high-risk AI,” Union Minister Rajeev Chandrasekhar said. 

The draft hasn't been released yet, so it's unclear how it will address the challenge that generative AI poses to digital identity verification. 

How to identify deepfake images

According to Sandy Fliderman, president, CTO, and founder of Industry FinTech, older fakes were easier to spot thanks to shifts in skin tone, odd blinking patterns, or jerky motions. But the technology has advanced so much that many of the traditional "tells" no longer apply. Today, red flags are more likely to appear as inconsistencies in lighting and shading, which deepfake technology is still working to perfect.

Humans can look for a number of indicators to distinguish authentic photographs from fraudulent ones, such as the following (a rough code sketch of one of these checks, blink-pattern analysis, follows the list): 

  • Irregularities in body parts and skin.
  • Shadows around the eyes.
  • Unorthodox blinking patterns.
  • Unusual glare on spectacles.
  • Unrealistic mouth movements.
  • Lip colour that differs unnaturally from the rest of the face.
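
As a rough illustration of how one of these tells, unusual blinking, can be checked programmatically, the sketch below counts blinks in a video using the eye aspect ratio (EAR) computed from facial landmarks. It assumes the OpenCV, MediaPipe, and SciPy Python packages are available; the landmark indices, the EAR threshold, and the file name suspect_clip.mp4 are illustrative assumptions, not a vetted deepfake detector.

```python
# Rough sketch: count blinks in a video via the eye aspect ratio (EAR).
# Assumes opencv-python, mediapipe and scipy are installed; the threshold,
# landmark indices and input file name are illustrative, not definitive.
import cv2
import mediapipe as mp
from scipy.spatial import distance as dist

# Six MediaPipe Face Mesh landmarks commonly used for the left eye,
# ordered p1..p6 as in the standard EAR formula (an assumption here).
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_CLOSED = 0.21  # eye treated as closed below this value (illustrative)

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path):
    face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        h, w = frame.shape[:2]
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        landmarks = result.multi_face_landmarks[0].landmark
        pts = [(landmarks[i].x * w, landmarks[i].y * h) for i in LEFT_EYE]
        if eye_aspect_ratio(pts) < EAR_CLOSED:
            if not eye_closed:
                blinks += 1       # count the transition from open to closed
                eye_closed = True
        else:
            eye_closed = False
    cap.release()
    face_mesh.close()
    return blinks, frames

if __name__ == "__main__":
    blinks, frames = count_blinks("suspect_clip.mp4")  # hypothetical file
    print(f"{blinks} blinks detected across {frames} frames")
```

A talking-head clip in which the subject blinks far less (or far more) often than the typical 15-20 blinks per minute would be one signal worth a closer look, though on its own it proves nothing.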