
Can Face Biometrics Prevent AI-Generated Deepfakes?


AI-generated deepfakes on the rise

The emergence of AI-generated deepfakes that attack face biometric systems poses a serious threat to the reliability of identity verification and authentication. Gartner, Inc. predicts that by 2026, 30% of businesses will doubt the dependability of these technologies, which underscores how urgently this new threat needs to be addressed.

Deepfakes, synthetic images that convincingly imitate genuine human faces, are becoming ever more powerful tools in the cybercriminal's toolbox as artificial intelligence advances. They circumvent security mechanisms by exploiting the static nature of the physical attributes used for authentication, such as fingerprints, facial shape, and eye size.

Moreover, the capacity of deepfakes to accurately mimic human speech adds another layer of complexity to the problem, since they can potentially evade voice recognition software as well. This changing environment exposes a serious weakness in biometric security technology and underscores the need for enterprises to reassess the effectiveness of their present security measures.

According to Gartner analyst Akif Khan, significant progress in AI over the past decade has made it possible to create artificial faces that closely mimic genuine ones. Because these deepfakes reproduce the facial features of real individuals, they open up new avenues for cyberattacks and can bypass biometric verification systems.

As Khan notes, these developments have significant ramifications. When organizations cannot determine whether the person attempting access is genuine or merely a highly convincing deepfake, they may quickly begin to doubt the integrity of their identity verification procedures. That ambiguity puts the security protocols many organizations rely on at serious risk.

Deepfakes introduce complex challenges to biometric security measures by exploiting static data—unchanging physical characteristics such as eye size, face shape, or fingerprints—that authentication devices use to recognize individuals. The static nature of these attributes makes them vulnerable to replication by deepfakes, allowing unauthorized access to sensitive systems and data.

Deepfakes and challenges

Additionally, the technology underpinning deepfakes has evolved to replicate human voices with remarkable accuracy. By dissecting audio recordings of speech into smaller fragments, AI systems can recreate a person’s vocal characteristics, enabling deepfakes to convincingly mimic someone’s voice for use in scripted or impromptu dialogue.


MFA and PAD

Because the traits that face biometrics rely on are static and replicable, face matching alone is an increasingly fragile control. Two complementary defenses are multi-factor authentication (MFA), which requires an additional factor beyond the biometric, and presentation attack detection (PAD), which checks that a live person, rather than a photo, replay, or synthetic image, is actually in front of the camera.
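To make the idea behind presentation attack detection concrete, here is a minimal, hedged sketch (not from the original article, and far simpler than production PAD) that uses frame-to-frame motion as a naive liveness cue: a printed photo or a static synthetic face held to the camera shows almost no variation between frames, while a live face does. The camera index, frame count, and motion threshold are illustrative assumptions that would need tuning.

```python
# Toy liveness heuristic illustrating the intuition behind PAD.
# Real presentation attack detection uses trained models, depth sensing,
# and challenge-response cues, not a single motion threshold.
import cv2
import numpy as np

MOTION_THRESHOLD = 2.0   # mean absolute pixel difference (assumed; tune per camera)
FRAMES_TO_SAMPLE = 30    # roughly one second of video at 30 fps

def looks_live(camera_index: int = 0) -> bool:
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        raise RuntimeError("Could not read from camera")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    diffs = []
    for _ in range(FRAMES_TO_SAMPLE):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diffs.append(np.mean(cv2.absdiff(gray, prev_gray)))  # frame-to-frame change
        prev_gray = gray
    cap.release()

    # A static presentation (photo, paused video) yields near-zero average motion.
    return bool(diffs) and float(np.mean(diffs)) > MOTION_THRESHOLD

if __name__ == "__main__":
    print("live" if looks_live() else "possible static spoof")
```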

Top 10 Cutting-Edge Technologies Set to Revolutionize Cybersecurity

 

In the present digital landscape, safeguarding against cyber threats and cybercrimes is a paramount concern due to their increasing sophistication. The advent of new technologies introduces both advantages and disadvantages. 

While these technologies can be harnessed for committing cybercrimes, adept utilization holds the potential to revolutionize cybersecurity. For instance, generative AI, with its ability to learn and generate new content, can be employed to identify anomalies, predict potential risks, and enhance overall security infrastructure. 

The ongoing evolution of technologies will significantly impact cybersecurity strategies as we navigate through the digital realm.

Examining the imminent transformation of cybersecurity, the following ten technologies are poised to play a pivotal role:

1. Quantum Cryptography:
Quantum Cryptography leverages the principles of quantum physics to encrypt and transmit data securely. Quantum key distribution (QKD), a technique for creating and distributing keys in a way that makes any interception detectable, forms the foundation of this technology and offers exceptionally strong protection for sensitive information and communications.

2. Artificial Intelligence (AI):
AI enables machines and systems to perform tasks requiring human-like intelligence, including learning, reasoning, decision-making, and natural language processing. In cybersecurity, AI automation enhances activities such as threat detection, analysis, response, and prevention. Machine learning capabilities enable AI to identify patterns and anomalies, fortifying cybersecurity against vulnerabilities and hazards (a brief illustrative sketch follows this list).

3. Blockchain:
Blockchain technology creates a decentralized, validated ledger of transactions through a network of nodes. Offering decentralization, immutability, and transparency, blockchain enhances cybersecurity by facilitating digital signatures, smart contracts, identity management, and secure authentication.

4. Biometrics:
Biometrics utilizes physical or behavioral traits for identity verification and system access. By enhancing or replacing traditional authentication methods like passwords, biometrics strengthens cybersecurity and prevents fraud, spoofing, and identity theft.

5. Edge Computing:
Edge computing involves processing data closer to its source or destination, reducing latency, bandwidth, and data transfer costs. This technology enhances cybersecurity by minimizing exposure to external systems, thereby offering increased privacy and data control.

6. Zero Trust:
The zero-trust security concept mandates constant verification and validation of every request and transaction, regardless of the source's location within or outside the network. By limiting lateral movement, unwanted access, and data breaches, zero trust significantly improves cybersecurity.

7. Cloud Security:
Cloud security protects data and applications stored on cloud platforms through tools such as encryption, firewalls, antivirus software, backups, disaster recovery, and identity/access management. Offering scalability, flexibility, and efficiency, cloud security contributes to enhanced cybersecurity.

8. 5G Networks:
5G networks, surpassing 4G in speed, latency, and capacity, improve cybersecurity by enabling more reliable and secure data transfer. Facilitating advancements in blockchain, AI, and IoT, 5G networks play a crucial role in cybersecurity, particularly for vital applications like smart cities, transportation, and healthcare.

9. Cybersecurity Awareness:
Cybersecurity awareness, though not a technology itself, is a critical human component. It involves individuals and organizations defending against cyber threats through security best practices, such as strong passwords, regular software updates, vigilance against phishing emails, and prompt event reporting.

10. Cyber Insurance:
Cyber insurance protects against losses and damages resulting from cyberattacks. Organizations facing financial or reputational setbacks due to incidents like ransomware attacks or data breaches can benefit from cyber insurance, which may also incentivize the adoption of higher security standards and procedures.
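To make item 2 above more concrete, the following is a minimal sketch of machine-learning anomaly detection on login telemetry using scikit-learn's IsolationForest. The feature set (hour of day, failed attempts, data volume) and the contamination rate are illustrative assumptions, not a vetted detection model.

```python
# Minimal sketch: flag anomalous logins with an Isolation Forest.
# Features and thresholds here are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" login telemetry: [hour_of_day, failed_attempts, mb_transferred]
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),    # activity clustered around business hours
    rng.poisson(0.2, 500),     # the occasional failed attempt
    rng.normal(50, 15, 500),   # typical data volume in megabytes
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login with many failures and a huge transfer should stand out.
suspicious = np.array([[3, 7, 900]])
print(model.predict(suspicious))  # -1 means "anomaly", 1 means "normal"
```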

Overall, the evolving landscape of cybersecurity is deeply intertwined with technological advancements that both pose challenges and offer solutions. As we embrace the transformative potential of quantum cryptography, artificial intelligence, blockchain, biometrics, edge computing, zero trust, cloud security, 5G networks, cybersecurity awareness, and cyber insurance, it becomes evident that a multi-faceted approach is essential. 

The synergy of these technologies, coupled with a heightened human awareness of cybersecurity best practices, holds the key to fortifying our defenses in the face of increasingly sophisticated cyber threats. As we march forward into the digital future, a proactive integration of these technologies and a commitment to cybersecurity awareness will be paramount in securing our digital domains.

Military Device Containing Thousands of People's Biometric Data Sold on eBay


The U.S. military last used this particular Secure Electronic Enrollment Kit (SEEK II) device more than ten years ago, near Kandahar, Afghanistan. The bulky, black, rectangular piece of technology, used to scan fingerprints and irises, was switched off and put away.

That is, until Matthias Marx, a German security researcher, purchased the device for $68 on eBay in August 2022, a steal at about half the listed price. For less than $70, Marx had unintentionally acquired sensitive, identifying information on thousands of people. The biometric fingerprint and iris scans of 2,632 individuals were accompanied by names, nationalities, photographs, and extensive descriptions, according to a story by The New York Times.

From the war zone to the government equipment sale to the eBay delivery, it seems that not a single Pentagon official had the foresight to remove the memory card from the specific SEEK II that Marx ended up with. The researcher told the Times, “The irresponsible handling of this high-risk technology is unbelievable […] It is incomprehensible to us that the manufacturer and former military users do not care that used devices with sensitive data are being hawked online.”

According to the Times, the majority of the data on the SEEK II was gathered on people whom the American military had designated as terrorists or wanted persons. Others, however, were ordinary citizens who had been detained at Middle Eastern checkpoints, or even people who had aided the U.S. government.

Additionally, all of that information could be used to locate someone, making the devices and the data they hold exceedingly hazardous if they end up in the wrong hands. The Taliban, for instance, may have a personal motive for tracking down and punishing anyone who cooperated with U.S. forces in the region.

Marx and his co-researchers from the Chaos Computer Club, which claims to be the largest hacker group in Europe, purchased the SEEK II and five other biometric capture devices, all from eBay. The group then analyzed the devices for potential flaws, following a 2021 report by The Intercept about military technology seized by the Taliban.

Although Marx had set out from the start to assess the risks associated with biometric devices, he was nonetheless alarmed by the extent of what he discovered. The Times reports that a second SEEK II purchased by the CCC, last used in Jordan in 2013, contained data on U.S. troops, likely gathered during training, in addition to the thousands of individuals identified on the device last used in Afghanistan.

SMS System Now A Long-Gone Era; Google Brings Out A New Update



With the rise of encrypted alternatives to SMS such as WhatsApp, iMessage, and Signal, the SMS system has become a 'throwback to a long-gone era'.

But ironically, that same SMS system has increasingly become the default delivery mechanism for most two-factor authentication (2FA) codes.

The issue is viewed as critical because an SMS is delivered to a phone number with no user authentication: biometric and password protections secure our physical devices, not our phone numbers, which are entirely separate.

This alone opens the door to SIM-swapping, to social-engineering scams that steal those six-digit codes, and to malware that captures and exfiltrates screenshots of incoming messages. For all of those reasons, and a few more, the current advice is to avoid SMS-based 2FA where feasible.
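One widely used alternative to SMS codes is an authenticator app that generates time-based one-time passwords (TOTP) from a secret held on the device, so there is no text message to intercept. The sketch below uses the pyotp library purely as an illustration of that alternative; it is not Google's prompt mechanism described in this post, and the account name and issuer are made up.

```python
# Illustrative TOTP flow with pyotp: codes derive from a device-held secret,
# not from an SMS that can be SIM-swapped or screenshotted.
import pyotp

# Enrollment: generate a secret and hand it to the user's authenticator app
# (normally encoded in a QR code built from the provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the app shows a six-digit code that rotates every 30 seconds;
# the server verifies it against the same shared secret.
code = totp.now()
print("Current code:", code)
print("Verified:", totp.verify(code))
```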

Still, if the user can tie 2FA to the biometric or password security of a known device, that is a huge improvement. Apple does this splendidly, and Google is now moving quickly to make it the default as well.

In a blog post on June 16, Google confirmed “Starting on July 7 we will make phone verification prompts the primary 2-Step Verification (2SV) method for all eligible users.” 

The plan, fundamentally, is to switch Google account holders to this setting rather than letting the majority keep defaulting to an SMS message or voice call.

Yet there is a drawback here too: every device a user is logged into will receive the prompt, which will require some rejigging for families sharing devices. Users who have security keys, meanwhile, won't see a change.

Phone prompt 2FA


If the phone prompt doesn't work for the user, they can fall back to an SMS during the verification process, although Google doesn't recommend this.

Google further explains that this move is both more secure and simpler, “as it avoids requiring users to manually enter a code received on another device.”

In deciding to make this the primary method for 2FA, Google says, “We hope to help [users] take advantage of the additional security without having to manually change settings—though they can still use other methods of 2-Step Verification if they prefer.”

For an attacker to spoof this system, they would need physical access to one of the user's already logged-on devices, where the prompt appears. Users will also be able to audit and remove devices they no longer want to have access to this security option.

And because the prompt hits all logged-on, authorized devices at once, users will know straight away if an attempt is being made to open their account without their knowledge.

Nonetheless, with multi-device access to our platforms on the rise, using an already-authenticated device to verify a new logon is a sound idea, and this step by Google is a strong move in a direction that others should follow as well.

Biometric Data Exposure Vulnerability in OnePlus 7 Pro Android Phones Highlighted TEE Issues


In July 2019, the London-based Synopsys Cybersecurity Research Center discovered a vulnerability in OnePlus 7 Pro devices manufactured by Chinese smartphone maker OnePlus. The flaw, which could have been exploited by hackers to obtain users' fingerprints, was patched by the company with a firmware update pushed in January this year. The flaw was not an easy one to exploit, but the researchers pointed to a bigger underlying problem with TEEs and TAs.

Synopsys CyRC's analysis of the vulnerability, tracked as CVE-2020-7958, states that it could have resulted in the exposure of OnePlus 7 Pro users' biometric data. The critical flaw would have allowed the authors of malicious Android applications with root privileges to obtain users' bitmap fingerprint images from the device's Trusted Execution Environment (TEE), an environment designed to protect sensitive user information by isolating it from the rest of the Android system.

Because it has become increasingly difficult for malicious applications to acquire root privileges on Android devices, exploiting the flaw would have been arduous and, given the complexity of a successful attack, perhaps unlikely. Meanwhile, the fix has been available for months now, protecting users.

However, the issue with Trusted Execution Environments (TEEs) and Trusted Applications (TAs) remains the major highlight of Synopsys's advisory released on Tuesday, “Upon obtaining root privileges in the REE [Rich Execution Environment], it becomes possible to directly communicate with the factory testing APIs exposed by Trusted Applications (TAs) running in the TEE. This attacker invokes a sequence of commands to obtain raw fingerprint images in the REE,” it read.

While explaining the matter, Travis Biehn, principal consultant at Synopsys, said, “Of course, people’s fingerprints don’t usually change. As attackers become successful in retrieving and building large datasets of people’s fingerprints, the usefulness of naïve fingerprint recognition in any application as a security control is permanently diminished,”

“A further possible consequence is that fingerprints become less trustworthy as evidence in our justice systems.”

“...this vulnerability shows that there are challenges with Trusted Execution Environments (TEEs) and Trusted Applications (TAs); these are software components that are opaque to most (by design), expertise is limited, and typically involve long supply chains. These factors together mean there are opportunities for organizations to make a mistake, and hard for security experts to catch at the right time,” he further added.

The flaw would have allowed attackers to reconstruct a targeted user's complete fingerprint and use it to create a counterfeit print, which could then be used to access other devices that rely on biometric authentication.

Major Breach of Biometric Systems Exposes Information of More Than 1 Million People



Israeli security researchers discovered a vulnerability that amounted to a major breach of biometric systems, leaving the data of more than 1 million individuals exposed in an openly accessible database.

The affected systems were said to have been used by the UK Metropolitan Police, defence contractors, and banks for fingerprint and facial recognition purposes.
It all started when the researchers found that the biometric data on Suprema's web-based Biostar 2 platform, which controls access to secure facilities, was unprotected and 'mostly unencrypted.'

The affected database included 27.8 million records, totalling 23 gigabytes of data. A small, simple manipulation of the URL search criteria enabled access to the data and even allowed some of it to be changed.

The researchers had reportedly been searching familiar IP blocks and using them to discover holes in companies' systems that could conceivably lead to data breaches.
“We were able to find plain-text passwords of administrator accounts. The access allows first of all seeing millions of users are using this system to access different locations and see in real time which user enters which facility or which room in each facility, even. We [were] able to change data and add new users,” said security researchers Rotem and Locar.

Although the vulnerability has since been fixed, the story remains newsworthy: the scale of the breach was alarming, as the affected service is currently in use at approximately 1.5 million locations around the world.

A Proposed Amendment to the Chicago Municipal Code That Could Invade Biometric and Location Privacy



With the private sector's use of facial recognition software growing aggressively and exponentially, a proposed amendment to the Chicago municipal code would now allow businesses to deploy this facial recognition technology, according to the Electronic Frontier Foundation (EFF).

The EFF goes on to state that the law would also undermine the Illinois Biometric Information Privacy Act (BIPA), adding that it could “invade biometric and location privacy, and violate a pioneering state privacy law adopted by Illinois a decade ago.”

The EFF went on to add:

"At its core, facial recognition technology is an extraordinary menace to our digital liberties. Unchecked, the expanding proliferation of surveillance cameras, coupled with constant improvements in facial recognition technology, can create a surveillance infrastructure that the government and big companies can use to track everywhere we go in public places, including who we are with and what we are doing.
This system will deter law-abiding people from exercising their First Amendment rights in public places. Given continued inaccuracies in facial recognition systems, many people will be falsely identified as dangerous or wanted on warrants, which will subject them to unwanted—and often dangerous—interactions with law enforcement. This system will disparately burden people of colour, who suffer a higher 'false positive' rate due to additional flaws in these emerging systems."

The proposal seeks to add a "Face Geometry Data" section to the city's municipal code, which would allow businesses to use the controversial face surveillance systems under licensing agreements with the Chicago Police Department.

The law essentially requires organizations to obtain informed, opt-in consent from people before collecting their biometric data or disclosing it to a third party, mandates secure storage of that data, and sets a three-year limit on retention, after which the data must be deleted.

The EFF has likewise opposed the FBI's accumulation of colossal databases of biometric information on Americans. The Next Generation Identification (NGI) system incorporates fingerprints, face recognition, iris scans, and palm prints. The data is gathered during arrests and in non-criminal contexts such as immigration, background checks, and state licensing.

Despite the huge potential that facial recognition, and biometric technology in general, holds for public welfare, national security, and advances in cybersecurity, many have cautioned that the technology should be improved before its use continues, lest it cause serious harm to the people subjected to it.

2 Gujarat Ration Shop Owners Held for Aadhaar Fraud

The Gujarat Police on Friday arrested two owners of government-funded ration shops, or “fair price shops”, in Surat for allegedly committing fraud using stolen biometric data to pilfer subsidised foodgrain.

They reportedly bought software for ₹15,000 that contained a list of stolen Aadhaar numbers, ration card numbers, and thumb impressions.

The accused, Babubhai Boriwal (53) and Sampatlal Shah (61), were arrested on Friday and taken into police custody for five days.

"The state government had in April 2016 launched the Annapurna Yojana under the National Food Security Act-2013,” said Crime Branch Inspector BN Dave. “Fair price shops, renamed as Pandit Deendayal Grahak Bhandar, were computerised so that subsidised food items reached the actual beneficiaries."

He said that under the scheme, shop owners were, through an application called E-FPS, given access to biometric data bank of the beneficiaries to “create an electronic record of beneficiaries availing subsidised grains from their shops.”

According to Inspector Dave, to gain access to the data, the accused used a duplicate version of the software, the source of which is yet unknown.

Boriwal and Shah have reportedly been booked under various sections of the Indian Penal Code (IPC), including sections 406 and 409 (criminal breach of trust) and 467, 468, and 471 (forgery), as well as sections of the Information Technology Act and the Essential Commodities Act.

The police are investigating the source of the duplicate software as well as that of the biometric data.

UIDAI Addresses Security And Privacy Concerns

The issue of protecting citizen data picked up steam again this past week after The Tribune revealed that an unknown WhatsApp number was selling access to the entire Aadhaar database for as little as Rs 500. In an attempt to address security and privacy concerns around the leakage of Aadhaar numbers and data, the Unique Identification Authority of India (UIDAI) on Wednesday introduced two new measures: the Virtual ID and limited KYC.

An Aadhaar card holder can generate a Virtual ID through the UIDAI website and use it for various purposes, including SIM verification, saving them the trouble of sharing the actual 12-digit biometric ID.

The Virtual ID would be a random 16-digit number mapped to the user's Aadhaar record and biometrics. It would give an authorised agency, such as a mobile company, only restricted details like name, address, and photograph, which are sufficient for verification.
The idea of 'limited KYC', in turn, is to give an authorised agency providing a specific service only the need-based, limited details of a user.

From 1 June 2018 it will be obligatory for all organizations and agencies that perform verification to accept the Virtual ID from their clients. Agencies that do not migrate to the new system and offer this additional option to their clients by the stipulated date will face financial disincentives.

"Aadhaar number holder can use Virtual ID in lieu of Aadhaar number whenever authentication or KYC services are performed. Authentication may be performed using the Virtual ID in a manner similar to using Aadhaar number," a UIDAI circular said.

Users can go to the UIDAI website to create their Virtual ID, which will be valid for a set time frame or until the user decides to change it. Since the system-generated Virtual ID is mapped to a person's Aadhaar number at the back end, it removes the need for the user to share the Aadhaar number for validation and reduces the collection of Aadhaar numbers by various organizations.
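As a purely conceptual illustration of that mapping (this is not UIDAI's actual system, API, or data model; every name below is an assumption), the sketch keeps the VID-to-Aadhaar mapping on the back end and lets an agency verify a 16-digit Virtual ID without ever handling the 12-digit Aadhaar number.

```python
# Toy model of the Virtual ID idea: the agency sees only the 16-digit VID,
# while the issuing authority holds the mapping to the Aadhaar number.
# NOT UIDAI's implementation; all names and structures are assumptions.
import secrets

vid_to_aadhaar: dict[str, str] = {}  # back-end mapping held only by the authority

def issue_virtual_id(aadhaar_number: str) -> str:
    """Generate a random 16-digit Virtual ID mapped to an Aadhaar number."""
    vid = "".join(secrets.choice("0123456789") for _ in range(16))
    vid_to_aadhaar[vid] = aadhaar_number
    return vid

def agency_verify(vid: str) -> bool:
    """An agency checks that a VID resolves, without learning the Aadhaar number."""
    return vid in vid_to_aadhaar

# Example: the holder shares the VID with a telecom operator instead of the Aadhaar number.
vid = issue_virtual_id("123412341234")   # dummy 12-digit number for illustration
print(vid, agency_verify(vid))           # True: verification succeeds without exposing Aadhaar
```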

According to the UIDAI, organizations that perform validation will not be permitted to generate the Virtual ID on behalf of the Aadhaar holder. The UIDAI is also instructing all agencies using its authentication and eKYC services to ensure that Aadhaar holders can provide the 16-digit Virtual ID instead of the Aadhaar number within their applications.


Needless to say, the move mainly aims to reinforce the protection and security of Aadhaar data, and it comes amid heightened concerns around the collection and storage of individuals' personal and demographic information.