Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Technology. Show all posts

Malicious Firefox Extension Steals Verification Tokens: Update to stay safe


Credential theft and browser security were commonly found in Google Chrome browsers due to its wide popularity and usage. Recently, however, cyber criminals have started targeting Mozilla Firefox users. A recent report disclosed a total of eight malicious Firefox extensions that could spy on users and even steal verification tokens.

About the malicious extension

Regardless of the web browser we use, criminals are always on the hunt. Threat actors generally prefer malicious extensions or add-ons; therefore, browser vendors like Mozilla offer background protections and public support to minimize these threats as much as possible. Despite such a measure, on July 4th, the Socket Threat Research Team's report revealed that threat actors are still targeting Firefox users. 

According to Kush Pandya, security engineer at Socket Threat Research Team, said that while the “investigation focuses on Firefox extensions, these threats span the entire browser ecosystem.” However, the particular Firefox investigation revealed a total of eight potentially harmful extensions, including user session hijacking to earn commissions on websites, redirection to scam sites, surveillance via an invisible iframe tracking method, and the most serious: authentication theft.

How to mitigate the Firefox attack threat

Users are advised to read the technical details of the extensions. According to Forbes, Mozilla is taking positive action to protect Firefox users from such threats. The company has taken care of the extensions mentioned in the report. According to Mozilla, the malicious extension impacted a very small number of users; some of the extensions have been shut down. 

“We help users customize their browsing experience by featuring a variety of add-ons, manually reviewed by our Firefox Add-ons team, on our Recommended Extensions page,” said a Firefox spokesperson. To protect the users, Mozilla has disabled “extensions that compromise their safety or privacy, or violate its policies, and continuously works to improve its malicious add-on detection tools and processes.”

How to stay safe?

To protect against these threats, Mozilla has advised users to Firefox users to take further steps, cautioning that such extensions are made by third parties. Users should check the extension rating and reviews, and be extra careful of extensions that need excessive permissions that are not compatible with what the extension claims to do. If any extension seems to be malicious, “users should report it for review,” a Firefox spokesperson said. 

Germany’s Warmwind May Be the First True AI Operating System — But It’s Not What You Expect

 



Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.

Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.

Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.


What exactly is Warmwind?

Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.

Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.

The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.

Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.

Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.

Jena, the company behind Warmwind, says calling it an “AI operating system” is symbolic. The name helps people understand the concept quickly, it’s an operating system, not for people, but for digital AI workers.

While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.

Tallento.ai Crosses 1 Million Users, Disrupts Recruitment with AI-Powered Instant Hiring

 

Tallento.ai, an AI-driven recruitment platform built without external funding, has surpassed 1 million registered professionals and joined forces with more than 5,500 employers nationwide. By fusing artificial intelligence, gamification, and a mobile-first experience, Tallento.ai is transforming the way talent is sourced, verified, and hired—positioning itself as "India's Quick Commerce of Hiring" that compresses job matching timelines from weeks to minutes.

Founded by a team of IIT Guwahati, NIT, and IIM Bangalore alumni, Tallento.ai stands out as a purpose-led alternative to conventional job portals. The platform leverages smart algorithms, gamified application journeys, and an intuitive mobile design to help companies onboard pre-verified candidates faster than ever before.

"We asked a simple question: if groceries and cabs can arrive in 10 minutes, why does hiring still take 30 days?" said Sandeep Boora, Co-founder. "We're solving for speed, relevance, and dignity—especially for young professionals entering the workforce."

Originally focused on recruitment in the EdTech sector, the company now partners with leading brands such as Allen, Aakash Institute, PhysicsWallah, and Byju’s to scale educator and operational hiring across India. Operating with a 120-member team and remaining profitable without raising outside capital, Tallento.ai has demonstrated strong demand and trust among employers and job seekers alike.

Looking ahead, the platform plans to roll out several new features, including:

  • AI Mentorship Modules to deliver personalized upskilling recommendations
  • Video-first Talent Showcases replacing static resumes with dynamic storytelling
  • Voice and regional language search to improve access for blue- and grey-collar workers
  • Emotional wellness support tools to ease job transitions
  • One-click verified hiring backed by AI-generated trust scores

"Hiring is no longer just transactional," said Neha Gopal Thakur, Co-founder. "We are building an ecosystem that empowers individuals, supports mental well-being, and ensures companies find the right talent, faster."

With a clear mission to make hiring accessible in Tier 2 and Tier 3 cities, Tallento.ai is bridging the gap for job seekers in semi-urban regions. The company envisions becoming the backbone of recruitment in fast-growing sectors such as IT, healthcare, BFSI, and retail.

"India's youth need fast, fair, and future-ready hiring," said Tushar Saraf, Co-founder. "Tallento.ai is here to deliver that—without friction, delay, or exclusion." 

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments. 

Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology— it’s helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources, internal files, external APIs, sensor feeds, and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad, for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

NVIDIA Urges Users to Enable ECC to Defend GDDR6 GPUs Against Rowhammer Threats

  

NVIDIA has issued a renewed advisory encouraging customers to activate System Level Error-Correcting Code (ECC) protections to defend against Rowhammer attacks targeting GPUs equipped with GDDR6 memory.

This heightened warning follows recent research from the University of Toronto demonstrating how practical Rowhammer attacks can be on NVIDIA’s A6000 graphics processor.

“We ran GPUHammer on an NVIDIA RTX A6000 (48 GB GDDR6) across four DRAM banks and observed 8 distinct single-bit flips, and bit-flips across all tested banks,” the researchers explained. “The minimum activation count (TRH) to induce a flip was ~12K, consistent with prior DDR4 findings.”

Using these induced bit flips, the researchers performed what they described as the first machine learning accuracy degradation attack leveraging Rowhammer on a GPU.

Rowhammer exploits a hardware vulnerability where repeatedly accessing a memory row can cause adjacent memory cells to change state, flipping bits from 1 to 0 or vice versa. This can lead to denial-of-service issues, corrupted data, or even potential privilege escalation.

System Level ECC combats such risks by introducing redundant bits that can automatically detect and correct single-bit memory errors, ensuring data remains intact.

NVIDIA emphasized that enabling ECC is particularly critical for workstation and data center GPUs, which handle sensitive workloads like AI training and inference, to prevent serious computational errors.

The company’s security bulletin confirmed that researchers “showed a potential Rowhammer attack against an NVIDIA A6000 GPU with GDDR6 Memory” in scenarios where ECC had not been turned on.

The GPUHammer technique developed by the academic team successfully induced bit flips despite GDDR6’s higher latency and faster refresh rates, which generally make Rowhammer attacks more challenging compared to older DDR4 memory.

Researcher Gururaj Saileshwar told BleepingComputer that their demonstration could drop an AI model’s accuracy from 80% to below 1% with just a single bit flip on the A6000.

In addition to the RTX A6000, NVIDIA strongly recommends enabling ECC on the following GPU product lines:

Data Center GPUs:
  • Ampere: A100, A40, A30, A16, A10, A2, A800
  • Ada: L40S, L40, L4
  • Hopper: H100, H200, GH200, H20, H800
  • Blackwell: GB200, B200, B100
  • Turing: T1000, T600, T400, T4
  • Volta: Tesla V100, Tesla V100S
Workstation GPUs:
  • Ampere RTX: A6000, A5000, A4500, A4000, A2000, A1000, A400
  • Ada RTX: 6000, 5000, 4500, 4000, 4000 SFF, 2000
  • Blackwell RTX PRO (latest workstation line)
  • Turing RTX: 8000, 6000, 5000, 4000
  • Volta: Quadro GV100
Embedded/Industrial:
  • Jetson AGX Orin Industrial
  • IGX Orin
Newer GPUs—including Blackwell RTX 50 Series, Blackwell Data Center chips, and Hopper Data Center GPUs—feature built-in on-die ECC protection that requires no manual configuration.

To verify whether ECC is active, administrators can use an out-of-band method through the Baseboard Management Controller (BMC) and Redfish API to check the “ECCModeEnabled” status. NVIDIA’s NSM Type 3 and SMBPBI tools also allow ECC configuration, but these require NVIDIA Partner Portal access.

Alternatively, ECC can be checked or enabled in-band using the nvidia-smi command-line tool from the system CPU.

Saileshwar noted that enabling these safeguards could reduce machine learning inference performance by around 10% and reduce available memory capacity by 6.5% across workloads.

While Rowhammer remains a significant security concern, its exploitation in real-world scenarios is complex. An attack requires highly specific conditions, intensive memory access, and precise control, making it difficult to carry out reliably, especially in production environments.

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch

CISA confirms bug exploit

The US Cybersecurity & Infrastructure Security Agency (CISA) confirms active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777 in Citrix NetScaler ADC and Gateway. It has given federal parties one day to patch the bugs. This unrealistic deadline for deploying the patches is the first since CISA issued the Known Exploited Vulnerabilities (KEV) catalog, highlighting the severity of attacks abusing the security gaps. 

About the critical vulnerability

CVE-2025-5777 is a critical memory safety bug (out-of-bounds memory read) that gives hackers unauthorized access to restricted memory parts. The flaw affects NetScaler devices that are configured as an AAA virtual server or a Gateway. Citrix patched the vulnerabilities via the June 17 updates. 

After that, expert Kevin Beaumont alerted about the flaw’s capability for exploitation if left unaddressed, terming the bug as ‘CitrixBleed 2’ because it shared similarities with the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.

What is the CitrixBleed 2 exploit?

According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”

The rise of exploits

During that time, experts could not spot the signs of active exploitation. Soon, the threat actors started to exploit the bug on a larger scale, and after the attack, they became active on hacker forums, “discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability,” according to Bleeping Computers. 

Hackers showed interest in how to use the available exploits in attacks effectively. The hackers have become more active, and various exploits for the bug have been published.

Now that CISA has confirmed the widespread exploitation of CitrixBleed 2 in attacks, threat actors may have developed their exploits based on the recently released technical information. CISA has suggested to “apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

A security bug in a stealthy Android spyware operation, “Catwatchful,” has exposed full user databases affecting its 62,000 customers and also its app admin. The vulnerability was found by cybersecurity expert Eric Daigle reported about the spyware app’s full database of email IDs and plaintext passwords used by Catwatchful customers to access stolen data from the devices of their victims. 

Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. A few records date back to 2018. The leaked database also revealed the identity of the Catwatchful admin called Omar Soca Char.

The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.

About Catwatchful

Catwatchful is a spyware that pretends to be a child monitoring app, claiming to be “invisible and can not be detected,” while it uploads the victim’s data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location data, victims’ photos, and messages.  The app can also track live ambient audio from the device’s mic and access the phone camera (both front and rear).

Catwatchful and similar apps are banned on app stores, and depend on being downloaded and deployed by someone having physical access to a victim’s phone. These apps are famous as “stalkerware” or “spouseware” as they are capable of unauthorized and illegal non-consensual surveillance of romantic partners and spouses. 

Rise of spyware apps

The Catwatchful incident is the fifth and latest in this year’s growing list of stalkerware scams that have been breached, hacked, or had their data exposed. 

How was the spyware found?

Daigle has previously discovered stalkerware exploits. Catwatchful uses a custom-made API, which the planted app uses to communicate to send data back to Catwatchful servers. The stalkerware also uses Google Firebase to host and store stolen data. 

According to Techradar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch passwords, use passkeys

Microsoft and Google users, in particular, have been warned about ditching passwords for passkeys. Passwords are easy to steal and can unlock your digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users. Google, too, has warned that most of its users will have to add passkeys to their accounts. 

What are passkeys?

Instead of a username and password, passkeys use our device security to log into our account. This means that there is no password to hack and no two-factor authentication codes to bypass, making it phishing-resistant.

At the same time, the Okta team warned that it found threat actors exploiting v0, an advanced GenAI tool made by Vercelopens, to create phishing websites that mimic real sign-in webpages

Okta warns users to not use passwords

A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes. 

According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”

Why are passwords not safe?

It is shocking how easy a login webpage can be mimicked. Users should not be surprised that today’s cyber criminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.

According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”

Users are advised to add passkeys to their accounts if available and stop using passwords when signing in to their accounts. Users should also ensure that if they use passwords, they should be long and unique, and not backed up by SMS 2-factor authentication. 

Microsoft Phases Out Password Autofill in Authenticator App, Urges Move to Passkeys for Stronger Security

 

Microsoft is ushering in major changes to how users secure their accounts, declaring that “the password era is ending” and warning that “bad actors know it” and are “desperately accelerating password-related attacks while they still can.”

These updates, rolling out immediately, impact the Microsoft Authenticator app. Previously, the app let users securely store and autofill passwords on apps and websites you visit on your phone. However, starting this month, “you will not be able to use autofill with Authenticator.”

A more significant shift is just weeks away. “From August,” Microsoft cautions, “your saved passwords will no longer be accessible in Authenticator.” Users have until August 2025 to transfer their stored passwords elsewhere, or risk losing access altogether. As the company emphasized, “any generated passwords not saved will be deleted.”

These moves are part of Microsoft’s broader initiative to phase out traditional passwords in favor of passkeys. The tech giant, alongside Google and other industry leaders, points out that passwords represent a major security vulnerability. Despite common safeguards like two-factor authentication (2FA), account credentials can still be intercepted or compromised.

Passkeys, by contrast, bind account access to device-level security, requiring biometrics or a PIN to log in. This means there’s no password to steal, phish, or share. The FIDO Alliance explains: “passkeys are phishing resistant and secure by design. They inherently help reduce attacks from cybercriminals such as phishing, credential stuffing, and other remote attacks. With passkeys there are no passwords to steal and there is no sign-in data that can be used to perpetuate attacks.”

For users currently relying on Authenticator’s password storage, Microsoft advises moving credentials to the Edge browser or exporting them to another password manager. But more importantly, this is a chance to upgrade your key accounts to passkeys.

Authenticator will continue to support passkeys going forward. Microsoft advises: “If you have set up Passkeys for your Microsoft Account, ensure that Authenticator remains enabled as your Passkey Provider. Disabling Authenticator will disable your passkeys.”

The Critical Role of Proxy Servers in Modern Digital Infrastructure

In order to connect an individual user or entire network to the broader internet, a proxy server serves as an important gateway that adds a critical level of protection to the broader internet at the same time. In order to facilitate the connection between end users and the online resources they access, proxy servers act as intermediaries between them. 

They receive requests from the user for web content, obtain the information on their behalf, and forward the information to the client. As a result of this process, not only is network traffic streamlined, but internal IP addresses can be hidden, ensuring that malicious actors have a harder time targeting specific devices directly. 

By filtering requests and responses, proxy servers play a vital role in ensuring the safety of sensitive information, ensuring the enforcement of security policies, and ensuring the protection of privacy rights. 

The proxy server has become an indispensable component of modern digital ecosystems, whether it is incorporated into corporate infrastructures or used by individuals seeking anonymity when conducting online activities. As a result of their ability to mitigate cyber threats, regulate access, and optimize performance, businesses and consumers alike increasingly rely on these companies in order to maintain secure and efficient networks.

Whether it is for enterprises or individuals, proxy servers have become a crucial asset, providing a versatile foundation for protecting data privacy, reinforcing security measures, and streamlining content delivery, offering a variety of advantages for both parties. In essence, proxy servers are dedicated intermediaries that handle the flow of internet traffic between a user's device and external servers, in addition to facilitating the flow of information between users and external servers. 

It is the proxy server that receives a request initiated by an individual—like loading a web page or accessing an online service—first, then relays the request to its intended destination on that individual's behalf. In the remote server, a proxy is the only source of communication with the remote server, as the remote server recognizes only the proxy's IP address and not the source's true identity or location. 

In addition to masking the user's digital footprint, this method adds a substantial layer of anonymity to the user's digital footprint. A proxy server not only hides personal details but also speeds up network activity by caching frequently requested content, filtering harmful or restricted content, and controlling bandwidth. 

Business users will benefit from proxy services since they are able to better control their web usage policies and will experience a reduction in their exposure to cyber threats. Individuals will benefit from proxy services because they can access region-restricted resources and browse more safely. 

Anonymity, performance optimization, and robust security have all combined to become the three most important attributes associated with proxy servers, which allow users to navigate the internet safely and efficiently, no matter where they are. It is clear from the definition that proxy servers and virtual private networks (VPNs) serve the same purpose as intermediaries between end users and the broader Internet ecosystem, but that their scope, capabilities, and performance characteristics are very different from one another. 

As the name suggests, proxy servers are primarily created to obscure a user's IP address by substituting it with their own, thus enabling users to remain anonymous while selectively routing particular types of traffic, for example, web browser requests or application data. 

Proxy solutions are targeted towards tasks that do not require comprehensive security measures, such as managing content access, bypassing regional restrictions, or balancing network loads, so they are ideal for tasks requiring light security measures. By contrast, VPNs provide an extremely robust security framework by encrypting all traffic between an individual's computer and a server, thus providing a much more secure connection. 

Because VPNs protect sensitive data from interception or surveillance, they are a great choice for activities that require heightened privacy, such as secure file transfers and confidential communication, since they protect sensitive data from interception or surveillance. While the advanced encryption is used to strengthen VPN security, it can also cause latency and reduce connection speeds, which are not desirable for applications that require high levels of performance, such as online gaming and media streaming. 

Proxy servers are straightforward to operate, but they are still highly effective in their own right. A device that is connected to the internet is assigned a unique Internet Protocol (IP) address, which works a lot like a postal address in order to direct any online requests. When a user connects to the internet using a proxy, the user’s device assumes that the proxy server’s IP address is for all outgoing communications. 

A proxy then passes the user’s request to the target server, retrieves the required data, and transmits the data back to the user’s browser or application after receiving the request. The originating IP address is effectively concealed with this method, minimizing the chance that the user will be targeted, tracked, profiled, or tracked through this method. 

Through masking network identities and selectively managing traffic, proxy servers play a vital role in maintaining user privacy, ensuring compliance, and enabling secure, efficient access to online resources. It has been shown that proxy servers have a number of strategic uses that go far beyond simply facilitating web access for businesses and individuals. 

Proxy servers are effective tools in both corporate and household settings for regulating and monitoring internet usage and control. For example, businesses can configure proxy servers to limit employee access to non-work related websites during office hours, while parents use similar controls to limit their children from seeing inappropriate content. 

 As part of this oversight feature, administrators can log all web activity, enabling them to monitor browsing behaviour, even in instances where specific websites are not explicitly blocked. Additionally, proxy servers allow for considerable bandwidth optimisation and faster network performance in addition to access management. 

The caching of frequently requested websites on proxies reduces redundant data transfers and speeds up load times whenever a large number of people request the same content at once. Doing so not only conserves bandwidth but also allows for a smoother, more efficient browsing experience. Privacy remains an additional compelling advantage as well. 

When a user's IP address is replaced with their own by a proxy server, personal information is effectively masked, and websites are not able to accurately track users' locations or activities if they don't know their IP address. The proxy server can also be configured to encrypt web requests, keeping sensitive data safe from interception, as well as acting as a gatekeeper, blocking access to malicious domains and reducing cybersecurity threats. 

They serve as gatekeepers, thereby reducing the risk of data breaches. The proxy server allows users, in addition to bypassing regional restrictions and censorship, to route traffic through multiple servers in different places. This allows individuals to access resources that would otherwise not be accessible while maintaining anonymity. In addition, when proxies are paired up with Virtual Private Networks (VPN), they make it even more secure and controlled to connect to corporate networks. 

In addition to forward proxies, which function as gateways for internal networks, they are also designed to protect user identities behind a single point of entry. These proxies are available in a wide variety of types, each of which is suited to a specific use case and specific requirements. 

It is quite common to deploy transparent proxies without the user's knowledge to enforce policies discreetly. They deliver a similar experience to direct browsing and are often deployed without the user's knowledge. The anonymous proxy and the high-anonymity proxy both excel at concealing user identities, with the former removing all identifying information before connecting to the target website. 

By using distortion proxies, origins are further obscured by giving false IP addresses, whereas data centre proxies provide fast, cost-effective access with infrastructure that is not dependent upon an internet service provider. It is better to route traffic through authentic devices instead of public or shared proxies but at a higher price. Public or shared proxies are more economical, but they suffer from performance limitations and security issues. 

SSL proxies are used to encrypt data for secure transactions and improve search rankings, while rotating proxies assign dynamic IP addresses for the collection of large amounts of data. In addition, reverse proxies provide additional security and load distribution to web servers by managing incoming traffic. Choosing the appropriate proxy means balancing privacy, speed, reliability, and cost. It is important to note that many factors need to be taken into account when choosing a proxy. 

The use of forward proxies has become significantly more prevalent since web scraping operations combined them with distributed residential connections, which has resulted in an increasing number of forward proxies being created. In comparison to sending thousands of requests for data from a centralized server farm that might be easily detected and blocked, these services route each request through an individual home device instead. 

By using this strategy, it appears as if the traffic originated organically from private users, rather than from an organized scraping effort that gathered vast amounts of data from public websites in order to generate traffic. This can be achieved by a number of commercial scraping platforms, which offer incentives to home users who voluntarily provide a portion of their bandwidth via installed applications to scrape websites. 

On the other hand, malicious actors achieve a similar outcome by installing malware on unwitting devices and exploiting their network resources covertly. As part of regulatory mandates, it is also common for enterprises or internet service providers to implement transparent proxies, also known as intercepting proxies. These proxies quietly record and capture user traffic, which gives organisations the ability to track user behaviour or comply with legal requirements with respect to browsing habits. 

When advanced security environments are in place, transparent proxies are capable of decrypting encrypted SSL and TLS traffic at the network perimeter, thoroughly inspecting its contents for concealed malware, and then re-encrypting the data to allow it to be transmitted to the intended destination. 

A reverse proxy performs an entirely different function, as it manages inbound connections aimed at the web server. This type of proxy usually distributes requests across multiple servers as a load-balancing strategy, which prevents performance bottlenecks and ensures seamless access for end users, especially during periods of high demand. This type of proxy service is commonly used for load balancing. 

In the era of unprecedented volumes of digital transactions and escalating threat landscape, proxy servers are more than just optional safeguards. They have become integral parts of any resilient network strategy that is designed for resilience. A strategic deployment of proxy servers is extremely important given that organizations and individuals are moving forward in an environment that is shaped by remote work, global commerce, and stringent data protection regulations, and it is imperative to take proper consideration before deploying proxy servers. 

The decision-makers of organizations should consider their unique operational needs—whether they are focusing on regulatory compliance, optimizing performance, or gathering discreet intelligence—and choose proxy solutions that align with these objectives without compromising security or transparency in order to achieve these goals. 

As well as creating clear governance policies to ensure responsible use, prevent misuse, and maintain trust among stakeholders, it is crucial to ensure that these policies are implemented. Traditionally, proxy servers have served as a means of delivering content securely and distributing traffic while also fortifying privacy against sophisticated tracking mechanisms that make it possible for users to operate in the digital world with confidence. 

As new technologies and threats continue to develop along with the advancement of security practices, organizations and individuals will be better positioned to remain agile and protect themselves as technological advancements and threats alike continue to evolve.

Microsoft Defender for Office 365 Will Now Block Email Bombing Attacks



Microsoft Defender for Office 365 Will Now Block Email Bombing Attacks

Microsoft Defender for Office 365, a cloud-based email safety suite, will automatically detect and stop email-bombing attacks, the company said.  Previously known as Office 365 Advanced Threat Protection (Office 365 ATP), Defender for Office 365 safeguards businesses operating in high-risk sectors and dealing with advanced threat actors from harmful threats originating from emails, collaboration tools, and links. 

"We're introducing a new detection capability in Microsoft Defender for Office 365 to help protect your organization from a growing threat known as email bombing," Redmond said in a Microsoft 365 message center update. These attacks flood mailboxes with emails to hide important messages and crash systems. The latest ‘Mail Bombing’ identification will spot and block such attempts, increasing visibility for real threats. 

About the new feature

The latest feature was rolled out in June 2025, toggled as default, and would not require manual configuration. Mail Bombing will automatically send all suspicious texts to the Junk folder. It is now available for security analysts and admins in Threat Explorer, Advanced Hunting, the Email entity page, the Email summary panel, and the Email entity page. 

About email bombing attacks

In mail bombing campaigns, the attackers spam their victims’ emails with high volumes of messages. This is done by subscribing users to junk newsletters and using specific cybercrime services that can send thousands or tens of thousands of messages within minutes. The goal is to crash email security systems as a part of social engineering attacks, enabling ransomware attacks and malware to extract sensitive data from victims. These attacks have been spotted for over a year, and used by ransomware gangs. 

Mode of operation

BlackBast gang first used email bombing to spam their victims’ mailboxes. The attackers would later follow up and pretend to be IT support teams to lure victims into allowing remote access to their devices via AnyDesk or the default Windows Quick Assist tool. 

After gaining access, threat actors install malicious tools and malware that help them travel laterally through the corporate networks before installing ransomware payloads.

A Simple Guide to Launching GenAI Successfully

 


Generative AI (GenAI) is one of today’s most exciting technologies, offering potential to improve productivity, creativity, and customer service. But for many companies, it becomes like a forgotten gym membership, enthusiastically started, but quickly abandoned.

So how can businesses make sure they get real value from GenAI instead of falling into the trap of wasted effort? Success lies in four key steps: building a strong plan, choosing the right partners, launching responsibly, and tracking the impact.


1. Set Up a Strong Governance Framework

Before using GenAI, businesses must create clear rules and processes to use it safely and responsibly. This is called a governance framework. It helps prevent problems like privacy violations, data leaks, or misuse of AI tools.

This framework should be created by a group of leaders from different departments—legal, compliance, cybersecurity, data, and business operations. Since AI can affect many parts of a company, it’s important that leadership supports and oversees these efforts.

It’s also crucial to manage data properly. Many companies forget to prepare their data for AI tools. Data should be accurate, anonymous where needed, and well-organized to avoid security risks and legal trouble.

Risk management must be proactive. This includes reducing bias in AI systems, ensuring data security, staying within legal boundaries, and preventing misuse of intellectual property.


2. Choose Technology Partners Carefully

GenAI tools are not like regular software. When selecting a provider, businesses should look beyond basic features and check how the provider handles data, ownership rights, and ethical practices. A lack of transparency is a warning sign.

Companies should know where their data is stored, who can access it, and who owns the results produced by the AI tool. It’s also important to avoid being trapped in systems that make it difficult to switch vendors later. Always review agreements carefully, especially around copyright and data rights.


3. Launch With Care and Strategy

Once planning is complete, the focus should shift to thoughtful execution. Start with small projects that can demonstrate value quickly. Choose use cases where GenAI can clearly improve efficiency or outcomes.

Data used in GenAI must be organized and secured so that no sensitive information is exposed. Also, employees must be trained to work with these tools effectively. Skills like writing proper prompts and verifying AI-generated content are essential.

To build trust and encourage adoption, leaders should clearly explain why GenAI is being introduced and how it will help, not replace employees. GenAI should support teams and improve their performance, not reduce jobs.


4. Track Success and Business Value

Finally, companies need to measure the results. Success isn’t just about using the technology— it’s about making a real impact.

Set clear goals and use simple metrics, like productivity improvements, customer feedback, or employee satisfaction. GenAI should lead to better outcomes for both teams and clients, not just technical performance.

To move beyond the GenAI buzz and unlock real value, companies must approach it with clear goals, responsible use, and long-term thinking. With the right foundation, GenAI can be more than just hype, it can be a lasting asset for innovation and growth.



California Residents Are Protesting Against Waymo Self-Driving Cars

 

Even though self-driving cars are becoming popular worldwide, not everyone is happy about it. In Santa Monica, California, some people who were unfortunate enough to live near the Waymo depot found a terrible side effect of Alphabet's self-driving cars: their incredibly annoying backing noise.

Local laws mandate that autonomous cars make a sound anytime they backup, which is something that frequently occurs when Waymos return to base to recharge, as the Los Angeles Times recently revealed. 

"It is bothersome. This neighbourhood already has a lot of noise, and I don't think it's fair to add another level to it," a local woman told KTLA. "I know some people have been kept up at night, and woken up in the middle of the night.” 

Using a technique pioneered by activist group Safe Street Rebel in 2023, when self-driving cars first showed up on San Francisco streets, Santa Monica citizens blocked Waymos with traffic cones, personal automobiles, and even their own bodies. 

Dubbed "coning," the seemingly petty tactic evolved after the California Public Utilities Commission decided 3-1 to allow Waymo and Cruise to operate self-driving vehicles in California neighbourhoods at all times. Prior to the vote, public comments lasted more than six hours, indicating that the proposal was not well received.

Safe Street Rebel conducted a period of direct action known as "The Week of the Cone" in protest of the plan to grant Cruise and Waymo complete control over public streets. The California DMV quickly revoked Cruise's licence to drive in the state as a result of the group's campaign, in addition to multiple mishaps involving autonomous vehicles. 

"This is a clear victory for direct action and the power of people getting in the street," Safe Street Rebel noted in a statement. "Our shenanigans made this an international story and forced a spotlight on the many issues with [self-driving cars].”

However, Waymo isn't going down without a fight back in Santa Monica. According to the LA Times, the company has sued several peaceful protesters and has even called the local police to drive out angry residents.

"My client engaged in justifiable protest, and Waymo attempted to obtain a restraining order against him, which was denied outright," stated Rebecca Wester, an attorney representing a local resident. 

The most recent annoyance Waymo has imposed on its neighbours is the backup noise. Residents of San Francisco reported hearing horns blaring from a nearby Waymo depot nine months ago. This happens when dozens of the cars congest the small parking lot.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs— a phenomenon known as “hallucination”— which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

How AI Impacts KYC and Financial Security

How AI Impacts KYC and Financial Security

Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the integrity of identity-verification frameworks that help counter-terrorism financing (CTF) and anti-money laundering (AML) systems.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street’s FINRA has warned that deepfake audio and video scams can cause losses of $40 billion by 2027 in the finance sector.

Biometric safety measures do not work anymore. A 2024 Regula research revealed that 49% businesses throughout industries such as fintech and banking have faced fraud attacks using deepfakes, with average losses of $450,000 per incident. 

As these numbers rise, it becomes important to understand how deepfake invasion can be prevented to protect customers and the financial industry globally. 

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank reported over 1,100 attempts to escape its digital KYC loan-application process within 3 months, cybersecurity firm Group-IB reports.

Threat actors teamed AI-powered face-swapping with virtual-camera tools to imitate the bank’s  liveness-detection controls, despite the bank’s “robust, multi-layered security measures." According to Forbes, the estimated losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”

The AI-driven face-swapping tools allowed actors to replace the target’s facial features with those of another person, allowing them to exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How does the deepfake KYC fraud work

Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. The date is used to mimic identities. 

After data acquisition, scammers use deepfake technology to change identity documents, swapping photos, modifying details, and re-creating entire ID to escape KYC checks.

Threat actors then use virtual cameras and prerecorded deepfake videos, helping them avoid security checks by simulating real-time interactions. 

This highlights that traditional mechanisms are proving to be inadequate against advanced AI scams. A study revealed that every 5 minutes, one deepfake attempt was made. Only 0.1 of people could spot deepfakes. 

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many different types of media, including text, image, audio, music, and even video. As opposed to conventional machine learning models, which are based on executing specific tasks, generative AI systems learn patterns and structures from large datasets and are able to produce outputs that aren't just original, but are sometimes extremely realistic as well. 

It is because of this ability to simulate human-like creativity that generative AI has become an industry leader in technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy. As generative AI tools reshape content creation workflows, they produce compelling graphics and copy at scale in a way that transforms the way content is created. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI, on the other hand, is unpredictable and adaptive, which means that organisations are able to explore new ideas and achieve efficiencies that traditional systems are unable to offer. There is an increasing need for enterprises to understand the capabilities and the risks of this powerful technology as adoption accelerates. 

Understanding these capabilities has become an essential part of staying competitive in a digital world that is rapidly changing. In addition to reproducing human voices and creating harmful software, generative artificial intelligence is rapidly lowering the barriers for launching highly sophisticated cyberattacks that can target humans. There is a significant threat from the proliferation of deepfakes, which are realistic synthetic media that can be used to impersonate individuals in real time in convincing ways. 

In a recent incident in Italy, cybercriminals manipulated and deceived the Defence Minister Guido Crosetto by leveraging advanced audio deepfake technology. These tools demonstrate the alarming ability of such tools for manipulating and deceiving the public. Also, a finance professional recently transferred $25 million after being duped into transferring it by fraudsters using a deepfake simulation of the company's chief financial officer, which was sent to him via email. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Cybercriminals are further able to weaponise these messages through automation, as this enables them to create and distribute a huge volume of tailored lures that are tailored to match the profile and behaviour of each target dynamically. Using the power of AI to generate large language models (LLMs), attackers have also revolutionised malicious code development. 

A large language model can provide attackers with the power to design ransomware, improve exploit techniques, and circumvent conventional security measures. Therefore, organisations across multiple industries have reported an increase in AI-assisted ransomware incidents, with over 58% of them stating that the increase has been significant.

It is because of this trend that security strategies must be adapted to address threats that are evolving at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. While it has been demonstrated that employee awareness remains an essential defence, studies have indicated that only 24% of organisations have implemented continuous cyber awareness programs, which is a significant amount. 

As companies become more sophisticated in their security efforts, they should update training initiatives to include practical advice on detecting hyper-personalised phishing attempts, detecting subtle signs of deepfake audio and identifying abnormal system behaviours that can bypass automated scanners in order to protect themselves from these types of attacks. Providing a complement to human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

In order to protect against AI-driven phishing campaigns, DuckDuckGoose Suite, for example, uses behavioural analytics and threat intelligence to prevent AI-based phishing campaigns from being initiated. Tessian, on the other hand, employs behavioural analytics and threat intelligence to detect synthetic media. As well as disrupting malicious activity in real time, these technologies also provide adaptive coaching to assist employees in developing stronger, instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will have the capacity to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. It was raised by OpenAI that when a judge ordered ChatGPT to keep all user interactions, including deleted chats, they might inadvertently violate their privacy commitments if they were forced to keep data that should have been wiped out.

AI companies face many challenges when delivering enterprise services, and this dilemma highlights the challenges that these companies face. OpenAI and Anthropic are platforms offering APIs and enterprise products that often include privacy safeguards; however, individuals using their personal accounts are exposed to significant risks when handling sensitive information that is about them or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and they should not upload proprietary or confidential materials unless specifically authorised by the company. Another critical concern is the phenomenon of AI hallucinations that have occurred in recent years. This is because large language models are constructed to predict language patterns rather than verify facts, which can result in persuasively presented, but entirely fictitious content.

As a result of this, there have been several high-profile incidents that have resulted, including fabricated legal citations in court filings, as well as invented bibliographies. It is therefore imperative that human review remains part of professional workflows when incorporating AI-generated outputs. Bias is another persistent vulnerability.

Due to the fact that artificial intelligence models are trained on extensive and imperfect datasets, these models can serve to mirror and even amplify the prejudices that exist within society as a whole. As a result of the system prompts that are used to prevent offensive outputs, there is an increased risk of introducing new biases, and system prompt adjustments have resulted in unpredictable and problematic responses, complicating efforts to maintain a neutral environment. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, thus causing outputs that are inaccurate, offensive, or harmful. Additionally, user error remains an important factor as well. Instances such as unintentionally sharing private AI chats or recording confidential conversations illustrate just how easy it is to breach confidentiality, even with simple mistakes.

It has also been widely reported that intellectual property concerns complicate the landscape. Many of the generative tools have been trained on copyrighted material, which has raised legal questions regarding how to use such outputs. Before deploying AI-generated content commercially, companies should seek legal advice. 

As AI systems develop, even their creators are not always able to predict the behaviour of these systems, leaving organisations with a challenging landscape where threats continue to emerge in unexpected ways. However, the most challenging risk is the unknown. The government is facing increasing pressure to establish clear rules and safeguards as artificial intelligence moves from the laboratory to virtually every corner of the economy at a rapid pace. 

Before the 2025 change in administration, there was a growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 outlined the appointment of chief AI officers by federal agencies and the development of uniform guidelines for assessing and managing AI risks. As a result of this initiative, a baseline of accountability for AI usage in the public sector was established. 

A change in strategy has taken place in the administration's approach to artificial intelligence since they rescinded the order. This signalled a departure from proactive federal oversight. The future outlook for artificial intelligence regulation in the United States is highly uncertain, however. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

As a result of this measure becoming law, it could effectively halt local and regional governance at a time when AI is gaining a greater influence across practically all industries. Meanwhile, the European Union currently seems to be pursuing a more consistent approach to AI. 

As of March 2024, a comprehensive framework titled the Artificial Intelligence Act was established. This framework categorises artificial intelligence applications according to the level of risk they pose and imposes strict requirements for applications that pose a significant risk, such as those in the healthcare field, education, and law enforcement. 

Also included in the legislation are certain practices, such as the use of facial recognition systems in public places, that are outright banned, reflecting a commitment to protecting the individual's rights. In terms of how AI oversight is defined and enforced, there is a widening gap between regions as a result of these different regulatory strategies. 

Technology will continue to evolve, and to ensure compliance and manage emerging risks effectively, organisations will have to remain vigilant and adapt to the changing legal landscape as a result of this.

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

Wyze Launches VerifiedView Metadata to Enhance Security After Past Data Breaches

 


Wyze’s security cameras and platform have earned praise from CNET reviewers in the past. However, over the last few years, recommendations for the company’s affordable cameras and related security products were tempered by a series of significant security breaches that raised concerns among experts and consumers alike.

More than a year has passed since those incidents, and Wyze has now introduced an advanced security feature called VerifiedView, designed to strengthen protections around user footage.

VerifiedView is a new metadata layer that applies to all content generated by Wyze cameras. Metadata refers to supplementary information attached to photos and videos, such as details about when and where they were captured, which helps systems search, organize, and identify files efficiently.

Wyze’s approach goes a step further. VerifiedView assigns every photo or video a unique identifier—an encrypted version of the user’s Wyze ID—that remains permanently tied to the account. Whenever someone tries to stream or view video through a Wyze account, their account identifier must match the one embedded in the metadata. If there is no match, access is denied. Live viewing functions the same way, ensuring that only the account that initially set up the camera can watch the footage.

While companies often embed metadata for various purposes, “this is the first time I've seen metadata used so clearly to manage video access and keep it from strange eyes.” This innovation is intended to directly address some of the most serious security issues, including past incidents in which unauthorized parties or employees were able to access private camera feeds.

Since the breaches and other security failures, Wyze has implemented several measures to bolster user safety and prevent similar problems. Key improvements include:

  • Automatic activation of two-factor authentication for all users, along with additional tools like OAuth, reCAPTCHA, and login abuse detection.
  • Investment in security resources provided by Amazon Web Services (AWS).
  • Expansion of Wyze’s security team to include more professionals dedicated to reviewing and strengthening code.
  • Regular penetration testing by firms such as Bitdefender, Google MASA, ioXT, and the NCC Group.

The introduction of a comprehensive cybersecurity training program for all employees.

“While I wish Wyze had started with security features like these, the changes are good to see.” For those evaluating options to protect their homes, these upgrades represent meaningful progress in Wyze’s approach to safeguarding customer data and privacy.