
New Gmail Phishing Attack Exploits Login Flow to Steal Credentials

 


Even in today's technologically advanced society, where convenience and connectivity are the norm, cyber threats continue to evolve at an alarming rate. Recent reports indicate that phishing attacks and online scams targeting U.S. consumers are on the rise, with malicious actors increasingly going after login credentials to steal personal and financial information. Those concerns are echoed by the Federal Bureau of Investigation (FBI), which reported that online scams accounted for a staggering $16.6 billion in losses last year, a jump of 33 per cent compared with the year prior.

The scale of the problem is underlined by surveys showing that more than 60 per cent of Americans feel scam attempts are increasing, and nearly one in three report being affected by data breaches on a regular basis. Taken together, these figures make clear that fortifying digital defences against an ever-expanding threat landscape is of the utmost importance.

Phishing itself is not new; however, its evolution over the past few decades has been dramatic. Such scams were once easy to detect, thanks to clumsy emails riddled with spelling errors and awkward greetings like "Dear User." Today's attacks are far more sophisticated: the latest Gmail phishing campaign mimics Google's legitimate login process with alarming accuracy, deceiving even tech-savvy users.

Security researchers have documented thousands of compromised Gmail accounts, with stolen credentials opening the door to a broad range of further intrusions across banking, retail, and social networking sites. Researchers compare such a breach to an intruder entering one's digital home using the rightful owner's key.

A breach of this kind extends well beyond inconvenience and can cause long-lasting financial and personal damage. Investigations show that the campaign relies on deception and the abuse of trusted infrastructure. Fraudulent "New Voice Notification" emails, sent with spoofed sender information, lure victims into clicking through to listen to a supposed voicemail. The attack is launched through a legitimate Microsoft Dynamics marketing platform, which lends it instant credibility and helps it bypass many standard security controls.

Victims are then taken to a CAPTCHA page on horkyrown[.]com, a domain traceable to Pakistan, which lends the process an air of legitimacy before redirecting them to a counterfeit login page that is indistinguishable from Gmail's. Credentials are exfiltrated in real time, allowing the account to be taken over almost immediately. Adding further complexity is the arrival of artificial intelligence in phishing operations.

Using advanced language models, cybercriminals now craft flawless emails, mimic writing styles, and even place convincing voice calls impersonating trusted figures. According to security companies, AI-driven phishing attempts are at least as effective as human-crafted ones, with success rates rising 55 per cent between 2023 and 2025.

Techniques such as metadata spoofing and "Open Graph spoofing" further disguise malicious links, making them almost indistinguishable from safe ones. This new wave of phishing, increasingly personalised, multimodal, and distributed at unprecedented scale, is becoming ever harder to detect.

The FBI, as well as the Cybersecurity and Infrastructure Security Agency (CISA), have already issued warnings about AI-enhanced phishing campaigns targeting Gmail accounts. In one case, Ethereum developer Nick Johnson described receiving a fraudulent "subpoena" email that passed Gmail's authentication checks and looked exactly like a legitimate security alert. In similar attacks, phone calls and emails have been used to harvest recovery codes, enabling full account takeover.

Analysts also found that attackers stole session cookies, allowing them to bypass login screens entirely. Although Google's filters now block nearly 10 million malicious emails per minute, experts warn that attackers are adapting faster, making stronger authentication measures and user vigilance essential.

Technical analysis of the attack found that Russian servers (purpxqha[.]ru) were used to redirect traffic and perform cross-site requests, while the primary domain infrastructure was registered in Karachi, Pakistan.

The malicious system bypasses multiple layers of Gmail security, allowing attackers to harvest not only email address and password combinations but also two-factor authentication codes, Google Authenticator tokens, backup recovery keys, and even answers to security questions, giving them complete control of victims' accounts before the victims realise they have been compromised. Security experts recommend that organisations block the identified domains, strengthen monitoring, and educate users about these evolving attack vectors. The Gmail phishing surge reflects a broader reality: cybersecurity is no longer a passive discipline but a continuous one that must adapt at the speed of innovation.
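As a starting point for the domain-blocking and monitoring advice above, the minimal sketch below scans message text for the indicator domains reported in this campaign (defanged in the article as horkyrown[.]com and purpxqha[.]ru). The helper name and sample message are illustrative assumptions, not part of any specific product.

```python
import re

# Indicator domains reported for this campaign (refanged from the defanged
# forms in the article); extend this set with your own threat intelligence.
INDICATOR_DOMAINS = {"horkyrown.com", "purpxqha.ru"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def flagged_hosts(message: str) -> set[str]:
    """Return URL hosts in the message that match, or are subdomains of, an indicator."""
    hosts = {m.group(1).lower().split(":")[0] for m in URL_RE.finditer(message)}
    return {h for h in hosts
            if any(h == d or h.endswith("." + d) for d in INDICATOR_DOMAINS)}

# Hypothetical lure resembling the "New Voice Notification" theme described above.
sample = "New Voice Notification: listen at https://login.horkyrown.com/voicemail"
print(flagged_hosts(sample))  # {'login.horkyrown.com'}
```

A filter like this is deliberately simple; in practice it would sit alongside secure email gateway rules, DNS blocklists, and alerting rather than replace them.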

For individuals, cultivating digital scepticism is a priority: question every unexpected email, voicemail, or login request, and reinforce accounts with two-factor authentication or hardware security keys. Companies' responsibilities extend further, from investing in employee awareness training and running mock phishing exercises to deploying adaptive tools capable of detecting subtle changes in behaviour.

Collaboration between industry leaders, governments, and security researchers will be crucial to dismantling the criminal infrastructure that exploits global trust. In an environment where deception grows more sophisticated by the day, vigilance is more than a precaution; it is a form of empowerment that allows individuals and businesses alike to protect their digital identities from increasingly capable threats.

AI and Quantum Computing: The Next Cybersecurity Frontier Demands Urgent Workforce Upskilling

 

Artificial intelligence (AI) has firmly taken center stage in today’s enterprise landscape. From the rapid integration of AI into company products to the rising demand for AI skills in job postings and its growing presence at industry conferences, it’s clear that businesses are paying attention.

However, awareness alone isn’t enough. For AI to be implemented responsibly and securely, organizations must invest in robust training and skill development. This is becoming even more urgent with another technological disruptor on the horizon—quantum computing. Quantum advancements are expected to supercharge AI capabilities, but they will also amplify security risks.

As AI evolves, so do cyber threats. Deepfake scams and AI-powered phishing attacks are becoming more sophisticated. According to ISACA’s 2025 AI Pulse Poll, “two in three respondents expect deepfake cyberthreats to become more prevalent and sophisticated within the next year,” while 59% believe AI phishing is harder to detect. Generative AI adds another layer of complexity—McKinsey reports that only “27% of respondents whose organizations use gen AI say that employees review all content created by gen AI before it is used,” highlighting critical gaps in oversight.

Quantum computing raises its own red flags. ISACA’s Quantum Pulse Poll shows 56% of professionals are concerned about “harvest now, decrypt later” attacks. Meanwhile, 73% of U.S. respondents in a KPMG survey believe it’s “a matter of time” before cybercriminals exploit quantum computing to break modern encryption.

Despite these looming challenges, prioritization is alarmingly low. In ISACA’s AI Pulse Poll, just 42% of respondents said AI risks were a business priority, and in the quantum space, only 5% saw it becoming a top priority soon. This lack of urgency is risky, especially since no one knows exactly when “Q Day”—the moment quantum computers can break current encryption—will arrive.

Addressing AI and quantum risks begins with building a highly skilled workforce. Without the right expertise, AI deployments risk being ineffective or eroding trust through security and privacy failures. In the quantum domain, the stakes are even higher—quantum machines could render today’s public key cryptography obsolete, requiring a rapid transition to post-quantum cryptographic (PQC) standards.

While the shift sounds simple, the reality is complex. Digital infrastructures deeply depend on current encryption, meaning organizations must start identifying dependencies, coordinating with vendors, and planning migrations now. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has already released PQC standards, and cybersecurity leaders need to ensure teams are trained to adopt them.
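Where to begin is often the hardest part of that migration planning. The minimal sketch below, assuming only Python's standard ssl and socket modules and a hypothetical list of endpoints, records the TLS protocol and cipher each server currently negotiates, one small input to the cryptographic inventory that a PQC transition requires.

```python
# Minimal sketch: inventory the TLS parameters your endpoints negotiate today,
# as one input to a post-quantum cryptography (PQC) migration plan.
import socket
import ssl

HOSTS = ["example.com", "mail.example.com"]  # hypothetical endpoints; replace with your own

def probe(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher, protocol, bits = tls.cipher()
            return {"host": host, "protocol": protocol, "cipher": cipher, "bits": bits}

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(probe(host))
        except OSError as err:
            print(f"{host}: probe failed ({err})")
```

A real inventory would also cover certificates, code-signing keys, VPNs, and vendor dependencies, but even a simple survey like this gives teams concrete data to prioritise.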

Fortunately, the resources to address these challenges are growing. AI-specific training programs, certifications, and skill pathways are available for individuals and teams, with specialized credentials for integrating AI into cybersecurity, privacy, and IT auditing. Similarly, quantum security education is becoming more accessible, enabling teams to prepare for emerging threats.

Building training programs that explore how AI and quantum intersect—and how to manage their combined risks—will be crucial. These capabilities could allow organizations to not only defend against evolving threats but also harness AI and quantum computing for advanced attack detection, real-time vulnerability assessments, and innovative solutions.

The cyber threat landscape is not static—it’s accelerating. As AI and quantum computing redefine both opportunities and risks, organizations must treat workforce upskilling as a strategic investment. Those that act now will be best positioned to innovate securely, protect stakeholder trust, and stay ahead in a rapidly evolving digital era.

DevilsTongue Spyware Attacking Windows System, Linked to Saudi Arabia, Hungary


Cybersecurity experts have discovered new infrastructure suspected of being used by spyware vendor Candiru to target computers with Windows malware.

DevilsTongue spyware targets Windows systems

The research by Recorded Future’s Insikt Group identified eight distinct operational clusters associated with the spyware, known as DevilsTongue. Five are highly active, including clusters linked to Hungary and Saudi Arabia.

About Candiru's spyware

According to the report, the “infrastructure includes both victim-facing components likely used in the deployment and [command and control] of Candiru’s DevilsTongue spyware, and higher-tier infrastructure used by the spyware operators.” While a few clusters manage their victim-facing infrastructure directly, others route it through intermediary infrastructure layers or through the Tor network, giving operators access to the dark web.

Additionally, experts discovered another cluster linked to Indonesia that seemed to be active until November 2024. Experts couldn’t assess whether the two extra clusters linked with Azerbaijan are still active.

Mode of operation

Mercenary spyware such as DevilsTongue is infamous worldwide. It is nominally sold for use against serious crime and in counterterrorism operations, but according to Recorded Future it also poses a range of legal, privacy, and safety risks to targets, their organizations, and even those who report on it.

Microsoft has dubbed the spyware DevilsTongue. There is little public reporting on its deployment techniques, but leaked materials suggest it can be delivered via malicious links, man-in-the-middle attacks, physical access to a Windows device, and weaponized files. DevilsTongue has been installed both via threat actor-controlled URLs sent in spearphishing emails and via strategic website compromises known as watering-hole attacks, which exploit vulnerabilities in the web browser.

Insikt Group also found a new agent inside Candiru’s network that appears to have been introduced around the time Candiru’s assets were acquired by Integrity Partners, a US-based investment fund. Experts believe that a different company might have been involved in the acquisition.

How to stay safe?

In the short term, experts from Recorded Future advise defenders to “implement security best practices, including regular software updates, hunting for known indicators, pre-travel security briefings, and strict separation of personal and corporate devices.” In the long term, organizations are advised to invest in robust risk assessments to create effective policies.

Ransomware Defence Begins with Fundamentals Not AI

 


In an era of rapid technological advancement, it has become clear that artificial intelligence is not merely influencing cybersecurity but fundamentally redefining its boundaries and capabilities. The transformation was evident at the 2025 RSA Conference in San Francisco, where more than 40,000 cybersecurity professionals gathered to discuss the industry's path forward.

The rapid integration of agentic AI into cyber operations was one of the most significant topics discussed, highlighting both its disruptive potential and the strategic complexities it introduces. As AI technologies continue to empower defenders and adversaries alike, organisations are taking a measured approach, recognising the promise of AI-driven solutions while remaining vigilant against increasingly sophisticated attacks.

Although AI's use in criminal activity often dominates headlines, the narrative is far from one-sided; the technology's rise also reflects a broader industry shift toward balancing innovation with resilience in the face of rapidly changing threats.

Cybercriminals are indeed using AI and large language models (LLMs) to make ransomware campaigns more sophisticated, crafting more convincing phishing emails, bypassing traditional security measures, and selecting victims with greater precision. These tools increase attackers' stealth and efficiency, raising the stakes for organisational cybersecurity.

Yet while AI serves as a weapon for adversaries, it is proving an essential ally in the defence against ransomware when integrated into security systems, enabling organisations to detect and respond to attacks more quickly and accurately.

AI also strengthens incident containment and recovery, limiting potential damage. Coupled with real-time threat intelligence, it gives security teams the agility to adapt to evolving attack techniques and close the gap between offence and defence in an increasingly automated cyber environment.

In the wake of a series of high-profile ransomware attacks, most notably those targeting prominent brands such as M&S, concerns have been raised that AI is fuelling an unprecedented spike in cybercrime. Yet while AI is undeniably changing the threat landscape by streamlining phishing campaigns and automating attack workflows, its impact on ransomware operations has often been exaggerated.

In practice, AI is less a revolutionary force than an accelerant for tactics cybercriminals have relied on for years. Most ransomware groups continue to favour proven, straightforward methods that offer speed, scalability, and consistent financial returns: scam emails, credential theft, and insider exploitation remain the cornerstones of successful campaigns, delivering reliable results without any need for advanced AI.

As security leaders look for effective ways to address these threats, they are focusing on a realistic view of how AI is actually used within ransomware ecosystems. Breach and attack simulation tools have become critical assets, enabling organisations to identify vulnerabilities and close security gaps before attackers can exploit them.

This balanced approach emphasises bolstering foundational security controls while keeping pace with the incremental evolution of adversarial capabilities. Generative AI, meanwhile, continues to mature in profound and often paradoxical ways. On one hand, it empowers defenders by automating routine security operations, surfacing hidden patterns in complex data sets, and uncovering vulnerabilities that might otherwise go unnoticed.

On the other, it gives cybercriminals powerful tools to craft more sophisticated, targeted, and scalable attacks, blurring the line between innovation and exploitation. With recent studies attributing over 80% of cyber incidents to human error, organisations have strong reasons to harness AI to strengthen their security posture.

For cybersecurity leaders, AI streamlines threat detection, reduces the burden of manual oversight, and enables real-time response. The danger is that the same technologies can be adapted by adversaries to sharpen phishing tactics, automate malware deployment, and orchestrate advanced intrusions, and this dual-use nature has raised widespread concern among executives.

According to a recent survey, 84% of CEOs are concerned that generative AI could be the source of widespread or catastrophic cyberattacks. In response, organisations are investing heavily in AI-based cybersecurity, with AI security budgets projected to rise 43% by 2025.

This surge reflects a growing recognition that, even as generative AI introduces new vulnerabilities, it also holds the key to strengthening cyber resilience. And as AI increases the speed and sophistication of cyberattacks, adhering to foundational cybersecurity practices has never been more important.

While AI has unquestionably enhanced the tactics available to cybercriminals, enabling more targeted phishing, faster exploitation of vulnerabilities, and more evasive malware, the core techniques have not changed. The difference lies more in how attacks are executed than in what they do.

As such, traditional cybersecurity strategies, applied rigorously and consistently, remain critical bulwarks against even AI-enhanced threats. Chief among these foundational defences, widely deployed multi-factor authentication (MFA) provides a vital safeguard against credential theft, particularly as AI-generated phishing emails mimic legitimate communication with astonishing accuracy.
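To make the MFA recommendation concrete, here is a minimal sketch of time-based one-time password (TOTP) verification using the third-party pyotp library; the account name, issuer, and enrolment flow are illustrative assumptions rather than a description of any particular product.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp); values are illustrative.
import pyotp

# In practice the secret is generated once per user at enrolment and stored securely.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# URI the user loads into an authenticator app (hypothetical account/issuer names).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, compare the user-supplied code against the expected value,
# tolerating one 30-second time step of clock drift.
user_code = totp.now()  # stand-in for the code the user would type
print("MFA check passed:", totp.verify(user_code, valid_window=1))
```

Phishing-resistant options such as hardware security keys go further, since a one-time code can still be relayed through a convincing fake login page.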

Regular, securely stored data backups provide an effective fallback against ransomware that can now dynamically alter its payloads to avoid detection. Keeping all systems and software up to date is equally essential, as timely patching denies AI-enabled tools the known vulnerabilities they seek to exploit.

A Zero Trust architecture is also increasingly relevant as AI-assisted attackers move faster and more stealthily than ever. By assuming no implicit trust within the network and restricting lateral movement, the model greatly reduces the blast radius of any breach and lowers the likelihood of an attack succeeding.

Email filtering systems also need a major upgrade: AI-based tools are better equipped to detect the subtle cues in phishing campaigns that have been evading legacy solutions. Security awareness training matters as much as ever, since human error remains one of the leading causes of breaches, and employees trained to spot AI-crafted deception are among a company's best lines of defence.

AI-based anomaly detection systems are likewise becoming increasingly important for spotting the unusual behaviour that signals a breach, while segmentation, strict access control policies, and real-time monitoring complement them by limiting exposure and containing threats. Even as AI adds new complexity to the threat landscape, it has not rendered traditional defences obsolete.
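To illustrate the anomaly detection idea in miniature, the sketch below trains scikit-learn's Isolation Forest on a handful of made-up login features and flags an obviously unusual event; the features, numbers, and thresholds are illustrative, not a production detection pipeline.

```python
# Minimal sketch: flag unusual login behaviour with an Isolation Forest (scikit-learn).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features: [hour of day, failed attempts, MB downloaded]
baseline = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10],
    [15, 0, 9], [16, 1, 14], [9, 0, 11], [13, 0, 13],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_events = np.array([
    [10, 0, 12],   # ordinary working-hours activity
    [3, 6, 900],   # 3 a.m., repeated failures, unusually large transfer
])
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged as anomalous
```

Real deployments feed far richer telemetry into such models and pair the scores with human review, but the principle is the same: learn what normal looks like and surface deviations.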

On the contrary, these tried-and-true measures, augmented by intelligent automation and threat intelligence, remain the cornerstones of resilient cybersecurity. Defending against AI-powered adversaries requires not just speed but strategic foresight and the disciplined execution of proven strategies.

As AI-powered cyberattacks dominate the discussion, organisations also face an often-overlooked risk from their own unchecked, ungoverned use of AI tools. While much attention is focused on how threat actors weaponise AI, the internal vulnerabilities created by unsanctioned adoption of generative AI present a clear and present threat.

In what is referred to as "shadow AI," employees use tools like ChatGPT without formal authorisation or oversight, circumventing established security protocols and potentially exposing sensitive corporate data. One recent study found that nearly 40% of IT professionals admit to having used generative AI tools without proper authorisation.

Besides undermining governance, such practices obscure how data is processed and handled, complicate incident response, and increase the organisation's exposure to attack. Unregulated AI use, coupled with inadequate data governance and poorly configured AI services, produces a range of operational and security problems.

Organisations should mitigate the risks posed by internal AI tools by treating them like any other enterprise technology: establishing robust governance frameworks, ensuring transparency of data flows, conducting regular audits, and providing cybersecurity training that addresses the dangers of shadow AI, while keeping leadership mindful of the threats their organisations currently face.

Although AI generates headlines, the most successful attacks still rely on proven techniques: phishing, credential theft, and ransomware. Overemphasising hypothetical AI-driven threats can distract attention from critical, foundational defences. In this context, the greatest risks are complacency and misplaced priorities, not AI itself.

Disciplined cyber hygiene, regular attack simulation, and strong security fundamentals remain the most effective ways to combat ransomware in the long run. Artificial intelligence is neither a single threat nor a single solution; it is a powerful force capable of both strengthening and destabilising digital defences in a rapidly evolving environment.

Navigating this shifting landscape demands clarity, discipline, and strategic depth. AI may dominate headlines and influence funding decisions, but it does not diminish the importance of basic cybersecurity practices.

What is needed is a recalibration of priorities. Security leaders must build resilience rather than chase the allure of emerging technologies alone, adopting a realistic, layered approach to security that embraces AI as a tool while never losing sight of what consistently works.

That means integrating advanced automation and analytics with tried-and-true defences, enforcing governance around AI usage, and keeping data flows and user behaviour under close watch. Organisations must also recognise that technological tools are only as powerful as the frameworks and people that support them.

As threats become increasingly automated, human oversight matters more, not less. Training, informed leadership, and a culture of accountability are not optional; they are imperative. For AI to be effective, it must sit within a broader security strategy grounded in visibility, transparency, and proactive risk management.

In the ongoing battle against ransomware and AI-enhanced threats, success will hinge not on who has the most sophisticated tools but on who applies them most consistently, purposefully, and with the greatest foresight. The opportunity lies in mastering AI, governing its use internally, and never letting innovation overshadow the fundamentals that keep security sustainable. For today's defenders, the winning formula remains strong fundamentals, smart integration, and unwavering vigilance.

Delta Air Lines is Using AI to Set Ticket Prices

 

With major ramifications for passengers, airlines are increasingly using artificial intelligence to determine ticket prices. Simple actions like allowing browser cookies, accepting website agreements, or enrolling in loyalty programs can now influence a flight's price. The move to AI-driven pricing raises significant questions about equity, privacy, and the possibility of higher travel costs.

Recently, Delta Air Lines revealed that the Israeli startup Fetcherr's AI technology is used to determine about 3% of its domestic ticket rates. To generate customised offers, this system analyses a number of variables, such as past purchasing patterns, user lifetime value, and the current context of each booking query. The airline plans to raise AI-based pricing to 20% of tickets by the end of 2025, according to Delta President Glen Hauenstein, who also emphasised the favourable revenue impact. 

Regulatory issues

US lawmakers have questioned the use of AI pricing models, fearing that it may result in increased fares and unfair disadvantages for some customers. The public's response has been mixed; some passengers are concerned about customised pricing schemes that could make air travel less transparent and affordable. 

Other airlines are following suit, investing in AI expertise and building machine learning solutions to adopt dynamic, data-driven pricing strategies. Although this trend invites increased regulatory scrutiny, it also signals a larger transition within the industry. In an effort to balance innovation with fairness, authorities are looking more closely at how AI technologies affect consumer rights and market competition.

In Canada, airlines such as Porter acknowledge using dynamic pricing and integrating AI in some operational areas, but they do not yet use AI for personalised ticket pricing. Canadian consumers benefit from enhanced privacy safeguards under the Personal Information Protection and Electronic Documents Act (PIPEDA), which requires firms to obtain "meaningful consent" before collecting, processing, or sharing personal data.

Nevertheless, experts caution that PIPEDA is out of date and does not completely handle the complications posed by AI-driven pricing. Terry Cutler, a Canadian information security consultant, notes that, while certain safeguards exist, significant ambiguities persist, particularly when data is used in unexpected ways, such as changing prices based on browsing histories or device types. 

Implications for passengers 

As airlines accelerate the introduction of AI-powered pricing, passengers should be cautious about how their personal information is used. With regulatory frameworks trying to keep up with rapid technology innovations, customers must navigate an ever-evolving sector that frequently lacks transparency. Understanding these dynamics is critical for maintaining privacy and making informed judgements in the age of AI-powered air travel pricing.

AI-Powered Malware ‘LameHug’ Attacks Windows PCs via ZIP Files

 

Cybersecurity researchers have discovered a new and alarming trend in online threats: "LameHug". This malicious program stands out because it uses artificial intelligence, notably large language models (LLMs) built by companies such as Alibaba.

Unlike classic viruses, LameHug can generate its own instructions and commands, making it a more adaptive adversary that is potentially harder to detect. Its primary goal is to infiltrate Windows PCs and then quietly exfiltrate valuable data.

The malicious program typically begins its infiltration camouflaged as ordinary-looking ZIP files. These files are frequently sent via fraudulent emails that seem to come from legitimate government sources. When a user opens the seemingly innocent archive, the hidden executable and Python files inside begin to work. The malware then collects information about the affected Windows PC. 

Following this first reconnaissance, LameHug actively looks for text documents and PDF files stored in popular computer directories before discreetly transferring the obtained data to a remote web server. Its ability to employ AI to write its own commands makes it exceptionally cunning in its actions. 

LameHug was discovered by the Ukrainian national cyber incident response team (CERT-UA). Its investigation points to the Russian cyber group APT28 as the most likely source of this advanced threat. The malware is written in Python and uses Hugging Face's programming interfaces, which in turn call an Alibaba Cloud language model known as Qwen-2.5-Coder-32B-Instruct, demonstrating the complex technological foundation of this new digital weapon.

LameHug's arrival marks the first observed instance of malicious software using artificial intelligence to produce its own executable commands. That capability poses significant challenges for existing security software, which is typically built to identify known attack patterns. This development, along with reports of other emerging malware such as "Skynet" that attempts to evade AI-based detection techniques, highlights the ongoing and intensifying arms race in the digital sphere.

AI-Driven Phishing Threats Loom After Massive Data Breach at Major Betting Platforms

 

A significant data breach impacting as many as 800,000 users from two leading online betting platforms has heightened fears over sophisticated phishing risks and the growing role of artificial intelligence in exploiting compromised personal data.

The breach, confirmed by Flutter Entertainment, the parent company behind Paddy Power and Betfair, exposed users’ IP addresses, email addresses, and activity linked to their gambling profiles.

While no payment or password information was leaked, cybersecurity experts warn that the stolen details could still enable highly targeted attacks. Flutter, which also owns brands like Sky Bet and Tombola, referred to the event as a “data incident” that has been contained. The company informed affected customers that there is, “nothing you need to do in response to this incident,” but still advised them to stay alert.

With an average of 4.2 million monthly users across the UK and Ireland, even partial exposure poses a serious risk.

Harley Morlet, chief marketing officer at Storm Guidance, emphasized: “With the advent of AI, I think it would actually be very easy to build out a large-scale automated attack. Basically, focusing on crafting messages that look appealing to those gamblers.”

Similarly, Tim Rawlins, director and senior adviser at the NCC Group, urged users to remain cautious: “You might re-enter your credit card number, you might re-enter your bank account details, those are the sort of things people need to be on the lookout for and be conscious of that sort of threat. If it's too good to be true, it probably is a fraudster who's coming after your money.”

Rawlins also noted that AI technology is making phishing emails increasingly convincing, particularly in spear-phishing campaigns where stolen data is leveraged to mimic genuine communications.

Experts caution that relying solely on free antivirus tools or standard Android antivirus apps offers limited protection. While these can block known malware, they are less effective against deceptive emails that trick users into voluntarily revealing sensitive information.

A stronger defense involves practicing layered security: maintaining skepticism, exercising caution, and following strict cyber hygiene habits to minimize exposure.

Germany’s Warmwind May Be the First True AI Operating System — But It’s Not What You Expect

 



Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.

Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.

Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.


What exactly is Warmwind?

Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.

Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.

The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.

Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.

Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.

Jena, the company behind Warmwind, says calling it an “AI operating system” is partly symbolic. The name helps people grasp the concept quickly: it is an operating system, not for people, but for digital AI workers.

While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.

Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology, it’s helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources (internal files, external APIs, sensor feeds) and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands, even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
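The sketch below illustrates the first three ideas on this list in miniature: every value carries an origin label, a toy forecast inherits the strictest label of its inputs, and the output is released only to users whose clearance covers it. The labels, roles, and forecast function are illustrative assumptions, not any specific product's API.

```python
# Minimal sketch of origin tracking plus output filtering; all names are illustrative.
from dataclasses import dataclass

LEVELS = {"public": 0, "internal": 1, "restricted": 2}

@dataclass
class LabeledValue:
    value: float
    label: str  # "public", "internal" or "restricted"

def forecast(inputs: list[LabeledValue]) -> LabeledValue:
    # Toy "model": average the inputs; the output inherits the strictest input label.
    result = sum(x.value for x in inputs) / len(inputs)
    strictest = max(inputs, key=lambda x: LEVELS[x.label]).label
    return LabeledValue(result, strictest)

def release(output: LabeledValue, user_clearance: str) -> str:
    if LEVELS[user_clearance] >= LEVELS[output.label]:
        return f"forecast={output.value:.1f}"
    return "forecast withheld: derived from data above your clearance"

inputs = [LabeledValue(120.0, "public"), LabeledValue(95.0, "restricted")]
print(release(forecast(inputs), "public"))      # withheld
print(release(forecast(inputs), "restricted"))  # shown
```

Real systems would attach provenance at the data-pipeline level and enforce it in the serving layer, but the core rule is the same: an output is only as shareable as its most sensitive input.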


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

Is Your Bank Login at Risk? How Chatbots May Be Guiding Users to Phishing Scams

 


Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.

Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.

Now, a team from the cybersecurity company Netcraft has found that even large language models (LLMs), like the ones behind some popular AI chatbots, may be helping scammers without meaning to. In their study, they tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.

The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.

So how does this help scammers? Cybercriminals can purchase these unclaimed or inactive domain names, the incorrect ones suggested by the AI, and turn them into realistic phishing pages. If people click on them, thinking they’re going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.

In one example observed by Netcraft, an AI-powered search tool redirected users who asked about a U.S. bank login to a fake copy of the bank’s website. The real link was shown further down the results, increasing the risk of someone clicking on the wrong one.

Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.

The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns, not verified sources, and that can lead to outdated or incorrect responses.

The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
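One simple habit that follows from this advice is to check any suggested login link against a short allowlist of hosts you already trust, rather than trusting the link itself. The sketch below shows the idea; the allowlist entries and example URLs are illustrative, and a real list would come from your own bookmarks or official sources.

```python
# Minimal sketch: accept a login link only if its host is on a known-good allowlist.
from urllib.parse import urlparse

KNOWN_LOGIN_HOSTS = {
    "accounts.google.com",
    "www.paypal.com",
    "signin.example-bank.com",  # hypothetical bank portal
}

def looks_trustworthy(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_LOGIN_HOSTS

for link in ["https://accounts.google.com/signin",
             "https://accounts.g00gle-login.help/signin"]:
    print(link, "->", "allowlisted" if looks_trustworthy(link) else "do not trust")
```

The same principle works without any code: bookmark the real login pages once and use the bookmark, not a generated or emailed link.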

Amid Federal Crackdown, Microsoft Warns Against Rising North Korean Jobs Scams

North Korean operatives are infiltrating high-profile US-based tech firms through employment scams, and experts say their tactics are becoming more advanced. Following a recent investigation, Microsoft has urged its peers to enforce stronger pre-employment verification measures and adopt policies that block unauthorized IT management tools.

Further investigation by the US government revealed that these actors were earning money for the North Korean government, which uses the funds to run state operations and its weapons program.

US imposes sanctions against North Korea

The US has imposed strict sanctions on North Korea that bar US companies from hiring North Korean nationals. In response, threat actors create fake identities and use tricks such as VPNs to obscure who and where they really are, avoiding detection and getting hired more easily.

More recently, the threat actors have adopted spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers worked with an individual residing in New Jersey who set up shell companies to fool victims into believing they were paying a legitimate local business; the same individual also helped overseas partners get recruited.

DoJ arrests accused

The scheme has now come to an end: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing “Danny” Wang, with operating a “year-long” scam that earned over $5 million. The agency also arrested eight more people, six Chinese and two Taiwanese nationals, who are charged with money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.

In addition to drawing what Microsoft describes as hefty pay from these jobs, the individuals gain access to private organization data, which they exploit by stealing sensitive information and extorting the company.

Lazarus group behind such scams

One of the largest and most infamous hacking gangs worldwide is the North Korean state-sponsored group Lazarus. According to experts, the gang has generated billions of dollars for the North Korean government through similar schemes, in a long-running campaign widely known as “Operation DreamJob”.

"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.

How Ransomware Has Impacted Cyber Insurance Assessment Approach

Cyber insurance and ransomware

The surge in ransomware campaigns has compelled cyber insurers to rethink how they assess risk. Ransomware has been a threat for many years, but only recently have threat actors realized the significant financial benefits they can reap from such attacks. The rise of ransomware-as-a-service (RaaS) and double extortion tactics has changed the threat landscape, as organizations continue to fall victim and suffer data leaks that end up publicly accessible.

According to a 2024 threat report by Cisco, "Ransomware remains a prevalent threat as it directly monetizes attacks by holding data or systems hostage for ransom. Its high profitability, coupled with the increasing availability of ransomware-as-a-service platforms, allows even less skilled attackers to launch campaigns."

Changing insurance landscape due to ransomware

Cyber insurance helps businesses address such threats by offering services such as ransom negotiation, ransom reimbursement, and incident response. Such support, however, comes at a price: 2020 and 2021 saw a surge in insurance premiums. A session at the Black Hat USA conference in Las Vegas will discuss how ransomware has changed businesses’ partnerships with insurers, and how it has upended the insurers’ own business model.

At the start of the 21st century, insurance firms required companies to undergo a security audit to earn a 25% policy discount; insurance was a hands-on business back then. The data breach era of the 2000s followed, but breaches were still relatively infrequent, largely hitting the hospitality and retail sectors.

As a result, insurers stopped requiring in-depth security audits and began using questionnaires to measure risk. Then the ransomware wave hit in 2019, and insurers started paying out more in claims than they were taking in, a sign that the business model was inadequate.

Questionnaires can be tricky for businesses to fill out; even a seemingly simple question about multifactor authentication (MFA) can be complicated to answer accurately. Alongside questionnaires, insurers have started using external scans.

Incentives to promote security measures

Threats have risen, but so have assessments and coverage incentives. “Vanishing retention” provisions, for example, reduce or eliminate a policyholder’s retention (the deductible) if the insurer’s security recommendations are followed. Security awareness training and patching vulnerabilities are other measures that can lower costs, and scan-based assessments increasingly feed into premium pricing.

Cybercriminals Target AI Enthusiasts with Fake Websites to Spread Malware

 


Cyber attackers are now using people’s growing interest in artificial intelligence (AI) to distribute harmful software. A recent investigation has uncovered that cybercriminals are building fake websites designed to appear at the top of Google search results for popular AI tools. These deceptive sites are part of a strategy known as SEO poisoning, where attackers manipulate search engine algorithms to increase the visibility of malicious web pages.

Once users click on these links, believing they’re accessing legitimate AI platforms, they’re silently redirected to dangerous websites where malware is downloaded onto their systems. The websites use layers of code and redirection to hide their true intent from users and security software.

According to researchers, the malware being delivered includes infostealers— a type of software that quietly gathers personal and system data from a user’s device. These can include saved passwords, browser activity, system information, and more. One type of malware even installs browser extensions designed to steal cryptocurrency.

What makes these attacks harder to detect is the attackers' use of trusted platforms. For example, the malicious code is sometimes hosted on widely used cloud services, making it seem like normal website content. This helps the attackers avoid detection by antivirus tools and security analysts.

The way these attacks work is fairly complex. When someone visits one of the fake AI websites, their browser is immediately triggered to run hidden JavaScript. This script gathers information about the visitor’s browser, encrypts it, and sends it to a server controlled by the attacker. Based on this information, the server then redirects the user to a second website. That second site checks details like the visitor’s IP address and location to decide whether to proceed with delivering the final malicious file.

This final step often results in the download of harmful software that invades the victim’s system and begins stealing data or installing other malicious tools.

These attacks are part of a growing trend in which the popularity of new technologies, such as AI chatbots, is exploited by cybercriminals for fraudulent purposes. Similar tactics have been observed in the past, including misleading users with fake tools and promoting harmful applications through hijacked social media pages.

As AI tools become more common, users should remain alert while searching for or downloading anything related to them. Even websites that appear high in search engine results can be dangerous if not verified properly.

To stay safe, avoid clicking on unfamiliar links, especially when looking for AI services. Always download tools from official sources, and double-check website URLs. Staying cautious and informed is one of the best ways to avoid falling victim to these evolving online threats.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


ChatGPT has become a transformative artificial intelligence tool, widely adopted by individuals and businesses seeking to improve their operations. Developed by OpenAI, the platform has proven highly effective at streamlining a wide range of workflows, from drafting compelling emails and developing creative content to conducting complex data analysis.

OpenAI continues to enhance ChatGPT through new integrations and advanced features that make it easier to embed in daily organisational workflows; however, understanding the platform's pricing models is vital for any organisation that aims to use it efficiently. For a business or entrepreneur in the United Kingdom weighing ChatGPT's subscription options, managing international payments can be an additional challenge, especially when exchange rates fluctuate or conversion fees are hidden.

In this context, the Wise Business multi-currency card offers a practical way to maintain financial control and cost transparency. By allowing companies to hold and spend in more than 40 currencies, it lets them settle subscription payments without incurring excessive currency conversion charges, making it easier to manage budgets while adopting cutting-edge technology.

OpenAI has recently introduced a suite of premium features aimed at enhancing the ChatGPT experience for subscribers. Paid users now have access to advanced reasoning models, including o1 and o3, which support more sophisticated analysis and problem-solving.

The subscription offers more than enhanced reasoning: it also includes an upgraded voice mode that makes conversational interactions more natural, along with improved memory that lets the AI retain context over longer periods. A powerful coding assistant has been added as well, designed to help developers automate workflows and speed up software development.

To expand creative possibilities further, OpenAI has raised token limits, allowing longer inputs and outputs and letting users generate more images without interruption. Subscribers also get expedited image generation via a priority queue, delivering faster turnaround during high-demand periods.

Paid accounts also retain full access to the latest models and enjoy consistent performance, since they are not forced onto less advanced models when server capacity is strained, a limitation free users may still encounter. Free users have not been left out, though: GPT-4o has effectively replaced the older GPT-4 model, giving complimentary accounts more capable technology without any fallback downgrade.

Free users also have access to basic image tools, although they do not receive the same priority in generation queues as paid subscribers. Reflecting its commitment to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge.

ChatGPT's free version remains a compelling option for people who use the software only sporadically, perhaps to write the occasional email, do light research, or create simple images. Individuals or organisations that frequently hit usage limits, such as long waits for token allowances to reset, may find upgrading to a paid plan well worth it, unlocking uninterrupted access and advanced capabilities.

To make ChatGPT a more versatile and deeply integrated virtual assistant, OpenAI has introduced a feature called Connectors. It enables ChatGPT to interface with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from those sources in real time while responding to user queries.

With Connectors, the company is moving towards a more personal and contextually relevant experience for its users. Ahead of an upcoming family vacation, for example, a user can instruct ChatGPT to scan their Gmail account and compile all correspondence about the trip, streamlining travel planning instead of combing through emails manually.
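
Connectors are a product feature of ChatGPT rather than a public API, but the underlying pattern can be approximated with the standard OpenAI Python SDK: retrieve the relevant data yourself, then hand it to the model as context. The sketch below is a minimal illustration under that assumption; fetch_trip_emails() is a hypothetical stand-in for a real, consent-based Gmail integration.

```python
# A rough analogue of a Connector, using the public OpenAI Python SDK:
# fetch the relevant documents yourself, then pass them to the model as context.
from openai import OpenAI


def fetch_trip_emails() -> list[str]:
    """Hypothetical stand-in for a consent-based Gmail integration."""
    return [
        "Flight confirmation: LHR -> LIS, 14 July, booking ref ABC123",
        "Hotel booking: Lisbon Riverside, 14-18 July, two rooms",
    ]


client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = "\n".join(fetch_trip_emails())
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarise the user's travel emails into a short itinerary."},
        {"role": "user", "content": context},
    ],
)
print(response.choices[0].message.content)
```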

This level of integration brings ChatGPT closer to rivals such as Google's Gemini, which benefits from Google's ownership of popular services like Gmail and Calendar. Connectors give individuals and businesses a chance to redefine how they engage with AI tools: by granting ChatGPT secure access to personal or organisational data spread across multiple services, OpenAI intends to build a comprehensive digital assistant that anticipates needs, surfaces critical insights, and streamlines decision-making.

Demand for highly customised, intelligent assistance is growing, and other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central productivity hub: an AI capable of understanding, organising, and acting upon every aspect of a user's digital life.

For all its convenience and efficiency, this approach also underscores the need for robust data security, transparency, and protection of personal information if users are to take advantage of these powerful integrations as they become mainstream. On its official X (formerly Twitter) account, OpenAI recently announced that Connectors for Google Drive, Dropbox, SharePoint, and Box are now available in ChatGPT outside of the Deep Research environment.

As part of this expansion, users can link their cloud storage accounts directly to ChatGPT, enabling the AI to retrieve and process their personal and professional data and draw on it when generating responses. OpenAI described the functionality in its announcement as "perfect for adding your own context to your ChatGPT during your daily work", highlighting the company's ambition to make ChatGPT more intelligent and contextually aware.

It is important to note, however, that access to these newly released Connectors is limited by subscription tier and geography. The feature is exclusive to ChatGPT Pro subscribers, who pay $200 per month, and is currently available worldwide except in the European Economic Area (EEA), Switzerland and the United Kingdom. Users on lower tiers, such as ChatGPT Plus subscribers paying $20 per month, or those based in the excluded European regions, cannot use these integrations at this time.

This staggered rollout reflects broader regulatory compliance challenges in the EU, where stricter data protection rules and artificial intelligence governance frameworks often delay availability. Outside of Deep Research, the range of Connectors on offer remains relatively limited; within Deep Research, however, integration support is considerably more extensive.

On the ChatGPT Plus and Pro plans, users leveraging Deep Research can access a much broader array of integrations, including Outlook, Teams, Gmail, Google Drive, and Linear, although some regional restrictions still apply. Organisations on Team, Enterprise, or Education plans additionally gain Deep Research connectors for SharePoint, Dropbox, and Box.

Additionally, OpenAI now supports the Model Context Protocol (MCP), a framework that allows workspace administrators to build customised Connectors to suit their needs. By integrating ChatGPT with proprietary data systems, organisations can create secure, tailored integrations for highly specialised internal workflows and knowledge management.
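
As a rough illustration of what a custom connector might look like under the hood, the sketch below uses the open-source MCP Python SDK (the `mcp` package) to expose a single tool from a stub internal knowledge base. The server name, tool, and data source are all hypothetical, and the steps for registering such a server as a workspace Connector are not shown here.

```python
# pip install mcp
# A stub MCP server exposing one tool over stdio; the server name, tool,
# and data source are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-kb")


@mcp.tool()
def search_kb(query: str) -> str:
    """Search a (stub) internal knowledge base and return matching snippets."""
    # A real connector would query the organisation's own data store here.
    return f"No results for '{query}' in the stub knowledge base."


if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-aware client can connect
```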

As companies adopt artificial intelligence more widely, the catalogue of Connectors is expected to expand rapidly, giving users an ever-growing range of external data sources to bring into their conversations. The dynamics of this market also underscore the advantage held by technology giants like Google, whose Gemini assistant is already seamlessly integrated across the company's services, including its search engine.

OpenAI's strategy, by contrast, relies on building a network of third-party integrations to deliver a similar assistant experience. The new Connectors are now generally accessible in the ChatGPT interface, although users may need to refresh their browsers or update the app to activate them.

The continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. As generative AI capabilities mature, organisations and professionals evaluating ChatGPT should take a strategic approach, weighing the advantages and drawbacks of deeper integration against operational needs, budget limitations, and regulatory considerations.

The introduction of Connectors and the advanced subscription tiers puts users on a clear trajectory toward more personalised, dynamic AI assistance that can ingest and contextualise diverse data sources. That evolution also makes strong data governance frameworks, clear access controls, and adherence to privacy regulations increasingly important.

Companies that invest early in these capabilities will be better positioned to stay competitive in an increasingly automated landscape, provided they also set clear policies that balance innovation with accountability. Looking ahead, the organisations that actively develop internal expertise, test carefully selected integrations, and cultivate a culture of responsible AI use will be best prepared to realise the technology's potential and maintain a competitive edge for years to come.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.
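
Conceptually, an AVT pipeline has two stages: transcribe the audio of the consultation, then summarise the transcript into a draft note. The sketch below is purely illustrative, assuming an off-the-shelf speech-to-text model and a general-purpose LLM accessed through the OpenAI Python SDK and a hypothetical audio file; real clinical products use their own pipelines and must meet NHS safety and data-protection requirements before deployment.

```python
# Purely illustrative two-stage AVT pipeline: transcribe, then summarise.
# Assumes the OpenAI Python SDK and an audio file named consultation.mp3.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1: speech-to-text.
with open("consultation.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Stage 2: summarise the transcript into a draft note. This is the step where
# "hallucination" risk arises, so any output would still need clinician review.
note = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarise this consultation transcript into a concise draft note."},
        {"role": "user", "content": transcript.text},
    ],
)
print(note.choices[0].message.content)
```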

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination”, which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

UEBA: A Smarter Way to Fight AI-Driven Cyberattacks

 



As artificial intelligence (AI) grows, cyberattacks are becoming more advanced and harder to stop. Traditional security systems that protect company networks are no longer enough, especially when dealing with insider threats, stolen passwords, and attackers who move through systems unnoticed.

Recent studies warn that cybercriminals are using AI to make their attacks faster, smarter, and more damaging. These advanced attackers can now automate phishing emails and create malware that changes its form to avoid being caught. Some reports also show that AI is helping hackers quickly gather information and launch more targeted, widespread attacks.

To fight back, many security teams are now using a more intelligent system called User and Entity Behavior Analytics (UEBA). Instead of focusing only on known attack patterns, UEBA carefully tracks how users normally behave and quickly spots unusual activity that could signal a security problem.


How UEBA Works

Older security tools were based on fixed rules and could only catch threats that had been seen before. They often missed new or hidden attacks, especially when hackers used AI to disguise their moves.

UEBA changed the game by focusing on user behavior. It looks for sudden changes in the way people or systems normally act, which may point to a stolen account or an insider threat.

Today, UEBA uses machine learning to process huge amounts of data and recognize even small changes in behavior that may be too complex for traditional tools to catch.


Key Parts of UEBA

A typical UEBA system has four main steps:

1. Gathering Data: UEBA collects information from many places, including login records, security tools, VPNs, cloud services, and activity logs from different devices and applications.

2. Setting Normal Behavior: The system learns what is "normal" for each user or system—such as usual login times, commonly used apps, or regular network activities.

3. Spotting Unusual Activity: UEBA compares new actions to normal patterns. It uses smart techniques to see if anything looks strange or risky and gives each unusual event a risk score based on its severity.

4. Responding to Risks: When something suspicious is found, the system can trigger alerts or take quick action like locking an account, isolating a device, or asking for extra security checks.

This approach helps security teams respond faster and more accurately to threats.
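
As a minimal sketch of steps 2 and 3, the example below fits an Isolation Forest (via scikit-learn) on a handful of made-up login features to learn a "normal" baseline, then scores two new events against it. The features, data, and scoring convention are illustrative assumptions, not taken from any particular UEBA product.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up baseline of "normal" logins: [hour of day, MB transferred, new-location flag]
baseline = np.array([
    [9, 12.0, 0], [10, 8.5, 0], [14, 20.0, 0], [11, 15.2, 0],
    [9, 10.1, 0], [13, 18.7, 0], [16, 9.9, 0], [10, 14.4, 0],
])

# Step 2: learn the normal behaviour profile.
model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

# Step 3: score new events against that baseline.
new_events = np.array([
    [10, 11.0, 0],   # looks like the usual pattern
    [3, 900.0, 1],   # 3 a.m. login, huge transfer, unfamiliar location
])

# decision_function returns higher values for normal points; negate it so that
# a higher number means higher risk.
risk_scores = -model.decision_function(new_events)
for event, score in zip(new_events, risk_scores):
    print(event.tolist(), round(float(score), 3))
```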


Why UEBA Matters

UEBA is especially useful in protecting sensitive information and managing user identities. It can quickly detect unusual activities like unexpected data transfers or access from strange locations.

When used with identity management tools, UEBA can make access control smarter, allowing easy entry for low-risk users, asking for extra verification for medium risks, or blocking dangerous activities in real time.
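
A simple way to picture that risk-tiered logic is a function that maps a UEBA risk score to an access decision. The 0-100 scale and thresholds below are assumptions for illustration only; real deployments tune them to their own scoring model and policies.

```python
def access_decision(risk_score: float) -> str:
    """Map a UEBA risk score (0-100, illustrative scale) to an access action."""
    if risk_score < 30:
        return "allow"            # low risk: let the session proceed
    if risk_score < 70:
        return "require_mfa"      # medium risk: ask for extra verification
    return "block_and_alert"      # high risk: stop the activity, notify the SOC


# Example: a login from an unfamiliar country followed by a large data export.
print(access_decision(82))  # -> block_and_alert
```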


Challenges in Using UEBA

While UEBA is a powerful tool, it comes with some difficulties. Companies need to collect data from many sources, which can be tricky if their systems are outdated or spread out. Also, building reliable "normal" behavior patterns can be hard in busy workplaces where people’s routines often change. This can lead to false alarms, especially in the early stages of using UEBA.

Despite these challenges, UEBA is becoming an important part of modern cybersecurity strategies.