
FBI Alerts Public about Scammers Using Altered Online Photos to Stage Fake Kidnappings

 



The Federal Bureau of Investigation has issued a new advisory warning people about a growing extortion tactic in which criminals take photos posted online, manipulate them, and present the edited images as supposed evidence during fake kidnapping attempts. The agency reports that these incidents, often described as virtual kidnappings, are designed to panic the target into paying quickly before verifying the claims.


How the scam begins

The operation usually starts when criminals search social media accounts or any platform where people share personal photos publicly. They collect pictures of individuals, including children, teenagers, and adults, and then edit those images to make it appear as though the person is being held against their will. Scammers may change facial expressions, blur backgrounds, add shadows, or alter body positions to create a sense of danger.

Once they prepare these altered images, they contact a relative or friend of the person in the photo. In most cases, they send a sudden text or place a call claiming a loved one has been kidnapped. The message is crafted to create immediate panic and often includes threats of harm if payment is not made right away.


The role of fake “proof of life”

One recurring tactic is the use of emotionally charged photos or short video clips that appear to show the victim in distress. These materials are presented as proof that the kidnapping is real. However, investigators have observed that the content often contains mistakes that reveal it has been edited. The inconsistencies can range from missing tattoos or scars to unnatural lighting, distorted facial proportions, or visual elements that do not match known photos of the person.

Criminals also try to limit the victim’s ability to examine the images closely. Some use disappearing messages or apps that make screenshots difficult. Others send messages in rapid succession to prevent the victim from taking a moment to reach out to the supposed abducted individual.


Why these scams escalate quickly

Scammers depend on speed and emotional intensity. They frequently insist that any delay will lead to harm, which pressures victims to make decisions without checking whether their loved one is actually safe. In some situations, criminals exploit posts about missing persons by inserting themselves into ongoing searches and providing false updates.

The FBI urges people to be mindful of the information they share online, especially when it involves personal photos, travel details, or locations. The agency recommends that families set up a private code word that can be used during emergencies to confirm identity. Individuals should avoid sharing personal information with unknown callers or strangers while traveling.

If someone receives a threatening call or message, the FBI advises them to stay calm and attempt to contact the alleged victim directly through verified communication channels. People should record or capture any messages, screenshots, phone numbers, images, or audio clips connected to the incident. These materials can help law enforcement determine whether the event is a hoax.

Anyone who believes they have been targeted by a virtual kidnapping attempt is encouraged to submit a report to the FBI’s Internet Crime Complaint Center at IC3.gov. The agency requests detailed information, including phone numbers used by the scammer, payment instructions, message transcripts, and any photos or videos that were provided as supposed evidence.





Protecting Sensitive Data When Employees Use AI Chatbots


 

In today's digitised world, where artificial intelligence tools are rapidly reshaping the way people work, communicate, and collaborate, a quiet but pressing risk has emerged: what individuals choose to share with chatbots may not remain private.

A patient may ask ChatGPT for advice about an embarrassing medical condition, or an employee may upload sensitive corporate documents into Google's Gemini to generate a summary; in either case, the information disclosed can end up feeding the algorithms that power these systems.

Many experts have pointed out that AI models are built on large datasets collected from across the internet, including blogs, news articles, and social media posts, and are often trained without user consent, creating not only copyright problems but also significant privacy concerns.

Because machine learning processes are opaque, experts warn that once data has been ingested into a model's training pool, it is almost impossible to remove. Individuals and businesses alike must therefore ask how much trust they can place in tools that, while extremely powerful, may also expose them to unseen risks.

This is particularly true in the age of hybrid work, where artificial intelligence tools such as ChatGPT are rapidly becoming a new frontier for data breaches. While these platforms offer businesses valuable features, from drafting content to troubleshooting software, they also carry inherent risks.

Experts warn that poor management of these tools can result in training-data leakage, privacy violations, and accidental disclosure of sensitive company data. The latest Fortinet Work From Anywhere study highlights the magnitude of the problem: nearly 62% of organisations have reported data breaches as a result of switching to remote working.

Analysts believe some of these incidents could have been prevented if employees had remained on-premises with company-managed devices and applications. Security experts argue, however, that the solution is not a return to the office, but a robust data loss prevention (DLP) framework that safeguards information in a decentralised work environment.

To prevent sensitive information from being lost, stolen, or leaked across networks, storage systems, endpoints, and cloud environments, a robust DLP strategy combines tools, technologies, and best practices. A successful framework covers data at rest, in motion, and in use, ensuring that it is continuously monitored and protected.

Experts outline four essential components of a successful framework: classify company data, assign it security levels across the network, and keep the network itself secure; maintain strict compliance when storing, retaining, and deleting user information; educate staff on clear policies that prevent accidental sharing or unauthorised access; and deploy protection tools that can detect phishing, ransomware, insider threats, and unintentional exposure. Technology alone is not enough; clear policies must sit alongside it. With DLP implemented correctly, organisations are not only less likely to suffer leaks, they are also better placed to comply with industry standards and government regulations.
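As a rough illustration of the classification component described above, the sketch below scans a piece of text for a few patterns that commonly indicate sensitive data and assigns a sensitivity label. The patterns, label names, and sample text are hypothetical assumptions chosen for illustration; a real DLP product would use far more detectors and validated matching rather than simple regular expressions.

```python
import re

# Illustrative detectors only; production DLP tools use validated checks
# (e.g. Luhn validation for card numbers) and many more data types.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

# Hypothetical mapping from detected data types to sensitivity labels.
LABELS = {
    "payment_card": "restricted",
    "us_ssn": "restricted",
    "email_address": "internal",
}

def classify(text: str) -> str:
    """Return the highest sensitivity label found in the text."""
    found = {name for name, rx in PATTERNS.items() if rx.search(text)}
    if any(LABELS[name] == "restricted" for name in found):
        return "restricted"
    if found:
        return "internal"
    return "public"

if __name__ == "__main__":
    sample = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111"
    print(classify(sample))  # prints "restricted"
```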

Balancing innovation and responsibility is crucial for businesses that adopt hybrid work and AI-based tools. Under the UK General Data Protection Regulation (UK GDPR), businesses that use AI platforms such as ChatGPT must meet strict obligations designed to protect personal information from unauthorised access.

Any data that could identify an individual - such as an employee file, customer contact details, or a client database - falls within the regulation's scope, and business owners remain responsible for protecting that data even when it is handled by third parties. Companies therefore need to evaluate carefully how external platforms process, store, and protect their data.

This is often done through legally binding Data Processing Agreements that specify confidentiality standards, privacy controls, and data deletion requirements for the platforms. It is equally important that organisations inform individuals when their information is incorporated into artificial intelligence tools and, where necessary, obtain explicit consent.

The law also requires firms to implement “appropriate technical and organisational measures.” These include checking whether AI vendors store data overseas, how long that data is retained, and what safeguards are in place to prevent misuse. Beyond the financial penalties imposed for failing to comply, there is the risk of eroding employee and customer trust, which can be harder to repair than the fines themselves.

To keep data practices safe in the age of artificial intelligence, businesses are increasingly turning to Data Loss Prevention (DLP) solutions to automate the otherwise unmanageable task of monitoring vast networks of users, devices, and applications. Four primary categories of DLP software have emerged, defined by the state and flow of the information they protect.

Network DLP tools often use artificial intelligence and machine learning to identify suspicious traffic, tracking data as it moves within and outside a company's systems, whether through downloads, transfers, or mobile connections. Endpoint DLP is installed directly on users' computers, monitoring memory, cached data, and files as they are accessed or transferred, and preventing unauthorised activity at the source.

Cloud DLP solutions safeguard information stored in online environments such as backups, archives, and databases, relying on encryption, scanning, and access controls to secure corporate assets. Email DLP keeps sensitive details from leaking through internal and external correspondence, whether they are shared accidentally, maliciously, or through a compromised mailbox.
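To make the email DLP idea concrete, here is a minimal sketch, under stated assumptions, of an outbound check: before a message leaves the organisation, its body is scanned for a couple of sensitive patterns, and the message is released, quarantined, or blocked depending on whether any recipient is external. The domain name, rules, and actions are hypothetical examples, not the behaviour of any real email gateway.

```python
import re

INTERNAL_DOMAIN = "example.com"  # hypothetical company domain

# Two illustrative detectors; real email DLP products ship large rule packs.
CARD_RX = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SSN_RX = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_action(recipients: list[str], body: str) -> str:
    """Decide what to do with an outgoing email: allow, quarantine, or block."""
    external = [r for r in recipients if not r.endswith("@" + INTERNAL_DOMAIN)]
    sensitive = bool(CARD_RX.search(body) or SSN_RX.search(body))

    if external and sensitive:
        return "block"        # sensitive data about to leave the organisation
    if sensitive:
        return "quarantine"   # internal mail, but still worth a review
    return "allow"

if __name__ == "__main__":
    print(outbound_action(["partner@other.org"],
                          "Customer SSN is 123-45-6789"))  # prints "block"
```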

Some businesses ask whether an Extended Detection and Response (XDR) platform already covers this ground, but experts point out that DLP serves a different purpose: XDR provides broad threat detection and incident response, while DLP focuses on protecting sensitive data, categorising information, reducing breach risk, and ultimately preserving company reputations.

Major technology companies have taken varying approaches to handling the data their AI chatbots collect from users, often raising concerns about transparency and control. Google, for example, retains conversations with its Gemini chatbot for 18 months by default, although users can change this setting. Even with activity tracking disabled, chats remain in storage for at least 72 hours, whether or not they are reviewed by human moderators to refine the system.

Google also warns users against sharing confidential information and notes that conversations already flagged for human review cannot be erased. Meta's artificial intelligence assistant, available on Facebook, WhatsApp, and Instagram, is trained on public posts, photos, captions, and data scraped from around the web, although private messages are not used for training.

Citizens of the European Union and the United Kingdom can object to the use of their information for training under stricter privacy laws, but people in countries without such protections, such as the United States, have fewer options. Meta's opt-out process is complicated and is only offered in certain regions; users must submit evidence of their interactions with the chatbot to support the request.

Microsoft's Copilot provides no opt-out mechanism for personal accounts; users can only delete their interaction history through their account settings, and there is no option to prevent future data retention. These practices show how patchy AI privacy controls can be, with users' options shaped more by the laws of their jurisdiction than by corporate policy.

The responsibility of organisations navigating this evolving landscape lies not only in complying with regulations and implementing technical safeguards, but also in cultivating a culture of digital responsibility. Employees need to be taught to understand and respect the value of the information they handle, and to exercise caution when using AI-powered applications.

Proactive measures such as clear guidelines on chatbot usage, regular risk assessments, and checks that vendors comply with stringent data protection standards can significantly reduce an organisation's exposure to these threats.

Businesses that implement a strong governance framework are not only better protected but can also take advantage of AI with confidence, enhancing productivity, streamlining workflows, and maintaining competitiveness in a data-driven economy. The goal is not to avoid AI but to embrace it responsibly, balancing innovation with vigilance.

A company's use of AI can be transformed from a potential liability to a strategic asset by combining regulatory compliance, advanced DLP solutions, and transparent communication with staff and stakeholders. It is important to remember that trust is currency in a marketplace where security is king, and companies that protect sensitive data will not only prevent costly breaches from occurring but also strengthen their reputation in the long run.

DHS Data Sharing Error Left Sensitive Intelligence Open to Thousands

 



A technology mishap inside the U.S. Department of Homeland Security (DHS) briefly left sensitive intelligence records open to people who were never supposed to see them. The issue, which lasted for several weeks in 2023, involved the Homeland Security Information Network (HSIN) — a platform where intelligence analysts share unclassified but sensitive reports with select government partners.

The restricted section of HSIN, known as HSIN-Intel, is designed for law enforcement agencies and national security officials who require access to intelligence leads and analyses. However, a misconfiguration left the access controls set incorrectly, making the files visible to the entire network rather than just authorized users. As a result, thousands of individuals, including government employees in unrelated departments, private contractors, and even some foreign officials, were able to view materials meant for a much smaller audience.

An internal review later revealed that 439 intelligence products were exposed during this period, with unauthorized users opening them more than 1,500 times. While many of the users were from within the United States, the inquiry confirmed that several foreign accounts also accessed the data. Nearly 40 percent of the leaked material related to cybersecurity, including reports on state-sponsored hacking groups and foreign attempts to infiltrate government IT systems. Other exposed content included law enforcement tips, assessments of disinformation campaigns, and files mentioning protest activity within the United States.

DHS acted quickly to fix the technical error once it was discovered. The department later stated that oversight bodies determined no serious harm resulted from the incident. Yet not all officials agreed with this conclusion. The internal memo describing the incident argued that personally identifiable information, such as details connected to U.S. citizens, had been exposed and that the impact might have been greater than DHS initially suggested. The document recommended additional training for staff to ensure stronger protection of personal data.

Privacy experts point out that the incident raises wider concerns about domestic surveillance practices. When government agencies collect and store intelligence on Americans, even unclassified data, errors in handling it can create risks for both national security and individual privacy. Critics argue that such leaks highlight the need for stronger oversight and accountability, especially as legislative efforts to reform DHS’s intelligence powers continue in Congress.

Although DHS maintains that the exposure was contained and promptly resolved, the episode underlines how technical flaws in sensitive systems can have unintended consequences. When security tools are misconfigured, information meant for a limited circle of analysts can spread far beyond its intended audience. For citizens and policymakers alike, the event is a reminder of the delicate balance between gathering intelligence to protect the country and ensuring that privacy and civil liberties are not compromised in the process.



Hackers Breach French Military Systems, Leak 30GB of Classified Data

 




A hacker group has claimed responsibility for a cyberattack targeting France’s state-owned Naval Group, one of the country’s most important military shipbuilders. The attackers say they have already released 30 gigabytes of information and are threatening to publish more, claiming the stolen files include highly sensitive military details.

Naval Group designs and builds advanced naval vessels, including France’s nuclear-powered Suffren-class submarines and the nation’s only aircraft carrier, the Charles de Gaulle. The company plays a key role in France’s defense capabilities and is a major supplier to NATO allies.

According to the hackers’ statement on a dark web platform, the stolen material includes information on submarines, frigates, and possibly source code for submarine weapon systems. They allege they hold as much as one terabyte of data and have given the company 72 hours to confirm the breach.

Naval Group has rejected the claim that its internal networks were hacked. In a statement, the company said it “immediately launched technical investigations” after the material appeared online and described the incident as a “reputational attack”— suggesting the goal may be to damage the company’s public image rather than disrupt operations. The firm stressed that so far, there is no evidence of unauthorized access to its systems or any impact on its activities.

The leaked 30GB of files, if authentic, could contain sensitive information related to France’s nuclear submarine program, which is central to the country’s national security strategy. Naval Group, which is nearly two-thirds owned by the French government, employs over 15,000 people and generates annual revenues exceeding €4.4 billion.

Cybersecurity experts note that military contractors worldwide have increasingly become targets for cyberattacks, as they store valuable data on defense technology. The case comes shortly after other high-profile breaches, including Microsoft’s confirmation that certain vulnerabilities in its SharePoint servers remained exploitable, and an intrusion at the U.S. National Nuclear Security Administration, which oversees America’s nuclear arsenal.

Naval Group says all of its technical and security teams are currently working to confirm the authenticity, origin, and ownership of the published data. Investigations are ongoing, and French authorities are expected to monitor the situation closely.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

iHeartMedia Cyberattack Exposes Sensitive Data Across Multiple Radio Stations

 

iHeartMedia, the largest audio media company in the United States, has confirmed a significant data breach following a cyberattack on several of its local radio stations. In official breach notifications sent to affected individuals and state attorney general offices in Maine, Massachusetts, and California, the company disclosed that cybercriminals accessed sensitive customer information between December 24 and December 27, 2024. Although iHeartMedia did not specify how many individuals were affected, the breach appears to have involved data stored on systems at a “small number” of stations. 

The exact number of compromised stations remains undisclosed. With a network of 870 radio stations and a reported monthly audience of 250 million listeners, the potential scope of this breach is concerning. According to the breach notification letters, the attackers “viewed and obtained” various types of personal information. The compromised data includes full names, passport numbers, other government-issued identification numbers, dates of birth, financial account information, payment card data, and even health and health insurance records. 

Such a comprehensive data set makes the victims vulnerable to a wide array of cybercrimes, from identity theft to financial fraud. The combination of personal identifiers and health or insurance details increases the likelihood of victims being targeted by tailored phishing campaigns. With access to passport numbers and financial records, cybercriminals can attempt identity theft or engage in unauthorized transactions and wire fraud. As of now, the stolen data has not surfaced on dark web marketplaces, but the risk remains high. 

No cybercrime group has claimed responsibility for the breach so far. However, the level of detail and sensitivity of the data accessed suggests the attackers had a specific objective and carried out the breach with precision.

In response, iHeartMedia is offering one year of complimentary identity theft protection services to impacted individuals. The company has also established a dedicated hotline for those seeking assistance or more information. While these actions are intended to mitigate potential fallout, they may offer limited relief given the nature of the exposed information. 

This incident underscores the increasing frequency and severity of cyberattacks on media organizations and the urgent need for enhanced cybersecurity protocols. For iHeartMedia, transparency and timely support for affected customers will be key in managing the aftermath of this breach. 

As investigations continue, more details may emerge regarding the extent of the compromise and the identity of those behind the attack.

Cyberattacks Hit U.S. Healthcare Firms, Exposing Data of Over 236,000 People

 


Two separate data breaches in the U.S. have exposed sensitive information of more than 236,000 people. These incidents involve two organizations: Endue Software in New York and Medical Express Ambulance (MedEx) in Illinois.

Endue Software creates software used by infusion centers, which help treat patients with medication delivered directly into their bloodstream. In February this year, the company found that hackers had broken into its system. This breach led to the exposure of personal details of around 118,000 individuals. The leaked information included full names, birth dates, Social Security numbers, and unique medical record identifiers. While there is currently no proof that the stolen data has been used illegally, the company isn’t taking any chances. It has added more safety tools and measures to its systems. It is also offering one year of free credit monitoring and identity protection to help affected people stay safe from fraud.

In a different case, MedEx, a private ambulance service provider based in Illinois, reported that it was also hit by a cyberattack. This breach happened last year, but the details have recently come to light. Information belonging to more than 118,000 people was accessed by attackers. The data included health records, insurance information, and even passport numbers in some cases.

These events are part of a larger pattern of cyberattacks targeting the healthcare industry in the U.S. In recent months, major organizations like UnitedHealth Group and Ascension Health have also suffered large-scale data breaches. Cybercriminals often go after hospitals and medical companies because the data they store is very valuable and can be used for scams or identity theft.

Both Endue and MedEx are working with cybersecurity experts to investigate the breaches and improve their systems. People affected by these incidents are being advised to be extra cautious. They should use the free protection services, monitor their bank and credit accounts, and immediately report anything unusual.



Landmark Admin Hack: Massive Data Leak Hits 1.6 Million Americans

 



Landmark Admin, a company based in Texas that works with insurance firms across the country, has shared new details about a cyberattack it suffered last year. According to the latest update, the number of people whose personal data may have been accessed has now reached more than 1.6 million.


How It Started

In May 2024, Landmark noticed something suspicious on its computer network. After looking into the issue, it found out that hackers had broken in and accessed files containing sensitive details of many individuals.

At first, the company believed the attack had affected around 806,000 people. However, in a recent filing with the Maine Attorney General’s Office, Landmark revealed that the total number of impacted people is now estimated at 1,613,773. They also said that this number might change again as the investigation continues.


What Information Was Stolen?

The hackers were able to get their hands on private data. This could include a person’s name, home address, Social Security number, or details from their passport or driver’s license. Some people’s financial information, health records, and insurance policy numbers may also have been exposed.

Not everyone had the same information stolen. The company has promised to send each affected person a letter that clearly mentions which of their details were accessed in the attack.


What Is Being Done to Help?

Landmark is still reviewing the situation with cybersecurity experts. They are in the process of informing everyone who may have been affected. People who get a notice from Landmark will also receive 12 months of free credit monitoring and identity theft protection to reduce the chances of further harm.

Those affected are encouraged to keep an eye on their credit activity. They may also consider placing a fraud alert or even freezing their credit to stay protected from possible misuse.

The full extent of the breach is still being investigated, which means the number of victims may grow. In the meantime, people are advised to stay alert, review their financial statements, and take steps to protect their identities.