Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.
Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.
This situation is similar to how people once questioned why simple mobile apps like flashlight or calculator apps — needed access to personal data such as location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.
One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.
Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.
Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is, AI can make mistakes and when that happens, real humans at these companies might look through your private information to figure out what went wrong.
So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.
Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.
In this modern-day digital world, companies are under constant pressure to keep their networks secure. Traditionally, encryption systems were deeply built into applications and devices, making them hard to change or update. When a flaw was found, either in the encryption method itself or because hackers became smarter, fixing it took time, effort, and risk. Most companies chose to live with the risk because they didn’t have an easy way to fix the problem or even fully understand where it existed.
Now, with data moving across various platforms, for instance cloud servers, edge devices, and personal gadgets — it’s no longer practical to depend on rigid security setups. Businesses need flexible systems that can quickly respond to new threats, government rules, and technological changes.
According to the IBM X‑Force 2025 Threat Intelligence Index, nearly one-third (30 %) of all intrusions in 2024 began with valid account credential abuse, making identity theft a top pathway for attackers.
This is where policy-driven cryptography comes in.
What Is Policy-Driven Crypto Agility?
It means building systems where encryption tools and rules can be easily updated or swapped out based on pre-defined policies, rather than making changes manually in every application or device. Think of it like setting rules in a central dashboard: when updates are needed, the changes apply across the network with a few clicks.
This method helps businesses react quickly to new security threats without affecting ongoing services. It also supports easier compliance with laws like GDPR, HIPAA, or PCI DSS, as rules can be built directly into the system and leave behind an audit trail for review.
Why Is This Important Today?
Artificial intelligence is making cyber threats more powerful. AI tools can now scan massive amounts of encrypted data, detect patterns, and even speed up the process of cracking codes. At the same time, quantum computing; a new kind of computing still in development, may soon be able to break the encryption methods we rely on today.
If organizations start preparing now by using policy-based encryption systems, they’ll be better positioned to add future-proof encryption methods like post-quantum cryptography without having to rebuild everything from scratch.
How Can Organizations Start?
To make this work, businesses need a strong key management system: one that handles the creation, rotation, and deactivation of encryption keys. On top of that, there must be a smart control layer that reads the rules (policies) and makes changes across the network automatically.
Policies should reflect real needs, such as what kind of data is being protected, where it’s going, and what device is using it. Teams across IT, security, and compliance must work together to keep these rules updated. Developers and staff should also be trained to understand how the system works.
As more companies shift toward cloud-based networks and edge computing, policy-driven cryptography offers a smarter, faster, and safer way to manage security. It reduces the chance of human error, keeps up with fast-moving threats, and ensures compliance with strict data regulations.
In a time when hackers use AI and quantum computing is fast approaching, flexible and policy-based encryption may be the key to keeping tomorrow’s networks safe.
Regardless of the web browser we use, criminals are always on the hunt. Threat actors generally prefer malicious extensions or add-ons; therefore, browser vendors like Mozilla offer background protections and public support to minimize these threats as much as possible. Despite such a measure, on July 4th, the Socket Threat Research Team's report revealed that threat actors are still targeting Firefox users.
According to Kush Pandya, security engineer at Socket Threat Research Team, said that while the “investigation focuses on Firefox extensions, these threats span the entire browser ecosystem.” However, the particular Firefox investigation revealed a total of eight potentially harmful extensions, including user session hijacking to earn commissions on websites, redirection to scam sites, surveillance via an invisible iframe tracking method, and the most serious: authentication theft.
Users are advised to read the technical details of the extensions. According to Forbes, Mozilla is taking positive action to protect Firefox users from such threats. The company has taken care of the extensions mentioned in the report. According to Mozilla, the malicious extension impacted a very small number of users; some of the extensions have been shut down.
“We help users customize their browsing experience by featuring a variety of add-ons, manually reviewed by our Firefox Add-ons team, on our Recommended Extensions page,” said a Firefox spokesperson. To protect the users, Mozilla has disabled “extensions that compromise their safety or privacy, or violate its policies, and continuously works to improve its malicious add-on detection tools and processes.”
To protect against these threats, Mozilla has advised users to Firefox users to take further steps, cautioning that such extensions are made by third parties. Users should check the extension rating and reviews, and be extra careful of extensions that need excessive permissions that are not compatible with what the extension claims to do. If any extension seems to be malicious, “users should report it for review,” a Firefox spokesperson said.
Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.
Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.
Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.
What exactly is Warmwind?
Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.
Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.
The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.
Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.
Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.
Jena, the company behind Warmwind, says calling it an “AI operating system” is symbolic. The name helps people understand the concept quickly, it’s an operating system, not for people, but for digital AI workers.
While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.
Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments.
Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts.
A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white.
According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”
Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.
Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.
Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.
According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”
As artificial intelligence becomes more common in businesses, from retail to finance to technology— it’s helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to, especially when AI mixes information from many different places?
Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.
Why Traditional Access Rules Don’t Work for AI
In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources, internal files, external APIs, sensor feeds, and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.
Why It Matters
Security Concerns: If sensitive data ends up in the wrong hands even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.
Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.
Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.
What’s Making This So Difficult?
1. AI systems often blend data so deeply that it’s hard to tell what came from where.
2. Access rules are usually fixed, but AI relies on fast-changing data.
3. Companies have many users with different roles and permissions, making enforcement complicated.
4. Permissions are often too broad, for example, someone allowed to "view reports" might accidentally access sensitive content.
How Can Businesses Fix This?
• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.
• Flexible Access Rules: Adjust permissions based on user roles and context.
• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.
• Separate Models: Train different AI models for different user groups, each with its own safe data.
• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.
The US Cybersecurity & Infrastructure Security Agency (CISA) confirms active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777 in Citrix NetScaler ADC and Gateway. It has given federal parties one day to patch the bugs. This unrealistic deadline for deploying the patches is the first since CISA issued the Known Exploited Vulnerabilities (KEV) catalog, highlighting the severity of attacks abusing the security gaps.
CVE-2025-5777 is a critical memory safety bug (out-of-bounds memory read) that gives hackers unauthorized access to restricted memory parts. The flaw affects NetScaler devices that are configured as an AAA virtual server or a Gateway. Citrix patched the vulnerabilities via the June 17 updates.
After that, expert Kevin Beaumont alerted about the flaw’s capability for exploitation if left unaddressed, terming the bug as ‘CitrixBleed 2’ because it shared similarities with the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.
According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”
During that time, experts could not spot the signs of active exploitation. Soon, the threat actors started to exploit the bug on a larger scale, and after the attack, they became active on hacker forums, “discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability,” according to Bleeping Computers.
Hackers showed interest in how to use the available exploits in attacks effectively. The hackers have become more active, and various exploits for the bug have been published.
Now that CISA has confirmed the widespread exploitation of CitrixBleed 2 in attacks, threat actors may have developed their exploits based on the recently released technical information. CISA has suggested to “apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”
Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. A few records date back to 2018. The leaked database also revealed the identity of the Catwatchful admin called Omar Soca Char.
The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.
Catwatchful is a spyware that pretends to be a child monitoring app, claiming to be “invisible and can not be detected,” while it uploads the victim’s data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location data, victims’ photos, and messages. The app can also track live ambient audio from the device’s mic and access the phone camera (both front and rear).
Catwatchful and similar apps are banned on app stores, and depend on being downloaded and deployed by someone having physical access to a victim’s phone. These apps are famous as “stalkerware” or “spouseware” as they are capable of unauthorized and illegal non-consensual surveillance of romantic partners and spouses.
The Catwatchful incident is the fifth and latest in this year’s growing list of stalkerware scams that have been breached, hacked, or had their data exposed.
Daigle has previously discovered stalkerware exploits. Catwatchful uses a custom-made API, which the planted app uses to communicate to send data back to Catwatchful servers. The stalkerware also uses Google Firebase to host and store stolen data.
According to Techradar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."
Microsoft and Google users, in particular, have been warned about ditching passwords for passkeys. Passwords are easy to steal and can unlock your digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users. Google, too, has warned that most of its users will have to add passkeys to their accounts.
Instead of a username and password, passkeys use our device security to log into our account. This means that there is no password to hack and no two-factor authentication codes to bypass, making it phishing-resistant.
At the same time, the Okta team warned that it found threat actors exploiting v0, an advanced GenAI tool made by Vercelopens, to create phishing websites that mimic real sign-in webpages
A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes.
According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”
It is shocking how easy a login webpage can be mimicked. Users should not be surprised that today’s cyber criminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.
According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”
Users are advised to add passkeys to their accounts if available and stop using passwords when signing in to their accounts. Users should also ensure that if they use passwords, they should be long and unique, and not backed up by SMS 2-factor authentication.