Inside a government building in Rome, located opposite the ancient Aurelian Walls, dozens of cybersecurity professionals have been carrying out continuous monitoring operations for nearly a year. Their work focuses on tracking suspicious discussions and coordination activity taking place across hidden corners of the internet, including underground criminal forums and dark web marketplaces. This monitoring effort forms a core part of Italy’s preparations to protect the Milano–Cortina Winter Olympic Games from cyberattacks.
The responsibility for securing the digital environment of the Games lies with Italy’s National Cybersecurity Agency, an institution formed in 2021 to centralize the country’s cyber defense strategy. The upcoming Winter Olympics represent the agency’s first large-scale international operational test. Officials view the event as a likely target for cyber threats because the Olympics attract intense global attention. Such visibility can draw a wide spectrum of malicious actors, ranging from small-scale cybercriminal groups seeking disruption or financial gain to advanced threat groups believed to have links with state interests. These actors may attempt to use the event as a platform to make political statements, associate attacks with ideological causes, or exploit broader geopolitical tensions.
The Milano–Cortina Winter Games will run from February 6 to February 22 and will be hosted across multiple Alpine regions for the first time in Olympic history. This multi-location format introduces additional security and coordination challenges. Each venue relies on interconnected digital systems, including communications networks, event management platforms, broadcasting infrastructure, and logistics systems. Securing a geographically distributed digital environment significantly increases the complexity of monitoring, response coordination, and incident containment.
Officials estimate that the Games will reach approximately three billion viewers globally, alongside around 1.5 million ticket-holding spectators on site. This scale creates a vast digital footprint. High-visibility services, such as live streaming platforms, official event websites, and ticket purchasing systems, are considered particularly attractive targets. Disrupting these services can generate widespread media attention, cause public confusion, and undermine confidence in the organizers’ ability to safeguard critical digital operations.
Italy’s planning has been shaped by recent Olympic experience. During the 2024 Paris Summer Olympics, authorities recorded more than 140 cyber incidents. In 22 cases, attackers managed to gain access to information systems. While none of these incidents disrupted the competitions themselves, the sheer volume of hostile activity demonstrated the persistent pressure faced by host nations. On the day of the opening ceremony in Paris, France’s TGV high-speed rail network was also hit by coordinated physical sabotage, with arson attacks targeting signalling and cable infrastructure. This incident illustrated how large global events can attract both cyber threats and physical security risks at the same time.
Italian cybersecurity officials anticipate comparable levels of hostile activity during the Milano–Cortina Games, with an additional layer of complexity introduced by artificial intelligence. AI tools can be used by attackers to automate technical tasks, enhance reconnaissance, and support more convincing phishing and impersonation campaigns. These techniques can increase the speed and scale of cyber operations while making malicious activity harder to detect. Although authorities currently report no specific, elevated threat level, they acknowledge that the overall risk environment is becoming more complex due to the growing availability of AI-assisted tools.
The National Cybersecurity Agency’s defensive approach emphasizes early detection rather than reactive response. Analysts continuously monitor open websites, underground criminal communities, and social media channels to identify emerging threat patterns before they develop into direct intrusion attempts. This method is designed to provide early warning, allowing technical teams to strengthen defenses before attackers move from planning to execution.
Operational coordination will involve multiple teams. Around 20 specialists from the agency’s operational staff will focus exclusively on Olympic-related cyber intelligence from the headquarters in Rome. An additional 10 senior experts will be deployed to Milan starting on February 4 to support the Technology Operations Centre, which oversees the digital systems supporting the Games. These government teams will operate alongside nearly 100 specialists from Deloitte and approximately 300 personnel from the local organizing committee and technology partners. Together, these groups will manage cybersecurity monitoring, incident response, and system resilience across all Olympic venues.
Should threats evolve during the Games, the agency will continuously feed intelligence to technical operations teams to support rapid decision-making. The guiding objective remains consistent: detect emerging risks early, interpret threat signals accurately, and respond quickly and effectively when specific dangers become visible. This approach reflects Italy’s broader strategy to protect the digital infrastructure that underpins one of the world’s most prominent international sporting events.
The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.
In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.
To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.
According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.
Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.
Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.
CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.
By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.
Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.
The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases illustrate how poor internal controls and the misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.
While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.
The approach to cybersecurity in 2026 will be shaped not only by technological innovation but also by how deeply digital systems are embedded in everyday life. As cloud services, artificial intelligence tools, connected devices, and online communication platforms become routine, they also expand the surface area for cyber exploitation.
Cyber threats are no longer limited to technical breaches behind the scenes. They increasingly influence what people believe, how they behave online, and which systems they trust. While some risks are still emerging, others are already circulating quietly through commonly used apps, services, and platforms, often without users realizing it.
One major concern is the growing concentration of internet infrastructure. A substantial portion of websites and digital services now depend on a limited number of cloud providers, content delivery systems, and workplace tools. This level of uniformity makes the internet more efficient but also more fragile. When many platforms rely on the same backbone, a single disruption, vulnerability, or attack can trigger widespread consequences across millions of users at once. What was once a diverse digital ecosystem has gradually shifted toward standardization, making large-scale failures easier to exploit.
Another escalating risk is the spread of misleading narratives about online safety. Across social media platforms, discussion forums, and live-streaming environments, basic cybersecurity practices are increasingly mocked or dismissed. Advice related to privacy protection, secure passwords, or cautious digital behavior is often portrayed as unnecessary or exaggerated. This cultural shift creates ideal conditions for cybercrime. When users are encouraged to ignore protective habits, attackers face less resistance. In some cases, misleading content is actively promoted to weaken public awareness and normalize risky behavior.
Artificial intelligence is further accelerating cyber threats. AI-driven tools now allow attackers to automate tasks that once required advanced expertise, including scanning for vulnerabilities and crafting convincing phishing messages. At the same time, many users store sensitive conversations and information within browsers or AI-powered tools, often unaware that this data may be accessible to malware. As automated systems evolve, cyberattacks are becoming faster, more adaptive, and more difficult to detect or interrupt.
Trust itself has become a central target. Technologies such as voice cloning, deepfake media, and synthetic digital identities enable criminals to impersonate real individuals or create believable fake personas. These identities can bypass verification systems, open accounts, and commit fraud over long periods before being detected. As a result, confidence in digital interactions, platforms, and identity checks continues to decline.
Future computing capabilities are already influencing present-day cyber strategies. Even though advanced quantum-based attacks are not yet practical, some threat actors are collecting encrypted data now with the intention of decrypting it later. This approach puts long-term personal, financial, and institutional data at risk and underlines the need for stronger, future-ready security planning.
As digital and physical systems become increasingly interconnected, cybersecurity in 2026 will extend beyond software and hardware defenses. It will require stronger digital awareness, better judgment, and a broader understanding of how technology shapes risk in everyday life.
The more we share online, the easier it becomes for attackers to piece together our personal lives. Photos, location tags, daily routines, workplace details, and even casual posts can be combined to create a fairly accurate picture of who we are. Cybercriminals use this information to imitate victims, trick service providers, and craft convincing scams that look genuine. When someone can guess where you spend your time or what services you rely on, they can more easily pretend to be you and manipulate systems meant to protect you. Reducing what you post publicly is one of the simplest steps to lower this risk.
Weak passwords add another layer of vulnerability, but a recent industry assessment has shown that the problem is not only with users. Many of the most visited websites do not enforce strong password requirements. Some platforms do not require long passwords, special characters, or a mix of upper- and lowercase letters. This leaves accounts easier to break into through automated attacks. Experts recommend that websites adopt stronger password rules, introduce passkey options, and guide users with clear indicators of password strength. Users can improve their own security by relying on password managers, creating long unique passwords, and enabling two-factor authentication wherever possible.
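The kinds of baseline checks described above can be sketched as a simple validator. This is an illustrative sketch only; the thresholds and character-class rules here are assumptions, not the criteria used in the assessment:

```python
import re

def password_issues(password: str, min_length: int = 12) -> list[str]:
    """Return a list of weaknesses; an empty list means the password passes these checks."""
    issues = []
    if len(password) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        issues.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        issues.append("no digit")
    if not re.search(r"[^a-zA-Z0-9]", password):
        issues.append("no special character")
    return issues

print(password_issues("hunter2"))           # flags: too short, no uppercase, no special character
print(password_issues("Tr0ub4dor&Horse!"))  # passes all of these checks
```

Rules like these catch only the weakest passwords; length and uniqueness matter more than forced character variety, which is one reason experts steer users toward password managers and passkeys.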
Concerns about device security are also increasing. Several governments have begun reviewing whether certain networking devices introduce national security risks, especially when the manufacturers are headquartered in countries that have laws allowing state access to data. These investigations have sparked debates over how consumer hardware is produced, how data flows through global supply chains, and whether companies can guarantee independence from government requests. For everyday users, this tension means it is important to select routers and other digital devices that receive regular software updates, publish clear security policies, and have a history of addressing vulnerabilities quickly.
Another rising threat is ransomware. Criminal groups continue to target both individuals and large organizations, encrypting data and demanding payment for recovery. Recent cases involving individuals with cybersecurity backgrounds show how profitable illicit markets can attract even trained professionals. Because attackers now operate with high levels of organization, users and businesses should maintain offline backups, restrict access within internal networks, and test their response plans in advance.
Privacy concerns are also emerging in the travel sector, where airline data practices are drawing scrutiny. Travel companies cannot directly sell passenger information to government programs due to legal restrictions, so several airlines jointly rely on an intermediary that acts as a broker. Reports show that this broker had been distributing data for years but only recently registered itself as a data broker, as legally required. Users can request removal from this data-sharing system by emailing the broker’s privacy address and completing identity verification, and they should keep a copy of all correspondence and confirmations for reference.
Finally, several governments are exploring digital identity systems that would allow residents to store official identification on their phones. Although convenient, this approach raises significant privacy risks. Digital IDs place sensitive information in one central location, and if the surrounding protections are weak, the data could be misused for tracking or monitoring. Strong legal safeguards, transparent data handling rules, and external audits are essential before such systems are implemented.
Experts warn that centralizing identity increases the potential impact of a breach and may facilitate tracking unless strict limits, independent audits, and user controls are enforced. Policymakers must balance convenience with strong technical and legal protections.
Practical steps to take now:
1. Reduce public posts that reveal routines or precise locations.
2. Use a password manager and unique, long passwords.
3. Turn on two-factor authentication for important accounts.
4. Maintain offline backups and test recovery procedures.
5. Check privacy policies of travel brokers and submit opt-out requests if you want to limit data sharing.
6. Prefer devices with clear update policies and documented security practices.
These measures lower the chance that routine online activity becomes a direct route into your accounts or identity. Small, consistent changes will greatly reduce risk.
Overall, users can strengthen their protection by sharing less online, reviewing how their travel data is handled, and staying informed about the implications of digital identification.