
ICC Investigates Russian Cyberattacks on Ukraine as War Crimes

The International Criminal Court (ICC) is conducting an unprecedented investigation into alleged Russian cyberattacks on Ukrainian civilian infrastructure, considering them possible war crimes. This marks the first time international prosecutors have investigated cyber warfare, and the probe could lead to arrest warrants if sufficient evidence is gathered.

Prosecutors are examining cyberattacks on infrastructure that jeopardised lives by disrupting power and water supplies, cutting connections to emergency responders, or knocking out mobile data services that transmit air raid warnings. An official familiar with the case, who requested anonymity, confirmed the ICC's focus on cyberattacks since the onset of Russia’s full-scale invasion in February 2022. Additionally, sources close to the ICC prosecutor's office indicated that the investigation might extend back to 2015, following Russia's annexation of Crimea.

Ukraine is actively collaborating with ICC prosecutors, collecting evidence to support the investigation. While the ICC prosecutor's office has declined to comment on ongoing investigations, it has previously stated its jurisdiction to probe cybercrimes. The investigation could set a significant legal precedent, clarifying the application of international humanitarian law to cyber warfare.

Among the cyberattacks being investigated, at least four major attacks on energy infrastructure stand out. Sources identified the hacker group "Sandworm," believed to be linked to Russian military intelligence, as a primary suspect. Sandworm has been implicated in several high-profile cyberattacks, including a 2015 attack on Ukraine's power grid. Additionally, the hacktivist group "Solntsepyok," allegedly a front for Sandworm, claimed responsibility for a December 2023 attack on the Ukrainian mobile provider Kyivstar.

The investigation raises questions about whether cyberattacks can constitute war crimes under international law. The Geneva Conventions prohibit attacks on civilian objects, but there is no universally accepted definition of cyber war crimes. Legal scholars, through the Tallinn Manual, have attempted to outline the application of international law to cyber operations. Experts argue that the foreseeable consequences of cyberattacks, such as endangering civilian lives, could meet the criteria for war crimes.

If the ICC prosecutes these cyberattacks as war crimes, it would provide much-needed clarity on the legal status of cyber warfare. Professor Michael Schmitt of the University of Reading, a key figure in the Tallinn Manual process, believes that attacks like the one on Kyivstar meet the criteria for war crimes due to their foreseeable impact on human lives. Ukraine’s intelligence agency, the SBU, has provided detailed information about the incident to ICC investigators.

Russia, which is not an ICC member, has dismissed accusations of cyberattacks as attempts to incite anti-Russian sentiment. Despite this, the ICC has issued four arrest warrants against senior Russian figures since the invasion began, including President Vladimir Putin. Ukraine, while not an ICC member, has granted the court jurisdiction to prosecute crimes on its territory.

The ICC's probe into Russian cyberattacks on Ukrainian infrastructure could redefine the boundaries of international law in cyberspace. As the investigation unfolds, it may establish a precedent for holding perpetrators of cyber warfare accountable under international humanitarian law.


What are the Legal Implications and Risks of Generative AI?


In the ever-evolving AI landscape, keeping up with changing regulations and securing data privacy has become a new challenge. Even as AI makes people more efficient, it should augment humans rather than replace them, especially while global standards for the technology are still taking shape.

Unchecked generative AI carries certain risks because of the vast amounts of information it may hold. Companies risk disclosing their valuable assets when they feed private, sensitive data into open AI models. To reduce this danger, some businesses localize AI models on their own systems and train them on their confidential data; such a strategy, however, requires a well-organized data architecture to deliver the best outcomes.
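One common safeguard against feeding sensitive data into external models is to scrub obvious personal information from prompts before they leave the organization. The sketch below is a minimal, hypothetical illustration of that idea; the `redact` function and its two regex patterns are illustrative assumptions, not an exhaustive PII filter or any vendor's actual API.

```python
import re

# Hypothetical example: scrub obvious PII from a prompt before it is
# sent to an external generative AI service. The patterns below are
# illustrative only; a production filter would cover far more cases.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact(prompt))
```

A real deployment would pair such filtering with the access logging and data-transaction records discussed below, so that any prompt leaving the organization can be audited later.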

Risks of Unchecked Generative AI

The appeal of generative AI and Large Language Models (LLMs) lies in their ability to synthesize information and produce fresh ideas, but these capabilities also carry inherent risks. If not carefully handled, generative AI can unintentionally lead to issues such as:

Personal Data Security 

AI systems must handle personal data with the utmost care, especially sensitive or special category personal data. The growing integration of marketing and consumer data into LLMs raises concerns about unintentional data leaks that could result in data privacy violations.

Contractual Violations 

Using consumer data in AI systems can breach the contracts under which that data was collected, with serious legal repercussions. As companies adopt AI, they must navigate this terrain carefully to ensure they uphold their contractual commitments.

Customer Transparency and Disclosure 

Current and proposed AI regulations center on clear, transparent disclosure of AI technology. For instance, a business must disclose whether a person or an AI is handling a customer's engagement with a chatbot on a support website. Maintaining trust and upholding ethical standards depend on adherence to such requirements.

Legal Challenges and Risks for Businesses 

Recent legal actions against prominent AI companies highlight the importance of handling data responsibly. AI giants have become the main targets of lawsuits over their use of copyrighted data to build and train their models: recent class action suits filed in the Northern District of California, including one on behalf of authors and another on behalf of affected users, allege copyright infringement, consumer protection violations, and breaches of data protection legislation. These cases underscore the need for strict data governance and transparency, and suggest that disclosing the sources of AI training data may become a legal requirement.

Moreover, businesses that rely heavily on AI models face serious risks, not just AI developers like OpenAI. Improper AI model training can taint entire products: after the Federal Trade Commission (FTC) accused Everalbum of misleading consumers about its use of facial recognition technology and its data retention practices, the company was ordered to destroy the improperly gathered data and the AI models built from it. Everalbum shut down its app in 2020.

How to Mitigate AI Risks? 

Despite the legal challenges, CEOs are under pressure to adopt generative AI if they wish to increase their businesses' productivity. By using the frameworks and legislation currently in place, businesses can establish best practices and prepare for new requirements. Existing data protection regulations already cover AI systems through provisions requiring transparency, notice, and the protection of individual privacy rights. Some of these best practices involve:

  • Transparency and Documentation: Businesses should clearly disclose their use of AI and document its logic, applications, and potential impacts on data subjects. They should also keep records of data transactions and detailed logs of confidential information to maintain proper governance and data security.
  • Localizing AI Models: Localizing models internally and training them on pertinent, organization-specific data can lower data security risks and boost efficiency.
  • Discovering and Connecting: Companies must utilize generative AI to unveil new perspectives and create unexpected connections across different departments and information silos.
  • Preserving the Human Element: Generative AI should improve human performance rather than replace it entirely. Human monitoring, review of critical decisions, and verification of AI-created content are essential to reduce model biases and data inaccuracies.
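The last practice above, keeping a human in the loop, can be sketched in a few lines. This is a hypothetical illustration, not a production workflow: the `ReviewQueue` class and the sample approval rule are invented for this example, under the assumption that AI output is held for human review before release.

```python
from dataclasses import dataclass, field

# Hypothetical human-in-the-loop gate: AI output is never published
# directly; it is queued for a human reviewer, and only approved
# items are released. All names here are illustrative.

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, ai_output: str) -> None:
        """Queue AI-generated content for human verification."""
        self.pending.append(ai_output)

    def review(self, reviewer_approves) -> None:
        """Apply a human reviewer's decision to each pending item;
        rejected items are simply dropped from the queue."""
        for item in self.pending:
            if reviewer_approves(item):
                self.approved.append(item)
        self.pending.clear()

queue = ReviewQueue()
queue.submit("We guarantee results.")
queue.submit("Here is a summary of your account activity.")
# Stand-in for a human decision: block drafts that make risky claims.
queue.review(lambda text: "guarantee" not in text.lower())
print(queue.approved)
```

The design point is simply that publication and generation are decoupled: the model can only propose, while release requires an explicit approval step that a person (or a person-reviewed policy) controls.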