
Decoding the Digital Mind: EU's Blueprint for AI Regulation

 


In a milestone for the world's first comprehensive artificial intelligence regulation, the European Union has reached a preliminary agreement that restricts how advanced models such as ChatGPT and other powerful AI technologies may operate.

According to a document obtained by Bloomberg, the agreement outlines basic transparency requirements for developers of general-purpose AI systems: powerful models that can be applied to a wide range of purposes. Models made available free and open-source are exempt from these requirements.

As artificial intelligence becomes more widespread, it will touch almost every aspect of our lives. Commercial enterprises stand to benefit enormously from the technology, but it also carries significant risks.

Warnings about these risks have come even from Sam Altman, chief executive of OpenAI, the company behind ChatGPT. Some scientists have gone further, suggesting that if artificial intelligence develops capabilities beyond human control, it could pose a threat to our very existence.

The provisional agreement marks a significant milestone in establishing the world's first comprehensive artificial intelligence (AI) regulation. It limits the operation of cutting-edge AI models such as ChatGPT, among the most advanced systems available today.

The transparency criteria outlined in the Bloomberg report are directed at developers of general-purpose AI systems, which are characterized by their versatility across applications and their ability to function effectively in a wide range of situations.

Notably, these requirements do not apply to free and open-source models. To comply, developers must establish an acceptable-use policy, maintain up-to-date information on their model training methodology, submit a detailed summary of the data used in training, and adopt a policy that respects copyright.

Models determined to present a "systemic risk," a designation based on the amount of computing power used during their training, are subject to escalating obligations. The threshold is set at 10^25 floating-point operations (ten septillion), and experts highlight OpenAI's GPT-4 as the only model that automatically meets this criterion.
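As a rough illustration of how such a compute-based criterion might be checked, here is a minimal sketch in Python. The 10^25 threshold reflects the draft Act as reported above; the 6 × parameters × tokens approximation for transformer training compute and the example model and dataset sizes are assumptions for illustration only.

```python
# Minimal sketch of the training-compute criterion described above.
# ASSUMPTIONS: the 6 * N * D heuristic for transformer training FLOPs
# and the example model/dataset sizes are illustrative, not from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # ten septillion operations, per the draft Act

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute with the common 6*N*D rule of thumb."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 1e13)
print(f"{flops:.1e} FLOPs -> systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```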

The EU's executive arm can designate other models based on factors such as the size of the training dataset, the number of EU business users, and the number of registered end users. Until the European Commission puts more cohesive and enduring controls in place, highly capable models can demonstrate compliance with the AI Act by committing to a code of conduct; developers that do not sign the code must prove their compliance to the Commission directly.

Notably, models posing systemic risks are not covered by the open-source exemption. Such models carry a number of additional obligations: disclosing energy consumption, undergoing internal or external adversarial testing, evaluating and mitigating systemic risks, reporting incidents, implementing cybersecurity controls, disclosing the information used to fine-tune the model, and adhering to energy-efficiency standards as they are introduced.

Current AI Models Have Several Shortcomings


Current artificial intelligence models have several critical problems that make comprehensive regulation more necessary than ever:

A lack of transparency and explainability can undermine trust, especially in critical applications such as healthcare or justice. These systems also handle large volumes of personal data, and misuse or breaches of that data can have severe consequences, making it crucial to keep it safe and secure.

Reliability and safety are equally difficult to guarantee: AI systems can make errors or be manipulated, for example through adversarial attacks intentionally designed to mislead a model. It is therefore important to ensure that AI systems are robust and can operate safely even when faced with unexpected situations or data.
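To make the manipulation risk concrete, the sketch below shows one well-known adversarial attack, the fast gradient sign method (FGSM), using PyTorch. The toy linear model, inputs, and epsilon value are hypothetical placeholders; a real attack would target a trained production model.

```python
# A minimal FGSM sketch: nudge an input in the direction that increases the
# model's loss, so the (hypothetical) classifier is more likely to mislabel it.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step of size epsilon along the sign of the input gradient.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage with a toy classifier.
model = nn.Linear(4, 3)
x = torch.randn(1, 4)
label = torch.tensor([0])
x_adversarial = fgsm_attack(model, x, label)
```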

Large AI models also carry a significant environmental impact, driven by the computing required to train and operate them. As demand for more powerful AI grows, so does the energy needed to power it.
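To see the scale involved, here is a back-of-the-envelope energy estimate; the accelerator count, average power draw, training duration, and data-centre efficiency factor below are all hypothetical assumptions, not measured figures for any real model.

```python
# Rough training-energy estimate. ASSUMPTIONS: all input figures are
# hypothetical; PUE (power usage effectiveness) models data-centre overhead.

def training_energy_mwh(num_accelerators: int, avg_watts_each: float,
                        hours: float, pue: float = 1.2) -> float:
    """Total energy in megawatt-hours, including data-centre overhead."""
    return num_accelerators * avg_watts_each * hours * pue / 1e6

# Hypothetical run: 10,000 accelerators averaging 400 W for 30 days.
print(f"{training_energy_mwh(10_000, 400.0, 30 * 24):.0f} MWh")  # ~3456 MWh
```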

Generative AI became a hot topic in the media last year when OpenAI's ChatGPT was released to the public. That release pushed lawmakers to rethink the approach taken in the EU's initial proposals from 2021.

The ability of generative AI tools such as ChatGPT, Stable Diffusion, Google's Bard, and Anthropic's Claude to produce sophisticated, humanlike output from simple queries, drawing on vast amounts of data, has taken AI experts and regulators by surprise.

The tools have drawn criticism over concerns that they might displace jobs, generate discriminatory language, or violate privacy. With the announcement of the EU's landmark AI regulations, a new era is dawning for the digital ethics community.

The blueprint sets a precedent for responsible AI, navigating the labyrinth of transparency, compliance, and environmental stewardship. As the digital world undergoes this profound transformation, society must chart the uncharted waters of artificial intelligence carefully as the technology evolves.

Navigating Ethical Challenges in AI-Powered Wargames

The intersection of wargames and artificial intelligence (AI) has become a key subject in the constantly evolving landscape of warfare and technology. Experts are advocating for ethical oversight to reduce potential hazards as nations use AI to improve their military capabilities.

The NATO Wargaming Handbook, released in September 2023, stands as a testament to the growing importance of understanding the implications of AI in military simulations. The handbook delves into the intricacies of utilizing AI technologies in wargames, emphasizing the need for responsible and ethical practices. It acknowledges that while AI can significantly enhance decision-making processes, it also poses unique challenges that demand careful consideration.

The integration of AI in wargames is not without its pitfalls. The prospect of autonomous decision-making by AI systems raises ethical dilemmas and concerns about unintended consequences. The AI Safety Summit, as highlighted in the UK government's publication, underscores the necessity of proactive measures to address potential risks associated with AI in military applications. The summit serves as a platform for stakeholders to discuss strategies and guidelines to ensure the responsible use of AI in wargaming scenarios.

The ethical dimensions of AI in wargames are further explored in a comprehensive report by the Centre for Ethical Technology and Artificial Intelligence (CETAI). The report emphasizes the importance of aligning AI applications with human values, emphasizing transparency, accountability, and adherence to international laws and norms. As technology advances, maintaining ethical standards becomes paramount to prevent unintended consequences that may arise from the integration of AI into military simulations.

One of the critical takeaways from the discussions surrounding AI in wargames is the need for international collaboration. The Bulletin of the Atomic Scientists, in a thought-provoking article, emphasizes the urgency of establishing global ethical standards for AI in military contexts. The article highlights that without a shared framework, the risks associated with AI in wargaming could escalate, potentially leading to unforeseen geopolitical consequences.

The intersection of AI and wargaming is complex and demands cautious exploration. Ethical oversight becomes crucial as countries use AI to improve their military prowess. The findings of the CETAI report, the AI Safety Summit, and the NATO Wargaming Handbook all underscore the significance of responsible procedures in leveraging AI for military simulations, and experts have called for international cooperation to ensure that the use of AI in wargames remains consistent with ethical standards and the interests of international security.


President Biden Signs AI Executive Order for Monitoring AI Policies


On October 30, US President Joe Biden signed a sweeping new executive order detailing plans for industry oversight and governmental monitoring of artificial intelligence. The order aims to address several widespread concerns about privacy, bias, and misinformation enabled by the high-end AI technology that is becoming ever more ingrained in the contemporary world.

The White House's Executive Order Fact Sheet makes clear that US regulatory authorities aim both to govern and to benefit from the vast spectrum of emerging and rebranded "artificial intelligence" technologies, even though the proposed solutions remain largely conceptual.

The administration's executive order aims to create new guidelines for the security and safety of AI use. Invoking the Defense Production Act, the order directs businesses to provide US regulators with safety-test results and other crucial data whenever they develop AI that could present a "serious risk" to US military, economic, or public security. It remains unclear, however, who will monitor these risks and to what extent.

Before any such AI programs can be distributed to the public, however, they will have to meet safety requirements that the National Institute of Standards and Technology is expected to establish shortly.

Regarding the order, Ben Buchanan, the White House Senior Advisor for AI, said, “I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do[…]We have to do safety and security, we have to do civil rights and equity, we have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”

“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development, and that they share the tests of those systems in accordance with those standards[…]Before it goes out to the public, it needs to be safe, secure, and trustworthy,” Mr. Buchanan added.

A Long Road Ahead

In an announcement on Monday, President Biden urged Congress to enact bipartisan data-privacy legislation to “protect all Americans, especially kids,” from AI risks.

While several US states, including Massachusetts, California, Virginia, and Colorado, have passed data-privacy legislation of their own, the US still lacks comprehensive federal safeguards akin to the EU's General Data Protection Regulation (GDPR).

The GDPR, which came into force in 2018, strictly limits how businesses can collect and use their customers' personal data. Companies found violating the law can face hefty fines of up to €20 million or 4% of global annual turnover, whichever is higher.

However, according to Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University, the White House's most recent requests for data privacy laws "are unlikely to be answered[…]Both sides concur that action is necessary, but they cannot agree on how it should be carried out."