Here's How to Monitor the AI Data Feed

AI has taken the world by storm over the past year, leaving some to wonder: Is AI the next big tech trend, a threat of human enslavement, or something much more subtle?

The answer isn't simple. Consider that ChatGPT passed the bar exam, a feat both remarkable and somewhat concerning for attorneys. Yet several flaws in the program's functionality are already becoming apparent. For example, a lawyer who used ChatGPT in court discovered that the bot had fabricated parts of its arguments.

Undoubtedly, AI will continue to grow in capability, but numerous significant issues persist. How can we be sure AI is trustworthy? How can we be certain that its output is not only accurate but also impartial and unfiltered? Where does the data utilised to train the AI model come from, and how can we be sure it wasn't altered?

Tampering endangers any AI model, but especially those that will soon be employed in defence, transportation, safety, and other areas where human lives are at stake.

Regulation is required for safe AI verification 

The adoption of AI shouldn't happen carelessly, even though national agencies across the globe acknowledge that it will eventually become a crucial component of our systems and processes.

The two most crucial questions we must address are the following:

  • Does a specific system make use of an AI model? 
  • What tasks may an AI model be used to command or influence? 

The chances of AI being exploited are far lower if we are certain that a model has been trained for its intended application, and if we know precisely where it is deployed and what it is capable of.

AI can be verified using a variety of approaches, including hardware inspection, system inspection, sustained verification, and Van Eck radiation analysis. 

Hardware inspections are physical examinations of computing equipment used to detect the presence of AI chips. System inspection techniques, on the other hand, employ software to analyse a model, establish what it is capable of controlling, and flag any functions that should be restricted.

The technique operates by recognising and isolating a system's quarantine zones, regions that are intentionally obfuscated to safeguard intellectual property and secrets. Rather than revealing any sensitive information or IP, the software inspects the surrounding transparent components to detect and flag any AI processing employed in the system, as sketched below.
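
To make the idea concrete, here is a minimal sketch in Python of what such an inspection pass might look like. Everything here is an assumption for illustration: the quarantine-zone manifest, the file-extension heuristic for spotting AI artifacts, and the paths are all hypothetical, not any real inspection tool's API.

"""Illustrative sketch of a system inspection pass (not a real tool)."""
from pathlib import Path

# File extensions commonly associated with serialised AI models.
# An illustrative heuristic, not an exhaustive list.
MODEL_ARTIFACTS = {".onnx", ".pt", ".pb", ".safetensors", ".tflite"}

def inspect_system(root: Path, quarantine_zones: set[Path]) -> list[Path]:
    """Walk the transparent parts of a system and flag likely AI artifacts."""
    flagged = []
    for path in root.rglob("*"):
        # Respect quarantine zones: never open obfuscated IP components.
        if any(zone in path.parents for zone in quarantine_zones):
            continue
        if path.suffix.lower() in MODEL_ARTIFACTS:
            flagged.append(path)  # evidence of AI processing in this component
    return flagged

if __name__ == "__main__":
    # Hypothetical system root and vendor IP quarantine zone.
    findings = inspect_system(Path("/opt/system"), {Path("/opt/system/vendor_ip")})
    for f in findings:
        print(f"AI artifact detected: {f}")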

Methods of deeper verification 

Sustained verification techniques continue after the initial inspection, guaranteeing that once a model is deployed, it is not changed or tampered with. Some anti-tampering measures, such as cryptographic hashing and code obfuscation, are carried out within the model itself.

Using cryptographic hashing, an inspector can detect whether the base state of a system has changed without revealing the underlying data or code. Code obfuscation methods, which are still in early development, scramble system code at the machine level so that it cannot be deciphered by outside parties.
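
As a minimal sketch of how the hashing side of this might work, assuming the deployed model is a file of serialised weights: the inspector keeps only a digest recorded at deployment time, so re-checking the artifact never exposes its contents. The file name and re-check schedule below are hypothetical.

"""Minimal sketch of hash-based sustained verification."""
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Compute a SHA-256 digest of the deployed artifact in streaming chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, baseline_digest: str) -> bool:
    """True if the artifact still matches the digest recorded at deployment."""
    return digest(path) == baseline_digest

# Usage: record the baseline once at deployment, then re-check on a schedule.
# baseline = digest(Path("model.onnx"))
# assert verify(Path("model.onnx"), baseline), "model has been altered"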

Van Eck radiation analysis examines the pattern of radiation emitted by a system while it is running. Because complex systems run many concurrent processes, the radiation is often jumbled, making it effectively impossible to extract specific code. However, the Van Eck approach can detect significant changes (such as new AI processing) without interpreting any sensitive information that the system's deployers prefer to keep hidden.
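
The comparison logic behind such an approach can be illustrated with a toy example. The numbers below stand in for hypothetical per-frequency power readings; real Van Eck analysis requires RF capture hardware and far more robust statistics, so treat this purely as a sketch of "flag a big shift without decoding anything".

"""Toy illustration of baseline-versus-current emission comparison."""
import math

def emission_distance(baseline: list[float], current: list[float]) -> float:
    """Euclidean distance between two emission profiles of equal length."""
    return math.sqrt(sum((b - c) ** 2 for b, c in zip(baseline, current)))

def significant_change(baseline: list[float], current: list[float],
                       threshold: float = 5.0) -> bool:
    """Flag a shift large enough to suggest new processing, e.g. a new AI
    workload. The threshold is illustrative and would be calibrated per system."""
    return emission_distance(baseline, current) > threshold

# Hypothetical readings: several frequency bins shift after a change.
baseline = [1.0, 2.0, 1.5, 3.0, 2.2]
current = [1.1, 2.1, 6.0, 7.5, 2.3]
print(significant_change(baseline, current))  # True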

Looking forward 

Business leaders must understand, at a high level, which verification methods are available and how effective they are at detecting the use of AI, model alterations, and biases in the original training data. Identifying solutions is the first step. The platforms that power these technologies act as a key barrier against any disgruntled employee, industrial or military spy, or simple human error, any of which can lead to dangerous problems when combined with sophisticated AI models.

While verification will not solve every problem for an AI-based system, it will go a long way towards guaranteeing that the AI model works as intended and that any unexpected evolution or tampering is discovered immediately. AI is becoming ever more incorporated into our daily lives, and we must ensure that we can trust it.