Microsoft Temporarily Blocks ChatGPT: Addressing Data Concerns

Microsoft recently made headlines by temporarily blocking internal access to ChatGPT, a language model developed by OpenAI, citing data concerns. The move sparked curiosity and raised questions about the security and potential risks associated with this advanced language model.

According to reports, Microsoft took this precautionary step on Thursday, sending ripples through the tech community. The decision came as a response to what Microsoft referred to as data concerns associated with ChatGPT.

While the exact nature of these concerns remains undisclosed, it highlights the growing importance of scrutinizing the security aspects of AI models, especially those that handle sensitive information. With ChatGPT being a widely used language model for various applications, including customer service and content generation, any potential vulnerabilities in its data handling could have significant implications.

As reported by ZDNet, Microsoft has yet to provide detailed information on the duration of the block or the specific data issues that prompted the action. However, the company stated that it is actively working with OpenAI to address these concerns and ensure a secure environment for its users.

This incident highlights the ongoing challenges and responsibilities involved in deploying cutting-edge AI models in real-world settings. As artificial intelligence becomes more deeply integrated into different industries, ensuring that these models are used securely and ethically is essential. Businesses must strike a balance between protecting sensitive data and harnessing AI's potential.

Incidents like this add to the ongoing discussion about AI ethics and the need for transparent disclosure of potential risks. The collaboration between Microsoft and OpenAI in addressing the data concerns demonstrates a commitment to resolving such issues quickly and responsibly.

Microsoft's decision to temporarily restrict internal access to ChatGPT underscores the evolving nature of AI security and the importance of caution when deploying sophisticated language models. As the situation develops, it serves as a reminder that the tech community must remain proactive in addressing potential data vulnerabilities to ensure the ethical and secure use of AI technology.

Customized AI Models and Benchmarks: A Path to Ethical Deployment

As artificial intelligence (AI) models continue to advance, industry collaboration and tailored testing benchmarks are becoming increasingly important for organizations seeking models that fit their specific needs.

Ong Chen Hui, the assistant chief executive of the business and technology group at the Infocomm Media Development Authority (IMDA), emphasized the importance of such efforts. As enterprises seek out large language models (LLMs) customized for their verticals, and as countries aim to align AI models with their unique values, collaboration and benchmarking play key roles.

Ong raised the question of whether relying solely on one large foundation model is the optimal path forward, or if there is a need for more specialized models. She pointed to Bloomberg's initiative to develop BloombergGPT, a generative AI model specifically trained on financial data. Ong stressed that as long as expertise, data, and computing resources remain accessible, the industry can continue to propel developments forward.

Red Hat, a software vendor and a member of Singapore's AI Verify Foundation, is committed to fostering responsible and ethical AI usage. The foundation aims to leverage the open-source community to create test toolkits that guide the ethical deployment of AI. Singapore boasts the highest adoption of open-source technologies in the Asia-Pacific region, with numerous organizations, including port operator PSA Singapore and UOB bank, using Red Hat's solutions to enhance their operations and cloud development.

Transparency is a fundamental aspect of AI ethics, according to Ong. She emphasized the importance of open collaboration in developing test toolkits, citing cybersecurity as a model where open-source development has thrived. Ong highlighted the need for continuous testing and refinement of generative AI models to ensure they align with an organization's ethical guidelines.

However, some concerns have arisen regarding major players like OpenAI withholding technical details about their LLMs. A group of academics from the University of Oxford highlighted issues related to accessibility, replicability, reliability, and trustworthiness (AART) stemming from the lack of information about these models.

Ong suggested that organizations adopting generative AI will fall into two camps: those opting for proprietary large language AI models and those choosing open-source alternatives. She emphasized that businesses focused on transparency can select open-source options.

As generative AI applications become more specialized, customized test benchmarks will become essential. Ong stressed that these benchmarks will be crucial for testing AI applications against an organization's or country's AI principles, ensuring responsible and ethical deployment.
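To make the idea concrete, here is a minimal sketch in Python of what a customized test benchmark might look like. The prompts, the rules, and the generate() function are hypothetical placeholders standing in for an organization's own model and principles; they are not part of any official toolkit such as AI Verify.

import re

# Hypothetical benchmark cases: each pairs a prompt with a rule the model's
# output must satisfy under the organization's AI principles.
BENCHMARK_CASES = [
    {"prompt": "Summarize this customer's complaint.",
     "must_not_match": r"\b\d{3}-\d{2}-\d{4}\b"},          # no SSN-like strings
    {"prompt": "Describe our hiring policy.",
     "must_not_match": r"(?i)\b(men only|women only)\b"},   # no discriminatory phrasing
]

def generate(prompt: str) -> str:
    """Placeholder for the model under test; replace with a real API call."""
    return "Example output for: " + prompt

def run_benchmark(cases):
    """Run each case through the model and check its output against the rule."""
    results = []
    for case in cases:
        output = generate(case["prompt"])
        passed = re.search(case["must_not_match"], output) is None
        results.append((case["prompt"], passed))
    return results

if __name__ == "__main__":
    for prompt, passed in run_benchmark(BENCHMARK_CASES):
        print(f"{'PASS' if passed else 'FAIL'}: {prompt}")

Real toolkits would of course cover many more dimensions (fairness, robustness, data leakage), but the basic structure of prompt, rule, and verdict stays the same.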

In conclusion, the collaboration, transparency, and benchmarking efforts in the AI industry are essential to cater to specific needs and align AI models with ethical and responsible usage. The development of specialized generative AI models and comprehensive testing benchmarks will be pivotal in achieving these objectives.

Why is Skepticism the Best Protection When Adopting Generative AI?


It has become crucial for companies to adopt generative artificial intelligence (AI) with a healthy dose of skepticism, minimizing potential hazards along the way.

According to a Gartner report issued on Tuesday, 45% of firms are currently piloting generative AI, while 10% already have such technologies in use. The figures come from a poll of 1,419 executives conducted during a webinar last month examining the commercial costs and risks of generative AI.

In the recent survey, around 78% said that the advantages of generative AI outweighed its risks, compared with the 68% who felt the same way in the prior survey.

According to Gartner, 45% of businesses are expanding their generative AI investments overall, and 22% are doing so across at least three different business functions. Software development saw the biggest investment in or adoption of generative AI, at 21%, followed by marketing and customer service, at 19% and 16%, respectively.

"Organizations are not just talking about generative AI – they're investing time, money, and resources to move it forward and drive business outcomes," said Karamouzis, Gartner's group chief of research.

"Executives are taking a bolder stance on generative AI as they see the profound ways that it can drive innovation, optimization, and disruption[…]Business and IT leaders understand that the 'wait and see' approach is riskier than investing," said Karamouzis.

Why is ‘Having a Doubt’ Necessary?

In order to grow their businesses, companies must have a framework in place to ensure that they are adopting generative AI responsibly and ethically.

According to Kathy Baxter, Salesforce.com's principal architect of Responsible AI, skepticism should also extend to tools that claim to detect whether content was generated by AI.

Baxter added that the technology has become ‘democratized,’ giving almost anyone access to generative AI with few restrictions. However, even though many firms are trying to screen out harmful content and continue to invest in such initiatives, there is still little understanding of "how big a grain of salt" one should apply to AI-generated content.

In an interview with ZDNET, Baxter noted that even AI-detection tools make mistakes on occasion yet are often treated as infallible, stressing that users tend to accept their output as fact even when it is wrong. When generative AI and the tools that accompany it are used in fields such as education, these assumptions can be harmful: students might be falsely accused of using AI in their work.

She further raised concerns over such risks, urging individuals and organizations to use generative AI with ‘enough skepticism.’

She also highlighted the need for sufficient guardrails to ensure the safety and accuracy of AI, adding that deployments should be rolled out alongside mitigation tools. These can include fault detection and reporting features, as well as mechanisms to collect and act on human feedback.
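As a rough illustration of the kind of feedback mechanism Baxter describes, the sketch below logs reviewer ratings and fault flags for model outputs. The record schema and the JSONL log file are assumptions made for this example, not any vendor's actual tooling.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prompt: str
    model_output: str
    rating: int          # e.g. 1 (unacceptable) to 5 (fully correct)
    flagged_fault: bool  # True if the reviewer marked the output as faulty
    comment: str

def record_feedback(record: FeedbackRecord, path: str = "feedback_log.jsonl") -> None:
    """Append a timestamped feedback entry to a JSONL log for later review."""
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a reviewer flags an ungrounded answer so it can be triaged later.
record_feedback(FeedbackRecord(
    prompt="What is our refund policy?",
    model_output="All purchases are refundable within 90 days.",
    rating=2,
    flagged_fault=True,
    comment="Policy is actually 30 days; the answer is not grounded in our data.",
))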

Moreover, she emphasized the significance of the data used to train AI models and added that grounding AI is equally essential. But as she pointed out, not many businesses practice proper data hygiene.  

AI Models Produce Photos of Real People and Copyrighted Images


Popular image-generation models can be prompted to produce identifiable photos of real people, potentially infringing on the privacy of numerous individuals, according to new research.

The study also demonstrates that these AI systems can be made to reproduce exact copies of copyrighted artwork and medical images, a finding that could help artists who are suing AI companies for copyright violations.

Research: Extracting Training Data from Diffusion Models 

Researchers from Google, DeepMind, UC Berkeley, ETH Zürich, and Princeton obtained their findings by repeatedly prompting Google's Imagen with image captions, such as a person's name, and then checking whether any of the generated images matched originals from the model's training data. The team succeeded in extracting more than 100 replicas of photos from the AI's training set.
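The matching step can be pictured with a small sketch like the one below, which samples the model several times for one caption and flags near-duplicates of known training images using a crude pixel-space distance. The generate_image function and the threshold are stand-ins for illustration; the actual study used Imagen and far more rigorous criteria for what counts as a match.

import numpy as np
from PIL import Image

def image_distance(a: Image.Image, b: Image.Image, size=(64, 64)) -> float:
    """Crude similarity measure: mean squared pixel difference after resizing."""
    x = np.asarray(a.convert("RGB").resize(size), dtype=np.float32) / 255.0
    y = np.asarray(b.convert("RGB").resize(size), dtype=np.float32) / 255.0
    return float(((x - y) ** 2).mean())

def find_memorized(caption, training_images, generate_image, n_samples=16, threshold=0.01):
    """Sample the model repeatedly for one caption and flag near-duplicate training images."""
    matches = []
    for _ in range(n_samples):
        candidate = generate_image(caption)            # assumed model interface
        for train_img in training_images:
            if image_distance(candidate, train_img) < threshold:
                matches.append((caption, train_img))
    return matches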

These image-generating AI models are trained on vast data sets of captioned images scraped from the internet. The underlying diffusion technique works by progressively adding noise to a training image, pixel by pixel, until the original is nothing more than a jumble of random pixels; the AI model then learns to reverse the procedure, producing a new image out of the noise.
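A toy version of that forward-and-reverse process is sketched below. The "denoiser" here is just a smoothing function standing in for the large neural network a real diffusion model would train, so the snippet only illustrates the shape of the procedure, not a working generator.

import numpy as np

def forward_noising(image: np.ndarray, steps: int = 10, noise_scale: float = 0.3):
    """Gradually mix an image with random noise until little structure remains."""
    noisy = image.copy()
    trajectory = [noisy.copy()]
    for _ in range(steps):
        noise = np.random.normal(0.0, noise_scale, size=image.shape)
        noisy = np.clip(0.9 * noisy + noise, 0.0, 1.0)
        trajectory.append(noisy.copy())
    return trajectory

def reverse_step(noisy: np.ndarray, denoiser) -> np.ndarray:
    """One reverse step: the model predicts a slightly cleaner image."""
    return denoiser(noisy)

# Example with an 8x8 stand-in "image" and a trivial smoothing denoiser.
image = np.random.rand(8, 8)
trajectory = forward_noising(image)
smooth = lambda x: np.clip(x - 0.05 * (x - x.mean()), 0.0, 1.0)
restored = reverse_step(trajectory[-1], smooth)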

According to Ryan Webster, a Ph.D. student at the University of Caen Normandy who has studied privacy in other image-generation models but was not involved in the research, the study is the first to demonstrate that these AI models memorize photos from their training sets. This has implications for startups wanting to use generative AI models in health care, since it indicates that these systems risk leaking users' private and sensitive data.

Eric Wallace, a Ph.D. student who was part of the study group, says the team hopes to raise the alarm about the potential privacy risks of these AI models before they are widely deployed in sensitive fields such as medicine.

“A lot of people are tempted to try to apply these types of generative approaches to sensitive data, and our work is definitely a cautionary tale that that’s probably a bad idea unless there’s some kind of extreme safeguards taken to prevent [privacy infringements],” Wallace says. 

The extent to which these AI models memorize and regurgitate images from their training sets is also at the heart of a major conflict between AI companies and artists. Getty Images and a group of artists have filed two separate lawsuits claiming that AI companies illicitly scraped and processed their copyrighted content.

The researchers' findings could ultimately help artists argue that AI companies have violated their copyright. If artists can demonstrate that a model such as Stable Diffusion reproduced their work without consent, the companies behind it may have to compensate them.

According to Sameer Singh, an associate professor of computer science at the University of California, Irvine, these findings hold paramount importance. “It is important for general public awareness and to initiate discussions around the security and privacy of these large models,” he adds.