Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label open collaboration. Show all posts

Customized AI Models and Benchmarks: A Path to Ethical Deployment

 

As artificial intelligence (AI) models continue to advance, the need for industry collaboration and tailored testing benchmarks becomes increasingly crucial for organizations in their quest to find the right fit for their specific needs.

Ong Chen Hui, the assistant chief executive of the business and technology group at Infocomm Media Development Authority (IMDA), emphasized the importance of such efforts. As enterprises seek out large language models (LLMs) customized for their verticals and countries aim to align AI models with their unique values, collaboration and benchmarking play key roles.

Ong raised the question of whether relying solely on one large foundation model is the optimal path forward, or if there is a need for more specialized models. She pointed to Bloomberg's initiative to develop BloombergGPT, a generative AI model specifically trained on financial data. Ong stressed that as long as expertise, data, and computing resources remain accessible, the industry can continue to propel developments forward.

Red Hat, a software vendor and a member of Singapore's AI Verify Foundation, is committed to fostering responsible and ethical AI usage. The foundation aims to leverage the open-source community to create test toolkits that guide the ethical deployment of AI. Singapore boasts the highest adoption of open-source technologies in the Asia-Pacific region, with numerous organizations, including port operator PSA Singapore and UOB bank, using Red Hat's solutions to enhance their operations and cloud development.

Transparency is a fundamental aspect of AI ethics, according to Ong. She emphasized the importance of open collaboration in developing test toolkits, citing cybersecurity as a model where open-source development has thrived. Ong highlighted the need for continuous testing and refinement of generative AI models to ensure they align with an organization's ethical guidelines.

However, some concerns have arisen regarding major players like OpenAI withholding technical details about their LLMs. A group of academics from the University of Oxford highlighted issues related to accessibility, replicability, reliability, and trustworthiness (AART) stemming from the lack of information about these models.

Ong suggested that organizations adopting generative AI will fall into two camps: those opting for proprietary large language AI models and those choosing open-source alternatives. She emphasized that businesses focused on transparency can select open-source options.

As generative AI applications become more specialized, customized test benchmarks will become essential. Ong stressed that these benchmarks will be crucial for testing AI applications against an organization's or country's AI principles, ensuring responsible and ethical deployment.

In conclusion, the collaboration, transparency, and benchmarking efforts in the AI industry are essential to cater to specific needs and align AI models with ethical and responsible usage. The development of specialized generative AI models and comprehensive testing benchmarks will be pivotal in achieving these objectives.