
Microsoft ‘Cherry-picked’ Examples to Make its AI Seem Functional, Leaked Audio Revealed


According to a report by Business Insider, Microsoft "cherry-picked" examples of its generative AI's output because the system would frequently "hallucinate" incorrect responses.

The intel came from a leaked audio recording of an internal presentation on an early version of Microsoft's Security Copilot, a ChatGPT-like artificial intelligence platform that Microsoft created to assist cybersecurity professionals.

The audio features a Microsoft researcher addressing the results of "threat hunter" testing, in which the AI examined a Windows security log for any indications of potentially malicious behaviour.

"We had to cherry-pick a little bit to get an example that looked good because it would stray and because it's a stochastic model, it would give us different answers when we asked it the same questions," said Lloyd Greenwald, a Microsoft Security Partner giving the presentation, as quoted by BI.

"It wasn't that easy to get good answers," he added.

Security Copilot

Security Copilot, like any chatbot, lets users type a query into a chat window and receive a response, much as they would from a customer service agent. It is largely built on OpenAI's GPT-4 large language model (LLM), which also powers Microsoft's other generative AI forays, such as the Bing Search assistant. Greenwald said these demonstrations were "initial explorations" of GPT-4's possibilities and that Microsoft was given early access to the technology.

Similar to Bing AI in its early days, whose responses were so ludicrous that it had to be "lobotomized," Security Copilot often "hallucinated" wrong answers in its early versions, the researchers said, an issue that appeared to be inherent to the technology. "Hallucination is a big problem with LLMs and there's a lot we do at Microsoft to try to eliminate hallucinations and part of that is grounding it with real data," Greenwald said in the audio, "but this is just taking the model without grounding it with any data."

GPT-4, the LLM Microsoft used to build Security Copilot, was not, however, trained on cybersecurity-specific data. Rather, it was used directly out of the box, relying solely on its massive generic training dataset, which is standard practice.
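Grounding, as Greenwald describes it, means supplying the model with real data to answer against instead of letting it draw on its generic training set alone. A minimal sketch of the idea, using a toy keyword retriever over hypothetical log lines (none of this reflects Microsoft's actual implementation):

```python
# Toy illustration of "grounding": retrieve relevant lines from a real
# security log and place them in the prompt, so the model answers against
# supplied evidence rather than inventing details from its training data.
# The log lines and keyword retrieval here are hypothetical stand-ins.

SECURITY_LOG = [
    "4624 An account was successfully logged on (user: alice)",
    "4625 An account failed to log on (user: admin, source: 203.0.113.7)",
    "4688 A new process has been created (cmd.exe spawned by winword.exe)",
]

def retrieve(question: str, log: list[str], top_k: int = 2) -> list[str]:
    """Crude keyword-overlap scoring; a real system would use embeddings."""
    terms = set(question.lower().split())
    scored = sorted(log, key=lambda line: -len(terms & set(line.lower().split())))
    return scored[:top_k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the model to answer only from evidence."""
    evidence = "\n".join(retrieve(question, SECURITY_LOG))
    return (
        "Answer ONLY from the log lines below. If the logs do not contain "
        "the answer, say so.\n\n"
        f"Log lines:\n{evidence}\n\nQuestion: {question}"
    )

print(grounded_prompt("Did any account fail to log on?"))
```

The grounded prompt constrains the model to the retrieved evidence, which is one common way to reduce (though not eliminate) hallucination.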

Cherry on Top

Discussing other security-related queries, Greenwald revealed that "this is just what we demoed to the government."

However, it is unclear whether Microsoft used these "cherry-picked" examples in its presentations to the government and other potential customers, or whether its researchers were upfront about how the examples were selected.

A spokeswoman for Microsoft told BI that "the technology discussed at the meeting was exploratory work that predated Security Copilot and was tested on simulations created from public data sets for the model evaluations," stating that "no customer data was used."  

Warcraft Fans Trick AI with Glorbo Hoax

Ambitious Warcraft fans have tricked an AI article bot into writing about the mythical character Glorbo, in an amusing and ingenious turn of events. The incident, which originated on Reddit, demonstrates the creativity of the gaming community as well as the limitations of artificial intelligence when it comes to fact-checking and information verification.

The hoax took shape when a group of Reddit users decided to fabricate a thorough backstory for a fictional World of Warcraft character to test the capabilities of an AI-powered article generator. The made-up gnome warlock, Glorbo, was given an elaborate background, a fabricated storyline, and special magic skills.

The Glorbo enthusiasts were eager to see if the AI article bot would fall for the scam and create an article based on the complex story they had created. To give the story a sense of realism, they meticulously edited the narrative to reflect the tone and terminology commonly used in gaming media.

To their delight, the experiment worked: the piece produced by the AI not only chronicled Glorbo's alleged in-game exploits but also cited the Reddit post, portraying the character as though it were a real part of the Warcraft universe. The whimsical invention was presented as news because the AI couldn't tell the difference between factual and fictional content.

News of the practical joke swiftly spread across gaming and social media platforms, amusing readers and prompting curiosity about the potential applications of AI-generated material in journalism. While AI technology has undoubtedly transformed the way material is produced and distributed, the incident also raises questions about the need for human oversight to ensure the accuracy of information.

The experiment makes it evident that AI article bots, while efficient at producing large volumes of content, lack the discernment and critical thinking capabilities that humans possess. Dr. Emily Simmons, an AI ethics researcher, commented on the incident, saying, "This is a fascinating example of how AI can be fooled when faced with deceptive inputs. It underscores the importance of incorporating human fact-checking and oversight in AI-generated content to maintain journalistic integrity."

The amusing incident serves as a reminder that artificial intelligence technology is still in its infancy and that, as it develops, tackling problems with misinformation and deception must be a top focus. While AI may surely help with content creation, it cannot take the place of human context, understanding, and judgment.

Glorbo's creators are thrilled with the result and hope that this humorous episode will encourage discussions about responsible AI use and the dangers of relying solely on automated systems for journalism and content creation.




As ChatGPT Gains Popularity, Experts Call for Regulations Against Cybercrime

 

ChatGPT, the popular artificial intelligence chatbot, is making its way into more homes and offices around the world. With the capability to answer questions and generate content in seconds, this generation of chatbots can assist users in searching, explaining, writing, and creating almost anything. 

Experts warn, however, that the increased use of such AI-powered technology carries risks and may facilitate the work of scammers and cybercrime syndicates. Cybersecurity experts are calling for regulatory frameworks and increased user vigilance to prevent individuals from becoming victims. 

ChatGPT's benefit is the "convenient, direct, and quick solutions" it generates, according to Mr Lester Ho, a chatbot user. One reason why some users prefer ChatGPT as a search tool over traditional search engines like Google or Bing is the seemingly curated content for each individual.

“Google’s downside is that users have to click on different links to find out what is suitable for them. Compare that to ChatGPT, where users are given very quick responses, with one answer given at a time,” he said.

Another draw is the chatbot's ability to consolidate research into layman's terms, making it easier for users to digest information, according to Mr Tony Jarvis, director of enterprise security at cyber defense technology firm Darktrace.

Complicated topics, such as legal issues, can be simplified and paraphrased. Businesses have also flocked to chatbots, drawn in by their content creation and language processing capabilities, which can save them manpower, time, and money.
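The "layman's terms" use case Mr Jarvis describes boils down to a simple prompting pattern: pass the source text to the model along with an instruction to simplify it. A minimal sketch, again assuming the OpenAI Python client (the prompt wording and sample text are illustrative):

```python
# Sketch of the simplify-and-paraphrase pattern described above.
# Assumes the openai package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
legal_text = "The party of the first part shall indemnify and hold harmless..."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "Rewrite the user's text in plain, layman's terms."},
        {"role": "user", "content": legal_text},
    ],
)
print(response.choices[0].message.content)
```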

“This is definitely revolutionary technology. I believe sooner or later everybody will use it,” said Dr Alfred Ang, managing director of training provider Tertiary Infotech.

“Powerful chatbots will continue to emerge this year and the next few years,” added Dr Ang, whose firm uses AI to generate content for its website, write social media posts, and script marketing videos.

ChatGPT's ability to write complete essays has proven popular among students looking for homework assistance, prompting educational institutions to scramble to combat misuse, with some outright banning the bot.

Regulation and Governance

Google, Microsoft, and Baidu are all jumping on board with similar products and plans to advance them amid a chat engine race. With the adoption of AI chatbots expected to drive an increase in cybercrime, experts are urging authorities to investigate initiatives to defend against threats and protect users.

“To mitigate all these problems, (regulatory bodies) should set up some kind of ethical or governance framework, and also improve our Personal Data Protection Act (PDPA) or strengthen cybersecurity,” Dr. Ang said.

“Governance and digital trust for the use of AI will have to be investigated so that we know how to prevent abuse or malicious use,” added Prof Lam, who is also a GovWare Programme advisory board member.

According to authorities, phishing scams increased by more than 41% last year compared to the previous year. Even as governments and regulators race to implement security measures, experts say users must also keep up with technology news and skills to keep themselves safe.

Prof Lam concluded, “As more people use ChatGPT and provide data for it, we definitely should expect (the bot) to further improve. As end-users, we need to be more cautious. Cyber hygiene will be even more important than ever. In the coming years, chatbots are almost certainly going to become more human-like, and it's going to be less obvious that we're talking to one.”

Google's Bard AI Bot Error Cost the Company $100 Billion in Market Value

Google is looking for ways to reassure people that it is still at the forefront of artificial intelligence technology. So far, the internet behemoth appears to be getting it wrong: an advertisement for its new AI bot showed it answering a question incorrectly.

Alphabet shares fell more than 7% on Wednesday, erasing $100 billion (£82 billion) from the company's market value. In the promotion for the bot, known as Bard, which was posted on Twitter on Monday, the bot was asked what to tell a nine-year-old about the James Webb Space Telescope's discoveries.

It responded that the telescope was the first to take images of a planet outside the Earth's solar system, when in fact the European Southern Observatory's Very Large Telescope did so in 2004 - a mistake quickly corrected by astronomers on Twitter.

"Why didn't you fact check this example before sharing it?" Chris Harrison, a fellow at Newcastle University, replied to the tweet.

Investors were also underwhelmed by the company's presentation on its plans to incorporate artificial intelligence into its products. Since late last year, when Microsoft-backed OpenAI revealed new ChatGPT software, Google has been under fire. It rapidly became a viral sensation due to its ability to pass business school exams, compose song lyrics, and answer other questions.

A Google spokesperson stated the error emphasized "the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester programme".

"We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety, and roundedness in real-world information," they said.
 
Alphabet, Google's parent company, laid off 12,000 employees last month, accounting for about 6% of its global workforce.