
The Future of AI: Labour Replacement or Collaboration?

 


In a recent interview with CNBC at the World Economic Forum in Davos, Mustafa Suleyman, co-founder and CEO of Inflection AI, expressed his views on artificial intelligence (AI). Suleyman, who left Google in 2022, highlighted that while AI is an incredible technology, it has the potential to replace jobs in the long term.

Suleyman stressed the need to carefully consider how we integrate AI tools, as he believes they are fundamentally labour-replacing over many decades. However, he acknowledged the immediate benefits of AI, explaining that it makes existing processes much more efficient, leading to cost savings for businesses. Additionally, he pointed out that AI enables new possibilities, describing these tools as creative, empathetic, and more human-like than traditional relational databases.

Inflection AI, Suleyman's current venture, has developed an AI chatbot providing advice and support to users. The chatbot showcases AI's ability to augment human capabilities and enhance productivity in various applications.

One key concern surrounding AI, as highlighted by Stanford University professor Erik Brynjolfsson at the World Economic Forum, is the fear of job obsolescence. Some worry that AI's capabilities in tasks like writing and coding might replace human jobs. Brynjolfsson suggested that companies using AI to outright replace workers may not be making the wisest choice. He proposed a more strategic approach, where AI complements human workers, recognizing that some tasks are better suited for humans, while others can be efficiently handled by machines.

Since the launch of OpenAI's ChatGPT in November 2022, the technology has generated considerable hype. The past year has seen an increased awareness of AI and its potential impact on various industries.

As businesses integrate AI into their operations, there is a growing need to educate the workforce and the public on the nuances of this technology. AI, in simple terms, refers to computer systems that can perform tasks that typically require human intelligence. These tasks range from problem-solving and decision-making to creative endeavours.

Mustafa Suleyman's perspective on AI highlights its dual role – as a cost-saving tool in the short term and a potential job-replacing force in the long term. Balancing these aspects requires careful consideration and strategic planning.

Erik Brynjolfsson's advice to companies emphasises the importance of collaboration between humans and AI. Instead of viewing AI as a threat, companies should explore ways to leverage AI to enhance human capabilities and overall productivity.

The future of AI depends on how we integrate it into our lives and workplaces. The key is to strike a balance that maximises the benefits of efficiency and productivity while preserving the unique skills and contributions of human workers. As AI continues to evolve, staying informed and fostering collaboration will be crucial for a harmonious coexistence between humans and machines.



Google Acquired Alter, an AI Avatar Startup, Two Months Ago


Tech giant Google has reportedly acquired Alter for around $100m in an effort to boost its content game. Alter is an artificial intelligence (AI) avatar startup that helps brands and creators express their virtual identities. The acquisition also fits Google’s plan to compete more aggressively with the short-form video platform TikTok.
 
Alter, formerly known as ‘Facemoji’, uses AI to create avatars for social media users. The company started by helping developers create avatars for games and apps; it rebranded as ‘Alter’ in 2020 and began helping businesses and creators generate avatars to build an online identity. Proficient in 3D avatar system design, Alter empowers creators and businesses to create and monetize new experiences.
 
The acquisition was concluded approximately two months ago but has come to light only now, as neither company made an announcement. Notably, a Google spokesperson confirmed the acquisition but declined to provide details of the financial terms of the agreement.
 
With the acquisition, Google is aiming to integrate Alter’s tools to bolster its own content arsenal, while giving Alter new, enhanced capabilities. Headquartered in the US and the Czech Republic, Alter, whose product is an open-source, cross-platform rendering engine, was co-founded by Jon Slimak and Robin Raszka in 2017; the founders did not respond to TechCrunch’s request for comment.
 
The company’s work marks a progression for web3 interoperability and the open metaverse, as it develops and refines face recognition technology.
 
According to the report, some of Alter’s employees have updated their profiles to announce that they have joined Google; however, an official public announcement is still pending.
 
“Alter is an open source, cross-platform [software development kit (SDK)] consisting of a real-time 3D avatar system and motion capture built from scratch for web3 interoperability and the open metaverse. With Alter, developers can easily pipe avatars into their app, game or website,” as per the company’s LinkedIn page. 

Relatedly, Google has also enhanced the emoji experience for its wide user base, now offering a personalised experience through newly rolled out custom emojis for the web version of Chat.

OpenAI: Students are Using AI Tools to Write Papers for Them

 

University students are acing their assignments by offloading hours of work to advanced language generators and AI language tools such as the OpenAI Playground.
 
According to Motherboard, these tools help students write their papers effortlessly, as it is hard to tell that an AI-produced response was not written by the student. Since these responses cannot be detected even by plagiarism software, schools and universities may find it challenging to counteract this next-generation form of subversion.
 
In an interview with Motherboard, a student who goes by the Reddit username innovative_rye said, "It would be simple assignments that included extended responses."
 
"For biology, we would learn about biotech and write five good and bad things about biotech. I would send a prompt to the AI like, 'what are five good and bad things about biotech?' and it would generate an answer that would get me an A," he added. 
 
innovative_rye also described how using AI tools helps him focus on what he thinks is important: "I still do my homework on things I need to learn to pass, I just use AI to handle the things I don't want to do or find meaningless." Whether AI-generated writing should ever be considered original work remains a debated topic; since it goes undetected by plagiarism software, students like him treat these AI-made responses as original works.
 
If plagiarism software were capable of detecting this AI-generated writing, it would not be a problem. However, it remains a question of if and when such software will be able to catch up with AI.
 
"[The text] is not copied from somewhere else, it's produced by a machine, so plagiarism checking software is not going to be able to detect it and it's not able to pick it up because the text wasn't copied from anywhere else," says George Veletsianos, Canada Research Chair in Innovative Learning & Technology and associate professor at Royal Roads University. 
 
"Without knowing how all these other plagiarism checking tools quite work and how they might be developed in the future,[...] I don't think that AI text can be detectable in that way." He continued. 
 
While this is a real concern for teachers, since these students are effectively cheating on their papers, the AI tools also raise the question of whether this generation's learning is moving forward at all.

Researchers Embedded Malware into an AI's 'Neurons' and it Worked Scarily Well

 

According to a new study, as neural networks become more popularly used, they may become the next frontier for malware operations. 

The study, published on the arXiv preprint server, states that malware can be implanted directly into the artificial neurons that make up machine learning models in a way that keeps it from being discovered.

The neural network would even be able to carry on with its usual activities. The authors from the University of the Chinese Academy of Sciences wrote, "As neural networks become more widely used, this method will become universal in delivering malware in the future." 

With actual malware samples, they discovered that replacing up to half of the neurons in the AlexNet model—a benchmark-setting classic in the AI field—still kept the model's accuracy above 93.1 percent. Using a technique known as steganography, the researchers determined that a 178MB AlexNet model can carry up to 36.9MB of malware hidden in its structure without being detected. The malware was not identified in some of the models when they were tested against 58 different antivirus programs.
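The paper's exact encoding differs in detail, but the underlying steganographic trick can be illustrated with a minimal NumPy sketch, assuming a simple scheme that overwrites the least significant byte of each float32 weight. The helper names are ours, and the "payload" here is a harmless string, not real malware.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide one payload byte in the least significant byte of each float32 weight.

    Only the low 8 bits of each 23-bit mantissa change, so every weight is
    perturbed by a relative error of roughly 2**-15 at most -- small enough
    that a large model's accuracy barely moves, which is the effect reported.
    """
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload larger than the tensor's capacity")
    raw = flat.view(np.uint8)  # 4 bytes per float32; byte 0 is the LSB on little-endian
    raw[0 : 4 * len(payload) : 4] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes; a malicious receiver would reassemble and run them."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return raw[0 : 4 * length : 4].tobytes()

weights = np.random.randn(4096).astype(np.float32)  # stand-in for one layer's weights
secret = b"placeholder payload, not real malware"
stego = embed_payload(weights, secret)
assert extract_payload(stego, len(secret)) == secret
print(np.abs(stego - weights).max())  # the perturbation stays tiny
```

Because the weights remain valid floating-point numbers with nearly identical values, nothing about the file looks structurally wrong, which is why signature-based scanners have little to latch onto.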

Other ways of breaching businesses or organizations, such as attaching malware to documents or files, often cannot deliver harmful software in large quantities without being discovered. The new approach avoids this because, as per the study, AlexNet (like many machine learning models) is made up of millions of parameters and numerous complicated layers of neurons, including fully connected "hidden" layers, which leaves ample room to stash data.

The researchers found that altering some of these neurons had no effect on performance, since the massive hidden layers in AlexNet remained largely intact.

The authors set out a playbook for how a hacker could create a malware-loaded machine learning model and distribute it in the wild: "First, the attacker needs to design the neural network. To ensure more malware can be embedded, the attacker can introduce more neurons. Then the attacker needs to train the network with the prepared dataset to get a well-performed model. If there are suitable well-trained models, the attacker can choose to use the existing models. After that, the attacker selects the best layer and embeds the malware. After embedding malware, the attacker needs to evaluate the model’s performance to ensure the loss is acceptable. If the loss on the model is beyond an acceptable range, the attacker needs to retrain the model with the dataset to gain higher performance. Once the model is prepared, the attacker can publish it on public repositories or other places using methods like supply chain pollution, etc." 

According to the article, once malware is incorporated into the network's neurons, it sits "disassembled" until a malicious receiver program reassembles it into working malware; that same program may also be used to download the poisoned model in the guise of an upgrade. The malware can still be stopped if the target device checks the model before executing it, and traditional approaches like static and dynamic analysis can also be used to identify it.
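One simple form of "checking the model before executing it" is an integrity check: refuse to load any model file whose digest does not match a trusted release. Here is a minimal sketch of that idea; the function name and the notion of pinning a known-good SHA-256 digest are our illustration, not a method from the paper.

```python
import hashlib

def verify_model(path: str, expected_sha256: str) -> bool:
    """Hash the model file and compare it to a digest pinned from a trusted source.

    A poisoned model pulled in via "supply chain pollution" or a fake update
    will not match the publisher's published digest.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Example usage (PINNED_DIGEST would come from the model publisher):
# if not verify_model("alexnet.bin", PINNED_DIGEST):
#     raise RuntimeError("model failed integrity check; refusing to load")
```

This blocks tampered downloads, though it offers no help when the attacker controls the publishing channel itself, which is why the article also mentions static and dynamic analysis.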

Dr. Lukasz Olejnik, a cybersecurity expert and consultant, told Motherboard, “Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there.” 

"But it's also a problem because custom methods to extract malware from the [deep neural network] model means that the targeted systems may already be under attacker control. But if the target hosts are already under attacker control, there's a reduced need to hide extra malware." 

"While this is legitimate and good research, I do not think that hiding whole malware in the DNN model offers much to the attacker,” he added. 

The researchers anticipated that this would “provide a referenceable scenario for the protection on neural network-assisted attacks,” as per the paper. They did not respond to a request for comment from Motherboard.

This isn't the first time experts have looked at how malicious actors may manipulate neural networks, such as by presenting them with misleading pictures or installing backdoors that lead models to malfunction. If neural networks represent the future of hacking, major corporations may face a new threat as malware campaigns get more sophisticated. 

The paper notes, “With the popularity of AI, AI-assisted attacks will emerge and bring new challenges for computer security. Network attack and defense are interdependent. We hope the proposed scenario will contribute to future protection efforts.”