
How is Brave’s ‘Leo’ a Better Generative AI Option?


Brave Browser 

Brave is a Chromium-based browser, built around its own Brave Search engine, that restricts tracking for personalized ads.

Brave’s new product, Leo, is a generative AI assistant built on top of Anthropic's Claude and Meta's Llama 2. Leo promotes user privacy as its main feature.

Unlike many other generative AI chatbots, such as ChatGPT, Leo promises much better privacy to its users. The AI assistant does not store any of the user’s chat history, nor does it use the user’s data for training purposes.

Moreover, a user does not need to create an account in order to access Leo. And if a user opts for the premium experience, Brave will not link their account to the data they generate.

Leo has been in testing for three months. Brave is now making it available to all users of the latest 1.60 desktop browser version; once it rolls out to you, you should see the Leo icon in the browser's sidebar. Support for the Brave apps on Android and iPhone will follow in the coming months.

Privacy with Leo AI Assistant 

User privacy has remained a major concern with ChatGPT, Google Bard, and AI products in general.

Among AI chatbots with comparable features, the better option will ultimately be the one that provides stronger privacy to its users. Leo has the potential to stand out here, given that Brave promotes the chatbot’s “unparalleled privacy” from the outset.

Since users do not need an account to access Leo, they do not have to verify their email addresses or phone numbers either, so their contact information is never collected in the first place.

Moreover, if the user opts for the $15/month Leo Premium, they receive subscription tokens that are not linked to their account. Brave notes that, this way, “you can never connect your purchase details with your usage of the product, an extra step that ensures your activity is private to you and only you.”

The company says, “the email you used to create your account is unlinkable to your day-to-day use of Leo, making this a uniquely private credentialing experience.”

Brave further notes that all Leo requests are sent through an anonymous server, meaning that Leo traffic cannot be connected to users’ IP addresses.

More significantly, Brave will not retain Leo's conversations: they are discarded immediately after the response is generated, and Leo does not learn from them. Brave will also not collect personal identifiers, such as your IP address, and neither Leo nor the third-party model providers behind it gather user data. Considering that Leo is built on two external language models, this is significant.
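To make the reverse-proxy idea concrete, here is a minimal sketch of an anonymizing relay. This is not Brave's actual implementation; the endpoint URL, request shape, and field names are assumptions for illustration. The point is that the model provider only ever sees the relay, never the user's IP address, and the relay itself keeps no record of the exchange.

```python
# Minimal sketch of an anonymizing relay (illustrative only, not Brave's code).
# MODEL_API_URL and the {"prompt": ...} request shape are assumed for the example.

from flask import Flask, request, jsonify
import requests

MODEL_API_URL = "https://example-model-provider.invalid/v1/chat"  # hypothetical upstream

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def relay_chat():
    payload = request.get_json(force=True)

    # Forward only the prompt itself: no client IP, no cookies,
    # no user identifiers, and nothing is written to disk or a database.
    upstream_body = {"prompt": payload.get("prompt", "")}

    resp = requests.post(MODEL_API_URL, json=upstream_body, timeout=30)

    # Return the reply and drop all request/response data immediately;
    # the relay keeps no chat history that could later be linked to a user.
    return jsonify({"reply": resp.json().get("reply", "")})

if __name__ == "__main__":
    app.run(port=8080)
```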

Lawmaker Warns: Meta Chatbots Could Influence Users by ‘Manipulative’ Advertising


Senator Ed Markey has urged Meta to postpone the launch of its new chatbots since they could lead to increased data collection and confuse young users by blurring the line between content and advertisements.

The warning letter was issued the same day Meta revealed its plans to incorporate AI-powered chatbots into its apps: WhatsApp, Messenger, and Instagram.

In the letter, Markey wrote to Meta CEO Mark Zuckerberg that, “These chatbots could create new privacy harms and exacerbate those already prevalent on your platforms, including invasive data collection, algorithmic discrimination, and manipulative advertisements[…]I strongly urge you to pause the release of any AI chatbots until Meta understands the effect that such products will have on young users.”

According to Markey, the algorithms have already “caused serious harms,” to customers, like “collecting and storing detailed personal information[…]facilitating housing discrimination against communities of color.”

He added that while chatbots can benefit people, they also carry certain risks. In particular, he highlighted the possibility that they could blur the difference between ads and content.

“Young users may not realize that a chatbot’s response is actually advertising for a product or service[…]Generative AI also has the potential to adapt and target advertising to an 'audience of one,' making ads even more difficult for young users to identify,” states Markey.

Markey also noted that chatbots might make social media platforms even more “addictive” to users than they already are.

“By creating the appearance of chatting with a real person, chatbots may significantly expand users’ -- especially younger users’ – time on the platform, allowing the platform to collect more of their personal information and profit from advertising,” he wrote. “With chatbots threatening to supercharge these problematic practices, Big Tech companies, such as Meta, should abandon this 'move fast and break things' ethos and proceed with the utmost caution.”

The lawmaker is now asking Meta to respond to a series of questions regarding the new chatbots, including how they might affect users’ privacy and advertising.

The questions seek detailed insight into the chatbots’ role in data collection and whether Meta will commit not to use any information gleaned from them to target advertising at young users. Markey also asked whether ads might be integrated into the chatbots and, if so, how Meta intends to prevent those ads from confusing children.

In response, a Meta spokesperson confirmed that the company had received the letter.

Meta further notes in a blog post that it is working with the government and other entities “to establish responsible guardrails,” and that it is training the chatbots with safety in mind. For instance, Meta writes, the tools “will suggest local suicide and eating disorder organizations in response to certain queries, while making it clear that it cannot provide medical advice.”

"From Chatbots to Cyberattacks: How AI is Transforming Cybercrime"

 


Cybersecurity – both on the good side and on the bad side – is becoming increasingly dependent on artificial intelligence. Organizations can maximize the efficiency and protection of their systems and data resources by leveraging the latest AI-based tools that are available. 

However, cybercriminals can also launch more sophisticated attacks with the help of the same technology. Artificial intelligence is changing the face of cybercrime: it gives attackers tools to polish their language and opens new doors for breaking into computer networks.

For example, AI can clean up the language of e-mails that trick recipients into sharing personal information, and it can fabricate images or videos used to extort victims. The rise in such attacks is believed to have fueled the growth of the market for AI-based security products.

The advisory firm Acumen Research and Consulting estimates, in a report released in July 2022, that the global market for AI-based security products will reach $133.8 billion by 2030, up from $14.9 billion in 2021.

Reports of cyberattacks on schools, medical centres, private companies, government agencies, and military contractors keep growing, and the FBI's office in San Antonio is dealing with a rapidly increasing caseload.

Much of this cybercrime involves international hackers stealing personal health and financial information from local computer networks. The FBI generally refrains from saying whether it is investigating a specific case, but Delzotto said the bureau has "absolutely" seen an increase in reports of ransomware attacks and business email compromise (BEC) scams.

Large language models (LLMs) such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard continue to surprise us with their capabilities for processing language; it was not only Midjourney and DALL-E that blew our minds.

Generative AI applications such as Stable Diffusion and Midjourney have likewise impressed us with artwork produced from just a few sentences of instruction. These projects keep getting better, and the fact that anyone can access and use them is remarkable in itself.

Despite this, that same accessibility also puts people at risk.

Next-Generation Malware

Cybersecurity experts recently released a report outlining how LLMs could be used to create advanced malware capable of evading security measures and escaping detection.

A group of researchers managed to circumvent a content filter that was designed to prevent an LLM from generating malware, and LLMs have already been used to create malware, albeit with varying degrees of success.

The implications are significant. Cybercriminals may already be using LLMs to develop malware that is harder to detect, better at evading traditional cybersecurity defences, and capable of causing greater damage than conventional malware. Meanwhile, a growing number of companies in San Antonio have been hit by cyberattacks in recent months.

A breach of HCA Healthcare's San Antonio division, which includes the city's Methodist Healthcare hospitals, led in July to approximately one million patient records being posted to the deep web, which is inaccessible to search engines.

In total, the breach, uncovered in July, affected 11 million patients across 20 states, including those at hospitals and clinics in Austin, the Rio Grande Valley, Corpus Christi, and Houston.

A data breach in the summer of 2016 affected 18,000 members of Generations Federal Credit Union. The compromised information included names, addresses, Social Security numbers, driver's license numbers, passport numbers, credit card numbers, and health and medical information.

That same month, USAA, an insurance and financial services company, also announced a data breach affecting almost 19,000 members, including 3,726 Texas residents, whose personal information had been accessed by "unauthorized individuals."

In December, a ransomware attack caused an outage on Rackspace Technology's hosted Microsoft Exchange platform, leaving thousands of customers unable to access their email.

Following the incident, the cloud computing company faced several federal lawsuits and ultimately discontinued that line of business.

AI-Powered Cyberattacks

Deepfakes: "Deepfake" is a term that combines the words "deep learning" and "fake media"; it refers to the use of artificial intelligence (AI) to create or manipulate audio and video content so that it appears authentic.

Cybercriminals have already used this technology to produce non-consensual pornography of celebrities and to spread false political news. In 2019, a UK-based energy company was tricked into transferring €220,000 to a Hungarian bank account with the help of this technology.

Password Cracking: Cybercriminals are also using machine learning (ML) and artificial intelligence (AI) to improve the algorithms used to guess users' passwords. Password-cracking algorithms already exist, but ML allows attackers to learn password variations from large leaked-password datasets and use them to generate better cracking candidates.

AI-Assisted Hacking: Beyond password cracking, cybercriminals use artificial intelligence for a wide range of hacking activities. AI algorithms can automate vulnerability scanning, intelligently detect and exploit weaknesses in systems, power adaptive malware, and more.

Supply Chain Attacks: Machine learning can also be used to compromise an organization's software and hardware supply chains, for example by embedding malicious code or components within legitimate products or services.

Artificial intelligence has proven to be a highly useful tool for cybercriminals, allowing them to carry out more complex attacks with greater efficiency than ever before. These threats are constantly evolving, growing more sophisticated, and will only continue to increase.

Businesses must take a proactive stance against these threats rather than merely reacting to them. That requires a multi-faceted defence that combines AI-powered cybersecurity solutions with measures aimed at preventing threats from being introduced in the first place.

Risks and Best Practices: Navigating Privacy Concerns When Interacting with AI Chatbots

 

The use of artificial intelligence chatbots has become increasingly popular. Although these chatbots possess impressive capabilities, it is important to recognize that they are not without flaws. There are inherent risks associated with engaging with AI chatbots, including concerns about privacy and the potential for cyber-attacks. Caution should be exercised when interacting with these chatbots.

To understand the potential dangers of sharing information with AI chatbots, it is essential to explore the risks involved. Privacy risks and vulnerabilities associated with AI chatbots raise significant security concerns for users. Surprisingly, chat companions such as ChatGPT, Bard, Bing AI, and others can inadvertently expose personal information online. These chatbots rely on AI language models that derive insights from user data.

For instance, Google's chatbot, Bard, explicitly states on its FAQ page that it collects and uses conversation data to train its model. Similarly, ChatGPT also has privacy issues as it retains chat records for model improvement, although it provides an opt-out option.

Storing data on servers makes AI chatbots vulnerable to hacking attempts. These servers contain valuable information that cybercriminals can exploit in various ways. They can breach the servers, steal the data, and sell it on dark web marketplaces. Additionally, hackers can leverage this data to crack passwords and gain unauthorized access to devices.

Furthermore, the data generated from interactions with AI chatbots is not restricted to the respective companies alone. While these companies claim that the data is not sold for advertising or marketing purposes, it is shared with certain third parties for system maintenance.

OpenAI, the organization behind ChatGPT, admits to sharing data with "a select group of trusted service providers" and allowing some "authorized OpenAI personnel" to access the data. These practices raise additional security questions around AI chatbot interactions, and critics argue that generative AI security concerns may worsen.

Therefore, it is crucial to safeguard personal information when interacting with AI chatbots to maintain privacy.

To ensure privacy and security, it is important to follow best practices when interacting with AI chatbots:

1. Avoid sharing financial details: Sharing financial information with AI chatbots can expose it to potential cybercriminals. Limit interactions to general information and broad questions. For personalized financial advice, consult a licensed financial advisor.

2. Be cautious with personal and intimate thoughts: AI chatbots lack real-world knowledge and may provide generic responses to mental health-related queries. Sharing personal thoughts with them can compromise privacy. Use AI chatbots as tools for general information and support, but consult a qualified mental health professional for personalized advice.

3. Refrain from sharing confidential work-related information: Sharing confidential work information with AI chatbots can lead to unintended disclosure. Exercise caution when sharing sensitive code or work-related details to protect privacy and prevent data breaches.

4. Never share passwords: Sharing passwords with AI chatbots can jeopardize privacy and expose personal information to hackers. Protect login credentials to maintain online security.

5. Avoid sharing residential details and other personal data: Personally Identifiable Information (PII) should not be shared with AI chatbots. Familiarize yourself with chatbot privacy policies, avoid questions that reveal personal information, and be cautious about sharing medical information or using AI chatbots on social platforms. A minimal redaction sketch follows this list.
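As a concrete illustration of the last point, below is a minimal, hypothetical sketch (not tied to any particular chatbot's API) of scrubbing obvious personal identifiers from a prompt before it is sent to an AI chatbot. The regular expressions are simple examples and will not catch every form of PII.

```python
# Illustrative sketch: strip obvious PII from a prompt before sharing it with
# a chatbot. The patterns are deliberately simple examples, not a complete filter.

import re

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace likely PII with placeholder tags before the text leaves your machine."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REMOVED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "My card 4111 1111 1111 1111 was declined, email me at jane@example.com"
    print(redact_pii(raw))
    # -> "My card [CARD_NUMBER REMOVED] was declined, email me at [EMAIL REMOVED]"
```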

In conclusion, while AI chatbots offer significant advancements, they also come with privacy risks. Protecting data by controlling shared information is crucial when engaging with AI chatbots. Adhering to best practices mitigates potential risks and ensures privacy.

Oracle and Cohere Collaborate for New Gen AI Service

 

During Oracle's recent earnings call, company founder Larry Ellison made an exciting announcement, confirming the launch of a new generative AI service in collaboration with Cohere. This partnership aims to deliver powerful generative AI services for businesses, opening up new possibilities for innovation and advanced applications.

The collaboration between Oracle and Cohere signifies a strategic move by Oracle to enhance its AI capabilities and offer cutting-edge solutions to its customers. With AI playing a pivotal role in transforming industries and driving digital transformation, this partnership is expected to strengthen Oracle's position in the market.

Cohere, a company specializing in natural language processing (NLP) and generative AI models, brings its expertise to the collaboration. By leveraging Cohere's advanced AI models, Oracle aims to empower businesses with enhanced capabilities in areas such as text summarization, language generation, chatbots, and more.

One of the key highlights of this collaboration is the potential for businesses to leverage the power of generative AI to automate and optimize various processes. Generative AI has the ability to create content, generate new ideas, and perform complex tasks, making it a valuable tool for organizations across industries.

The joint efforts of Oracle and Cohere are expected to result in the development of state-of-the-art AI models that can revolutionize how businesses operate and innovate. By harnessing the power of AI, organizations can gain valuable insights from vast amounts of data, enhance customer experiences, and streamline operations.

This announcement comes in the wake of Oracle's recent acquisition of Cerner, a healthcare technology company, further solidifying Oracle's commitment to revolutionizing the healthcare industry through advanced technologies. The integration of AI into healthcare systems holds immense potential to improve patient care, optimize clinical processes, and enable predictive analytics for better decision-making.

As the demand for AI-powered solutions continues to rise, businesses are seeking comprehensive platforms that can deliver sophisticated AI services. With Oracle and Cohere joining forces, organizations can benefit from an expanded suite of AI tools and services that can address a wide range of industry-specific challenges.

The collaboration between Oracle and Cohere highlights the growing importance of AI in driving innovation and digital transformation across industries. As businesses increasingly recognize the value of AI, partnerships like this one are crucial for pushing the boundaries of what AI can achieve and bringing advanced capabilities to the market.

The partnership between Oracle and Cohere signifies a significant step forward in the realm of AI services. The collaboration is expected to deliver powerful generative AI solutions that can empower businesses to unlock new opportunities and drive innovation. With Oracle's expertise in enterprise technology and Cohere's proficiency in AI models, this collaboration holds great promise for businesses seeking to leverage the full potential of AI in their operations and strategies.

Online Predators are Targeting Children's Webcams


The Internet Watch Foundation reports that since 2019, there has been an increase in sexual abuse imagery generated with webcams and other recording devices worldwide. 

One of the most frequently used platforms for contacting kids is social media chatrooms, through which abuse may happen both online and offline. Predators are increasingly leveraging technological advances to facilitate sexual abuse.

Once a predator has succeeded in gaining access to a child’s webcam, the footage is used to record, produce, and distribute child pornography.

Chatbots: How the Study was Conducted

A team of criminologists, studying cybercrime and cybersecurity, conducted research to investigate the methodologies used by online predators to hack children’s webcams.

To do this, the researchers posed as children (potential victims) to observe the behaviour of online predators. They started by creating several automated chatbots to act as lures in some of the chatrooms popular among children.

The bots were programmed never to initiate a conversation and to respond only to users who identified themselves as 18 years of age or older.

Furthermore, the bots were programmed to begin each conversation by stating their age, sex, and location, which is standard practice in chatroom culture. This ensured that the conversations documented were with individuals over the age of 18 who were knowingly and voluntarily conversing with a minor. Although it is likely that some of those involved were minors impersonating adults, a prior study has shown that online predators tend to portray themselves as younger than they are, not older.

Methods of Attack 

The chatbots recorded 953 conversations with self-identified adults who were told they were conversing with a 13-year-old girl. The chats were almost exclusively sexual in nature, with a focus on webcams. Some predators made their demands explicit, immediately offering to pay for videos of the child performing sexual acts; others tried to solicit videos with promises of future love and relationships. Alongside these frequently employed strategies, the researchers found that 39% of chats contained an unsolicited link.

A forensic investigation of those links found that 19% (71 links) were embedded with malware, 5% (18 links) led to phishing websites, and 41% (154 links) were associated with Whereby, a video conferencing platform operated by a company in Norway.

It is clear how some of these links could be used by a predator to harm child victims. Online predators can remotely access a child's camera by infecting their computer with spyware, and personal information harvested from phishing websites can be used against a victim. For instance, a phishing scam can give a predator a child's computer password, which can then be used to log in and control the child's camera remotely.

How can you Keep Your Child Safe From Online Predators? 

Awareness is the first step towards a safe and trustworthy online space. These attack methods are described so that parents and policymakers can protect and educate otherwise vulnerable individuals.

Now that the issue has been made transparent to videoconferencing firms, they can modify their platforms to prevent such abuse in the future. In the long run, a stronger emphasis on privacy could prevent designs that lend themselves to malicious use.

Here are some recommendations that can help keep your child safe in cyberspace:

  • Cover or disable your child's webcam when it is not in use. While this does not stop sexual abuse, it prevents online predators from spying on victims through the webcam. 
  • It is highly advised to actively monitor your child’s online activities. The anonymity of chatrooms and social media gives predators an advantage in making initial contact, which can escalate into online sexual abuse. Keep in mind that online strangers are still strangers, so it is crucial to teach your child about ‘stranger danger’.