
The Rise of Bots: Imperva's Report Reveals Rising Trends in Internet Traffic

 

In the digital realm, where human interactions intertwine with automated processes, the rise of bots has become an undeniable phenomenon reshaping internet traffic. Recent findings from cybersecurity leader Imperva unveil the multifaceted nature of this shift, shedding light on the complex interplay between legitimate and malicious bot activity.
 
At the heart of Imperva's report lies a staggering statistic: 49.6% of global internet traffic now originates from bots, the highest level recorded since the company began its analysis in 2013. This surge in bot-driven activity underscores the growing reliance on automated systems to execute tasks traditionally performed by humans. From web scraping to automated interactions, bots play a pivotal role in shaping the digital ecosystem.

However, not all bots operate with benign intentions. Imperva's study reveals a troubling trend: the proliferation of "bad bots." These nefarious entities, comprising 32% of all internet traffic in 2023, pose significant cybersecurity threats. Nanhi Singh, leading application security at Imperva, emphasizes the pervasive nature of these malicious actors, labeling them as one of the most pressing challenges facing industries worldwide. 

Bad bots, armed with sophisticated tactics, infiltrate networks with the aim of extracting sensitive information, perpetrating fraud, and spreading misinformation. From account takeovers to data breaches, the repercussions of bot-driven attacks are far-reaching and detrimental. Alarmingly, the report highlights a 10% increase in account takeovers in 2023, underscoring the urgency for proactive security measures. 

Geographical analysis further elucidates the global landscape of bot activity. Countries such as Ireland, Germany, and Mexico witness disproportionate levels of malicious bot traffic, posing significant challenges for cybersecurity professionals. Against this backdrop, organizations must adopt a proactive stance, implementing robust bot management strategies to safeguard against evolving threats. While the rise of bots presents formidable challenges, it also heralds opportunities for innovation and efficiency. 

Legitimate bots, such as AI-powered assistants like ChatGPT, enhance productivity and streamline processes. By leveraging generative AI, businesses can harness the power of automation to drive growth and innovation. Imperva's report serves as a clarion call for stakeholders across industries to recognize the complexities of internet traffic and adapt accordingly. 

As bot-driven activities continue to proliferate, a holistic approach to cybersecurity is imperative. From advanced threat detection to stringent access controls, organizations must fortify their defenses to mitigate risks and safeguard against evolving threats. 
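As a concrete illustration, one of the simplest layers of such a defense is per-client rate limiting combined with user-agent screening. The sketch below shows the idea in Python; the thresholds and the agent blocklist are illustrative assumptions, not values from Imperva's report.

```python
# A minimal sketch of one layer of bot management: a sliding-window
# rate limiter plus a crude user-agent check. All thresholds and the
# agent blocklist are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # length of the sliding window (assumed)
MAX_REQUESTS = 20     # requests allowed per client per window (assumed)
KNOWN_BOT_AGENTS = {"python-requests", "curl", "scrapy"}  # illustrative

request_log: dict[str, deque] = defaultdict(deque)

def allow_request(client_ip: str, user_agent: str) -> bool:
    """Return False for traffic that looks automated or too frequent."""
    if any(bot in user_agent.lower() for bot in KNOWN_BOT_AGENTS):
        return False
    now = time.monotonic()
    window = request_log[client_ip]
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # rate exceeded: treat as a likely bot
    window.append(now)
    return True
```

Real bot management stacks layer far more signals on top of this (TLS fingerprints, behavioral analysis, challenge pages), but the principle of scoring and throttling each client is the same.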

Imperva's comprehensive analysis sheds light on the multifaceted nature of internet traffic dominated by bots. By understanding the nuances of bot behavior and implementing proactive security measures, businesses can navigate the digital landscape with confidence, ensuring resilience in the face of emerging cyber threats.

How is Brave’s ‘Leo’ a Better Generative AI Option?


Brave Browser 

Brave is a Chromium-based browser, built around its own Brave Search engine, that restricts tracking for personalized ads.

Brave’s new product – Leo – is a generative AI assistant built on top of Anthropic's Claude and Meta's Llama 2. Leo promotes user privacy as its main feature.

Unlike many other generative AI chatbots, such as ChatGPT, Leo offers much better privacy to its users. The AI assistant does not store any of the user’s chat history, nor does it use the user’s data for training purposes.

Moreover, a user does not need to create an account in order to access Leo. And if a user opts for the premium experience, Brave will not link their account to the data they use.

Leo has been in testing for three months. Brave is now making it available to all users of the most recent 1.60 desktop browser version. As soon as Brave rolls it out to you, you should see the Leo icon in the browser’s sidebar. In the coming months, Leo support will be added to the Brave apps for Android and iOS.

Privacy with Leo AI Assistant 

User privacy has remained a major concern with ChatGPT, Google Bard, and AI products in general.

The better option among AI chatbots, whatever their innovative features, will ultimately be the one that provides stronger privacy to its users. Leo has the potential to stand out here, given that Brave promotes the chatbot’s “unparalleled privacy” up front.

Since users do not require an account to access Leo, they never have to verify an email address or phone number either. This way, the user’s contact information remains secure.

Moreover, if the user chooses the $15/month Leo Premium, they receive tokens that are not linked to their account. Brave notes that, this way, “you can never connect your purchase details with your usage of the product, an extra step that ensures your activity is private to you and only you.”

The company says, “the email you used to create your account is unlinkable to your day-to-day use of Leo, making this a uniquely private credentialing experience.”
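Brave does not spell out the underlying credential scheme in these quotes, but the classic way to make a purchase unlinkable to later usage is a blind signature: the server signs a token it never sees in the clear, and later accepts the unblinded token without any way to tie it to the account that bought it. Below is a minimal textbook-RSA sketch of that idea in Python; it illustrates the general technique only, not Brave's actual implementation.

```python
# Sketch of a blind-signature token flow: the purchase (signing) step
# cannot be linked to the redemption (spending) step. Textbook RSA,
# for illustration only -- not Brave's actual credential scheme.
import secrets
from math import gcd
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
priv = key.private_numbers()
n, e, d = priv.public_numbers.n, priv.public_numbers.e, priv.d

# --- Client: create a token and blind it before sending it in ---
token = secrets.randbelow(n)
while True:
    r = secrets.randbelow(n)        # blinding factor, known only to the client
    if r > 1 and gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n  # the server only ever sees this value

# --- Server: signs the blinded value at purchase time ---
blind_sig = pow(blinded, d, n)

# --- Client: unblinds; the result is a valid signature on `token` ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Server: later verifies the token with no way to know which
# purchase produced it ---
assert pow(sig, e, n) == token
print("token accepted; redemption is unlinkable to the purchase")
```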

Brave further notes that all Leo requests are sent via an anonymized server, meaning that Leo traffic cannot be connected to users’ IP addresses.
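Conceptually, such a relay only has to strip client-identifying metadata before forwarding each request, so the model endpoint sees the relay's address rather than the user's. A minimal sketch, assuming a hypothetical MODEL_URL endpoint and response format (this is not Brave's server code):

```python
# Minimal sketch of an anonymizing relay. MODEL_URL and the response
# field "completion" are hypothetical placeholders.
import requests

MODEL_URL = "https://model.example.com/v1/complete"

# Headers that could identify or fingerprint the original client.
IDENTIFYING_HEADERS = {"cookie", "authorization", "user-agent",
                       "x-forwarded-for", "x-real-ip", "referer"}

def relay(prompt: str, client_headers: dict[str, str]) -> str:
    """Forward a prompt to the model without client-identifying metadata."""
    safe_headers = {k: v for k, v in client_headers.items()
                    if k.lower() not in IDENTIFYING_HEADERS}
    # The upstream model only ever sees the relay's IP, never the client's.
    resp = requests.post(MODEL_URL, json={"prompt": prompt},
                         headers=safe_headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["completion"]
```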

More significantly, Brave will not retain Leo’s conversations: they are discarded immediately after a response is generated, and Leo does not learn from them. Moreover, Brave will not gather any personal identifiers, such as your IP address. Neither Leo nor the third-party model providers will collect user data. Considering that Leo is built on two external language models, this is significant.

Forget ChatGPT, Google Bard may Possess Some Serious Security Flaws


Recent research claims that Google’s AI chatbot, Bard, may let users create phishing emails and other malicious content, unlike ChatGPT.

In one such instance, cybersecurity researchers at Check Point were able to produce phishing emails, keyloggers, and some basic ransomware code using Google’s AI tool.

The researchers’ report further notes how they set out to compare Bard’s security to that of ChatGPT. From both chatbots, they attempted to obtain three things: phishing emails, malicious keyloggers, and some simple ransomware code.

The researchers describe how simply asking the AI bots to create phishing emails yielded no results; however, asking Bard to provide ‘examples’ of such emails supplied them with plentiful phishing mails. ChatGPT, on the other hand, refused to comply, claiming that doing so would amount to engaging in fraudulent activity, which is illegal.

The researchers next tried to create malware such as keyloggers, where the bots performed somewhat better. Here, neither a direct question nor a tricky one produced any result, as both AI bots declined; their refusals differed, though: Bard simply said, “I’m not able to help with that, I’m only a language model,” while ChatGPT gave a much more detailed explanation.

Later, when asked to provide a keylogger to log their own keys, both ChatGPT and Bard ended up generating malicious code, although ChatGPT attached a disclaimer before doing so.

Finally, the researchers asked Bard to produce a basic ransomware script. While this was much trickier than getting the AI bot to generate phishing emails or a keylogger, they eventually managed to get Bard into the game.

“Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT[…]Consequently, it is much easier to generate malicious content using Bard’s capabilities,” they concluded.

Why Does it Matter? 

The reason, in simple terms, is that malicious use of any new technology is inevitable.

One can conclude that such issues with emerging generative AI technologies are to be expected. AI, as a highly developed tool, has the potential to rewrite the entire cybersecurity script.

Cybersecurity experts and law enforcement have already raised these concerns, warning that AI technology can fuel ongoing innovation in cybercrime tactics such as convincing phishing emails, malware, and more. The technology has become so accessible that a cybercriminal can now deploy a sophisticated cyberattack with only a minimal hand in coding.

While regulators and law enforcement are doing their best to impose limits on the technology and ensure that it is used ethically, developers are doing their bit by training their platforms to reject requests for criminal activity.

While the generative AI market is decentralized, big companies will always be under the watch of regulatory bodies and law enforcement. Smaller companies, however, will remain on the radar for potential abuse, especially those without the resources to fight against or prevent it.

Researchers and security experts suggest that the only way to improve the overall cybersecurity posture is to fight back with full strength. AI is already being used to identify suspicious network activity and other criminal conduct, but it cannot raise the barriers to entry back to where they once were. There is no closing the door again.

AI Amplifies Crypto Scam Threat, Warns Web3 Expert

 

The utilization of artificial intelligence (AI) by cybercriminals in crypto scams has taken a concerning turn, introducing more sophisticated tactics. Jamie Burke, the founder of Outlier Ventures, a prominent Web3 accelerator, highlighted this worrisome development in an interview with Yahoo Finance UK on The Crypto Mile. Burke shed light on the evolution of AI in cybercrime and the potential consequences it holds for the security of the crypto industry.

The integration of AI into crypto scams enables malicious actors to create advanced bots that impersonate family members and trick victims into fraudulent transactions. These AI-powered bots closely mimic the appearance and speech patterns of the people they imitate, then make requests for financial assistance, such as wiring money or cryptocurrency.

Burke stressed the importance of implementing proof of personhood systems to verify the true identities of individuals involved in digital interactions.

Burke said: “If we just look at the statistics of it, in a hack you need to catch out just one person in a hundred thousand, this requires lots of attempts, so malicious actors are going to be leveling up their level of sophistication of their bots into more intelligent actors, using artificial intelligence.”  
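At a small scale, the proof-of-personhood idea Burke describes can be approximated with a challenge-response check: if the real contact has shared a public key out of band, an AI impersonator cannot answer a fresh challenge without the matching private key. A minimal sketch using Ed25519 (an illustration of the principle, not a production identity system):

```python
# Sketch of a challenge-response check against impersonation bots.
# Assumes the genuine contact shared their public key out of band.
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Setup, done once in person: the contact generates a key pair and
# gives you the public half.
contact_private = Ed25519PrivateKey.generate()  # held only by the contact
contact_public = contact_private.public_key()

# A "family member" asks you to wire crypto: send them a fresh challenge.
challenge = secrets.token_bytes(32)

# Only the genuine contact can sign it with the private key.
signature = contact_private.sign(challenge)

# Verify before acting; raises InvalidSignature if the reply is forged.
contact_public.verify(signature, challenge)
print("signature valid: the request came from the known contact")
```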

The integration of AI technology in cybercrime has far-reaching implications that raise concerns. This emerging trend provides cybercriminals with new avenues to exploit AI capabilities, deceiving unsuspecting individuals and organizations into revealing sensitive information or transferring funds.

By leveraging AI's ability to mimic human behavior, malicious actors can make it increasingly difficult for individuals to distinguish between genuine interactions and fraudulent ones. Encountering an AI-driven crypto scam can have a severe psychological impact, eroding trust and compromising the security of online engagements.

Experts emphasize the importance of cultivating a skeptical mindset and educating individuals about the potential risks associated with AI-powered scams. These measures can help mitigate the impact of fraudulent activities and promote a safer digital environment.