
Google's 6 Essential Steps to Mitigate Risks in Your A.I. System

 

Generative A.I. has the potential to bring about a revolutionary transformation in businesses of all sizes and types. However, the implementation of this technology also carries significant risks. It is crucial to ensure the reliability of the A.I. system and protect it from potential hacks and breaches. 

The main challenge lies in the fact that A.I. technology is still relatively young, and there are no widely accepted standards for constructing, deploying, and maintaining these complex systems.

To address this issue and promote standardized security measures for A.I., Google has introduced a conceptual framework called SAIF (Secure AI Framework).

In a blog post, Royal Hansen, Google's vice president of engineering for privacy, safety, and security, and Phil Venables, Google Cloud's chief information security officer, emphasized the need for both public and private sectors to adopt such a framework.

They highlighted the risks associated with confidential information extraction, hackers manipulating training data to introduce faulty information, and even theft of the A.I. system itself. Google's framework comprises six core elements aimed at safeguarding businesses that utilize A.I. technology. 

Here are the six core elements of Google's A.I. framework, and how each can help safeguard your business:

  • Establish a strong foundation:
First, assess the standard protections your existing digital infrastructure already provides. Bear in mind, however, that these measures may need to be adapted to effectively counter A.I.-specific security risks. After evaluating how your current controls map onto your A.I. use case, develop a plan to address any identified gaps.

  • Enhance threat detection capabilities:
Google emphasizes the importance of swift response to cyberattacks on your A.I. system. One crucial aspect to focus on is the establishment of robust content safety policies. Generative A.I. has the ability to generate harmful content such as imagery, audio, and video. By implementing and enforcing content policies, you can safeguard your system from malicious usage and protect your brand simultaneously.
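Content policy enforcement of the kind described above can be sketched in a few lines. This is a toy illustration, not Google's tooling: real systems rely on trained safety classifiers, and the category names below are made up.

```python
# Toy illustration: enforcing a content safety policy on generated output.
# Category labels are hypothetical; production systems use ML classifiers,
# not static blocklists.
BLOCKED_CATEGORIES = {"violence", "self-harm", "hate"}

def policy_check(classifier_labels):
    """Reject output whose classifier labels intersect the blocklist."""
    violations = BLOCKED_CATEGORIES & set(classifier_labels)
    if violations:
        return ("blocked", sorted(violations))
    return ("allowed", [])

print(policy_check({"sports"}))            # ('allowed', [])
print(policy_check({"hate", "politics"}))  # ('blocked', ['hate'])
```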

  • Automate your defenses:
To protect your system from threats like data breaches, malicious content creation, and A.I. bias, Google suggests deploying automated solutions such as data encryption, access control, and automatic auditing. These automated defenses are powerful and often eliminate the need for manual tasks, such as reverse-engineering malware binaries. However, human intervention is still necessary to exercise judgment in critical decisions regarding threat identification and response strategies.
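One of the automated defenses mentioned above, automatic auditing of access controls, might look like the following sketch. The account inventory, role names, and 90-day key-rotation rule are illustrative assumptions, not part of Google's framework.

```python
# Illustrative sketch: an automated access-control audit over a
# hypothetical inventory of service accounts. Roles and thresholds
# are assumptions for the example only.
OVERLY_BROAD = {"owner", "admin", "*"}

def audit_accounts(accounts):
    """Flag accounts with overly broad roles or stale credentials."""
    findings = []
    for acct in accounts:
        if OVERLY_BROAD & set(acct["roles"]):
            findings.append((acct["name"], "overly broad role"))
        if acct["key_age_days"] > 90:
            findings.append((acct["name"], "credential older than 90 days"))
    return findings

accounts = [
    {"name": "model-trainer", "roles": {"ml.trainer"}, "key_age_days": 30},
    {"name": "legacy-etl", "roles": {"admin"}, "key_age_days": 400},
]
print(audit_accounts(accounts))
# flags only "legacy-etl", once for each problem
```

A check like this can run on a schedule, turning a periodic manual review into one of the automated defenses the framework calls for.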

  • Maintain a consistent strategy:
Once you integrate A.I. into your business model, establish a process to periodically review its usage within your organization. In case you observe different controls or frameworks across different departments, consider implementing a unified approach. Fragmented controls increase complexity, result in redundant efforts, and raise costs.

  • Be adaptable:
Generative A.I. is a rapidly evolving field, with new advancements occurring daily. Consequently, threats are constantly evolving as well. Conducting "red team" exercises, which involve ethical hackers attempting to exploit system vulnerabilities, can help you identify and address weaknesses in your system before they are exploited by malicious actors.

  • Determine risk tolerance:
Before implementing any A.I.-powered solutions, it is essential to determine your specific use case and the level of risk you are willing to accept. Armed with this information, you can develop a process to evaluate different third-party machine learning models. This assessment will help you match each model to your intended use case while considering the associated level of risk.
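A risk-tolerance assessment like the one described could be reduced to a simple scoring rubric. The criteria and weights below are entirely hypothetical; a real assessment would be tailored to your use case.

```python
# Hypothetical sketch: scoring third-party ML models against a declared
# risk tolerance. Criteria and weights are illustrative assumptions.
def risk_score(model):
    score = 0
    if not model["provenance_documented"]:
        score += 3  # unknown training data and lineage is the biggest risk
    if model["handles_pii"]:
        score += 2  # personal data raises the stakes of any breach
    score += model["known_cves"]  # one point per known vulnerability
    return score

def acceptable(model, tolerance):
    """True if the model's risk score is within the declared tolerance."""
    return risk_score(model) <= tolerance

candidate = {"provenance_documented": True, "handles_pii": True, "known_cves": 0}
print(acceptable(candidate, tolerance=2))  # True
```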

Overall, while generative A.I. holds enormous potential for businesses, it is crucial to address the security challenges associated with its implementation. Google's Secure AI Framework offers a comprehensive approach to mitigate risks and protect businesses from potential threats. By adhering to the core elements of this framework, businesses can safeguard their A.I. systems and fully leverage the benefits of this transformative technology.

House GOP Considers Robot Dogs for Border Patrol

 

The deployment of modern robotic technology to improve border security was the focus of a recent House GOP meeting. The discussions centered on the prospective use of robot canines to patrol US borders, which would be a significant advancement in the continuing campaign to safeguard the country's frontiers.

The House GOP's consideration of this cutting-edge technology follows a series of debates on bolstering border security and immigration control. The proposal aims to leverage the capabilities of robot dogs to supplement the efforts of law enforcement agencies in monitoring and safeguarding the vast stretches of the US borders.

One of the primary motivations behind exploring this initiative is the robot dogs' ability to access remote and difficult terrains, where traditional border patrol methods may encounter challenges. By deploying these agile and adaptable machines, authorities hope to increase their presence in areas that are not easily accessible by human agents, thereby enhancing overall surveillance and response capabilities.

The tech industry has made significant strides in the development of sophisticated robotic devices, and the deployment of robot dogs for border security is gaining traction worldwide. These robots are equipped with state-of-the-art sensors, cameras, and artificial intelligence, allowing them to detect and track movement with impressive accuracy. Additionally, their non-threatening appearance enables them to blend into their surroundings, making them less likely to be detected or targeted.

Despite the potential benefits of using robot dogs for border patrol, the debates have raised ethical and privacy concerns. Critics argue that sophisticated surveillance tools such as robot dogs could violate people's right to privacy and expand the monitoring of border communities. These concerns underscore the need for a balanced strategy that secures the border while upholding the rights and dignity of local residents.

Representative Alexandria Ocasio-Cortez (AOC) has tweeted her opposition to the measure, emphasizing the importance of addressing privacy concerns and establishing accountability and transparency in the use of such technology. Her position reflects broader public skepticism about robotic surveillance equipment.

The House Oversight Committee has scheduled a hearing titled "Using Cutting-Edge Technologies to Keep America Safe" in response to issues brought up by politicians and the general public. This hearing seeks expert advice on developing a strategy that strikes a balance between safety and privacy concerns while delving deeper into the possible advantages and hazards of using robot dogs for border patrol.

How LofyGang Is Using Discord In A Massive Credential Stealing Attack

 

Checkmarx researchers have mapped out a complex web of criminal activity that all points back to a threat actor known as LofyGang. This group of cybercriminals provides free hacking tools, Discord-related npm packages, and other services to other nefarious actors and Discord users. These tools, packages, and services, however, come with a hidden cost: the theft of users' accounts and credit card credentials. 

The researchers discovered at least 200 malicious npm packages uploaded to the official npm website by various LofyGang sock puppet accounts. These npm packages look like genuine packages that enable users to interact with the Discord API. LofyGang dupes users into installing malicious packages instead of legitimate ones by uploading multiple versions of its packages with different misspellings of popular packages.

In order to give their malicious packages credibility on the npm website, the group also ties their npm packages to active and reputable GitHub repositories. An unsuspecting user who enters a typo while searching for a legitimate package may come across a listing for one of these malicious packages, fail to notice the misspelling, and install the package.
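The typosquatting trick described above can be caught with a simple string-similarity check. This is a minimal sketch using Python's standard library; the package names and the 0.85 threshold are illustrative assumptions, not a real registry defense.

```python
# Minimal typosquat detector: flag package names that are suspiciously
# close to, but not identical with, a popular package name.
from difflib import SequenceMatcher

POPULAR = ["discord.js", "discord-api", "colors"]  # illustrative list

def looks_typosquatted(name, threshold=0.85):
    """Return the popular package a name imitates, or None."""
    for pkg in POPULAR:
        ratio = SequenceMatcher(None, name, pkg).ratio()
        if name != pkg and ratio >= threshold:
            return pkg
    return None

print(looks_typosquatted("discord.jss"))  # 'discord.js'
print(looks_typosquatted("discord.js"))   # None (exact match is fine)
```

Registries and CI pipelines can run a check like this against a list of the most-downloaded packages before an install is allowed to proceed.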

Unfortunately for those who install malicious npm packages, the packages are designed to steal users' account and credit card information. However, rather than containing malicious code directly, these packages rely on secondary packages that contain malicious code. Because malware is hidden in dependencies, the original malicious packages are less likely to be reported as malicious and removed from the npm website.

If one of the malicious dependencies is reported and removed, the threat actor can simply upload a new malicious dependency and push an update to the user's original npm package, instructing it to rely on this new malicious dependency.
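One heuristic against this dependency-swapping trick is to flag dependencies that were published very recently, since a freshly uploaded replacement dependency is exactly what the scheme relies on. The sketch below is illustrative only: the manifest, package names, and registry metadata are all made up.

```python
# Illustrative heuristic: flag dependencies of a package manifest that
# were published within the last few days. Manifest and registry data
# are hypothetical examples.
from datetime import datetime, timedelta

def suspicious_deps(manifest, registry, now, max_age_days=7):
    """Return dependencies first published less than max_age_days ago."""
    flagged = []
    for dep in manifest["dependencies"]:
        age = now - registry[dep]["published"]
        if age < timedelta(days=max_age_days):
            flagged.append(dep)
    return flagged

manifest = {"dependencies": ["discord-lofy", "lodash"]}
registry = {
    "discord-lofy": {"published": datetime(2022, 9, 29)},
    "lodash": {"published": datetime(2015, 3, 1)},
}
print(suspicious_deps(manifest, registry, now=datetime(2022, 10, 1)))
# ['discord-lofy']
```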

LofyGang distributes malicious hacking tools on GitHub in addition to malicious npm packages. The hacking tools, like the npm packages, are usually Discord-related. These programmes also contain malicious dependencies that steal account and credit card information. LofyGang promotes these tools on a variety of platforms, including YouTube, where the group posts tool tutorials.

The LofyGang's Discord server, which has been operational since October 2021, is another avenue for promoting the group's malicious hacking tools. Users can join this Discord server to get assistance with the tools. The server also includes a Discord bot that can grant users a free Discord Nitro subscription using stolen credit card information. 

However, in order to use the bot, users must provide their Discord account credentials, which LofyGang is likely to add to the growing list of credentials stolen by its malicious packages and tools. At the end of the day, Checkmarx's report shows that anyone using LofyGang's packages, tools, and services, whether they realise it or not, is handing over their account and credit card credentials.

PseudoManuscrypt Malware Spreading Like CryptBot, Targets South Koreans

 

Since at least May 2021, a botnet known as PseudoManuscrypt has been targeting Windows workstations in South Korea, using the same delivery methods as another malware known as CryptBot. 

South Korean cybersecurity company AhnLab Security Emergency Response Center (ASEC) said in a published report, "PseudoManuscrypt is disguised as an installer that is similar to a form of CryptBot and is being distributed. Not only is its file form similar to CryptBot but it is also distributed via malicious sites exposed on the top search page when users search commercial software-related illegal programs such as Crack and Keygen."
  
According to ASEC, approximately 30 computers in the country are compromised each day on average. PseudoManuscrypt first came to light in December 2021, when Russian cybersecurity firm Kaspersky revealed details of a "mass-scale spyware attack campaign" that infected over 35,000 PCs in 195 countries around the world. 

PseudoManuscrypt attacks, first observed in June 2021, targeted a large number of industrial and government institutions, including military-industrial complex firms and research organizations in Russia, India, and Brazil, among others. The primary payload module has a wide range of spying capabilities, giving the attackers virtually complete control over the compromised device. These capabilities include stealing VPN connection data, recording audio with the microphone, and capturing clipboard contents and operating system event log data. 

Additionally, PseudoManuscrypt can connect to a remote command-and-control server controlled by the attacker to perform malicious tasks such as downloading files, executing arbitrary commands, logging keystrokes, and capturing screenshots and videos of the screen. 

The researchers added, "As this malware is disguised as an illegal software installer and is distributed to random individuals via malicious sites, users must be careful not to download relevant programs. As malicious files can also be registered to service and perform continuous malicious behaviours without the user knowing, periodic PC maintenance is necessary."

New Robocall Bot on Telegram Can Trick Targets Into Giving Up Their Passwords

 

Researchers at CyberNews have identified a new form of automated social engineering tool that can harvest one-time passwords (OTPs) from users in the United States, the United Kingdom, and Canada. 

Without any direct interaction with a human scammer, the so-called OTP Bot can mislead victims into handing criminals the credentials to their bank accounts, email, and other internet services. 

A new type of bot-for-hire is conquering the field of social engineering: OTP Bot, the latest form of malicious Telegram bot, uses robocalls to trick unsuspecting victims into handing over their one-time passwords, which fraudsters then use to log in and empty their bank accounts. Even worse, the newfangled bot's userbase has exploded in recent weeks, with tens of thousands of people signing up. 
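One-time passwords of the kind OTP Bot harvests are typically generated with the TOTP algorithm (RFC 6238): a shared secret plus the current 30-second window produce a short code. A minimal sketch, using RFC 6238's published test secret, shows why the six digits alone are all a robocall needs to extract:

```python
# Minimal RFC 6238 TOTP implementation using only the standard library.
import base64
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59 -> 94287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59, digits=8))  # 94287082
```

The code is valid for anyone who presents it within the window, which is exactly the property phone-based social engineering exploits; this is why phishing-resistant factors such as hardware security keys are preferred over OTPs.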

How Does OTP Bot Work?

OTP Bot is the latest example of the emerging Crimeware-as-a-Service model, in which cybercriminals rent out destructive tools and services to anybody willing to pay, according to CyberNews expert Martynas Vareikis. Once purchased, OTP Bot enables users to collect one-time passwords from unsuspecting people by simply typing the target's phone number, along with any extra information obtained from data leaks or the black market, into the bot's Telegram chat window. 

“Depending on the service the threat actor wishes to exploit, this additional information could include as little as the victim’s email address,” says Vareikis. The bot is being marketed on a Telegram chat channel with over 6,000 users, allowing its owners to make a lot of money by selling monthly memberships to cybercriminals. Meanwhile, its users brag about their five-figure profits from robbing their targets' bank accounts. 

Bot-for-hire services, according to Jason Kent, a hacker in residence at Cequence Security, have already commoditized the automated threat industry, making it very easy for criminals to enter into social engineering. 

Kent told CyberNews, “At one time, a threat actor would need to know where to find bot resources, how to cobble them together with scripts, IP addresses, and credentials. Now, a few web searches will uncover full Bot-as-a-Service offerings where I need only pay a fee to use a bot. It’s a Bots-for-anyone landscape now and for security teams.” 

Gift cards make the scam go-round: 

Card linking is the most common scamming tactic used by OTP Bot subscribers. It involves linking a victim's credit card to the scammer's mobile payment app account and then using it to purchase gift cards in physical stores.

“Credit card linking is a favorite among scammers because stolen phone numbers and credit card information are relatively easy to come by on the black market,” reckons Vareikis. 

“With that data in hand, a threat actor can choose an available social engineering script from the chat menu and simply feed the victim’s information to OTP Bot.” 

Using a spoofed caller ID, the bot then calls the victim's number, posing as a support representative, and tries to trick them into revealing the one-time password needed to log in to the victim's Apple Pay or Google Pay account. After logging in with the stolen one-time password, the threat actor can link the victim's credit card to the payment app and go on a gift-card buying spree in a nearby physical store. 

Scammers use the linked credit cards to buy prepaid gift cards for one simple reason: they leave no financial footprint. This has proved particularly useful during the pandemic, when mask mandates in almost all indoor spaces make it considerably easier for criminals to conceal their identities throughout the process. 

Since its release on Telegram in April, the service looks to be gaining a lot of momentum, especially in the last few weeks. The OTP Bot Telegram channel currently has 6,098 members, a massive 20 percent growth in just seven days. 

The ease of use and the bot-for-hire model, which let unskilled or even first-time fraudsters rob their victims with minimal effort and zero social contact, appear to be among the reasons for the rapid rise. In fact, some OTP Bot users openly broadcast their success stories in the Telegram channel, boasting to other members about their ill-gotten gains. 

Based on the popularity of OTP Bot, it's apparent that this new sort of automated social engineering tool will only gain more popularity. Indeed, it'll only be a matter of time until a slew of new knockoff services hit the market, attracting even more fraudsters looking to make a fast buck off unsuspecting victims. 

The creator of Spyic, Katherine Brown, warns that as more bots enter the market, the opportunities for social engineering and abuse will grow exponentially. “This year we’ve already seen bots emerge that automate attacks against political targets to drive public opinion,” says Brown. 

The growth of social engineering bots-for-hire is even more alarming, according to Dr. Alexios Mylonas, senior cybersecurity lecturer at the University of Hertfordshire, since the pandemic has put greater limitations on our social connections. 

“This is particularly true for those who are not security-savvy. Threat actors are known to use automation and online social engineering attacks, which enables them to optimize their operations, to achieve their goals and the CyberNews team has uncovered yet another instance of it,” Mylonas told CyberNews. 

How to Recognize Social Engineering Scams?

Keeping all of this in mind, understanding how to detect a social engineering attempt is still critical for protecting money and personal information. Here's how to do it: 

1. Don't answer calls from unknown numbers. 

2. Never give out personal information: names, usernames, email addresses, passwords, PINs, and any other information that could be used to identify you. 

3. Don't fall for false urgency: scammers frequently manufacture a sense of urgency to pressure targets into handing over their personal information. If someone is pushing you to make a decision, hang up or say you will call back later, then dial the official toll-free number of the firm they claim to represent. 

4. Don't trust caller ID: by mimicking names and phone numbers, scammers can impersonate a firm or someone from your contact list. 

Financial service companies, on the other hand, never call their clients to validate personal information. They will simply block the account if they detect suspicious behavior and expect the user to contact the firm through official means to fix the problem. As a result, be watchful, even if the caller ID on your phone screen appears to be legitimate.

Deepfake Bots on Telegram, Italian Authorities Investigating

 

Cybercriminals are using a newly created artificial intelligence bot to generate and share deepfake nude images of women on the messaging platform Telegram. The Italian Data Protection Authority has opened an investigation following a report by visual threat intelligence firm Sensity, which exposed the 'deepfake ecosystem' and estimated that some 104,852 fake images had been created and shared with a large audience via public Telegram channels as of July 2020. 
 
The bots are programmed to create fake nudes that are watermarked or only partially undressed; users then pay to have the full photo revealed. To use the service, a person simply submits a picture of any woman to the bot and receives a version in which her clothes have been digitally removed by software called "DeepNude", which uses neural networks to make images appear "realistically nude". Sometimes the service is even offered free of charge. 
 
The programmer who created DeepNude claims to have taken the app down long ago. However, the software is still widely accessible in open-source repositories for cybercriminals to exploit, and according to Sensity it has allegedly been reverse-engineered and made available on torrenting websites. 
 
In a conversation with Motherboard, Danielle Citron, professor of law at the University of Maryland Carey School of Law, called it an "invasion of sexual privacy", "Yes, it isn’t your actual vagina, but... others think that they are seeing you naked."   

"As a deepfake victim said to me—it felt like thousands saw her naked, she felt her body wasn’t her own anymore," she added. 
 
More than 50% of these pictures are being obtained through victims' social media accounts or from anonymous sources. The women who are being targeted are from all across the globe including the U.S., Italy, Russia, and Argentina.
 
Quite alarmingly, the bot has also been observed sharing child sexual abuse material, as most of the circulated pictures belonged to underage girls. Sensity, which is headquartered in Amsterdam, also reported that the Telegram network is made up of approximately 101,080 members. 

In an email to Motherboard, the anonymous creator of DeepNude, who goes by the name Alberto, confirmed that the software only works on women because nude pictures of women are easier to find online; however, he is planning to make a male version too. The software is based on an open-source algorithm called "pix2pix", which uses generative adversarial networks (GANs). 
 
"The networks are multiple because each one has a different task: locate the clothes. Mask the clothes. Speculate anatomical positions. Render it," he told. "All this makes processing slow (30 seconds in a normal computer), but this can be improved and accelerated in the future."

Hacking Attack Neutralized: France



French authorities have neutralized a hacking operation that had taken control of some 850,000 computers, and the malware has since been removed from the infected devices.

According to sources, the devices were taken over by Retadup, a software worm whose command infrastructure was located in the Paris region.

The sheer number of infected computers indicates that this was a massive operation on the part of the hackers.

Police officials created a copy of the command-and-control server through which the hackers got into systems and took control, and used it to neutralize the malware on infected machines.

Owners of infected computers were advised to remove the Retadup malware, which according to researchers was being used to mine the Monero cryptocurrency.

A few suggestions from the researchers to guard against malware attacks:
  • Don't open emails from unknown senders.
  • Don't click attachments that claim to offer free anti-virus software.
  • Install and activate anti-virus software immediately.