
The Rise of Bots: Imperva's Report Reveals Rising Trends in Internet Traffic

 

In the digital realm, where human interactions intertwine with automated processes, the rise of bots is reshaping the landscape of internet traffic. Recent findings from cybersecurity firm Imperva shed light on the complex interplay between legitimate and malicious bot activity.
 
At the heart of Imperva's report lies a staggering statistic: 49.6% of global internet traffic originates from bots, the highest level recorded since the company began its analysis in 2013. This surge in bot-driven activity underscores the growing reliance on automated systems to execute tasks traditionally performed by humans. From web scraping to automated interactions, bots play a pivotal role in shaping the digital ecosystem.

However, not all bots operate with benign intentions. Imperva's study reveals a troubling trend: the proliferation of "bad bots." These nefarious entities, comprising 32% of all internet traffic in 2023, pose significant cybersecurity threats. Nanhi Singh, leading application security at Imperva, emphasizes the pervasive nature of these malicious actors, labeling them as one of the most pressing challenges facing industries worldwide. 

Bad bots, armed with sophisticated tactics, infiltrate networks with the aim of extracting sensitive information, perpetrating fraud, and spreading misinformation. From account takeovers to data breaches, the repercussions of bot-driven attacks are far-reaching and detrimental. Alarmingly, the report highlights a 10% increase in account takeovers in 2023, underscoring the urgency for proactive security measures. 

Geographical analysis further elucidates the global landscape of bot activity. Countries such as Ireland, Germany, and Mexico witness disproportionate levels of malicious bot traffic, posing significant challenges for cybersecurity professionals. Against this backdrop, organizations must adopt a proactive stance, implementing robust bot management strategies to safeguard against evolving threats. While the rise of bots presents formidable challenges, it also heralds opportunities for innovation and efficiency. 

Legitimate bots, such as AI-powered assistants like ChatGPT, enhance productivity and streamline processes. By leveraging generative AI, businesses can harness the power of automation to drive growth and innovation. Imperva's report serves as a clarion call for stakeholders across industries to recognize the complexities of internet traffic and adapt accordingly. 

As bot-driven activities continue to proliferate, a holistic approach to cybersecurity is imperative. From advanced threat detection to stringent access controls, organizations must fortify their defenses to mitigate risks and safeguard against evolving threats. 

Imperva's comprehensive analysis sheds light on the multifaceted nature of internet traffic dominated by bots. By understanding the nuances of bot behavior and implementing proactive security measures, businesses can navigate the digital landscape with confidence, ensuring resilience in the face of emerging cyber threats.

AI Chatbots Have Extensive Knowledge About You, Whether You Like It or Not

 

Researchers from ETH Zurich have released a study showing that artificial intelligence (AI) tools, including generative AI chatbots, can accurately deduce sensitive personal information about people based solely on what they type online, including details related to race, gender, age, and location. This means that whenever individuals type prompts into ChatGPT, they may inadvertently disclose personal information about themselves.

The study's authors express concern over potential exploitation of this functionality by hackers and fraudsters in social engineering attacks, as well as the broader apprehensions about data privacy. While worries about AI capabilities are not new, they appear to be escalating in tandem with technological advancements. 

Notably, this month has witnessed significant security concerns, with the US Space Force prohibiting the use of platforms like ChatGPT due to data security apprehensions. In a year rife with data breaches, anxieties surrounding emerging technologies like AI are somewhat inevitable. 

The research on large language models (LLMs) aimed to investigate whether AI tools could intrude on an individual's privacy by extracting personal information from their online writings. 

To conduct this, researchers constructed a dataset from 520 genuine Reddit profiles, demonstrating that LLMs accurately inferred various personal attributes, including job, location, gender, and race—categories typically safeguarded by privacy regulations. Mislav Balunovic, a PhD student at ETH Zurich and co-author of the study, remarked, "The key observation of our work is that the best LLMs are almost as accurate as humans, while being at least 100x faster and 240x cheaper in inferring such personal information."

This revelation raises significant privacy concerns, particularly because the information can now be inferred at a previously unattainable scale. With this capability, users might be targeted by hackers posing seemingly innocuous questions. Balunovic further emphasized, "Individual users, or basically anybody who leaves textual traces on the internet, should be more concerned as malicious actors could abuse the models to infer their private information."

The study evaluated four models in total, with GPT-4 emerging as the top performer at inferring personal details, achieving an 84.6% accuracy rate. Meta's Llama 2, Google's PaLM, and Anthropic's Claude were also tested and trailed close behind.

An example from the study showed how the researchers' model deduced that a Reddit user hailed from Melbourne because they used the term "hook turn," a phrase commonly used in Melbourne to describe a traffic maneuver. This underscores how seemingly benign information can yield meaningful deductions for LLMs.

There was a modest acknowledgment of privacy concerns: Google's PaLM declined to respond to about 10% of the researchers' privacy-invasive prompts. Other models exhibited similar behavior, though to a lesser extent.

Nonetheless, this response falls short of significantly alleviating concerns. Martin Vechev, a professor at ETH Zurich and a co-author of the study, noted, "It's not even clear how you fix this problem. This is very, very problematic."

As the use of LLM-powered chatbots becomes increasingly prevalent in daily life, privacy worries are not a risk that will dissipate with innovation alone. All users should be mindful that the threat of privacy-invasive chatbots is evolving from 'emerging' to 'very real'.

Earlier this year, a study demonstrated that AI could decipher typed text with 93% accuracy from the sound of keystrokes recorded over Zoom, which poses a challenge for entering sensitive data like passwords.

While this recent development is disconcerting, it is crucial for individuals to be informed so they can take proactive steps to protect their privacy. Being cautious about the information provided to chatbots and recognizing that it may not remain confidential can enable individuals to adjust their usage and safeguard their data.

Revolutionizing Everyday Life: The Transformative Potential of AI and Blockchain

 

Artificial intelligence (AI) and blockchain technology have emerged as two pivotal forces of innovation over the past decade, leaving a significant impact on diverse sectors like finance and supply chain management. The prospect of merging these technologies holds tremendous potential for unlocking even greater possibilities.

Although the integration of AI within the cryptocurrency realm is a relatively recent development, it demonstrates the promising potential for expansion. Forecasts suggest that the blockchain AI market could attain a valuation of $980 million by 2030.

The potential applications of AI within blockchain, explored below, reveal its capacity to bolster the crypto industry and facilitate its integration into mainstream finance.

Elevated Security and Fraud Detection

One domain where AI can play a crucial role is enhancing the security of blockchain transactions, resulting in more robust payment systems. Firstly, AI algorithms can scrutinize transaction data and patterns, preemptively identifying and preventing fraudulent activities on the blockchain.

Secondly, AI can leverage machine learning algorithms to reinforce transaction privacy. By analyzing substantial volumes of data, AI can uncover patterns indicative of potential data breaches or unauthorized account access. This enables businesses to proactively implement security measures, setting up automated alerts for suspicious behavior and safeguarding sensitive information in real time.
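As a minimal illustration of the kind of pattern analysis described above, the sketch below trains an unsupervised anomaly detector on synthetic transaction features and flags an outlier. The features, data, and library choice (scikit-learn's IsolationForest) are assumptions for the example, not a description of any vendor's system:

    # Sketch: flagging anomalous transactions with an unsupervised model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Hypothetical features per transaction: [amount, hour_of_day, tx_per_hour]
    normal = np.column_stack([
        rng.lognormal(3, 0.5, 1000),  # typical amounts
        rng.integers(8, 22, 1000),    # daytime activity
        rng.poisson(2, 1000),         # modest frequency
    ])
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A burst of large transfers at 3 a.m. should stand out from the baseline.
    suspicious = np.array([[5000.0, 3, 40]])
    print(model.predict(suspicious))  # -1 means "anomaly"

In a real deployment the baseline would be learned from historical on-chain data, and flagged transactions would feed the automated alerts mentioned above rather than a print statement.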

Instances of AI integration are already evident. Scorechain, a crypto-tracking platform, harnessed AI to enhance anti-money laundering transaction monitoring and fortify fraud prediction capabilities. CipherTrace, a Mastercard-backed blockchain security initiative, also adopted AI to assess risk profiles of crypto merchants based on on-chain data.

In essence, the amalgamation of AI algorithms and blockchain technology fosters a more dependable and trustworthy operational ecosystem for organizations.

Efficiency in Data Analysis and Management

AI can revolutionize data collection and analysis for enterprises. Blockchain, with its transparent and immutable information access, provides an efficient framework for swiftly acquiring accurate data. Here, AI can amplify this advantage by streamlining the data analysis process. AI-powered algorithms can rapidly process blockchain network data, identifying nuanced patterns that human analysts might overlook. The result is actionable insights to support business functions, accompanied by a significant reduction in manual processes, thereby optimizing operational efficiency.

Additionally, AI's integration can streamline supply chain management and financial transactions, automating tasks like invoicing and payment processing, eliminating intermediaries, and enhancing efficiency. AI can also ensure the authenticity and transparency of products on the blockchain, providing a shared record accessible to all network participants.

A case in point is IBM's blockchain-based platform introduced in 2020 for tracking food manufacturing and supply chain logistics, facilitating collaborative tracking and accounting among European manufacturers, distributors, and retailers.

Strengthening Decentralized Finance (DeFi)

The synergy of AI and blockchain can empower decentralized finance and Web3 by facilitating the creation of improved decentralized marketplaces. While blockchain's smart contracts automate processes and eliminate intermediaries, creating these contracts can be complex. AI tools like ChatGPT employ natural language processing to simplify smart contract creation, reducing errors, enhancing coding efficiency, and broadening access for new developers.

Moreover, AI can enhance user experiences in Web3 marketplaces by tailoring recommendations based on user search patterns. AI-powered chatbots and virtual assistants can enhance customer service and transaction facilitation, while blockchain technology ensures product authenticity.

AI's data analysis capabilities further contribute to identifying trends, predicting demand and supply patterns, and enhancing decision-making for Web3 marketplace participants.

Illustrating this integration is the example of Kering, a luxury goods company, which launched a marketplace combining AI-driven chatbot services with crypto payment options, enabling customers to use Ethereum for purchases.

Synergistic Future of AI and Blockchain

Though AI's adoption within the crypto sector is nascent, its potential applications are abundant. In DeFi and Web3, AI promises to enhance market segments and attract new users. Furthermore, coupling AI with blockchain technology offers significant potential for traditional organizations, enhancing business practices, user experiences, and decision-making.

In the upcoming months and years, the evolving collaboration between AI and blockchain is poised to yield further advancements, heralding a future of innovation and progress.

AI: How to guard against privacy invasion by bots

 

Since ChatGPT entered our lives and set off a new contest in artificial intelligence, a race sometimes called AI Arms Race 2.0 has begun.

The same ChatGPT that everyone initially found impressive also raised concerns about copyright theft, privacy invasion, and even the stability of nations in terms of their economies, security, and politics.

The chatbot, now a consumer product in practically every home with millions of monthly users, has become the public face of artificial intelligence.

As a result, investments in the area have increased, and major technology companies are now competing to develop AI technologies. Concerns about the privacy of personal information have also been brought up by the introduction of ChatGPT. 

In response to these worries, regulators and lawmakers around the world have taken action, and soon they will need to decide whether to restrict, pause, or even accept different advancements in the area.

Worldwide limitations

Beyond privacy, others see AI as a public and economic issue that deserves regulators' attention, especially in Europe, where authorities want to lead and be the first to respond.

For instance, OpenAI's ChatGPT was barred from operating in Italy when Garante, the nation's data protection authority, banned its use and opened an inquiry into possible violations of privacy laws.

Such action serves as a safeguard against data leaks to nefarious parties and violations of privacy rules.

OpenAI subsequently made a number of improvements to the chatbot's operation and promised complete transparency about how it works.

Under the updated terms, people over the age of 18 can use the program freely, while those aged 13 to 18 need parental permission. Italian regulators remain concerned, because parental approval is simple to fake or alter.

Along with Italy, data protection authorities in Spain, France, Germany, Ireland, and other Western nations are examining how artificial intelligence systems gather and use data.

A task force on ChatGPT has just been established by the European Data Protection Board, which brings together the privacy authorities of Europe, to conduct investigations and take enforcement action on this contentious issue. This could result in a uniform regulation across the entire continent.

The service was restored after these episodes, although it remains under constant scrutiny. Many European data protection authorities are still pressing for new requirements for AI, such as tougher age verification, information campaigns about data processing, and stronger citizen rights regarding their data.

Additionally, concerns have been raised about the chatbot's use of people's data and whether it can be used without permission.

Alongside these enforcement efforts and investigations, the European Union has drafted a set of AI regulations that are expected to be passed in the coming months.



Innovative AI System Trained to Identify Recyclable Waste

 

According to the World Bank, approximately 2.24 billion tonnes of solid waste were generated in 2020, with projections indicating a 73% increase to 3.88 billion tonnes by 2050.

Plastic waste is a significant concern, with research from the Universities of Georgia and California revealing that over 8.3 billion tonnes of plastic was produced between the 1950s and 2015.

Training AI systems to recognize and classify various forms of rubbish, such as crumpled and dirty items like a discarded Coke bottle, remains a challenging task due to the complexity of waste conditions.

Mikela Druckman, the founder of Greyparrot, a UK start-up focused on waste analysis, is well aware of these staggering statistics. Greyparrot utilizes AI technology and cameras to analyze waste processing and recycling facilities, monitoring around 50 sites in Europe and tracking 32 billion waste objects per year.

"It is allowing regulators to have a much better understanding of what's happening with the material, what materials are problematic, and it is also influencing packaging design," says Ms Druckman.

"We talk about climate change and waste management as separate things, but actually they are interlinked because most of the reasons why we are using resources is because we're not actually recovering them.

"If we had stricter rules that change the way we consume, and how we design packaging, that has a very big impact on the value chain and how we are using resource."

Troy Swope, CEO of Footprint, is dedicated to developing better packaging solutions and has collaborated with supermarkets and companies like Gillette to replace plastic trays with plant-based fiber alternatives.

Swope criticizes the "myth of recycling" in a blog post, arguing that single-use plastic is more likely to end up in landfills than to be recycled. He advocates for reducing dependence on plastic altogether to resolve the plastic crisis.

"It's less likely than ever that their discarded single-use plastic ends up anywhere but a landfill," wrote Mr Swope. "The only way out of the plastics crisis is to stop depending on it in the first place."
 
So-called greenwashing is a big problem, says Ms Druckman. "We've seen a lot of claims about eco or green packaging, but sometimes they are not backed up with real fact, and can be very confusing for the consumer."

Polytag, a UK-based company, tackles this issue by applying ultraviolet (UV) tags to plastic bottles, enabling verification of recycling through a cloud-based app. Polytag has collaborated with UK retailers Co-Op and Ocado to provide transparency and accurate recycling data.

In an effort to promote recycling and encourage participation, the UK government, along with administrations in Wales and Northern Ireland, plans to introduce a deposit return scheme in 2025. This scheme will involve "reverse vending machines" where people can deposit used plastic bottles and metal cans in exchange for a monetary reward.

However, the challenge of finding eco-friendly waste disposal methods continues to persist, as new issues arise each year. The rising popularity of e-cigarettes and vapes has resulted in a significant amount of electronic waste that is difficult to recycle.

Disposable single-use vapes, composed of various materials including plastics, metals, and lithium batteries, pose a challenge to the circular economy. Research suggests that 1.3 million vapes are discarded per week in the UK alone, leading to a substantial amount of lithium ending up in landfills.

Ray Parmenter, head of policy and technical at the Chartered Institute of Waste Management, emphasizes the importance of maximizing the use of critical raw materials like lithium.

"The way we get these critical raw materials like lithium is from deep mines - not the easiest places to get to. So once we've got it out, we need to make the most of it," says Mr Parmenter.

Mikela Druckman highlights the need for a shift in thinking: "It doesn't make economic sense, it doesn't make any sense. Rather than ask how do we recycle them, ask why we have single-use vapes in the first place?"

In conclusion, addressing the growing waste crisis requires collaborative efforts from industries, policymakers, and consumers, with a focus on sustainable packaging, improved recycling practices, and reduced consumption.

Can Twitter Fix its Bot Crisis with an API Paywall?

 


A newly updated Twitter policy covering the application programming interface (API) has just come into force, researchers say, and the changes will have a profound impact on social media bots, both positive (RSS integration, for example) and negative (political influence campaigns).

A tweet from the Twitter development team announced that, starting February 9, the API would no longer be accessible for free. After some negative publicity, Elon Musk stepped in personally to amend the original terms of service: Twitter will continue to provide bots that produce high-quality content with a light, write-only API, free of charge.

An API provides an interface through which two software programs can interact, much as your computer provides an interface so you can easily use its many complex functions. Enterprises, educational institutions, and bot developers who build applications on Twitter are the most likely to need the API for management and analytics.
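As a concrete illustration, the snippet below sketches the kind of read access developers and researchers have long built on: a recent-tweet search against the Twitter API v2. The bearer token is a credential you would supply yourself, and under the new policy this class of access now sits behind the paywall:

    # Sketch: searching recent tweets via the Twitter API v2.
    # Assumes a bearer token in the TWITTER_BEARER_TOKEN environment variable.
    import os
    import requests

    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"},
        params={"query": "from:TwitterDev -is:retweet", "max_results": 10},
        timeout=10,
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        print(tweet["id"], tweet["text"][:80])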

Whether Twitter settles on a limited free tier or a subscription-only model, the change risks displacing smaller, less well-funded developers and academics who have relied on free access to build bots, applications, and research that provide real value for users.

It is also worth noting that Twitter has been targeted by malicious bots since its earliest days. Hackers increasingly use social media platforms to spread scams, and hostile regimes to spread fake news, to say nothing of the smaller-scale bot activity that pervades influencer culture, marketing, and general trolling.

What are the pros and cons of using a paid API to solve Twitter's influence campaigns and bot-driven problems? Several experts believe the new move is just a smokescreen to cover up the real problem. 

Bad bots on Twitter 


According to a report published by the National Bureau of Economic Research in Cambridge, Mass., in May 2018, social media bots play a significant role in shaping public opinion, particularly at the local level. The study found that Twitter bots greatly influenced the 2016 US presidential election and the UK vote on leaving the European Union. Based on the data, the aggressive use of Twitter bots, along with the fragmentation of social media and the influence of sentiment, may all have contributed to the outcome of the votes.

In the UK, the growing volume of automated pro-leave tweets may explain 1.76 percentage points of the actual pro-leave vote share, while in the US, 3.23 percentage points of the actual vote could be explained by the influence of bots.

During that election, three critical swing states - Pennsylvania, Wisconsin, and Michigan - carried enough combined electoral votes to make the difference between victory and defeat, and each was won by a mere fraction of a percent.

Bots do not need to sway world history to be dangerous; they are a useful tool for committing cybercrime at scale. Cybercriminals have been observed using Twitter bots to distribute spam and malicious links on the platform, as well as to amplify their own content and profiles.

David Maynor, director of the Cybrary Threat Intelligence Team and chief technology officer for Dark Reading, explains in an interview that bots are an enormous problem for the internet. Automated accounts taunt people so effectively that victims can spend hours or days trying to prove them wrong. Bots also give astroturfing efforts a veneer of legitimacy they do not deserve.

Astroturfing is a type of marketing strategy designed to create an impression that a product or service has been chosen by the general public in a way that appears to be an independent assessment without actually being so (hiding sponsorship information, for instance, or presenting "reviews" as objective third-party assessments). 

Are Twitter's motives hidden? 


According to some, Twitter's real motive for placing its API behind a paywall has nothing to do with security at all. And would a basic subscription plan really be strong enough to guard against a cybercrime group, or indeed a lone scammer, targeting your account? It certainly would not deter the Russian government, one of the most active operators of social media influence campaigns in the world.

There are many mobile app security platforms and cloud-based solutions that can eliminate bot traffic from mobile apps with relative ease, and Elon Musk is well aware of these technologies. Ted Miracco, CEO at Approov, says: "Bot traffic could be largely eliminated overnight if the proper technologies are implemented."

Several methods and tools exist to help social media sites, and site owners and administrators of all kinds, snuff out botnets. It is worth keeping in mind that bots tend to behave predictably: they post regularly, for example, and only in certain ways. Specialized tools exploit this, and by starting from just a few suspect accounts they can reveal entire networks of bots, as the sketch below illustrates.
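Here is a minimal sketch of one such heuristic, assuming timestamps scraped from an account's timeline: humans post at irregular intervals, while simple bots post on a near-fixed schedule, so a very low coefficient of variation in the gaps between posts is a red flag. The data and threshold are illustrative only:

    # Heuristic sketch: flag accounts whose posting intervals are suspiciously regular.
    from datetime import datetime
    from statistics import mean, stdev

    def regularity_score(timestamps):
        """Coefficient of variation of inter-post gaps; near 0 = machine-like."""
        gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
        return stdev(gaps) / mean(gaps)

    # Illustrative timeline: a post every 5 minutes, exactly.
    posts = [datetime(2023, 2, 9, 12, m) for m in range(0, 60, 5)]
    print(f"regularity score: {regularity_score(posts):.3f}")  # 0.000 -> likely a bot

Real detectors combine many such signals (content similarity, follower graphs, account age), but the principle of exploiting bot predictability is the same.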

There is a theory that naming and shaming, tying accounts to verified identities, may be critical to detecting and stopping malicious automated tweets: "This might not be popular, but it is the only way to stop bots and information operations. People and organizations must be tied to real-life accounts and organizations."

In this regard, Livnek adds: "Whilst this raises concerns about privacy and misuse of data, remember that these platforms are already mining all of the available data on the platforms to increase user engagement. Tying accounts to real-world identities wouldn't affect the platforms' data harvesting, but would instead enable them to stamp out bots and [astroturfing]."

It seems a bit extreme to remove free API access before we have exhausted all feasible security measures that might have been available to us. 

As Miracco argues, the real reason is an open secret in Silicon Valley, the elephant in the room: social media companies have come to like their bots, because bots generate revenue.

Twitter makes money by selling advertisements; that is the basis of its business model. Bots are counted as users, which means they generate advertising revenue just as real users do. More bots, more money.

Tesla CEO Elon Musk earlier threatened to pull out of his deal to buy Twitter, reportedly over revelations that a large portion of Twitter's claimed users were actually bots or other automated programs. His mood may have changed as he went from interested party to outright owner. Miracco predicts that revealing the problem now would cause a precipitous fall in traffic, so revenue must be found along the path to reduced traffic to keep the company relevant, and that, he argues, was the motivation behind the API paywall. His explanation is straightforward: the paywall is ostensibly meant to stop bots, but in truth it is meant to drive revenue.

The paywall has only just been implemented. Whether it can solve Twitter's bot problem on its own, or will merely line Musk's pockets, only time will tell.

Twitter did not immediately respond to reporters' requests for comment.

This Linux Malware Bombards Computers with DDoS Bots and Cryptominers

 

Security experts have discovered a new Linux malware downloader that plants cryptocurrency miners and DDoS IRC bots on poorly secured Linux servers. Researchers from ASEC found the attack after the downloader, compiled with the shell script compiler (SHC), was uploaded to VirusTotal. The SHC executable appears to have been uploaded by Korean users, and Korean users are also the targets.

Additional research revealed that the threat actors brute-force their way into administrator accounts over SSH on Linux servers with weak security. Once inside, they set up either a DDoS IRC bot or a cryptocurrency miner, in this case XMRig, arguably the cryptocurrency miner most popular among hackers.

It uses the computing power of a victim's endpoints to generate Monero, a privacy-focused cryptocurrency whose transactions appear to be impossible to track and whose users are allegedly impossible to identify.

Threat actors can use the DDoS IRC bot to execute commands like TCP Flood, UDP Flood, or HTTP Flood. They can execute port scans, Nmap scans, terminate various processes, clear the logs, and other operations. Malicious deployments are continuously thrown at Linux systems, most frequently ransomware and cryptojacking.

"Because of this, administrators should use passwords that are difficult to guess for their accounts and change them periodically to protect the Linux server from brute force attacks and dictionary attacks, and update to the latest patch to prevent vulnerability attacks," ASEC stated in its report. "Administrators should also use security programs such as firewalls for servers accessible from outside to restrict access by attackers."

According to a VMware report from February 2022, the continued success of Linux in the digital infrastructure and cloud industries, together with the fact that most anti-malware and cybersecurity solutions focus on protecting Windows-based devices, puts Linux in a risky position.

Global Scam Operation "Classiscam" Expanded to Singapore

 

Classiscam, a sophisticated scam-as-a-service operation, has now entered Singapore, more than 1.5 years after migrating to Europe.

"Scammers posing as legitimate buyers approach sellers with the request to purchase goods from their listings and the ultimate aim of stealing payment data," Group-IB said in a report shared with The Hacker News. 

The operators were described as a "well-coordinated and technologically advanced scammer criminal network" by the cybersecurity firm. Classiscam is a Russia-based cybercrime operation that was first detected in the summer of 2019 but only came to light a year later, coinciding with an uptick in activity driven by the surge in online shopping during the COVID-19 pandemic.

Classiscam, the pandemic's most commonly utilised fraud scheme, targets consumers who use marketplaces and services related to property rentals, hotel bookings, online bank transfers, online retail, ride-sharing, and package deliveries. Users of major Russian ads and marketplaces were initially targeted, before spreading to Europe and the United States. 

Over 90 active organisations are said to be utilising Classiscam's services to target consumers in Bulgaria, the Czech Republic, France, Kazakhstan, Kirghizia, Poland, Romania, Ukraine, the United States, and Uzbekistan. The fraudulent operation spans 64 countries in Europe, the Commonwealth of Independent States (CIS), and the Middle East, and employs 169 brands to carry out the assaults. Criminals using Classiscam are reported to have gained at least $29.5 million in unlawful earnings between April 2020 and February 2022. 

This campaign is remarkable for its dependence on Telegram bots and conversations to coordinate activities and generate phishing and scam pages. Here's how it all works: Scammers put bait advertising on famous marketplaces and classified websites, frequently promising game consoles, laptops, and cellphones at steep prices. When a potential victim contacts the seller (i.e., the threat actor) via the online storefront, the Classiscam operator dupes the target into continuing the conversation on a third-party messaging service like WhatsApp or Viber before sending a link to a rogue payment page to complete the transaction. 

The scheme rests on a hierarchy of administrators, workers, and callers. Administrators are in charge of recruiting new members, automating the creation of scam pages, and registering new accounts, while the workers create accounts on free classifieds websites and post the bait advertisements.

"Workers are key participants of the Classiscam scam scheme: their goal is to attract traffic to phishing resources," the researchers said. 

In turn, the phishing URLs are produced by Telegram bots that replicate the payment pages of local classified websites but are housed on lookalike domains. This necessitates the workers to submit the URL containing the bait product to the bot. 

"After initial contact with the legitimate seller, the scammers generate a unique phishing link that confuses the sellers by displaying the information about the seller's offer and imitating the official classified's website and URL," the researchers said. 

"Scammers claim that payment has been made and lure the victim into either making a payment for delivery or collecting the payment." 

The phishing pages also offer the option of checking the victim's bank account balance in order to find the most "valuable" cards. Furthermore, some cases involve a second attempt to deceive the victims by phoning them and requesting a refund in order to collect their money back. 

These calls are made by assistant employees posing as platform tech support professionals.  In this scenario, the targets are sent to a fraudulent payment page where they must input their credit card information and confirm it with an SMS passcode. Instead of a refund, the victim's card is charged the same amount again.

While the aforementioned method is an example of seller scam, in which a buyer (i.e., victim) receives a phishing payment link and is cheated of their money, buyer scams also exist.

A fraudster contacts a legitimate seller, posing as a customer, and sends a bot-generated fraudulent payment form imitating the marketplace, ostensibly for verification purposes. However, once the seller enters their bank card details, an amount equal to the cost of the goods is debited from their account.

Classiscammers' complete attack infrastructure consists of 200 domains, 18 of which were constructed to deceive visitors of an undisclosed Singaporean classified website. Other sites in the network masquerade as Singaporean movers, European, Asian, and Middle Eastern classified websites, banks, markets, food and cryptocurrency businesses, and delivery services.

"As it sounds, Classiscam is far more complex to tackle than the conventional types of scams," Group-IB's Ilia Rozhnov siad. "Unlike the conventional scams, Classiscam is fully automated and could be widely distributed. Scammers could create an inexhaustible list of links on the fly."

"To complicate the detection and takedown, the home page of the rogue domains always redirects to the official website of a local classified platform."

Attackers Utilizing Default Credentials to Target Businesses, Raspberry Pi and Linux Top Targets

 

While automated attacks remain a major security concern to enterprises, findings from a Bulletproof analysis highlight the challenge created by inadequate security hygiene. According to research conducted in 2021, bot traffic currently accounts for 70% of total web activity.

Default credentials are the most popular passwords used by malicious attackers, acting as a 'skeleton key' for criminal access. With attackers increasingly deploying automated attack methods, unchanged defaults offer one of the easiest ways in.

Brian Wagner, CTO at Bulletproof, stated, “On the list are the default Raspberry Pi credentials (un:pi/pwd:raspberry). There are more than 200,000 machines on the internet running the standard Raspberry Pi OS, making it a reasonable target for bad actors. We also can see what looks like credentials used on Linux machines (un:nproc/pwd:nproc). This highlights a key issue – default credentials are still not being changed.”

“Using default credentials provides one of the easiest entry points for attackers, acting as a ‘skeleton key’ for multiple hacks. Using legitimate credentials can allow attackers to avoid detection and makes investigating and monitoring attacks much harder.” 

According to the findings, attackers continuously reuse the same common passwords to gain access to systems. Some are default passwords that haven't been changed since the company started using them. The RockYou database leak from December 2009 accounts for a quarter of all passwords used by attackers today, a level of activity suggesting these passwords are still valid.
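One practical countermeasure is to screen passwords against known breach corpora before allowing them. As a minimal sketch, the function below queries the public Pwned Passwords range API (which indexes RockYou-era leaks among others) using k-anonymity, so only the first five characters of the SHA-1 hash ever leave the machine:

    # Sketch: check how often a password appears in known breach data.
    import hashlib
    import requests

    def times_pwned(password):
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    # The default Raspberry Pi password appears in breach data many times over.
    print(times_pwned("raspberry"))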

During the period of the research, threat actors started almost 240,000 sessions. The top IP address, which came from a German server, started 915 sessions and stayed on the Bulletproof honeypot for a total of five hours. Another attacker spent 15 hours on the honeypot, successfully logging in 29 times with more than 30 different passwords. In sum, 54 per cent of the more than 5,000 distinct IP addresses had intelligence indicating they were bad actor IP addresses.

Wagner continued, “Within milliseconds of a server being put on the internet, it is already being scanned by all manner of entities. Botnets will be targeting it and a host of malicious traffic is then being driven to the server.” 

“Although some of our data shows legitimate research companies scanning the internet, the greatest proportion of traffic we encountered to our honeypot came from threat actors and compromised hosts. These insights, combined with our data, highlight the importance of proactive monitoring to ensure you are aware of the threats to your business on a daily basis, as well as a tried and tested incident response plan.”

Analysts Warn of Telegram Powered Bots Stealing Bank OTPs

 

In the past few years, two-factor authentication has been one of the simplest ways for users to safeguard their accounts. It has now become a major target for threat actors. The cybersecurity firm Intel 471 has observed a rise in services that let threat actors intercept OTP (one-time password) tokens. All of the services Intel 471 has seen since June operate via a Telegram bot or provide support to customers through a Telegram channel. In these support channels, users often share their exploits with the bot, sometimes walking away with thousands of dollars from target accounts.
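For context on what is being stolen: a standard one-time password is a short code derived from a shared secret and the current time window (RFC 6238), typically valid for about 30 seconds. The sketch below is a minimal implementation of that scheme, with an illustrative demo secret; the bots described here do not break this math, they simply socially engineer the victim into handing over the code while it is still valid:

    # Minimal RFC 6238 time-based OTP: HMAC of the current 30-second window.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, digits=6, step=30):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; code changes every 30 s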

Recently, threat actors have been offering services that call victims with what sounds, on the surface, like a genuine call from their bank, fooling them into typing an OTP or other authentication code into their phone; the code is then relayed to the service's customer. Some services also target other popular financial services and social media platforms, offering SIM swapping and e-mail phishing as well. According to experts, a bot known as SMSRanger is very easy to use: with one slash command, a user can enable various modes and scripts aimed at banks and payment apps such as Google Pay, Apple Pay, PayPal, or a wireless carrier.

Once the victim's phone number has been entered, the bot does the rest of the work, ultimately granting access to the targeted account. The bot's success rate is around 80%, provided the victim answers the call and supplies correct information. BloodOTPBot, a bot similar to SMSRanger, instead sends the victim a fake OTP code via text message. In this case, the hacker has to spoof the target's phone number and pose as a bank or company agent, after which the bot tries to extract the authentication code through social engineering.

The bot forwards the code to the operator once the target receives the OTP and types it in. A third bot, known as SMS Buster, requires more effort from the attacker to retrieve information. It fakes a call so that it looks like a genuine call from a bank and lets hackers call from any phone number, following a script to trick the victim into giving up personal details such as an ATM PIN, CVV, and OTP.

Scammers are Using Twitter Bots for PayPal and Venmo Scams

 

Internet scammers are using Twitter bots to trick users into making PayPal and Venmo payments to accounts under their control. Venmo and PayPal are popular online payment services used for things like charity donations or goods such as resold event tickets. This latest campaign, however, is a stark warning against arranging or revealing any sort of transaction on a public platform.

How fraudsters operate

The fraud campaign begins when a well-meaning friend asks a person in need for a specific money-transfer account: PayPal or Venmo. Then the Twitter bot springs into action, presumably identifying these tweets via a search for keywords such as ‘PayPal’ or ‘Venmo’.

Within minutes, the Twitter bot impersonates the original poster, scraping the profile picture and adopting a similar username, in order to substitute its own payment account for that of the person who really deserves the money.

Twitter user ‘Skye’ (@stimmyskye) posted a screenshot detailing how she was targeted by a Twitter bot. Skye noted that the bot blocks the account it is mimicking and scrapes the whole profile.

“Because you’re blocked, you’ll see that there’s one reply to that question but the reply tweet won’t show up. If you see a ghost reply to a comment like that, it’s almost always a scam bot. They delete as fast as they clone your account. You won’t even know it happened,” Skye wrote.

“They will delete the reply tweet, but the account itself will usually not be deleted, just change the username. So, the accounts are usually not brand new, they even have followers. You need to check closely,” she warned. 

“Given that the mechanism is automated, I’m willing to bet that the attack is fairly successful. A Twitter user would need to pay close attention to what is going on in order to notice what’s happened. Don’t publicly link to your PayPal (or similar) account – deal with payments via direct message instead. By doing this, the scam bot won't be triggered, and wouldn't be able to show up in the same chain of direct messages even if it was,” Andy Patel, researcher with F-Secure’s Artificial Intelligence Center of Excellence, advised users.

Microsoft shuts down World's Largest Botnet Army


According to Microsoft, the company was part of a team that took down a global network of zombie bots. Necurs, one of the largest botnets in the world, is responsible for attacking more than 9 million computers and is infamous for multiple criminal cyberattacks, including sending phishing emails such as fake pharmaceutical spam and stealing personal user data. Hackers use botnets to take remote control of internet-connected systems and install malware, which they then use to steal personal data, monitor user activity on the computer, send spam and fake e-mails, and modify or delete user information without the owner's knowledge.


Taking down Necurs came after eight years of consistent hard work and patience, along with coordinated planning across 35 countries, says Tom Burt, VP of customer security and trust at Microsoft. According to Burt, now that the botnet's network is down, hackers will no longer be able to use it to execute cyberattacks.

About Botnets

Botnets are networks of internet-connected computers that run automated commands. Hackers use these networks to deliver malware that gives them remote access to a computer. Once the malware is installed, hackers steal personal user information or use the infected device as a host to launch further attacks, sending spam and malware. An infected device is called a zombie.

Origin of the Necurs Botnet

News of the first Necurs attack appeared in 2012. According to experts, Necurs has affected more than 9 million computers. Necurs used a domain generation algorithm (DGA) to grow its network: it turned algorithmically generated domain names into websites used to send spam and malware to infected computers. Fortunately, Microsoft and its partners deciphered the algorithm's pattern and predicted the domains Necurs would use to launch its next attacks, preventing them from happening.
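To see why cracking the algorithm mattered, consider a toy domain generation algorithm like the sketch below (this is not Necurs's real algorithm, which was far more complex): because the output depends only on a seed and the date, defenders who recover both can enumerate and block or sinkhole future domains before the botnet ever registers them.

    # Toy DGA: deterministic pseudorandom domains from a seed and time period.
    import hashlib

    def toy_dga(seed, period, count=5):
        domains = []
        for i in range(count):
            digest = hashlib.md5(f"{seed}-{period + i}".encode()).hexdigest()
            # Map hex characters to letters to form a pseudorandom label.
            label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
            domains.append(label + ".net")
        return domains

    # Knowing the seed and schedule, defenders can predict tomorrow's domains today.
    print(toy_dga(seed="demo-seed", period=202003))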

Signs your computer might be affected

  • Systems run slowly and programs load slowly 
  • The computer crashes frequently 
  • Storage fills up suspiciously 
  • Your account sends spam emails to your contacts