
Smash and Grab: Meta Takes Down Disinformation Campaigns Run by China and Russia

 

Meta, Facebook’s parent company, has confirmed that it has taken down two significant but unrelated disinformation operations originating from China and Russia. 

The campaigns began in May 2022, targeting media users in Germany, France, Italy, Ukraine, and the UK. They attempted to influence public opinion in the West by pushing fake narratives about US elections and the war in Ukraine. 

The operation spoofed around 60 websites, impersonating legitimate news outlets such as The Guardian in the UK and Bild and Der Spiegel in Germany. The sites not only imitated the format and design of the original news sites but, in some cases, also copied photos and bylines from their reporters. 

“There, they would post original articles that criticized Ukraine and Ukrainian refugees, supported Russia, and argued that Western sanctions on Russia would backfire […] They would then promote these articles and also original memes and YouTube videos across many internet services, including Facebook, Instagram, Telegram, Twitter, petitions websites Change.org and Avaaz, and even LiveJournal,” Meta stated in a blog post. 

In the wake of this incident, Facebook and Instagram have reportedly removed nearly 2,000 accounts, more than 700 pages, and one group. Meta also identified around $105,000 in advertising spend. While Meta has been actively quashing fake websites, more spoofed sites continue to appear.  

“It presented an unusual combination of sophistication and brute force,” wrote Meta’s Ben Nimmo and David Agranovich in a blog post announcing the takedowns. “The spoofed websites and the use of many languages demanded both technical and linguistic investment. The amplification on social media, on the other hand, relied primarily on crude ads and fake accounts.” 

“Together, these two approaches worked as an attempted ‘smash-and-grab’ against the information environment, rather than a serious effort to occupy it long term.” 

Both operations have now been taken down for violating Meta’s “coordinated inauthentic behaviour” policy, defined as “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation”. 

Addressing the rise of such campaigns, Ben Nimmo added, “We know that even small operations these days work across lots of different social media platforms. So the more we can share information about it, the more we can tell people how this is happening, the more we can all raise our defences.”

Hacktivists Behind Data Theft From 30 Million People Detained in Ukraine

The Security Service of Ukraine's (SSU) cyber division has broken up a group of hackers responsible for the theft of data belonging to roughly 30 million people. 

According to the SSU, its cyber branch dismantled a group of hacktivists who stole 30 million accounts and sold the data on the dark web. The agency says the group sold these accounts for about UAH 14 million ($375,000). 

As stated by the SSU, the hackers sold data packs that pro-Kremlin propagandists bought in bulk and then utilized the accounts to distribute false information on social media, generate panic, and destabilize Ukraine and other nations. 

The group received its funds through YuMoney, Qiwi, and WebMoney, payment services that are not permitted in Ukraine. During raids on the attackers' homes in Lviv, Ukraine, police discovered and seized numerous hard drives containing stolen personal data, along with desktops, SIM cards, mobile phones, and flash drives. 

The fraudsters gathered sensitive data and login credentials by infecting systems with malware, targeting machines in the European Union and Ukraine. The group's organizer has been placed under investigation under Part 1 of Article 361-2 of the Ukrainian Criminal Code, which covers the unauthorized sale of information with restricted access.

The number of people detained is still unknown, but they are all charged criminally with selling or disseminating restricted-access material stored in computers and networks without authorization. There are lengthy prison terms associated with these offenses.

The gang's primary clients were pro-Kremlin propagandists who utilized the stolen accounts in their destabilizing misinformation efforts in Ukraine and other nations.

In March, the SSU took down five bot farms that employed 100,000 fake social media profiles to spread misinformation around the country. In August, Ukrainian authorities found and destroyed a huge bot farm running one million bots.

The SSU discovered two further botnets in September that were using 7,000 accounts to propagate false information on social media.

Purpose-made fake accounts are frequently easier to recognize; by using accounts belonging to real people, with their established post history and natural activity, operators greatly reduce the likelihood that an operation will be discovered.

According to Europol, Deepfakes are Used Frequently in Organized Crime

 

The Europol Innovation Lab recently released its inaugural report, titled "Facing reality? Law enforcement and the challenge of deepfakes", as part of its Observatory function. The paper presents a full overview of the illegal use of deepfake technology, as well as the obstacles faced by law enforcement in identifying and preventing the malicious use of deepfakes, based on significant desk research and in-depth interaction with law enforcement specialists. 

Deepfakes are audio and audio-visual content that "convincingly show individuals expressing or doing activities they never did, or build personalities which never existed in the first place" using artificial intelligence. According to the study, deepfakes are being utilized for malevolent purposes in three key areas: disinformation, non-consensual pornography, and document fraud. As the technology advances, such attacks are expected to become more realistic and dangerous.

  1. Disinformation: Europol provided several examples of how deepfakes could be used to distribute false information, with potentially disastrous results. In the geopolitical domain, for example, an attacker could produce a phony emergency warning of an imminent attack. In February, just before the war between Russia and Ukraine erupted, the US accused the Kremlin of planning a disinformation scheme to serve as a pretext for an invasion of Ukraine. The technique may also be used against corporations, for example by constructing a video or audio deepfake that makes it appear as if a company's leader engaged in contentious or unlawful conduct. In one case, criminals imitating the voice of an energy firm's top executive defrauded the company of $243,000. 
  2. Non-consensual pornography: According to the analysis, the research firm Sensity found non-consensual pornography in 96 percent of deepfake videos. This usually entails superimposing a victim's face onto the body of an adult performer, giving the impression that the victim is performing the act.
  3. Document fraud: While current fraud protection techniques are making passports harder to forge, the report stated that "synthetic media and digitally modified facial photos present a new way for document fraud." These technologies can, for example, mix or morph the face of the passport's legitimate owner with that of the person trying to obtain one illegally, boosting the likelihood that the photo will pass screening, including automated checks. 

Deepfakes might also harm the court system, the paper notes, by artificially manipulating or fabricating media to prove or deny someone's guilt. In a recent child custody dispute, a mother edited an audio recording of her husband to persuade the court that he was abusive. 

Europol stated that all law enforcement organizations must acquire new skills and tools to deal properly with these kinds of threats. These include manual detection strategies, such as looking for visual discrepancies, and automated detection techniques, such as AI-based deepfake detection software being developed by companies like Facebook and McAfee. 

It is quite conceivable that malicious threat actors will employ deepfake technology to assist various crimes and run misinformation campaigns to influence or corrupt public opinion in the months and years ahead. Advances in machine learning and artificial intelligence will continue to improve the software used to make deepfakes.

Misinformation is a Hazard to Cyber Security

 

Most cybersecurity leaders recognize the usefulness of data, but data is merely information. What if the information you've been given is actually false? Or deliberate deception? What methods does your cybersecurity program use to determine what is real and what isn't?

Ian Hill, Global Director of Cyber Security at Royal BAM Group, defined misinformation as "inaccurate or purposely misleading information." This might be anything from honest error to deceptive advertising to satire carried too far. So while misinformation isn't necessarily meant to be destructive, it can still cause harm. 

The ideas, tactics, and actions used in cyberattacks and misinformation attacks are very similar. Misinformation takes advantage of our cognitive biases and logical fallacies, whereas cyberattacks target computer systems. Misinformation attacks rely on distorted, miscontextualized, or misappropriated information, deepfakes, and cheap fakes. To wreak even more harm, nefarious actors combine both kinds of attack. 

Misinformation has the potential to be more damaging than viruses, worms, and other malware. Misinformation operations built to deceive can harm individuals, governments, society, and corporations alike. 

Attackers exploit the attention economy and advertisement-centric business models to launch sophisticated misinformation campaigns that flood information channels and drown out the truth at unprecedented speed and scale. To stop a specific case of information disorder, organizations must understand its agent, message, and interpreter: find out who is behind it (the agent) and what message is being sent. Understanding the attack's target audience (the interpreter) is just as critical.

Cyberattacks have progressed from basic phishing scams to elaborate misconceptions and deceptions. Misinformation and disinformation are cybersecurity risks for four reasons, according to Disinfo.EU. They're known as the 4Ts:

  •  Terrain: the infrastructure that disseminates falsehoods 
  •  Tactics: how the misinformation is disseminated
  •  Targets: the intended victims of the misinformation that leads to cyberattacks
  •  Temptations: the financial motivations for disseminating false information in cyberattacks
 
Employees who are educated on how threat actors, from amateur hackers to nation-state criminals, spread false information will be less likely to fall for false narratives and harmful untruths. It is now up to cybersecurity to distinguish the true from the fraudulent.

Facebook Struggles Against Hate Speech and Misinformation, Fails to Take Action


Last month, FB CEO Mark Zuckerberg and others met with civil rights activists to discuss FB's way of dealing with rising hate speech on the platform. The activists were not happy with Facebook's failure to deal with hate speech and misinformation. The civil rights groups launched an 'advertising boycott' against the social media giant and expressed stark criticism, saying they have had enough of Mark Zuckerberg's failure to deal with white supremacy, propaganda, and voter suppression on FB.


The move to boycott Facebook came as a response to Donald Trump's recent statements on FB, in which he suggested that anti-racism protesters should be met with physical violence and spread misinformation about mail-in voting. FB, however, denies the allegations, saying these posts didn't violate community policies. Even after such incidents, the company insists that everything is fine and that it merely needs to toughen up its enforcement actions.

"Facebook stands firmly against hate. Being a platform where everyone can make their voice heard is core to our mission, but that doesn't mean it's acceptable for people to spread hate. It's not. We have clear policies against hatred – and we constantly strive to get better and faster at enforcing them. We have made real progress over the years, but this work is never finished, and we know what a big responsibility Facebook has to get better at finding and removing hateful content." "Later this morning, Mark and I, alongside our team, are meeting with the organizers of the Stop Hate for Profit campaign followed by a meeting with other civil rights leaders who have worked closely with us on our efforts to address civil rights," said COO Sheryl Sandberg in her FB post.

In another incident, FB refused to take action against T. Raja Singh, an Indian politician from the BJP. According to the Wall Street Journal, the company didn't apply its hate speech policies to Singh's Islamophobic remarks. FB employees admitted that the politician's statements were enough to warrant terminating his FB account, but the company refused because, according to an FB executive in India, doing so could hurt FB's business in India.

Twitter removes nearly 4,800 accounts linked to Iran government

Twitter has removed nearly 4,800 accounts it claimed were being used by the Iranian government to spread misinformation, the company said on Thursday.

Iran has made wide use of Twitter to support its political and diplomatic goals.

The step aims to prevent election interference and misinformation.

The social media giant released a transparency report detailing recent efforts to tamp down the spread of misinformation by insidious actors on its platform. In addition to the Iranian accounts, Twitter suspended four accounts it suspected of being linked to Russia's Internet Research Agency (IRA), 130 fake accounts associated with the Catalan independence movement in Spain, and 33 accounts operated by a commercial entity in Venezuela.

It revealed the deletions in an update to its transparency report.

The 4,800 accounts were not a unified block, said Yoel Roth, Twitter's head of site integrity, in a blog post detailing the actions.

The Iranian accounts fell into three categories depending on their activities. More than 1,600 accounts tweeted global news content that supported Iranian policies and actions. Another 248 accounts engaged specifically in discussions about Israel. Finally, 2,865 accounts were banned for adopting false personas used to target political and social issues in Iran.

Since October 2018, Twitter has been publishing transparency reports on its investigations into state-backed information operations, releasing datasets on more than 30 million tweets.

Twitter has been regularly culling accounts it suspects of election interference from Iran, Russia and other nations since the fallout from the 2016 US presidential election. Back in February, the social media platform announced it had banned 2,600 Iran-linked accounts and 418 accounts tied to Russia's IRA it suspected of election meddling.

“We believe that people and organizations with the advantages of institutional power and which consciously abuse our service are not advancing healthy discourse but are actively working to undermine it,” Twitter said.

WhatsApp Declines to Comply with the Indian Government’s Demand



With general elections scheduled in India a year from now, the Indian Government is taking a strict view of the use of social media platforms like Facebook, Twitter, and WhatsApp to spread false information.

In light of this, it had asked WhatsApp for a way to trace the origin of messages on its platform.

The Facebook-owned firm, though, declined to comply with the government's request, saying the move would undermine the protection and privacy of WhatsApp users.

Sources in the IT Ministry said the government has declared that WhatsApp should keep exploring technical solutions so that, in cases of mass circulation of offensive and hateful messages that whip up clashes and crime, the origin can be traced easily.

The ministry is also seeking a firmer assurance of compliance with Indian laws from the company, along with the appointment of a grievance officer within a broad framework.

Emphasis has also been placed on the company setting up a local corporate entity, subject to Indian laws, within the outlined time period.


Earlier this week, WhatsApp head Chris Daniels met IT Minister Ravi Shankar Prasad to address issues such as this one. After the meeting, Mr. Prasad said the government had asked WhatsApp to set up a local corporate entity, develop a technological solution to trace the origin of fake messages circulated through its platform, and appoint a grievance officer.

 “People rely on WhatsApp for all kinds of sensitive conversations, including with their doctors, banks and families. Building traceability would undermine end-to-end encryption and the private nature of WhatsApp, creating potential for serious misuse,” the Facebook-owned firm said on Thursday.

“WhatsApp will not weaken the privacy protections we provide,” a company spokesperson stressed, adding, “Our focus remains working closely with others in India to educate people about misinformation and help keep people safe.”

A month ago, top WhatsApp executives, including COO Matthew Idema, met the IT Secretary and other Indian government officials to outline the various steps being taken by the company on this issue.