
Investigating the Role of DarkStorm Team in the Recent X Outage

 


Elon Musk's social media platform X, formerly known as Twitter, was severely disrupted on Monday by what appears to have been a widespread cyberattack. Data from the outage-monitoring service Downdetector indicates that the platform suffered at least three significant disruptions over the course of the day, and more than 41,000 users across Europe, North America, the Middle East, and Asia reported outages.
 
The most common technical difficulties users encountered were prolonged connection failures and an inability to fully load the platform. A preliminary assessment suggests the disruptions may have been caused by a coordinated, large-scale cyberattack. While cybersecurity experts are still investigating the extent and origin of the incident, they point to a worrying trend of organised attacks targeting high-profile digital infrastructure. The incident has raised a number of concerns about X's security framework, especially given the platform's prominent role in global communications and information dissemination. Authorities and independent cybersecurity analysts continue to analyse data logs and attack signatures to identify the perpetrators and understand the attack methodology. One group that has emerged as an important player in this cyberwarfare landscape is the Dark Storm Team, a pro-Palestinian hacktivist collective. Since its emergence, the group has orchestrated targeted cyberattacks against Israeli entities and organizations perceived as supportive of Israel.
 
Motivated by a combination of political ideology and financial gain, the group is well known for aggressive tactics, including Distributed Denial-of-Service (DDoS) attacks, database intrusions, and other disruptive operations against government agencies, public infrastructure, and organizations perceived as aligned with Israeli interests.
 
The group is reportedly more than an ideological movement; it is also a cybercrime operation that advertises openly on encrypted messaging platforms such as Telegram, selling coordinated DDoS attacks, data breaches, and hacking tools to a range of clients. Its operations are sophisticated and resourceful, striking both poorly defended and well-protected targets. Recent activity suggests the group has escalated in both scale and ambition. In February 2024, the Dark Storm Team warned of imminent cyberattacks against NATO member states, Israel, and countries supporting Israel, a warning followed by documented incidents disrupting critical government and digital infrastructure, reinforcing the group's capacity to make good on its threats.
 
According to intelligence reports, Dark Storm has also built ties with pro-Russian cyber collectives, which broadens the scope of its operations and provides it with access to advanced hacking tools. In addition to enhancing their technical reach, this collaboration also signals an alignment of geopolitical interests. 

Among the most prominent incidents attributed to the group is the October 2024 DDoS attack on John F. Kennedy International Airport's online systems. The group justified the attack by citing the airport's perceived support for Israeli policies, demonstrating its willingness to strike essential infrastructure in service of its wider agenda. Analysts say Dark Storm's blend of ideological motivation and profit-driven cybercrime makes it a singularly potent threat in today's cybersecurity environment.
 
An investigation is underway to determine whether the group was involved in the recent disruptions at X. The Dark Storm Team pursues its objectives through a range of cyber tactics that combine ideological activism with financially motivated cybercrime. Its main methods include DDoS attacks, ransomware campaigns, and leaks of sensitive information, activities designed both to disrupt targets' operations and to advance specific political narratives while generating illicit revenue. For internal coordination, recruitment, and operational updates, the group relies heavily on encrypted channels, particularly Telegram. These secure platforms afford a degree of anonymity that complicates efforts by law enforcement and cybersecurity firms to track and dismantle its networks.

Alongside direct cyberattacks, the group actively monetizes stolen data, selling compromised databases, personal information, and hacking tools on the darknet. Although Dark Storm presents itself as a grassroots hacktivist organization, cybersecurity analysts increasingly suspect it enjoys covert support from nation-state actors, particularly Russia. That suspicion rests on several factors: the complexity and scale of its operations, the strategic choice of targets, and the technical sophistication evident in its attacks. Such patterns of coordinated, well-resourced activity raise concerns that the group may be serving as a proxy in broader geopolitical conflicts.
 
The rising threat posed by groups like Dark Storm makes clear that the cyber warfare landscape is evolving, with ideological, financial, and geopolitical motivations increasingly intertwined, which makes it significantly harder for targeted organisations and governments to attribute attacks and defend themselves. Elon Musk's growing involvement in geopolitical affairs adds a further layer of complexity to the narrative around the X attack. Since Russian troops invaded Ukraine in February 2022, Musk has been criticized for publicly mocking Ukrainian President Volodymyr Zelensky and for remarks seen as dismissive of Ukraine's plight. He also heads the Department of Government Efficiency (DOGE), an entity created under the Trump administration that has overseen unprecedented reductions in U.S. federal employment since Trump returned to office. The administration's foreign policy stance has shifted markedly, moving away from longstanding U.S. support for Ukraine toward a more conciliatory posture with Russia. Musk's geopolitical entanglements, moreover, extend beyond his role at X.
 
Through his aerospace company SpaceX, he operates the Starlink satellite internet network, which has kept a significant portion of Ukraine's digital communications running during the war. These intersecting spheres of influence, spanning national security, communication infrastructure, and social media, have drawn heightened scrutiny, particularly as X remains a central node in global politics. Meanwhile, cybersecurity firms examining the technical details of the Distributed Denial-of-Service (DDoS) attack have found little evidence of Ukrainian involvement.
 
A senior analyst at a leading cybersecurity firm, speaking on condition of anonymity because of restrictions on discussing X publicly, reported that no significant attack traffic originated from Ukraine, which was absent from the top 20 sources of malicious IP addresses linked to the attack. Source-country data in such attacks is admittedly unreliable, given the widespread practice of IP spoofing and the global distribution of compromised devices; even so, the absence of Ukrainian addresses is significant because it directs attention toward more likely culprits, such as organized cybercrime groups and state-linked actors.
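As a rough illustration of the kind of triage the analyst describes, the short Python sketch below tallies the most frequent source addresses in an attack log. It is a minimal, hypothetical example: the log file name and one-IP-per-line format are assumptions for illustration, not details from the incident, and, as noted above, spoofing means any such ranking is only a starting point.

```python
# Hypothetical sketch: rank the top source IPs in a DDoS attack log.
# Assumes "flood.log" holds one source IP address per line.
from collections import Counter

def top_sources(log_path: str, n: int = 20) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    with open(log_path) as log:
        for line in log:
            ip = line.strip()
            if ip:
                counts[ip] += 1
    return counts.most_common(n)  # the "top 20 sources" the analyst cites

if __name__ == "__main__":
    for ip, hits in top_sources("flood.log"):
        print(f"{ip}\t{hits}")
```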
 
The incident underscores the fragility of digital infrastructure in a politically polarized world where geopolitical tensions, corporate influence, and cyberwarfare converge. As investigations continue, the Dark Storm Team's role, and the broader implications for global cybersecurity policy, are likely to remain contested.

Irish Data Protection Commission Halts AI Data Practices at X

 

The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under its powers granted by the Data Protection Act of 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and commit to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, this outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok AI’s training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed during its short stint from May 7 to August 1. 

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information into its artificial intelligence technology without their consent, according to Noyb, a Vienna-based privacy campaign group that has filed a complaint against the company.

In early September, Ireland's Data Protection Commission (DPC) took legal action against X over the data collection practices used to train its artificial intelligence systems. A series of privacy complaints have been filed against the company formerly known as Twitter since it emerged that the platform was using European users' data to train Grok, its AI chatbot, without their consent.

The practice came to light late last month, when a social media user discovered that X had quietly begun processing regional users' posts for AI training purposes. According to TechCrunch, the Irish DPC, which is responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that users can choose whether Grok, the platform's AI chatbot, may access their public posts.

Users who wish to opt out must uncheck a box in their privacy settings. Even so, Judge Leonie Reynolds observed that X evidently began processing EU users' data to train its AI systems on May 7, yet only offered the opt-out from July 16. She added that not all users had access to the setting when it was first introduced.
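To make the significance of that timeline concrete, here is a minimal, hypothetical Python sketch of the difference between an opt-out default and the opt-in consent that privacy campaigners argue the GDPR requires. The names and structure are illustrative only, not X's actual code:

```python
# Illustrative only: opt-out vs. opt-in defaults for AI training consent.
from dataclasses import dataclass

@dataclass
class UserSettings:
    grok_may_use_posts: bool  # hypothetical setting name

# Opt-out (as described above): the box starts checked, so a user's data
# is processed unless they actively untick it.
opt_out_default = UserSettings(grok_may_use_posts=True)

# Opt-in: processing stays off until the user affirmatively enables it.
opt_in_default = UserSettings(grok_may_use_posts=False)

def may_train_on(user: UserSettings) -> bool:
    # Under the opt-out default, this returns True for every user who
    # never visited their settings -- which is what the complaints target.
    return user.grok_may_use_posts
```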

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Its founder, Max Schrems, successfully challenged Meta's transfers of EU user data to the US as violating the EU's stringent GDPR rules, a case that ultimately saw Meta fined €1.2 billion and left it facing logistical headaches. In June, complaints from NOYB likewise forced Meta to pause the use of EU users' data to train its AI systems.

NOYB has another issue it wants to address: it argues that X did not obtain EU users' consent before using their data to train Grok. A NOYB spokesperson told The Daily Upside that the complaints could expose X to a fine of up to 4% of its annual revenue, punitive measures that would sting all the more because X has far less money to play with than Meta:

X is no longer a publicly traded company, which makes its cash position difficult to gauge. What is known is that Musk's 2022 buyout loaded the company with roughly $25 billion in debt at a very high leverage ratio. In the years since the deal closed, the banks that financed it have had an increasingly difficult time offloading that debt, and Fidelity has recently marked down its stake, offering a hint at how the firm might be valued.

As of last March, Fidelity valued its stake 67% below what it paid when Musk acquired the company. Even before the acquisition, Twitter had struggled for years to remain consistently profitable, a small fish in a big tech pond.
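For a rough sense of what that markdown implies, the arithmetic below uses the widely reported ~$44 billion purchase price as a baseline; that figure comes from coverage of the 2022 deal, not from this article:

```python
# Rough, illustrative arithmetic only.
purchase_price_bn = 44.0   # widely reported ~2022 buyout price, $bn
markdown = 0.67            # Fidelity's reported 67% write-down

implied_value_bn = purchase_price_bn * (1 - markdown)
print(f"Implied valuation: ~${implied_value_bn:.1f}bn")  # ~$14.5bn
```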

A key goal for NOYB is a full-scale investigation into how X managed to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users simply need to show them a yes/no prompt before using their data, Schrems told The Information; they already do this routinely for plenty of other purposes, so it would be entirely possible for AI training as well.

The legal action comes only days before Grok 2 is set to launch its beta version. Major tech companies have faced growing ethical scrutiny in recent years over the data used to train large models. In June 2024, it was widely reported that Meta faced complaints in 11 European countries over its new privacy policy, which signalled the company's intent to use data from each account to train its machine learning models.

The GDPR is intended to protect European citizens from precisely such unexpected uses of their data, which could affect their right to privacy. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is mandatory in comparable cases involving data used to target ads.

The complaints raise the further concern that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access one's collected personal data. OpenAI's ChatGPT has drawn wide criticism on many of the same GDPR grounds.

Winklevoss Crypto Firm Gemini to Return $1.1B to Customers in Failed "Earn" Scheme

‘Earn’ product fiasco

Gemini to return money

As part of a settlement with regulators on Wednesday, the cryptocurrency company Gemini, owned by the Winklevoss twins, agreed to repay at least $1.1 billion to consumers of its failed "Earn" loan scheme and pay a $37 million fine for "significant" compliance violations.

The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.

What is the Earn Program?

The Earn program, which promised users up to 8% yield on their cryptocurrency deposits, was halted in November 2022 when Genesis was unable to meet withdrawals following the collapse of fraudster Sam Bankman-Fried's FTX empire.

Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.

Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.

NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown." 

Customers win lawsuit

Today's settlement is a win for Earn customers, who are entitled to the assets they committed to Gemini.

The regulator said that during the crisis an unregulated affiliate dubbed Gemini Liquidity collected "hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini's financial condition."

Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."

Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.

"If the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the [Genesis] bankruptcy," the NYSDFS stated that it "has the right to bring further action against Gemini."

In a lengthy statement posted on X, Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" over the following 12 months.

The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.

Gemini credited the New York Department of Financial Services (DFS) with helping to reach a settlement that gives Earn users a coin-for-coin recovery.

More about the lawsuit

Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."

James tripled the purported scope of the lawsuit earlier this month. The complaint was submitted a few weeks after The Post revealed that, on August 9, 2022, well in advance of Genesis's bankruptcy, Gemini had surreptitiously taken $282 million in cryptocurrency from the company.

The twins subsequently said the withdrawal had been made for customers' benefit.

The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."

In a separate lawsuit, the SEC is suing Gemini and Genesis on the grounds that the Earn program was an unregistered security.

The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.

Gemini had built its brand on the idea that it was a reliable player in the wild, mostly uncontrolled cryptocurrency market.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several banks' servers went down on Tuesday, disrupting Unified Payments Interface (UPI) transactions across the country. Users took to social media, particularly X (formerly Twitter), to complain that they could not complete UPI transactions, and the National Payments Corporation of India (NPCI) confirmed in a tweet that an outage had caused UPI failures at some banks.

Downdetector, a website-monitoring service, received reports that UPI was not working and that Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and other banks were experiencing ongoing outages. Users flooded social media with details of the disruptions, which affected UPI payments made through several banks, with server problems reported for HDFC Bank, Bank of Baroda, SBI, and Kotak Mahindra Bank, among others.

Several users also reported difficulty with the "Fund Transfer" function of their respective banks. UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally (about 1 per cent) from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent over the same period, to 12.20 billion from 12.02 billion.

In November, volume stood at 11.4 billion transactions, with a value of Rs 17.4 trillion. NPCI data shows that January's volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
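As a quick sanity check on the month-over-month figures above, a few lines of arithmetic (purely illustrative) reproduce the reported growth rates:

```python
# Illustrative check of the reported month-over-month UPI growth.
value_jan, value_dec = 18.41, 18.23  # Rs trillion
vol_jan, vol_dec = 12.20, 12.02      # billion transactions

print(f"Value growth:  {(value_jan / value_dec - 1) * 100:.1f}%")  # ~1.0%
print(f"Volume growth: {(vol_jan / vol_dec - 1) * 100:.1f}%")      # ~1.5%
```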

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals when the amount exceeds a configurable threshold.

According to government sources cited by The Indian Express, the proposed plan would impose a four-hour window on the first digital payment between two users for transactions exceeding Rs 2,000.

Global Outage Strikes Social Media Giant X

The recent global outage of Social Media Platform X caused a stir in the online community during a time when digital media predominates. Users everywhere became frustrated and curious about the cause of this extraordinary disruption when they realized they couldn't use the platform on December 21, 2023.

Reports of the outage, first surfaced by Downdetector, arrived from all over the world, affecting millions of users. The impact was amplified because Social Media Platform X, a significant player in the social media ecosystem, has become an essential part of people's everyday lives.

One significant aspect of the outage was the diverse range of issues users faced. According to reports, users experienced difficulties in tweeting, accessing their timelines, and even logging into their accounts. The widespread nature of these problems hinted at a major technical glitch rather than localized issues.

TechCrunch reported that the outage lasted for several hours, leaving users in limbo and sparking speculation about the root cause. The incident raised questions about the platform's reliability and prompted discussions about the broader implications of such outages in an interconnected digital world.

The platform's official response was prompt, acknowledging the inconvenience and assuring users that its technical teams were actively working on a fix. Users and specialists alike were left in the dark, though, as few details emerged about the precise cause.

Experts weighed in on the outage, emphasizing the need for robust infrastructure and redundancy measures to prevent such widespread disruptions in the future. The incident served as a reminder of the vulnerabilities inherent in our dependence on centralized digital platforms.

In the aftermath of the outage, Social Media Platform X released a formal apology, expressing regret for the inconvenience caused to users. The incident prompted discussions about the need for transparency from tech giants when addressing such disruptions and the importance of contingency plans to mitigate the impact on users.

Amidst the growing digitalization of our world, incidents such as the worldwide disruption of Social Media Platform X highlight the vulnerability of our interdependent networks. It's a wake-up call for users and tech businesses alike to put resilience and transparency first when faced with unanticipated obstacles in the digital space.

Tech Giants Grapple With Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that Russian propaganda reached a much wider audience after Elon Musk changed X's safety policies.

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have faced intense scrutiny since an EU report last month revealed that they failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine.

The study, conducted by civil society groups and published last week by the European Commission, found that clearly Kremlin-backed accounts gained further influence in early 2023 after Twitter's safety standards were dismantled.

In the first two months of 2022, pro-Russian accounts garnered over 165 million subscribers across major platforms, and they have generated over 16 billion views since then. Few details have emerged on whether the EU will ban Russian state media content outright. According to the study, X's failure to deal with disinformation would have violated the EU's new rules had they been in effect last year.

Social media companies are finding it hard to limit propaganda on their platforms, and Musk has proven less cooperative than most. Indeed, the study found that Telegram and Meta, the company that owns Instagram and Facebook, have also made little headway in limiting Russian disinformation campaigns.

Europe has taken a far more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures to reduce risks relating to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines.

The EU's Digital Services Act (DSA) introduced tougher rules for the world's biggest online platforms earlier this month. Large social media companies identified as having at least 45 million monthly active users must now take a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been operational a month earlier, social media companies that breached their legal duties could already have faced fines. Among the most damning consequences of Elon Musk's acquisition of X last October has been the rapid growth of hate and lies on the social network.

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from related Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. Musk has argued that "all news" is propaganda to some degree.

The Kremlin, meanwhile, has stepped up its sophisticated information warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will have to move quickly to comply with the Digital Services Act, in effect since August 25th, before the 2024 European parliamentary elections arrive.

Under the new rules, large social media companies and search engines in the EU with at least 45 million monthly active users are required to adopt stricter content moderation policies, proactively clamping down on hate speech and disinformation or facing heavy fines.

The Race to Train AI: Who's Leading the Pack in Social Media?

 


Growing computing power and large, complex data sets have enabled the rise of artificial intelligence over the last few years. AI has proven practical and profitable in numerous applications, such as machine learning, which gives a system a way to locate patterns within large sets of data.
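As a concrete, toy illustration of that pattern-finding idea (an illustrative sketch, not anything from the article), the snippet below uses scikit-learn's k-means clustering to let a model discover two groups hiding in a handful of points:

```python
# Toy example: a machine learning model locating pattern structure
# (two clusters) in a small data set. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Two obvious groups of 2-D points.
data = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                 [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.labels_)           # e.g. [1 1 1 0 0 0]: the groups it found
print(model.cluster_centers_)  # approximate centre of each pattern
```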

In modern times, AI plays a significant role in a wide range of computer systems, from iPhones that recognize and translate speech, to driverless cars that carry out complicated manoeuvres under their own power, to robots in factories and homes that automate tasks.

AI has also become increasingly important in research, where it is used to process the vast amounts of data at the heart of fields like astronomy and genomics, to produce weather forecasts and climate models, and to interpret medical images for signs of disease.

In a recent update to its privacy policy, X, the social media platform formerly known as Twitter, stated that it may train an AI model on users' posts. According to Bloomberg earlier this week, the updated policy informs users that the company now collects various kinds of information about them, including biometric data, job history, and educational background.

X appears to have plans for this data that go beyond collecting it. Another update to the company's policy specifies that it intends to use the data it collects, along with other publicly available information, to train its machine learning and artificial intelligence models.

According to Elon Musk, the company's owner and former CEO, the models will be trained only on public data, not on private data such as text messages in direct messages. The change should come as no surprise.

Musk has said that data collected from the microblogging site will help xAI, his latest startup, which aims to help researchers and engineers build new products. X, meanwhile, charges companies $42,000 for access to its data via its API.

It was reported in April that Microsoft pulled X from its advertising platforms after those fee increases, and that Musk responded by threatening to sue the company for allegedly using Twitter data illegally. Separately, in a tweet published late Thursday, Musk called on AI research labs to halt work on systems that can compete with human intelligence.

In an open letter from the Future of Life Institute signed by Musk, Steve Wozniak, and 2020 presidential candidate Andrew Yang, tech leaders urged AI labs to cease training models more powerful than GPT-4, the newest version of the large language model developed by the U.S. startup OpenAI.

The Future of Life Institute, based in Cambridge, Massachusetts, is a non-profit organization dedicated to the responsible and ethical development of artificial intelligence. Its founders include Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, the co-founder of Skype.

Musk and Google's AI lab DeepMind have previously pledged, as part of an earlier campaign by the institute, not to develop lethal autonomous weapons systems. The institute's letter appeals to all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

GPT-4, released earlier this month, is believed to be far more sophisticated than its predecessor, GPT-3. ChatGPT, the viral AI chatbot, has amazed researchers with its ability to produce human-like responses to users' questions. By January this year, just two months after launch, ChatGPT had accrued 100 million monthly active users, making it the fastest-growing consumer application in history.

Trained on vast amounts of data from the internet, such models can do everything from writing poetry in the style of William Shakespeare to drafting legal opinions based on the facts of a case. But some ethical and moral scholars have raised concerns that AI could also be abused for crime and misinformation.

OpenAI did not immediately respond to CNBC's request for comment. Microsoft, the technology giant headquartered in Redmond, Washington, has invested $10 billion in the company.

Microsoft is also integrating OpenAI's GPT technology into its Bing search engine to make natural language search more conversational and useful. Google followed with the announcement of its own line of conversational artificial intelligence (AI) products aimed at consumers.

Musk has said that AI may represent one of the biggest near-term threats to civilization. He co-founded OpenAI with Sam Altman in 2015, but left its board in 2018 and holds no stake in the company he helped found. He has repeatedly voiced the view that the organization has diverged from its original purpose.

Regulators, too, are racing to get a grip on AI tools as the technology advances rapidly. On Wednesday, the United Kingdom published a white paper on artificial intelligence, deferring oversight of such tools to sector regulators applying existing laws within their jurisdictions.