
OpenAI Rolls Out Premium Data Connections for ChatGPT Users


ChatGPT has become a transformative artificial intelligence tool, widely adopted by individuals and businesses seeking to improve their operations. Developed by OpenAI, the platform helps users draft compelling emails, develop creative content, and conduct complex data analysis, streamlining a wide range of workflows.

OpenAI continues to enhance ChatGPT with new integrations and advanced features that make it easier to embed in daily organisational workflows; understanding the platform's pricing models is therefore vital for any organisation that wants to use it efficiently. For a business or entrepreneur in the United Kingdom weighing ChatGPT's subscription options, managing international payments can be an added challenge, especially when exchange rates fluctuate or conversion fees are hidden.

In this context, the Wise Business multi-currency card offers a practical way to maintain financial control and cost transparency. The card lets companies hold and spend in more than 40 currencies, so they can settle subscription payments without excessive currency conversion charges, making it easier to manage budgets while adopting new technology.

OpenAI has recently introduced a suite of premium features aimed at enhancing the ChatGPT experience for subscribers. Paid users can now access advanced reasoning models, including o1 and o3, which support more sophisticated analysis and problem-solving.

The subscription offers more than enhanced reasoning: it includes an upgraded voice mode that makes conversational interactions more natural, improved memory that lets the AI retain context over longer periods, and a powerful coding assistant designed to help developers automate workflows and speed up software development.

To expand creative possibilities further, OpenAI has raised token limits, allowing longer inputs and outputs and letting users generate more images without interruption. Subscribers also get expedited image generation via a priority queue, which means faster turnaround during high-demand periods.

Paid accounts also retain full access to the latest models and more consistent performance: they are not switched to less advanced models when server capacity is strained, a limitation free users may still encounter. While OpenAI has invested heavily in the paid tier, free users have not been left out. GPT-4o has effectively replaced the older GPT-4 model, giving free accounts more capable technology without a fallback downgrade.

Free users also have access to basic image-generation tools, although without the priority queue reserved for subscribers. Reflecting its commitment to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge.

ChatGPT's free version remains a compelling option for people who use the software only sporadically, perhaps to write the occasional email, do light research, or create simple images. Individuals or organisations who frequently hit usage limits, such as long waits for token allowances to reset, may find upgrading to a paid plan worthwhile, as it unlocks uninterrupted access and advanced capabilities.

To make ChatGPT a more versatile and deeply integrated virtual assistant, OpenAI has introduced a feature called Connectors. It lets ChatGPT interface with a variety of external applications and data sources, so the AI can retrieve and synthesise information from them in real time while responding to user queries.

With Connectors, the company moves toward a more personal and contextually relevant experience for its users. Ahead of a family vacation, for example, a user can instruct ChatGPT to scan their Gmail account and compile all correspondence about the trip, streamlining travel planning instead of combing through emails manually.

This level of integration puts ChatGPT on a similar footing to rivals such as Google's Gemini, which benefits from Google's ownership of popular services like Gmail and Calendar. Connectors could redefine how individuals and businesses engage with AI tools. By giving ChatGPT secure access to personal or organisational data residing across multiple services, OpenAI aims to build a comprehensive digital assistant that anticipates needs, surfaces critical insights, and streamlines decision-making.

Demand for highly customised, intelligent assistance is growing, and other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central productivity hub: an AI capable of understanding, organising, and acting upon every aspect of a user's digital life.

For all its convenience and efficiency, the approach underscores the need for robust data security, transparency, and protection of personal information as these powerful integrations become mainstream. On its official X (formerly Twitter) account, OpenAI recently announced Connectors for Google Drive, Dropbox, SharePoint, and Box that work within ChatGPT outside of the Deep Research environment.

As part of this expansion, users can link their cloud storage accounts directly to ChatGPT, allowing the AI to retrieve and process their personal and professional data and draw on it when generating responses. In its announcement, OpenAI said the functionality is "perfect for adding your own context to your ChatGPT during your daily work," highlighting the company's ambition to make ChatGPT more intelligent and contextually aware.

Access to these newly released Connectors, however, is limited by subscription tier and geography. They are exclusive to ChatGPT Pro subscribers, who pay $200 per month, and are currently available worldwide except in the European Economic Area (EEA), Switzerland, and the United Kingdom. Users on lower tiers, such as ChatGPT Plus subscribers paying $20 per month, or those based in the excluded regions, cannot use these integrations at this time.

Such staggered rollouts typically reflect broader regulatory-compliance challenges in the EU, where stricter data protection rules and AI governance frameworks often delay availability. Outside of Deep Research, the range of Connectors remains relatively limited; within Deep Research, integration support is considerably more extensive.

On ChatGPT Plus and Pro plans, users of Deep Research can access a much broader array of integrations, including Outlook, Teams, Gmail, Google Drive, and Linear, though regional restrictions still apply. Organisations on Team, Enterprise, or Education plans get additional Deep Research connectors, including SharePoint, Dropbox, and Box.

OpenAI also supports the Model Context Protocol (MCP), a framework that lets workspace administrators build customised Connectors. By integrating ChatGPT with proprietary data systems, organisations can create secure, tailored integrations for specialised internal workflows and knowledge management, as sketched below.
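To make that concrete, the sketch below shows what a minimal custom connector could look like using the official MCP Python SDK. The server name, the search_documents tool, and the stubbed document store are hypothetical placeholders for illustration, not part of OpenAI's published Connectors catalogue.

```python
# Minimal sketch of a custom MCP connector, assuming the official MCP Python SDK
# ("pip install mcp"). The server name, tool, and stubbed document store are
# hypothetical placeholders, not an OpenAI-published connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-knowledge-base")

@mcp.tool()
def search_documents(query: str, limit: int = 5) -> list[str]:
    """Return excerpts from an internal document store that match the query."""
    # A real connector would call the organisation's own search API here;
    # a canned response keeps the sketch self-contained and runnable.
    return [f"Stub result {i + 1} for: {query}" for i in range(limit)]

if __name__ == "__main__":
    # Serve the connector over stdio so a workspace administrator can register
    # it for their workspace.
    mcp.run()
```

In practice, an administrator would register a server like this through the workspace's connector settings; the exact registration flow depends on OpenAI's admin tooling rather than on the code above.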

As companies adopt AI more widely, the catalogue of Connectors is expected to expand rapidly, giving users more options for incorporating external data sources into their conversations. The dynamics of this market favour technology giants like Google, whose AI assistant Gemini can be integrated seamlessly across its own services, including the search engine.

OpenAI's strategy, by contrast, relies on building a network of third-party integrations to create a similar assistant experience. The new Connectors are now generally accessible in the ChatGPT interface, although users may need to refresh their browsers or update the app to activate them.

The continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. As generative AI capabilities mature, organisations and professionals evaluating ChatGPT should take a strategic approach, weighing the advantages of deeper integration against operational needs, budget limitations, and regulatory considerations.

The introduction of Connectors and the advanced subscription tiers point clearly toward more personalised and dynamic AI assistance that can ingest and contextualise diverse data sources. This evolution also makes it increasingly important to establish strong data governance frameworks, clear access controls, and adherence to privacy regulations.

Companies that invest early in these capabilities, and set clear policies balancing innovation with accountability, will be better positioned to stay competitive in an increasingly automated landscape. The organisations that actively develop internal expertise, test carefully selected integrations, and cultivate a culture of responsible AI use will be best prepared to realise the potential of artificial intelligence and maintain a competitive edge for years to come.

Investigating the Role of DarkStorm Team in the Recent X Outage

 


Elon Musk's social media platform X, formerly known as Twitter, was severely disrupted on Monday by what appears to have been a widespread cyberattack that caused multiple service outages. Data from outage monitoring service Downdetector indicates the platform experienced at least three significant disruptions throughout the day, affecting millions of users. At the peak, more than 41,000 users across Europe, North America, the Middle East, and Asia reported outages.
 
The most common problems users encountered were prolonged connection failures and an inability to load the platform. A preliminary assessment suggests the disruptions were caused by a coordinated, large-scale cyberattack. While cybersecurity experts are still investigating the extent and origin of the incident, they point to a worrying trend of organised attacks targeting high-profile digital infrastructure. The incident has raised concerns about X's security posture, especially given the platform's prominent role in global communications and information dissemination. Authorities and independent analysts continue to examine data logs and attack signatures to identify the perpetrators and understand the attack methodology.

A pro-Palestinian hacktivist collective known as the Dark Storm Team has emerged as a notable player in this landscape. Since its emergence, the group has orchestrated targeted cyberattacks against Israeli entities and organisations perceived as supportive of Israel.
 
Motivated by a combination of political ideology and financial gain, the group is known for aggressive tactics, including Distributed Denial-of-Service (DDoS) attacks, database intrusions, and other disruptive operations against government agencies, public infrastructure, and organisations perceived to be aligned with Israeli interests.
 
The group is reportedly more than an ideological movement: it also operates as a cybercrime outfit, advertising openly on encrypted messaging platforms such as Telegram and offering coordinated DDoS attacks, data breaches, and hacking tools to a range of clients. Its operations appear sophisticated and well resourced, striking both vulnerable and well-protected targets. Recent activity suggests the group has escalated in both scale and ambition. In February 2024, the Dark Storm Team threatened imminent cyberattacks against NATO member states, Israel, and countries supporting Israel. That warning was followed by documented incidents disrupting government and digital infrastructure, reinforcing the group's capability to follow through on its threats.
 
According to intelligence reports, Dark Storm has also built ties with pro-Russian cyber collectives, broadening the scope of its operations and giving it access to advanced hacking tools. The collaboration extends the group's technical reach and signals an alignment of geopolitical interests.

Among the most prominent incidents attributed to the group is an October 2024 DDoS attack against John F. Kennedy International Airport's online systems. The group justified the attack by citing the airport's perceived support for Israeli policies, demonstrating its willingness to target essential infrastructure as part of a wider agenda. Analysts say Dark Storm's blend of ideological motivation and profit-driven cybercrime makes it a particularly potent threat in today's cyber environment.
 
An investigation is underway to determine whether the group was involved in the recent disruptions to platform X. To achieve its objectives, the DarkStorm Team uses a range of tactics that combine ideological activism with financially motivated cybercrime, chief among them DDoS attacks, ransomware campaigns, and leaks of sensitive information. These activities are designed not only to disrupt targeted organisations but also to advance specific political narratives and generate illicit revenue. The group relies heavily on encrypted channels, particularly Telegram, for internal coordination, recruitment, and operational updates. These secure platforms give it a degree of anonymity that complicates efforts by law enforcement and cybersecurity firms to track and dismantle its networks.

Beyond direct cyberattacks, DarkStorm actively monetises stolen data, selling compromised databases, personal information, and hacking tools on the darknet. Although the group presents itself as a grassroots hacktivist organisation, cybersecurity analysts increasingly suspect it may receive covert support from nation-state actors, particularly Russia. This suspicion rests on the complexity and scale of its operations, the strategic choice of targets, and the technical sophistication of its attacks. Such patterns suggest a coordinated, well-resourced group, raising concerns that it may be acting as a proxy in broader geopolitical conflicts.
 
The rising threat posed by groups like DarkStorm shows how the cyberwarfare landscape is evolving, with ideological, financial, and geopolitical motivations increasingly intertwined, making it significantly harder for targeted organisations and governments to attribute attacks and defend themselves. Elon Musk's growing involvement in geopolitical affairs adds a further layer of complexity to the narrative around the attack on X. Since Russian troops invaded Ukraine in February 2022, Musk has been criticised for publicly mocking Ukrainian President Volodymyr Zelensky and for remarks considered dismissive of Ukraine's plight. He also heads the Department of Government Efficiency (DOGE), an entity created under the Trump administration that has been cutting U.S. federal employment at an unprecedented pace since Trump returned to office, while the administration's foreign policy has shifted away from longstanding US support for Ukraine toward a more conciliatory posture with Russia. Musk's geopolitical entanglements extend beyond his role at X.
 
Through his aerospace company SpaceX, Musk operates the Starlink satellite internet network, which has kept a significant portion of Ukraine's digital communications running during the war. These intersecting spheres of influence, spanning national security, communications infrastructure, and social media, have drawn heightened scrutiny, particularly as X remains a central node in global politics. Cybersecurity firms examining the technical aspects of the DDoS attack have found little evidence of Ukrainian involvement.
 
A senior analyst at a leading cybersecurity firm, speaking on condition of anonymity because of restrictions on discussing X publicly, reported that no significant attack traffic originated from Ukraine and that Ukraine was absent from the top 20 sources of malicious IP addresses linked to the incident. Such data should be read cautiously, since IP spoofing and globally distributed compromised devices can mask true origins, but the absence of Ukrainian addresses is notable because it directs attention to more likely sources, such as organised cybercrime groups and state-linked actors.
 
The incident reflects the fragility of digital infrastructure in a politically polarised world where geopolitical tensions, corporate influence, and cyberwarfare converge. As investigations continue, the role of actors such as the DarkStorm Team, and the broader implications for global cybersecurity policy, are likely to remain contentious.

Irish Data Protection Commission Halts AI Data Practices at X

 

The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under its powers granted by the Data Protection Act of 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and commit to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, this outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok AI’s training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed during its short stint from May 7 to August 1. 

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information into its artificial intelligence technology without their consent, according to a Vienna-based privacy campaign group known as Noyb, which filed the complaint.

In early September, Ireland's Data Protection Commission (DPC) took court action against X over the data collection practices used to train its artificial intelligence systems. A series of privacy complaints had already been filed against X, the company formerly known as Twitter, after it emerged that the platform was using data from European users to train the chatbot behind its Grok AI product without their consent.

The issue came to light when a social media user discovered late last month that X had quietly begun processing regional users' posts for AI training. TechCrunch reported that the Irish Data Protection Commission (DPC), the regulator responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since said that its users can choose whether Grok, the platform's AI chatbot, may access their public posts.

Users who wish to opt out must uncheck a box in their privacy settings. However, Judge Leonie Reynolds observed that X appeared to have begun processing EU users' data to train its AI systems on May 7, while only offering the opt-out from July 16, and she noted that not all users had access to the setting when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Its founder, Max Schrems, successfully challenged Meta's transfers of EU data to the US as violating the EU's stringent GDPR rules, a case that ultimately saw Meta fined €1.2 billion and left it facing logistical challenges. In June, complaints from NOYB also forced Meta to pause the use of EU users' data to train its AI systems.

NOYB's complaint against X centres on the same issue: it argues that X did not obtain EU users' consent before using their data to train Grok. A NOYB spokesperson told The Daily Upside that the complaints could leave the company facing a fine of up to 4% of its annual revenue, a punishment that would sting all the more because X has far less money to play with than Meta does:

X is no longer a publicly traded company, which makes it difficult to gauge the state of its cash reserves. What is known is that when Musk bought the company in 2022, the deal loaded it with roughly $25 billion in debt at a very high leverage ratio. In the years since, the banks that helped finance the transaction have struggled to offload that debt, and Fidelity recently marked down its stake, giving a hint of how the firm might now be valued.

As of last March, Fidelity's stake was valued at 67% less than at the time of the acquisition. Even before Musk bought it, Twitter had struggled for years to remain consistently profitable, a small fish in a big tech pond.

A key goal for NOYB is a full-scale investigation into how X was able to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems has argued; they do this regularly for plenty of other purposes, so it would be entirely possible for AI training as well.

The legal action comes only days before Grok 2 is due to launch in beta. In recent years, major tech companies have repeatedly faced ethical challenges over training models on large amounts of user data. In June 2024, it was widely reported that Meta faced privacy complaints in 11 European countries over new policies signalling its intent to use the data generated by each account to train machine learning models.

The GDPR is intended to protect Europeans against unexpected uses of their data, including those that could affect their right to privacy. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is mandatory in the comparable case of using data to target advertising.

The complaint also raises concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access one's collected personal data. OpenAI's ChatGPT has drawn similar GDPR-related criticism.

Winklevoss Crypto Firm Gemini to Return $1.1B to Customers in Failed "Earn" Scheme

‘Earn’ product fiasco

Gemini to return money

As part of a settlement with regulators on Wednesday, the cryptocurrency company Gemini, owned by the Winklevoss twins, agreed to repay at least $1.1 billion to consumers of its failed "Earn" loan scheme and pay a $37 million fine for "significant" compliance violations.

The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.

What is the Earn Program?

The Earn program, which promised users up to 8% interest on their cryptocurrency deposits, was halted in November 2022 when Genesis was unable to meet withdrawals following the collapse of Sam Bankman-Fried's FTX enterprise.

Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.

Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.

NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown." 

Customers win lawsuit

With today's settlement, Earn customers, who are entitled to the assets they entrusted to Gemini, have secured a win.

The regulator also pointed to an unregulated affiliate, dubbed Gemini Liquidity, which during the crisis collected "hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini's financial condition."

Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."

Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.

"If the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the [Genesis] bankruptcy," the NYSDFS stated that it "has the right to bring further action against Gemini."

Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" during the following 12 months in a long statement that was posted on X.

The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.

The New York Department of Financial Services (DFS) was credited by Gemini with helping to reach a settlement that gives Earn users a coin-for-coin recovery.

More about the lawsuit

Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."

James tripled the purported scope of the lawsuit earlier this month. The complaint was submitted a few weeks after The Post revealed that, on August 9, 2022, well in advance of Genesis's bankruptcy, Gemini had surreptitiously taken $282 million in cryptocurrency from the company.

The twins subsequently said the move had been made for the benefit of customers.

The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."

In a separate lawsuit, the SEC is suing Gemini and Genesis, alleging the Earn program was an unregistered security.

The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.

Gemini had built its brand on the idea that it was a reliable player in the wild, mostly uncontrolled cryptocurrency market.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several bank servers were reported down on Tuesday, affecting Unified Payments Interface (UPI) transactions across the country. Users took to X (formerly Twitter) to complain that they could not complete UPI transactions. The National Payments Corporation of India (NPCI) confirmed in a post that it had suffered an outage that caused UPI transactions to fail at some banks.

Downdetector, a website monitoring service, received reports that UPI was not working for customers of Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and other banks, and users flooded social media with details of the disruptions.

The outage appeared to affect UPI payments made through several banks, with customers of HDFC Bank, Bank of Baroda, SBI, and Kotak Mahindra Bank, among others, reporting server problems when attempting transfers.

Several users also reported technical difficulties with the "Fund Transfer" function at their respective banks. UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally, by about 1 per cent, from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent over the same period, to 12.20 billion from 12.02 billion in December.

In November, volume stood at 11.4 billion transactions worth Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
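As a quick sanity check, the short calculation below reproduces the month-on-month growth rates from the NPCI figures quoted above; the numbers come from the article and the code is purely illustrative arithmetic.

```python
# Back-of-the-envelope check of the UPI growth figures quoted above
# (values as reported by NPCI; the calculation is only illustrative).
dec_value_trn = 18.23   # Rs trillion, December
jan_value_trn = 18.41   # Rs trillion, January
dec_volume_bn = 12.02   # billion transactions, December
jan_volume_bn = 12.20   # billion transactions, January

value_growth = (jan_value_trn / dec_value_trn - 1) * 100
volume_growth = (jan_volume_bn / dec_volume_bn - 1) * 100

print(f"Value growth month-on-month:  {value_growth:.1f}%")   # ~1.0%
print(f"Volume growth month-on-month: {volume_growth:.1f}%")  # ~1.5%
```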

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals when the amount exceeds a configurable threshold.

According to government sources cited by The Indian Express, the proposed plan would impose a four-hour window on the first digital payment between two users for transactions exceeding Rs 2,000.

Global Outage Strikes Social Media Giant X

In an era when digital media predominates, the recent global outage of social media platform X caused a stir online. Users everywhere grew frustrated and curious about the cause of the disruption when they found they could not use the platform on December 21, 2023.

Reports of the outage, first picked up by Downdetector, arrived from around the world, and millions of users were affected. The impact was amplified because X, a major player in the social media ecosystem, has become an essential part of people's everyday lives.

One significant aspect of the outage was the diverse range of issues users faced. According to reports, users experienced difficulties in tweeting, accessing their timelines, and even logging into their accounts. The widespread nature of these problems hinted at a major technical glitch rather than localized issues.

TechCrunch reported that the outage lasted for several hours, leaving users in limbo and sparking speculation about the root cause. The incident raised questions about the platform's reliability and prompted discussions about the broader implications of such outages in an interconnected digital world.

The platform's official response was prompt, acknowledging the inconvenience and assuring users that its technical teams were actively working to fix the issue. Few details about the precise cause were offered, however, leaving both users and specialists in the dark.

Experts weighed in on the outage, emphasizing the need for robust infrastructure and redundancy measures to prevent such widespread disruptions in the future. The incident served as a reminder of the vulnerabilities inherent in our dependence on centralized digital platforms.

In the aftermath of the outage, Social Media Platform X released a formal apology, expressing regret for the inconvenience caused to users. The incident prompted discussions about the need for transparency from tech giants when addressing such disruptions and the importance of contingency plans to mitigate the impact on users.

Amidst the growing digitalization of our world, incidents such as the worldwide disruption of Social Media Platform X highlight the vulnerability of our interdependent networks. It's a wake-up call for users and tech businesses alike to put resilience and transparency first when faced with unanticipated obstacles in the digital space.

Tech Giants Grapple with Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that after Elon Musk rolled back X's safety policies, Russian propaganda reached a much wider audience.

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have come under intense scrutiny since the EU report, released last month, revealed that they failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine.

The study, conducted by civil society groups and published last week by the European Commission, found that clearly Kremlin-backed accounts gained further influence in early 2023 after Twitter's safety standards were dismantled.

Pro-Russian accounts garnered more than 165 million subscribers across major platforms in the first two months of 2022 and have generated more than 16 billion views since then. Few details have emerged as to whether the EU will ban Russian state media content outright. According to the EU study, X's failure to deal with disinformation would have violated the new rules had they been in effect last year.

Musk has proven less cooperative than other social media companies in limiting propaganda on his platform, although they too are finding it hard. According to the study, Telegram and Meta, the owner of Instagram and Facebook, have also made little headway in limiting Russian disinformation campaigns.

Europe has taken a far more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures to reduce risks related to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines.

Tougher rules for the world's biggest online platforms were introduced earlier this month under the EU's Digital Services Act (DSA). Platforms identified as having at least 45 million monthly active users must now take a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been in force earlier, social media companies could potentially have been fined for breaching these legal duties. The most damning findings concern X since Elon Musk's acquisition last October, after which hate and falsehoods have spread rapidly on the network.

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. Musk has argued that "all news" is propaganda to some degree.

The Kremlin has meanwhile stepped up its sophisticated information warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will need to act fast to comply with the Digital Services Act, which took effect on August 25th, before the European parliamentary elections arrive in 2024.

Under the Digital Services Act, large social media companies and search engines in the EU with at least 45 million monthly active users are now required to adopt stricter content moderation policies, proactively clamping down on hate speech and disinformation, or face heavy fines.