
Sensitive AI Key Leak: A Wave of Security Concerns in U.S. Government Circles

A concerning security mistake involving a U.S. government employee has raised alarms over how powerful artificial intelligence tools are being handled. A developer working for the federal Department of Government Efficiency (DOGE) reportedly made a critical error by accidentally sharing a private access key connected to xAI, an artificial intelligence company linked to Elon Musk.

The leak was first reported after a programming script uploaded to GitHub, a public code-sharing platform, was found to contain login credentials tied to xAI’s systems. These credentials reportedly unlocked access to at least 52 of the company’s internal AI models, including Grok-4, one of xAI’s most advanced tools, similar in capability to OpenAI’s GPT-4.
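To illustrate how this kind of leak typically happens, here is a minimal sketch; the key value and variable names are hypothetical. The first pattern hardcodes a credential into a script, so any public upload of the file exposes the key; the second reads it from an environment variable, keeping the secret out of the source tree and commit history.

```python
import os

# Anti-pattern: a credential hardcoded into a script. The moment this file
# is pushed to a public GitHub repository, the key is exposed to anyone.
API_KEY = "xai-EXAMPLE-not-a-real-key"  # hypothetical placeholder value

# Safer pattern: read the credential from the environment at runtime,
# so the secret never appears in the code or its version history.
api_key = os.environ.get("XAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the XAI_API_KEY environment variable first")
```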

The employee, identified in reports as 25-year-old Marko Elez, had top-level access to various government platforms and databases. These include systems used by sensitive departments such as Homeland Security, the Justice Department, and the Social Security Administration.

The key remained active and publicly visible for a period of time before being taken down. This has sparked concerns that others may have accessed or copied the credentials while they were exposed.


Why It Matters

Security experts say this isn’t just a one-off mistake; it’s a sign that powerful AI systems may be handled too carelessly, even by insiders with government clearance. If the leaked key had been misused before removal, bad actors could have gained access to internal tools or extracted confidential data.

Adding to the concern, xAI has not yet issued a public response, and there’s no confirmation that the key has been fully disabled.

The leak also brings attention to DOGE’s track record. The agency, reportedly established to improve government tech systems, has seen past incidents involving poor internal cybersecurity practices. Elez himself has been previously linked to issues around unprofessional behavior online and mishandling of sensitive information.

Cybersecurity professionals say this breach is another reminder of the risks tied to mixing government projects with fast-moving private AI ventures. Philippe Caturegli, a cybersecurity expert, said the leak raises deeper questions about how sensitive data is managed behind closed doors.


What Comes Next

While no immediate harm to the public has been reported, the situation highlights the need for stricter rules around how digital credentials are stored, especially when dealing with cutting-edge AI technologies.

Experts are calling for better oversight, stronger internal protocols, and more accountability when it comes to government use of private AI tools.

For now, this case serves as a cautionary tale: even one small error, like uploading a file without double-checking its contents, can open up major vulnerabilities in systems meant to be secure.
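In that spirit, a lightweight secret scan run before every commit can catch this class of mistake. The sketch below is illustrative only; the regular expressions are assumptions about common key shapes, not a vetted ruleset, and production teams typically rely on dedicated scanners such as gitleaks or truffleHog.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns for common credential shapes (assumed, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"xai-[A-Za-z0-9]{20,}"),  # hypothetical xAI-style key format
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),  # key = "..."
]

def scan(path: Path) -> list[str]:
    """Return suspicious lines found in one file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    findings = [hit for arg in sys.argv[1:] for hit in scan(Path(arg))]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # non-zero exit can block a git commit
```

Wired into a pre-commit hook, a non-zero exit stops the commit before a key ever reaches a public repository.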

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


ChatGPT has become a transformative artificial intelligence tool, widely adopted by individuals and businesses seeking to improve their operations. Developed by OpenAI, the platform helps users draft compelling emails, develop creative content, and conduct complex data analysis, streamlining a wide range of workflows.

OpenAI continues to enhance ChatGPT through new integrations and advanced features that make it easier to fold into an organisation's daily workflows; however, understanding the platform's pricing models is vital for any organisation that aims to use it efficiently. A business or entrepreneur in the United Kingdom weighing ChatGPT's subscription options may also find that managing international payments adds a further challenge, especially when exchange rates fluctuate or conversion fees are hidden.

In this context, the Wise Business multi-currency card offers a practical way to maintain financial control and cost transparency. The card lets companies hold and spend in more than 40 currencies, so they can settle subscription payments without incurring excessive currency conversion charges, making it easier to manage budgets while adopting cutting-edge technology.

OpenAI has recently introduced a suite of premium features aimed at enhancing the ChatGPT experience for subscribers. Paid users now have access to advanced reasoning models, including o1 and o3, which support more sophisticated analysis and problem-solving.

The subscription offers more than enhanced reasoning: it also includes an upgraded voice mode that makes conversational interactions more natural, and improved memory that lets the AI retain context over long periods. Subscribers additionally gain a powerful coding assistant designed to help developers automate workflows and speed up software development.

To expand creative possibilities further, OpenAI has raised token limits, allowing longer inputs and outputs and letting users generate more images without interruption. Subscribers also get expedited image generation via a priority queue, for faster turnaround during high-demand periods.

Paid accounts also keep full access to the latest models and enjoy consistent performance: they are not switched to less capable models when server capacity is strained, a limitation free users may still face. While OpenAI has put considerable effort into enriching the paid tier, free users have not been left out. GPT-4o has effectively replaced the older GPT-4 model, giving complimentary accounts more capable technology without a fallback downgrade.

Free users retain access to basic image generation tools, though without the priority queueing that paid subscribers receive. Reflecting its commitment to making AI broadly accessible, OpenAI has also made features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge.

The free version remains a compelling option for people who use the software only sporadically, perhaps to write the occasional email, do light research, or create simple images. Individuals or organisations that frequently hit usage limits, or find themselves waiting for quotas to reset, may find upgrading to a paid plan well worth it, as it unlocks uninterrupted access and advanced capabilities.

To make ChatGPT a more versatile and deeply integrated virtual assistant, OpenAI has introduced a feature called Connectors. Connectors let ChatGPT interface with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from those sources in real time while responding to user queries.

With Connectors, the company moves toward a more personal and contextually relevant experience for users. Ahead of a family vacation, for example, a user can instruct ChatGPT to scan their Gmail account and compile all correspondence about the trip, streamlining travel planning instead of combing through emails manually.

This level of integration brings ChatGPT closer to rivals such as Google's Gemini, which benefits from Google's ownership of popular services like Gmail and Calendar. With Connectors, individuals and businesses can redefine how they engage with AI tools. By giving ChatGPT secure access to personal or organisational data residing across multiple services, OpenAI intends to build a comprehensive digital assistant that anticipates needs, surfaces critical insights, and streamlines decision-making.

Demand for highly customised, intelligent assistance is growing, so other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity: an AI capable of understanding, organising, and acting upon every aspect of a user's digital life.

For all its convenience and efficiency, this approach also underlines the need for robust data security and transparency, so that personal information stays protected as these powerful integrations become mainstream. On its official X (formerly Twitter) account, OpenAI recently announced Connectors for Google Drive, Dropbox, SharePoint, and Box, available in ChatGPT outside the Deep Research environment.

As part of this expansion, users can link their cloud storage accounts directly to ChatGPT, letting the AI retrieve and process their personal and professional data and draw on it when composing responses. OpenAI's announcement describes the functionality as "perfect for adding your own context to your ChatGPT during your daily work," underscoring the company's ambition to make ChatGPT more intelligent and contextually aware.

Access to these newly released Connectors, however, is limited by subscription tier and geography. They are currently exclusive to ChatGPT Pro subscribers, who pay $200 per month, and available worldwide except in the European Economic Area (EEA), Switzerland, and the United Kingdom. Users on lower tiers, such as ChatGPT Plus at $20 per month, or those based in the excluded regions, cannot use these integrations at this time.

Such staggered rollouts typically reflect broader regulatory-compliance challenges in the EU, where stricter data protection rules and AI governance frameworks often delay availability. Outside the Deep Research environment, the selection of Connectors remains relatively limited; within Deep Research, integration support is considerably more extensive.

On ChatGPT Plus and Pro plans, users of Deep Research can access a much broader array of integrations, for example Outlook, Teams, Gmail, Google Drive, and Linear, again subject to some regional restrictions. Organisations on Team, Enterprise, or Education plans gain additional Deep Research connectors, including SharePoint, Dropbox, and Box.

Additionally, OpenAI now offers the Model Context Protocol (MCP), a framework that lets workspace administrators build customised Connectors for their own needs. By integrating ChatGPT with proprietary data systems, organisations can create secure, tailored integrations, enabling highly specialised use cases for internal workflows and knowledge management.
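As a rough illustration of what a custom connector can involve, below is a minimal sketch of an MCP server exposing a single lookup tool. It assumes the MCP Python SDK's FastMCP interface; the server name, the tool, and the data it returns are hypothetical stand-ins for a proprietary knowledge base.

```python
# Minimal MCP server sketch (assumes the MCP Python SDK: pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-knowledge")  # hypothetical workspace connector name

# Stand-in for a proprietary data system; a real connector would query an
# internal database or document store instead of this dictionary.
FAKE_KB = {
    "vpn": "VPN setup guide lives at wiki/networking/vpn.",
    "expenses": "Expense reports are filed through the finance portal.",
}

@mcp.tool()
def lookup_policy(topic: str) -> str:
    """Return the internal policy note for a topic, if one exists."""
    return FAKE_KB.get(topic.lower(), "No policy note found for that topic.")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP-capable client
```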

As companies adopt AI solutions more widely, the catalogue of Connectors is expected to expand rapidly, giving users more ways to bring external data sources into their conversations. The dynamics of this market favour technology giants like Google, whose Gemini assistant integrates seamlessly across the company's own services, including its search engine.

OpenAI's strategy, by contrast, relies on building a network of third-party integrations to create a similar assistant experience. The new Connectors are now generally accessible in the ChatGPT interface, though users may need to refresh their browsers or update the app to activate them.

The continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. Organisations and professionals evaluating ChatGPT should take a strategic approach as generative AI capabilities mature, weighing the advantages of deeper integration against operational needs, budget limitations, and the regulatory considerations likely to affect their decisions.

The introduction of Connectors and the advanced subscription tiers point clearly toward more personalised, dynamic AI assistance that can ingest and contextualise diverse data sources. That evolution makes it increasingly important to establish strong data-governance frameworks, clear access controls, and adherence to privacy regulations.

Companies that invest early in these capabilities will be better placed to stay competitive in an increasingly automated landscape, harnessing AI's efficiencies while setting clear policies that balance innovation with accountability. The organisations that actively develop internal expertise, test carefully selected integrations, and cultivate a culture of responsible AI use will be best prepared to realise the technology's full potential and maintain a competitive edge for years to come.

Lazarus Gang Targets Job Seekers to Install Malware


The North Korean hackers behind Contagious Interview are trapping job seekers in the cryptocurrency sector with the popular ClickFix social-engineering tactic, aiming to deploy a previously undocumented Go-based backdoor, known as GolangGhost, on Windows and macOS systems.

Hackers lure job seekers

The latest attack, potentially part of a larger campaign, goes by the codename ClickFake Interview, according to French cybersecurity company Sekoia. Also tracked as DeceptiveDevelopment, DEV#POPPER, and Famous Chollima, Contagious Interview has been active since December 2022, though it was only publicly reported in late 2023.

The attack uses legitimate job interview sites to promote the ClickFix tactic and deploy Windows and macOS backdoors, said Sekoia researchers Amaury G., Coline Chavane, and Felix Aimé, who attributed the campaign to the notorious Lazarus Group.

Lazarus involved

One major highlight of the campaign is that it mainly targets centralized finance businesses by mimicking firms such as Kraken, Circle, BlockFi, Coinbase, KuCoin, Robinhood, Tether, and Bybit. Traditionally, Lazarus targeted decentralized finance (DeFi) entities.

Attack tactic explained

Like Operation Dream Job, Contagious Interview also uses fake job offers as traps to lure potential victims and trick them into downloading malware to steal sensitive data and cryptocurrency. The victims are approached via LinkedIn or X to schedule a video interview and asked to download malware-laced video conference software that triggers the infection process. 

How the Lazarus ClickFix attack was found

Security expert Taylor Monahan first reported the Lazarus Group’s use of ClickFix, noting that the attack chains led to the deployment of a malware strain called FERRET that delivered the Golang backdoor. In this campaign, victims are prompted to use a video interview service, ‘Willo,’ and complete a self-recorded video assessment.

The whole process is carefully built to gain users’ trust and “proceeds smoothly until the user is asked to enable their camera,” Sekoia said. At this stage, an “error message appears, indicating that the user needs to download a driver to fix the issue. This is where the operator employs the ClickFix technique," Sekoia added.

Different attack tactics for Windows and macOS users

The prompts given to victims vary by operating system. On Windows, victims are asked to open the Command Prompt and run a curl command that executes a Visual Basic Script (VBS) file, which then launches a batch script to run GolangGhost. macOS victims are prompted to open the Terminal app and run a curl command that executes a malicious shell script; that script runs a second shell script, which deploys a stealer module called FROSTYFERRET (aka ChromeUpdateAlert) and the backdoor.
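Because the lure hinges on persuading the victim to paste a one-liner into a terminal, one simple defence is to screen copied commands before running them. The sketch below is purely illustrative; its patterns are assumptions about the shapes reported in this campaign (curl piped into a shell, curl dropping a VBS or batch file), not a vetted detection ruleset.

```python
import re

# Illustrative red-flag patterns for pasted one-liners; real endpoint
# protection relies on far richer signals than a few regexes.
SUSPICIOUS = [
    re.compile(r"curl\s[^|;&]*\|\s*(sh|bash|zsh)"),   # curl piped into a shell
    re.compile(r"curl\s[^|;&]*-o\s+\S+\.(vbs|bat)"),  # curl dropping VBS/batch
    re.compile(r"mshta|cscript|wscript", re.I),       # common Windows script hosts
]

def looks_risky(command: str) -> bool:
    """Return True when a pasted command matches a known-bad shape."""
    return any(p.search(command) for p in SUSPICIOUS)

if __name__ == "__main__":
    pasted = input("Command to check: ")
    print("RISKY - do not run" if looks_risky(pasted) else "No obvious red flags")
```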

Polish Space Agency "POLSA" Suffers Breach; System Offline


Systems offline to control breach

The Polish Space Agency (POLSA) suffered a cyberattack last week, it confirmed on X. The agency didn’t disclose further details, except that it “immediately disconnected” its network after finding that its systems had been hacked. The social media post indicates the step was taken to protect data.

US News noted that “Warsaw has repeatedly accused Moscow of attempting to destabilise Poland because of its role in supplying military aid to its neighbour Ukraine, allegations Russia has dismissed.” POLSA has remained offline since then to contain the breach of its IT infrastructure.

Incident reported to authorities

After discovering the attack, POLSA reported the breach to the relevant authorities and started an investigation to assess its impact. Regarding the incident, POLSA said “relevant services and institutions have been informed.”

POLSA didn’t reveal the nature of the security attack and has not attributed the breach to any attacker. "In order to secure data after the hack, the POLSA network was immediately disconnected from the Internet. We will keep you updated."

How did the attack happen?

While no further information has been released since Sunday, internal sources told The Register that the “attack appears to be related to an internal email compromise” and that staff “are being told to use phones for communication instead.”

POLSA is currently working with the Polish Military Computer Security Incident Response Team (CSIRT MON) and the Polish Computer Security Incident Response Team (CSIRT NASK) to patch affected services. 

Who is responsible?

Commenting on the incident, Poland's Minister of Digital Affairs, Krzysztof Gawkowski, said the “systems under attack were secured. CSIRT NASK, together with CSIRT MON, supports POLSA in activities aimed at restoring the operational functioning of the Agency.” On finding the source, he said, “Intensive operational activities are also underway to identify who is behind the cyberattack. We will publish further information on this matter on an ongoing basis.”

About POLSA

A European Space Agency (ESA) member, POLSA was established in September 2014. It aims to support the Polish space industry and strengthen Polish defense capabilities via satellite systems. The agency also helps Polish entrepreneurs obtain funding from ESA and works with the EU, other ESA members, and other countries on space exploration projects.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training


OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

X, Elon Musk’s social media company, has been accused of unlawfully feeding its users’ personal information into its artificial intelligence systems without their consent, according to a privacy campaign group based in Vienna. The complaint was filed by the group, known as Noyb.

In early September, Ireland's Data Protection Commission (DPC) brought legal proceedings against X over the data collection practices it uses to train its artificial intelligence systems. A series of privacy complaints has since been filed against X, the company formerly known as Twitter, after it emerged the platform was using data from European users to train the chatbot behind its Grok AI product without their consent.

The issue came to light when a social media user discovered late last month that X had quietly begun processing regional users’ posts for AI training. TechCrunch reported that the Irish Data Protection Commission (DPC), which is responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since said all users can choose whether Grok, the platform’s AI chatbot, can access their public posts.

To opt out, a user must uncheck a box in their privacy settings. Even so, Judge Leonie Reynolds observed that X had evidently begun processing EU users' data to train its AI systems on May 7, yet only offered the opt-out option from July 16. She added that not all users had access to the feature when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Max Schrems, who heads NOYB, successfully challenged Meta's transfers of EU user data to the US as violating the EU's stringent GDPR. That case ultimately saw Meta fined €1.2 billion, alongside considerable logistical headaches, and in June, following NOYB complaints, Meta was forced to pause the use of EU users’ data to train its AI systems.

NOYB's central complaint is that X did not obtain European Union users' consent before using their data to train Grok. A NOYB spokesperson told The Daily Upside that the company could face a fine of up to 4% of its annual revenue over these complaints. Such penalties would sting all the more because X has far less money to play with than Meta does:

X is no longer a publicly traded company, which makes it difficult to gauge its cash reserves. What is known is that when Musk bought the company in 2022, the deal loaded it with roughly $25 billion in debt at a very high leverage ratio. In the years since, the banks that helped finance the transaction have had an increasingly difficult time offloading that debt, and Fidelity has recently marked down its stake, hinting at how the firm might be valued.

As of last March, Fidelity valued its stake at 67% less than at the time of the acquisition. Even before Musk bought Twitter, the company had struggled for years to remain consistently profitable, a small fish in a big tech pond.

A key goal of NOYB is a full-scale investigation into how X managed to train its generative AI model, Grok, without any consultation with its users. Companies that interact directly with end users, Schrems told The Information, need only display a yes/no prompt before using their data; they already do this regularly for many other purposes, so the same would be entirely possible for AI training.

The legal action comes only days before Grok 2 is set to launch its beta version. In recent years, major tech companies have faced ethical challenges over training on large volumes of user data. In June 2024, it was widely reported that complaints had been filed against Meta in 11 European countries over a new privacy policy that signalled the company's intent to use each account's data to train its AI models.

The GDPR is intended to protect European citizens against precisely such unexpected uses of their data, which could affect their right to privacy and their freedom from intrusion. Noyb contends that X's reliance on 'legitimate interest' as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is mandatory in comparable cases involving data used to target ads.

The complaint also raises concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access one's collected personal data. OpenAI's ChatGPT has drawn wide criticism over many of the same GDPR-related concerns.

X's URL Blunder Sparks Security Concerns

X, the social media platform formerly known as Twitter, recently grappled with a significant security flaw within its iOS app. The issue involved an automatic alteration of Twitter.com links to X.com links within Xeets, causing widespread concern among users. While the intention behind this change was to maintain brand consistency, the execution resulted in potential security vulnerabilities.

The flaw originated from a feature that indiscriminately rewrote "twitter.com" in displayed URLs as "x.com," regardless of context. Legitimate URLs merely containing that string were also affected, leading to situations where users unknowingly promoted malicious websites. For example, a link to netflitwitter[.]com would be displayed as netflix.com yet actually send users to a potentially harmful site.
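The failure mode is easy to reproduce. The sketch below reconstructs it from the reported behaviour (the helper names are mine, not X's code): a blind substring replacement mangles look-alike domains, while a host-aware version only rewrites links whose hostname really is twitter.com.

```python
from urllib.parse import urlparse

def naive_rewrite(url: str) -> str:
    """The reported bug in miniature: blind substring replacement."""
    return url.replace("twitter.com", "x.com")

def host_aware_rewrite(url: str) -> str:
    """Rewrite only when the hostname is twitter.com or a subdomain of it."""
    host = urlparse(url).hostname or ""
    if host == "twitter.com" or host.endswith(".twitter.com"):
        return url.replace("twitter.com", "x.com", 1)
    return url

url = "https://netflitwitter.com/watch"  # look-alike domain from the article
print(naive_rewrite(url))       # https://netflix.com/watch  (misleading)
print(host_aware_rewrite(url))  # https://netflitwitter.com/watch (unchanged)
```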

The implications of this flaw were significant, as it could have facilitated phishing campaigns or distributed malware under the guise of reputable brands such as Netflix or Roblox. Despite the severity of the issue, X chose not to address it publicly, likely in an attempt to mitigate negative attention.

The glitch persisted for at least nine hours, possibly longer, before it was rectified. Subsequent tests confirmed that URLs now display correctly, indicating the issue has been resolved. Notably, the automatic change did not apply when the domain was written in all caps.

This incident underscores the importance of thorough testing and quality assurance in software development, particularly for platforms with large user bases. It serves as a reminder for users to exercise caution when clicking on links, even if they appear to be from trusted sources.

To better understand how platforms like X operate and maintain user trust, it's essential to consider the broader context of content personalization. Profiles on X are utilised to tailor content presentation, potentially reordering material to better match individual interests. This customization considers users' activity across various platforms, reflecting their interests and characteristics. While content personalization enhances user experience, incidents like the recent security flaw highlight the importance of balancing personalization with user privacy and security concerns.


Scam: Chennai Woman Exposes Cyber Crime Involving Aadhaar Card, Courier, Drugs


Woman discloses scam, alerts netizens

A marketing professional from Chennai has helped others avoid a fresh cybercrime strategy by drawing attention to it. Lavanya Mohan shared her experience on X (formerly Twitter), describing a call claiming that someone was using her Aadhaar card to smuggle drugs across international borders.

Mohan said she had recently read news reports about two Gurugram residents who were conned out of almost Rs 2 crore by cybercriminals posing as FedEx executives and cybercrime branch officials, who called people and claimed their Aadhaar cards were being used to smuggle drugs into Thailand.

Woman reveals the "Aadhaar card misused for drug smuggling" scam

Mohan described her conversation with the fraudsters in a thread posted on her X account, @lavsmohan. The caller, impersonating a customer service agent from a delivery company (FedEx, in Mohan's case), had concocted a story about a package supposedly shipped from Thailand with drugs, booked using her Aadhaar ID.

The fraudster supplied still more phony details, such as shipment information, a forged FIR number, and even a fake employee ID, to heighten the impression of urgency and validity. The caller then warned her about "rising scams" and offered to put her in touch with a customs official to settle the matter.

In her posts, Mohan recounted the caller's script: "Ma'am, if you don't go ahead with the complaint, your Aadhar will continue to be misused so let me connect you right away with the cyber crime branch." "Threatening consequences + urgency = scam," she added.

The Gurugram incident served as a reminder

Mohan revealed she had learned of the Gurugram news two weeks earlier, when two men lost Rs 1.3 crore and Rs 56 lakh, respectively, to scammers.

But Mohan held her ground and refused to succumb to the conman's manipulations. She declined to speak with the caller any further and withheld all personal information, telling them she would wait for police officers to contact her, and hung up. She had spotted the warning signs: unsolicited calls, threats of legal consequences, and attempts to pressure her into acting quickly.

In response to the incident, Mohan wrote: "The amount of information he had to provide me is concerning. Their approach is to put you in contact with the police, who then assert that your ID has connections to the criminal underworld." She further stated, "People are losing their hard-earned money and they can't be blamed because these scams are growing more sophisticated."

FedEx clears the air

After the cybercrimes run in FedEx's name came to light on Wednesday, the company clarified in a statement that it only calls customers about shipped packages when the customer has specifically requested it.

The company's statement went on to caution that anyone receiving suspicious calls or messages requesting personal information should notify local law enforcement right away and report the incident to the cybercrime authorities.

A similar instance of a "sophisticated" cyber scam was brought to light by Bollywood actress Anjali Patil, who has starred in films including Newton and Mirzya. The actor was defrauded of Rs 5.79 lakh in a widely publicised "drug parcel scam" in December 2023.


Corporate Accountability: Tech Titans Address the Menace of Misleading AI in Elections

On Friday, 20 leading technology companies, including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon, and Adobe, pledged to take proactive steps to prevent deceptive uses of artificial intelligence from interfering with elections around the world.

According to a press release, the 20 participating companies are committed to "developing tools to detect and address online distributions of artificial intelligence content that is intended to deceive voters."

The companies also committed to educating voters about the use of artificial intelligence and providing transparency in elections around the world. The head of the Munich Security Conference, where the accord was announced, lauded the agreement as a critical step towards improving election integrity, increasing societal resilience, and creating trustworthy technology practices.

In 2024, over 4 billion people are expected to be eligible to cast ballots in over 40 countries. A growing number of experts warn that easy-to-use generative AI tools could be exploited by bad actors to sway votes in those elections.

Generative AI tools can produce images, videos, and audio from simple text prompts, and some of these services lack the security measures needed to prevent users from creating content that depicts politicians or celebrities saying or doing things they never said or did.

The tech industry "agreement" targets AI-generated images, video, and audio that could deceive voters about candidates, election officials, and the voting process. Notably, however, it does not call for an outright ban on such content.

While the agreement is intended to show unity among platforms with billions of users, it mostly outlines efforts already underway, such as those designed to identify and label AI-generated content.

Concern is growing about how artificial intelligence software could mislead voters and maliciously misrepresent candidates, especially in an election year that will see millions of people head to the polls in countries all around the world.

AI-generated audio has already been used to impersonate President Biden ahead of New Hampshire's January primary in an attempt to discourage Democrats from voting, and, in Slovakia last September, to purportedly show a leading candidate claiming to have rigged the election.

The agreement, endorsed by a consortium of 20 corporations, encompasses entities involved in the creation and dissemination of AI-generated content, such as OpenAI, Anthropic, and Adobe, among others. Notably, Eleven Labs, whose voice replication technology is suspected to have been utilized in fabricating the false Biden audio, is among the signatories. 

Social media platforms including Meta, TikTok, and X, formerly known as Twitter, have also joined the accord. Nick Clegg, Meta's President of Global Affairs, emphasized the imperative for collective action within the industry, citing the pervasive threat posed by AI. 

The accord delineates a comprehensive set of principles aimed at combating deceptive election-related content, advocating for transparent disclosure of origins and heightened public awareness. Specifically addressing AI-generated audio, video, and imagery, the accord targets content falsifying the appearance, voice, or conduct of political figures, as well as disseminating misinformation about electoral processes. 

Acknowledged as a pivotal stride in fortifying digital communities against detrimental AI content, the accord underscores a collaborative effort complementing individual corporate initiatives. As per the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," signatories commit to developing and deploying technologies to mitigate risks associated with deceptive AI election content, including the potential utilization of open-source solutions where applicable.

 Notably, Adobe, Amazon, Arm, Google, IBM, and Microsoft, alongside others, have lent their support to the accord, as confirmed in the latest statement.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

Several bank servers were reported down on Tuesday, affecting Unified Payments Interface (UPI) transactions throughout the country. Users took to X (formerly Twitter) to complain that they could not complete transactions, and the National Payments Corporation of India (NPCI) confirmed in a post that an outage had led to UPI failures at some banks.

Downdetector, a website-monitoring service, received reports of ongoing outages affecting UPI as well as Kotak Mahindra Bank, HDFC Bank, and State Bank of India (SBI), among others, while users flooded social media with details of the disruptions.

The outage appears to have affected UPI transactions made through several banks: users reported server problems when paying via HDFC Bank, Bank of Baroda, Mumbai Bank, SBI, and Kotak Mahindra Bank, among others.

Several users also reported difficulty with the “Fund Transfer” process at their respective banks due to technical problems. UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally, by about 1 per cent, from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent over the same period, reaching 12.20 billion in January against 12.02 billion in December.

In November, there were 11.4 billion transactions worth Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
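As a quick sanity check, the month-on-month growth rates can be recomputed from the totals quoted above (a small illustrative calculation using only the figures in this article):

```python
# Recompute month-on-month UPI growth from the reported NPCI figures.
value_jan, value_dec = 18.41, 18.23    # value, Rs trillion
volume_jan, volume_dec = 12.20, 12.02  # volume, billions of transactions

value_growth = (value_jan / value_dec - 1) * 100
volume_growth = (volume_jan / volume_dec - 1) * 100

print(f"Value growth:  {value_growth:.1f}%")   # ~1.0%, matching "about 1 per cent"
print(f"Volume growth: {volume_growth:.1f}%")  # ~1.5%, matching the reported rise
```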

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time window on the first transaction between two individuals for amounts above an adjustable threshold.

The Indian Express reported, citing government sources, that the proposed plan would impose a four-hour window on the first digital payment between two users for transactions exceeding Rs 2,000.

Elon Musk's X Steps Up: Pledges Legal Funds for Workers Dealing with Unfair Bosses

Elon Musk recently said that X, his social media platform formerly known as Twitter, would cover users' legal bills and sue on their behalf if they are treated unfairly by their employers for posting or liking something on the platform.

Musk has shared no further details about what he considers "unfair treatment" by employers or how he will vet users seeking legal counsel.

In a follow-up, he stated that the company would fund the legal fees with no cap on the amount. However, the company has not clarified who qualifies for legal support or how users' eligibility will be screened.

Over the years, ordinary users, celebrities, and many other public figures have faced controversy with their employers over posts, likes, or reposts they have made on the platform.

Musk announced earlier in the day that a fight between him and Meta CEO Mark Zuckerberg would be streamed live on the microblogging platform. The two tech titans had agreed to a cage fight last month after each accepted a challenge from the other.

Musk said the Zuck v Musk fight would be live-streamed on X, with all proceeds going to a charity for veterans. In late October, the tech billionaire shared a graph showing the latest count and stated that X had reached a new record for monthly users.

X had reached 540 million monthly users at the end of October, he added. In January, the Daily Wire reported that Kara Lynne, a streamer at a gaming company, was fired from her job for following the controversial X account "Libs of TikTok".

The figures come amid organisational changes at the company and an attempt to boost falling advertising revenue. In July, Musk retired the blue bird logo that had been familiar for 17 years, launching a new logo and name, X, and committing to building an "all-in-one app".

A few weeks ago, Musk stated that the platform had negative cash flow because advertising revenue had dropped nearly 50 per cent and the company carried a large amount of debt, adding that an expected improvement in June had failed to materialise.

Since taking control of the company, Musk has allowed many previously banned users to return, including former President Donald Trump. He has also weakened content moderation policies and fired most of the team responsible for policing hate speech and other potentially harmful content on the site.

Musk's professed commitment to free speech has not been without consequences for those who exercise it: several journalists who wrote about his company were temporarily suspended, and an account that tracked his private jet's flight path using publicly available data was banned.

Several reports indicate Musk also publicly fired an employee who criticised him on the platform and laid off colleagues who criticised him in private. Since launching his initial bid to acquire Twitter early last year, Musk has campaigned against what he calls a "woke mind virus", sharing several posts opposing social causes such as transgender rights.

In June, Musk also tweeted that "cis" and "cisgender" would now be considered slurs on the app. More broadly, employee terminations over posts or public endorsements of offensive content on social media have been rising, and not only over controversial social issues but for a wide range of other reasons.

Californian tech worker Michelle Serna, for instance, was fired from her company in May after posting a TikTok video while a meeting was taking place in the background. Meanwhile, the tycoon who purchased Twitter for $44 billion last October has seen the company's advertising business collapse, in part because of concerns over inadequate moderation of hate speech in recent months and the return of previously banned accounts.

According to Musk, his changes are motivated by a desire for free expression, and he has often lashed out at what he views as threats to it from shifting cultural sensibilities. The CCDH, a non-profit organisation focused on countering the spread of hate speech on the Internet, has found that hate speech has flourished on the platform; X disputes this finding and is suing the organisation over it.

Trump's Twitter account was reinstated by Musk in December, but the former US president has yet to resume using the platform. Trump was banned in early 2021 over his role in the January 6 attack on the Capitol, in which his supporters tried unsuccessfully to overturn the result of the 2020 election. A US media outlet reports that X recently reinstated Kanye West's account, eight months after he was suspended for posting an antisemitic comment.