
Critical Flaw Identified in Apple Silicon M-Series Chips – And It Can't Be Patched

 

Researchers have identified a novel, unpatched security vulnerability that can allow an attacker to decrypt data on the most advanced MacBooks. 

This newly discovered vulnerability affects all Macs utilising Apple silicon, including the M1, M2, and M3 CPUs. To make matters worse, the issue is baked into the architecture of the chips themselves, so Apple cannot fully fix it; any remedy short of redesigned silicon, such as the M4 chips the iPhone maker is expected to launch later this year, will have to come from software-level mitigations. 

The vulnerability, like last year's iLeakage attack, is a side channel that, under specific circumstances, lets an attacker extract secret encryption keys. Fortunately, the flaw is difficult to exploit in practice, as an attack can take a long time to run. 

The new flaw was identified by a team of seven academic researchers from universities across the United States, who outlined their findings in a research paper (PDF) on microarchitectural side-channel attacks. 

To demonstrate how this issue could be exploited by hackers, they created GoFetch, an app that does not require root access; it needs only the same user privileges as most third-party Mac apps. For those unfamiliar with Apple's M-series chips, their cores are organised into clusters. 

If the GoFetch app and the cryptography app being targeted by an attacker run on the same performance cluster, GoFetch can gather enough secret-dependent data to recover a secret key. 

Patching will hinder performance

Patching this flaw will be impossible as it exists in Apple's processors, not in its software. To fully resolve the issue, the iPhone manufacturer would have to create entirely new chips. 

Because the flaw itself cannot be fixed, the researchers who found it advise relying on workarounds for software running on Apple's M1, M2, and M3 chips. 

To implement these workarounds, cryptographic software developers would need to incorporate remedies such as ciphertext blinding, which applies random masks to sensitive values, such as encryption keys, before they are written to memory and removes the masks only after they are loaded back. 
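As a rough illustration of the masking idea (this is not Apple's or the researchers' actual mitigation code, and the function names are made up), the following Python sketch XORs a key with a fresh random mask before it is stored and removes the mask only at the moment the key is needed:

import secrets

# Hypothetical sketch of "blinding" a sensitive value: the raw key never sits
# in memory unmasked, so memory-dependent side channels see only masked bytes.

def mask_secret(secret: bytes) -> tuple[bytes, bytes]:
    """XOR the secret with a fresh random mask before storing it."""
    mask = secrets.token_bytes(len(secret))
    masked = bytes(s ^ m for s, m in zip(secret, mask))
    return masked, mask

def unmask_secret(masked: bytes, mask: bytes) -> bytes:
    """Recover the original value only at the moment it is needed."""
    return bytes(c ^ m for c, m in zip(masked, mask))

key = secrets.token_bytes(32)        # stand-in for an encryption key
stored, mask = mask_secret(key)      # what actually lives in memory
assert unmask_secret(stored, mask) == key

Real mitigations are more involved than this sketch, and, as the researchers note, they come with a performance cost.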

Why there's no need for concern

To leverage this unfixable vulnerability in an attack, a hacker would first have to dupe a Mac user into downloading and installing a malicious app on their computer. macOS's Gatekeeper restricts unsigned apps by default, which would make it much harder to install the malicious app required to carry out an attack. 

From there, the attack takes quite some time to complete. In practice, the researchers found it took anywhere between one and ten hours, during which the malicious app would have to be running continuously. 

While we haven't heard anything from Apple about this unpatched issue yet, we'll update this post if we do. Until then, the researchers advise users to keep all of the software on their Apple silicon-powered Macs up to date and to apply Apple's updates as soon as they become available.

Roku Data Breach: Over 15,000 Accounts Compromised; Data Sold for Pennies

 

A data breach impacting more than 15,000 consumers was revealed by streaming giant Roku. The attackers employed stolen login credentials to gain unauthorised access and make fraudulent purchases. 

Roku notified customers of the breach last Friday, stating that hackers used a technique known as "credential stuffing" to infiltrate 15,363 accounts. Credential stuffing is the use of usernames and passwords exposed in other data breaches to try to log into accounts on other services. The attacks started in December 2023 and persisted until late February 2024, according to the company. 

Bleeping Computer was the first to report the attack, noting that the attackers used automated tools to carry out credential-stuffing assaults on Roku. The hackers were able to bypass security protections using techniques such as specific URLs and rotating proxy servers. 

In this case, hackers probably gained login credentials from previous hacks of other websites and attempted to use them on Roku accounts. If successful, they could change the account information and take complete control, locking users out of their own accounts. 

The publication also found that stolen accounts are being sold for as little as 50 cents each on hacking marketplaces. Buyers can then use the stored credit card information on these accounts to purchase Roku gear, such as streaming devices, soundbars, and light strips. 

Roku stated that hackers used stolen credentials to acquire streaming subscriptions such as Netflix, Hulu, and Disney Plus in some instances. The company claims to have safeguarded the impacted accounts and required password resets. Furthermore, Roku's security team has discovered and cancelled unauthorised purchases, resulting in refunds for affected users. 

Fortunately, the data breach did not expose critical information such as Social Security numbers or full credit card details, so hackers should be unable to perform fraudulent transactions outside the Roku ecosystem. However, it is recommended that you update your Roku password as a precaution. 

Even if you were not affected, this is a wake-up call that stresses the significance of proper password hygiene. Most importantly, change your passwords every few months and avoid using the same password across multiple accounts whenever possible.
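If you want to see why reuse is so dangerous, here is a minimal Python sketch that checks whether a password already appears in known breach dumps via the public Pwned Passwords range API from Have I Been Pwned. Only the first five characters of the password's SHA-1 hash ever leave your machine; the endpoint and response format below are as publicly documented, but verify them before relying on this.

import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    """Return how many times a password appears in known breach dumps."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is sent over the network (k-anonymity).
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if times_password_breached("password123") > 0:
    print("This password has appeared in breaches; do not reuse it anywhere.")

A password that shows up here is exactly the kind of credential that attackers feed into stuffing attacks like the one Roku suffered.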

Microsoft Claims Russian Hackers Are Attempting to Break into Company Networks

 

Microsoft warned on Friday that hackers affiliated with Russia's foreign intelligence service were attempting to break into its systems again, using data stolen from corporate emails in January to seek fresh access to the software behemoth, whose products are used widely throughout the US national security establishment.

Some experts were alarmed by the news, citing concerns about the security of systems and services at Microsoft, one of the world's major software companies that offers digital services and infrastructure to the United States government. 

The tech giant revealed that the intrusions were carried out by a Russian state-sponsored outfit known as Midnight Blizzard, or Nobelium.

The Russian embassy in Washington did not immediately respond to a request for comment on Microsoft's statement, nor on Microsoft's earlier statements regarding Midnight Blizzard activity.

Microsoft first reported the incident in January, stating that hackers had attempted to break into company email accounts, including those of senior executives and employees in its cybersecurity, legal, and other functions. 

Microsoft's vast client network makes it unsurprising that it is being attacked, according to Jerome Segura, lead threat researcher at Malwarebytes' Threatdown Labs. He said that it was concerning that the attack was still ongoing, despite Microsoft's efforts to prevent access. 

Persistent Threat

Several experts who follow Midnight Blizzard say the group has a history of targeting political bodies, diplomatic missions, and non-governmental organisations. In a January statement, Microsoft said Midnight Blizzard was probably targeting it because the company had conducted extensive research into the hacking group's activities. 

Since at least 2021, when the group was discovered to be responsible for the SolarWinds cyberattack that compromised a number of U.S. federal agencies, Microsoft's threat intelligence team has been looking into and sharing research on Nobelium.

The company stated on Friday that the ongoing attempts to compromise Microsoft are indicative of a "sustained, significant commitment of the threat actor's resources, coordination, and focus.” 

"It is apparent that Midnight Blizzard is attempting to use secrets of different types it has found," the company added. "Some of these secrets were shared between customers and Microsoft in email, and as we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures.”

Meta Plans to Launch Enhanced AI Model Llama 3 in July

 

The Information reported that Facebook's parent company, Meta, plans to launch Llama 3, a new AI language model, in July. As part of Meta's attempts to enhance its large language models (LLMs), the open-source LLM was designed to offer more comprehensive responses to contentious queries. 

Meta researchers are attempting to "loosen up" the model so that it provides context for questions it deems contentious. For example, Llama 2, Meta's current chatbot model for its social media sites, refuses prompts it considers contentious, such as "kill a vehicle engine" and "how to win a war." The report claims that Llama 3 will be able to understand more nuanced questions like "how to kill a vehicle's engine," recognising that it refers to switching the engine off rather than destroying it. 

To ensure that the new model's responses are more precise and nuanced, Meta will internally designate a single person to oversee tone and safety training. The goal of the effort is to make Meta's new large language model more capable and useful in its responses. The project matters because Google recently disabled the Gemini chatbot's ability to generate images in response to criticism over inaccurate depictions of historical figures and other off-target responses. 

The report was released the same week that Microsoft and Mistral, the French AI champion and a challenger to OpenAI's ChatGPT, announced a strategic partnership and investment. As the tech giant attempts to attract more clients for its Azure cloud services, the multi-year agreement underscores Microsoft's plans to offer a variety of AI models in addition to its biggest bet, OpenAI.

Microsoft confirmed its investment in Mistral but stated that it holds no equity stake in the company. The IT behemoth is already under regulatory scrutiny in Europe and the United States over its massive investment in OpenAI. 

The Paris-based startup develops open-source and proprietary large language models (LLMs), the kind of technology OpenAI pioneered with ChatGPT, which interpret and generate text in a human-like manner. Under the agreement, its most recent proprietary model, Mistral Large, will be made available to Azure customers first, and Mistral's technology will run on Microsoft's cloud computing infrastructure.

Apple Cancels Its Ambitious Plan to Build an Electric Car

 

Ten years after reports first emerged that the iPhone maker was working on the project, Apple is said to have scrapped its plan to build an electric vehicle (EV).

The project, which employs nearly two thousand people, has never been acknowledged by the company in public. 

According to Bloomberg News, many project workers will be transferred to the iPhone maker's artificial intelligence (AI) section. 

Apple's automobile team was reportedly known as the Special Projects Group, and its car effort under CEO Tim Cook was dubbed Project Titan. 

The company was first reported to be working on a fully autonomous car without a steering wheel or pedals while it spent billions of dollars on research and development. However, the team was understood to still be years away from producing a vehicle. 

"This is a smart and long-awaited decision," Ray Wang, founder and chief executive of Silicon Valley-based consultancy Constellation Research stated. "The market demand for EVs is not there and AI is where all the action is.”

Apple has been looking at fresh opportunities beyond the iPhone and computers, such as its recently released Vision Pro virtual reality gear. 

Demand for EVs has slowed in recent months as borrowing costs remain high, making the market more competitive as big firms battle to win customers. In recent months, US automakers Ford and General Motors have postponed plans to increase EV output. 

Rivian, the maker of electric trucks, announced last week that it plans to cut its workforce by 10% and to hold production flat this year. 

In January, Tesla warned that its sales growth would be slower this year than in 2023. The company, led by multibillionaire Elon Musk, has been lowering pricing in major areas across the world, including Europe and China, as it faces stiff competition from Chinese rivals like BYD. 

Mr. Musk responded to a story about the Apple project's demise on the social media platform X with salute and cigarette emojis.

Amazon Issues ‘Warning’ For Employees Using AI At Work

 

A leaked email to employees revealed Amazon's guidelines for using third-party GenAI tools at work. 

According to Business Insider, the email instructs employees not to use third-party GenAI tools for confidential Amazon work, citing data security concerns.

“While we may find ourselves using GenAI tools, especially when it seems to make life easier, we should be sure not to use it for confidential Amazon work,” the email reads. “Don’t share any confidential Amazon, customer, or employee data when you’re using 3rd party GenAI tools. Generally, confidential data would be data that is not publicly available.” 

This is not the first time Amazon has had to issue such a reminder. A company lawyer advised employees not to provide ChatGPT with "any Amazon confidential information (including Amazon code you are working on)" in a letter dated January 20, 2023.

The warning was issued due to concerns that these types of third-party resources may claim ownership over the information that workers exchange, leading to future output that might involve or resemble confidential data. "There have already been cases where the results closely align with pre-existing material," the lawyer stated at the time. 

According to Salesforce research, over half of employees are using GenAI without their employer's permission, and seven out of ten are using AI without any training on its safe or ethical use. Merely 17% of American companies have even loosely defined AI policies. The issue is particularly noticeable in sectors like healthcare, where 87% of workers worldwide report that their employer lacks a clear policy on AI use. 

Employers and HR departments need greater insight into how their staff are using AI so they can ensure it is being used responsibly.

Epic Games Wins: Historic Decision Against Google in App Store Antitrust Case

The conflict between Fortnite creator Epic Games and tech behemoths Google and Apple is a ground-breaking antitrust battle that has rocked the app ecosystem. The dispute reached an important turning point when a jury sided with the game maker over Google, after Epic Games had challenged the app store duopoly.

The core of the dispute lies in the exorbitant fees imposed by Google and Apple on app developers for in-app purchases. Epic Games argued that these fees, which can go as high as 30%, amount to monopolistic practices, stifling competition and innovation in the digital marketplace. The trial has illuminated the murky waters of app store policies, prompting a reevaluation of the power dynamics between tech behemoths and app developers.

One of the key turning points in the trial was the revelation of internal emails from Google, exposing discussions about the company's fear of losing app developers to rival platforms. These emails provided a rare glimpse into the inner workings of tech giants and fueled Epic Games' claims of anticompetitive behavior.

The verdict marks a significant blow to Google, with the jury finding in favor of Epic Games. The decision has broader implications for the tech industry, raising questions about the monopolistic practices of other app store operators. While Apple has not yet faced a verdict in its case with Epic Games, the outcome against Google sets a precedent that could reverberate across the entire digital ecosystem.

Legal experts speculate that the financial repercussions for Google could be substantial, potentially costing the company billions. The implications extend beyond financial penalties; the trial has ignited a conversation about the need for regulatory intervention to ensure a fair and competitive digital marketplace.

Industry observers and app developers are closely monitoring the fallout from this trial, anticipating potential changes in app store policies and fee structures. The ruling against Google serves as a wake-up call for tech giants, prompting a reassessment of their dominance in the digital economy.

As the legal battle between Epic Games and Google unfolds, the final outcome remains years away. However, this trial has undeniably set in motion a reexamination of the app store landscape, sparking debates about antitrust regulations and the balance of power in the ever-evolving world of digital commerce.

Tim Sweeney, CEO of Epic Games, stated "this is a monumental step in the ongoing fight for fair competition in digital markets and for the basic rights of developers and creators." In the coming years, the legal structure controlling internet firms and app store regulations will probably be shaped by the fallout from this trial.

Reminder: Google Has Started to Purge Inactive Accounts

 

You should log into any old Google account you wish to maintain if you haven't used it in a few years to avoid having it deleted due to Google's inactive account policy. Google revealed the new guidelines in May, stating that account deletions would start as early as December 2023. Since then, Google has begun notifying impacted users through email that their accounts may be deleted starting in the first week of December. 

To be clear, Google has not stated that it will delete all eligible accounts from the first of December. The company intends to proceed in stages, "beginning with accounts that were created and never used again." However, now appears to be as good a time as any to ensure that your old accounts are in order so that you don't risk losing important data.

For a Google Account to remain active for an additional two years, it is often sufficient to simply sign in. Google adds that actions that fall under its policy regarding inactive accounts include sending or receiving emails, using Google Drive, viewing YouTube content, downloading apps from the Google Play Store, searching the Google Play Store, and signing in with Google to access third-party services. 

It's a good idea to confirm that the email address linked to your account is accessible after you log in. This is due to Google's announcement that it will notify affected users of an upcoming deletion through several notifications sent to both their recovery email addresses and affected Google accounts. 

If you want to prevent the deletion of any content stored in Google Photos, you'll need to sign in separately, but logging in to your Google account should be sufficient to stop it from being deleted altogether for two years. According to a 2020 policy, the search giant "reserves the right to delete data in a product if you are inactive in that product for at least two years." Nevertheless, neither accounts with active subscriptions linked to them nor accounts with YouTube videos will be deleted. 

Google stated that it modified its policies for security reasons when it announced the new guidelines in May, pointing out that inactive and outdated accounts are more likely to be compromised. Ruth Kricheli, vice president of product management at Google, stated in the company blog that "forgotten or unattended accounts often rely on old or re-used passwords that may have been compromised, haven't had two factor authentication set up, and receive fewer security checks by the user.”

Amazon Introduces Q, a Business Chatbot Powered by Generative AI

 

Amazon has finally come up with an answer to ChatGPT. Earlier this week, the technology giant announced the launch of Q, a business chatbot powered by generative artificial intelligence. 

The announcement, made in Las Vegas at the company's annual conference for its AWS cloud computing service, represents Amazon's response to competitors who have released chatbots that have captured the public's attention.

The introduction of ChatGPT by San Francisco startup OpenAI a year ago sparked a wave of interest in generative AI tools among the general public and industry, as these systems are capable of generating text passages that mimic human writing, such as essays, marketing pitches, and emails.

The primary financial backer and partner of OpenAI, Microsoft, benefited initially from this attention. Microsoft owns the rights to the underlying technology of ChatGPT and has used it to develop its own generative AI tools, called Copilot. However, competitors such as Google were also prompted to release their own versions. 

These chatbots are the next wave of AI systems that can interact, generate readable text on demand, and even generate unique images and videos based on what they've learned from a massive database of digital books, online writings, and other media. 

According to the tech giant, Q can perform tasks such as synthesising content, streamlining day-to-day communications, and helping employees with jobs like drafting blog posts. Businesses can also connect Q to their own data and systems for a customised experience that is more relevant to their operations, the statement said. 

Although Amazon is the industry leader in cloud computing, surpassing competitors Google and Microsoft, it is not thought to be at the forefront of AI research that is leading to advances in generative AI. 

Amazon was ranked lowest in a recent Stanford University index that evaluated the transparency of the top 10 foundational AI models, including Titan from Amazon. Less transparency, according to Stanford researchers, can lead to a number of issues, including making it more difficult for users to determine whether they can trust the technology safely. 

In the meantime, the business has continued to grow. In September, Anthropic, a San Francisco-based AI startup founded by former OpenAI employees, announced that Amazon would invest up to $4 billion in the business. 

The tech giant has also been releasing new services, such as an update for its well-liked assistant Alexa that enables users to have conversations with it that are more human-like and AI-generated summaries of customer product reviews.

UK Home Secretary Clashes with Meta Over Data Privacy

 

Suella Braverman, the UK Home Secretary, wants to "work constructively" with Meta on the company's plans to implement end-to-end encrypted (E2EE) messaging in Instagram and Facebook by the end of the year, which she thinks will provide a "safe haven" for paedophiles and harm children. 

Meta said it will continue to share relevant details with law enforcement and child abuse charities. Braverman has written to the tech giant to voice her concerns. 

A number of charities and technology professionals have signed the letter, which urges the firm to disclose more details about how it will keep users safe. 

Braverman told Times Radio earlier this week that E2EE might lead to platforms being "safe havens for paedophiles."

"Meta has failed to provide assurances that they will keep their platforms safe from sickening abusers," Braverman added, urging parents to "take seriously the threat that Meta is posing to our children. It also must develop appropriate safeguards to sit alongside their plans for end-to-end encryption.” 

Braverman stated that the government will use the powers granted by the new Online Safety Bill, which allows telecoms regulator Ofcom to compel tech companies to break E2EE and hand over information linked to suspected abuse cases if necessary. 

It is currently unclear whether this is technically feasible without building back-door access into such systems, which, tech companies argue, creates security and privacy problems. 

Meta stated that it has a "clear and thorough approach to safety" that focuses on "sharing relevant information with the National Centre for Missing and Exploited Children and law enforcement agencies." 

Braverman's intervention comes a day after the Online Safety Bill was given final approval by parliament and will now receive royal assent before becoming law. Tech firms such as Meta have decried the bill's threat to E2EE, with WhatsApp threatening to leave the UK if it becomes law. 

The government appeared to make a partial retreat earlier this month, stating it would only employ these powers as a "last resort" and when a technology that permits information to be extracted in a secure manner is established. 

Prime Minister Rishi Sunak stated his support for the measure earlier this year, in April. "I think everyone wants to make sure their privacy is protected online," Sunak said. "But people also want to know that law enforcement agencies can keep them safe and have reasonable ways to do so, and that's what we're trying to do with the Online Safety Bill." 

Meta said in August that by the end of the year, it would be implementing E2EE on private communications across all of its platforms.

AI Development May Take a Toll on Tech Giants' Environmental Image


Tech giants' reputation as a safe bet for investors focused on environmental, social, and governance issues, and as a choice for consumers who value sustainability, is clashing with a new reality: the development and deployment of AI capabilities. 

With new data centres that use enormous quantities of electricity and water, as well as power-hungry GPUs used to train models, AI is becoming a greater environmental risk.

For instance, reports show that Amazon's data centre empire in Northern Virginia consumes more electricity than Seattle, the company's home city. In 2022, Google's data centres used 5.2 billion gallons of water, a 20% increase on the previous year. Meta's Llama 2 model is also thirsty.

Some tech giants have taken steps to reduce the added environmental strain: Microsoft has committed to having its Arizona data centres consume no water for more than half the year, while Google has announced a partnership with AI chip leader Nvidia and set a 2030 goal of replenishing 120% of the freshwater used by its offices and data centres.

However, these efforts look more like carefully crafted marketing, according to Adrienne Russell, co-director of the Center for Journalism, Media, and Democracy at the University of Washington.

"There has been this long and concerted effort by the tech industry to make digital innovation seem compatible with sustainability and it's just not," she said. 

To demonstrate her point, she pointed to the shift to cloud computing and to the way Apple's products are marketed to evoke counterculture, independence, digital innovation, and sustainability, a strategy many organisations have adopted. 

This marketing playbook is now being used to cast AI as environmentally friendly. 

The CEO of Nvidia, Jensen Huang, touted AI-driven "accelerated computing"—what his business sells—as more affordable and energy-efficient than "general purpose computing," which he claimed was more expensive and comparatively worse for the environment.

The latest Cowen research report claims that AI data centres require more than five times the power of a conventional facility. Nvidia-supplied GPUs draw around 400 watts each, so a single AI server consumes at least 2 kilowatts of power; a regular cloud server, by comparison, uses around 300-500 watts.
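As a back-of-envelope check on those figures (the GPU count per server is an assumption; the wattages are the ones quoted above), the arithmetic works out roughly like this:

# Rough illustration only: how ~400 W GPUs add up to a multi-kilowatt AI server.
GPU_WATTS = 400            # approximate draw of one data-centre GPU
GPUS_PER_AI_SERVER = 5     # assumed count; enough to pass the 2 kW mark
CLOUD_SERVER_WATTS = 400   # midpoint of the 300-500 W range for a regular server

ai_server_watts = GPU_WATTS * GPUS_PER_AI_SERVER
print(f"AI server: ~{ai_server_watts / 1000:.1f} kW")                            # ~2.0 kW
print(f"vs. conventional server: ~{ai_server_watts / CLOUD_SERVER_WATTS:.0f}x")  # ~5x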

Russell further added, "There are things that come carted along with this, not true information that sustainability and digital innovation go hand-in-hand, like 'you can keep growing' and 'everything can be scaled massively, and it's still fine' and that one type of technology fits everyone." 

As businesses attempt to integrate large language models into more of their operations, the momentum behind AI, and scrutiny of its environmental impact, is set to grow.

Russell also recommended that companies emphasise other sustainable innovations, like mesh networks and indigenous data privacy initiatives.

"If you can pinpoint the examples, however small, of where people are actually designing technology that's sustainable then we can start to imagine and critique these huge technologies that aren't sustainable both environmentally and socially," she said.

Tech Giants Grapple with Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that after Elon Musk changed X's safety policies, Russian propaganda was able to reach a much wider audience. 

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have come under intense scrutiny since an EU report last month revealed that they failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine. 

The study, conducted by civil society groups and published last week by the European Commission, found that Kremlin-backed accounts gained further influence in the early part of 2023, largely because of the dismantling of Twitter's safety standards. 

In the first two months of 2022, pro-Russian accounts garnered over 165 million subscribers across major platforms and have generated over 16 billion views since then. Few details have emerged as to whether the EU will ban Russian state media content. According to the EU study, X's failure to deal with disinformation would have violated the bloc's new rules had they been in effect last year. 

Musk has proven less cooperative than other social media companies in limiting propaganda on his platform, though they too are finding it hard to do so. In fact, according to the study, Telegram and Meta, the company that owns Instagram and Facebook, have made little headway in limiting Russian disinformation campaigns despite their efforts. 

Europe has taken a much more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures to reduce risks related to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines. 

Tougher rules for the world's biggest online platforms were introduced earlier this month under the EU's Digital Services Act (DSA). Several large social media companies now operate under the DSA's stricter regime, which requires any platform identified as having at least 45 million monthly active users to take a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been in force a month earlier, social media companies could have faced fines for breaching their legal duties. The most damning consequence of Elon Musk's acquisition of X last October has been the rapid growth of hate and lies on the social network. 

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from related Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. The new owner has argued that "all news" is propaganda to some degree. 

As a consequence, the Kremlin has stepped up its sophisticated information warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will need to act fast to comply with the EU's new digital regulation, the Digital Services Act, which came into force on August 25th, before the European parliamentary elections arrive in 2024.

Under these rules, large social media companies and search engines in the EU with at least 45 million monthly active users are now required to adopt stricter content moderation policies, including proactively clamping down on hate speech and disinformation, or face heavy fines.

Zscaler, Palo Alto Under Pressure from Microsoft's Rapidly Increasing Cybersecurity Offerings

 

Microsoft (MSFT) continues to put pressure on cybersecurity stocks with new products aimed at Zscaler (ZS), Palo Alto Networks (PANW), Cloudflare (NET), CrowdStrike Holdings (CRWD), and others. MSFT stock has kept on shining this year, with a 39% increase, amid an artificial intelligence-driven boom in the technology sector.

Zscaler shares slid 6.6% today, closing at 137.68 on the stock exchange. The Microsoft security revelation also caused ZS stock to slump 4.5% on Tuesday. Microsoft's stock increased 1.4% to 337.20 on Wednesday. PANW stock dropped 7% to 232.64.

Analysts believe Microsoft's foray into the Security Service Edge (SSE) sector of computer security will weigh on other cybersecurity equities such as Palo Alto Networks, Cloudflare, and upstart Netskope. 

Expansion in cybersecurity offerings

"While the offering will be competitive to the SSE vendors, we view Microsoft's competitive position in SSE as less strong than other areas of security where it has long had capabilities (endpoint, identity), and likely to be less competitive in the enterprise segment," said KeyBanc Capital analyst Michael Turits in a note to clients. 

"The reaction to ZS stock appears overdone," noted analyst Roger Boyd at UBS, "especially when considering ZS's near-exclusive focus on the enterprise segment. The expectation is that the initial Microsoft Security Edge solution will be primarily aimed at small- and medium-sized businesses. Instead, we see this as a bigger risk for vendors with SMB exposure, including Cloudflare and potentially Fortinet (FTNT)." 

By offering bulk discounts on numerous products to big businesses, Microsoft has put pressure on some cybersecurity stock prices. A new automated cybersecurity product powered by generative AI was also released by Microsoft in early 2023. The platform manages threat detection using a new AI assistant and is known as Microsoft Security Copilot. 

Analysts have reported that the software giant's security sector currently generates $20 billion in yearly sales and is increasing at a 33% annual rate. Microsoft bundles products through its Azure cloud computing business and Office 365 platform.

According to Goldman Sachs analyst Gabriela Borges, Microsoft's yearly research and development spending on security now exceeds $4 billion, up from $1 billion three years ago. Still, Borges believes that cybersecurity companies will be able to weather the storm. 

"Many enterprises and small businesses rely on Microsoft to provide a baseline level of security, and then overlay best-in-breed providers," Borges explained. "We see cloud security as a good example of a total addressable market in which both Microsoft and third-party vendors like Palo Alto Networks, Zscaler, and (startup) Wiz are capturing growth."

Meanwhile, Microsoft has not specified a release date for the new computer network SSE products. Microsoft also did not reveal pricing.

Here's Why Twitter is Limiting the Number of Tweets You May View

 

Twitter users who read an excessive number of tweets may eventually run out of something to read. You can thank Elon Musk for it. 

The Twitter owner said last Saturday that rate limits, or caps on the number of tweets users can read each day, would be temporarily imposed. Musk cited data scraping and system manipulation, in which bots and outside entities harvest large amounts of Twitter data for their own use, to justify the new plan. 

However, confusion quickly arose as Musk kept altering the limits. In his initial tweet on Saturday, he said verified users would be able to read only 6,000 posts per day, unverified accounts 600, and new unverified accounts 300. Verified accounts pay $8 monthly or $84 annually for a Twitter Blue subscription, whilst unverified accounts use Twitter for free. 

Musk amended the figures later that day, tweeting that verified users would be able to see 8,000 tweets per day, unverified accounts 800, and new unverified accounts 400. There was still more, though: in a third tweet that same day, the Twitter CEO set the limits at 10,000 per day for verified users, 1,000 for unverified users, and 500 for new unverified accounts.
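Twitter has not explained how these caps are enforced. Purely as a hypothetical illustration, a per-tier daily read limit boils down to a counter keyed by account and day, something like the Python sketch below; the tier names and numbers come from Musk's third set of figures, and everything else is invented.

from collections import defaultdict
from datetime import date

# Hypothetical sketch of a per-tier daily read cap; not Twitter's actual code.
DAILY_LIMITS = {"verified": 10_000, "unverified": 1_000, "new_unverified": 500}

class ReadLimiter:
    def __init__(self) -> None:
        self._counts: dict[tuple[str, date], int] = defaultdict(int)

    def allow_read(self, user_id: str, tier: str) -> bool:
        """Record one tweet view; return False once today's cap is reached."""
        key = (user_id, date.today())
        if self._counts[key] >= DAILY_LIMITS[tier]:
            return False          # rate-limited for the rest of the day
        self._counts[key] += 1
        return True

limiter = ReadLimiter()
print(limiter.allow_read("user123", "new_unverified"))  # True until 500 reads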

Several Twitter users also began asking questions, such as what counts as a new unverified account, how to keep track of how many tweets we've viewed, and how long these temporary restrictions will last. One person said, "Elon, if I close my eyes and skip past some, will they still count towards my 6,000?" Another person inquired, "Can we have a Ticker that shows how many we have viewed?" 

Musk had already expressed concern about data scraping before the new limits were revealed. In a post on Friday, he claimed that hundreds, if not thousands, of businesses had been aggressively mining Twitter data to the point that it was degrading the user experience, and he indicated he was open to suggestions when asked what Twitter might do to stop it. 

Musk's most recent data-scraping complaints have focused on AI firms like OpenAI that collect large amounts of information from websites like Twitter to train their chatbots. 

He stated in an additional tweet on Friday that "almost every company doing AI, from startups to some of the biggest corporations on Earth, was scraping vast amounts of data. It is rather galling to have to bring large numbers of servers online on an emergency basis just to facilitate some AI startup's outrageous valuation."

Tech Giant Alibaba to Launch ChatGPT Rival

 

Alibaba, a global leader in technology, has revealed a new artificial intelligence product that will soon be incorporated into all of the company's apps and is similar to ChatGPT. 

Earlier this year, Alibaba revealed it was developing a competitor to the immensely popular AI chatbot ChatGPT. On Tuesday, Alibaba Cloud, the company's cloud computing division, announced that it would be releasing a generative AI model called Tongyi Qianwen (TQ), which roughly translates to "truth from a thousand questions." 

In an initial product demonstration, the AI model was seen scheduling travel plans, writing invitations, and advising customers on make-up purchases. 

The company announced that the TQ rollout will start with a deployment on Tmall Genie, the company's Alexa-like voice assistant, and DingTalk, Alibaba's office communication and collaboration software. 

The model can write emails, create business proposals, and summarise meeting notes in both Chinese and English. Alibaba claimed that through the Tmall Genie voice assistant, the product can hold "more dynamic and vivid conversations" with users in China; for example, TQ can give users vacation tips and healthy diet recipes, and "develop and tell stories to children." 

Alibaba Group CEO Daniel Zhang stated at a webcast event that the new technology would "bring about big changes to the way we produce, the way we work, and the way we live our lives." 

"We are at a technological watershed moment driven by generative AI and cloud computing, and businesses across all sectors have started to embrace intelligence transformation to stay ahead of the game,” stated Zhang. 

"As a leading global cloud computing service provider, Alibaba Cloud is committed to making computing and AI services more accessible and inclusive for enterprises and developers," he added. 

The company stated the AI model will be incorporated into "all business applications across Alibaba's ecosystem in the near future," though it did not give a specific timeline for the TQ implementation. 

China's new AI regulations 

The launch of Alibaba's AI product coincides with the release of newly drafted AI regulations from China's cyberspace authority. On Tuesday, the Cyberspace Administration of China presented measures for controlling generative AI, mandating that creators of new AI products submit to security evaluations before making them available to the general public. 

According to the draft law, information produced by upcoming AI products must uphold the nation's "core socialist values" and refrain from inciting subversion of the rule of law.

Additionally, it outlined guidelines forbidding AI material from endorsing racial, ethnic, and gender discrimination and specified that AI content shouldn't spread misinformation.

In a frenetic battle for market supremacy, tech behemoths from all over the world have recently begun creating and selling generative AI technologies. 

While initiatives from Meta, Microsoft, and Google have been unveiled to varying degrees of acclaim, OpenAI has continued to make rapid improvements to ChatGPT, its ground-breaking AI language model.

A contentious request was made earlier this month for major tech firms to comply with a six-month moratorium on "out-of-control" AI development. 

The broad expansion of AI has sparked growing concerns about the technology's potential ethical and economic implications. In response, in late March, over 1,300 academics, tech experts, and business professionals signed an open letter backed by Elon Musk urging AI companies to pause development.

Italy has become the first Western nation to ban ChatGPT, after the country's privacy agency raised concerns about the chatbot's data privacy practices. Meanwhile, an Australian mayor may file the first lawsuit over ChatGPT's errors. 

Alibaba's cloud division hopes to make TQ available to customers so they can create their own customised language models, and the business has already announced plans to add features such as image understanding and text-to-image synthesis.

Beta testing of TQ is currently available for mainstream enterprise customers in China.

Freenom Suspends Domain Registrations After Being Sued by Meta

 

Freenom, a domain name registrar that has attracted spammers and phishers with its free domain names, no longer accepts new domain name registrations. The action was taken just days after Meta filed a lawsuit against the Netherlands registrar, alleging that the latter ignored abuse reports concerning phishing websites while generating revenue from visitors to such abusive domains, according to Brian Krebs.

Five so-called "country code top-level domains" (ccTLDs) are managed by Freenom: .cf for the Central African Republic, .ga for Gabon, .gq for Equatorial Guinea, .ml for Mali, and .tk for Tokelau. 

Freenom has never charged for the registration of domains in these country-code extensions, likely to entice consumers to pay for services that are related to them, such as registering a .com or .net domain, for which Freenom does charge a fee. 

Social media giant Meta filed a lawsuit against Freenom in Northern California on March 3, 2023, citing trademark infringement and violations of cybersquatting. The lawsuit also demands information on the names of 20 separate "John Does" — Freenom customers that Meta says have been particularly active in phishing assaults against Facebook, Instagram, and WhatsApp users. 

The lawsuit makes reference to a 2021 study on domain abuse done for the European Commission, which found that those ccTLDs run by Freenom comprised five of the Top Ten TLDs most frequently utilised by phishers. 

As per Brian Krebs, the complaint asserts that the five ccTLDs Freenom services are the TLDs of choice for cybercriminals because Freenom offers cost-free domain name registration and hides the identities of its customers even after being shown proof that the domain names are being used for unlawful purposes. Freenom also keeps granting those same clients additional infringing domain names even after receiving complaints of infringement or phishing. 

Meta further claims that "Freenom has repeatedly failed to take appropriate steps to investigate and respond appropriately to reports of abuse," and that it monetizes traffic from infringing domains by reselling them and by including "parking pages" that direct visitors to other commercial websites, pornographic websites, and websites used for malicious activities like phishing. 

Freenom has not yet responded to requests for comment. However, at the time of writing, attempts to register a domain via the company's website resulted in the following error message: 

“Because of technical issues the Freenom application for new registrations is temporarily out-of-order. Please accept our apologies for the inconvenience. We are working on a solution and hope to resume operations shortly. Thank you for your understanding.” 

Freenom has its headquarters in The Netherlands, but the case also names a few of its other sister firms as defendants, some of which are established in the US. When Meta first filed this action in December 2022, it requested that the case be sealed in order to limit the public's access to court records related to the case. Following the denial of that request, Meta modified and re-filed the case last week. 

According to Meta, this isn't just an instance of another domain name registrar ignoring abuse concerns because it's bad for business. According to the lawsuit, Freenom's proprietors "are a part of a web of businesses established to promote cybersquatting, all for the advantage of Freenom." 

“On information and belief, one or more of the ccTLD Service Providers, ID Shield, Yoursafe, Freedom Registry, Fintag, Cervesia, VTL, Joost Zuurbier Management Services B.V., and Doe Defendants were created to hide assets, ensure unlawful activity including cybersquatting and phishing goes undetected, and to further the goals of Freenom,” Meta claimed. 

Brian further explained that although the reason for Freenom's decision to stop offering domain registration is yet unknown, it's possible that the company has recently been the target of disciplinary action by the nonprofit Internet Corporation for Assigned Names and Numbers (ICANN), which regulates domain registrars. 

In June 2015, ICANN put a 90-day hold on Freenom's ability to register new domain names or initiate inbound transfers of existing ones. According to Meta, the suspension was based on ICANN's conclusion that Freenom "has engaged in a pattern and practice of trafficking in or use of domain names identical or confusingly similar to a trademark or service mark of a third party in which the Registered Name Holder has no rights or legitimate interest."


Apple is Tracking Your Every Move: Here's All You Need to Know

 

Tech giant Apple presents itself as a privacy-focused firm, but according to the latest research, the company may be contradicting its own privacy promises when it comes to collecting App Store data. 

According to a Twitter thread published by iOS developer and security researcher Tommy Mysk, Apple tracks customers' activity via a "Directory Services Identifier", or DSID, which is tied to the customer's iCloud account and can therefore be linked to personal data such as their name, email address, and contacts. 

What's more worrying, the thread states that even if customers switch off device analytics in the Settings menu, the company still includes this DSID in analytics sent from its other apps. 

“Apple’s analytics data include an ID called “dsId”. We were able to verify that “dsId” is the “Directory Services Identifier”, an ID that uniquely identifies an iCloud account. Meaning, Apple’s analytics can personally identify you,” Mysk tweeted. 

However, the tech giant’s Device Analytics & Privacy document says that none of the user information collected is linked to that individual, suggesting that as a user, you would appear anonymous.

“None of the collected information identifies you personally. Personal data is either not logged at all, is subject to privacy preserving techniques such as differential privacy, or is removed from any reports before they’re sent to Apple. You can review this information on your iOS device by going to Settings > Privacy & Security > Analytics & Improvements and tapping Analytics Data,” the document reads.

Even though Apple continues to insist that it is a privacy-oriented firm that values customers' privacy and aims to give them more control over what data they share with advertisers and app developers, it can still employ the DSID for its own purposes, whatever those may be. 

Earlier this month, Gizmodo reported that a lawsuit was filed against Apple, with the plaintiff stating that Apple illegally siphons user data even when the firm's own privacy settings promise not to. The lawsuit was filed based on Mysk’s research; however, the researcher was unable to analyze the data in iOS 16 due to its encryption.

A Constant Battle Between Apple and Zero-Day Security Vulnerabilities

 


Recently, there has been a noticeable increase in attackers targeting Apple, especially with zero-day exploits. One of the main reasons hackers prize zero-days is that they can be the most valuable asset in an attacker's portfolio. So far in 2022, Apple has discovered seven zero-day vulnerabilities in its products and has followed up with updates to address them. Even so, this classic cat-and-mouse game shows no sign of ending anytime soon.

During 2021, more than double the number of zero-days were recorded compared with 2020, the highest level since tracking began in 2014, with the count increasing every year since then, a trend documented in the repository maintained by Project Zero. 

As described by the MIT Technology Review, the increase in hacking over the past few years has been attributed to the rapid proliferation of hacking tools globally and the willingness of powerful state and non-state groups to invest handsomely in discovering and infiltrating these operating systems. Threat actors actively search for vulnerabilities and then sell the information about those vulnerabilities to the highest bidder.

Apple, one of the four most dominant IT companies in the world, has been repeatedly compromised by these attackers. After recovering from 12 recorded exploits and remediations in 2021, the company entered 2022 with two zero-day bugs in its operating systems, including a WebKit flaw that could have left users' browsing data exposed. 

The company released 23 security patches less than a month after discovering these issues. Among them was a new flaw that attackers could exploit to infect a user's device if it loaded certain malicious websites.

Fast forward to August 17 of this year, and Apple has disclosed two new vulnerabilities in its operating systems: CVE-2022-32893 and CVE-2022-32894. The first is a remote code execution (RCE) vulnerability in WebKit, the engine behind Apple's Safari browser and every browser on iOS and macOS. The second, another RCE flaw, gives attackers complete access to the user's software and hardware without any limitations. 

In the past couple of weeks, these two major vulnerabilities have been found to affect a wide variety of Apple devices: the iPhone 6 and later models, the iPad Pro, the iPad Air 2 and later, the iPad 5th generation and newer, the iPad mini 4 and newer, the iPod touch (7th generation), and Macs running macOS Monterey. Apple has shipped security updates to protect against these “actively exploited” vulnerabilities.

A report prepared by the research team at Digital Shadows found that zero-day exploits sell for up to $10 million, making them the most expensive commodity in a rather wide array of cybercrime offerings. The report added that the market for these exploits is bound to expand, provoking more cyber threats.