
Unveiling Vulnerabilities in Microsoft PlayReady DRM: Impact on Streaming Platforms

 

In a meticulous research endeavor, Security Explorations, a division of AG Security Research, embarked on an exhaustive analysis of Microsoft's Warbird and Protected Media Path (PMP) technologies. The culmination of this investigation has unearthed critical deficiencies within the security architecture of Microsoft's PlayReady Digital Rights Management (DRM) system, posing profound implications for content security across a spectrum of streaming platforms. 

At the core of Microsoft's content protection ecosystem lies Protected Media Path (PMP), an amalgamation of cryptographic protocols, code integrity checks, and authentication mechanisms designed to fortify content security within Windows OS environments. In tandem, Microsoft Warbird endeavors to erect formidable barriers against reverse engineering attempts, encrypting and obfuscating binaries to thwart unauthorized access. 

However, despite the multifaceted security measures embedded within these technologies, Security Explorations' research has illuminated vulnerabilities within PMP components. These vulnerabilities lay bare the underbelly of Microsoft's DRM infrastructure, allowing for the extraction of plaintext content keys essential for the decryption of high-definition content. The ramifications of such exploits extend far and wide, implicating prominent streaming platforms including Canal+ Online, Netflix, HBO Max, Amazon Prime Video, and Sky Showtime. 

Of particular concern is the vulnerability's prevalence on Windows 10 systems lacking Hardware DRM capability, a demographic constituting a significant portion of the user base due to compatibility constraints with Windows 11. The exploitation of Software DRM implementations prevalent in these environments underscores the urgent need for remedial action. While Microsoft's PlayReady team has been apprised of these findings, Security Explorations has refrained from disclosing detailed technical information through the MSRC channel, citing proprietary concerns and the imperative to safeguard intellectual property. 

Beyond the immediate ramifications for individual platforms, the research underscores broader implications for the content security landscape. With the burgeoning digital streaming industry valued at $544 billion, the imperative of ensuring robust DRM solutions cannot be overstated. The compromise of plaintext content keys not only imperils individual platforms but also undermines consumer trust and revenue streams, posing a systemic risk to the digital content ecosystem. 

Mitigating these vulnerabilities demands a concerted effort from industry stakeholders. Streaming platforms may consider transitioning to alternative DRM technologies or implementing interim safeguards to mitigate the risk of exploitation. However, the challenge lies in striking a delicate balance between security measures and user accessibility, ensuring seamless functionality without compromising content security. The research findings underscore the imperative for collaborative efforts between security researchers and industry stakeholders to fortify DRM ecosystems against evolving threats. 
Moreover, they highlight the pressing need for enhanced regulatory scrutiny and industry standards to bolster content security in the digital age. 

In light of these revelations, streaming platforms must reassess their security posture and implement robust measures to safeguard against unauthorized access and content piracy. Failure to address these vulnerabilities not only jeopardizes consumer confidence but also undermines the viability of streaming platforms in an increasingly interconnected world. As the digital landscape continues to evolve, proactive measures are indispensable to safeguarding content integrity and preserving the sanctity of digital content distribution channels. Only through collective vigilance and concerted action can the industry fortify itself against the ever-looming specter of security threats.

Unveiling the Threat: Microsoft's Executive Speaks Out on State-Backed Hacking

 


In a recent interview with Bloomberg, Charlie Bell, Microsoft's executive vice president of security, described the company as a prime target for foreign state-sponsored hackers. "It has certainly been true over the years that they are particularly good at collecting data over time, gathering momentum over time, and being able to leverage that momentum into more successes over time," Bell said of their capabilities. 

Microsoft launched the Secure Future Initiative last November to protect its users' data, following a series of cybersecurity breaches associated with foreign governments. A notable example was the May intrusion in which Chinese hackers gained access to customer email accounts after breaking into Microsoft's systems through a malware program. 

Approximately 30 million customers' data were compromised in the summer of 2023 as a result of hacking by a Russian-allied group known as Anonymous Sudan. Even though Microsoft has implemented several security initiatives over the past few years, breaches still occur. 

In other incidents, hackers broke into the email accounts of Microsoft employees, including executives, exposing vulnerabilities even further. The group behind these attacks has been named Midnight Blizzard and is backed by Russia. 

A subsequent report from the US Cyber Safety Review Board (CSRB) determined that the breach was made possible by a series of failures within Microsoft's software and security systems. 

The CSRB report concluded that Microsoft's security culture is insufficient to safeguard its customers' information and business operations, and it calls for a significant overhaul of the corporate culture, given the company's pivotal role in the technology ecosystem and the massive trust that customers place in it. 

The company has taken steps to strengthen its security framework, removing over 700,000 obsolete applications and 1.7 million outdated accounts from its systems. It has also stepped up efforts to roll out multi-factor authentication across more than one million accounts and enhanced its protections against the theft of employee identities by hackers. 

Critics of Microsoft's security infrastructure argue that these actions do not go far enough in correcting the company's fundamental security flaws. More than a month has passed since Microsoft last responded to such criticism. Meanwhile, a report released by Microsoft recently shows that Chinese state-sponsored hackers are using artificial intelligence (AI) to spread misinformation in advance of the upcoming presidential election, adding another layer of concern to the cybersecurity landscape. 

This makes it imperative to keep developing robust defensive strategies to counter the ever-evolving tactics of cyber adversaries and to protect democratic processes and national security at a time when both are vulnerable to cyber-attacks.

Secrets of SharePoint Security: New Techniques to Evade Detection

 



According to a recent discovery by Varonis Threat Labs, two new techniques have emerged that pose a significant threat to data security within SharePoint, a widely used platform for file management. These techniques enable users to evade detection and retrieve files without triggering alarm bells in audit logs.

Technique 1: Open in App Method

The first technique leverages SharePoint's "open in app" feature, allowing users to access and download files while leaving behind only access events in the file's audit log. This method, which can be executed manually or through automated scripts, enables rapid exfiltration of multiple files without raising suspicion.

Technique 2: SkyDriveSync User-Agent

The second technique exploits the User-Agent for Microsoft SkyDriveSync, disguising file downloads as sync events rather than standard downloads. By mislabeling events, threat actors can bypass detection tools and policies, making their activity harder to track.
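
To make the second technique concrete, here is a minimal detection sketch that scans exported audit records for file operations whose user agent marks them as sync traffic and flags users with an unusually high volume of such events. The record fields (`UserAgent`, `Operation`, `UserId`) and the threshold are illustrative assumptions, not Microsoft's or Varonis's actual schema or rules.

```python
from collections import Counter

# Hypothetical, simplified audit records; field names are illustrative only.
SYNC_USER_AGENTS = ("SkyDriveSync",)   # user-agent substring used to relabel downloads
SYNC_EVENT_THRESHOLD = 100             # flag users above this many sync-labeled file events

def flag_sync_disguised_downloads(records):
    """Count file events tagged with a sync user agent per user and return
    the users whose volume looks more like bulk exfiltration than syncing."""
    per_user = Counter()
    for rec in records:
        is_file_event = rec.get("Operation", "").startswith("File")
        is_sync_agent = any(s in rec.get("UserAgent", "") for s in SYNC_USER_AGENTS)
        if is_file_event and is_sync_agent:
            per_user[rec.get("UserId", "unknown")] += 1
    return {user: count for user, count in per_user.items()
            if count >= SYNC_EVENT_THRESHOLD}

if __name__ == "__main__":
    sample = [{"UserId": "alice", "Operation": "FileDownloaded", "UserAgent": "SkyDriveSync"}] * 150
    sample += [{"UserId": "bob", "Operation": "FileAccessed", "UserAgent": "Mozilla/5.0"}]
    print(flag_sync_disguised_downloads(sample))   # {'alice': 150}
```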

Implications for Security

These techniques pose a significant challenge to traditional security tools such as cloud access security brokers and data loss prevention systems. By hiding downloads as less suspicious access and sync events, threat actors can circumvent detection measures and potentially exfiltrate sensitive data unnoticed.

Microsoft's Response

Despite Varonis disclosing these methods to Microsoft, the tech giant has designated them as a "moderate" security concern and has not taken immediate action to address them. As a result, these vulnerabilities remain in SharePoint deployments, leaving organisations vulnerable to exploitation.

Recommendations for Organisations

To alleviate the risk posed by these techniques, organisations are advised to closely monitor access events in their SharePoint and OneDrive audit logs. Varonis recommends leveraging User and Entity Behavior Analytics (UEBA) and AI features to detect and stop suspicious activities, such as mass file access.

What Are the Risks?

While SharePoint and OneDrive are essential tools for facilitating file access in organisations, misconfigured permissions and access controls can inadvertently expose sensitive data to unauthorised users. Threat actors often exploit these misconfigurations to exfiltrate data, posing a significant risk to organisations across various industries.

Detection and Prevention Strategies

To detect and prevent unauthorised data exfiltration, organisations should implement detection rules that consider behavioural patterns, including frequency and volume of sync activity, unusual device usage, and synchronisation of sensitive folders. By analysing these parameters, organisations can identify and mitigate potential threats before they escalate.
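
As a rough sketch of such a rule, the snippet below buckets each user's events into fixed time windows and flags bursts of activity as well as devices not previously seen for that user. The event structure, window size, and thresholds are assumptions for illustration; in practice this logic would live inside a UEBA or SIEM product tuned against baseline behaviour.

```python
from collections import defaultdict
from datetime import datetime

WINDOW_SECONDS = 600    # 10-minute buckets (illustrative)
BURST_THRESHOLD = 50    # events per user per bucket treated as anomalous

def detect_anomalies(events):
    """events: dicts with 'user', 'timestamp' (ISO 8601), and 'device'.
    Returns (burst_alerts, new_device_alerts)."""
    buckets = defaultdict(int)
    known_devices = defaultdict(set)
    new_device_alerts = []

    for ev in sorted(events, key=lambda e: e["timestamp"]):
        bucket = int(datetime.fromisoformat(ev["timestamp"]).timestamp()) // WINDOW_SECONDS
        buckets[(ev["user"], bucket)] += 1

        # Naive "unusual device" check: alert on any device not seen before for this
        # user, ignoring the user's very first device.
        if ev["device"] not in known_devices[ev["user"]]:
            if known_devices[ev["user"]]:
                new_device_alerts.append((ev["user"], ev["device"]))
            known_devices[ev["user"]].add(ev["device"])

    burst_alerts = [(user, count) for (user, _), count in buckets.items()
                    if count >= BURST_THRESHOLD]
    return burst_alerts, new_device_alerts
```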




Microsoft's Priva Platform: Revolutionizing Enterprise Data Privacy and Compliance

 

Microsoft has taken a significant step forward in the realm of enterprise data privacy and compliance with a major expansion of its Priva platform. With the introduction of five new automated products, Microsoft aims to assist organizations worldwide in navigating the ever-evolving landscape of privacy regulations. 

In today's world, the importance of prioritizing data privacy for businesses cannot be overstated. There is a growing demand from individuals for transparency and control over their personal data, while governments are implementing stricter laws to regulate data usage, such as the AI Accountability Act. Paul Brightmore, principal group program manager for Microsoft’s Governance and Privacy Platform, highlighted the challenges faced by organizations, noting a common reactive approach to privacy management. 

The new Priva products are designed to shift organizations from reactive to proactive data privacy operations through automation and comprehensive risk assessment. Leveraging AI technology, these offerings aim to provide complete visibility into an organization’s entire data estate, regardless of its location. 

Brightmore emphasized the capabilities of Priva in handling data requests from individuals and ensuring compliance across various data sources. The expanded Priva family includes Privacy Assessments, Privacy Risk Management, Tracker Scanning, Consent Management, and Subject Rights Requests. These products automate compliance audits, detect privacy violations, monitor web tracking technologies, manage user consent, and handle data access requests at scale, respectively. 

Brightmore highlighted the importance of Privacy by Design principles and emphasized the continuous updating of Priva's automated risk management features to address emerging data privacy risks. Microsoft's move into the enterprise AI governance space with Priva follows its recent disagreement with AI ethics leaders over responsibility assignment practices in its AI copilot product. 

However, Priva's AI capabilities for sensitive data identification could raise concerns among privacy advocates. Brightmore referenced Microsoft's commitment to protecting customer privacy in the AI era through technologies like privacy sandboxing and federated analytics. With fines for privacy violations increasing annually, solutions like Priva are becoming essential for data-driven organizations. 

Microsoft strategically positions Priva as a comprehensive privacy governance solution for the enterprise, aiming to make privacy a fundamental aspect of its product stack. By tightly integrating these capabilities into the Microsoft cloud, the company seeks to establish privacy as a key driver of revenue across its offerings. 

However, integrating disparate privacy tools under one umbrella poses significant challenges, and Microsoft's track record in this area is mixed. Privacy-native startups may prove more agile in this regard. Nonetheless, Priva's seamless integration with workplace applications like Teams, Outlook, and Word could be its key differentiator, ensuring widespread adoption and usage among employees. 

Microsoft's Priva platform represents a significant advancement in enterprise data privacy and compliance. With its suite of automated solutions, Microsoft aims to empower organizations to navigate complex privacy regulations effectively while maintaining transparency and accountability in data usage.

Critical Security Alert Released After Malicious Code Found in XZ Utils

 

On Friday, Red Hat issued a high-priority security alert regarding a discovery related to two versions of a widely-used data compression library called XZ Utils (formerly known as LZMA Utils). It was found that these specific versions of the library contained malicious code intentionally inserted by unauthorized parties. 

This code was designed with the malicious intent of allowing remote access to systems without authorization. This unauthorized access can lead to serious security threats to individuals and organizations utilizing these compromised versions of the library, potentially leading to data breaches or other malicious activities. 

The discovery and reporting of the issue have been attributed to Microsoft security researcher Andres Freund. It was revealed that the malicious code, which was heavily obfuscated, was introduced through a sequence of four commits made to the Tukaani Project on GitHub. These commits were attributed to a user named Jia Tan (JiaT75). 

What Is XZ Utils Used For? 

XZ is a compression tool and library widely utilized on Unix-like systems such as Linux. It is renowned for its ability to significantly reduce file sizes while maintaining fast decompression speeds. This compression is achieved through the implementation of the LZMA (Lempel-Ziv-Markov chain algorithm) compression algorithm, which is well-regarded for its efficient compression ratios. 

Let’s Understand the Severity of the Attack 

The breach has garnered a critical CVSS score of 10.0, indicating the most severe level of threat. This vulnerability has been found to impact XZ Utils versions 5.6.0 and 5.6.1, which were released on February 24 and March 9, respectively. 

The Common Vulnerability Scoring System (CVSS) is a widely used tool in the cybersecurity sector, offering a standardized approach to evaluate the gravity of security vulnerabilities found in computer systems. Its main objective is to aid cybersecurity experts in prioritizing the resolution of these vulnerabilities based on their urgency. 
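
For reference, CVSS v3 maps numeric base scores onto qualitative severity bands, which is how a 10.0 ends up described as the most severe level of threat. The small helper below encodes those standard bands as a sanity check.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(10.0))   # the XZ Utils backdoor scores 'Critical'
```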

"Through a series of complex obfuscations, the liblzma build process extracts a prebuilt object file from a disguised test file existing in the source code, which is then used to modify specific functions in the liblzma code," an IBM subsidiary reported. 

Additionally, Red Hat clarified that while no versions of Red Hat Enterprise Linux (RHEL) are affected by this security flaw, evidence indicates successful injections within xz 5.6.x versions designed for Debian unstable (Sid). It is also noted that other Linux distributions may potentially be impacted by this vulnerability. 

In response to the security breach, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has taken action by issuing its own alert.  "CISA and the open source community are responding to reports of malicious code being embedded in XZ Utils versions 5.6.0 and 5.6.1. This activity was assigned CVE-2024-3094. XZ Utils is data compression software and may be present in Linux distributions. The malicious code may allow unauthorized access to affected systems".  

CISA is advising users to downgrade their XZ Utils installations to a version unaffected by the compromise. Specifically, they recommend reverting to an uncompromised version such as XZ Utils 5.4.6 Stable.
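
A quick way to act on that advice is to check which XZ Utils release a host is actually running before deciding whether to downgrade. The sketch below, which assumes the `xz` binary is on the PATH and reports its version in the usual "xz (XZ Utils) X.Y.Z" form, flags the two compromised releases.

```python
import re
import shutil
import subprocess

AFFECTED = {"5.6.0", "5.6.1"}   # releases carrying the malicious code (CVE-2024-3094)

def installed_xz_version():
    """Return the installed XZ Utils version string, or None if xz is not available."""
    if shutil.which("xz") is None:
        return None
    output = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", output)
    return match.group(1) if match else None

version = installed_xz_version()
if version is None:
    print("xz not found on this system")
elif version in AFFECTED:
    print(f"xz {version} is affected by CVE-2024-3094; downgrade to 5.4.6 Stable")
else:
    print(f"xz {version} is not one of the known-compromised releases")
```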

Russian Hackers Breach Microsoft's Security: What You Need to Know

 


In a recent set of events, reports have surfaced of a significant cyberattack on Microsoft, allegedly orchestrated by Russian hackers. This breach, attributed to a group known as Midnight Blizzard or Nobelium, has raised serious concerns among cybersecurity experts and the public alike.

The attack targeted Microsoft's source code repositories, exposing sensitive company information and communications with partners across various sectors, including government, defence, and business. While Microsoft assures that no customer-facing systems were compromised, the breach has far-reaching implications for national and international security.

Cybersecurity experts warn of the potential for increased zero-day vulnerabilities, which are undiscovered security flaws that can be exploited by hackers. Access to source code provides attackers with a "master key" to infiltrate systems, posing a significant threat to organisations and users worldwide.

The severity of the breach has prompted strong reactions from industry professionals. Ariel Parnes, COO of Mitiga, describes the incident as "severe," emphasising the critical importance of source code security in the digital age. Shawn Waldman, CEO of Secure Cyber Defense, condemns the attack as a "worst-case scenario," highlighting the broader implications for national security.

The compromised data includes emails of senior leadership, confidential communications with partners, and cryptographic secrets such as passwords and authentication keys. Larry Whiteside Jr., a cybersecurity expert, warns of potential compliance complications for Microsoft users and partners, as regulators scrutinise the breach's impact on data protection laws.

As the fallout from the breach unfolds, there are growing concerns about the emergence of zero-day vulnerabilities and the need for proactive defence measures. Experts stress the importance of threat hunting and incident response planning to mitigate the risks posed by sophisticated cyber threats.

The incident underscores the ongoing battle in the global cyber warfare landscape, where even tech giants like Microsoft are not immune to attacks. With cybercriminals increasingly targeting supply chains, the need for enhanced security measures has never been more urgent.

The breach of Microsoft's systems serves as a wake-up call for individuals and organisations alike. It highlights the ever-present threat of cyberattacks in an increasingly interconnected world and underscores the need for enhanced cybersecurity measures. By staying vigilant and proactive, establishments can mitigate the risks posed by cyber threats and protect their digital assets from exploitation.

As the field of cybersecurity keeps changing and developing, stakeholders must work together to address the underlying threats and ensure the protection of critical infrastructure and data. This recent breach of Microsoft's security by Russian hackers has raised serious concerns about the vulnerability of digital systems and the need for robust cybersecurity measures.


Latest SEC Cyber Rules Raise 'Head Scratching' Breach Disclosures


SEC disclosure mandate

The Securities and Exchange Commission's recently implemented cybersecurity regulations have prompted some breach disclosures from publicly traded firms, such as Microsoft and Hewlett Packard Enterprise.

Among other things, the guidelines mandate that a "material" cybersecurity event be reported to the SEC within four business days of its classification as such. The SEC says the rules were meant to give investors timely and "decision-useful" cybersecurity information; nevertheless, experts point out that several of the early disclosures included only rudimentary breach details, leaving significant questions unanswered.

"Some of these disclosures, I think, are question-begging," said Scott Kimpel, a partner at Hunton Andrews Kurth. "They just provide us with superficial, newsworthy details about the occurrence."

SEC disclosure for companies: What does it mean?

Companies must assess an incident's materiality "without unreasonable delay following discovery and, if the incident is determined material, file an Item 1.05 Form 8-K generally within four business days of such determination," according to SEC regulations.

Companies must disclose the incident's "material impact or reasonably likely material impact," as well as the material aspects of its nature, scope, and timing.

"Norms have not yet been established because we're early in the process," stated Richard Marcus, head of information security at cloud-based risk management startup AuditBoard. Therefore, Companies ask themselves, "How much can I get away with here? What exactly are my stockholders hoping to get? I believe that businesses are benchmarking against each other quite a bit."

Without mentioning any particular businesses, Kimpel claimed that some have submitted puzzling incident disclosures, in which they discuss a breach that hasn't yet had a major impact on their business operations and might or might not ultimately have a material impact on their financial situation. 

According to Kimpel, one argument is that these businesses might be disclosing a breach that they considered significant from a "qualitative" as opposed to a "quantitative" standpoint. Financial injury is one type of quantitative material impact, he said, while reputational harm and the possibility of future legal or regulatory problems are among the "almost endless list of possibilities" that make up qualitative material consequences.

Small companies exempted

Except for smaller reporting companies, all covered firms had to abide by the revised breach disclosure requirements as of December 18. As of June 5, smaller reporting organizations will have to comply with them.

Microsoft revealed in an Item 1.05 Form 8-K filing in January that a "nation-state associated threat actor" had obtained access to and exfiltrated data from a "very small percentage" of employee email accounts, comprising staff members in the company's legal, cybersecurity, and senior leadership teams, among other departments.

Among the businesses that have used similar language in breach disclosures submitted to the SEC following the new cybersecurity regulations are HP Enterprise and Prudential Financial.

What next?

As the Wall Street Journal reported in January, Microsoft notified the SEC of the breach even though, at the time of its regulatory filing, the company's investigation had not revealed any consequences that would have exceeded the agency's material damage criteria. The corporation stated, "But because the law is so new, we wanted to make sure we honor the spirit of the law," as stated in the Journal article.

According to Kimpel, SEC filings may create investor confusion when businesses disclose breaches that don't seem to be as serious as they claim, sometimes without explaining their actions.

Protect Yourself: Tips to Avoid Becoming the Next Target of a Microsoft Hack

 

The realm of cybersecurity, particularly within the Microsoft 365 environment, is in a constant state of evolution. Recent events involving major tech firms and cybersecurity entities underscore a crucial truth: grasping security best practices for Microsoft 365 isn't synonymous with effectively putting them into action.

According to Kaspersky, 2023 witnessed a significant 53% surge in cyber threats targeting documents, notably Microsoft Office documents, on a daily basis. Attackers increasingly employed riskier tactics, such as surreptitiously infiltrating systems through backdoors. 

For instance, in one scenario, a non-production test account lacking multifactor authentication (2FA/MFA) fell victim to exploitation, while in another case, a backdoor was implanted into a file, initiating a supply chain attack. These incidents serve as stark reminders that even seemingly low-risk accounts and trusted updates within Microsoft 365 can serve as conduits for security breaches if not adequately safeguarded and monitored.

Despite the profound expertise within organizations, these targeted entities succumbed to advanced cyberattacks, highlighting the pressing need for meticulous implementation of security protocols within the Microsoft 365 realm.

The domain of artificial intelligence (AI) has experienced exponential growth in recent years, permeating nearly every aspect of technology. In this era dominated by AI and large language models (LLMs), sophisticated AI models can enhance cloud security measures. AI is rapidly becoming standard practice, compelling organizations to integrate it into their frameworks. By fine-tuning AI algorithms with specialized domain knowledge, organizations can gain actionable insights and predictive capabilities to preemptively detect and address potential security threats. These proactive strategies empower organizations to effectively safeguard their digital assets.

However, the proliferation of AI also heightens the necessity for robust cloud security. Just as ethical practitioners utilize AI to advance technological frontiers, malicious actors leverage AI to unearth organizational vulnerabilities and devise more sophisticated attacks. Open-source LLM models available online can be utilized to orchestrate intricate attacks and enhance red-team and blue-team exercises. Whether wielded for benevolent or malevolent purposes, AI significantly influences cybersecurity today, necessitating organizations to comprehend its dual implications.

Ways to Enhance Your Security

As digital threats grow increasingly sophisticated and the ramifications of a single breach extend across multiple organizations, the imperative for vigilance, proactive security management, and continuous monitoring within Microsoft 365 has never been more pronounced.

One approach involves scrutinizing access control policies comprehensively. Orphaned elements can serve as goldmines for cybercriminals. For example, a departing employee's access to sales-related data across email, SharePoint, OneDrive, and other platforms must be promptly revoked and monitored to prevent unauthorized access. Regular audits and updates of access control policies for critical data elements are indispensable.
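
As a minimal sketch of that kind of audit, the snippet below reads a hypothetical CSV export of account and access data and flags entries where a disabled or long-dormant account still holds access to a resource. The column names, file name, and 30-day threshold are assumptions for illustration, not a real Microsoft 365 report format.

```python
import csv
from datetime import datetime, timedelta

DORMANT_AFTER = timedelta(days=30)   # illustrative threshold for "stale" access

def flag_orphaned_access(csv_path, today=None):
    """Return rows where a disabled or long-inactive account still has access.
    Expects columns: user, status, last_sign_in (YYYY-MM-DD), resource, access_level."""
    today = today or datetime.now()
    flagged = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            last_seen = datetime.strptime(row["last_sign_in"], "%Y-%m-%d")
            disabled = row["status"].strip().lower() == "disabled"
            dormant = today - last_seen > DORMANT_AFTER
            if (disabled or dormant) and row["access_level"].strip().lower() != "none":
                flagged.append(row)
    return flagged

# Example usage (hypothetical export file):
# for row in flag_orphaned_access("access_export.csv"):
#     print(row["user"], "still has", row["access_level"], "on", row["resource"])
```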

Moreover, reviewing delegations and managing permissions consistently is imperative. Delegating authentication credentials is vital for onboarding new programs or personnel, but these delegations must be regularly assessed and adjusted over time. Similarly, enforcing segregation of duties, and monitoring for deviations from it, is crucial to prevent any single individual from wielding excessive control. Many organizations grapple with excessive permissions or outdated delegations, heightening the risk of cybersecurity breaches. Emphasizing delegation hygiene and segregation of duties fosters accountability and transparency. 

Maintaining oversight over the cloud environment is another imperative. Solutions supporting cloud governance can enforce stringent security policies and streamline management processes. When selecting a cloud governance provider, organizations must exercise discernment as their chosen partner will wield access to their most sensitive assets. Security should be viewed as a layered approach; augmenting layers enhances governance without compromising productivity or workflows.

Given the alarming frequency of security breaches targeting Microsoft 365, it's evident that conventional security paradigms no longer suffice. Gone are the days when basic antivirus software provided ample protection; technological advancements necessitate significant enhancements to our defense mechanisms.

Implementing rigorous security measures, conducting regular audits, and upholding governance can markedly fortify an organization's defense against cyber threats. By remaining vigilant and proactive, it's feasible to mitigate security risks and shield critical data assets from potential breaches before they inflict harm on organizations or their clientele.

Microsoft Source Code Heist: Russian Hackers Escalate Cyberwarfare

 


On Friday, Microsoft provided an update on hacking attempts by a group linked to Russian foreign intelligence. Using data stolen from corporate emails in January, the hackers regained access to Microsoft's systems, raising concerns about the security of the tech giant's products, which are widely used across the national security establishment in the United States. 

The disclosure alarmed analysts, who raised concerns about whether the U.S. government can safely rely on Microsoft's digital services and infrastructure. Microsoft is one of the world's largest software companies and provides systems and services to the government, including cloud computing. 

Microsoft said that in recent weeks the hackers have gained access to its internal systems and source code repositories using information stolen from the company's corporate email systems. Source code is the nuts and bolts of a software program, the underlying instructions that make it work. 

Source code is therefore of great importance to corporations, and to the spies trying to steal it. With access to source code, hackers may be able to carry out follow-on attacks against other systems. In January, Microsoft announced that its cloud-based email system had been breached by these same hackers, days before another big tech company, Hewlett Packard Enterprise, announced that its own cloud-based email system had been breached. 

Although the full scope and purpose of the hacking activity is unclear, experts say the group responsible for the hack has a history of conducting extensive intelligence-gathering campaigns for the Kremlin. According to Redmond, which is examining the extent of the breach, the Russian state-sponsored threat actor may be trying to take advantage of the different types of secrets that it found in its investigation, including emails that were shared between Microsoft and its customers. 

Although it has contacted the affected customers directly, the company did not reveal what the secrets were or the full extent of the compromise, and it is unclear what source code was accessed. Microsoft said it has increased its security investments and noted that the adversary ramped up its password spray attacks more than tenfold in February compared with the amount of activity observed earlier in the year. 
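
Password spraying, mentioned above, involves trying a handful of common passwords against many accounts, which keeps per-account failure counts too low to trip lockouts. The sketch below illustrates a common detection heuristic: group failed sign-ins by source IP and flag sources that touch many distinct accounts with only a few attempts each. The event fields and thresholds are illustrative assumptions, not the schema of any particular sign-in log.

```python
from collections import defaultdict

MIN_DISTINCT_ACCOUNTS = 20     # an IP failing against this many accounts suggests spraying
MAX_ATTEMPTS_PER_ACCOUNT = 3   # sprays keep per-account attempts low to avoid lockouts

def detect_password_spray(failed_logins):
    """failed_logins: iterable of dicts with 'source_ip' and 'account'.
    Returns the source IPs whose failure pattern matches a spray."""
    attempts = defaultdict(lambda: defaultdict(int))
    for event in failed_logins:
        attempts[event["source_ip"]][event["account"]] += 1

    return [ip for ip, per_account in attempts.items()
            if len(per_account) >= MIN_DISTINCT_ACCOUNTS
            and max(per_account.values()) <= MAX_ATTEMPTS_PER_ACCOUNT]
```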

Analysts who track Midnight Blizzard report that the group targets governments, diplomatic agencies, and non-governmental organizations, among others. In its January statement, Microsoft said it believes the hackers may have targeted the company because of its extensive research into Midnight Blizzard's operations. 

Since at least 2021, when the group was found to have been behind a series of cyberattacks that compromised a wide range of U.S. government agencies, Microsoft's threat intelligence team has been researching Nobelium and sharing its findings with the public. According to Microsoft, the persistent attempts to breach the company are a sign that the threat actor has committed significant resources, coordination, and focus to the effort. 

As part of their espionage campaigns, Russian hackers have continued to break into widely used tech companies in the years since the 2020 hack, which US officials and private experts agree is indicative of a persistent, significant commitment to such operations. An official blog post that accompanied the SEC filing on Friday said that the hackers may have gathered an inventory of potential targets, are now planning to attack them, and may have enhanced their ability to do so using the information they stole from Microsoft. 

Microsoft has weathered several high-profile cyberattacks attributed in part to lax cybersecurity operations, including the compromise of its Microsoft 365 (M365) cloud environment by the Chinese threat actor Storm-0558, the PrintNightmare vulnerabilities, the ProxyShell bugs, and the two zero-day Exchange Server vulnerabilities known as ProxyNotShell. 

Microsoft's February Patch Tuesday update addressed an admin-to-kernel exploit in the AppLocker driver that Avast had disclosed, roughly six months after Microsoft accepted Avast's report about the exploit. The North Korean adversary Lazarus Group used the vulnerability to establish a kernel read/write primitive on the operating system and install a rootkit. To address security concerns, the company replaced its long-time chief information security officer, Bret Arsenault, with Igor Tsyganskiy in December 2023.

Microsoft Employee Raises Alarms Over Copilot Designer and Urges Government Intervention

 

Shane Jones, a principal software engineering manager at Microsoft, has sounded the alarm about the safety of Copilot Designer, a generative AI tool introduced by the company in March 2023. 

His concerns have prompted him to submit a letter to both the US Federal Trade Commission (FTC) and Microsoft's board of directors, calling for an investigation into the text-to-image generator. Jones's apprehension revolves around Copilot Designer's unsettling capacity to generate potentially inappropriate images, spanning themes such as explicit content, violence, underage drinking, and drug use, as well as instances of political bias and conspiracy theories. 

Beyond highlighting these concerns, he has emphasized the critical need to educate the public, especially parents and educators, about the associated risks, particularly in educational settings where the tool may be utilized. Despite Jones's persistent efforts over the past three months to address the issue internally at Microsoft, the company has not taken action to remove Copilot Designer from public use or implement adequate safeguards. His recommendations, including the addition of disclosures and adjustments to the product's rating on the Android app store, were not implemented by the tech giant. 

Microsoft responded to the concerns raised by Jones, assuring its commitment to addressing employee concerns within the framework of company policies. The company expressed appreciation for efforts aimed at enhancing the safety of its technology. However, the situation underscores the internal challenges companies may face in balancing innovation with the responsibility of ensuring their technologies are safe and ethical. 

This incident isn't the first time Jones has spoken out about AI safety concerns. Despite facing pressure from Microsoft's legal team, Jones persisted in voicing his concerns, even extending his efforts to communicate with US senators about the broader risks associated with AI safety. The case of Copilot Designer adds to the ongoing scrutiny of AI technologies in the tech industry. Google recently paused access to its image generation feature on Gemini, its competitor to OpenAI's ChatGPT, after facing complaints about historically inaccurate images involving race. 

DeepMind, Google's AI division, reassured users that the feature would be reinstated after addressing the concerns and ensuring responsible use of the technology. As AI technologies become increasingly integrated into various aspects of our lives, incidents like the one involving Copilot Designer highlight the imperative for vigilant oversight and ethical considerations in AI development and deployment. The intersection of innovation and responsible AI use remains a complex landscape that necessitates collaboration between tech companies, regulatory bodies, and stakeholders to ensure the ethical and safe evolution of AI technologies.

Lazarus Group Exploits Microsoft Zero-Day in a Covert Rootkit Assault

 


North Korean government-backed hackers scored a major victory when Microsoft left a zero-day vulnerability unpatched for roughly six months after learning it was being actively exploited. During that window, attackers were able to take advantage of the flaw to gain access to sensitive information. Although Microsoft has since patched the vulnerability, the damage had already been done. 

Researchers from the Czech cybersecurity firm Avast discovered the zero-day vulnerability in AppLocker, and Microsoft patched the flaw at the beginning of this month. AppLocker is a service that allows administrators to control which applications are allowed to run on their systems. 

Lazarus, also tracked as APT38, is a state-run hacking team operated by the North Korean government. It is tasked with cyberespionage, sabotage, and sometimes cybercrime to raise money for the regime. Although Lazarus has operated for many years, some researchers believe it is essentially a collection of subgroups, each running its own campaigns and developing malware tailored to specific targets. 

FudModule itself is not new to Lazarus's toolset; other cybersecurity firms analyzed it as far back as 2022. Essentially, it is a data-only rootkit that runs in user space, using kernel read/write privileges obtained through a driver to alter Windows security mechanisms and hinder the detection of other malicious components by security products. 

After observing the Lazarus attack, Avast developed a proof-of-concept exploit for the vulnerability in August 2023 and sent it to Microsoft. The flaw, now tracked as CVE-2024-21338, was identified in the Lazarus activity last year. Avast reports that Lazarus exploited CVE-2024-21338 to create a kernel read/write primitive in an updated version of its FudModule rootkit, which ESET first documented in late 2022. 

Previous versions of the rootkit performed BYOVD (Bring Your Own Vulnerable Driver) attacks using a vulnerable Dell driver. Avast reported that threat actors had earlier established the admin-to-kernel primitive through these BYOVD techniques, which are noisy and easier to detect. The new zero-day exploit, by contrast, makes establishing a kernel-level read/write primitive considerably quieter. 
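
Because BYOVD depends on a known-vulnerable driver being present, one coarse defensive check is to inventory installed drivers against a blocklist. The sketch below shells out to the built-in Windows `driverquery` tool and compares module names against a small sample list; the list is a placeholder for illustration, and a real check should rely on Microsoft's recommended driver block list or a maintained vulnerable-driver database.

```python
import subprocess

# Placeholder sample only; substitute a maintained vulnerable-driver blocklist.
SUSPECT_DRIVER_NAMES = {"dbutil_2_3", "rtcore64", "gdrv"}

def list_driver_modules():
    """Return lowercase module names reported by the Windows 'driverquery' tool."""
    output = subprocess.run(["driverquery", "/fo", "csv", "/nh"],
                            capture_output=True, text=True).stdout
    modules = []
    for line in output.splitlines():
        if line.startswith('"'):
            modules.append(line.split('","')[0].strip('"').lower())
    return modules

if __name__ == "__main__":
    hits = [name for name in list_driver_modules() if name in SUSPECT_DRIVER_NAMES]
    if hits:
        print("Drivers matching the sample blocklist:", ", ".join(hits))
    else:
        print("No drivers from the sample blocklist were found.")
```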

The root of the issue is a long-standing grey area in Windows security: because Microsoft does not treat administrator-to-kernel vulnerabilities as crossing a security boundary, it reserves the right to patch them at its own discretion. It is also important to remember that, as a result, threat actors with administrative privileges have a path to the Windows kernel. 

This leaves an open space for attackers, who take advantage of any vulnerabilities they find to reach the kernel. Once they have gained kernel-level access to the OS, threat actors can tamper with security software, conceal indicators of infection, and disable kernel-mode telemetry, among other malicious activities. 

Avast, the cybersecurity vendor that discovered the admin-to-kernel exploit for the bug, noted that by weaponizing the kernel flaw, the Lazarus Group could perform direct kernel object manipulation in an updated version of its data-only rootkit FudModule. 

ESET and AhnLab have tracked FudModule since October 2022 and documented its ability to disable the monitoring of security solutions on infected hosts. In earlier campaigns this was achieved through a Bring Your Own Vulnerable Driver (BYOVD) attack, in which an attacker installs a driver with known or unknown flaws to escalate privileges and blind security tooling. 

The latest attack is notable because it goes beyond BYOVD by exploiting a zero-day vulnerability in a driver that is already installed on the target machine: appid.sys, which plays a crucial role in the Windows application control feature called AppLocker. 

In a study published earlier this week, researchers found that Lazarus was spreading malicious open-source software packages through a repository where Python software is hosted, in a campaign aimed directly at software developers. According to their findings, the malicious packages have been downloaded hundreds of times. 

Lazarus has also targeted the South Korean judicial system. A large hack at the Supreme Court of South Korea last year was allegedly carried out by the group, and police confiscated servers from the court in February; it is still being investigated whether, and to what extent, the servers were compromised. 

According to a report by crypto analytics firm Chainalysis, North Korean hackers, including Lazarus, breached more crypto platforms last year than in any previous year, with stolen assets reaching roughly $1 billion.

Microsoft Copilot for Finance: Transforming Financial Workflows with AI Precision

 

In a groundbreaking move, Microsoft has unveiled the public preview for Microsoft Copilot for Finance, a specialized AI assistant catering to the unique needs of finance professionals. This revolutionary AI-powered tool not only automates tedious data tasks but also assists finance teams in navigating the ever-expanding pool of financial data efficiently. 

Microsoft's Corporate Vice President of Business Applications Marketing highlighted the significance of Copilot for Finance, emphasizing that despite the popularity of Enterprise Resource Planning (ERP) systems, Excel remains the go-to platform for many finance professionals. Copilot for Finance is strategically designed to leverage the Excel calculation engine and ERP data, streamlining tasks and enhancing efficiency for finance teams. 

Building upon the foundation laid by Microsoft's Copilot technology released last year, Copilot for Finance takes a leap forward by integrating seamlessly with Microsoft 365 apps like Excel and Outlook. This powerful AI assistant focuses on three critical finance scenarios: audits, collections, and variance analysis. Charles Lamanna, Microsoft’s Corporate Vice President of Business Applications & Platforms, explained that Copilot for Finance represents a paradigm shift in the development of AI assistants. 

Unlike its predecessor, Copilot for Finance is finely tuned to understand the nuances of finance roles, offering targeted recommendations within the Excel environment. The specialization of Copilot for Finance sets it apart from the general Copilot assistant, as it caters specifically to the needs of finance professionals. This focused approach allows the AI assistant to pull data from financial systems, analyze variances, automate collections workflows, and assist with audits—all without requiring users to leave the Excel application. 

Microsoft's strategic move towards role-based AI reflects a broader initiative to gain a competitive edge over rivals. Copilot for Finance has the potential to accelerate impact and reduce financial operation costs for finance professionals across organizations of all sizes. By enabling interoperability between Microsoft 365 and existing data sources, Microsoft aims to provide customers with seamless access to business data in their everyday applications. 

Despite promising significant efficiency gains, the introduction of AI-driven systems like Copilot for Finance raises valid concerns around data privacy, security, and compliance. Microsoft assures users that they have implemented measures to address these concerns, such as leveraging data access permissions and avoiding direct training of models on customer data. 

As Copilot for Finance moves into general availability later this year, Microsoft faces the challenge of maintaining data governance measures while expanding the AI assistant's capabilities. The summer launch target for general availability, as suggested by members of the Copilot for Finance launch team, underscores the urgency and anticipation surrounding this transformative AI tool. 

With over 100,000 organizations already benefiting from Copilot, the rapid adoption of Copilot for Finance could usher in a new era of AI in the enterprise. Microsoft's commitment to refining data governance and addressing user feedback will be pivotal in ensuring the success and competitiveness of Copilot for Finance in the dynamic landscape of AI-powered financial assistance.

Corporate Accountability: Tech Titans Address the Menace of Misleading AI in Elections

 


In an accord announced on Friday, 20 leading technology companies, including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon, and Adobe, pledged to take proactive steps to prevent deceptive uses of artificial intelligence from interfering with elections around the world. 

According to a press release issued by the 20 companies participating in the event, they are committed to “developing tools to detect and address online distributions of artificial intelligence content that is intended to deceive voters.” 

The companies have also committed to educating voters about the use of artificial intelligence and providing transparency in elections around the world. The head of the Munich Security Conference, where the accord was announced, lauded the agreement as a critical step towards improving election integrity, increasing societal resilience, and creating trustworthy technology practices. 

It is expected that in 2024, over 4 billion people will be eligible to cast ballots in over 40 different countries. A growing number of experts are saying that easy-to-use generative AI tools could potentially be used by bad actors in those campaigns to sway votes and influence those elections. 

Generative artificial intelligence (AI) tools let users create images, videos, and audio from simple text prompts, and some of these services lack the security measures needed to prevent users from creating content that depicts politicians or celebrities saying things they have never said or doing things they have never done. 

The tech industry "agreement" targets AI-generated images, video, and audio intended to deceive voters about candidates, election officials, and the voting process. It is important to note, however, that it does not call for an outright ban on such content. 

While the agreement is intended to show unity among platforms with billions of users, it mostly outlines efforts that are already being implemented, such as initiatives to identify and label artificial intelligence-generated content. 

Especially in the upcoming election year, which is going to see millions of people head to the polls in countries all around the world, there is growing concern about how artificial intelligence software could mislead voters and maliciously misrepresent candidates. 

AI-generated audio has already been used to impersonate President Biden ahead of New Hampshire's January primary in an attempt to discourage Democrats from voting, and to purportedly show a leading candidate in Slovakia claiming to have rigged the election last September. 

The agreement, endorsed by a consortium of 20 corporations, encompasses entities involved in the creation and dissemination of AI-generated content, such as OpenAI, Anthropic, and Adobe, among others. Notably, Eleven Labs, whose voice replication technology is suspected to have been utilized in fabricating the false Biden audio, is among the signatories. 

Social media platforms including Meta, TikTok, and X, formerly known as Twitter, have also joined the accord. Nick Clegg, Meta's President of Global Affairs, emphasized the imperative for collective action within the industry, citing the pervasive threat posed by AI. 

The accord delineates a comprehensive set of principles aimed at combating deceptive election-related content, advocating for transparent disclosure of origins and heightened public awareness. Specifically addressing AI-generated audio, video, and imagery, the accord targets content falsifying the appearance, voice, or conduct of political figures, as well as disseminating misinformation about electoral processes. 

Acknowledged as a pivotal stride in fortifying digital communities against detrimental AI content, the accord underscores a collaborative effort complementing individual corporate initiatives. As per the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," signatories commit to developing and deploying technologies to mitigate risks associated with deceptive AI election content, including the potential utilization of open-source solutions where applicable.

 Notably, Adobe, Amazon, Arm, Google, IBM, and Microsoft, alongside others, have lent their support to the accord, as confirmed in the latest statement.

Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

 

In a digital landscape fraught with evolving threats, the marriage of artificial intelligence (AI) and cybercrime has become a potent concern. Recent revelations from Microsoft and OpenAI underscore the alarming trend of malicious actors harnessing advanced language models (LLMs) to bolster their cyber operations. 

The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare. According to Microsoft's latest research, groups like Strontium, also known as APT28 or Fancy Bear, notorious for their role in high-profile breaches including the hacking of Hillary Clinton’s 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies. 

Their utilization spans from deciphering satellite communication protocols to automating technical operations through scripting tasks like file manipulation and data selection. This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas. The Thallium group from North Korea and Iranian hackers of the Curium group have followed suit, utilizing LLMs to bolster their capabilities in researching vulnerabilities, crafting phishing campaigns, and evading detection mechanisms. 

Similarly, Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally. While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape. Swift action to shut down associated accounts and assets coupled with collaborative efforts to share intelligence with the defender community are crucial steps in mitigating the risks posed by AI-enabled cyberattacks. 

The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities. Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example where even short voice samples can be utilized to create convincing impersonations. This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities. 

In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft spearheads efforts to harness AI for defensive purposes. The development of a Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to empower defenders in identifying breaches and navigating the complexities of cybersecurity data. Additionally, Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats. 

The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve. The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats. By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.

Microsoft Copilot: A Visual Revolution in AI Image Editing

 

In a significant and forward-thinking development, Microsoft has recently upgraded its AI-powered coding assistant, Copilot, introducing a groundbreaking feature that extends its capabilities into the realm of AI image editing. This not only marks a substantial expansion of Copilot's functionalities but also brings about a visual overhaul to its interface, signifying a noteworthy stride in the convergence of artificial intelligence and creative processes. 

Microsoft Copilot initially gained prominence for its role in assisting developers with code suggestions. However, it has now transcended its traditional coding domain, venturing into the arena of image editing. Leveraging advanced machine learning algorithms, Copilot can intelligently understand and interpret user inputs, providing real-time suggestions for image editing. This fusion of coding assistance and visual creativity not only showcases the versatility of AI technologies but also points towards an era where these technologies seamlessly integrate into various aspects of digital workflows. 

Accompanying the introduction of AI image editing, Microsoft Copilot's user interface has undergone a substantial visual overhaul. The interface seamlessly integrates both coding and image editing functionalities, offering users a unified and intuitive experience. This revamped design is intended to streamline workflows, allowing users to transition seamlessly between coding tasks and creative endeavours without encountering friction in their digital workspaces. 

The integration of AI image editing within Microsoft Copilot holds the potential to revolutionize the collaborative efforts of developers and designers. With a single tool now offering both coding assistance and visual creativity, there is an opportunity for increased synergy between these traditionally distinct roles. This streamlined workflow could result in more efficient project development, ultimately reducing the gap between the ideation and execution phases of digital projects. 

Furthermore, Microsoft Copilot's foray into image editing emphasizes the growing influence of AI in creative processes. By harnessing machine learning capabilities, Copilot can analyze image contexts and user preferences, providing relevant and context-aware suggestions. This not only accelerates the image editing process but also introduces an element of creativity and inspiration driven by AI algorithms. 

In the ever-evolving landscape of technology, the upgrade to Microsoft Copilot with AI image editing capabilities signifies a significant step forward. As the boundaries between coding and creative tasks blur, this development showcases the transformative potential of artificial intelligence in shaping the future of digital workspaces. Microsoft Copilot stands as a testament to Microsoft's commitment to innovation, highlighting the seamless integration of technology into diverse aspects of digital work.

Microsoft's Super Bowl Pitch: We Are Now an AI Firm

Microsoft made a comeback to the Super Bowl on Sunday with a commercial for its AI-powered chatbot, highlighting the company's resolve to shake off its reputation as a stuffy software developer and refocus its offerings on the potential of artificial intelligence. 

The one-minute ad, which was uploaded to YouTube on Thursday of last week, shows users accessing Copilot, the AI assistant that Microsoft released a year ago, via their smartphones. The app can be seen helping users automate a range of tasks, including generating computer code snippets and creating digital artwork. 

Microsoft's Super Bowl commercial, which marked the company's first appearance in the game in four years, showcased its efforts to reposition itself as an AI-focused company. The IT behemoth invested $1 billion in OpenAI in 2019 and has since put billions more into refining its AI capabilities. The technology has also been incorporated into staples like Microsoft Word, Excel, and Azure. 

The tech giant now wants customers and companies looking for an AI boost to use its services instead of rivals like Google, which on Thursday revealed an update to its AI program. 

Wedbush Securities Analyst Dan Ives told CBS MoneyWatch that the outcome of the AI race will have a significant impact on multinational tech businesses, as the industry is expected to reach $1.3 trillion by 2032. "This is no longer your grandfather's Microsoft … and the Super Bowl is a unique time to further change perceptions," he stated. 

For 30 seconds of airtime during this year's game, advertisers paid over $7 million, with over 100 million viewers predicted. In a blog post last week, Microsoft Consumer Chief Marketing Officer Yusuf Mehdi announced that the Copilot app is receiving an update "coinciding with the launch of our Super Bowl ad." The update includes a "cleaner, sleeker look" and suggested prompts that could help users take advantage of the app's AI capabilities. 

Thus far, Microsoft's strategy has proven successful. Its cloud-based revenue increased by 24% to $33.7 billion in the most recent quarter, aided by the incorporation of AI into its Azure cloud computing service.

Microsoft Introduces PC Cleaner App to Boost PC Performance

In a move to enhance user experience, Microsoft has released its PC Cleaner app, now conveniently available on the Microsoft Store for both Windows 10 and Windows 11 users. Similar to popular third-party tools like CCleaner, this application aims to declutter system folders, potentially boosting your computer's performance.

Developed and tested since 2022 under the name PC Manager, originally intended for the Chinese market, the app is now accessible in more regions, including the United States. While it might not be visible on all Windows 11 devices just yet, an official Microsoft PC Cleaner page assures users that it is on its way.

The PC Cleaner offers various features through a new floating toolbar. Users can expect tools like PC Boost, focusing on eliminating unnecessary processes and temporary files. The Smart Boost option efficiently handles spikes in RAM usage and large temporary files exceeding 1 GB. Another feature, Deep Cleanup, targets older Windows update files, recycle bin items, web cache, and application caches, giving users the flexibility to choose what to keep or remove.

The Process tool provides a comprehensive view of all running processes, allowing users to end any process within PC Cleaner without the need for Task Manager. The Startup feature lets users manage applications that launch at startup, optimising system boot times. The Large Files tool locates sizable files on any drive, streamlining the process compared to manual searches through File Explorer.
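
As a rough illustration of what a "Large Files" style scan conceptually involves, the sketch below walks a drive and reports files above a size threshold, largest first. It is only a sketch under stated assumptions, not Microsoft's implementation; the find_large_files helper and the 1 GB threshold are hypothetical, and the code deletes nothing.

    import os

    def find_large_files(root, min_bytes=1 * 1024**3):
        """Return (size_bytes, path) pairs for files under `root` at least `min_bytes` big."""
        hits = []
        # onerror swallows permission errors so the walk continues past protected folders
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda err: None):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue                       # skip locked or unreadable files
                if size >= min_bytes:
                    hits.append((size, path))
        return sorted(hits, reverse=True)          # biggest files first

    # Example: report files of 1 GB or more on the C: drive
    for size, path in find_large_files("C:\\"):
        print(f"{size / 1024**3:.1f} GB  {path}")

A real cleanup tool would layer safeguards on top of a scan like this, such as excluding system directories, before offering anything for removal.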

Additional tools include Taskbar Repair to revert it to its original state and Restore Default Apps, which restores default app preferences. Notably, Microsoft seems to use the latter feature to encourage users to explore Microsoft apps, such as Edge.

Microsoft has been critical of third-party system cleaner apps in the past, expressing concerns about potential harm to crucial system files. Although it has labelled apps like CCleaner as potentially unwanted programs (PUPs), they remain available for download from the Microsoft Store. With PC Cleaner, however, Microsoft assures users that the application, designed in-house, won't delete necessary system files, presenting a safer alternative to third-party options.

Offering a host of useful tools for free, PC Cleaner aligns with Microsoft's commitment to providing quality applications for Windows users. The app, which matches your Windows theme, is positioned as a secure and reliable choice straight from the Microsoft Store. While third-party apps like CCleaner have faced security concerns in the past, PC Cleaner's direct association with Microsoft gives users a more trustworthy option. An official Microsoft page for PC Cleaner suggests a direct download link will be available soon for those who can't yet find it on the Microsoft Store.

In short, Microsoft's introduction of PC Cleaner is a positive step toward providing users with a reliable, in-house solution for system optimisation. With its user-friendly features and its assurance that crucial system files will not be deleted, PC Cleaner aims to make maintaining PC performance straightforward for Windows users.


AI Takes Center Stage: Microsoft's Bold Move to Unparalleled Scalability

In the world of artificial intelligence, Microsoft is making serious waves with its recent success in deploying the technology at scale, establishing itself as one of the leading players. With a market value estimated at around $3tn, Microsoft's AI capabilities have become the envy of the industry. 

AI holds enormous transformative potential, and Microsoft is leading the way in harnessing it for more efficient and effective everyday life. The company's impressive growth not only demonstrates its own potential but also underscores how significant a role artificial intelligence now plays in our digital environment. 

There is no doubt that artificial intelligence has revolutionized the world of business, transforming everything from healthcare to finance and beyond. Microsoft's determination to change the way we live and work is most evident in its commitment to deploying AI solutions at scale. 

The tech giant holds a large stake in OpenAI, the maker of the ChatGPT bot released in 2022, whose debut set off a wave of optimism about the possibilities the technology could unlock. Even so, OpenAI has not been without controversy. 

The New York Times, an American newspaper, is suing OpenAI for alleged copyright violations in the training of the system. Microsoft is also named as a defendant in the lawsuit, which argues that the firms should be liable for “billions of dollars” in damages to the plaintiff. 

ChatGPT and other large language models (LLMs) "learn" by analysing massive amounts of data sourced from the internet. Alphabet is also keeping a close eye on artificial intelligence, and it updated investors on Tuesday as well. 

In the most recent quarter, Alphabet reported a 13 per cent year-over-year increase in revenue, with profits of nearly $20.7bn. According to Sundar Pichai, the company's CEO, AI investments are also helping to improve Google's search, cloud computing, and YouTube divisions. 

Although both companies have enjoyed gains this year, their workforces have continued to slim down. Google's headcount is down almost 5% from last year, and it announced another round of cuts earlier in the month. 

In the same vein, Microsoft announced plans to eliminate 1,900 jobs in its gaming division, roughly 9% of that unit's staff. The move follows its acquisition of Activision Blizzard, the maker of World of Warcraft and Call of Duty.

Midnight Blizzard: Russian Threat Actors Behind Microsoft Corporate Emails’ Breach


On Friday, Microsoft disclosed that some of its corporate email accounts had been breached and some of its data compromised. The attack was conducted by a Russian state-sponsored hacking group named “Midnight Blizzard.”

The attack was first detected on January 12th, and in its initial investigation Microsoft attributed it to the Russian threat actors also known as Nobelium or APT29.

Microsoft says the threat actors began the attack in November 2023, carrying out a password spray attack to gain access to a legacy non-production test tenant account. 

Password Spray Attack

A password spray attack is a type of brute force attack in which threat actors collect a list of potential login names and then attempt to log in to all of them using a single password. If that password fails, they repeat the process with other passwords until they run out or successfully breach an account. Because each individual account sees only a few attempts, the technique often slips under account-lockout protections.
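
To make the pattern concrete, the sketch below shows one way a defender might flag it in failed-login telemetry: a single source hitting many distinct usernames within a short window. This is an illustrative sketch only; the event fields, thresholds, and the flag_spray_sources helper are assumptions for the example, not Microsoft's detection logic.

    from collections import defaultdict
    from datetime import timedelta

    def flag_spray_sources(failed_logins, window=timedelta(minutes=30), min_distinct_users=20):
        """failed_logins: iterable of (timestamp, source_ip, username) tuples for failed sign-ins."""
        flagged = set()
        by_ip = defaultdict(list)                      # source_ip -> recent (timestamp, username) events
        for ts, ip, user in sorted(failed_logins):     # process events in time order
            bucket = by_ip[ip]
            bucket.append((ts, user))
            while bucket and ts - bucket[0][0] > window:
                bucket.pop(0)                          # drop events outside the sliding window
            if len({u for _, u in bucket}) >= min_distinct_users:
                flagged.add(ip)                        # many distinct usernames from one source: spray-like
        return flagged

In practice, spray campaigns are also spread across many source addresses, so real detections combine signals such as shared user agents and impossible-travel patterns rather than relying on a single IP threshold.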

Since the hackers were able to access the account using a brute force attack, it is clear that it lacked two-factor or multi-factor authentication.

Microsoft claims that after taking control of the "test" account, the Nobelium hackers utilized it to access a "small percentage" of the company's email accounts for more than a month.

It is still unclear why a non-production test account would have the ability to access other accounts in Microsoft's corporate email system unless the threat actors utilized this test account to infiltrate networks and move to accounts with higher permissions.

Apparently, the breached accounts belonged to members of Microsoft’s leadership team and employees in the cybersecurity and legal departments, targeted by the hackers to steal emails and attachments. 

"The investigation indicates they were initially targeting email accounts for information related to Midnight Blizzard itself," the Microsoft Security Response Center shared in a report on the incident.

"We are in the process of notifying employees whose email was accessed."

Microsoft reaffirms that the incident was caused by the brute force password attack rather than by a vulnerability in its products or services.

However, it seems that Microsoft’s poorly managed security configuration played a major role in the success of the breach.

While the investigation is ongoing, Microsoft stated that it will release more information when appropriate.  

Bill Gates Explains How AI will be Transformative in 5 Years


Bill Gates is known to be positive about the future of artificial intelligence; he is now predicting that the technology will be transformative for everyone within the next five years. 

The boom in AI technology has raised concerns over its potential to replace millions of jobs across the world. This week, the International Monetary Fund (IMF) reported that around 40% of all jobs will be affected by the growth of AI. 

While Gates does not dispute the statistics, he believes, and history bears this out, that every new technology brings fear at first and then new opportunities. 

“As we had [with] agricultural productivity in 1900, people were like ‘Hey, what are people going to do?’ In fact, a lot of new things, a lot of new job categories were created and we’re way better off than when everybody was doing farm work,” Gates said. “This will be like that.”

AI, according to Gates, will make everyone's life easier. He specifically mentioned helping doctors with their paperwork, saying that it is "part of the job they don't like, we can make that very efficient," in a Tuesday interview with CNN's Fareed Zakaria.

He adds that since there is not a need for “much new hardware,” accessing AI will be over “the phone or the PC you already have connected over the internet connection you already have.”

Gates believes the improvements in OpenAI’s ChatGPT-4 were “dramatic,” since the AI bot can essentially “read and write,” making it “almost like having a white-collar worker to be a tutor, to give health advice, to help write code, to help with technical support calls.” 

He notes that incorporating new technology into sectors like education and medicine will be “fantastic.”

Microsoft and OpenAI have a multibillion-dollar collaboration. Gates remains one of Microsoft's biggest shareholders.

In his interview with Zakaria at Davos for the World Economic Forum, Bill Gates noted that the objective of the Gates Foundation is “to make sure that the delay between benefitting people in poor countries versus getting to rich countries will make that very short […] After all, the shortages of doctors and teachers is way more acute in Africa than it is in the West.”

However, the IMF takes a more pessimistic view. The group believes that AI has the potential to ‘deepen inequality’ without intervention from policymakers.