
AI and Vulnerability Management: Industry Leaders Show Positive Signs


Positive trend: AI and vulnerability management

We are in a fast-paced industry, and with new technological developments arriving every day, the risk of cyber attacks keeps growing. Defending against such attacks makes cybersecurity paramount.

The latest research into the cybersecurity industry by Seemplicity revealed that 91% of participants say their security budget is increasing this year, underscoring the growing importance of cybersecurity in organizations.

Understanding the report: An insight into industry leaders' mindset

Seemplicity surveyed 300 US cybersecurity experts to understand their views on pressing topics such as automation, AI, regulatory compliance, and vulnerability and exposure management. On average, organizations reported using 38 cybersecurity vendors, highlighting the complexity and fragmentation of their attack surfaces.

This fragmentation leaves 51% of respondents dealing with high levels of noise from their tools, overwhelmed by the stream of notifications, alerts, and findings, much of which carries no actionable signal.

As a result, 85% of respondents struggle to handle this noise. The most troubling challenge reported was slow or delayed risk reduction: the flood of noise slows effective vulnerability identification and therefore delays the response to threats.

Automation and vulnerability management on the rise

97% of respondents cited at least one method for controlling noise, showing both acceptance of the problem and urgency to resolve it. The same share reported at least some automation, hinting at growing recognition of the benefits of automation in vulnerability and exposure management. The trend tells us one thing: adoption is moving in a positive direction.

However, 44% of respondents still rely on manual methods, a sign that a gap to full automation remains.

But the message is loud and clear: automation has improved vulnerability and exposure management efficiency, with 89% of leaders reporting benefits, the top one being a quicker response to emerging threats.

AI: A weapon against cyber threats

The prevailing opinion (64%) that AI will be a key force in fighting cyber threats is a positive sign of its potential to help build robust cybersecurity infrastructure. However, there is also major concern (68%) about the effect of integrating AI into software development on vulnerability and exposure management: AI will increase the pace of code development, and security teams will find it difficult to keep up.

AI's Rapid Code Development Outpaces Security Efforts

 


As artificial intelligence (AI) advances, it accelerates code development at a pace that cybersecurity teams struggle to match. A recent survey by Seemplicity, which included 300 US cybersecurity professionals, highlights this growing concern. The survey delves into key topics like vulnerability management, automation, and regulatory compliance, revealing a complex array of challenges and opportunities.

Fragmentation in Security Environments

Organisations now rely on an average of 38 different security product vendors, leading to significant complexity and fragmentation in their security frameworks. This fragmentation is a double-edged sword. While it broadens the arsenal against cyber threats, it also results in an overwhelming amount of noise from security tools. 51% of respondents report being inundated with alerts and notifications, many of which are false positives or non-critical issues. This noise significantly hampers effective vulnerability identification and prioritisation, causing delays in addressing real threats. Consequently, 85% of cybersecurity professionals find managing this noise to be a substantial challenge, with the primary issue being slow risk reduction.

The Rise of Automation in Cybersecurity

In the face of overwhelming security alerts, automation is emerging as a crucial tool for managing cybersecurity vulnerabilities. According to a survey by Seemplicity, 95% of organizations have implemented at least one automated method to manage the deluge of alerts. Automation is primarily used in three key areas:

1. Vulnerability Scanning: 65% of participants have adopted automation to enhance the precision and speed of identifying vulnerabilities, significantly streamlining this process.

2. Vulnerability Prioritization: 53% utilise automation to rank vulnerabilities based on their severity, ensuring that the most critical issues are addressed first.

3. Remediation: 41% of respondents automate the assignment of remediation tasks and the execution of fixes, making these processes more efficient.

Despite these advancements, 44% still rely on manual methods to some extent, highlighting obstacles to complete automation. Nevertheless, 89% of cybersecurity leaders acknowledge that automation has increased efficiency, particularly in accelerating threat response.
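As a rough illustration of what the automated prioritization described above can look like in practice, the sketch below ranks findings by a blend of CVSS score, exploit availability, and asset exposure. The weights, field names, and sample findings are hypothetical and are not drawn from the survey or any specific product.

```python
# Minimal sketch of automated vulnerability prioritization.
# Field names, weights, and sample findings are illustrative only.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # 0.0 - 10.0 base score
    exploit_available: bool  # public exploit code observed
    asset_exposure: str      # "internet", "internal", or "isolated"

EXPOSURE_WEIGHT = {"internet": 1.0, "internal": 0.6, "isolated": 0.3}

def risk_score(f: Finding) -> float:
    """Blend severity, exploitability, and exposure into one number."""
    score = f.cvss * EXPOSURE_WEIGHT[f.asset_exposure]
    if f.exploit_available:
        score *= 1.5  # bump findings with a known exploit
    return score

findings = [
    Finding("CVE-2024-0001", 9.8, False, "internal"),
    Finding("CVE-2024-0002", 7.5, True, "internet"),
    Finding("CVE-2024-0003", 5.3, False, "isolated"),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.cve_id}: priority {risk_score(f):.1f}")
```

In a real pipeline, the ranked output would feed the remediation-assignment step rather than being printed.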

AI's Growing Role in Cybersecurity

The survey highlights a robust confidence in AI's ability to transform cybersecurity practices. An impressive 85% of organizations intend to increase their AI spending over the next five years. Survey participants expect AI to greatly enhance early stages of managing vulnerabilities in the following ways:

1. Vulnerability Assessment: 38% of respondents expect AI to boost the precision and effectiveness of spotting vulnerabilities.

2. Vulnerability Prioritisation: 30% view AI as crucial for accurately ranking vulnerabilities based on their severity and urgency.

Additionally, 64% of respondents see AI as a strong asset in combating cyber threats, indicating a high level of optimism about its potential. However, 68% are concerned that incorporating AI into software development will accelerate code production at a pace that outstrips security teams' ability to manage, creating new challenges in vulnerability management.


Views on New SEC Incident Reporting Requirements

The survey also sheds light on perspectives regarding the new SEC incident reporting requirements. Over half of the respondents see these regulations as opportunities to enhance vulnerability management, particularly in improving logging, reporting, and overall security hygiene. Surprisingly, fewer than a quarter of respondents view these requirements as adding bureaucratic burdens.

Trend Towards Continuous Threat Exposure Management (CTEM)

One notable trend from the survey is the planned adoption of Continuous Threat Exposure Management (CTEM) programs by 90% of respondents. Unlike traditional periodic assessments, CTEM provides continuous monitoring and proactive risk management, helping organizations stay ahead of threats by constantly assessing their IT infrastructure for vulnerabilities.

The Seemplicity survey highlights both the challenges and potential solutions in the evolving field of cybersecurity. As AI accelerates code development, integrating automation and continuous monitoring will be essential to managing the increasing complexity and noise in security environments. Organizations are increasingly recognizing the need for more intelligent and efficient methods to stay ahead of cyber threats, signaling a shift towards more proactive and comprehensive cybersecurity strategies.

Are We Ready for the Next Wave of Cyber Threats?



In our increasingly digital world, cybersecurity is a growing concern for everyone— from businesses and governments to everyday individuals. As technology advances, it opens up exciting possibilities and creates new, sophisticated cyber threats. Recent high-profile attacks, like those on Ascension and the French government, show just how damaging these threats can be.

Cybercriminals are always finding new ways to exploit weaknesses. According to Cybersecurity Ventures, global cybercrime damages could hit $10.5 trillion a year by 2025. This huge number highlights why strong cybersecurity measures are so important.

One major evolution in cyber threats is seen in ransomware attacks. These attacks used to be about locking up data and demanding a ransom to unlock it. Now, cybercriminals also steal data and threaten to release it publicly, which can disrupt businesses and ruin reputations. For example, in May, the Black Basta group attacked Ascension, the largest non-profit Catholic health system in the U.S., disrupting operations in its 140 hospitals and affecting patient care.

Supply chain attacks are another big concern. These attacks target vulnerabilities in the network of suppliers and partners that businesses rely on. This makes securing the entire supply chain crucial.

Cybercriminals are also using artificial intelligence (AI) to make their attacks more powerful. Examples include DeepLocker, a type of AI-powered malware that stays hidden until it reaches its target, and deepfake scams, where AI creates fake videos or audio to trick people into transferring money. AI-driven malware can change its behaviour to avoid detection, making it even more dangerous.

Distributed denial-of-service (DDoS) attacks are another serious threat. These attacks flood a website or network with so much traffic that it can’t function. In March 2024, a massive DDoS attack targeted over 300 web domains and 177,000 IP addresses linked to the French government, causing major disruptions.

Building a Strong Cybersecurity Defense

To fight these evolving threats, businesses need to build strong cybersecurity defenses. One effective approach is the zero-trust model, which means every access request is verified, no matter where it comes from. Key parts of this model include multi-factor authentication (MFA), which requires more than one form of verification to access systems, and least privilege access, which ensures users only have access to what they need to do their job.
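As a minimal sketch of how those zero-trust ideas fit together, the snippet below checks MFA status and a least-privilege role grant on every request. The roles, permissions, and function names are invented for illustration and do not represent any particular product.

```python
# Illustrative zero-trust access check: verify identity (MFA) and
# least-privilege authorization on every request. Names are hypothetical.
ROLE_PERMISSIONS = {
    "analyst": {"read:alerts"},
    "admin": {"read:alerts", "write:config"},
}

def authorize(user_role: str, mfa_verified: bool, requested_permission: str) -> bool:
    # Zero trust: never assume the network location makes a request safe.
    if not mfa_verified:
        return False  # fail closed if the second factor is missing
    allowed = ROLE_PERMISSIONS.get(user_role, set())
    return requested_permission in allowed  # least privilege: explicit grants only

print(authorize("analyst", True, "read:alerts"))   # True
print(authorize("analyst", True, "write:config"))  # False: not in the role's grants
print(authorize("admin", False, "write:config"))   # False: MFA missing
```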

Advanced monitoring tools are also essential. Security information and event management (SIEM) systems, combined with AI-driven analytics, help detect and respond to threats in real time by providing a comprehensive view of network activities.

Human error is a major vulnerability in cybersecurity, so employee training and awareness are crucial. Regular training programs can help employees recognise and respond to threats like phishing attacks, creating a culture of security awareness.

The Role of AI in Cybersecurity

While AI helps cybercriminals, it also offers powerful tools for defending against cyber threats. AI can analyse vast amounts of data to spot patterns and anomalies that might indicate an attack. It can detect unusual behaviour in networks and help security analysts respond more quickly and efficiently to threats.

AI can also identify and mitigate insider threats by analysing user behaviour and spotting deviations from typical activity patterns. This helps strengthen overall security.
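A heavily simplified version of that behavioural analysis is sketched below: it flags a user whose daily activity deviates sharply from their own historical baseline using a z-score. Real systems use far richer features and models; the data and the 3-sigma threshold here are illustrative only.

```python
# Toy insider-threat signal: flag activity far outside a user's own baseline.
# Data and the 3-sigma threshold are illustrative, not a production rule.
import statistics

def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
    z = (today - mean) / stdev
    return abs(z) > threshold

files_accessed_per_day = [12, 15, 9, 14, 11, 13, 10]  # user's normal week
print(is_anomalous(files_accessed_per_day, 14))   # False: within baseline
print(is_anomalous(files_accessed_per_day, 240))  # True: likely worth review
```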

The future of cybersecurity will involve constant innovation and adaptation to new challenges. AI will play a central role in both defence and predictive analytics, helping foresee and prevent potential threats. Ethical considerations and developing frameworks for responsible AI use will be important.

Businesses need to stay ahead by adopting new technologies and continuously improving their cybersecurity practices. Collaboration between industries and with government agencies will be crucial in creating comprehensive strategies.

Looking to the future, we need to keep an eye on potential threats and innovations. Quantum computing promises new breakthroughs but also poses a threat to current encryption methods. Advances in cryptography will lead to more secure ways to protect data against emerging threats.

As cyber threats evolve, staying informed and adopting best practices are essential. Continuous innovation and strategic planning are key to staying ahead of cybercriminals and protecting critical assets.


Investing in AI? Don’t Forget the Cyber Locks! Advice for VCs


The OpenAI Data Breach: A Wake-Up Call for Seed VCs

Security breaches are increasingly common in the artificial intelligence (AI) and machine learning (ML) industry. However, when a prominent player like OpenAI falls victim to such an incident, it sends shockwaves through the tech community. This blog post delves into the recent OpenAI data breach and explores its impact on seed venture capitalists (VCs).

The Incident

OpenAI, known for its cutting-edge research in AI and its development of powerful language models, recently disclosed a security breach. Hackers gained unauthorized access to some of OpenAI’s internal systems, raising concerns about data privacy and security. While OpenAI assured users that no sensitive information was compromised, the incident highlights the vulnerability of AI companies to cyber threats.

Seed VCs on High Alert

Seed VCs, who invest in early-stage startups, should pay close attention to this breach. Here’s why:

Dependency on AI Companies

Seed VCs often collaborate with AI companies, providing funding and mentorship. As AI technologies become integral to various industries, VCs increasingly invest in startups leveraging AI/ML. The OpenAI breach underscores the need for due diligence when partnering with such firms.

Data Privacy Risks

Startups working with AI models generate and handle vast amounts of data. Seed VCs must assess the data security practices of their portfolio companies. A breach could harm the startup and impact the VC’s reputation and relationships with other investors.

Intellectual Property Concerns

Seed VCs invest in innovative ideas and technologies. If a startup’s IP is compromised due to lax security practices, it affects the VC’s investment. VCs should encourage startups to prioritize security and protect their intellectual assets.

Mitigating Risks: Seed VCs can take proactive steps

1. Due Diligence: Before investing, thoroughly evaluate a startup’s security protocols. Understand how they handle data, who has access, and their response plan in case of a breach.

2. Collaboration with AI Firms: Engage in open conversations with AI companies about security measures. VCs can influence best practices by advocating for robust security standards.

3. Education: Educate portfolio companies about security hygiene. Regular audits and training sessions can help prevent breaches.

The Future of Cybersecurity Jobs in an AI-Driven World

 

Artificial Intelligence (AI) is revolutionizing the cybersecurity landscape, enhancing the capabilities of both cyber attackers and defenders. But a pressing question remains: will AI replace cybersecurity jobs in the future? AI is sparking debates in the cybersecurity community. Is it safe? Does it benefit the good guys or the bad guys more? And crucially, how will it impact jobs in the industry?

Here, we explore what modern AI is, its role in cybersecurity, and its potential effects on your career. Let’s delve into it. 

What is Modern AI? 

Modern AI involves building computer systems that can do tasks usually needing human intelligence. It uses algorithms and trains Large Language Models (LLMs) with lots of data to make accurate decisions. These models connect related topics through artificial neural networks, improving their decision-making through continuous data training. This process is called machine learning or deep learning. AI can now handle tasks like recognizing images, processing language, and learning from feedback in robotics and video games. AI tools are now integrated with complex systems to automate data analysis. This trend began with ChatGPT and has expanded to include AI image generation tools like MidJourney and domain-specific tools like GitHub Copilot. 
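As a tiny, self-contained illustration of the "learning from data" loop described above, the sketch below fits a single linear unit to a handful of points by gradient descent. It is a deliberate toy stand-in for the far larger neural networks behind LLMs.

```python
# Minimal machine-learning loop: fit y = w*x + b to data by gradient descent.
# A toy stand-in for how large models iteratively adjust their parameters.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # roughly y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y              # prediction error
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")     # approaches w=2, b=1
```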

Despite their impressive capabilities, AI tools have limitations.

AI in Cybersecurity

AI is playing a big role in cybersecurity. Here are some key insights from a report called "Turning the Tide," based on interviews with 500 IT leaders:

Job Security Concerns: Only 9% of respondents are confident AI will not replace their jobs in the next decade. Nearly one-third think AI will automate all cybersecurity tasks eventually. 

AI-Enhanced Attacks: Nearly 20% of respondents expect attackers to use AI to improve their strategies by 2025. 

Future Predictions: By 2030, a quarter of IT leaders believe data access will depend on biometric or DNA data, making unauthorized access impossible. Other predictions include less investment in physical property due to remote work, 5G transforming network security, and AI-automated security systems. 

"AI is a useful tool in defending against threats, but its value can only be harnessed with human expertise”, Bharat Mistry from Trend Micro reported. 

AI's Limitations in Cybersecurity 

Despite its potential, AI has several limitations requiring human oversight: 

Lack of Contextual Understanding: AI can analyze large data sets but can't grasp the psychological aspects of cyber defense, like hacker motivations. Human intervention is crucial for complex threats needing deep context. 

Inaccurate Results: AI tools can generate false positives and negatives, wasting resources or missing threats. Humans need to review AI alerts to ensure critical threats are addressed. 

Adversarial Attacks: As AI use grows, attacks against AI models, such as poisoning malware scanners to misidentify threats, will likely increase. Human oversight is essential to counter these manipulations. 

AI Bias: AI systems trained on biased data can produce biased results, affecting cybersecurity. Human oversight is necessary to mitigate biases and ensure accurate defenses. 


As AI evolves, cybersecurity professionals must adapt by continuously learning about AI advancements and their impact on security, developing AI and machine learning skills, enhancing critical thinking and contextual understanding, and collaborating with AI as a tool to augment their capabilities. Effective human-AI collaboration will be crucial for future cybersecurity strategies.

From Hype to Reality: Understanding Abandoned AI Initiatives


A survey discovered that nearly half of all new commercial artificial intelligence projects are abandoned midway.

Navigating the AI Implementation Maze

A recent study by the multinational law firm DLA Piper, which surveyed 600 top executives and decision-makers worldwide, sheds light on the considerable hurdles businesses confront when incorporating AI technologies. 

Despite AI's exciting potential to transform different industries, the path to successful deployment is plagued with challenges. This post looks into these problems and offers expert advice for navigating the complex terrain of AI integration.

Why Half of Business AI Projects Get Abandoned

According to the report, more than 40% of enterprises fear that their basic business models will become obsolete unless they incorporate AI technologies, yet nearly half (48%) of companies that have started AI projects have had to suspend or roll them back. The most cited reasons were worries about data privacy (48%), challenges with data ownership and insufficient legislative frameworks (37%), customer apprehensions (35%), the emergence of new technologies (33%), and staff concerns (29%).

The Hype vs. Reality

1. Unrealistic Expectations

When organizations embark on an AI journey, they often expect immediate miracles. The hype surrounding AI can lead to inflated expectations, especially when executives envision seamless automation and instant ROI. However, building robust AI systems takes time, data, and iterative development. Unrealistic expectations can lead to disappointment and project abandonment.

2. Data Challenges

AI algorithms thrive on data, but data quality and availability remain significant hurdles. Many businesses struggle with fragmented, messy data spread across various silos. Without clean, labeled data, AI models cannot make progress. Additionally, privacy concerns and compliance issues further complicate data acquisition and usage.

The Implementation Pitfalls

1. Lack of Clear Strategy

AI projects often lack a well-defined strategy. Organizations dive into AI without understanding how it aligns with their overall business goals. A clear roadmap, including pilot projects, resource allocation, and risk assessment, is crucial.

2. Talent Shortage

Skilled AI professionals are in high demand, but the supply remains limited. Organizations struggle to find data scientists, machine learning engineers, and AI architects. Without the right talent, projects stall or fail.

3. Change Management

Implementing AI requires organizational change. Employees must adapt to new workflows, tools, and mindsets. Resistance to change can derail projects, leading to abandonment.

From Siri to 5G: AI’s Impact on Telecommunications


The integration of artificial intelligence (AI) has significantly transformed the landscape of mobile phone networks. From optimizing network performance to enhancing user experiences, AI plays a pivotal role in shaping the future of telecommunications. 

In this blog post, we delve into how mobile networks embrace AI and its impact on consumers and network operators.

1. Apple’s AI-Powered Operating System

Apple, a tech giant known for its innovation, recently introduced “Apple Intelligence,” an AI-powered operating system. The goal is to make iPhones more intuitive and efficient by integrating AI capabilities into Siri, the virtual assistant. Users can now perform tasks more quickly, receive personalized recommendations, and interact seamlessly with their devices.

2. Network Optimization and Efficiency

Telecom companies worldwide are leveraging AI to optimize mobile phone networks. Here’s how:

  • Dynamic Frequency Adjustment: Network operators dynamically adjust radio frequencies to optimize service quality. AI algorithms analyze real-time data to allocate frequencies efficiently, ensuring seamless connectivity even during peak usage.
  • Efficient Cell Tower Management: AI helps manage cell towers more effectively. During low-demand periods, operators can power down specific towers, reducing energy consumption without compromising coverage.

3. Fault Localization and Rapid Resolution

AI-driven network monitoring has revolutionized fault localization. For instance:

  • Korea Telecom’s Quick Response: In South Korea, Korea Telecom uses AI algorithms to pinpoint network faults within minutes. This rapid response minimizes service disruptions and enhances customer satisfaction.
  • AT&T’s Predictive Maintenance: AT&T in the United States relies on predictive AI models to anticipate network issues. By identifying potential problems before they escalate, they maintain network stability.

4. AI Digital Twins for Real-Time Monitoring

Network operators like Vodafone create AI digital twins—virtual replicas of real-world equipment such as masts and antennas. These digital twins continuously monitor network performance, identifying anomalies and suggesting preventive measures. As a result, operators can proactively address issues and maintain optimal service levels.

5. Data Explosion and the Role of 5G

The proliferation of AI generates massive data. Consequently, investments in 5G Standalone (SA) networks have surged. Here’s why:

  • Higher Speeds and Capacity: 5G SA networks offer significantly higher speeds and capacity compared to the older 4G system. This is essential for handling the data influx from AI applications.
  • Edge Computing: 5G enables edge computing, where AI processing occurs closer to the user. This reduces latency and enhances real-time applications like autonomous vehicles and augmented reality.

6. Looking Ahead: The Quest for 6G

Despite 5G advancements, experts predict that AI’s demands will eventually outstrip its capabilities. Anticipating this, researchers are already exploring 6G technology, expected around 2028. 6G aims to provide unprecedented speeds, ultra-low latency, and seamless connectivity, further empowering AI-driven applications.

Securing a Dynamic World: The Future of Cybersecurity Operations


Cybersecurity has become a critical concern for organizations worldwide. As threats evolve and technology advances, the role of cybersecurity operations is undergoing significant transformation. Let’s delve into the key aspects of this evolution. 

Today's changing cyber threat landscape presents a tremendous challenge to enterprises worldwide. With the rise of malevolent AI-powered threats and state-sponsored actors, the security sector is at a crossroads.

Threat complexity increases, creating ubiquitous and multifaceted dangers, including sophisticated cyberattacks and internal weaknesses. This environment necessitates novel solutions, encouraging a move from old security paradigms to a more integrated, data-driven approach.

1. Dynamic Threat Landscape

Cyber threats are no longer limited to lone hackers in dark basements. Sophisticated state-sponsored attacks, ransomware gangs, and organized cybercrime syndicates pose substantial risks. The evolving threat landscape demands agility and adaptability from cybersecurity professionals.

2. Remote Work Challenges

The Covid-19 pandemic accelerated the adoption of remote work. While it offers flexibility, it also introduces security challenges. Securing remote endpoints, ensuring secure access, and protecting sensitive data outside the corporate network are top priorities.

3. Ransomware Surge

Ransomware attacks have surged, with costs doubling in 2021. These attacks not only encrypt critical data but also threaten to leak it publicly. Cybersecurity teams must focus on prevention, detection, and incident response to combat this menace.

4. Securing Remote Branches and IoT Devices

Organizations operate across multiple locations, including remote branches. Each branch introduces potential vulnerabilities. Additionally, the proliferation of Internet of Things (IoT) devices adds complexity. Cybersecurity operations must extend their reach to secure these distributed environments effectively.

5. Integrated, Data-Driven Solutions

Traditional security paradigms are shifting. Siloed approaches are giving way to integrated solutions that leverage data analytics, machine learning, and threat intelligence. Security operations centers (SOCs) now rely on real-time data to detect anomalies and respond swiftly.

6. Holistic Approach

Cybersecurity is no longer just about firewalls and antivirus software. A holistic approach involves risk assessment, vulnerability management, identity and access management, and continuous monitoring. Collaboration across IT, development, and business units is essential.

7. AI and Quantum Computing

Innovations like artificial intelligence (AI) and quantum computing are game-changers. AI enhances threat detection, automates routine tasks, and augments human decision-making. Quantum computing promises to revolutionize encryption and decryption methods.

Apple's AI Features Demand More Power: Not All iPhones Make the Cut

 


A large portion of Apple's developer conference on Monday was devoted to infusing artificial intelligence (AI) technology into its software, and some of the features Apple is rumoured to be incorporating are not expected to work on all iPhones. Reading between the lines, it sounds as if Apple is betting that its long-awaited AI features will be enough to make you upgrade your iPhone, especially if the AI requires the latest smartphone. Apple's annual developer conference, WWDC, opened on Monday with the announcement of iOS 18.

According to Bloomberg, the company will release a new version of its artificial intelligence software, dubbed "Apple Intelligence," with features that run directly on the iPhone's processor rather than on cloud servers; in other words, they will be powered by the device itself. Some of the AI services will still rely on cloud-based computing, but many won't. With the iPhone, iOS 18, and Apple's other products and devices all set to be updated, anything short of a full array of AI-based features will likely disappoint developers, industry analysts, and investors.

The company has turned to artificial intelligence (AI) as a way to re-energize its loyal fan base of over 1 billion customers and reverse the decline of its best-selling product in the face of choppy consumer spending and resurgent tech rivals. A key selling point Apple uses to differentiate itself from its competitors is its commitment to privacy. Questions remain about how software chief Craig Federighi will ensure that a user's personal context can be shared across multiple devices belonging to the same user.

However, he said that all data will be processed on-device and never shared across cloud servers. Many see Apple's move as an evolution of the generative AI domain that could encourage enterprise adoption of generative AI by streamlining best practices for AI privacy. Analysts said the software is likely to encourage a cascade of new purchases, as it requires at least an iPhone 15 or 15 Pro to function. Some predict Apple's most significant upgrade cycle since the launch of the iPhone 12 in 2020, when 5G connectivity was part of the device's appeal for consumers.

A note from Apple analyst Ming-Chi Kuo published on Medium claims that the amount of on-board memory in the forthcoming iPhone 16 range, predicted to be 8GB, may not be enough to fully run the large language model (LLM) behind Apple's artificial intelligence (AI). Kuo argues that the iPhone 16's 8GB DRAM limit will likely keep on-device AI from exceeding market expectations, and suggests that eager Apple fans might want to temper their expectations before WWDC this year.

That said, Apple's powerful mobile chips and efficient iOS operating system have delivered market-leading performance on many previous iPhone models regardless of how much RAM was available, so memory has never been much of an issue. With notoriously demanding AI tools such as deep learning, however, the question is whether that still holds.

Several apps are set to gain AI features as part of Apple's AI integration, including Mail, Voice Memos, and Photos, though users will have to opt in to use them. Rumours suggested the company would deliver a series of features designed to simplify everyday tasks, such as summarizing and writing emails and suggesting custom emojis. Bloomberg also reports that Siri will undergo an AI overhaul that lets users carry out more specific tasks within apps, for instance deleting an email inside an app. According to The Information and Bloomberg, Apple has signed a deal with OpenAI to power some features, including a chatbot similar to ChatGPT.

Meta Addresses AI Chatbot's YouTube Training Data Assertion

 


Eventually, artificial intelligence systems like ChatGPT will run out of the tens of trillions of words people have been writing and sharing on the web, the data that keeps making them smarter. In a study released on Thursday by Epoch AI, researchers estimate that tech companies will exhaust the available public training data for AI language models sometime between 2026 and 2032 if current trends in using public data continue.

Meta's AI chatbot, however, is more open about its training data than Meta itself. It is widely known that Meta, formerly known as Facebook, has been trying to move into the generative AI space since last year, aiming to keep up with the public interest sparked by the launch of OpenAI's ChatGPT in late 2022. In April of this year, Meta AI was expanded to include a chat and image generator feature across its apps, including Instagram and WhatsApp. However, little information about how Meta AI was trained has been released to date.

Business Insider asked Meta AI a series of questions about the data it was trained on and how Meta obtained that data. Meta AI told Business Insider that it had been trained on a large dataset of transcriptions from YouTube videos. It also said that Meta has its own web scraper bot, referred to as "MSAE" (Meta Scraping and Extraction), which scrapes a huge amount of information off the web to use for training AI systems. Meta had never previously disclosed this scraper.

YouTube's terms of service do not allow users to collect its data with bots and scrapers, or to use such data without permission, and OpenAI has recently come under scrutiny for purportedly using such data. A Meta spokesperson did not confirm that Meta AI's answers about its scraper and training data were accurate, suggesting instead that the chatbot may simply be wrong.

The spokesperson explained that generative AI requires a large amount of data to be trained effectively, so data from a wide variety of sources is used, including publicly available information online as well as annotated data. Meta AI said that 3.7 million YouTube videos had been transcribed by a third party as part of its initial training, and the chatbot confirmed that it did not use Meta's scraper bot to scrape YouTube videos directly. Asked further about its YouTube training data, Meta AI said that another dataset with transcriptions from 6 million YouTube videos, also compiled by a third party, was part of its training set.

Besides the 1.5 million YouTube transcriptions and subtitles included in its training dataset, the company also added two more sets of YouTube subtitles, one with 2.5 million subtitles and another with 1.5 million, as well as transcriptions from 2,500 YouTube videos showcasing TED Talks. According to Meta AI, all of these datasets were collected and compiled by third parties. Meta's chatbot says the company takes steps to ensure it does not gather copyrighted information, but Meta AI also appears to scrape the web on an ongoing basis.

Across several queries, Meta AI's results cited sources including NBC News, CNN, and The Financial Times, among others, although in most cases it does not include sources for its responses unless specifically asked. According to BI's reporting, new paid deals could give Meta AI access to more training data, which could improve its results in the future. Meta AI also said it abides by the robots.txt protocol, a set of guidelines that website owners can use to ask bots not to scrape pages for AI training.

Meta built the chatbot on its large language model Llama. Although Llama 3 was released in April, around the time Meta AI was expanded, Meta has yet to release an accompanying paper for the new model or disclose its training data beyond a blog post stating that the 15 trillion tokens used to train Llama 3 came from "publicly available sources." Web scrapers can extract almost all content accessible on the web, and they do so effectively with tools such as OpenAI's GPTBot, Google's GoogleBot, and Common Crawl's CCBot.

The content is stored in massive datasets fed into LLMs and often regurgitated by generative AI tools like ChatGPT. Several ongoing lawsuits concern owned and copyrighted content being freely absorbed by the world's biggest tech companies. The US Copyright Office is expected to release new guidance on acceptable uses for AI companies later this year. 


Not a Science Fiction: What NVIDIA CEO Thinks About AI


Jensen Huang, CEO of NVIDIA, highlighted the company's robotics and industrial digitization advances at COMPUTEX 2024 in Taipei. Huang described how manufacturers like Foxconn use NVIDIA technology, such as Omniverse, Isaac, and Metropolis, to create advanced robotic facilities. "Robotics are here. Physical AI is here. This is not science fiction, and it is being used all over Taiwan," Huang explained.

AI Factories and Accelerated Platforms

NVIDIA has stated that the accelerated platforms are now fully functioning. From AI PCs powered by NVIDIA RTX to corporations constructing AI factories with NVIDIA's full-stack computing platform, the future of computing is all about speed and efficiency. "The future of computing is accelerated," Huang said.

Sustainable Computing

NVIDIA is focusing on sustainable computing by integrating GPUs and CPUs to produce up to 100x faster performance with only a threefold increase in power usage. This results in 25 times greater performance per watt than CPUs alone. "Accelerated computing is sustainable computing," Huang explained.

New Semiconductor Roadmap

Huang revealed a new semiconductor release roadmap on a one-year cadence. NVIDIA intends to launch the Rubin platform after the upcoming Blackwell platform, featuring enhanced GPUs, an Arm-based CPU called Vera, and new networking technologies such as NVLink 6. "Our basic philosophy is straightforward: build the entire data centre scale, disaggregate and sell parts to you on a one-year rhythm, and push everything to technology limits," Huang said.

AI-enabled customer devices

NVIDIA's RTX AI PCs are designed to improve user experiences, including over 200 RTX AI laptops and over 500 AI-powered apps and games. The RTX AI Toolkit and new PC-based NIM inference microservices for the NVIDIA ACE digital human platform demonstrate NVIDIA's dedication to AI accessibility. 

Enabling developers with NVIDIA NIM

NVIDIA announced NIM (NVIDIA Inference Microservices) to make it easier for the world's 28 million developers to build generative AI applications. NIM offers models as optimized containers that may be deployed across multiple systems. Partners such as Cadence, Cloudera, and Synopsys are integrating NIM to accelerate generative AI implementations.

Industry Partners and AI Factory Deals

Taiwanese suppliers, such as ASRock Rack, ASUS, and GIGABYTE, use NVIDIA GPUs and networking solutions to develop powerful AI systems. The NVIDIA MGX modular reference design platform is compatible with these systems, allowing for the best performance in AI applications. AMD and Intel have also developed CPU designs that support the MGX architecture.

Next Generation Networking

Huang highlighted plans to release Spectrum-X devices annually to address the demand for high-performance Ethernet networking for artificial intelligence. NVIDIA Spectrum-X, the first Ethernet fabric designed for AI, improves network performance by 1.6x over typical Ethernet fabrics. CoreWeave and GMO Internet Group were among the early adopters.

Windows AI’s Screenshot Feature Labeled a ‘Disaster’ for Security

 


In the last few months, Microsoft has been touting AI PCs. The company recently released a new feature for Windows 11 called "Recall" that can take a screenshot of everything users do and make all their actions searchable, and it claimed that Copilot and Recall activity data would not be remotely accessible to threat actors.

However, security researcher Kevin Beaumont claims the data is stored in a simple SQLite database in plain text. The Recall feature, currently in preview, captures a screen snapshot every few seconds and stores it locally. Although it is intended to give users an easy way to search for and revisit past activity, the feature raises serious security and privacy concerns.

The feature, which tracks every activity on a Windows computer so users can find things later using natural language, has led to Microsoft being accused of shipping a hackable security catastrophe. A white-hat hacker has already developed a tool capable of extracting sensitive data from Recall.

The tool, called TotalRecall, is available on GitHub right now. Recall uses local artificial intelligence models to capture everything users do and see on their computer, letting them search for and retrieve anything in seconds and browse an explorable timeline of their activity.

Microsoft says everything in Recall is kept private and local on the device, and no data is used to train its artificial intelligence models. Yet cybersecurity expert Kevin Beaumont has found that the AI-powered feature has potential security flaws, despite Microsoft's claim that it would be a secure and encrypted experience. Beaumont, who worked for Microsoft in 2020, has been testing Recall for the past week and discovered that the data is stored as plain text in a database.

That means malware controlled by an attacker could easily extract the database and its contents. Beaumont shared a plain-text database as evidence that Recall activity can in fact be exfiltrated, and said he was annoyed that Microsoft had told media outlets this couldn't happen.

The fear is that Recall makes it easier for malware and attackers to steal information from a user's PC: the database is stored locally on the user's computer but is accessible from the AppData folder to anyone with admin rights. InfoStealer trojans that harvest credentials and information from PCs already exist, and hackers distribute this type of malware to steal and sell personal details about individuals.
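To see why plain-text local storage worries researchers, consider the hedged sketch below: any process running with the user's permissions could read such a database in a few lines of code. The path, table, and column names are placeholders, not the actual Recall schema.

```python
# Illustration of the risk only: a local process reading an unencrypted
# SQLite database. The path and schema here are hypothetical placeholders.
import sqlite3
from pathlib import Path

db_path = Path.home() / "AppData" / "Local" / "ExampleRecallStore" / "capture.db"

if db_path.exists():
    conn = sqlite3.connect(db_path)
    # If the data is stored in plain text, a simple query exposes it.
    for row in conn.execute("SELECT timestamp, window_title, text FROM captures LIMIT 5"):
        print(row)
    conn.close()
```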

Because of Recall, threat actors can produce automated scrapes, within seconds, of every webpage a user has ever visited, Beaumont says. Using data from his own Recall database, he has built demonstrations such as uploading personal databases and searching them instantly, but says he has intentionally withheld technical details until Microsoft ships the feature, to give the company time to address the issues.

Microsoft currently plans to enable Recall by default on Copilot Plus computers in the near future, and changes to the Windows setup process are reportedly being discussed. The security researcher demonstrated the risk by uploading a Recall database he created to a website that lets users upload and search databases.

With Windows 11 Recall set to be enabled when setting up a Copilot Plus PC, it can pose a serious privacy concern for end users who are not aware of how the service works. Microsoft is reportedly considering adding an option that lets users opt out of the feature during setup, rather than having to opt in. Besides security researchers, the UK Information Commissioner's Office has also criticised the feature and plans to reach out to Microsoft for further information.

California Advances AI Regulation to Tackle Discrimination and Privacy Concerns

 

California lawmakers are making significant strides in regulating artificial intelligence (AI) technologies, with a series of proposals aimed at addressing discrimination, misinformation, and privacy concerns, and at prohibiting deepfakes in the contexts of elections and pornography; the proposals advanced in the legislature last week.

These proposals must now gain approval from the other legislative chamber before being presented to Governor Gavin Newsom. Experts and lawmakers warn that the United States is falling behind Europe in the race to regulate AI. The rapid development of AI technologies poses significant risks, including potential job losses, the spread of misinformation, privacy violations, and biases in automated systems. 

Governor Newsom has championed California as a frontrunner in both the adoption and regulation of AI. He has outlined plans for the state to deploy generative AI tools to reduce highway congestion, enhance road safety, and provide tax guidance. Concurrently, his administration is exploring new regulations to prevent AI discrimination in hiring practices. Speaking at an AI summit in San Francisco on Wednesday, Newsom revealed that California is considering at least three additional AI tools, including one designed to address homelessness. 

Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit organization that advises lawmakers on technology and privacy issues, said that California's strong privacy laws position it more favorably than other states with significant AI interests, such as New York, for enacting effective regulations. Rice further emphasized that California is well-equipped to lead in the development of impactful AI governance. 

Some companies, including hospitals, are using AI for hiring, housing, and medical decisions with little oversight. The U.S. Equal Employment Opportunity Commission reports that up to 83% of employers use AI in hiring, but the workings of these algorithms are mostly unknown. California is proposing an ambitious measure to regulate these AI models. 

This measure would require companies to disclose their use of AI in decision-making and inform those affected. AI developers would need to regularly check their models for bias. The state attorney general would have the power to investigate discriminatory AI models and issue fines of $10,000 per violation. 

Additionally, a bipartisan coalition aims to prosecute those using AI to create child sexual abuse images, as current laws do not cover AI-generated images that are not of real people. Democratic lawmakers, meanwhile, are supporting a bill to combat election deepfakes, prompted by AI-generated robocalls mimicking President Joe Biden before New Hampshire’s presidential primary.

The proposal would ban deceptive election-related deepfakes in mailers, robocalls, and TV ads 120 days before and 60 days after Election Day. Another proposal would require social media platforms to label any election-related posts created by AI. 

California's proactive stance may pave the way for broader federal regulations to address these emerging challenges.

LAPD Website Unexpectedly Offline; Ransomware Ruled Out, Cause Unclear

 


On Friday afternoon, the Los Angeles Police Department's website went down due to an overload, officials said, despite claims on social media that an online group called Dark Storm was responsible for the outage through a "cyber attack."

However, LAPD Captain Kelly Muniz said there was no evidence to suggest that was true. The LAPD has long planned an upgrade to its website, with security being a major concern. The website contains general information about its bureaus, leaders, crime statistics, and other reports and documents published by the LAPD.


LAPD Media Relations Division Officer Drake Madison said the website stopped working just after 11 a.m. Friday.

As of 4:45 p.m., the site displayed a message stating, "Our services aren't available right now, but we are working to restore them as soon as possible. We look forward to seeing you soon." Madison said the LAPD's website had not suffered a security breach.

The LAPD official further stated that a review indicated the system had been overloaded, causing a "denial of services." The Los Angeles Times also reported that an organization called Dark Storm claimed responsibility for the incident on Telegram as part of its "cyber attack," but Captain Muniz confirmed to the newspaper that the claims were not accurate.

According to the Times report, the LAPD has been planning to upgrade its site, with security among its major concerns, since as far back as 2004. A description of the group claiming responsibility was not available, but an online search suggests it is a hacking group.

Where Hackers Find Your Weak Spots: A Closer Look


Social engineering is one of the most common attack vectors used by cyber criminals to enter companies. These manipulative attacks often occur in four stages: 

  1. Information gathering: collecting details about the target
  2. Relationship building: establishing contact and earning the target's trust
  3. Exploitation: convincing the target to take an action
  4. Execution: using the collected information to launch the attack

Five Intelligence Sources

So, how do attackers collect information about their targets? Cybercriminals can employ five types of intelligence to obtain and analyze information about their targets. They are:

1. OSINT (open-source intelligence)

OSINT is a hacking technique used to gather and evaluate publicly available information about organizations and their employees. 

OSINT technologies can help threat actors learn about their target's IT and security infrastructure, exploitable assets including open ports and email addresses, IP addresses, vulnerabilities in websites, servers, and IoT (Internet of Things) devices, leaked or stolen passwords, and more. Attackers use this information to conduct social engineering assaults.

2. Social media intelligence (SOCMINT)

Although SOCMINT is a subset of OSINT, it is worth mentioning. Most people freely provide personal and professional information about themselves on major social networking sites, including their headshot, interests and hobbies, family, friends, and connections, where they live and work, current job positions, and a variety of other characteristics. 

Attackers can use SOCMINT software like Social Analyzer, WhatsMyName, and NameCheckup.com to filter social media activity and information about individuals to create tailored social engineering frauds. 

3. ADINT (Advertising Intelligence)

Assume you download a free chess app for your phone. A tiny section of the app displays location-based adverts from sponsors and event organizers, informing users about local players, events, and chess meetups. 

When this ad is displayed, the app sends certain information about the user to the advertising exchange service, such as IP addresses, the operating system in use (iOS or Android), the name of the mobile phone carrier, the user's screen resolution, GPS coordinates, etc. 

Ad exchanges typically keep and process this information to serve appropriate adverts depending on user interests, behavior, and geography. Ad exchanges also sell this vital information. 
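The metadata bundled into such an ad request can be pictured as a small structured record like the one below. The exact fields vary by ad network; the names and values here are purely illustrative.

```python
# Illustrative (hypothetical) shape of the metadata an app may attach to an ad request.
ad_request = {
    "device": {
        "os": "Android 14",
        "carrier": "ExampleTel",
        "screen": "1080x2400",
        "ip": "203.0.113.42",              # documentation-range address
    },
    "geo": {"lat": 51.5072, "lon": -0.1276},  # GPS coordinates
    "app": {"name": "FreeChessApp", "placement": "banner_bottom"},
}
print(ad_request["geo"])
```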

4. DARKINT (Dark Web Intelligence)

The Dark Web is a billion-dollar illegal marketplace that trades corporate espionage services, DIY ransomware kits, drugs and weapons, human trafficking, and so on. The Dark Web sells billions of stolen records, including personally identifiable information, healthcare records, financial and transaction data, corporate data, and compromised credentials. 

Threat actors can buy off-the-shelf data and use it for social engineering campaigns. They can even hire professionals to socially engineer people on their behalf or identify hidden vulnerabilities in target businesses. In addition, there are hidden internet forums and instant messaging services (such as Telegram) where people can learn more about possible targets. 

5. AI-INT (artificial intelligence)

In addition to the traditional intelligence disciplines, some analysts now treat AI as an intelligence discipline in its own right. With recent breakthroughs in generative AI technologies, such as Google Gemini and ChatGPT, it's easy to envisage fraudsters using AI tools to collect, ingest, process, and filter information about their targets. 

Threat researchers have already reported the appearance of dangerous AI-based tools on Dark Web forums such as FraudGPT and WormGPT. Such technologies can greatly reduce social engineers' research time while also providing actionable information to help them carry out social engineering projects. 

What Can Businesses Do to Prevent Social Engineering Attacks?

All social engineering assaults are rooted in information and its negligent treatment. Businesses and employees who can limit their information exposure will significantly lessen their vulnerability to social engineering attacks. Here's how.

Monthly training: Use phishing simulators and classroom training to teach employees not to disclose sensitive or personal information about themselves, their families, coworkers, or the organization.

Draft AI-use policies: Make it plain to employees what constitutes acceptable and unacceptable online activity. For example, it is unacceptable to paste company code or private data into ChatGPT, or to respond to strange or questionable queries without sufficient verification.

Utilize the same tools that hackers use: Use the same intelligence sources mentioned above to proactively determine how much information about your firm, its people, and its infrastructure is available online. Create a continuous procedure to decrease this exposure.
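A starting point for that kind of self-assessment can be as simple as checking which guessed hostnames under your own domain resolve publicly, as in the sketch below. The domain and wordlist are placeholders, and it should only be run against infrastructure you own.

```python
# Minimal external-exposure check: which guessed hostnames resolve publicly?
# Run only against domains you own; wordlist and domain are placeholders.
import socket

domain = "example.com"
candidates = ["www", "mail", "vpn", "dev", "staging", "git"]

for name in candidates:
    host = f"{name}.{domain}"
    try:
        ip = socket.gethostbyname(host)
        print(f"{host} -> {ip}  (publicly resolvable)")
    except socket.gaierror:
        pass  # does not resolve; nothing exposed under this name
```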

Good cybersecurity hygiene begins with addressing the fundamental issues. Social engineering and poor decision-making are to blame for 80% to 90% of all cyberattacks. Organizations must prioritize two objectives: limiting information exposure and managing human behavior through training exercises and education. Organizations can dramatically lower their threat exposure and its possible downstream impact by focusing on these two areas.

AI's Role in Averting Future Power Outages

 

Amidst an ever-growing demand for electricity, artificial intelligence (AI) is stepping in to mitigate power disruptions.

Aseef Raihan vividly recalls a chilling night in February 2021 in San Antonio, Texas, during winter storm Uri. As temperatures plunged to -19°C, Texas faced an unprecedented surge in electricity demand to combat the cold. 

However, the state's electricity grid faltered, with frozen wind turbines, snow-covered solar panels, and precautionary shutdowns of nuclear reactors leading to widespread power outages affecting over 4.5 million homes and businesses. Raihan's experience of enduring cold nights without power underscored the vulnerability of our electricity systems.

The incident in Texas highlights a global challenge as countries witness escalating electricity demands due to factors like the rise in electric vehicle usage and increased adoption of home appliances like air conditioners. Simultaneously, many nations are transitioning to renewable energy sources, which pose challenges due to their variable nature. For instance, electricity production from wind and solar sources fluctuates based on weather conditions.

To bolster energy resilience, countries like the UK are considering the construction of additional gas-powered plants. Moreover, integrating large-scale battery storage systems into the grid has emerged as a solution. In Texas, significant strides have been made in this regard, with over five gigawatts of battery storage capacity added within three years following the storm.

However, the effectiveness of these batteries hinges on predicting the optimal times to charge and discharge them. This is where AI steps in. Tech companies like WattTime and Electricity Maps are leveraging AI algorithms to forecast electricity supply and demand patterns, enabling batteries to charge during periods of surplus energy and discharge when demand peaks.
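To make that scheduling logic concrete, here is a minimal, illustrative Python sketch: given an hourly demand forecast, it charges the battery during the lowest-demand hours and discharges it during the peaks. The forecast numbers and battery parameters are invented for illustration and are not drawn from WattTime or Electricity Maps.

```python
# Illustrative sketch: schedule battery charging/discharging from a demand forecast.
# The forecast values and battery parameters below are invented for illustration.

forecast_demand_mw = [31, 28, 26, 25, 27, 33, 40, 47,   # hourly demand forecast (MW)
                      52, 55, 54, 50, 48, 47, 49, 53,
                      58, 62, 60, 55, 48, 42, 37, 33]

battery_hours = 4          # hours of charging the battery can absorb or deliver
hours = range(len(forecast_demand_mw))

# Charge during the lowest-demand (surplus) hours, discharge during the peaks.
charge_hours = set(sorted(hours, key=lambda h: forecast_demand_mw[h])[:battery_hours])
discharge_hours = set(sorted(hours, key=lambda h: forecast_demand_mw[h], reverse=True)[:battery_hours])

for h in hours:
    if h in charge_hours:
        action = "charge"
    elif h in discharge_hours:
        action = "discharge"
    else:
        action = "idle"
    print(f"hour {h:02d}: demand {forecast_demand_mw[h]:>2} MW -> {action}")
```

A real dispatcher would also respect state-of-charge limits, round-trip efficiency, and grid constraints, and its forecasts would come from models trained on weather, price, and consumption data rather than a fixed list.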

Additionally, AI is enhancing the monitoring of electricity infrastructure, with companies like Buzz Solutions employing AI-powered solutions to detect damage and potential hazards such as overgrown vegetation and wildlife intrusion, thus mitigating the risk of power outages and associated hazards like wildfires.

AI Integration in Cybersecurity Challenges

In the ongoing battle against cyber threats, government and corporate leaders are increasingly turning to artificial intelligence (AI) and machine learning (ML) for a stronger defense. However, these organizations face a trio of significant hurdles.

Firstly, the reliance on an average of 45 distinct cybersecurity tools per company presents a complex landscape. This abundance leads to gaps in protection, configuration errors, and a heavy burden of manual labor, making it challenging to maintain robust security measures. 

Additionally, the cybersecurity sector grapples with a shortage of skilled professionals. This scarcity makes it difficult to recruit, train, and retain experts capable of managing the array of security tools effectively. 

Furthermore, valuable data remains trapped within disparate cybersecurity tools, hindering comprehensive risk management. This fragmentation prevents companies from harnessing insights that could enhance their overall cybersecurity posture. 

The key to maximizing AI for cybersecurity lies in platformization, which streamlines integration and interoperability among security solutions. This approach addresses challenges faced by CISOs, such as tool complexity and data fragmentation. 

Platformization: Maximizing AI for Cybersecurity Integration. Explore how platformization revolutionizes cybersecurity by fostering seamless integration and interoperability among various security solutions.

Unified Operations: Enforcing Consistent Policies Across Security Infrastructure. Delve into the benefits of unified management and operations, enabling organizations to establish and enforce policies consistently across their entire security ecosystem.

Enhanced Insights: Contextual Understanding and Real-Time Attack Prevention. Learn how integrating data from diverse sources provides a deeper understanding of security events, facilitating real-time detection and prevention of advanced threats.

Data Integration: Fueling Effective AI with Comprehensive Datasets. Discover the importance of integrating data from multiple sources to empower AI models with comprehensive datasets, enhancing their performance and effectiveness in cybersecurity.

Strategic Alignment: Modernizing Security to Combat Evolving Threats. Examine the imperative for companies to prioritize aligning their security strategies and modernizing legacy systems to effectively mitigate the ever-evolving landscape of cyber threats.

Unveiling Zero-Day Vulnerabilities: AI enhances detection by analyzing code and behavior for key features like API calls and control flow patterns. 

Harnessing Predictive Insights: AI predicts future events by learning from past data, using models like regression or neural networks. 

Empowering User Authentication: AI strengthens authentication by analyzing behavior patterns, using methods like keystroke dynamics, to go beyond passwords. 
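As a rough sketch of the keystroke-dynamics idea, the example below trains an anomaly detector (scikit-learn's IsolationForest) on a user's historical typing-interval data and flags login attempts whose rhythm looks unfamiliar. The timing values are synthetic and the feature set is deliberately simplified; a production system would use richer per-key hold and flight times alongside other signals.

```python
# Rough sketch: keystroke-dynamics check with an anomaly detector.
# Timings are synthetic; a real system would use per-key hold and flight times.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Enrollment: inter-keystroke intervals (seconds) from the legitimate user
# typing their passphrase many times. Shape: (sessions, intervals).
legit_sessions = rng.normal(loc=0.18, scale=0.03, size=(200, 10))

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(legit_sessions)

# New login attempts: one from the real user, one from an impostor typing much faster.
genuine_attempt = rng.normal(loc=0.18, scale=0.03, size=(1, 10))
impostor_attempt = rng.normal(loc=0.09, scale=0.02, size=(1, 10))

for label, attempt in [("genuine", genuine_attempt), ("impostor", impostor_attempt)]:
    # IsolationForest returns 1 for inliers and -1 for outliers.
    verdict = "accept" if detector.predict(attempt)[0] == 1 else "step-up verification"
    print(f"{label} attempt -> {verdict}")
```

In practice such a detector would not replace passwords or MFA; it would trigger additional verification when a session's behavior deviates from the enrolled profile.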

In the world of cybersecurity, we are discovering how AI can help us in many ways, such as quickly spotting unusual activities and stopping new kinds of attacks. However, companies must pair these capabilities with proper training and disciplined processes if they are to keep unusual activity on their networks in check.

Enterprise AI Adoption Raises Cybersecurity Concerns

Enterprises are rapidly embracing Artificial Intelligence (AI) and Machine Learning (ML) tools, with transactions skyrocketing by almost 600% in less than a year, according to a recent report by Zscaler. The surge, from 521 million transactions in April 2023 to 3.1 billion monthly by January 2024, underscores a growing reliance on these technologies. However, heightened security concerns have led to a 577% increase in blocked AI/ML transactions, as organisations grapple with emerging cyber threats.

The report highlights the evolving tactics of cyber attackers, who now exploit AI tools such as large language models (LLMs) to infiltrate organisations covertly. Adversarial AI, a form of AI designed to bypass traditional security measures, poses a particularly stealthy threat.

Concerns about data protection and privacy loom large as enterprises integrate AI/ML tools into their operations. Industries such as healthcare, finance, insurance, services, technology, and manufacturing are at risk, with manufacturing leading in AI traffic generation.

To mitigate risks, many Chief Information Security Officers (CISOs) opt to block a record number of AI/ML transactions, although this approach is seen as a short-term solution. The most commonly blocked AI tools include ChatGPT and OpenAI, while domains like Bing.com and Drift.com are among the most frequently blocked.
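Blocking at this level ultimately comes down to egress policy. The snippet below is a hypothetical, minimal illustration of the kind of check a secure web gateway might perform against an AI-tool denylist; the domain list and decision logic are placeholders for illustration, not any vendor's actual configuration.

```python
# Minimal, hypothetical illustration of an egress denylist check for AI tools.
# The domains and policy below are placeholders, not any vendor's real configuration.
from urllib.parse import urlparse

BLOCKED_AI_DOMAINS = {"chat.openai.com", "openai.com", "bing.com", "drift.com"}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked AI domain (or a subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

for request in ["https://chat.openai.com/backend-api/conversation",
                "https://www.bing.com/chat",
                "https://internal.example.com/report"]:
    print(request, "->", "BLOCK" if is_blocked(request) else "allow")
```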

However, blocking transactions alone may not suffice in the face of evolving cyber threats. Leading cybersecurity vendors are exploring novel approaches to threat detection, leveraging telemetry data and AI capabilities to identify and respond to potential risks more effectively.

CISOs and security teams face a daunting task in defending against AI-driven attacks, necessitating a comprehensive cybersecurity strategy. Balancing productivity and security is crucial, as evidenced by recent incidents like vishing and smishing attacks targeting high-profile executives.

Attackers increasingly leverage AI in ransomware attacks, automating various stages of the attack chain for faster and more targeted strikes. Generative AI, in particular, enables attackers to identify vulnerabilities and exploit them with greater efficiency, posing significant challenges to enterprise security.

Taking into account these advancements, enterprises must prioritise risk management and enhance their cybersecurity posture to combat the dynamic AI threat landscape. Educating board members and implementing robust security measures are essential in safeguarding against AI-driven cyberattacks.

As institutions deal with the complexities of AI adoption, ensuring data privacy, protecting intellectual property, and mitigating the risks associated with AI tools become paramount. By staying vigilant and adopting proactive security measures, enterprises can better defend against the growing threat posed by these cyberattacks.

EU AI Act to Impact US Generative AI Deployments

In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to address the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



Cyber Extortion Stoops Lowest: Fake Attacks, Whistleblowing, Cyber Extortion

Cyber Extortion

Recently, a car rental company in Europe fell victim to a fake cyberattack in which the hacker used ChatGPT to make fabricated data look like a legitimate breach. Why would threat actors claim an attack that never happened? To answer that, we need to understand how the cyber extortion business works.

Mapping the Evolution of Cyber Extortion

Threat actors have been refining their ransomware attacks for years. Traditional ransomware encrypted the victim's data, and after successful encryption, attackers demanded a ransom in exchange for the decryption key. This technique started to fail as businesses learned to restore data from backups.

To counter this, attackers built malware that also compromised backups. Some victims started paying, even though the FBI recommended against it.

The attackers soon realized they would need something foolproof to blackmail victims. They made ransomware that stole data without encryption. Even if victims had backups, attackers could still extort using stolen data, threatening to leak confidential data if the ransom wasn't paid.

Making matters even worse, attackers started "milking" the victims and further profiting from the stolen data, selling it to other threat actors who would launch repeated attacks (double and triple extortion). Even the victims' families and customers weren't safe; attackers went so far as to blackmail plastic surgery patients whose clinics had been breached.

Extortion: Poking and Pressure Tactics

Regulators and law enforcement organizations cannot ignore this when billions of dollars are on the line. The State Department is offering a $10 million reward for information on the leaders of the Hive ransomware group, a bounty reminiscent of a Wild West film.

Regulatory bodies require businesses to disclose all "material" information connected to cyber attacks. Certain regulations must be followed to avoid civil lawsuits, criminal prosecution, hefty fines and penalties, cease-and-desist orders, and the cancellation of securities registration.

Cyber-swatting is another pressure tactic used by ransomware perpetrators. Extortionists have used swatting attacks to threaten hospitals, schools, members of the C-suite, and board members. Artificial intelligence (AI) systems are used to mimic voices and feed law enforcement fictitious reports of a hostage crisis, bomb threat, or other grave emergency, so that heavily armed police, fire, and EMS units are dispatched to the victim's home.

What Businesses Can Do To Reduce The Risk Of Cyberattacks And Ransomware

What was once a straightforward phishing email has developed into a highly skilled cybercrime where extortionists use social engineering to steal data and conduct fraud, espionage, and infiltration. These are some recommended strategies that businesses can use to reduce risks.

1. Educate Staff: It's critical to have a continuous cybersecurity awareness program that informs staff members on the most recent attacks and extortion schemes used by criminals.

2. Pay Attention To The Causes Rather Than The Symptoms: Ransomware is a symptom, not the cause. Examine the methods by which ransomware infiltrated the system. Phishing, social engineering, unpatched software, and compromised credentials can all lead to ransomware.

3. Implement Security Training: Technology and cybersecurity tools by themselves are unable to combat social engineering, which manipulates human nature. Employees can develop a security intuition by participating in hands-on training exercises and using phishing simulation platforms.

4. Use Phishing-Resistant MFA and a Password Manager: Require staff members to create lengthy, intricate passwords. To prevent password reuse, sign up for a paid password manager (not one built into your browser). Use MFA that is resistant to phishing attempts to lower the risk of corporate account takeovers and identity theft.

5. Ensure Employee Preparedness: Employees should be aware of the procedures to follow in the case of a cyberattack, as well as the roles and duties assigned to incident responders and other key players.