As artificial intelligence (AI) advances, it accelerates code development at a pace that cybersecurity teams struggle to match. A recent Seemplicity survey of 300 US cybersecurity professionals highlights this growing concern. The survey delves into key topics like vulnerability management, automation, and regulatory compliance, revealing a complex array of challenges and opportunities.
Fragmentation in Security Environments
Organisations now rely on an average of 38 different security product vendors, leading to significant complexity and fragmentation in their security frameworks. This fragmentation is a double-edged sword: while it broadens the arsenal against cyber threats, it also generates an overwhelming amount of noise from security tools. 51% of respondents report being inundated with alerts and notifications, many of which are false positives or non-critical issues. This noise hampers effective vulnerability identification and prioritisation, delaying the response to real threats. Consequently, 85% of cybersecurity professionals find managing this noise a substantial challenge, citing slow risk reduction as its primary consequence.
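To make the noise problem concrete, the sketch below shows one simple way a team might collapse duplicate findings reported by several tools and keep only the severe ones. It is a minimal illustration in Python; the field names, the alert schema, and the severity threshold are assumptions for the example, not any vendor's actual format.

```python
# A minimal sketch of cross-tool alert deduplication and triage.
# Field names (source, cve_id, severity, asset) are illustrative assumptions.
from collections import defaultdict

def triage(alerts, min_severity=7.0):
    """Collapse duplicate findings from multiple tools; keep only severe ones."""
    merged = defaultdict(list)
    for alert in alerts:
        # Alerts from different vendors describing the same CVE on the
        # same asset are treated as one finding.
        merged[(alert["cve_id"], alert["asset"])].append(alert)

    findings = []
    for (cve_id, asset), group in merged.items():
        severity = max(a["severity"] for a in group)
        if severity >= min_severity:
            findings.append({
                "cve_id": cve_id,
                "asset": asset,
                "severity": severity,
                "reported_by": sorted({a["source"] for a in group}),
            })
    # Highest severity first, so real threats surface above the noise.
    return sorted(findings, key=lambda f: -f["severity"])

alerts = [
    {"source": "scanner_a", "cve_id": "CVE-2024-0001", "asset": "web-01", "severity": 9.8},
    {"source": "scanner_b", "cve_id": "CVE-2024-0001", "asset": "web-01", "severity": 9.1},
    {"source": "scanner_a", "cve_id": "CVE-2024-0002", "asset": "db-01", "severity": 3.2},
]
print(triage(alerts))  # the two duplicates collapse; the low-severity alert is dropped
```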
The Rise of Automation in Cybersecurity
In the face of overwhelming security alerts, automation is emerging as a crucial tool for managing cybersecurity vulnerabilities. According to the survey, 95% of organisations have implemented at least one automated method to manage the deluge of alerts. Automation is primarily used in three key areas:
1. Vulnerability Scanning: 65% of participants have adopted automation to enhance the precision and speed of identifying vulnerabilities, significantly streamlining this process.
2. Vulnerability Prioritisation: 53% utilise automation to rank vulnerabilities based on their severity, ensuring that the most critical issues are addressed first.
3. Remediation: 41% of respondents automate the assignment of remediation tasks and the execution of fixes, making these processes more efficient.
Despite these advancements, 44% still rely on manual methods to some extent, highlighting obstacles to complete automation. Nevertheless, 89% of cybersecurity leaders acknowledge that automation has increased efficiency, particularly in accelerating threat response.
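As an illustration of how the prioritisation and remediation steps above can fit together, the following sketch ranks vulnerabilities by CVSS score weighted by asset criticality and hands the top items to a ticketing stub. The weights, the asset table, and the create_ticket helper are hypothetical, shown only to make the workflow concrete; a real pipeline would call a ticketing API.

```python
# A hedged sketch of automated prioritisation and remediation assignment:
# rank by CVSS scaled by asset criticality, then file remediation tasks.
# The criticality weights and create_ticket stub are illustrative assumptions.

ASSET_CRITICALITY = {"payment-gateway": 1.0, "intranet-wiki": 0.3}

def priority(vuln):
    # Weighted score: CVSS (0-10) scaled by how critical the affected asset is.
    return vuln["cvss"] * ASSET_CRITICALITY.get(vuln["asset"], 0.5)

def create_ticket(vuln):
    # Stub: a real pipeline would call a ticketing system's API here.
    print(f"Remediation task: {vuln['cve_id']} on {vuln['asset']} "
          f"(priority {priority(vuln):.1f})")

vulns = [
    {"cve_id": "CVE-2024-1111", "asset": "payment-gateway", "cvss": 8.1},
    {"cve_id": "CVE-2024-2222", "asset": "intranet-wiki", "cvss": 9.8},
]
# Note the effect of context: the 8.1 on a critical asset (priority 8.1)
# outranks the 9.8 on a low-value one (priority 2.9).
for v in sorted(vulns, key=priority, reverse=True):
    create_ticket(v)
```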
AI's Growing Role in Cybersecurity
The survey highlights robust confidence in AI's ability to transform cybersecurity practices. An impressive 85% of organisations intend to increase their AI spending over the next five years. Survey participants expect AI to greatly enhance the early stages of vulnerability management in the following ways:
1. Vulnerability Assessment: 38% of respondents expect AI to boost the precision and effectiveness of identifying vulnerabilities.
2. Vulnerability Prioritisation: 30% view AI as crucial for accurately ranking vulnerabilities based on their severity and urgency.
Additionally, 64% of respondents see AI as a strong asset in combating cyber threats, indicating a high level of optimism about its potential. However, 68% are concerned that incorporating AI into software development will accelerate code production at a pace that outstrips security teams' ability to manage, creating new challenges in vulnerability management.
Views on New SEC Incident Reporting Requirements
The survey also sheds light on perspectives regarding the new SEC incident reporting requirements. Over half of the respondents see these regulations as opportunities to enhance vulnerability management, particularly in improving logging, reporting, and overall security hygiene. Surprisingly, fewer than a quarter of respondents view these requirements as adding bureaucratic burdens.
Trend Towards Continuous Threat Exposure Management (CTEM)
The survey also points to a clear trend: 90% of respondents expect to adopt Continuous Threat Exposure Management (CTEM) programs. Unlike traditional periodic assessments, CTEM provides continuous monitoring and proactive risk management, helping organisations stay ahead of threats by constantly assessing their IT infrastructure for vulnerabilities.
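A minimal sketch of the CTEM idea follows: rather than a quarterly scan, a long-running loop continuously re-assesses assets and surfaces new findings as they appear. The scan_assets and notify functions are hypothetical stubs standing in for real scanner and alerting integrations.

```python
# A toy sketch of continuous exposure management: re-assess on a short
# interval and raise only the findings that are new since the last pass.
# scan_assets and notify are hypothetical stubs, not a real product's API.
import time

def scan_assets():
    # Stub: in practice this would invoke vulnerability scanners,
    # attack-surface discovery, and configuration checks.
    return {"web-01": ["CVE-2024-0001"]}

def notify(asset, findings):
    print(f"[CTEM] new exposure on {asset}: {findings}")

def ctem_loop(interval_seconds=3600):
    known = {}
    while True:  # intended to run as a long-lived service
        for asset, findings in scan_assets().items():
            new = [f for f in findings if f not in known.get(asset, [])]
            if new:
                notify(asset, new)   # surface new exposure immediately
            known[asset] = findings  # remediated findings drop out of the baseline
        time.sleep(interval_seconds)  # hourly cadence rather than quarterly
```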
The Seemplicity survey highlights both the challenges and potential solutions in the evolving field of cybersecurity. As AI accelerates code development, integrating automation and continuous monitoring will be essential to managing the increasing complexity and noise in security environments. Organisations are increasingly recognising the need for more intelligent and efficient methods to stay ahead of cyber threats, signalling a shift towards more proactive and comprehensive cybersecurity strategies.
A recent security incident at OpenAI serves as a reminder that AI companies have become prime targets for hackers. Although the breach, which came to light following comments by former OpenAI employee Leopold Aschenbrenner, appears to have been limited to an employee discussion forum, it underlines the steep value of data these companies hold and the growing threats they face.
The New York Times detailed the hack after Aschenbrenner labelled it a "major security incident" on a podcast. However, anonymous sources within OpenAI clarified that the breach did not extend beyond an employee forum. While this might seem minor compared to a full-scale data leak, even superficial breaches should not be dismissed lightly. Unauthorised access to internal discussions can provide valuable insights and potentially lead to more severe vulnerabilities being exploited.
AI companies like OpenAI are custodians of incredibly valuable data. This includes high-quality training data, bulk user interactions, and customer-specific information. These datasets are crucial for developing advanced models and maintaining competitive edges in the AI ecosystem.
Training data is the cornerstone of AI model development. Companies like OpenAI invest vast amounts of resources to curate and refine these datasets. Contrary to the belief that these are just massive collections of web-scraped data, significant human effort is involved in making this data suitable for training advanced models. The quality of these datasets can impact the performance of AI models, making them highly coveted by competitors and adversaries.
OpenAI has amassed billions of user interactions through its ChatGPT platform. This data provides deep insights into user behaviour and preferences, much more detailed than traditional search engine data. For instance, a conversation about purchasing an air conditioner can reveal preferences, budget considerations, and brand biases, offering invaluable information to marketers and analysts. This treasure trove of data highlights the potential for AI companies to become targets for those seeking to exploit this information for commercial or malicious purposes.
Many organisations use AI tools for various applications, often integrating them with their internal databases. This can range from simple tasks like searching old budget sheets to more sensitive applications involving proprietary software code. The AI providers thus have access to critical business information, making them attractive targets for cyberattacks. Ensuring the security of this data is paramount, but the evolving nature of AI technology means that standard practices are still being established and refined.
AI companies, like other SaaS providers, are capable of implementing robust security measures to protect their data. However, the inherent value of the data they hold means they are under constant threat from hackers. The recent breach at OpenAI, despite being limited, should serve as a warning to all businesses interacting with AI firms. Security in the AI industry is a continuous, evolving challenge, compounded by the very AI technologies these companies develop, which can be used both for defence and attack.
The OpenAI breach, although seemingly minor, highlights the critical need for heightened security in the AI industry. As AI companies continue to amass and utilise vast amounts of valuable data, they will inevitably become more attractive targets for cyberattacks. Businesses must remain vigilant and ensure robust security practices when dealing with AI providers, recognising the gravity of the risks and responsibilities involved.
In early 2023, a hacker breached OpenAI's internal messaging systems, obtaining sensitive details about the company's AI technologies. The incident, initially kept under wraps by OpenAI, was not reported to authorities because it was not considered a threat to national security. The breach was revealed through sources cited by The New York Times, which reported that the hacker accessed discussions in an online forum used by OpenAI employees to discuss their latest technologies.
The breach was disclosed to OpenAI employees during an April 2023 meeting at their San Francisco office, and the board of directors was also informed. According to sources, the hacker did not penetrate the systems where OpenAI develops and stores its artificial intelligence. Consequently, OpenAI executives decided against making the breach public, as no customer or partner information was compromised.
Despite the decision to withhold the information from the public and authorities, the breach sparked concerns among some employees about the potential risks posed by foreign adversaries, particularly China, gaining access to AI technology that could threaten U.S. national security. The incident also brought to light internal disagreements over OpenAI's security measures and the broader implications of their AI technology.
In the aftermath of the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the company's board of directors. In his memo, Aschenbrenner criticised OpenAI's security measures, arguing that the company was not doing enough to protect its secrets from foreign adversaries. He emphasised the need for stronger security to prevent the theft of crucial AI technologies.
Aschenbrenner later claimed that he was dismissed from OpenAI in the spring for leaking information outside the company, which he argued was a politically motivated decision. He hinted at the breach during a recent podcast, but the specific details had not been previously reported.
In response to Aschenbrenner's allegations, OpenAI spokeswoman Liz Bourgeois acknowledged his contributions and concerns but refuted his claims regarding the company's security practices. Bourgeois stated that OpenAI addressed the incident and shared the details with the board before Aschenbrenner joined the company. She emphasised that Aschenbrenner's separation from the company was unrelated to the concerns he raised about security.
While the company deemed the incident not to be a national security threat, the internal debate it sparked highlights the ongoing challenges in safeguarding advanced technological developments from potential threats.
In a recent speech at an AI summit in Switzerland, IMF First Deputy Managing Director Gita Gopinath cautioned that while artificial intelligence (AI) offers numerous benefits, it also poses grave risks that could exacerbate economic downturns. Gopinath emphasised that while discussions around AI have predominantly centred on issues like privacy, security, and misinformation, insufficient attention has been given to how AI might intensify economic recessions.
Historically, companies have continued to invest in automation even during economic downturns. However, Gopinath pointed out that AI could amplify this trend, leading to greater job losses. According to IMF research, in advanced economies, approximately 30% of jobs are at high risk of being replaced by AI, compared to 20% in emerging markets and 18% in low-income countries. This broad scale of potential job losses could result in severe long-term unemployment, particularly if companies opt to automate jobs during economic slowdowns to cut costs.
The financial sector, already a significant adopter of AI and automation, faces unique risks. Gopinath highlighted that the industry is increasingly using complex AI models capable of learning independently. By 2028, robo-advisors are expected to manage over $2 trillion in assets, up from less than $1.5 trillion in 2023. While AI can enhance market efficiency, these sophisticated models might perform poorly in novel economic situations, leading to erratic market behaviour. In a downturn, AI-driven trading could trigger rapid asset sell-offs, causing market instability. The self-reinforcing nature of AI models could exacerbate price declines, resulting in severe asset price collapses.
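The self-reinforcing dynamic Gopinath describes can be illustrated with a toy feedback loop. The simulation below is a deliberately simplified illustration, not an IMF model, and every parameter is an assumption: once a price fall crosses the threshold at which algorithmic traders sell, their selling deepens the fall and triggers further selling.

```python
# A toy illustration (not an IMF model) of self-reinforcing AI-driven selling:
# models reacting to the same price signal deepen the very fall they react to.
def simulate(initial_price=100.0, shock=-0.05, sell_threshold=-0.03,
             impact_per_seller=0.01, traders=10, steps=6):
    price, prev = initial_price * (1 + shock), initial_price
    for step in range(steps):
        ret = (price - prev) / prev
        # All trading models react to the same signal and sell together.
        sellers = traders if ret <= sell_threshold else 0
        prev = price
        price *= 1 - sellers * impact_per_seller  # selling pressure moves the price
        print(f"step {step}: price {price:.2f} ({sellers} sellers)")

simulate()  # a -5% shock cascades; rerun with shock=-0.02 and no cascade starts
```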
AI's integration into supply chain management could also present risks. Businesses increasingly rely on AI to determine inventory levels and production rates, which can enhance efficiency during stable economic periods. However, Gopinath warned that AI models trained on outdated data might make substantial errors during downturns, and the resulting supply chain disruptions could further destabilise the economy.
To mitigate these risks, Gopinath suggested several strategies. One approach is to ensure that tax policies do not disproportionately favour automation over human workers. She also advocated for enhancing education and training programs to help workers adapt to new technologies, along with strengthening social safety nets, such as improving unemployment benefits. Additionally, AI can play a role in mitigating its own risks by assisting in upskilling initiatives, better targeting assistance, and providing early warnings in financial markets.
Gopinath underscored the urgency of addressing these issues, noting that governments, institutions, and policymakers need to act swiftly to regulate AI and prepare for labour market disruptions. Her call to action comes as a reminder that while AI holds great promise, its potential to deepen economic crises must be carefully managed to protect global economic stability.
Introduction:
Phishing attacks have become one of the most prevalent cybersecurity threats, targeting individuals and organizations to steal sensitive information such as login credentials, financial data, and personal information. To combat this growing threat, a comprehensive approach involving the deployment of an anti-phishing product and an efficient take-down strategy is essential.
This case study outlines a generic framework for implementing such measures, with a focus on regulatory requirements mandating the use of locally sourced solutions and ensuring proper validation before take-down actions.
Challenge:
Organizations across various sectors, including finance, healthcare, and e-commerce, face persistent phishing threats that compromise data security and lead to financial losses. The primary challenge is to develop and implement a solution that can detect, prevent, and mitigate phishing attacks effectively, while complying with regulatory requirements to use locally sourced cybersecurity products and ensuring that take-down actions are only executed when the organization itself is being phished or imitated.
Objectives:
1. Develop an advanced anti-phishing product with real-time detection and response capabilities.
2. Establish a rapid and effective take-down process for phishing websites.
3. Ensure the anti-phishing product is sourced from a local provider to meet regulatory requirements.
4. Implement a policy where take-down actions are only taken when the organization itself is phished or imitated.
Solution:
A multi-faceted approach combining technology, processes, and education was adopted to address the phishing threat comprehensively.
1. Anti-Phishing Product Development
An advanced anti-phishing product from a local cybersecurity provider was developed with the following key features (a heuristic sketch of the detection logic follows the list):
- Real-time Monitoring and Detection: Utilizing AI and machine learning algorithms to monitor email traffic, websites, and network activity for phishing indicators.
- Threat Intelligence Integration: Incorporating global threat intelligence feeds to stay updated on new phishing tactics and campaigns.
- Automated Detection of Brand Violations: Implementing capabilities to automatically detect the use of logos, brand names, and other identifiers indicative of phishing activities.
- Automated Response Mechanisms: Implementing automated systems to block phishing emails and malicious websites at the network level, while flagging suspicious sites for further review.
- User Alerts and Guidance: Providing immediate alerts to users when suspicious activities are detected, along with guidance on how to respond.
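To illustrate the detection layer, the sketch below shows the kind of lightweight heuristics such a product might run alongside its machine learning models: lookalike-domain similarity, brand names abused in subdomains, and credential-harvesting keywords. The brand name, keyword list, and score weights are illustrative assumptions, not the product's actual rules.

```python
# A hedged sketch of heuristic phishing-URL scoring. The protected brand,
# keywords, and weights are hypothetical examples.
from difflib import SequenceMatcher
from urllib.parse import urlparse

PROTECTED_BRANDS = ["examplebank"]  # hypothetical protected brand
SUSPICIOUS_WORDS = ["login", "verify", "secure", "update"]

def phishing_score(url):
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    domain = labels[-2] if len(labels) >= 2 else host
    score = 0.0
    for brand in PROTECTED_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and similarity > 0.8:
            score += 0.5  # lookalike domain, e.g. "examp1ebank"
        if brand in host and domain != brand:
            score += 0.4  # brand name abused in a subdomain
    if any(word in url.lower() for word in SUSPICIOUS_WORDS):
        score += 0.2      # credential-harvesting keywords
    return min(score, 1.0)

for u in ["https://examplebank.com/login",
          "https://examp1ebank.com/verify",
          "https://examplebank.secure-update.net/login"]:
    print(u, phishing_score(u))  # the legitimate site scores lowest
```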
2. Phishing Website Take-Down Strategy
We developed a proactive approach to swiftly take down phishing websites, balancing automation with human oversight and validating the phishing activity before take-down (a minimal sketch of this gating policy follows the list):
- Rapid Detection Systems: Leveraging real-time monitoring tools to quickly identify phishing websites, especially those violating brand identities.
- Collaboration with ISPs and Hosting Providers: Establishing partnerships with internet service providers and hosting companies to expedite the take-down process.
- Human Review Process and Validation of Phishing Activity: Ensuring that no site is taken down without a human review to verify the phishing activity, preventing both erroneous take-downs and mistaken rejections.
- Legal Measures: Employing legal actions such as cease-and-desist letters to combat persistent phishing sites.
- Dedicated Incident Response Team: Forming a specialized team to handle take-down requests and ensure timely removal of malicious sites following human verification.
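The human-in-the-loop policy above can be made concrete with a short sketch: automation may queue a suspected site, but nothing is sent to a hosting provider until an analyst validates the phishing activity. The data structure and the take-down stub are hypothetical, shown only to illustrate the gate.

```python
# A minimal sketch of the take-down gate: automation queues, humans approve.
# The dataclass fields and execute_takedown stub are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TakedownRequest:
    url: str
    evidence: list = field(default_factory=list)  # screenshots, brand matches, etc.
    validated: bool = False                       # set only by a human reviewer

def analyst_review(request: TakedownRequest, confirmed_phishing: bool):
    # Human-in-the-loop gate: prevents erroneous take-downs of legitimate sites.
    request.validated = confirmed_phishing

def execute_takedown(request: TakedownRequest):
    if not request.validated:
        raise PermissionError("Take-down blocked: no human validation on record.")
    # Stub: a real system would notify the ISP or hosting provider here.
    print(f"Take-down notice sent for {request.url}")

req = TakedownRequest(url="https://examp1ebank.com/verify",
                      evidence=["brand logo detected", "credential form"])
analyst_review(req, confirmed_phishing=True)
execute_takedown(req)
```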
Results:
1. Reduction in Phishing Incidents: Organizations reported a significant decrease in successful phishing attempts due to the enhanced detection and response capabilities of the locally sourced anti-phishing product.
2. Efficient Phishing Site Take-Downs: The majority of reported phishing websites were taken down within 24 hours, following human review and validation of phishing activity, minimizing the potential impact of phishing attacks.
Conclusion:
The implementation of an advanced, locally sourced anti-phishing product, combined with a robust take-down strategy and comprehensive educational initiatives, significantly enhances the cybersecurity posture of organizations. By adopting a multi-faceted approach that leverages technology, collaborative efforts, and user education, while complying with regulatory requirements to use local solutions and validating phishing activity before take-down actions, organizations can effectively mitigate the risks posed by phishing attacks. This case study underscores the importance of an integrated strategy in which automated systems are complemented by human oversight, protecting against the ever-evolving threat of phishing.
By Suriya Prakash & Sabari Selvan, CySecurity Corp
GitHub has unveiled a novel AI-driven feature aimed at expediting the resolution of vulnerabilities during the coding process. This new tool, named Code Scanning Autofix, is currently available in public beta and is automatically activated for all private repositories belonging to GitHub Advanced Security (GHAS) customers.
Utilizing the capabilities of GitHub Copilot and CodeQL, the feature is adept at handling over 90% of alert types in popular languages such as JavaScript, TypeScript, Java, and Python.
Once activated, Code Scanning Autofix presents potential solutions that GitHub asserts can resolve more than two-thirds of identified vulnerabilities with minimal manual intervention. According to GitHub's representatives Pierre Tempel and Eric Tooley, upon detecting a vulnerability in a supported language, the tool suggests fixes accompanied by a natural language explanation and a code preview, offering developers the flexibility to accept, modify, or discard the suggestions.
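For illustration, here is the kind of change such a fix might propose for a common Python alert; this is a hypothetical example, not an actual Autofix suggestion. CodeQL flags string-formatted queries like the first function as SQL injection, and the remediation binds the value as a query parameter instead.

```python
# An illustrative before/after for a SQL injection alert (CWE-89).
# Not an actual Autofix output; shown to make the workflow concrete.
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged: user input interpolated directly into the SQL string.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_fixed(conn, username):
    # Autofix-style remediation: bind the value as a query parameter.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'alice')")
print(find_user_fixed(conn, "alice"))  # [(1, 'a@example.com')]
```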
The suggested fixes are not confined to the current file but can encompass modifications across multiple files and project dependencies. This approach holds the promise of substantially reducing the workload of security teams, allowing them to focus on bolstering organizational security rather than grappling with a constant influx of new vulnerabilities introduced during the development phase.
However, it is imperative for developers to independently verify the efficacy of the suggested fixes, as GitHub's AI-powered feature may only partially address security concerns or inadvertently disrupt the intended functionality of the code.
Tempel and Tooley emphasized that Code Scanning Autofix aids in mitigating the accumulation of "application security debt" by simplifying the process of addressing vulnerabilities during development. They likened its impact to GitHub Copilot's ability to alleviate developers from mundane tasks, allowing development teams to reclaim valuable time previously spent on remedial actions.
In the future, GitHub plans to expand language support, with forthcoming updates slated to include compatibility with C# and Go.
For further insights into the GitHub Copilot-powered code scanning autofix tool, interested parties can refer to GitHub's documentation website.
Additionally, the company recently implemented default push protection for all public repositories to prevent inadvertent exposure of sensitive information like access tokens and API keys during code updates.
This move comes in response to a notable issue in 2023, during which GitHub users inadvertently exposed 12.8 million secrets, including authentication tokens and other sensitive credentials, across more than 3 million public repositories. Such exposed credentials have been exploited in several high-impact breaches in recent years, as reported by BleepingComputer.
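For context, the sketch below shows the essence of what such scanning looks for: well-known secret formats in content about to be pushed. The two patterns shown, GitHub classic personal access tokens and AWS access key IDs, follow publicly documented formats; a production scanner covers far more providers and validates candidates before blocking a push.

```python
# A hedged sketch of pattern-based secret scanning on outgoing content.
# Only two documented token formats are shown; real scanners cover many more.
import re

SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),       # classic PAT format
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS key ID format
}

def scan_for_secrets(text):
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()[:8] + "..."))  # never log the full secret
    return hits

diff = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"\n'  # AWS's published example key ID
print(scan_for_secrets(diff))  # [('aws_access_key_id', 'AKIAIOSF...')]
```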