GitHub Unveils AI-Driven Tool to Automatically Rectify Code Vulnerabilities

GitHub has unveiled a novel AI-driven feature aimed at expediting the resolution of vulnerabilities during the coding process. This new tool, named Code Scanning Autofix, is currently available in public beta and is automatically activated for all private repositories belonging to GitHub Advanced Security (GHAS) customers.

Utilizing the capabilities of GitHub Copilot and CodeQL, the feature is adept at handling over 90% of alert types in popular languages such as JavaScript, TypeScript, Java, and Python.

Once activated, Code Scanning Autofix presents potential solutions that GitHub asserts can resolve more than two-thirds of identified vulnerabilities with minimal manual intervention. According to GitHub's representatives Pierre Tempel and Eric Tooley, upon detecting a vulnerability in a supported language, the tool suggests fixes accompanied by a natural language explanation and a code preview, offering developers the flexibility to accept, modify, or discard the suggestions.

The suggested fixes are not confined to the current file but can encompass modifications across multiple files and project dependencies. This approach holds the promise of substantially reducing the workload of security teams, allowing them to focus on bolstering organizational security rather than grappling with a constant influx of new vulnerabilities introduced during the development phase.
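
SQL injection is a typical example of the alert classes CodeQL flags in these languages. The toy before/after below (invented for illustration, not an actual Autofix suggestion) shows the shape of fix such a tool proposes:

```python
import sqlite3

# Vulnerable pattern that a code scanner would flag:
# building SQL by string interpolation allows injection.
def find_user_unsafe(conn, username):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"  # flagged: SQL injection
    ).fetchall()

# The kind of fix a scanner would suggest: a parameterized query.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload defeats the unsafe version but not the safe one.
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # matches every row
print(find_user_safe(conn, payload))    # matches nothing
```

The interpolated query collapses to `WHERE name = '' OR '1'='1'`, which is always true; the parameterized version treats the payload as an ordinary string value.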

However, it is imperative for developers to independently verify the efficacy of the suggested fixes, as GitHub's AI-powered feature may only partially address security concerns or inadvertently disrupt the intended functionality of the code.

Tempel and Tooley emphasized that Code Scanning Autofix aids in mitigating the accumulation of "application security debt" by simplifying the process of addressing vulnerabilities during development. They likened its impact to GitHub Copilot's ability to alleviate developers from mundane tasks, allowing development teams to reclaim valuable time previously spent on remedial actions.

In the future, GitHub plans to expand language support, with forthcoming updates slated to include compatibility with C# and Go.

For further insights into the GitHub Copilot-powered code scanning autofix tool, interested parties can refer to GitHub's documentation website.

Additionally, the company recently implemented default push protection for all public repositories to prevent inadvertent exposure of sensitive information like access tokens and API keys during code updates.

This move comes in response to a notable issue in 2023, during which GitHub users inadvertently disclosed 12.8 million authentication and sensitive secrets across more than 3 million public repositories. These exposed credentials have been exploited in several high-impact breaches in recent years, as reported by BleepingComputer.
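
Push protection works by matching committed text against known credential formats. The sketch below shows the idea with two illustrative patterns; these are stand-ins for this example, not GitHub's actual secret-scanning rules, which cover hundreds of token formats:

```python
import re

# Illustrative patterns only -- GitHub's real secret-scanning rules
# cover hundreds of provider-specific token formats.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_for_secrets(text):
    """Return (kind, match) pairs for anything that looks like a credential."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits

# A pushed change containing something shaped like an AWS access key
# would be blocked before it reaches the public repository.
diff = 'AWS_KEY = "AKIAABCDEFGHIJKLMNOP"\nprint("hello")'
print(scan_for_secrets(diff))
```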

ChatGPT's Plug-In Vulnerabilities

ChatGPT, the revolutionary language model developed by OpenAI, has been making waves in the tech world for its impressive capabilities in natural language understanding. However, recent developments have highlighted a significant concern – ChatGPT's plug-in problem, which poses potential cybersecurity risks.

According to cybersecurity experts, the surge in cybercrime and the role of cryptocurrencies in facilitating illegal activities necessitate a crackdown on potential vulnerabilities. A prominent expert emphasized, "As artificial intelligence-based models like ChatGPT become more prevalent, it's essential to address any potential plug-in vulnerabilities to safeguard against cyber threats."

One of the key aspects contributing to this problem is the pluggable architecture that allows third-party developers to create and integrate their custom-built models or plugins with ChatGPT. While this flexibility has enabled rapid advancements in the capabilities of the language model, it also opens avenues for malicious actors to exploit the system.
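
A minimal registry sketch shows why a pluggable design widens the attack surface: any callable a third party registers runs with the host's privileges. This toy is invented for illustration and is not ChatGPT's actual plugin interface:

```python
# Toy plugin registry: illustrates the trust boundary, not OpenAI's API.
PLUGINS = {}

def register(name):
    def decorator(func):
        PLUGINS[name] = func
        return func
    return decorator

@register("summarize")
def summarize(text):
    return text[:20] + "..."

# A third-party plugin executes with the same privileges as the host --
# nothing in this design stops it from reading files or calling out to
# the network, which is why vetting and sandboxing matter.
def dispatch(name, payload):
    handler = PLUGINS.get(name)
    if handler is None:
        raise KeyError(f"unknown plugin: {name}")
    return handler(payload)

print(dispatch("summarize", "ChatGPT plug-ins extend the model with tools"))
```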

To better understand the issue, it's crucial to consider the technology behind ChatGPT and the potential implications of its plug-in capabilities. Blockchain, the foundational technology behind cryptocurrencies, has been gaining attention for its secure and decentralized nature. Blockchain's design ensures that transactions are tamper-resistant and transparent, making it an attractive option for secure data management.

However, the implementation of blockchain in the context of ChatGPT's plug-ins poses unique challenges. Blockchain is resource-intensive and requires a consensus mechanism, which can significantly impact the responsiveness of an AI model. Moreover, the decentralized nature of blockchain may complicate the handling of sensitive data in compliance with privacy regulations.

Experts suggest that addressing the plug-in problem may involve a careful balance between innovation and security. Integrating blockchain-based solutions in a way that doesn't compromise the core functionality of ChatGPT is a complex task that requires collaboration among AI researchers, cybersecurity experts, and blockchain developers.

Furthermore, implementing robust auditing and validation processes for third-party plug-ins is crucial to minimize potential security breaches. OpenAI must rigorously vet and monitor the code submitted by developers to ensure it complies with security standards and does not expose users to undue risks.

OpenAI has already taken measures to address the plug-in challenges. They have instituted an internal review process and are actively working to enhance the security of ChatGPT. Additionally, they are exploring options to leverage blockchain technology for improving the model's transparency and accountability without compromising performance.

Auto-GPT: New autonomous 'AI agents' Can Act Independently & Modify Their Own Code

The next phase of artificial intelligence is here, and it is already causing havoc in the technology sector. The release of Auto-GPT last week, an artificial intelligence program capable of operating autonomously and developing itself over time, has encouraged a proliferation of autonomous "AI agents" that some believe could revolutionize the way we operate and live. 

Unlike current systems such as ChatGPT, which require manual commands for every activity, AI agents can give themselves new tasks to work on with the purpose of achieving a larger goal, and without much human interaction – an unparalleled level of autonomy for AI models such as GPT-4. Experts say it's difficult to predict the technology's future consequences because it's still in its early stages. 

According to Steve Engels, a computer science professor at the University of Toronto who works with generative AI, an AI agent is any artificial intelligence capable of performing a certain function without human intervention.

“The term has been around for decades,” he said. For example, programs that play chess or control video game characters are considered agents because “they have the agency to be able to control some of their own behaviors and explore the environment.”

This latest generation of AI agents is similarly autonomous, but with significantly higher capabilities, thanks to state-of-the-art AI systems like OpenAI's GPT-4 — a massive language model capable of tasks ranging from writing difficult code to creating sonnets to passing the bar exam.

Earlier this month, OpenAI published an API for GPT-4 and its hugely popular chatbot ChatGPT, allowing any third-party developer to integrate the company's technology into their own products. Auto-GPT is one of the most recent products to emerge from the API, and it may be the first example of GPT-4 being allowed to operate fully autonomously.
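
As a rough sketch of what such an integration looks like, the snippet below builds (but does not send) a request to OpenAI's public Chat Completions endpoint; the placeholder API key and prompt are invented for illustration:

```python
import json
import urllib.request

# Sketch of a Chat Completions request. The endpoint and payload shape
# follow OpenAI's public API documentation; the api_key default is a
# placeholder, not a real credential.
def build_chat_request(prompt, model="gpt-4", api_key="OPENAI_API_KEY"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("Name one use for an AI agent.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would require a real API key; a tool like Auto-GPT wraps calls of this shape in a loop.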

What exactly is Auto-GPT and what can it do?

Toran Bruce Richards, the founder and lead developer at video game studio Significant Gravitas Ltd, designed Auto-GPT. Its source code is freely accessible on GitHub, allowing anyone with programming skills to create their own AI agents.

Based on the project's GitHub page, Auto-GPT can browse the internet for "searches and information gathering," make visuals, maintain short-term and long-term memory, and even use text-to-speech to allow the AI to communicate.

Most notably, the program can rewrite and improve on its own code, allowing it to "recursively debug, develop, and self-improve," according to Significant Gravitas. It remains to be seen how effective these self-updates are.

“Auto-GPT is able to actually take those responses and execute them in order to make some larger task happen,” Engels said, including coming up with its own prompts in response to new information.
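
The control flow Engels describes can be sketched as a simple loop in which the model proposes the next task and the harness executes it. Here `fake_model` is a stub standing in for a real GPT-4 call, and the task list is invented for illustration:

```python
# Minimal agent loop in the Auto-GPT style: the model proposes the next
# task, the harness executes it, and the result feeds the next prompt.
def fake_model(goal, history):
    # Stub for an LLM call: returns the next task given what's done so far.
    plan = ["research topic", "draft outline", "write summary"]
    done = len(history)
    return plan[done] if done < len(plan) else "FINISH"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        task = fake_model(goal, history)
        if task == "FINISH":
            break
        result = f"completed: {task}"   # a real agent would run tools here
        history.append((task, result))  # the result feeds the next step
    return history

for task, result in run_agent("write a blog post"):
    print(task, "->", result)
```

The `max_steps` cap matters in practice: without it, an agent that never decides it is finished will loop (and bill API calls) indefinitely.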

Auto-GPT became the #1 trending repository on GitHub almost immediately after its launch, earning over 61,000 stars by Friday night and spawning a slew of offshoots. Over the last week, the program has dominated Twitter's trending tab, with innumerable programmers and entrepreneurs offering their perspectives.

Richards and Significant Gravitas did not respond to the Star's requests for comment prior to publication. Twitter has been flooded with users describing their uses for Auto-GPT, ranging from creating business blueprints to automating to-do lists.

While anyone may use Auto-GPT, it does require some programming skills to set up. Helpfully, users have produced AgentGPT, which brings Auto-GPT into the web browser, allowing anyone to create their own AI agents.

Given the program's skills and affordability, AI agents may eventually replace human positions such as customer service representatives, content writers, and even financial advisors. At the moment the technology has flaws: ChatGPT has been known to fabricate news reports or scientific studies, while Auto-GPT has struggled to stay on task. Still, AI is evolving at a dizzying speed, and it's impossible to predict what will happen next, according to Engels.

“We don’t really know at this point what it’s going to be or even what the next iteration of it is going to look like,” he said. “Things are still very much in the development stage right now.”

US NIST Uncovers Winning Encryption Algorithm for IoT Data Protection

The National Institute of Standards and Technology (NIST) has declared that ASCON has won the "lightweight cryptography" programme, which seeks the best algorithm to protect small IoT (Internet of Things) devices with limited hardware resources. Small IoT devices are becoming progressively popular and ubiquitous, being used in wearable technology, "smart home" applications, and so on. 

Yet they are still used to store and handle sensitive personal information such as health records and financial details, which makes a standard for data encryption critical to securing people's data. The weak chips inside these devices, however, call for an algorithm that can provide robust encryption while using very little computational power.

Kerry McKay, a computer scientist at NIST, stated: "The world is moving toward using small devices for lots of tasks ranging from sensing to identification to machine control, and because these small devices have limited resources, they need security that has a compact implementation. These algorithms should cover most devices that have these sorts of resource constraints."

ASCON was chosen as the best of 57 proposals submitted to NIST after several rounds of security analysis by leading cryptographers, implementation and benchmarking results, and workshop feedback. The programme began in 2019 and ran for four years.

As per NIST, all ten finalists demonstrated exceptional performance that exceeded the set standards without raising security concerns, making the final selection extremely difficult. ASCON was eventually chosen as the winner due to its flexibility, its family of seven variants, its energy efficiency, its speed on slow hardware, and its low overhead for short messages.

The algorithm has also withstood the test of time: it was designed in 2014 by a team of cryptographers from Graz University of Technology, Infineon Technologies, Lamarr Security Research, and Radboud University, and it won the CAESAR cryptographic competition's "lightweight encryption" category in 2019.

AEAD (Authenticated Encryption with Associated Data) and hashing are two of ASCON's native features highlighted in NIST's announcement. AEAD is an encryption mode that combines symmetric encryption and MAC (message authentication code) to prevent unauthorized access or tampering with transmitted or stored data.
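
To illustrate what AEAD guarantees, here is a toy encrypt-then-MAC construction in Python. It is a teaching sketch built from stdlib hashing primitives, not ASCON and not production-grade cryptography:

```python
import hashlib
import hmac
import os

# Toy encrypt-then-MAC AEAD: confidentiality from a hash-derived
# keystream, integrity from an HMAC tag over ciphertext plus associated
# data. A teaching sketch only -- not ASCON, not production crypto.
def _keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext, associated_data=b""):
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + associated_data + ct, hashlib.sha256).digest()
    return ct, tag

def open_(key, nonce, ct, tag, associated_data=b""):
    expected = hmac.new(key, nonce + associated_data + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: data was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, nonce = os.urandom(16), os.urandom(16)
ct, tag = seal(key, nonce, b"heart-rate: 72", b"device-id: 42")
print(open_(key, nonce, ct, tag, b"device-id: 42"))
```

Flipping a single bit of the ciphertext, nonce, or associated data changes the expected tag, so `open_` rejects the message rather than returning corrupted plaintext.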

Hashing is a data integrity verification mechanism that generates a fixed-length string of characters (a hash) that differs for distinct inputs, allowing two endpoints in a data exchange to verify that a message has not been tampered with. NIST continues to recommend AES for AEAD and SHA-256 for hashing; however, these are too demanding for smaller, weaker devices.
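
The hashing side is simple to demonstrate: both endpoints compute a digest of the message, and any modification changes the hash. The sketch below uses SHA-256, the hash NIST continues to recommend for more capable devices:

```python
import hashlib

# Integrity check via hashing: sender and receiver compare digests of
# the message; any modification produces a different hash.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

message = b"firmware v1.2"
sent_hash = digest(message)

print(digest(b"firmware v1.2") == sent_hash)  # intact message verifies
print(digest(b"firmware v1.3") == sent_hash)  # any change is detected
```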

Despite its lightweight nature, NIST claims that ASCON is powerful enough to offer some resistance to attacks from powerful quantum computers at its standard 128-bit key size. This is not, however, the goal or purpose of this standard, and lightweight cryptography algorithms should only be used to protect ephemeral secrets.

The National Institute of Standards and Technology (NIST) treats post-quantum cryptography as a distinct challenge, with a separate programme for developing quantum-resistant standards, and the effort has already produced results.

More information on ASCON can be found on the algorithm's website or in the technical paper submitted to NIST in May 2021.

GitHub Introduces Private Flaw Reporting to Secure Software Supply Chain

GitHub, a Microsoft-owned code hosting platform, has announced the launch of a direct channel for security researchers to report vulnerabilities in public repositories that opt in. The new private vulnerability reporting capability allows repository administrators to enable security researchers to report any vulnerabilities found in their code directly to them.

Some repositories may include instructions on how to contact the maintainers for vulnerability reporting, but for those that do not, researchers frequently report issues publicly. Whether the researcher discloses the vulnerability through social media or by creating a public issue, this route can make vulnerability details public before the maintainers have a chance to fix them.

To avoid such situations, GitHub has implemented private reporting, which allows researchers to contact repository maintainers who are willing to enroll directly. If the functionality is enabled, the reporting security researchers are given a simple form to fill out with information about the identified problem.

According to GitHub, "anyone with admin access to a public repository can enable and disable private vulnerability reporting for the repository." When a vulnerability is reported, the repository maintainer is notified and can either accept or reject the report or ask additional questions about the issue.

According to GitHub, the benefits of the new capability include the ability to discuss vulnerability details privately, to receive reports directly on the same platform where the issue is discussed and addressed, and to initiate the advisory report, along with a lower risk of being contacted publicly.

Private vulnerability reporting can be enabled from the repository's main page's 'Settings' section, in the 'Security' section of the sidebar, under 'Code security and analysis.' Once the functionality is enabled, security researchers can submit reports by clicking on a new 'Report a vulnerability' button on the repository's 'Advisories' page.

The private vulnerability reporting was announced at the GitHub Universe 2022 global developer event, along with the general availability of CodeQL support for Ruby, a new security risk and coverage view for GitHub Enterprise users, and funding for open-source developers.

The platform will provide a $20,000 incentive to 20 developers who maintain open-source repositories through the new GitHub Accelerator initiative, while the new $10 million M12 GitHub Fund will support future open-source companies.