GitHub has unveiled a novel AI-driven feature aimed at expediting the resolution of vulnerabilities during the coding process. This new tool, named Code Scanning Autofix, is currently available in public beta and is automatically activated for all private repositories belonging to GitHub Advanced Security (GHAS) customers.
Powered by GitHub Copilot and CodeQL, the feature can handle more than 90% of alert types in popular languages such as JavaScript, TypeScript, Java, and Python.
Once activated, Code Scanning Autofix presents potential solutions that GitHub asserts can resolve more than two-thirds of identified vulnerabilities with minimal manual intervention. According to GitHub's representatives Pierre Tempel and Eric Tooley, upon detecting a vulnerability in a supported language, the tool suggests fixes accompanied by a natural language explanation and a code preview, offering developers the flexibility to accept, modify, or discard the suggestions.
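To make the workflow concrete, here is a hedged sketch of the kind of alert this pipeline handles: a hypothetical Python SQL injection finding of the sort CodeQL flags, paired with the parameterized-query remediation an Autofix-style suggestion would typically accompany with its natural language explanation. The code is illustrative only, not an actual Autofix output.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Flagged pattern: untrusted input concatenated into a SQL string,
    # enabling injection (e.g. username = "x' OR '1'='1")
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Typical suggested remediation: pass the value as a bound parameter
    # so the driver escapes it and the query structure cannot change
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```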
The suggested fixes are not confined to the current file but can encompass modifications across multiple files and project dependencies. This approach holds the promise of substantially reducing the workload of security teams, allowing them to focus on bolstering organizational security rather than grappling with a constant influx of new vulnerabilities introduced during the development phase.
However, it is imperative for developers to independently verify the efficacy of the suggested fixes, as GitHub's AI-powered feature may only partially address security concerns or inadvertently disrupt the intended functionality of the code.
Tempel and Tooley emphasized that Code Scanning Autofix aids in mitigating the accumulation of "application security debt" by simplifying the process of addressing vulnerabilities during development. They likened its impact to GitHub Copilot's ability to alleviate developers from mundane tasks, allowing development teams to reclaim valuable time previously spent on remedial actions.
In the future, GitHub plans to expand language support, with forthcoming updates slated to include compatibility with C# and Go.
For further insights into the GitHub Copilot-powered code scanning autofix tool, interested parties can refer to GitHub's documentation website.
Additionally, the company recently implemented default push protection for all public repositories to prevent inadvertent exposure of sensitive information like access tokens and API keys during code updates.
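Conceptually, push protection screens pushed content against known credential formats before it reaches the repository. The sketch below is a toy illustration of that idea, assuming two simplified token patterns that approximate publicly documented formats; GitHub's actual scanner covers far more providers and is considerably more sophisticated.

```python
import re

# Simplified approximations of well-known credential formats; the real
# scanner covers many more providers and validates matches further.
SECRET_PATTERNS = {
    "GitHub personal access token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_blob(blob: str) -> list[str]:
    """Return the names of any secret formats found in pushed content."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(blob)]

# AWS's documented example key triggers the check; the push would be blocked
if scan_blob('aws_key = "AKIAIOSFODNN7EXAMPLE"'):
    print("Push blocked: potential secret detected")
```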
This move comes in response to a notable issue in 2023, during which GitHub users inadvertently exposed 12.8 million secrets, including authentication tokens and other sensitive credentials, across more than 3 million public repositories. Exposed credentials of this kind have been exploited in several high-impact breaches in recent years, as reported by BleepingComputer.
In a separate incident, a trove of internal Binance material sat exposed in a public GitHub repository. Various technical details were included in the leaked material, among them code relating to Binance's security procedures, including details on multi-factor authentication (MFA) and passwords. A large portion of the exposed code concerned systems identified as "prod," indicating a link to Binance's production environment rather than test or development systems.
The problem came to light on January 5, 2024, when 404 Media contacted Binance to inform the exchange about the compromised data. Binance responded by sending GitHub a copyright takedown request, in which it admitted that internal code in the disclosed material "poses a significant risk" to the exchange and could cause "severe financial harm," as well as possible user confusion or harm.
Even after acknowledging the leak, Binance sought to reassure its user base. According to a spokesperson, the company's security team examined the circumstances and concluded that the leaked code was not similar to the code currently in production. The representative emphasized the protection of users' data and assets and stated that the compromised information posed only a "negligible risk."
The occurrence highlights the significance of strong security procedures in the cryptocurrency sector. Because exchanges manage users' sensitive information and financial assets, they are expected to uphold strict security standards, and the prolonged public exposure of security-related code and internal passwords calls the effectiveness of Binance's protocols into doubt.
The exposed data raises a further level of concern, especially the code relating to security mechanisms such as multi-factor authentication and passwords. Lapses of this kind can have serious repercussions, including the compromise of user funds and accounts, and they draw attention to the continuing difficulty cryptocurrency platforms face in maintaining the integrity and confidentiality of their internal systems.
Mercedes-Benz faces the spotlight as a critical breach comes to light. RedHunt Labs, a cybersecurity firm, discovered a serious vulnerability in Mercedes's digital security, allowing unauthorised entry to confidential internal data. Shubham Mittal, Chief Technology Officer at RedHunt Labs, found an employee's access token exposed on a public GitHub repository during a routine scan in January. This access token, initially meant for secure entry, inadvertently served as the gateway to Mercedes's GitHub Enterprise Server, posing a risk to sensitive source code repositories. The incident reiterates the importance of robust cybersecurity measures and highlights potential risks associated with digital access points.
Mittal found an employee's authentication token, an alternative to passwords, exposed in a public GitHub repository. This token provided unrestricted access to Mercedes's GitHub Enterprise Server, allowing the unauthorised download of private source code repositories. These repositories contained a wealth of intellectual property, including connection strings, cloud access keys, blueprints, design documents, single sign-on passwords, API keys, and other crucial internal details.
The exposed repositories were found to include Microsoft Azure and Amazon Web Services (AWS) keys, a Postgres database, and actual Mercedes source code. Although it remains unclear whether customer data was compromised, the severity of the breach cannot be underestimated.
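To see why a single leaked token amounts to a master key, consider this illustrative sketch. The host and token values are hypothetical placeholders; the call itself is the standard GitHub REST endpoint for listing repositories visible to the authenticated user.

```python
import requests

# Hypothetical placeholder values for illustration only
TOKEN = "ghp_exampleLeakedToken0000000000000000"
API = "https://github.example-corp.com/api/v3"  # GitHub Enterprise Server API root

# Anyone holding the token can enumerate every repository it can see
resp = requests.get(
    f"{API}/user/repos",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
for repo in resp.json():
    print(repo["full_name"])  # private source code is now one GET away
```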
Upon notification from RedHunt Labs, Mercedes responded by revoking the API token and removing the public repository. Katja Liesenfeld, a Mercedes spokesperson, acknowledged the error, stating, "The security of our organisation, products, and services is one of our top priorities." Liesenfeld assured that the company would thoroughly analyse the incident and take appropriate remedial measures.
The incident, which occurred in late September 2023, raises concerns about the potential exposure of the key to third parties. Mercedes has not confirmed if others discovered the exposed key or if the company possesses the technical means to track any unauthorised access to its data repositories.
This incident comes on the heels of a similar security concern with Hyundai's India subsidiary, where a bug exposed customers' personal information. The information included names, mailing addresses, email addresses, and phone numbers of Hyundai Motor India customers who had their vehicles serviced at Hyundai-owned stations across India.
These security lapses highlight the importance of robust cybersecurity measures in an era where digital threats are increasingly sophisticated. Companies must prioritise the safeguarding of sensitive data to protect both their intellectual property and customer information.
As the situation unfolds, Mercedes will undoubtedly face scrutiny over its security protocols, emphasising the need for transparency and diligence in handling such sensitive matters. Consumers are reminded to remain vigilant about the cybersecurity practices of the companies they entrust with their data.
According to a report by Cryptopolitan, the breach happened when malicious code was added to Ledger's Connect Kit library, an essential component that several DeFi protocols rely on to communicate with cryptocurrency hardware wallets. The malicious code compromised the front end of every application that used Connect Kit. Notable protocols affected by the flaw included Sushi, Lido, MetaMask, and Coinbase.
Regarding the incident, Ledger explained that a former employee had fallen victim to a phishing attack, resulting in the unauthorized release of a compromised version of the Ledger Connect Kit. The leaked code revealed the name and email address of the former employee, and the cryptocurrency community initially believed the developer was behind the exploit. Ledger subsequently clarified, however, that the incident was the consequence of the former employee falling for a phishing scheme.
After acknowledging the incident, Ledger identified and removed the exploited version of the software. Despite the swift response, the damage was already done: the compromised software remained live for roughly two hours, during which the threat actors drained funds.
This incident has raised major concerns regarding the security infrastructure of decentralized applications. DeFi protocols frequently rely on code from multiple software providers, including Ledger, which leaves them vulnerable to multiple potential points of failure.
This incident has further highlighted the significance of boosting security protocols across the DeFi ecosystem.
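One commonly recommended safeguard for script libraries delivered through a CDN, as Connect Kit is, is Subresource Integrity (SRI): the embedding page pins a cryptographic hash of the exact script it expects, so a tampered version simply fails to load. Below is a minimal sketch of computing such a pin; the file name is a placeholder.

```python
import base64
import hashlib

def sri_hash(path: str) -> str:
    """Compute a Subresource Integrity value for a pinned script."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Embed in HTML as:
#   <script src="..." integrity="sha384-..." crossorigin="anonymous">
print(sri_hash("connect-kit.min.js"))  # placeholder file name
```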
Those directly affected by the attack included users of revoke.cash, a service normally used to withdraw permissions from DeFi protocols after security breaches; the service itself was compromised. Users trying to protect their assets were unknowingly sent to a fraudulent token drainer, which increased the extent of the theft.
Vulnerabilities in the constantly changing technology landscape present serious risks to the safety of our online lives. A significant Bluetooth security weakness that affects Apple, Linux, and Android devices has recently come to light in the cybersecurity community, potentially putting millions of users at risk of hacking.
Security experts from SkySafe, a renowned cybersecurity firm, delved into the intricacies of the vulnerability and disclosed their findings on GitHub. If successfully exploited, the flaw reportedly allows a nearby attacker to pair an emulated Bluetooth keyboard without user confirmation and inject keystrokes on the target device, prompting urgent attention from device manufacturers and software developers alike.
Apple, a prominent player in the tech industry, was not exempt from the repercussions of this Bluetooth bug. The flaw could potentially enable hackers to hijack Apple devices, raising concerns among millions of iPhone, iPad, and MacBook users. Apple, known for its commitment to user security, has been swift in acknowledging the issue and is actively working on a patch to mitigate the vulnerability.
Linux, an open-source operating system widely used across various platforms, also bore the brunt of this security loophole. With a significant user base relying on Linux for its robustness and versatility, the impact of the Bluetooth flaw extends to diverse systems, emphasizing the urgency of a comprehensive solution.
Android, the dominant mobile operating system, issued a security bulletin addressing the Bluetooth vulnerability. The Android Security Bulletin for December 2023 outlined the potential risks and provided guidance on necessary patches and updates. As the flaw could compromise the security of Android devices, users are strongly advised to implement the recommended measures promptly.
Cybersecurity experts stated, "The discovery of this Bluetooth vulnerability is a stark reminder of the constant vigilance required in the digital age. It underscores the importance of prompt action by manufacturers and users to ensure the security and integrity of personal and sensitive information."
This Bluetooth security issue serves as a grim reminder of the ongoing fight against new cyber threats. As the tech world grapples with its implications, manufacturers, developers, and consumers are working together to identify and fix vulnerabilities quickly, reinforcing the industry's commitment to a secure digital future.
Google has open-sourced RETVec (Resilient and Efficient Text Vectorizer), a multilingual text vectorizer designed to make text classifiers more resilient to abuse. While massive platforms like YouTube and Gmail use text classification models to identify fraud, offensive remarks, and phishing attempts, threat actors are known to create counter-strategies to get around these security mechanisms.
The project description on GitHub reads, "RETVec is trained to be resilient against character-level manipulations including insertion, deletion, typos, homoglyphs, LEET substitution, and more."
"The RETVec model is trained on top of a novel character encoder which can encode all UTF-8 characters and words efficiently."
Adversarial text manipulations of this kind, such as the use of homoglyphs, keyword stuffing, and invisible characters, have been observed in attacks against these Google-run platforms.
With out-of-the-box support for over 100 languages, RETVec aims to help build server-side and on-device text classifiers that are both more robust and more computationally affordable.
In natural language processing (NLP), vectorization is a technique that maps words or phrases from a lexicon to a matching numerical representation for use in sentiment analysis, text classification, and named entity recognition, among other analyses.
Google’s anti-abuse researchers Elie Bursztein and Marina Zhang note in the Google Security blog that, “due to its novel architecture, RETVec works out-of-the-box on every language and all UTF-8 characters without the need for text preprocessing, making it the ideal candidate for on-device, web, and large-scale text classification deployments."
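As a rough intuition for what a character-level encoder buys, consider the toy sketch below. RETVec's real encoder is a small trained model, but even a fixed binary encoding of Unicode code points, a simplification of the idea, shows why no vocabulary or preprocessing is needed: every UTF-8 character, homoglyphs and LEET substitutions included, gets a representation instead of collapsing into an out-of-vocabulary token.

```python
# Toy sketch: encode each character by the bits of its Unicode code point.
# 24 bits comfortably covers the full Unicode range (max 0x10FFFF).
def encode_char(ch: str, bits: int = 24) -> list[int]:
    cp = ord(ch)  # every UTF-8 character has a code point
    return [(cp >> i) & 1 for i in range(bits)]

def encode_text(text: str) -> list[list[int]]:
    return [encode_char(ch) for ch in text]

# "раypal" (with Cyrillic 'р' and 'а' homoglyphs) still encodes cleanly,
# character by character, rather than becoming an unknown word.
print(encode_text("раypal")[0][:8])
```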
Google further notes that incorporating the vectorizer into Gmail has significantly improved spam detection, lifting the detection rate 38% over the baseline and reducing the false positive rate by 19.4%. The vectorizer has also cut the model's Tensor Processing Unit (TPU) usage by 83%.
"Models trained with RETVec exhibit faster inference speed due to its compact representation. Having smaller models reduces computational costs and decreases latency, which is critical for large-scale applications and on-device models," Bursztein and Zhang added.
Spam is the most popular attack vector in the virtual space, used by almost every cybercriminal. Its popularity comes from its convenience: it is omnipresent, cheap, and efficient, enabling cybercriminals to deliver malware and extract sensitive data from targeted systems.
A newly disclosed vulnerability in the KeePass password manager enables an attacker to extract the master password from the target computer's memory in plain text, that is, in unencrypted form. Although it is a fairly easy hack, it carries some unsettling repercussions.
Password managers such as KeePass keep a user's login information encrypted and secure behind a master password. That makes the vault a valuable target for hackers, since the master password unlocks everything within.
According to a report by BleepingComputer, security researcher 'vdohney' found the KeePass vulnerability and posted a proof-of-concept (PoC) tool on GitHub.
The tool can extract the master password in readable, unencrypted form almost in its entirety, missing only the first one or two characters. It is capable of doing this even when KeePass is locked and, possibly, after the app has been completely closed.
All this is because the vulnerability extracts the master password from KeePass’s memory. This can be acquired, as the researcher says, in a number of ways: “It doesn’t matter where the memory comes from — can be the process dump, swap file (pagefile.sys), hibernation file (hiberfil.sys) or RAM dump of the entire system.”
The exploit is possible only because of some custom code KeePass uses. The master password is entered in a custom text box named SecureTextBoxEx. Despite its name, the box turns out not to be all that secure, since each character entered essentially leaves a duplicate of itself in system memory. The PoC tool locates and extracts these residual characters.
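Based on vdohney's public description of the leftover strings (n masking bullets followed by the (n+1)-th typed character), a toy memory-dump scanner might look like the sketch below. This is an illustrative reconstruction of the pattern, not the actual PoC tool; real dumps typically yield several candidates per position.

```python
import re
import sys

BULLET = "\u25cf".encode("utf-16-le")  # KeePass's masking bullet, in UTF-16LE

def candidate_chars(dump: bytes, max_len: int = 32) -> dict[int, set[str]]:
    """Map password positions to candidate characters found in the dump."""
    found: dict[int, set[str]] = {}
    for n in range(1, max_len):
        # n bullets followed by one printable ASCII character (UTF-16LE):
        # the leftover string created when the (n+1)-th character was typed
        pattern = re.escape(BULLET * n) + rb"([\x20-\x7e]\x00)"
        for m in re.finditer(pattern, dump):
            found.setdefault(n + 1, set()).add(m.group(1).decode("utf-16-le"))
    return found

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:  # e.g. a KeePass process memory dump
        dump = f.read()
    print("char 1: unrecoverable")      # the first character leaves no pattern
    for pos, chars in sorted(candidate_chars(dump).items()):
        print(f"char {pos}: {sorted(chars)}")
```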
The one limitation of this attack is that it requires access to the computer from which the master password is to be taken. That is not always a problem, however; as the LastPass case demonstrated, hackers can reach a target's computer by exploiting weak remote access software installed on the device.
If a device is infected with malware, it could likewise be configured to dump KeePass's memory and send it, along with the app's database, back to the hacker's server, giving the threat actor time to extract the master password offline.
Fortunately, the developer of KeePass has promised a fix; one candidate approach is to insert random dummy text into the app's memory to obscure the password. Waiting until June or July 2023 for the official update may be agonizing for anyone concerned about a compromised master password, but the fix is also available in beta form and can be downloaded from the KeePass website.
Meta's LLaMA collection spans language models ranging from 7 billion to 65 billion parameters. In contrast, the GPT-3 model from OpenAI, which served as the basis for ChatGPT, has 175 billion parameters.
Because it trained the models on openly available datasets such as Common Crawl, Wikipedia, and C4, Meta can potentially release the LLaMA models and their weights as open source, marking a breakthrough in a field where Big Tech competitors in the AI race have traditionally kept their most potent technology to themselves.
On that point, project member Guillaume Lample tweeted: "Unlike Chinchilla, PaLM, or GPT-3, we only use datasets publicly available, making our work compatible with open-sourcing and reproducible, while most existing models rely on data which is either not publicly available or undocumented."
Meta refers to its LLaMA models as "foundational models," which indicates that the company intends for the models to serve as the basis for future, more sophisticated AI models built off the technology, the same way OpenAI constructed ChatGPT on the base of GPT-3. The company anticipates using LLaMA to further applications like "question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of present language models" and to aid in natural language research.
While the top-of-the-line LLaMA model (LLaMA-65B, with 65 billion parameters) competes head-to-head with comparable products from rival AI labs DeepMind, Google, and OpenAI, arguably the most intriguing development is the LLaMA-13B model, which can reportedly outperform GPT-3 while running on a single GPU when measured across eight common "common sense reasoning" benchmarks such as BoolQ and PIQA. LLaMA-13B thus opens the door to ChatGPT-like performance on consumer-level hardware in the near future, in contrast to the data-center requirements of GPT-3 derivatives.
In AI, parameter size is significant. A parameter is a variable that a machine-learning model learns from training data and uses to make predictions or classify input. The size of a language model's parameter set strongly affects its performance: larger models can typically handle more challenging tasks and produce more coherent output, but more parameters also consume more memory and more computing resources. A model that can deliver the same results as another with fewer parameters is therefore significantly more efficient.
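A quick back-of-the-envelope calculation shows why this matters for the single-GPU claim. The sketch below assumes 16-bit (2-byte) weights, a common serving precision; actual deployments vary with quantization and runtime overhead.

```python
GIB = 1024 ** 3

def weight_memory_gib(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / GIB

for name, size in [("LLaMA-7B", 7), ("LLaMA-13B", 13),
                   ("LLaMA-65B", 65), ("GPT-3", 175)]:
    print(f"{name}: ~{weight_memory_gib(size):.0f} GiB of weights at 16-bit")

# LLaMA-13B needs roughly 24 GiB, within reach of a single high-end GPU,
# while GPT-3's ~326 GiB demands a multi-GPU, data-center-scale setup.
```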
"I'm now thinking that we will be running language models with a sizable portion of the capabilities of ChatGPT on our own (top of the range) mobile phones and laptops within a year or two," according to Simon Willison, an independent AI researcher in an Mastodon thread analyzing and monitoring the impact of Meta’s new AI models.
Currently, a simplified version of LLaMA is available on GitHub. The full code and weights (the "learned" parameters of the neural network) can be obtained by filling out a form provided by Meta. Meta has not yet announced a wider release of the model and weights.