
Dangers of Adopting Unsanctioned SaaS Applications

If you have been on a Zoom call recently, you may have noticed that the latest client update quietly added a sleek little app-store sidebar to the right side of the session screen. With the click of a button, and without even pausing the Zoom session, this feature lets any business user inside your company connect the software-as-a-service (SaaS) applications displayed in the sidebar.

The fact that anyone within an organisation can deploy, administer, and manage SaaS applications highlights both one of SaaS's greatest strengths and one of its greatest security risks. While this path is quick and convenient for business enablement, it also sidesteps any internal security review process entirely.

As a result, your security team cannot tell which applications are being adopted and used, whether they are vulnerable to security threats, whether they are being used securely, or how to put guardrails in place to prevent unauthorised access to them. Zero-trust security principles become nearly impossible to enforce.

A Shared Responsibility

Before reprimanding staff for recklessly adopting SaaS applications, companies need to recognise that vendors are continually urging employees to install additional apps and adopt new features. Indeed, the applications themselves frequently meet real business needs, and yes, employees understandably want to use them right away rather than wait for a drawn-out security evaluation. But whether they are aware of it or not, they act this way because shrewd application vendors are actively marketing to them, and often lulling them into believing they are following security best practices. The consent screens shown during installation are meant to give users pause and nudge them to read about their rights and obligations, yet users rarely read the consent text at all.

Always Be Cautious

In other cases, security is simply presumed. Consider the application marketplaces of well-known brands. Marketplace vendors lack the motivation, financial interest, and capacity to assess the security posture of every third-party application sold on their platforms. Yet to drive business, they may mislead users, frequently by omission, into believing that anything sold there enjoys the same level of protection as the marketplace vendor itself. Similarly, marketplace listings may be worded to imply that an application was developed in partnership with, or approved by, a large, trusted brand.

Application marketplaces give rise to third-party integrations that carry the same vulnerabilities exploited in several recent attacks. In the April 2022 GitHub attack campaign, attackers stole and abused legitimate OAuth tokens issued to the well-known vendors Heroku and Travis CI. According to GitHub, the attackers leveraged the trust and broad access granted to these reputable vendors to exfiltrate data from dozens of GitHub customers' private repositories.
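
The reason a leaked OAuth token is so dangerous is that it is a bearer credential: whoever holds it can call the API with the victim's privileges, with no password or second-factor prompt involved. A minimal sketch in Python against the public GitHub REST API (the token value is, of course, hypothetical):

```python
import requests

# A bearer OAuth token is all an attacker needs; no password or 2FA prompt.
TOKEN = "gho_xxxxxxxxxxxxxxxxxxxx"  # hypothetical stolen token

headers = {
    "Authorization": f"token {TOKEN}",
    "Accept": "application/vnd.github+json",
}

# List every repository the token's owner can reach, private ones included.
resp = requests.get(
    "https://api.github.com/user/repos",
    headers=headers,
    params={"visibility": "private", "per_page": 100},
)
resp.raise_for_status()

for repo in resp.json():
    print(repo["full_name"])  # each of these could then be cloned with the same token
```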

Similarly, in December 2022 CircleCI, a provider specialising in CI/CD and DevOps tooling, reported that some customer data had been stolen in a breach. The investigation was triggered by a compromised GitHub OAuth token. According to the CircleCI team, the attackers obtained a valid session token from a CircleCI engineer, allowing them to bypass the two-factor authentication mechanism and gain unauthorised access to production systems. As a result, they were able to steal customer environment variables, tokens, and keys.

An Attraction to Frictionless Adoption 

Vendors also design their platforms and incentive plans to make adoption as easy as accepting a free trial, a free-for-life service tier, or a quick credit-card swipe, often with alluring try-before-you-buy discounts and no commitment. Vendors want users to adopt any exciting new capability immediately, so they remove every barrier to adoption, including routing around ongoing IT and security team reviews. The hope is that an application will become so popular with business users, and so critical to corporate operations, that it cannot be removed even once security personnel become aware of it.

Making adoption too easy, however, also inflates the number of underutilised, abandoned, and exposed apps. An app will often keep running after it has been rejected during a proof of concept (PoC), abandoned because users lost interest, or orphaned because its owner left the company. The result is an expanded, unprotected attack surface that puts the organisation and its data at greater risk.

While educating business users on SaaS security best practices is important, it is even more crucial to curb SaaS sprawl by teaching them to think more critically about the seductive promises of instant deployment and the financial incentives that SaaS vendors dangle.

Additionally, security teams should adopt solutions that help them manage the risks of SaaS misconfigurations and SaaS-to-SaaS integrations. Such tools let employees keep adopting SaaS applications as needed, while the security team performs due diligence on new vendors and integrations and puts essential guardrails in place.
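
As one concrete illustration of what such tooling does under the hood, an audit often begins by enumerating the third-party apps employees have authorised via OAuth. A minimal sketch against the Google Workspace Admin SDK Directory API (the credential setup and user list are assumed, and this is not any particular product's implementation):

```python
from googleapiclient.discovery import build

def audit_oauth_grants(creds, user_emails):
    """Print third-party OAuth grants whose scopes go beyond basic sign-in.

    `creds` is assumed to be domain-wide-delegated admin credentials;
    obtaining them via a service account is omitted for brevity.
    """
    directory = build("admin", "directory_v1", credentials=creds)
    for email in user_emails:
        result = directory.tokens().list(userKey=email).execute()
        for grant in result.get("items", []):
            # Anything broader than identity scopes deserves a closer look.
            risky = [s for s in grant.get("scopes", [])
                     if not s.endswith(("userinfo.email", "userinfo.profile", "openid"))]
            if risky:
                print(f"{email}: '{grant.get('displayText')}' holds scopes: {risky}")
```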

Security Vendors are Turning to GPT as a Key AI Technology

Despite concerns about how generative AI chatbots like ChatGPT can be used maliciously, to create phishing campaigns or write malware, for instance, a number of businesses are using conversational AI technology to improve their products' capabilities, including for security.

ChatGPT, created by OpenAI, is built on the GPT-3 large language model (LLM) and was trained on a variety of large text data sets. When a user asks a question, ChatGPT, which understands human language, responds with thorough explanations and can manage complex tasks such as document creation and code writing. It illustrates how conversational AI can be used to organise massive amounts of data, improve user experience, and facilitate communications.

For example, a conversational AI tool such as ChatGPT, or another option, could act as the back end of an information concierge that automates the use of threat intelligence in enterprise support, according to IT research and advisory firm Info-Tech Research Group.

Orca Security appears to be taking that tack with its Orca Security Platform. Incorporating OpenAI's GPT-3 API, specifically the "text-davinci-003" model, has improved the platform's ability to produce contextual, precise remediation plans for security alerts, Orca's head of data science Itamar Golan and director of innovation Lior Drihem wrote in the announcement. The new pipeline preprocesses the data in a security alert, including fundamental details about the risk and its context, such as affected assets, attack vectors, and potential impact, and then feeds those components to GPT-3 as input. The AI then generates the best and most useful steps to fix the problem, according to Golan and Drihem. These remediation steps can also be attached to tickets, such as Jira tickets, for teams to reference and apply.
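
A minimal sketch of what such a pipeline could look like, using the pre-1.0 openai Python client against the same model family (the prompt wording, alert fields, and API key are illustrative, not Orca's actual implementation):

```python
import json
import openai  # pre-1.0 client, e.g. pip install "openai<1.0"

openai.api_key = "sk-..."  # hypothetical key

def remediation_steps(alert: dict) -> str:
    """Turn a preprocessed security alert into suggested remediation steps."""
    prompt = (
        "You are a cloud security assistant. Given this alert, "
        "suggest concrete remediation steps.\n\n"
        f"Alert: {json.dumps(alert, indent=2)}\n\nRemediation steps:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,  # keep output focused rather than creative
    )
    return resp["choices"][0]["text"].strip()

steps = remediation_steps({
    "risk": "S3 bucket publicly readable",
    "affected_asset": "arn:aws:s3:::example-bucket",
    "attack_vector": "anonymous HTTP access",
    "potential_impact": "data exposure",
})
print(steps)  # this text could then be pasted into a Jira ticket
```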

Even though the AI model can produce inaccurate data or ambiguous results, Drihem and Golan claim that "the benefits of utilising GPT3's natural language generation capabilities outweigh any potential risks," and say they "have seen significant improvements in the efficiency and effectiveness of our remediation efforts."

This is not Orca Security's first use of language models: the company had already integrated GPT-3 into its cloud security platform to improve the remediation information customers receive regarding infosec risks.

"By fine-tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps — giving you a much better remediation plan and assisting you to optimally solve the issue as fast as possible," Golan and Drihem added. 

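At the time, fine-tuning an OpenAI model of this generation meant uploading prompt/completion pairs and training a custom variant of a base model. A hedged sketch of that workflow using the pre-1.0 openai Python client (the file name and example record are invented, and this is not Orca's actual pipeline):

```python
import openai  # pre-1.0 client

openai.api_key = "sk-..."  # hypothetical key

# Training data: a JSONL file of prompt/completion pairs built from
# internal security data, one record per line, e.g.
# {"prompt": "Alert: S3 bucket publicly readable ->",
#  "completion": " 1. Enable S3 Block Public Access ..."}
upload = openai.File.create(
    file=open("security_remediations.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune of a GPT-3 base model on that data.
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job; it yields a custom model name for inference
```
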
Utilizing LLMs and AI in Applications

Orca Security joins other businesses offering language models as part of their product lines. This week, Gupshup introduced Auto Bot Builder, a tool that uses GPT-3 to help enterprises build their own sophisticated conversational chatbots. Auto Bot Builder creates chatbots tailored to an enterprise's specific needs using content from the enterprise's website, documents, message logs, product catalogues, databases, and other corporate systems. The content is processed with the GPT-3 LLM and then fine-tuned with proprietary industry-specific models. Businesses can use Auto Bot Builder to build chatbots for customer support, product discovery, product recommendations, shopping advice, and lead generation in marketing.
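
Gupshup has not published its pipeline, but grounding a GPT-3 chatbot in enterprise content is commonly done by retrieving the passages most relevant to a question and prepending them to the prompt. A rough sketch of that retrieval pattern (the snippets, model choices, and key are illustrative):

```python
import numpy as np
import openai  # pre-1.0 client

openai.api_key = "sk-..."  # hypothetical key

# Illustrative enterprise snippets; in practice these come from websites,
# documents, catalogues, and other corporate systems.
docs = [
    "Returns are accepted within 30 days with a receipt.",
    "The ZX-200 blender ships with a 2-year warranty.",
]

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

doc_vecs = [embed(d) for d in docs]

def answer(question):
    q = embed(question)
    # Pick the snippet most similar to the question (cosine similarity).
    best = max(range(len(docs)), key=lambda i: np.dot(q, doc_vecs[i]) /
               (np.linalg.norm(q) * np.linalg.norm(doc_vecs[i])))
    prompt = (f"Answer using only this company information:\n{docs[best]}\n\n"
              f"Customer question: {question}\nAnswer:")
    resp = openai.Completion.create(model="text-davinci-003",
                                    prompt=prompt, max_tokens=150)
    return resp["choices"][0]["text"].strip()

print(answer("How long is the blender warranty?"))
```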

These chatbots are distinct from ChatGPT, a general-purpose chatbot, but they share its ability to communicate with end users at an "exceptionally high degree of language capability," according to Gupshup.

The cryptocurrency community is also using ChatGPT to develop software such as trading bots and cryptocurrency blogs, competitive intelligence analyst Jerrod Piker of Deep Instinct wrote in an email. Examples include using ChatGPT to create a sample smart contract, and to build a trading bot that helps automate buying and selling cryptocurrencies by identifying entry and exit points.
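
To give a flavour of the latter, the entry/exit logic people typically ask ChatGPT to draft often boils down to something like a moving-average crossover. A toy sketch in Python (prices and parameters are illustrative; this is neither trading advice nor the exact code the community generates):

```python
# A toy moving-average-crossover strategy: the kind of entry/exit logic
# ChatGPT is often asked to draft. Purely illustrative.
def signals(prices, short=5, long_=20):
    """Return (index, action) pairs where the short MA crosses the long MA."""
    out = []
    for i in range(long_ + 1, len(prices) + 1):
        short_ma = sum(prices[i - short:i]) / short
        long_ma = sum(prices[i - long_:i]) / long_
        prev_short = sum(prices[i - short - 1:i - 1]) / short
        prev_long = sum(prices[i - long_ - 1:i - 1]) / long_
        if prev_short <= prev_long and short_ma > long_ma:
            out.append((i - 1, "BUY"))   # entry: short MA crosses above long MA
        elif prev_short >= prev_long and short_ma < long_ma:
            out.append((i - 1, "SELL"))  # exit: short MA crosses below long MA
    return out

# Example with synthetic prices: a dip followed by a rally triggers a BUY.
prices = [100 - n for n in range(25)] + [75 + 2 * n for n in range(25)]
print(signals(prices))
```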

The idea of a generative AI chatbot that can answer questions is not new, but Casey Ellis, founder and CTO of Bugcrowd, notes that ChatGPT stands out from the competition because of the breadth of topics it can handle and its ease of use.