
Unauthorized Use of AI Tools by Employees Exposes Sensitive Corporate Data


 

Artificial intelligence has rapidly revolutionised the modern workplace, creating unprecedented opportunities while presenting complex challenges. Although AI was initially conceived as a productivity aid, it has quickly evolved into a transformational force that has changed the way employees think, work, and communicate.

Despite this rapid rise, many organisations remain ill-prepared to deal with the unchecked use of artificial intelligence. With the advent of generative AI, which can produce text, images, video, and audio, employees have increasingly adopted it for drafting emails, preparing reports, analysing data, and even producing creative content.

Advanced language models, trained on vast datasets, mimic human language with remarkable fluency, enabling workers to complete in minutes tasks that once took hours. According to some surveys, a majority of American employees rely on AI tools, often without formal approval or oversight; many of these tools are freely accessible with little more than an email address.

Platforms such as ChatGPT, which requires nothing more than an email address to sign up, exemplify this fast-growing trend. Nonetheless, the widespread use of unregulated artificial intelligence tools raises serious concerns about privacy, data protection, and corporate governance, concerns employers must address with clear policies, robust safeguards, and a better understanding of the evolving digital landscape before they materialise into real harm.

Cybernews recently found that the surge in unapproved AI use in the workplace is a worrying phenomenon. A staggering 75 per cent of employees who use so-called “shadow artificial intelligence” tools admit to having shared sensitive or confidential information through them, information that could easily compromise their organisations.

More troubling still, the trend is not restricted to junior staff; it is led by the organisation's leadership. Roughly 93 per cent of executives and senior managers admit to using unauthorised AI tools, making them the most frequent users; management follows at 73 per cent and professionals at 62 per cent.

In other words, unauthorised AI use is not an isolated problem but a systemic one. Employee records, customer information, internal documents, financial and legal records, and proprietary code are among the most commonly exposed categories of sensitive information, and each has the potential to become a major vulnerability or lead to a serious security breach.

This continues despite nearly nine out of ten workers admitting that using AI carries significant risks. Some 64 per cent of respondents recognise the possibility of data leaks through unapproved artificial intelligence tools, and more than half say they would stop using those tools if such a leak occurred. Proactive measures, however, remain rare. The result is a growing disconnect between awareness and action in corporate data governance, one that could have profound consequences if not addressed.

The survey also reveals an interesting paradox within corporate hierarchies: the senior leaders responsible for setting data governance standards are also the most frequent infringers of those standards. With 93 per cent of executives and senior managers using unapproved AI tools, they outpace all other job levels by a wide margin.

Managers and team leaders, who are responsible for ensuring compliance and modelling best practices within the organisation, also engage heavily with unauthorised platforms. This pattern, researchers suggest, reflects a worrying disconnect between policy enforcement and actual behaviour, one that erodes accountability from the top down. Žilvinas Girėnas, head of product at Nexos.ai, warns that the implications of such unchecked behaviour extend far beyond simple misuse.

The truth is that it is impossible to determine where sensitive data will end up once it is pasted into unapproved AI tools. "It might be stored, used to train another model, exposed in logs, or even sold to third parties," he explained. Through such actions, confidential contracts, customer details, or internal records can quietly slip into external systems without detection, he added.

A study by IBM underscores the seriousness of this issue, estimating that shadow artificial intelligence can add roughly $670,000 to the average cost of a data breach, an expense few companies can afford. Even so, the Cybernews study found that almost one in four employers has no formal policy governing artificial intelligence use in the workplace.

Experts believe that awareness alone will not be enough to prevent these risks. As Sabeckis noted, “It would be a shame if the only way to stop employees from using unapproved AI tools was through the hard lesson of a data breach.” For many companies, even a single breach can be catastrophic. Girėnas echoed this sentiment, emphasising that shadow AI “thrives in silence” when leadership fails to act decisively.

He warned that without clear guidelines and sanctioned alternatives, employees will continue to rely on whatever tools seem convenient, turning efficiency shortcuts into potential security breaches. Beyond technical safeguards, experts emphasise that organisations must adopt comprehensive internal governance strategies to mitigate the growing risks of unregulated artificial intelligence.

A well-structured artificial intelligence framework rests on several elements, starting with a formal AI policy. This policy should clearly state the acceptable uses for AI, prohibit the unauthorised download of free AI tools, and limit the sharing of personal, proprietary, and confidential information through these platforms.

Businesses are also advised to revise and update existing IT, network security, and procurement policies to keep pace with the rapidly changing AI environment. Proactive employee engagement remains equally crucial: training programmes can equip workers to understand potential risks, identify sensitive information, and follow best practices for safe, responsible use of AI.

Also essential is a robust data classification strategy that enables employees to recognise and properly handle confidential or sensitive information before interacting with AI systems.
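To make the idea concrete, here is a minimal sketch of what a first-pass classification check might look like before text is pasted into an external AI tool. The patterns and labels below are hypothetical illustrations, not a prescribed scheme:

```python
import re

# Hypothetical example patterns; a real classification scheme would be far
# broader and tuned to the organisation's own data categories.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "credential marker": re.compile(r"(?i)\b(api[_-]?key|secret|token)\b"),
}

def classify(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = classify(draft)
if findings:
    print("Hold on: draft contains", ", ".join(findings))
```

A check like this cannot catch everything, which is why it complements, rather than replaces, employee training on recognising sensitive material.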

Organisations may also benefit from formal authorisation processes that limit access to AI tools to qualified personnel, along with documentation protocols that record inputs and outputs so that compliance and intellectual-property issues can be tracked. Periodic reviews of AI-generated content for bias, accuracy, and appropriateness further safeguard the brand's reputation.
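As an illustration of what such an authorisation and documentation layer might look like, the sketch below wraps a sanctioned AI call with an allow-list check and an audit log. The tool names, user IDs, and `call_approved_model` placeholder are all invented for the example:

```python
import json
import time
import uuid

# Hypothetical allow-lists; in practice these would live in an identity
# provider or configuration service, not in source code.
APPROVED_TOOLS = {"internal-llm"}
AUTHORISED_USERS = {"analyst-07", "counsel-02"}

def call_approved_model(prompt: str) -> str:
    """Placeholder for a request to a vetted, company-sanctioned AI service."""
    return f"[model response to {len(prompt)} characters of input]"

def audited_ai_call(user: str, tool: str, prompt: str,
                    log_path: str = "ai_audit.log") -> str:
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"{tool!r} is not an approved AI tool")
    if user not in AUTHORISED_USERS:
        raise PermissionError(f"{user!r} is not authorised to use {tool!r}")
    response = call_approved_model(prompt)
    # Record inputs and outputs so compliance and intellectual-property
    # questions can be traced back to a specific request.
    with open(log_path, "a") as log:
        log.write(json.dumps({
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "user": user,
            "tool": tool,
            "prompt": prompt,
            "response": response,
        }) + "\n")
    return response
```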

Continuously monitoring AI tools, including their evolving terms of service, helps ensure ongoing compliance with company standards. Finally, a clearly defined incident response plan, with designated points of contact for potential data exposure or misuse, will help organisations respond quickly to any AI-related incident.
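Even a simple plan is easier to execute when the designated contacts and first steps are written down in one unambiguous place. The sketch below shows one possible shape for such a record; every name and step in it is an invented placeholder:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentResponsePlan:
    # Designated points of contact for suspected data exposure or misuse.
    contacts: dict[str, str]
    first_steps: list[str] = field(default_factory=list)

# Entirely hypothetical example values.
AI_DATA_EXPOSURE_PLAN = IncidentResponsePlan(
    contacts={
        "security lead": "soc@example.com",
        "legal": "counsel@example.com",
        "communications": "press@example.com",
    },
    first_steps=[
        "Suspend the affected account's access to the AI tool",
        "Preserve prompt and response logs for investigation",
        "Assess what data was exposed and notify legal",
    ],
)
```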

Combined, these measures represent a significant step toward structured, responsible AI adoption that balances innovation with accountability. And although internal governance is the cornerstone of responsible AI usage, external partnerships and vendor relationships are equally important in protecting organisational data.

According to experts, organisational leaders need to be vigilant not just about internal compliance but also about third-party contracts and data processing agreements. Any agreement with an external AI provider should explicitly include data privacy, retention, and usage provisions that protect confidential information from being exploited or stored outside its intended use.

Business leaders, particularly CEOs and senior executives, must examine vendor agreements carefully to ensure they align with international data protection frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). By incorporating these safeguards into contract terms, organisations can ensure that sensitive data is handled with the same rigour and integrity as their internal privacy standards demand, improving their overall security posture.

As artificial intelligence continues to redefine the limits of workplace efficiency, its responsible integration has become a key factor in organisational trust and resilience. Getting AI to work effectively in business requires not only innovation but also mature governance frameworks to accompany its use.

Companies that adopt a proactive approach, enforcing clear internal policies, establishing transparency with vendors, and cultivating a culture of accountability, will gain more than security: they will also gain credibility with clients, employees, and regulators.


In addition to ensuring compliance, responsible AI adoption can improve operational efficiency, increase employee confidence, and strengthen brand loyalty in an increasingly data-conscious market. According to experts, artificial intelligence should not be viewed merely as a risk to be controlled, but as a powerful tool to be harnessed under strong ethical and strategic guidelines. 

It is becoming increasingly apparent that in today's business climate every prompt and every dataset can create a vulnerability. The organisations that thrive will be those that pair technological ambition with disciplined governance, transforming AI from a source of uncertainty into a tool for innovation that is as sustainable and secure as possible.

These 6 Ways Will Help Improve Your Organization's Security Culture


A robust security culture is the best way to protect your organization from data breaches. This blog will cover six ways to foster a strong security culture.

The average cost of a data breach to an organization rose to $4.45 million in 2023 and will probably keep climbing. While we can't be certain how the digital landscape will evolve, building a robust security culture is one way of future-proofing your company.

If you haven't thought much about the concept, or you're not sure where to start, don't worry. This blog will explain what a security culture is and provide six practical tips for improving your organization's security.

What is security culture and how did it evolve?

There has been much discussion recently about the cybersecurity talent gap and the issues it causes for organizations attempting to improve their data security. While it is undoubtedly an urgent problem, considerably fewer firms appear to be paying close attention to the concept of security culture.

That's unfortunate, because building a strong security culture is likely the single most important thing you can do to defend your firm against security breaches.

The term security culture refers to the approach everyone in your organization takes toward data security, including how much people care about security and how they behave in practice.

Is security a priority for the leadership team? Is data security awareness training an important element of your strategy? Even something as simple as how strictly you enforce rules prohibiting anyone without a staff pass from entering the building contributes to the overall security culture.

We're all busy, and it's easy to overlook security. For instance, how many of us are comfortable shutting the door behind us when someone else wants to come in? Yet physical security is a critical component of data security.

6 ways to create a strong security culture for your organization

Creating a strong security culture requires everyone in your company to prioritize it for the greater good. 

1. Conduct regular security awareness training sessions for all workers

The starting point is to develop a training plan, and it should not be limited to new employees. While security awareness must be part of onboarding, building a truly strong security culture requires everyone, from the boardroom down, to be committed to it.

Start with the basics when building a training program:

  • Data protection and privacy: Everyone, regardless of industry or location, should be aware of their legal obligations under rules such as HIPAA or GDPR.
  • Password management: Using password managers and additional access controls such as multi-factor authentication.
  • Safe internet habits: Recognizing the dangers of downloading content or visiting insecure sites. Remind staff to be on the watch for phishing attacks and to report any questionable emails.
  • Physical security: Building positive habits, such as having employees always lock their computers when they leave their desks.

2. Establish a thorough security policy and set of recommendations

A properly stated security policy is required to get everyone on board. But a word of caution: you must strike a balance between how much detail you include in your security policy documents and how long they take to read.

3. Plan for risk mitigation and vulnerability identification

Even in a strong security culture, no data security solution is flawless, so you must maintain vigilance. Fortunately, there are numerous measures you can take to assess your security and discover areas for improvement:

  • Penetration testing: Deliberately attempting to breach your own systems. If you lack the means to do it in-house, third-party security firms can assist you.
  • The principle of least privilege: Give staff only the access they need to execute their tasks. This means being selective about which rights are granted rather than allowing broad access; see the sketch after this list.
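Here is a minimal sketch of what least-privilege enforcement can look like in code, with invented role names and permissions; real systems would lean on the access-control features of an identity provider or database:

```python
# Hypothetical role-to-permission mapping: each role gets only the
# specific rights it needs, and everything else is denied by default.
ROLE_PERMISSIONS = {
    "sales": {"crm:read"},
    "engineer": {"repo:read", "repo:write"},
    "hr": {"employee-records:read", "employee-records:write"},
}

def can_access(role: str, permission: str) -> bool:
    """Allow an action only if the role was explicitly granted it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can_access("engineer", "repo:write")
assert not can_access("sales", "employee-records:read")  # denied by default
```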

4. Install security technologies and perform frequent audits

In many respects, your company's data is its most important asset. Sadly, this means there are many people who want to get their hands on it for bad motives. To prevent this, you must use secure tools with up-to-date encryption protocols.

First, assess your present technology stack. Is it as seamless as it could be? It is not unusual for separate departments to employ distinct tools, each adopted years earlier, to accomplish a specific task. When information is transmitted across systems inefficiently, security flaws can follow.

Moving to a fully integrated enterprise resource planning (ERP) solution is one answer to this problem.

5. Build secure communication channels

When it comes to transforming your company's culture into one that prioritizes security, communication is key.

First and foremost, it is critical to identify who is accountable for each aspect of security policy. Usually this means creating a table that clearly lays it out, covering everything from IT teams dealing with system flaws to individual employees being responsible for the security of their own devices.

Next, cultivate an open culture. This can be tough at first because, when a problem arises, many people's first reaction is to assign blame. Although understandable, this is not recommended: if blaming becomes the norm, it ironically increases the likelihood of a security breach.

6. Develop protocols for crisis management and incident response

If something catastrophic happens, you must have a plan in place to deal with it. Everyone in the organization should be well-versed in the plan so it can be implemented as quickly and efficiently as possible if the need arises.

Take the following actions to ensure that your organization is properly prepared:

  • 1) Create an Incident Response Plan (IRP): A defined strategy that specifies the processes everyone should follow when a security event happens.
  • 2) Form an Incident Response Team (IRT): Assign specific responsibility for incident management to individuals. To cover every angle, this should include personnel from your legal, communications, and executive teams, as well as IT professionals.




Data Theft: Employees Steal Company Data After Getting Fired


Employees taking company data with them

Around 47 million Americans left their jobs in 2021, and some took company data with them.

The finding comes from the latest report by Cyberhaven Inc, a data detection and response firm. It studied 372,000 incidents of data exfiltration, the unauthorized transfer of critical information between systems, involving 1.4 million workers over a six-month period. Cyberhaven found that 9.4% of employees took data during that time frame.

Over 40% of the compromised data was customer or client details, 13.8% was source code, and 8% was regulated personally identifiable information. The top 1% of offenders were responsible for around 8% of incidents, and the top 10% for 35% of incidents.

Reason for data extraction

As expected, the prime time for data exfiltration was between an employee submitting notice and their last day at work. Cyberhaven calculated around a 38% rise in incidents during the post-notice period and an 83% rise in the two weeks before an employee's resignation. Incidents jumped 109% on the day employees were fired.

The Cyberhaven blog says:

"While external threats capture headlines, our report proves that internal leaks are rampant – costing millions (sometimes billions) in IP loss and reputational damage. High-profile recent examples include Twitter, TikTok, and Facebook, but for the most part, this trend has flown under the radar."

The scale of the incident

Viewed per person, the risk seems insignificant, but it intensifies with scale. Companies experience an average of just 0.045 data exfiltration incidents per employee per month, which sounds tiny until you multiply it out: at a 1,000-employee organization, that works out to roughly 45 incidents every month.

The most common way employees take information out is through personal cloud storage accounts, used in 27.5% of incidents, followed by personal webmail at 19% and corporate email messages sent to personal accounts at 14.4%. Removable storage drives account for roughly one in seven cases.
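As a rough illustration of how transfers over these channels can be surfaced, the sketch below flags outbound file movements to personal cloud or webmail domains. The domain list and event format are hypothetical; a real deployment would use a maintained DLP category feed rather than a static set:

```python
# Hypothetical set of personal-service domains to watch for.
PERSONAL_DOMAINS = {"drive.google.com", "dropbox.com", "mail.yahoo.com"}

def flag_transfer(event: dict) -> bool:
    """Return True when a file transfer targets a known personal service."""
    return event.get("destination_domain") in PERSONAL_DOMAINS

events = [
    {"user": "jdoe", "destination_domain": "dropbox.com", "bytes": 52_000_000},
    {"user": "asmith", "destination_domain": "sharepoint.example.com", "bytes": 1_200},
]
for e in events:
    if flag_transfer(e):
        print(f"ALERT: {e['user']} sent {e['bytes']} bytes to {e['destination_domain']}")
```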

Most incidents are caused by accident

Howard Ting, Cyberhaven's chief executive, warned against jumping to the conclusion that many employees are criminals. He believes the foremost cause of data exfiltration is accident, and one shouldn't assume every user is guilty. Users, he said, are often simply unaware that they are not allowed to upload critical information to personal drives.

Most organizations fail to clearly state policies regarding data ownership. People in sales may believe they can keep the account details they handle, developers may keep their code as a personal achievement, company emails containing internal contact details are casually forwarded to personal accounts without ill intent, and critical information sits on local hard drives, just a few clicks away. Cyberhaven comments:

"Our data suggests employees often sense their impending dismissal and decide to collect sensitive company data for themselves, while others quickly siphon away data before their access is turned off."