
Telus Makes History with ISO Privacy Certification in AI Era

Telus, a prominent telecoms provider, has reached a significant milestone by obtaining ISO Privacy by Design certification. The achievement marks a turning point in the company's dedication to privacy, demonstrates its commitment to industry-leading data protection practices, and sets a new benchmark for the sector.

Privacy by Design, a concept introduced by Dr. Ann Cavoukian, emphasizes the integration of privacy considerations into the design and development of technologies. Telus' attainment of this certification showcases the company's proactive approach to safeguarding user information in an era where digital privacy is a growing concern.

Telus' commitment to privacy aligns with the broader context of technological advancements and their impact on personal data. As artificial intelligence (AI) continues to shape various industries, privacy concerns have become more pronounced. The intersection of AI and privacy is critical for companies to navigate responsibly.

AI systems often process enormous volumes of sensitive data, which is what makes this intersection so important. Telus' ISO Privacy by Design certification is especially significant in a digital context where privacy violations and data breaches frequently make the news.

In an era where data is often referred to as the new currency, the need for robust privacy measures cannot be overstated. Telus' proactive stance not only meets regulatory requirements but also sets a precedent for other companies to prioritize privacy in their operations.

Dr. Ann Cavoukian, the originator of Privacy by Design, says that "integrating privacy into the design process is not only vital but also feasible and economical. It is privacy plus security, not privacy or security alone."

As technology advances, it presents both opportunities and concerns for privacy. Telus' certification is a shining example for the sector, indicating that privacy needs to be integrated into technology development from the ground up.

The achievement of ISO Privacy by Design certification by Telus represents a turning point in the ongoing conversation about privacy and technology. The proactive approach adopted by the organization not only guarantees adherence to industry norms but also serves as a noteworthy model for others to emulate. Privacy will continue to be a key component of responsible and ethical innovation as AI continues to change the digital landscape.


Apple Co-founder Says AI Could Make Cyber Scams ‘Harder to Spot’


Apple co-founder Steve Wozniak recently cautioned that artificial intelligence (AI) could make cyber scams and misinformation harder to spot.

Speaking to the BBC, he noted that the technology could also be harnessed by "bad actors." Mr. Wozniak said AI-generated content should be clearly labelled, and he highlighted the need for proper regulation of the industry.

In March, Mr. Wozniak, along with Tesla CEO Elon Musk, signed an open letter urging a pause in the development of more powerful AI models.

Mr. Wozniak, known as Woz in the tech community, is a seasoned Silicon Valley veteran who co-founded Apple with Steve Jobs and created the company's first computer. In an interview with BBC Technology Editor Zoe Kleinman, he discussed his fears about artificial intelligence as well as its advantages.

"AI is so intelligent it's open to the bad players, the ones that want to trick you about who they are," said Kleinman. 

AI refers to computer programs that can perform tasks that would typically require human intelligence. This includes systems that can identify objects in images and chatbots that can comprehend queries and provide responses that seem human.

Mr. Wozniak firmly believes that AI will not replace humans, since it lacks emotion. However, he warns that it could make bad actors more convincing: generative AI tools such as ChatGPT, for example, can craft text that sounds human and "intelligent."

A Human Really has to Take the Responsibility 

Mr. Wozniak believes that those who publish AI-generated content should be held accountable for it. "A human really has to take the responsibility for what is generated by AI," he says.

The large tech companies that "feel they can kind of get away with anything" should be held accountable by regulations, according to him.

Yet he expressed doubt that authorities would make the correct decisions, saying, "I think the forces that drive for money usually win out, which is sort of sad."

Technology cannot be Stopped 

Mr. Wozniak, a computing pioneer, believes that those developing artificial intelligence today could learn from the opportunities missed in the early days of the internet. Although "we can't stop the technology," in his view, we can teach people to recognize fraud and other malicious attempts to obtain personal information.

Last week, Apple's current CEO, Tim Cook, told investors that it is crucial to be "deliberate and thoughtful" in how the company approaches AI. "We view AI as huge, and we'll continue weaving it in our products on a very thoughtful basis," he said.

Shadow AI: The Novel, Unseen Threat to Your Company's Data

 

Earlier this year, ChatGPT emerged as the face of generative AI, a tool designed to help with almost everything from creating business plans to breaking down complex topics into simple terms. Since then, businesses of all sizes have been eager to explore and reap the benefits of generative AI.

However, as this new chapter of AI innovation moves at breakneck speed, CEOs and leaders risk overlooking a type of technology that has been infiltrating through the back door: shadow AI. 

Overlooking shadow AI is a risky option

To put it simply, "shadow AI" refers to employees who, without management awareness, add AI tools to their work systems to make life easier. Although most of the time this pursuit of efficiency is well-intentioned, it is exposing businesses to new cybersecurity and data privacy risks.

Employees who want to increase productivity and process efficiency are usually the ones who embrace shadow AI, typically to get through tedious tasks or laborious processes. In practice, that might mean asking AI to summarise the main points of meeting minutes or to comb through hundreds of PowerPoint decks in search of critical data.

Employees typically don't intentionally expose their company to risk. On the contrary: all they're doing is simplifying things so they can cross more items off their to-do list. However, given that over a million adults in the United Kingdom have already used generative AI at work, there is a chance that a growing number of employees will use models their employers have not approved as safe, endangering data security in the process.

Major risks 

Shadow AI carries two risks. First, employees may feed sensitive company information into such tools, or leave it exposed to scraping while the technology runs in the background. For example, an employee using ChatGPT or Google Bard to boost productivity or clarify information may be entering sensitive or confidential company data.

Sharing data isn't always an issue—companies frequently rely on third-party tools and service providers for information—but problems can arise when the tool in question and its data-handling policies haven't been assessed and approved by the business. 

The second risk related to shadow AI is that, because businesses generally aren't aware that these tools are being used, they can't assess the risks or take appropriate action to minimise them. (This may also apply to employees who receive false information and subsequently use it in their work.) 

This is something that occurs behind closed doors and beyond the knowledge of business leaders. In 2022, 41% of employees created, modified, or accessed technology outside of IT's purview, according to research from Gartner. By 2027, the figure is expected to increase to 75%. 

And therein lies the crux of the issue. How can organisations monitor and assess the risks of something they don't understand? 

Some companies, such as Samsung, have gone so far as to ban ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information via the public platform. Apple and JP Morgan have also restricted employee use of ChatGPT. Others are burying their heads in the sand or failing to notice the problem at all. 

What should business leaders do to mitigate the risks of shadow AI while also ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer? 

First, leaders should educate teams on what constitutes safe AI practice and the risks associated with shadow AI, and provide clear guidance on when tools like ChatGPT can and cannot be used safely at work.
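That guidance can be reinforced with lightweight tooling. What follows is a minimal illustrative sketch, not something drawn from the research cited here, of a pre-prompt check that flags obviously sensitive strings before they are pasted into a public AI tool. The patterns and the flag_sensitive helper are hypothetical; a real deployment would use the organisation's own data-classification rules.

```python
import re

# Hypothetical patterns for illustration only; a real organisation would
# plug in its own data-classification rules here.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "API key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return labels for anything in the prompt that should not leave the company."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarise this: contact jane.doe@example.com, token sk-abc123def456ghi789"
    warnings = flag_sensitive(draft)
    if warnings:
        print("Blocked - draft prompt appears to contain:", ", ".join(warnings))
    else:
        print("Draft prompt passed the basic check.")
```

Even a crude check like this makes the policy concrete: a clear block message tells employees why a prompt was stopped, reinforcing the training rather than replacing it.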

For tasks in the latter category, where public tools are off-limits, companies should consider offering private, in-house generative AI tools instead. Models such as Llama 2 and Falcon can be downloaded and hosted securely to power generative AI tools, while Azure OpenAI offers a middle-ground option in which data remains within the company's own Microsoft tenant.

These options avoid the data and IP risks that come with public large language models like ChatGPT, whose handling of submitted data is not yet fully transparent, while still letting employees get the results they want.
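To make the in-house option concrete, here is a minimal sketch of serving an open model on company hardware. It assumes the Hugging Face transformers and torch packages (plus accelerate for automatic device placement), enough memory for the chosen model, and that the model's licence has been accepted on Hugging Face; the prompt is a placeholder.

```python
# Minimal sketch: run an open LLM in-house rather than sending prompts to a
# public endpoint. Requires: pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example open model; Falcon (e.g. "tiiuae/falcon-7b-instruct") works similarly.
MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# The prompt, and any sensitive data inside it, never leaves the
# organisation's own infrastructure.
prompt = "Summarise the key actions from these meeting minutes: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The trade-off is operational: hosting a model in-house means owning its updates, capacity, and access controls, whereas the Azure OpenAI route keeps that burden with Microsoft while confining data to the company's tenant.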

AI-Generated Phishing Emails: A Growing Threat

Phishing emails created by artificial intelligence (AI) are quickly catching up to human-written ones in effectiveness, according to disturbing new research. With tools such as OpenAI's ChatGPT advancing so rapidly, there is concern that cyber threats will grow in step.

IBM's X-Force recently conducted a comprehensive study, pitting ChatGPT against human experts in the realm of phishing attacks. The results were eye-opening, demonstrating that ChatGPT was able to craft deceptive emails that were nearly indistinguishable from those composed by humans. This marks a significant milestone in the evolution of cyber threats, as AI now poses a formidable challenge to conventional cybersecurity measures.

One of the critical findings of the study was the sheer volume of phishing emails that ChatGPT was able to generate in a short span of time. This capability greatly amplifies the potential reach and impact of such attacks, as cybercriminals can now deploy a massive wave of convincing emails with unprecedented efficiency.

Furthermore, the study highlighted the adaptability of AI-powered phishing. ChatGPT demonstrated the ability to adjust its tactics in response to recipient interactions, enabling it to refine its approach and increase its chances of success. This level of sophistication raises concerns about the evolving nature of cyber threats and the need for adaptive cybersecurity strategies.

While AI-generated phishing is on the rise, it's important to note that human social engineers still maintain an edge in certain nuanced scenarios. Human intuition, emotional intelligence, and contextual understanding remain formidable obstacles for AI to completely overcome. However, as AI continues to advance, it's crucial for cybersecurity professionals to stay vigilant and proactive in their efforts to detect and mitigate evolving threats.

Cybersecurity measures need to be reevaluated in light of the growing competition between AI-generated phishing emails and human-crafted attacks. Defenders must adjust to this new reality as the landscape changes. Staying ahead of cyber threats in this quickly evolving digital age will require combining the strengths of human experience with cutting-edge technologies.

The Dark Side of AI: How Cyberthreats Could Get Worse, Report Warns

 

A UK government report warns that by 2025, artificial intelligence could escalate the risk of cyberattacks and undermine public confidence in online content. It also suggests that terrorists could use the technology to plot chemical or biological attacks.

However, other experts doubt that the technology will advance as predicted. Prime Minister Rishi Sunak is expected this week to highlight both the opportunities and the threats it presents.

The government report analyses generative AI, the technology that now powers widely used chatbots and image-generation software.

Another concern raised in the report is the possibility that AI will enable faster, more potent, and more extensive cyberattacks by 2025. Hackers could use AI to convincingly mimic official language, according to Joseph Jarnecki of the Royal United Services Institute; bureaucratic terminology has a distinctive tone that attackers have so far found difficult to imitate.

This report's release comes before Sunak's speech detailing the UK government's intentions to guarantee AI safety and position the nation as a global leader in this area. Although Sunak acknowledges that AI has the potential to improve economic growth and problem-solving abilities, he also stresses the need to address the risks and anxieties that come with it. 

"AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought were beyond us. But it also brings new dangers and new fears," Mr Sunak is expected to say. 

Furthermore, "frontier AI," or highly advanced AIs, will be discussed at a government summit the following week. The question of whether these technologies are dangerous for humans is still up for debate. Another recently released report from the Government Office for Science states that many experts believe this risk to be unlikely and that there are few possible ways for it to come to go through. 

To threaten human existence, an AI would need influence over vital systems, such as financial or military infrastructure, and would need to develop new capabilities such as autonomy, self-improvement, and the capacity to evade human oversight. The report does concede, though, that opinions differ on when such capabilities might emerge.

Generative AI Projects Can Lead to Major Security Threats for Businesses

Generative AI Projects' Potential Cybersecurity Risks

Have you heard about the potential cybersecurity dangers that generative AI projects pose to businesses? The topic has recently made the news, and it is worth understanding if you follow technology and its impact on enterprises.

What are the dangers?

According to a recent report, developers are thrilled about tools like ChatGPT and other large language models (LLMs). However, most organizations are not well prepared to defend against the vulnerabilities this new technology introduces.

According to Rezilion's research, the technology is being adopted rapidly by the open-source community (there are over 30,000 GPT-related projects on GitHub alone), yet many of these early projects are insecure, leaving organizations exposed to heightened threats and significant security risk.

Rezilion's report addresses several significant aspects of generative AI security risk, including trust-boundary risk, data-management risk, inherent model risk, and basic security best practices. LLM-based projects, for example, are immensely popular with developers.

However, the researchers found that this popularity is paired with relative immaturity and generally low security scores. If developers rely on these projects to build new generative-AI-based enterprise systems, they may introduce even more vulnerabilities that organizations are unprepared to defend against.

Why is it important to be aware of these dangers?

Many industries, from healthcare to banking, benefit from generative AI. However, like any new technology, it has risks. In the case of generative AI, one of the most significant dangers is cybersecurity.

By staying aware of these risks and taking proactive steps to mitigate them, organizations can make use of this exciting new technology while protecting themselves from potential hazards. It all comes down to striking the right balance between innovation and security.

So there you have it: an overview of the cybersecurity threats that generative AI initiatives can pose to businesses and what companies can do to mitigate them. We hope you found this information helpful! If you want to learn more about the subject, read Rezilion's report. Thank you for taking the time to read this!