
Cybersecurity Teams Tackle AI, Automation, and Cybercrime-as-a-Service Challenges

 




In today's digital society, defenders are grappling with the transformative impact of artificial intelligence (AI), automation, and the rise of Cybercrime-as-a-Service. Recent research commissioned by Darktrace reveals that 89% of global IT security teams believe AI-augmented cyber threats will significantly impact their organisations within the next two years, yet 60% feel unprepared to defend against these evolving attacks.

One notable effect of AI in cybersecurity is its influence on phishing attempts. Darktrace's observations show a 135% increase in 'novel social engineering attacks' in early 2023, coinciding with the widespread adoption of ChatGPT. These attacks, which deviate linguistically from typical phishing emails, indicate that generative AI is enabling threat actors to craft sophisticated and targeted attacks at unprecedented speed and scale.

The situation is further complicated by the rise of Cybercrime-as-a-Service. Darktrace's 2023 End of Year Threat Report highlights its dominance, with offerings like Malware-as-a-Service and Ransomware-as-a-Service making up the majority of malicious tools used by attackers. This as-a-Service ecosystem provides attackers with pre-made malware, phishing email templates, payment processing systems, and even helplines, reducing the technical knowledge required to execute attacks.

As cyber threats become more automated and AI-augmented, the World Economic Forum's Global Cybersecurity Outlook 2024 warns that the number of organisations maintaining minimum viable cyber resilience has decreased by 30% compared to 2023. Small and medium-sized companies, in particular, show a significant decline in cyber resilience. Proactive cyber readiness becomes pivotal in the face of an increasingly automated and AI-driven threat environment.

Traditionally, organisations relied on reactive measures, waiting for incidents to happen and using known attack data for threat detection and response. However, this approach is no longer sufficient. The shift to proactive cyber readiness involves identifying vulnerabilities, addressing security policy gaps, breaking down silos for comprehensive threat investigation, and leveraging AI to augment human analysts.

AI plays a crucial role in breaking down silos within Security Operations Centers (SOCs), giving defenders a proactive way to scale their efforts. By correlating information from various systems, datasets, and tools, AI can offer real-time behavioural insights that human analysts alone cannot achieve. Darktrace's experience in applying AI to cybersecurity over the past decade emphasises the importance of a balanced mix of people, processes, and technology for effective cyber defence.
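To make this cross-source correlation concrete, here is a minimal sketch, assuming two hypothetical, already-normalised event feeds and deliberately low thresholds for the toy data; it illustrates the concept only and is not how Darktrace's platform works.

```python
# Minimal sketch: correlate two hypothetical event feeds and flag behavioural
# outliers per host. The feeds, metrics, and thresholds are illustrative
# assumptions; real SOC platforms use far richer models and much more data.
from collections import defaultdict
from statistics import mean, stdev

edr_events = [  # hypothetical endpoint telemetry
    {"host": "ws-01", "hour": 9,  "processes_started": 42},
    {"host": "ws-01", "hour": 10, "processes_started": 39},
    {"host": "ws-02", "hour": 9,  "processes_started": 35},
    {"host": "ws-02", "hour": 10, "processes_started": 310},   # suspicious burst
]
proxy_events = [  # hypothetical web-proxy telemetry
    {"host": "ws-01", "hour": 9,  "bytes_out": 1_200_000},
    {"host": "ws-01", "hour": 10, "bytes_out": 1_500_000},
    {"host": "ws-02", "hour": 10, "bytes_out": 95_000_000},    # large upload
]

def flag_outliers(events, metric, threshold=3.0):
    """Return hosts whose metric deviates strongly from the population baseline."""
    values = [e[metric] for e in events]
    mu, sigma = mean(values), stdev(values)
    return [e["host"] for e in events if sigma and abs(e[metric] - mu) / sigma > threshold]

# Correlate: a host flagged in more than one data source gets top priority.
# threshold=1.0 only because the toy sample is tiny; ~3.0 is typical on real data.
alerts = defaultdict(set)
for host in flag_outliers(edr_events, "processes_started", threshold=1.0):
    alerts[host].add("unusual process activity")
for host in flag_outliers(proxy_events, "bytes_out", threshold=1.0):
    alerts[host].add("unusual outbound traffic")

for host, reasons in alerts.items():
    priority = "HIGH" if len(reasons) > 1 else "MEDIUM"
    print(f"{priority}: {host} -> {', '.join(sorted(reasons))}")
```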

A successful human-AI partnership can alleviate the burden on security teams by automating time-intensive and error-prone tasks, allowing human analysts to focus on higher-value activities. This collaboration not only enhances incident response and continuous monitoring but also reduces burnout, supports data-driven decision-making, and addresses the skills shortage in cybersecurity.

As AI continues to advance, defenders must stay ahead, embracing a proactive approach to cyber resilience. Prioritising cybersecurity will not only protect institutions but also foster innovation and progress as AI development continues. The key takeaway is clear: the escalation in threats demands a collaborative effort between human expertise and AI capabilities to navigate the complex challenges posed by AI, automation, and Cybercrime-as-a-Service.

RansomHouse Gang Streamlines VMware ESXi Attacks Using Latest MrAgent Tool

 

RansomHouse, a ransomware group known for its double extortion tactics, has developed a new tool named 'MrAgent' to facilitate the widespread deployment of its data encrypter on VMware ESXi hypervisors.

Since its emergence in December 2021, RansomHouse has been targeting large organizations, although it hasn't been as active as some other notorious ransomware groups. Nevertheless, it has been employing sophisticated methods to infiltrate systems and extort victims.

ESXi servers are a prime target for ransomware attacks because they host the virtual machines that hold valuable business data. Disrupting these servers can cause significant operational damage, impacting critical applications and services such as databases and email servers.

Researchers from Trellix and Northwave have identified a new binary associated with RansomHouse attacks, designed specifically to streamline the process of targeting ESXi systems. This tool, named MrAgent, automates the deployment of ransomware across multiple hypervisors simultaneously, compromising all managed virtual machines.

MrAgent is highly configurable, allowing attackers to customize ransomware deployment settings received from the command and control server. This includes tasks such as setting passwords, scheduling encryption events, and altering system messages to display ransom notices.

By disabling firewalls and terminating non-root SSH sessions, MrAgent aims to minimize detection and intervention by administrators while maximizing the impact of the attack on all reachable virtual machines.

Trellix has identified a Windows version of MrAgent, indicating RansomHouse's efforts to broaden the tool's reach and effectiveness across different platforms.

The automation of these attack steps underscores the attackers' determination to target large networks efficiently. Defenders must remain vigilant and implement robust security measures, including regular updates, access controls, network monitoring, and logging, to mitigate the threat posed by tools like MrAgent.
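To illustrate the monitoring and logging side of that advice, the sketch below scans hypervisor-style log lines for the behaviours described above (firewall disabled, SSH sessions terminated, mass VM power-offs). The log format and regular expressions are assumptions for demonstration only, not actual MrAgent indicators of compromise.

```python
# Illustrative log-scanning sketch; patterns are assumed examples, not real IoCs.
import re

SUSPICIOUS_PATTERNS = {
    "firewall disabled":  re.compile(r"firewall.*(disabled|stopped|ruleset flushed)", re.I),
    "ssh session killed": re.compile(r"ssh.*(session terminated|killed)", re.I),
    "mass vm power-off":  re.compile(r"power\s?off.*vm", re.I),
}

def scan_log(lines):
    """Return (line number, label, line) for every line matching a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(lines, 1):
        for label, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, label, line.strip()))
    return hits

if __name__ == "__main__":
    sample = [  # hypothetical syslog excerpts
        "2024-02-15T03:12:01 esxi-07: firewall ruleset flushed by user root",
        "2024-02-15T03:12:05 esxi-07: ssh session terminated (non-root)",
        "2024-02-15T03:13:40 esxi-07: power off requested for VM db-prod-02",
    ]
    for lineno, label, line in scan_log(sample):
        print(f"[ALERT] line {lineno}: {label}: {line}")
```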

GM Cruise Halts Driverless Operations

General Motors' Cruise unit has suspended all driverless operations following a recent ban in California, halting their ambitious plans for a nationwide robotaxi service.

The decision comes in response to a regulatory setback in California, a state known for its stringent rules regarding autonomous vehicle testing. The California Department of Motor Vehicles revoked Cruise's permit to operate its autonomous vehicles without a human safety driver on board, citing concerns about safety protocols and reporting procedures.

This move has forced GM Cruise to halt all of its driverless operations, effectively putting a pause on its plans to launch a commercial robotaxi service. The company had previously announced its intention to deploy a fleet of autonomous vehicles for ride-hailing purposes in San Francisco and other major cities.

The suspension of operations is a significant blow to GM Cruise, as it now faces a setback in the race to deploy fully autonomous vehicles for commercial use. Other companies in the autonomous vehicle space, including Waymo and Tesla, have been making strides in the development and deployment of their autonomous technologies.

The California ban highlights the challenges and complexities surrounding the regulation of autonomous vehicles. Striking the right balance between innovation and safety is crucial, and incidents or regulatory concerns can lead to significant delays in the deployment of this technology.

While GM Cruise has expressed its commitment to working closely with regulators to address their concerns, the current situation raises questions about the timeline for the widespread adoption of autonomous vehicles. It also emphasizes the need for a unified regulatory framework that can provide clear guidelines for the testing and deployment of autonomous technologies.

In the meantime, GM Cruise will need to reassess its strategy and potentially explore other avenues for testing and deploying its autonomous vehicles. The company has invested heavily in the development of this technology, and overcoming regulatory hurdles will be a crucial step in realizing its vision of a driverless future.

The halt to GM Cruise's driverless robotaxi operations is a clear reminder of the difficulties and unknowns associated with the advancement of autonomous car technology. The safe and effective use of this ground-breaking technology will depend on companies and regulators working together as the industry develops.

Alert: AI Sector's Energy Consumption Could Match That of the Netherlands

 

A recent study warns that the artificial intelligence (AI) industry could potentially consume as much energy as a country the size of the Netherlands by 2027. 

This surge is attributed to the rapid integration of AI-powered services by major tech companies, particularly since the emergence of ChatGPT last year. Unlike conventional applications, these AI-driven services demand considerably more power, significantly heightening the energy intensity of online activities.

Nonetheless, the study suggests that the environmental impact of AI might be less severe if its current growth rate were to slow down. Nevertheless, many experts, including the report's author, emphasize that such predictions are speculative due to the lack of sufficient data disclosure from tech firms, making accurate forecasts challenging.

Without a doubt, AI necessitates more robust hardware compared to traditional computing tasks. The study, conducted by Alex De Vries, a PhD candidate at the VU Amsterdam School of Business and Economics, is contingent on certain parameters remaining constant. These include the rate at which AI advances, the availability of AI chips, and the continuous operation of servers at full capacity.

De Vries notes that Nvidia, a chip designer, is estimated to supply approximately 95% of the required AI processing equipment. By estimating the quantity of these computers projected to be delivered by 2027, he approximates an annual energy consumption range for AI of between 85 and 134 terawatt-hours (TWh). At the higher end, this is roughly equivalent to the energy consumption of a small country such as the Netherlands.
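To show how such a figure can be assembled, here is a back-of-the-envelope sketch. The server count and per-server power draw are illustrative assumptions chosen to land near the low end of the published range; they are not De Vries's actual study parameters.

```python
# Back-of-the-envelope AI energy estimate; all input figures are assumptions.
servers = 1_500_000          # assumed AI servers delivered and in use by 2027
power_per_server_kw = 6.5    # assumed average draw per server, incl. overhead (kW)
hours_per_year = 24 * 365    # servers assumed to run at full capacity year-round

energy_twh = servers * power_per_server_kw * hours_per_year / 1e9   # kWh -> TWh
print(f"Estimated AI energy use: {energy_twh:.0f} TWh/year")
# ~85 TWh/year with these inputs, i.e. the low end of the 85-134 TWh range that
# the study compares to the annual electricity use of a country like the Netherlands.
```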

De Vries stresses that his findings underscore the importance of using AI only in cases where it is genuinely necessary. His peer-reviewed study has been published in the journal Joule.

AI systems, such as the sophisticated language models underpinning popular chatbots like OpenAI's ChatGPT and Google's Bard, require specialized computer warehouses known as data centers. 

Consequently, this equipment consumes more power and, like conventional setups, necessitates substantial water usage for cooling. The study did not incorporate the energy required for cooling, an aspect often omitted by major tech companies in their disclosures.

Despite this, the demand for AI-powered computers is surging, along with the energy required to maintain these servers at optimal temperatures. 

Notably, companies are showing a growing interest in housing AI equipment within data centers. Danny Quinn, CEO of Scottish data center firm DataVita, highlights the significant disparity in energy consumption between racks containing standard servers and those housing AI processors.

In its recent sustainability report, Microsoft, a company heavily investing in AI development, revealed a 34% surge in water consumption between 2021 and 2022. This amounted to 6.4 million cubic meters, roughly equivalent to 2,500 Olympic swimming pools.

Professor Kate Crawford, an authority on AI's environmental impact, underscores the monumental energy and water requirements of these high-powered AI systems. She emphasizes that these systems constitute a substantial extractive industry for the 21st century, with enormous implications for resource usage.

While AI's energy demands present a challenge, there are also hopes that AI can contribute to solving environmental problems. Google and American Airlines, for instance, have recently found that AI tools can reduce aircraft contrails, a contributor to global warming. 

Additionally, the U.S. government is investing millions in advancing nuclear fusion research, where AI could accelerate progress in achieving a limitless, green power supply. This year, a university academic reported a breakthrough in harnessing immense power through AI-driven prediction in an experiment, offering promise for future sustainable energy solutions.

Revolutionizing Everyday Life: The Transformative Potential of AI and Blockchain

 

Artificial intelligence (AI) and blockchain technology have emerged as two pivotal forces of innovation over the past decade, leaving a significant impact on diverse sectors like finance and supply chain management. The prospect of merging these technologies holds tremendous potential for unlocking even greater possibilities.

Although the integration of AI within the cryptocurrency realm is a relatively recent development, it demonstrates the promising potential for expansion. Forecasts suggest that the blockchain AI market could attain a valuation of $980 million by 2030.

The potential applications of AI within blockchain, explored below, reveal its capacity to bolster the crypto industry and facilitate its integration into mainstream finance.

Elevated Security and Fraud Detection

One domain where AI can play a crucial role is enhancing the security of blockchain transactions, resulting in more robust payment systems. Firstly, AI algorithms can scrutinize transaction data and patterns, preemptively identifying and preventing fraudulent activities on the blockchain.

Secondly, AI can leverage machine learning algorithms to reinforce transaction privacy. By analyzing substantial volumes of data, AI can uncover patterns indicative of potential data breaches or unauthorized account access. This enables businesses to proactively implement security measures, setting up automated alerts for suspicious behavior and safeguarding sensitive information in real time.
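As a rough illustration of this kind of pattern analysis, the sketch below trains an unsupervised anomaly detector on synthetic transaction features and flags the outliers. The features, data, and model choice (scikit-learn's IsolationForest) are assumptions for demonstration; the vendors mentioned below do not necessarily use this approach.

```python
# Synthetic example: flag transactions that deviate from normal behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per transaction: [amount, hops_to_flagged_address, sender_tx_per_hour]
normal = np.column_stack([
    rng.lognormal(mean=3.0, sigma=1.0, size=1000),   # typical amounts
    rng.integers(5, 20, size=1000),                  # far from known bad addresses
    rng.poisson(2, size=1000),                       # low sending rate
])
suspicious = np.array([
    [50_000.0, 1, 40],   # huge amount, one hop from a flagged address, bursty sender
    [30_000.0, 2, 35],
])
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)          # -1 = anomaly, 1 = normal

print("Transactions flagged for review:", np.where(labels == -1)[0])
```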

Instances of AI integration are already evident. Scorechain, a crypto-tracking platform, harnessed AI to enhance anti-money laundering transaction monitoring and fortify fraud prediction capabilities. CipherTrace, a Mastercard-backed blockchain security initiative, also adopted AI to assess risk profiles of crypto merchants based on on-chain data.

In essence, the amalgamation of AI algorithms and blockchain technology fosters a more dependable and trustworthy operational ecosystem for organizations.

Efficiency in Data Analysis and Management

AI can revolutionize data collection and analysis for enterprises. Blockchain, with its transparent and immutable information access, provides an efficient framework for swiftly acquiring accurate data. Here, AI can amplify this advantage by streamlining the data analysis process. AI-powered algorithms can rapidly process blockchain network data, identifying nuanced patterns that human analysts might overlook. The result is actionable insights to support business functions, accompanied by a significant reduction in manual processes, thereby optimizing operational efficiency.

Additionally, AI's integration can streamline supply chain management and financial transactions, automating tasks like invoicing and payment processing, eliminating intermediaries, and enhancing efficiency. AI can also ensure the authenticity and transparency of products on the blockchain, providing a shared record accessible to all network participants.

A case in point is IBM's blockchain-based platform introduced in 2020 for tracking food manufacturing and supply chain logistics, facilitating collaborative tracking and accounting among European manufacturers, distributors, and retailers.

Strengthening Decentralized Finance (DeFi)

The synergy of AI and blockchain can empower decentralized finance and Web3 by facilitating the creation of improved decentralized marketplaces. While blockchain's smart contracts automate processes and eliminate intermediaries, creating these contracts can be complex. AI algorithms, like ChatGPT, employ natural language processing to simplify smart contract creation, reducing errors, enhancing coding efficiency, and broadening access for new developers.

Moreover, AI can enhance user experiences in Web3 marketplaces by tailoring recommendations based on user search patterns. AI-powered chatbots and virtual assistants can enhance customer service and transaction facilitation, while blockchain technology ensures product authenticity.

AI's data analysis capabilities further contribute to identifying trends, predicting demand and supply patterns, and enhancing decision-making for Web3 marketplace participants.

Illustrating this integration is the example of Kering, a luxury goods company, which launched a marketplace combining AI-driven chatbot services with crypto payment options, enabling customers to use Ethereum for purchases.

Synergistic Future of AI and Blockchain

Though AI's adoption within the crypto sector is nascent, its potential applications are abundant. In DeFi and Web3, AI promises to enhance market segments and attract new users. Furthermore, coupling AI with blockchain technology offers significant potential for traditional organizations, enhancing business practices, user experiences, and decision-making.

In the upcoming months and years, the evolving collaboration between AI and blockchain is poised to yield further advancements, heralding a future of innovation and progress.

Designers Still Have an Opportunity to Get AI Right

 

As ChatGPT attracts an unprecedented 1.8 billion monthly visitors, the immense potential it offers to shape our future world is undeniable.

However, amidst the rush to develop and release new AI technologies, an important question remains largely unaddressed: What kind of world are we creating?

The competition among companies to be the first in the AI race often overshadows thoughtful considerations about potential risks and implications. Startups working on AI applications like GPT-3 have not adequately addressed critical issues such as data privacy, content moderation, and harmful biases in their design processes.

Real-world examples highlight the need for more responsible AI design. For instance, creating AI bots that reinforce harmful behaviors or replacing human expertise with AI without considering the consequences can lead to unintended harmful effects.

Addressing these problems requires a cultural shift in the AI industry. While some companies may intentionally create exploitative products, many well-intentioned developers lack the necessary education and tools to build ethical and safe AI. 

Therefore, the responsibility lies with all individuals involved in AI development, regardless of their role or level of authority.

Companies must foster a culture of accountability and recruit designers with a growth mindset who can foresee the consequences of their choices. We should move away from prioritizing speed and focus on our values, making choices that align with our beliefs and respect user rights and privacy.

Designers need to understand the societal impact of AI and its potential consequences on racial and gender profiling, misinformation dissemination, and mental health crises. AI education should encompass fields like sociology, linguistics, and political science to instill a deeper understanding of human behavior and societal structures.

By embracing a more thoughtful and values-driven approach to AI design, we can shape a world where AI technologies contribute positively to society, bridging the gap between technical advancements and human welfare.

FBI Alerts: Cybercriminals Exploiting Open-Source AI Programs with Ease

 

Unsurprisingly, criminals have been exploiting open-source generative AI programs for various malicious activities, including creating malware and conducting phishing attacks, as stated by the FBI.

In a recent call with journalists, the FBI highlighted how generative AI programs, highly popular in the tech industry, are also fueling cybercrime. Criminals are using these AI programs to refine and propagate scams, and even terrorists are consulting the technology to develop more powerful chemical attacks.

A senior FBI official stated that as AI models become more widely adopted and accessible, these cybercriminal trends are expected to increase.

Although the FBI did not disclose the specific AI models used by criminals, it was revealed that hackers prefer free, customizable open-source models and pay for private hacker-developed AI programs circulating in the cybercriminal underworld.

Seasoned cybercriminals are exploiting AI technology to create new malware attacks and improve their delivery methods. For example, they use AI-generated websites as phishing pages to distribute malicious code secretly. The technology also helps hackers develop polymorphic malware that can bypass antivirus software.

Last month, the FBI issued a warning about scammers using AI image generators to create sexually themed deepfakes to extort money from victims. The extent of these AI-powered schemes remains unclear, but the majority of cases reported to the FBI involve criminal actors utilizing AI models to enhance traditional frauds, including scams targeting loved ones and the elderly through AI voice-cloning technology in phone calls.

In response, the FBI has engaged in constructive discussions with AI companies to address the issue. One proposed solution is using a "watermarking" system to identify AI-generated content and images more easily.
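As a toy illustration of the watermarking idea, the sketch below hides a short provenance tag in a passage of text using zero-width characters and recovers it later. This is purely for illustration: the schemes being discussed for generative AI are statistical (for example, biasing token choices during generation) and are designed to be far harder to strip out.

```python
# Toy provenance watermark using zero-width characters; illustrative only.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed(text: str, tag: str) -> str:
    """Append the tag as an invisible bit string of zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    """Recover the hidden tag, ignoring all visible characters."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8)).decode("utf-8", "replace")

marked = embed("This paragraph was produced by a generative model.", "AI:model-x")
print(extract(marked))   # -> "AI:model-x", although the visible text looks unchanged
```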

The senior official emphasized that the FBI considers this AI threat a national priority, as it affects all programs within the agency and is a recent development in the cybercrime landscape.

'Verified human': Worldcoin Users Crowd for Iris Scans

 

The Worldcoin project, founded by Sam Altman, CEO of OpenAI (the developer of ChatGPT), is offering people around the world the opportunity to get a digital ID and free cryptocurrency in exchange for getting their eyeballs scanned. 

Despite concerns raised by privacy advocates and data regulators, the project aims to establish a new "identity and financial network" where users can prove their human identity online.

The project was launched recently, and participants in countries like Britain, Japan, and India have already undergone eyeball scans. In Tokyo, people queued up in front of a shiny silver globe to have their irises scanned, receiving 25 free Worldcoin tokens as verified users.

Privacy concerns have been raised due to the data-collection process, with some seeing it as a potential privacy nightmare. The Electronic Privacy Information Center, a US privacy campaigner, expressed worries about the extent of data collection. Worldcoin claims its project is "completely private," allowing users to delete their biometric data or store it encrypted.

Worldcoin representatives have been promoting the project, offering free t-shirts and stickers with the words "verified human" at a co-working space in London. Users were lured by the promise of financial gains from the cryptocurrency, which was trading at around $2.30 on Binance, the world's largest exchange.

Some participants, like Christian, a graphic designer, joined out of curiosity and to witness advancements in artificial intelligence and crypto. Despite privacy concerns, many participants did not read Worldcoin's privacy policy and were enticed by the prospect of free tokens.

Critics, such as UK privacy campaign group Big Brother Watch, argue that digital ID systems increase state and corporate control and may not deliver the benefits they promise. Regulators, including Britain's data regulator, are investigating the UK launch of Worldcoin.

In India, operators approached people at a mall in Bengaluru to sign them up for Worldcoin, and most individuals interviewed expressed little concern about privacy, focusing more on the opportunity to get free coins.

Singapore Explores Generative AI Use Cases Through Sandbox Options

 

Two sandboxes have been introduced in Singapore to facilitate the development and testing of generative artificial intelligence (AI) applications for government agencies and businesses. 

These sandboxes will be powered by Google Cloud's generative AI toolsets, including the Vertex AI platform, low-code developer tools, and graphics processing units (GPUs). Google will also provide pre-trained generative AI models, including its PaLM language model, AI models from partners, and open-source alternatives.

The initiative is a result of a partnership agreement between the Singapore government and Google Cloud to establish an AI Government Cloud Cluster. The purpose of this cloud platform is to promote AI adoption in the public sector.

The two sandboxes will be provided at no cost for three months and will be available for up to 100 use cases or organizations. Selection for access to the sandboxes will occur through a series of workshops over 100 days, where participants will receive training from Google Cloud engineers to identify suitable use cases for generative AI.

The government sandbox will be administered by the Smart Nation and Digital Government Office (SNDGO), while the sandbox for local businesses will be managed by Digital Industry Singapore (DISG).

Singapore has been actively pursuing its national AI strategy since 2019, with over 4,000 researchers currently contributing to AI research. However, the challenge lies in translating this research into practical applications across different industries. The introduction of these sandboxes aims to address potential issues related to data, security, and responsible AI implementation.

Karan Bajwa, Google Cloud's Asia-Pacific vice president, emphasized the need for a different approach when deploying generative AI within organizations, requiring robust governance and data security. It is crucial to calibrate and fine-tune AI models for specific industries to ensure optimal performance and cost-effectiveness.

Several organizations, including the Ministry of Manpower, GovTech, American Express, PropertyGuru Group, and Tokopedia, have already signed up to participate in the sandbox initiatives.

GovTech, the public sector's CIO office, is leveraging generative AI for its Virtual Intelligent Chat Assistant (VICA) platform. By using generative AI, GovTech has reduced training hours significantly and achieved more natural responses for its chatbots.

During a panel discussion at the launch, Jimmy Ng, CIO and head of group technology and operations at DBS Bank, emphasized the importance of training AI models with quality data to mitigate risks associated with large language models learning from publicly available data.

Overall, the introduction of these sandboxes is seen as a positive step to foster responsible AI development and application in Singapore's public and private sectors.

With More Jobs Turning Automated, Protecting Jobs Turns Challenging


With artificial intelligence rapidly being incorporated into almost every kind of job, protecting jobs in Britain now seems like a challenge, according to the new head of the state-authorized AI taskforce.

According to Ian Hogarth, a tech entrepreneur and AI investor, it was "inevitable" that more jobs would become increasingly automated.

He further urged businesses and individuals to reconsider how they work. "There will be winners or losers on a global basis in terms of where the jobs are as a result of AI," he said.

There have already been numerous reports of jobs losing their 'manual' status, as companies increasingly adopt AI tools rather than recruiting individuals. One recent instance was BT stating it will shed around 10,000 staff by the end of the decade as a result of the tech.

However, some experts believe that these advancements will also result in the emergence of new jobs that do not exist currently, much as happened when the internet was newly introduced.

Validating this point is a report released by Goldman Sachs earlier this year, which noted that 60% of the jobs we know today did not exist in 1940.

What are the Benefits?

According to Hogarth, the aim of the newly formed taskforce is to help the government "to better understand the risks associated with these frontier AI systems" and to hold the companies accountable.

He was concerned about the possibility of AI causing harm, such as wrongful detention if applied to law enforcement, or the creation of dangerous software that enables cybercrime.

He said that expert warnings of AI's potential to become an existential threat should not be dismissed, even though the question divides opinion within the AI community itself.

However, he did not dismiss the benefits that come with these technologies, one of them being advancements in the healthcare sector. AI tools are now set to identify new antibiotics, help patients with brain damage regain movement, and aid medical professionals by spotting early symptoms of disease.

Mr Hogarth has himself developed a tool that could spot signs of breast cancer in a scan.

To monitor AI safety research, the group he will head has been handed an initial £100 million. Although he declined to reveal how he planned to use the funds, he did declare that he would know he had succeeded in the job if "the average person in the UK starts to feel a benefit from AI."

What are the Challenges?

The UK's Prime Minister Rishi Sunak has set AI as a key priority, wanting the UK to become a global hub for the sector.

Following this, OpenAI, the company behind the famous chatbot ChatGPT, is set to build its first international office in London, and data firm Palantir has confirmed that it will open its headquarters in London.

But for the UK to establish itself as a major force in this profitable and constantly growing sector of technology, there are a number of obstacles it will have to tackle.

One instance comes from an AI start-up run by Emma McClenaghan and her partner Matt in Northern Ireland. They have created an AI tool named 'Wally,' which generates websites, and they aspire to turn Wally into a more general digital assistant.

While the company – Gensys Engine – has received several awards and recognition, it still struggles to get the specialised processors, or GPUs (graphics processing units), that it needs to continue developing the product.

In regard to this, Emma says, "I think there is a lack of hardware access for start-ups, and a lack of expertise and lack of funding."

She said they waited five months for a grant to buy a single GPU - at a time when in the US Elon Musk was reported to have purchased 10,000.

"That's the difference between us and them because it's going to take us, you know, four to seven days to train a model and if he's [able to] do it in minutes, then you know, we're never going to catch up," she added.

In an email exchange, McClenaghan noted that she thinks the best outcome for her company would be acquisition by a US tech giant, an aspiration commonly heard from UK startups.

This marks another challenge for the UK: to refocus on keeping prosperous companies in the UK and fostering their expansion.

5 AI Tools That May Save Your Team’s Working Hours


In today’s world of ‘everything digital,’ integrating Artificial Intelligence tools in a business is not just a mere trend, but a necessity. AI is altering how we work and interact with technology in the rapidly transforming digital world. AI-powered solutions are set to improve several corporate functions, from customer relationship management to design and automation.

Here, we discuss some of these AI-powered tools that have proved valuable for growing a business:

1. Folk

Folk is a CRM (customer relationship management) platform built to work for its users through an AI-powered setup. Its prominent features include being lightweight and customizable. Its automation capabilities free users from manual tasks, allowing them to focus on the main goal: building customer and business relationships.

Folk's AI-based smart outreach feature tracks results efficiently, allowing users to know when and how to reach out.

2. Sembly AI

Sembly AI is a SaaS platform that deploys algorithms to record and analyse meetings and turn the findings into useful, actionable information.

3. Cape Privacy 

Cape Privacy introduced its AI tool, CapeChat, a privacy-focused platform powered by ChatGPT.

CapeChat is used to encrypt and redact sensitive data, in order to ensure user privacy while using AI language models.

Cape also provides secure enclaves for processing sensitive data and protecting intellectual property.

4. Drafthorse AI

Drafthorse AI is a programmatic SEO writer used by brands and niche site owners. With its capacity to support over 100 languages, Drafthorse AI allows one to draft SEO-optimized articles in minutes.

It is an easy-to-use AI tool with a user-friendly interface that allows users to import target keywords, generate content, and export it in various formats.

5. Uizard

Uizard includes Autodesigner, an AI-based designing and ideation tool that helps users to generate creative mobile apps, websites, and more.

A user with minimal or no design experience can easily produce UI designs, as the tool generates mockups from text prompts, scans screenshots, and offers drag-and-drop UI components.

With the help of this tool, users may quickly transition from an idea to a clickable prototype.  

Risks and Best Practices: Navigating Privacy Concerns When Interacting with AI Chatbots

 

The use of artificial intelligence chatbots has become increasingly popular. Although these chatbots possess impressive capabilities, it is important to recognize that they are not without flaws. There are inherent risks associated with engaging with AI chatbots, including concerns about privacy and the potential for cyber-attacks. Caution should be exercised when interacting with these chatbots.

To understand the potential dangers of sharing information with AI chatbots, it is essential to explore the risks involved. Privacy risks and vulnerabilities associated with AI chatbots raise significant security concerns for users. Surprisingly, chat companions such as ChatGPT, Bard, Bing AI, and others can inadvertently expose personal information online. These chatbots rely on AI language models that derive insights from user data.

For instance, Google's chatbot, Bard, explicitly states on its FAQ page that it collects and uses conversation data to train its model. Similarly, ChatGPT also has privacy issues as it retains chat records for model improvement, although it provides an opt-out option.

Storing data on servers makes AI chatbots vulnerable to hacking attempts. These servers contain valuable information that cybercriminals can exploit in various ways. They can breach the servers, steal the data, and sell it on dark web marketplaces. Additionally, hackers can leverage this data to crack passwords and gain unauthorized access to devices.

Furthermore, the data generated from interactions with AI chatbots is not restricted to the respective companies alone. While these companies claim that the data is not sold for advertising or marketing purposes, it is shared with certain third parties for system maintenance.

OpenAI, the organization behind ChatGPT, admits to sharing data with "a select group of trusted service providers" and allowing some "authorized OpenAI personnel" to access the data. These practices raise additional security concerns surrounding AI chatbot interactions, as critics argue that generative AI security concerns may worsen.

Therefore, it is crucial to safeguard personal information when interacting with AI chatbots to maintain privacy.

To ensure privacy and security, it is important to follow best practices when interacting with AI chatbots:

1. Avoid sharing financial details: Sharing financial information with AI chatbots can expose it to potential cybercriminals. Limit interactions to general information and broad questions. For personalized financial advice, consult a licensed financial advisor.

2. Be cautious with personal and intimate thoughts: AI chatbots lack real-world knowledge and may provide generic responses to mental health-related queries. Sharing personal thoughts with them can compromise privacy. Use AI chatbots as tools for general information and support, but consult a qualified mental health professional for personalized advice.

3. Refrain from sharing confidential work-related information: Sharing confidential work information with AI chatbots can lead to unintended disclosure. Exercise caution when sharing sensitive code or work-related details to protect privacy and prevent data breaches.

4. Never share passwords: Sharing passwords with AI chatbots can jeopardize privacy and expose personal information to hackers. Protect login credentials to maintain online security.

5. Avoid sharing residential details and other personal data: Personal Identification Information (PII) should not be shared with AI chatbots. Familiarize yourself with chatbot privacy policies, avoid questions that reveal personal information, and be cautious about sharing medical information or using AI chatbots on social platforms.

In conclusion, while AI chatbots offer significant advancements, they also come with privacy risks. Protecting data by controlling shared information is crucial when engaging with AI chatbots. Adhering to best practices mitigates potential risks and ensures privacy.

Innovative AI System Trained to Identify Recyclable Waste

 

According to the World Bank, approximately 2.24 billion tonnes of solid waste were generated in 2020, with projections indicating a 73% increase to 3.88 billion tonnes by 2050.

Plastic waste is a significant concern, with research from the Universities of Georgia and California revealing that over 8.3 billion tonnes of plastic waste was produced between the 1950s and 2015.

Training AI systems to recognize and classify various forms of rubbish, such as crumpled and dirty items like a discarded Coke bottle, remains a challenging task due to the complexity of waste conditions.

Mikela Druckman, the founder of Greyparrot, a UK start-up focused on waste analysis, is well aware of these staggering statistics. Greyparrot utilizes AI technology and cameras to analyze waste processing and recycling facilities, monitoring around 50 sites in Europe and tracking 32 billion waste objects per year.
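For a sense of the computer-vision building block behind a system like this, here is a minimal sketch: a pretrained image backbone with its classification head swapped for a handful of waste categories. The category list is an assumption, and the new head would need fine-tuning on labelled conveyor-belt images before its predictions mean anything; this is not Greyparrot's actual model.

```python
# Minimal waste-classification sketch; categories and training are assumed.
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

WASTE_CLASSES = ["PET bottle", "aluminium can", "cardboard", "film plastic", "other"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(WASTE_CLASSES))  # new classification head
model.eval()   # in practice, fine-tune on labelled facility imagery first

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    """Return the predicted waste category for one conveyor-belt frame."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)           # shape [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
    return WASTE_CLASSES[int(logits.argmax(dim=1))]

# classify("frame_000123.jpg")  # e.g. -> "PET bottle" once the head is fine-tuned
```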

"It is allowing regulators to have a much better understanding of what's happening with the material, what materials are problematic, and it is also influencing packaging design," says Ms Druckman.

"We talk about climate change and waste management as separate things, but actually they are interlinked because most of the reasons why we are using resources is because we're not actually recovering them.

"If we had stricter rules that change the way we consume, and how we design packaging, that has a very big impact on the value chain and how we are using resource."

Troy Swope, CEO of Footprint, is dedicated to developing better packaging solutions and has collaborated with supermarkets and companies like Gillette to replace plastic trays with plant-based fiber alternatives.

Swope criticizes the "myth of recycling" in a blog post, arguing that single-use plastic is more likely to end up in landfills than to be recycled. He advocates for reducing dependence on plastic altogether to resolve the plastic crisis.

"It's less likely than ever that their discarded single-use plastic ends up anywhere but a landfill," wrote Mr Swope. "The only way out of the plastics crisis is to stop depending on it in the first place."
 
So-called greenwashing is a big problem, says Ms Druckman. "We've seen a lot of claims about eco or green packaging, but sometimes they are not backed up with real fact, and can be very confusing for the consumer."

Polytag, a UK-based company, tackles this issue by applying ultraviolet (UV) tags to plastic bottles, enabling verification of recycling through a cloud-based app. Polytag has collaborated with UK retailers Co-Op and Ocado to provide transparency and accurate recycling data.

In an effort to promote recycling and encourage participation, the UK government, along with administrations in Wales and Northern Ireland, plans to introduce a deposit return scheme in 2025. This scheme will involve "reverse vending machines" where people can deposit used plastic bottles and metal cans in exchange for a monetary reward.

However, the challenge of finding eco-friendly waste disposal methods continues to persist, as new issues arise each year. The rising popularity of e-cigarettes and vapes has resulted in a significant amount of electronic waste that is difficult to recycle.

Disposable single-use vapes, composed of various materials including plastics, metals, and lithium batteries, pose a challenge to the circular economy. Research suggests that 1.3 million vapes are discarded per week in the UK alone, leading to a substantial amount of lithium ending up in landfills.

Ray Parmenter, head of policy and technical at the Chartered Institute of Waste Management, emphasizes the importance of maximizing the use of critical raw materials like lithium.

"The way we get these critical raw materials like lithium is from deep mines - not the easiest places to get to. So once we've got it out, we need to make the most of it," says Mr Parmenter.

Mikela Druckman highlights the need for a shift in thinking: "It doesn't make economic sense, it doesn't make any sense. Rather than ask how we recycle them, ask why we have single-use vapes in the first place?"

In conclusion, addressing the growing waste crisis requires collaborative efforts from industries, policymakers, and consumers, with a focus on sustainable packaging, improved recycling practices, and reduced consumption.

Google's 6 Essential Steps to Mitigate Risks in Your A.I. System

 

Generative A.I. has the potential to bring about a revolutionary transformation in businesses of all sizes and types. However, the implementation of this technology also carries significant risks. It is crucial to ensure the reliability of the A.I. system and protect it from potential hacks and breaches. 

The main challenge lies in the fact that A.I. technology is still relatively young, and there are no widely accepted standards for constructing, deploying, and maintaining these complex systems.

To address this issue and promote standardized security measures for A.I., Google has introduced a conceptual framework called SAIF (Secure AI Framework).

In a blog post, Royal Hansen, Google's vice president of engineering for privacy, safety, and security, and Phil Venables, Google Cloud's chief information security officer, emphasized the need for both public and private sectors to adopt such a framework.

They highlighted the risks associated with confidential information extraction, hackers manipulating training data to introduce faulty information, and even theft of the A.I. system itself. Google's framework comprises six core elements aimed at safeguarding businesses that utilize A.I. technology. 

Here are the core elements of Google's A.I. framework, and how they can help safeguard your business:

  • Establish a strong foundation:
First and foremost, assess your existing digital infrastructure's standard protections as a business owner. However, bear in mind that these measures may need to be adapted to effectively counter A.I.-based security risks. After evaluating how your current controls align with your A.I. use case, develop a plan to address any identified gaps.

  • Enhance threat detection capabilities:
Google emphasizes the importance of swift response to cyberattacks on your A.I. system. One crucial aspect to focus on is the establishment of robust content safety policies. Generative A.I. has the ability to generate harmful content such as imagery, audio, and video. By implementing and enforcing content policies, you can safeguard your system from malicious usage and protect your brand simultaneously.

  • Automate your defenses:
To protect your system from threats like data breaches, malicious content creation, and A.I. bias, Google suggests deploying automated solutions such as data encryption, access control, and automatic auditing. These automated defenses are powerful and often eliminate the need for manual tasks, such as reverse-engineering malware binaries. However, human intervention is still necessary to exercise judgment in critical decisions regarding threat identification and response strategies.

  • Maintain a consistent strategy:
Once you integrate A.I. into your business model, establish a process to periodically review its usage within your organization. In case you observe different controls or frameworks across different departments, consider implementing a unified approach. Fragmented controls increase complexity, result in redundant efforts, and raise costs.

  • Be adaptable:
Generative A.I. is a rapidly evolving field, with new advancements occurring daily. Consequently, threats are constantly evolving as well. Conducting "red team" exercises, which involve ethical hackers attempting to exploit system vulnerabilities, can help you identify and address weaknesses in your system before they are exploited by malicious actors.

  • Determine risk tolerance:
Before implementing any A.I.-powered solutions, it is essential to determine your specific use case and the level of risk you are willing to accept. Armed with this information, you can develop a process to evaluate different third-party machine learning models. This assessment will help you match each model to your intended use case while considering the associated level of risk.

Overall, while generative A.I. holds enormous potential for businesses, it is crucial to address the security challenges associated with its implementation. Google's Secure AI Framework offers a comprehensive approach to mitigate risks and protect businesses from potential threats. By adhering to the core elements of this framework, businesses can safeguard their A.I. systems and fully leverage the benefits of this transformative technology.

3 Key Reasons SaaS Security is Essential for Secure AI Adoption

 

The adoption of AI tools is revolutionizing organizational operations, providing numerous advantages such as increased productivity and better decision-making. OpenAI's ChatGPT, along with other generative AI tools like DALL·E and Bard, has gained significant popularity, attracting approximately 100 million users worldwide. The generative AI market is projected to surpass $22 billion by 2025, highlighting the growing reliance on AI technologies.

However, as AI adoption accelerates, security professionals in organizations have valid concerns regarding the usage and permissions of AI applications within their infrastructure. They raise important questions about the identity of users and their purposes, access to company data, shared information, and compliance implications.

Understanding the usage and access of AI applications is crucial for several reasons. Firstly, it helps assess potential risks and enables organizations to protect against threats effectively. Without knowing which applications are in use, security teams cannot evaluate and address potential vulnerabilities. Each AI tool represents a potential attack surface that needs to be considered, as malicious actors can exploit AI applications for lateral movement within the organization. Basic application discovery is an essential step towards securing AI usage and can be facilitated using free SSPM tools.

Additionally, knowing which AI applications are legitimate helps prevent the inadvertent use of fake or malicious applications. Threat actors often create counterfeit versions of popular AI tools to deceive employees and gain unauthorized access to sensitive data. Educating employees about legitimate AI applications minimizes the risks associated with these fraudulent imitations.

Secondly, identifying the permissions granted to AI applications allows organizations to implement robust security measures. Different AI tools may have varying security requirements and risks. By understanding the permissions granted and assessing associated risks, security professionals can tailor security protocols accordingly. This ensures the protection of sensitive data and prevents excessive permissions.
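A minimal sketch of that permission-review step might look like the following, assuming a hypothetical app inventory, scope names, and policy; real SSPM tools pull this information from each SaaS platform's admin APIs.

```python
# Hypothetical inventory of discovered AI apps and the OAuth scopes they hold.
discovered_ai_apps = [
    {"name": "ChatGPT plugin",  "scopes": {"calendar.read", "files.read"}},
    {"name": "UnknownAI Notes", "scopes": {"files.write", "mail.read", "admin.directory"}},
]

HIGH_RISK_SCOPES = {"files.write", "mail.read", "admin.directory"}  # assumed policy
APPROVED_APPS = {"ChatGPT plugin"}                                  # assumed allow-list

for app in discovered_ai_apps:
    issues = []
    if app["name"] not in APPROVED_APPS:
        issues.append("not on the approved AI application list")
    risky = app["scopes"] & HIGH_RISK_SCOPES
    if risky:
        issues.append("high-risk scopes granted: " + ", ".join(sorted(risky)))
    if issues:
        print(f"REVIEW {app['name']}: " + "; ".join(issues))
```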

Lastly, understanding AI application usage helps organizations effectively manage their SaaS ecosystem. It provides insights into employee behavior, identifies potential security gaps, and enables proactive measures to mitigate risks. Monitoring for unusual AI onboarding, inconsistent usage, and revoking access to unauthorized AI applications are security steps that can be taken using available tools. Effective management of the SaaS ecosystem also ensures compliance with data privacy regulations and the adequate protection of shared data.

In conclusion, while AI applications offer significant benefits, they also introduce security challenges that must be addressed. Security professionals should leverage existing SaaS discovery capabilities and SaaS Security Posture Management (SSPM) solutions to answer fundamental questions about AI usage, users, and permissions. By utilizing these tools, organizations can save valuable time and ensure secure AI implementation.

Is ChatGPT Capable of Substituting IT Network Engineers? Here’s All You Need to Know

 

Companies are increasingly adopting ChatGPT, a creation by OpenAI, to enhance productivity for both the company and its employees. This innovative tool has gained significant popularity worldwide, with various sectors and companies utilizing it for tasks such as writing, composing emails, drafting messages, and other complex assignments.

Modern IT networks are intricate systems comprising firewalls, switches, routers, servers, workstations, and other valuable devices. As companies operate on cloud hybrids, they are constantly exposed to threats from malicious actors.

Network engineers are responsible for managing these complex networks and implementing technical solutions. While ChatGPT can be a valuable tool when used correctly, there are concerns among engineers regarding how it may impact their roles. However, there are three key areas where ChatGPT can provide valuable assistance to engineers.
  • Configuration Management: ChatGPT has demonstrated its capabilities in generating example configurations for different network devices like Cisco routers and Juniper switches, showing familiarity with vendor-specific syntax. However, it is crucial to carefully monitor the accuracy of the configurations it generates (a minimal sketch of this workflow follows the list).
  • Troubleshooting: ChatGPT has shown an impressive understanding of network engineering concepts, such as the Spanning Tree Protocol (STP), as observed through its ability to answer real-world questions posed by network engineers. Nonetheless, for more complex issues, networking professionals are still needed, and ChatGPT cannot fully replace them.
  • Automating Documentation: While ChatGPT initially claimed to provide networking diagrams, it later admitted its limitations in generating graphical representations. This area remains a significant challenge for ChatGPT, as it is primarily a text-based tool. However, other AI applications are capable of generating images, suggesting that producing a usable network diagram could be achievable in the future.
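As a rough sketch of the configuration-management workflow above, and of the accuracy check it requires, the snippet below asks a model for a configuration and then verifies that key lines are present. The prompt, model name, and expected lines are illustrative assumptions; any generated configuration still needs review and lab testing by an engineer before it goes near production devices.

```python
# Hedged sketch: generate a config with an LLM, then sanity-check it.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Generate a Cisco IOS configuration for interface GigabitEthernet0/1: "
    "IP 10.0.10.1/24, description 'uplink to core', and the port enabled."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name; substitute whatever is available
    messages=[{"role": "user", "content": prompt}],
)
config = response.choices[0].message.content

# Never trust the output blindly: check that the essentials are present.
expected_lines = [
    "interface GigabitEthernet0/1",
    "ip address 10.0.10.1 255.255.255.0",
    "no shutdown",
]
missing = [line for line in expected_lines if line not in config]
print(config)
print("Missing expected lines:", missing or "none -- still requires human review")
```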
Throughout the research, several important considerations emerged, including ensuring accuracy and consistency, integrating with existing systems and processes, and handling edge cases and exceptions. These challenges are not unique to ChatGPT but are inherent in AI applications as a whole. Researchers have emphasized the need to differentiate between coherent text generation and true functional competence or intelligence.

In conclusion, while ChatGPT possesses impressive capabilities, it cannot replace network engineers. Its true value lies in assisting professionals who understand how to use it effectively for specific tasks.

Challenges in Ensuring AI Safety: A Deeper Look into Complexity

 

Artificial intelligence (AI) is a subject that sparks divergent opinions among experts, who generally fall into one of two camps: those who believe it will significantly enhance our lives and those who fear it may lead to our demise. 
This is why the recent debate in the European Parliament regarding the regulation of AI holds great significance. The crucial question at hand is how to ensure the safety of AI technology. Below are five key challenges that lie ahead in this regard.

Establishing a common understanding of AI

After a two-year-long process, the European Parliament has finally crafted a definition of an AI system: software that can "for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with."

This week, it is voting on its Artificial Intelligence Act - the first legal rules of their kind on AI, which go beyond voluntary codes and require companies to comply.

Achieving global consensus

Former head of the UK Office for Artificial Intelligence, Sana Kharaghani, emphasizes that AI technology knows no boundaries and requires international collaboration.

"We do need to have international collaboration on this - I know it will be hard," she tells BBC News. "This is not a domestic matter. These technologies don't sit within the boundaries of one country."

Different regions have their own perspectives on AI regulation:

  • The European Union has proposed stringent regulations, including the classification of AI products based on their impact. For instance, a simple email spam filter would face lighter regulation compared to a cancer-detection tool.
  • The United Kingdom is integrating AI regulation within existing regulatory frameworks. In cases of AI discrimination, individuals could approach the Equalities Commission.
  • The United States has primarily relied on voluntary codes, but concerns have been raised about the effectiveness of such measures, as acknowledged in a recent AI committee hearing.
  • China intends to require companies to notify users whenever an AI algorithm is employed.

Balancing innovation and safety

Striking a balance between encouraging innovation and ensuring the safety of AI poses a significant challenge. Regulators must avoid stifling progress while addressing potential risks associated with AI deployment.

Accountability and transparency

"If people trust it, then they'll use it," says Jean-Marc Leclerc, head of EU government and regulatory affairs at IBM.

Holding AI developers and users accountable for the actions and decisions made by AI systems remains a crucial challenge. The ability to trace and explain the reasoning behind AI-generated outputs is essential for building trust and ensuring ethical practices.

Addressing biases and discrimination

AI systems have shown the potential to perpetuate biases and discriminate against certain individuals or groups. Overcoming this challenge involves developing mechanisms to detect and mitigate bias in AI algorithms, as well as establishing guidelines to ensure fair and unbiased use of AI technology.

As the European Parliament prepares to vote on the Artificial Intelligence Act, the first set of legal rules specifically designed for AI, the outcome will significantly shape the future of AI regulation in Europe and possibly serve as a reference point for global discussions on the subject.

Determining Responsibility for Rule-Writing

Up until now, the regulation of AI has primarily been left to the AI industry itself. While major corporations claim to support government oversight to mitigate potential risks, concerns arise regarding their priorities. Will profit take precedence over people's well-being if they are heavily involved in shaping the regulations?

It's highly likely that these companies want to exert influence on the lawmakers responsible for establishing the rules. Lastminute.com founder Baroness Lane-Fox says it is crucial to listen not just to corporations: "We must involve civil society, academia, people who are affected by these different models and transformations," she says.

Acting quickly

Microsoft, which has invested heavily in OpenAI, the company behind ChatGPT, aims to streamline work processes by eliminating tedious tasks. While ChatGPT can generate text that closely resembles human writing, OpenAI CEO Sam Altman emphasizes that it is a tool, not an autonomous entity.

Chatbots are designed to enhance productivity for workers and can serve as valuable assistants in various industries. However, some sectors have already experienced job losses due to AI implementation. BT, for instance, recently announced that 10,000 jobs would be replaced by AI.

ChatGPT became publicly available just over six months ago and has rapidly expanded its capabilities. It can now write essays, plan vacations, and even help individuals prepare for professional exams. The progress of these large-scale language models has been astonishing, prompting warnings from renowned figures like Geoffrey Hinton and Prof Yoshua Bengio, considered the "godfathers" of AI, about the immense potential for harm.

The Artificial Intelligence Act is not expected to take effect until at least 2025, which EU technology chief Margrethe Vestager says is far too long to wait. In response, she is working with the United States on an interim voluntary code for the AI sector, which could be ready within weeks.

Research: Generative AI Can Save Marketing Professionals 5 Hours Weekly

 

According to a recent study conducted by Salesforce, marketing professionals are optimistic about the potential impact of generative AI. However, they are still in the process of investigating and learning about the most effective ways to use this technology while ensuring safety. 

Salesforce surveyed a diverse group of over 1,000 marketers from companies of various sizes and sectors in the U.S., U.K., and Australia. The survey revealed that 51% of the respondents are currently utilizing generative AI. In order to successfully adopt and implement generative AI in marketing, having skills related to generative AI and access to trusted first-party data are considered crucial. 

Furthermore, the study highlighted the importance of human oversight in the implementation of generative AI. This is because the current outputs generated by the technology can be inaccurate and potentially biased. Therefore, maintaining human involvement is necessary to ensure the accuracy and fairness of the results. 

Before delving into the details of the marketing survey findings, it is essential for business leaders to consider certain factors in order to fully leverage the capabilities of generative AI within their organizations. These factors include relying on trusted data, utilizing hybrid AI foundational models that can be automated, and implementing a unified platform with built-in security and governance measures to ensure the ethical and responsible use of AI technologies.

Here are the key findings from the research on the impact of generative AI in marketing:
  • Generative AI is set to have a significant influence on marketing. A majority of marketers (53%) consider it a game-changing technology, and 60% believe it will revolutionize their role. Currently, 51% of marketers are already utilizing or experimenting with generative AI in their work.
  • Marketers are leveraging generative AI to transform various aspects of their campaigns. Presently, 57% of marketers use generative AI to create groups or segments for marketing campaigns, while 55% employ it in crafting marketing campaigns and journey plans. Additionally, 54% utilize generative AI to personalize messaging content, 53% for copy testing and experimentation, and 53% for building and optimizing SEO strategies.
  • Generative AI adoption has led to increased productivity among marketers. They estimate that using generative AI saves them more than 5 hours per week, which equates to over a month per year, enabling them to focus on more meaningful tasks. According to the survey, 71% believe that generative AI will eliminate mundane work, 71% think it will allow them to concentrate on strategic initiatives, and 70% believe it will enhance their overall productivity.
  • However, there is a lack of proficiency in generative AI skills among marketers. The majority (66%) anticipate that generative AI will necessitate new professional skill sets. Almost half (43%) admit to not knowing how to extract maximum value from generative AI, and 39% are unsure how to safely utilize it in their work. Additionally, 34% feel ineffective in utilizing generative AI for marketing purposes.
  • Marketers have concerns regarding the accuracy and quality of generative AI-generated content. The top worries associated with generative AI in marketing include accuracy and quality (31%), trust (20%), skills (19%), and job safety (18%). A significant portion (73%) of marketers believe that generative AI lacks human contextual knowledge, 66% worry about potential biases in its outputs, and 76% express concerns about new security risks introduced by generative AI.

To successfully leverage generative AI, marketers emphasize the need for human oversight, relevant skills, and trustworthy customer data. The survey reveals that 63% of marketers consider trusted customer data crucial for the effective utilization of generative AI. Furthermore, 66% highlight the importance of human oversight in maintaining brand voice when using generative AI. This indicates the necessity for appropriate training, as 54% of marketers believe generative AI training programs are important for successful adoption. Lastly, 72% of marketers expect their employers to provide them with opportunities to learn how to use generative AI effectively.

Marketing organizations can leverage generative AI to enhance customer experiences and streamline operations. Companies can automate processes and implement intelligent workflows by drawing on reliable customer data and using pre-built, custom, or public AI models. Furthermore, comprehensive staff training ensures proficient handling of AI technologies. The integration of AI across various departments, including marketing, sales, customer service, and digital commerce, promises a complete transformation of the customer journey, enhancing interactions at every touchpoint.
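The survey does not prescribe any particular workflow, but as a hypothetical sketch of the segment-tailored copy drafting that respondents describe, a call to a generative model might look like the following. The model name, prompt wording, and segment data are assumptions for illustration, and, in keeping with the survey's emphasis on oversight, a human would review every draft before anything is sent.

```python
# Hypothetical sketch: drafting segment-specific marketing copy with a generative model.
# Model name, prompt, and segment data are illustrative assumptions only;
# requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

segments = {
    "lapsed_customers": "haven't purchased in 6+ months; respond well to discounts",
    "new_subscribers": "joined the mailing list this month; still exploring the brand",
}

def draft_email(segment_name: str, description: str) -> str:
    """Ask the model for a short email draft tailored to one audience segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a marketing assistant. Keep copy under 80 words "
                        "and flag any claims that need human fact-checking."},
            {"role": "user",
             "content": f"Draft a promotional email for the '{segment_name}' "
                        f"segment: {description}"},
        ],
    )
    return response.choices[0].message.content

for name, description in segments.items():
    print(f"--- {name} ---")
    print(draft_email(name, description))  # a human reviews before anything is sent
```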

This Cryptocurrency Tracking Firm is Employing AI to Identify Attackers

 

Elliptic, a cryptocurrency analytics firm, is incorporating artificial intelligence into its toolkit for analyzing blockchain transactions and risk identification. The company claims that by utilizing OpenAI's ChatGPT chatbot, it will be able to organize data faster and in larger quantities. It does, however, have some usage restrictions and does not employ ChatGPT plug-ins. 

"As an organization trusted by the world’s largest banks, regulators, financial institutions, governments, and law enforcers, it’s important to keep our intelligence and data secure," an Elliptic spokesperson told Decrypt. "That’s why we don’t use ChatGPT to create or modify data, search for intelligence, or monitor transactions.”

Elliptic, founded in 2013, provides blockchain analytics to institutions and law enforcement for tracking cybercriminals and for regulatory compliance related to Bitcoin. In May, for example, Elliptic reported that some Chinese shops selling the ingredients used to produce fentanyl accepted cryptocurrencies such as Bitcoin. US Senator Elizabeth Warren used the report to once again urge stronger regulation of cryptocurrencies.

According to the company, Elliptic will use ChatGPT to supplement its human-led data collection and organization procedures, improving both accuracy and scalability, with large language models (LLMs) helping to organize the data.

"Our employees leverage ChatGPT to enhance our datasets and insights," the spokesperson said. "We follow and adhere to an AI usage policy and have a robust model validation framework."

Elliptic is not concerned about AI "hallucinations" or incorrect information because it does not employ ChatGPT to generate information. AI hallucinations are occasions in which an AI produces unanticipated or false outcomes that are not supported by real-world facts.

AI chatbots such as ChatGPT have come under fire for confidently presenting false information about people, places, and events. OpenAI has stepped up efforts to address these so-called hallucinations by improving how its models are trained on mathematical problem-solving, calling it a vital step towards building aligned artificial general intelligence (AGI).

"Our customers come to us to know exactly their risk exposure," Elliptic CTO Jackson Hull said in a statement. "Integrating ChatGPT allows us to scale up our intelligence, giving our customers a view on risk they can't get anywhere else."


AI Revolutionizes Job Searching, Promotions, and Workplace Success in America

 

The impact of artificial intelligence on our careers is becoming more apparent, even if we are not fully aware of it. Various factors, such as advancements in human capital management systems, the adoption of data-driven practices in human resource and talent management, and a growing focus on addressing bias, are reshaping the way individuals are recruited, trained, promoted, and terminated. 

The current market for artificial intelligence and related systems is already substantial, generating a revenue of over US$38 billion in 2021. Undoubtedly, AI-powered software holds significant potential to rapidly progress and revolutionize how organizations approach strategic decision-making concerning their workforce.

Consider a scenario where you apply for a job in the near future. As you submit your well-crafted résumé through the company's website, you can't help but notice the striking resemblance between the platform and others you've used in the past for job applications. After saving your résumé, you are then required to provide demographic information and fill in numerous fields with the same data from your résumé. Finally, you hit the "submit" button, hoping for a follow-up email from a human.

At this point, your data becomes part of the company's human capital management system. Nowadays, only a handful of companies actually examine résumés; instead, they focus on the information you enter into those small boxes to compare you with dozens or even hundreds of other candidates against the job requirements. Even if your résumé clearly demonstrates that you are the most qualified applicant, it's unlikely to catch the attention of the recruiter since their focus lies elsewhere.

Let's say you receive a call, ace the interview, and secure the job. Your information now enters a new stage within the company's database or HCM: active employee. Your performance ratings and other employment-related data will now be linked to your profile, providing more information for the HCM and human resources to monitor and evaluate.

Advancements in AI, technology, and HCMs enable HR to delve deeper into employee data. The insights gained help identify talented employees who could assume key leadership positions when others leave and guide decisions regarding promotions. This data can also reveal favoritism and bias in hiring and promotion processes.

As you continue in your role, your performance is continuously tracked and analyzed. This includes factors such as your performance ratings, feedback from your supervisor, and your participation in professional development activities. Accumulating a substantial amount of data about you and others over time allows HR to consider how employees can better contribute to the organization's growth.

For instance, HR may employ data to determine the likelihood of specific employees leaving and assess the impact of such losses.
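The article does not describe any particular vendor's method, but as a minimal, hypothetical sketch of the kind of attrition scoring such HR analytics can involve, a logistic regression over made-up employment features might look like this:

```python
# Minimal sketch of attrition-risk scoring on made-up HR data; the features,
# values, and model choice are illustrative assumptions, not any vendor's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: tenure_years, last_rating (1-5), pct_salary_growth, weekly_overtime_hours
X = np.array([
    [1.0, 3, 0.00, 10],
    [4.5, 4, 0.06,  2],
    [2.0, 2, 0.01,  8],
    [7.0, 5, 0.10,  1],
    [0.5, 3, 0.00, 12],
    [5.0, 4, 0.05,  3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = left within a year (hypothetical labels)

model = LogisticRegression().fit(X, y)

# Score a current employee; in practice HR would weigh this alongside human judgment.
employee = np.array([[1.5, 3, 0.01, 9]])
risk = model.predict_proba(employee)[0, 1]
print(f"Estimated attrition risk: {risk:.0%}")
```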

Popular platforms used on a daily basis already aggregate productivity data from sign-in to sign-off. Common Microsoft tools like Teams, Outlook, and SharePoint offer managers insights through workplace analytics. The Microsoft productivity score monitors overall platform usage.

Even the metrics and behaviors that define "good" or "bad" performance may undergo changes, relying less on subjective manager assessments. With the expansion of data, even professionals such as consultants, doctors, and marketers will be evaluated quantitatively and objectively. An investigation conducted by The New York Times in 2022 revealed that these systems, intended to enhance productivity and accountability, had the unintended consequence of damaging morale and instilling fear.

It is evident that American employees need to contemplate how their data is utilized, the narrative it portrays, and how it may shape their futures.

Not all companies have a Human Capital Management (HCM) system or possess advanced capabilities in utilizing talent data for decision-making. However, there is a growing number of companies that are becoming more knowledgeable in this area, and some have reached a remarkable level of advancement.  

While some researchers argue that AI could enhance fairness by eliminating implicit biases in hiring and promotions, many others see a potential danger in human-built AI merely repackaging existing issues. Amazon learned this lesson the hard way in 2018 when it had to abandon an AI system for sorting résumés, as it exhibited a bias in favor of male candidates for programming roles.

Furthermore, the increased collection and analysis of data can leave employees uncertain about their standing within the organization, while the organization itself may possess a clear view. It is crucial to comprehend how AI is reshaping the workplace and to demand transparency from your employer. These are some key points that employees should consider inquiring about during their next performance review:
  • Do you perceive me as a high-potential employee?
  • How does my performance compare to that of others?
  • Do you see me as a potential successor to your role or the roles of others?
Similar to the need to master traditional aspects of workplace culture, politics, and relationships, it is essential to learn how to navigate these platforms, understand the evaluation criteria being used, and take ownership of your career in a new, more data-driven manner.