Researchers Find ChatGPT’s Latest Bot Behaves Like Humans

 

A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT's popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

This study found that the most recent version of the chatbot, version 4, was indistinguishable from its human counterparts. When the bot's choices did deviate from the most common human behaviours, it tended to be more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” stated Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research. 

In the study, the research team presented a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviours. The games included pre-determined exercises in which players had to select whether to inform on a partner in crime or how to share money with changing incentives. The bots' responses were compared to those of over 100,000 people from 50 nations. 
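The games described above are classic behavioural-economics setups: the "inform on a partner in crime" exercise is a prisoner's dilemma, and the money-sharing exercise resembles a dictator game. The sketch below illustrates their structure; the payoffs and parameters are hypothetical, not the study's actual values.

```python
# Illustrative payoffs only -- not the parameters used in the Stanford study.

def prisoners_dilemma(a_informs: bool, b_informs: bool) -> tuple[int, int]:
    """Return (player_a_years, player_b_years) in prison for each choice combination."""
    if a_informs and b_informs:
        return (5, 5)    # both inform: both serve a medium sentence
    if a_informs and not b_informs:
        return (0, 10)   # A informs, B stays silent: A goes free, B serves the most
    if not a_informs and b_informs:
        return (10, 0)
    return (1, 1)        # both stay silent: the cooperative outcome

def dictator_game(endowment: float, share_given: float) -> tuple[float, float]:
    """One player unilaterally decides how to split an endowment with a partner."""
    given = endowment * share_given
    return (endowment - given, given)

# A purely self-interested player informs and keeps everything; the study found
# GPT-4's choices skewed toward the cooperative, altruistic end of the human range.
print(prisoners_dilemma(False, False))  # (1, 1) -- mutual cooperation
print(dictator_game(100, 0.5))          # (50.0, 50.0) -- an even split
```

Comparing a bot's choices in games like these against a large human sample is what lets researchers place its behaviour on the human distribution.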

The study is one of the first in which an artificial intelligence source has passed a rigorous Turing test. A Turing test, named after British computing pioneer Alan Turing, can consist of any task assigned to a machine to determine whether it performs like a person. If the machine appears to be human, it passes the test. 

Chatbot personality quirks

The researchers assessed the bots' personality traits using the OCEAN Big Five, a popular personality assessment that scores respondents on five fundamental traits that influence behaviour. In the study, ChatGPT's version 4 scored within normal human ranges on all five traits but was only as agreeable as the lowest third of human respondents. The bot passed the Turing test, but it wouldn't have made many friends. 
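The comparison behind "only as agreeable as the lowest third" amounts to a percentile lookup against the human sample. A minimal sketch, using made-up scores rather than the study's data:

```python
from bisect import bisect_right

def percentile(score: float, human_scores: list[float]) -> float:
    """Fraction of the human sample scoring at or below `score`."""
    ranked = sorted(human_scores)
    return bisect_right(ranked, score) / len(ranked)

# Hypothetical agreeableness scores on a 1-5 scale -- illustrative only.
humans = [2.0, 2.5, 3.0, 3.2, 3.5, 3.6, 3.8, 4.0, 4.2, 4.5]
bot_score = 3.0

# A bot at 3.0 sits at the bottom 30% of this toy sample.
print(f"Bot is as agreeable as the bottom {percentile(bot_score, humans):.0%}")
```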

Version 4's personality profile was an improvement over version 3's. The previous version, with which many internet users may have interacted for free, was only as agreeable as the bottom fifth of human respondents. Version 3 was likewise less open to new ideas and experiences than all but a handful of the most stubborn people. 

Human-AI interactions 

Much of the public's concern about AI stems from a failure to understand how bots make decisions. It can be difficult to trust a bot's advice if you don't know what it's designed to accomplish. Jackson's research shows that even when researchers cannot scrutinise AI's inputs and algorithms, they can discover potential biases by meticulously examining outcomes. 

As a behavioural economist who has made significant contributions to our knowledge of how human social structures and interactions influence economic decision-making, Jackson is concerned about how human behaviour may evolve in response to AI.

“It’s important for us to understand how interactions with AI are going to change our behaviors and how that will change our welfare and our society,” Jackson concluded. “The more we understand early on—the more we can understand where to expect great things from AI and where to expect bad things—the better we can do to steer things in a better direction.”

AI vs Human Intelligence: Who Is Leading The Pack?

 




Artificial intelligence (AI) has surged into nearly every facet of our lives, from diagnosing diseases to deciphering ancient texts. Yet, for all its prowess, AI still falls short when compared to the complexity of the human mind. Scientists are intrigued by the mystery of why humans excel over machines in various tasks, despite AI's rapid advancements.

Bridging The Gap

Xaq Pitkow, an associate professor at Carnegie Mellon University, highlights the disparity between artificial intelligence (AI) and human intellect. While AI thrives in predictive tasks driven by data analysis, the human brain outshines it in reasoning, creativity, and abstract thinking. Unlike AI's reliance on prediction algorithms, the human mind boasts adaptability across diverse problem-solving scenarios, drawing upon intricate neurological structures for memory, values, and sensory perception. Additionally, recent advancements in natural language processing and machine learning algorithms have empowered AI chatbots to emulate human-like interaction. These chatbots exhibit fluency, contextual understanding, and even personality traits, blurring the lines between man and machine, and creating the illusion of conversing with a real person.

Testing the Limits

In an effort to discern the boundaries of human intelligence, a new BBC series, "AI v the Mind," will pit AI tools against human experts in various cognitive tasks. From crafting jokes to mulling over moral quandaries, the series aims to showcase both the capabilities and limitations of AI in comparison to human intellect.

Human Input: A Crucial Component

While AI holds tremendous promise, it remains reliant on human guidance and oversight, particularly in ambiguous situations. Human intuition, creativity, and diverse experiences contribute invaluable insights that AI cannot replicate: it aids in processing data and identifying patterns, but it lacks the depth of human intuition essential for nuanced decision-making.

The Future Nexus of AI and Human Intelligence

As we move forward, AI is poised to advance further, enhancing its ability to tackle an array of tasks. However, roles requiring human relationships, emotional intelligence, and complex decision-making— such as physicians, teachers, and business leaders— will continue to rely on human intellect. AI will augment human capabilities, improving productivity and efficiency across various fields.

Balancing Potential with Responsibility

Sam Altman, CEO of OpenAI, emphasises viewing AI as a tool to propel human intelligence rather than supplant it entirely. While AI may outperform humans in certain tasks, it cannot replicate the breadth of human creativity, social understanding, and general intelligence. Striking a balance between AI's potential and human ingenuity ensures a symbiotic relationship, opening up new possibilities while preserving the essence of human intellect.

In conclusion, as AI continues its rapid evolution, it accentuates the enduring importance of human intelligence. While AI powers efficiency and problem-solving in many domains, it cannot replicate the nuanced dimensions of human cognition. By embracing AI as a complement to human intellect, we can harness its full potential while preserving the distinctive qualities that define human intelligence.




How AI Affects Human Cognition

 

The impact of artificial intelligence (AI) on how people handle and interpret data in the digital age has gained substantial attention. 

This article analyses the advantages and drawbacks of AI's potential influence on cognitive processes in humans. 

Advantages of personalised AI

The ability of generative AI tools, like ChatGPT, to deliver personalised content catered to each user's preferences has driven a significant increase in their popularity. These customised AI solutions offer a number of compelling advantages: 

Revolutionising education: By providing custom learning materials, personalised AI has the power to completely change education. Students' comprehension and retention may increase as a result of this. 

Workflow efficiency: AI can automate tasks like content production and data analysis, freeing up time for novel and challenging issues. This improved efficiency might raise productivity in a variety of industries. 

Accelerating scientific discovery: AI's capacity to analyse large datasets and find patterns offers the potential to hasten scientific advancements. 

Enhancing communication: As demonstrated by Meta AI, chatbots and virtual assistants powered by AI can improve connections and interpersonal interactions. Even as companions, they can provide company and assistance. 

Addressing the issues raised by AI-powered thinking

While the benefits of personalised AI are clear, it is important to recognise and address legitimate concerns: 

Data ownership: Gathering and analysing substantial amounts of data is required for the application of AI. Concerns about privacy and security, particularly those involving personal data, must be given top priority to avoid exploitation. 

Cognitive bias challenges: Content produced by AI, which is frequently intended to appear neutral and recognisable, may unintentionally perpetuate cognitive biases. As a result, consumers may have distorted viewpoints.

Filter bubbles and information bias: Filter bubbles are frequently created by social media algorithms, limiting users' exposure to various content. This lack of diversity can lead to ideological polarisation and an increased likelihood of encountering misinformation. 
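The feedback loop behind filter bubbles can be sketched with a toy recommender: ranking purely by past engagement collapses a user's feed to a single category, while even modest random exploration preserves variety. Everything here (the categories, the `explore` parameter) is hypothetical, a sketch of the mechanism rather than any real platform's algorithm.

```python
import random

random.seed(0)  # fixed seed so the toy simulation is reproducible

CATEGORIES = ["politics-left", "politics-right", "sports", "science", "arts"]

def recommend(history: list[str], explore: float) -> str:
    """Serve the user's most-clicked category, with optional random exploration."""
    if not history or random.random() < explore:
        return random.choice(CATEGORIES)
    return max(set(history), key=history.count)

def simulate(explore: float, steps: int = 200) -> int:
    """Run the feedback loop and count distinct categories in the recent feed."""
    history: list[str] = []
    for _ in range(steps):
        history.append(recommend(history, explore))
    return len(set(history[-50:]))

print(simulate(explore=0.0))  # pure engagement ranking collapses to 1 category
print(simulate(explore=0.3))  # a little exploration keeps the feed diverse
```

The point of the sketch is that the narrowing requires no malicious intent; it falls out of optimising for predicted engagement alone.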

Influence of AI on human cognition 

To assess the possible impact of generative AI on thinking, consider the revolutionary effects of technology in prior decades. With the introduction of the internet in the early 1990s, a new era of information accessibility began: 

Increased meta-knowledge: People now have access to a wealth of knowledge resources thanks to the internet, which has resulted in an increase in meta-knowledge. However, this abundance of knowledge also led to the "Google effect," where people were able to find information quickly but had poorer memory recall. 

Reliance on search engines: Searching online enables people to outsource some cognitive functions, freeing up their minds for original thought. But this ease of use also made people more prone to distraction and dependence.

Encouraging the responsible use of AI 

It is critical to take the following precautions to make sure that generative AI tools have a good impact on human thinking: 

AI literacy: Promoting AI literacy should be a top priority for society. Responsible use of AI requires that people are informed about its potential as well as its limitations.

Human autonomy and critical thinking: AI tools need to be developed to support and enhance human autonomy and critical thinking rather than to replace these essential cognitive functions.

Hybrid Cybersecurity: A Need of the Hour

 

Training artificial intelligence (AI) and machine learning (ML) models to provide enterprises with hybrid cybersecurity at scale requires human intelligence and intuition. When human intelligence and intuition are combined with AI and ML models, subtleties in attack patterns that are missed by numerical analysis alone can be detected. 

Data scientists, security analysts, and threat hunters with extensive experience make sure that the data used to train AI and ML models enables a model to accurately identify threats and minimize false positives. The future of hybrid cybersecurity is defined by combining human expertise, AI, and ML models with a real-time stream of telemetry data from enterprises' numerous systems and apps. 

Benefits of hybrid cybersecurity 

One of the fastest-growing subcategories of enterprise cybersecurity is the integration of AI, ML, and human intelligence as a service. The service category that benefits the most from businesses' need for hybrid cybersecurity as a component of their more comprehensive risk management strategies is managed detection and response (MDR). Client inquiries about this topic increased by 35%, according to Gartner. Additionally, the report predicts that the MDR market will generate $2.2 billion in revenue in 2025, up from $1 billion in 2021, representing a compound annual growth rate (CAGR) of 20.2%. 
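The growth figure can be sanity-checked with the standard CAGR formula. The small gap between the result below and Gartner's quoted 20.2% presumably reflects unrounded revenue figures or a different base year in their calculation; the code is just the textbook formula applied to the rounded numbers in the text.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that grows
    `start` into `end` over `years` compounding periods."""
    return (end / start) ** (1 / years) - 1

# $1B (2021) -> $2.2B (2025) spans four compounding years.
rate = cagr(1.0, 2.2, 4)
print(f"{rate:.1%}")  # ~21.8% on these rounded inputs
```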

By 2025, the report further predicts, 50% of organizations will use MDR services that rely on AI and ML for threat monitoring, detection, and response. To find threats and halt breaches for clients, these MDR systems will increasingly rely on ML-based threat containment and mitigation capabilities, bolstered by the expertise of seasoned threat hunters, analysts, and data scientists. 

Efficient against AI and ML attacks 

In organizations with a shortage of data scientists, analysts, and experts in AI and ML modeling, hybrid cybersecurity continues to rise in importance. VentureBeat, a cybersecurity news portal, spoke with CISOs from small, rapidly expanding companies to mid-tier and large-scale enterprises, and they all emphasized the need to protect themselves from faster-moving, more dangerous cybercriminal gangs that are developing AI and ML skills more quickly than defenders are. “We champion a hybrid approach of AI to gain [the] trust of users and executives, as it is very important to have explainable answers,” stated AJ Abdallat, CEO of Beyond Limits. 

Cybercriminal gangs with AI and ML expertise have demonstrated that they can move from the initial entry point to an internal system within one hour and 24 minutes of the initial compromise. The CrowdStrike 2022 Global Threat Report noted a 45% increase in interactive intrusions and more than 180 tracked adversaries. In this environment, staying ahead of threats is not a problem that can be solved at human scale; it requires the potent fusion of human expertise and machine learning. 

Endpoint detection and response (EDR), extended detection and response (XDR), and endpoint protection platforms (EPPs) powered by AI and ML are proving successful at quickly spotting and thwarting new attack patterns. However, they still need time to process information and become aware of fresh threats. Convolutional neural networks and deep learning are used in AI and ML-based cybersecurity platforms to help reduce this latency, but hackers continue to develop new methods faster than AI and ML systems can catch up. 

As a result, even the most sophisticated threat monitoring and response systems relied upon by businesses and MDR providers find it difficult to keep up with the constantly changing strategies used by malicious hackers. 

Lowering the possibility of a business disruption 


The possibility of a devastating cyberattack disrupting ongoing business operations has boards of directors, CEOs, and CISOs discussing risk management, and hybrid cybersecurity as a business investment, more frequently. CISOs tell VentureBeat that board-level cybersecurity initiatives in 2023 will include hybrid cybersecurity to protect and increase revenue. 

Hybrid cybersecurity is here to stay. It helps businesses overcome the fundamental problems they face in defending themselves against increasingly sophisticated AI- and ML-driven cyberattacks. CISOs who lack the resources to scale up AI and ML modeling rely on MDR providers whose services include AI and ML-based EPP, EDR, and XDR platforms. 

By removing the difficulty of finding skilled AI and ML model builders with experience on their key platforms, MDRs allow CISOs to implement hybrid cybersecurity at scale. For CISOs, hybrid cybersecurity is essential to the long-term success of their companies.