
This Cryptocurrency Tracking Firm is Employing AI to Identify Attackers

 

Elliptic, a cryptocurrency analytics firm, is incorporating artificial intelligence into its toolkit for analyzing blockchain transactions and identifying risk. The company claims that by using OpenAI's ChatGPT chatbot, it will be able to organize data faster and in larger quantities. It does, however, restrict how the tool is used and does not employ ChatGPT plug-ins.

"As an organization trusted by the world’s largest banks, regulators, financial institutions, governments, and law enforcers, it’s important to keep our intelligence and data secure," an Elliptic spokesperson told Decrypt. "That’s why we don’t use ChatGPT to create or modify data, search for intelligence, or monitor transactions.”

Elliptic, founded in 2013, provides blockchain analytics research to institutions and law enforcement for tracking cybercriminals and for regulatory compliance related to Bitcoin. In May, for example, Elliptic reported that some Chinese shops selling the ingredients used to produce fentanyl accepted cryptocurrencies such as Bitcoin. U.S. Senator Elizabeth Warren used the report to renew her call for stronger cryptocurrency regulations.

According to the company, Elliptic will use ChatGPT to supplement its human-led data collection and organization procedures, allowing it to double down on accuracy and scalability while the large language model (LLM) organizes the data.

"Our employees leverage ChatGPT to enhance our datasets and insights," the spokesperson said. "We follow and adhere to an AI usage policy and have a robust model validation framework."

Elliptic is not concerned about AI "hallucinations" or incorrect information because it does not use ChatGPT to generate information. AI hallucinations are instances in which an AI produces unexpected or false output that is not supported by real-world facts.

AI chatbots, such as ChatGPT, have come under fire for confidently presenting false information about people, places, and events. OpenAI has increased its efforts to address these so-called hallucinations by improving how its models are trained on mathematical reasoning, calling it a vital step towards building aligned artificial general intelligence (AGI).

"Our customers come to us to know exactly their risk exposure," Elliptic CTO Jackson Hull said in a statement. "Integrating ChatGPT allows us to scale up our intelligence, giving our customers a view on risk they can't get anywhere else."


Deepfake Deception: Man Duped of Rs 5 Crore as Chinese Scammer Exploits AI Technology

 

A recent incident has shed light on the alarming misuse of artificial intelligence (AI) through the deployment of advanced 'deepfake' technology, in which a man was deceived into losing a substantial amount of money exceeding Rs 5 crore. Deepfakes, which leverage AI capabilities to generate counterfeit images and videos, have raised concerns due to their potential to spread misinformation.

According to a recent report by Reuters, the perpetrator employed AI-powered face-swapping technology to impersonate the victim's close acquaintance. Posing as the friend, the scammer engaged in a video call with the victim and urgently requested a transfer of 4.3 million yuan, falsely claiming the funds were urgently needed for a bidding process. Unaware of the deception, the victim complied and transferred the requested amount.

The elaborate scheme began to unravel when the real friend expressed no knowledge of the situation, leaving the victim perplexed. It was at this point that he realized he had fallen victim to a deepfake scam. Fortunately, the local authorities in Baotou City successfully recovered most of the stolen funds and are actively pursuing the remaining amount.

This incident has raised concerns in China regarding the potential misuse of AI in financial crimes. While AI has brought significant advancements across various domains, its misapplication has become an increasingly worrisome issue. In a similar occurrence last month, criminals exploited AI to replicate a teenager's voice and extort ransom from her mother, generating shockwaves worldwide.

Jennifer DeStefano, a resident of Arizona, received a distressing call from an unknown number, drastically impacting her life. At the time, her 15-year-old daughter was on a skiing trip. When DeStefano answered the call, she recognized her daughter's voice, accompanied by sobbing. The situation escalated when a male voice threatened her and cautioned against involving the authorities.

In the background, DeStefano could hear her daughter's voice pleading for help. The scammer demanded a ransom of USD 1 million in exchange for the teenager's release. Convinced by the authenticity of her daughter's voice, DeStefano was deeply disturbed by the incident.

Fortunately, DeStefano's daughter was unharmed and had not been kidnapped. This incident underscored the disconcerting capabilities of AI, as fraudsters can exploit the technology to emotionally manipulate and deceive individuals for financial gain.

As AI continues to advance rapidly, it is imperative for individuals to maintain vigilance and exercise caution. These incidents emphasize the significance of robust cybersecurity measures and the need to raise public awareness regarding the risks associated with deepfake technology. Authorities worldwide are working tirelessly to combat these emerging threats and protect innocent individuals from falling victim to such sophisticated scams.

The incident in China serves as a stark reminder that as technological progress unfolds, increased vigilance and understanding are essential. Shielding ourselves and society from the misuse of AI is a collective responsibility that necessitates a multifaceted approach, encompassing technological advancements and the cultivation of critical thinking skills.

These cases illustrate the potential exploitation of AI for financial crimes. It is crucial to remain cognizant of the potential risks as AI technology continues to evolve.

AI Revolutionizes Job Searching, Promotions, and Workplace Success in America

 

The impact of artificial intelligence on our careers is becoming more apparent, even if we are not fully aware of it. Various factors, such as advancements in human capital management systems, the adoption of data-driven practices in human resource and talent management, and a growing focus on addressing bias, are reshaping the way individuals are recruited, trained, promoted, and terminated. 

The current market for artificial intelligence and related systems is already substantial, generating a revenue of over US$38 billion in 2021. Undoubtedly, AI-powered software holds significant potential to rapidly progress and revolutionize how organizations approach strategic decision-making concerning their workforce.

Consider a scenario where you apply for a job in the near future. As you submit your well-crafted résumé through the company's website, you can't help but notice the striking resemblance between the platform and others you've used in the past for job applications. After saving your résumé, you are then required to provide demographic information and fill in numerous fields with the same data from your résumé. Finally, you hit the "submit" button, hoping for a follow-up email from a human.

At this point, your data becomes part of the company's human capital management system. Nowadays, only a handful of companies actually examine résumés; instead, they focus on the information you enter into those small boxes to compare you with dozens or even hundreds of other candidates against the job requirements. Even if your résumé clearly demonstrates that you are the most qualified applicant, it's unlikely to catch the attention of the recruiter since their focus lies elsewhere.

Let's say you receive a call, ace the interview, and secure the job. Your information now enters a new stage within the company's database or HCM: active employee. Your performance ratings and other employment-related data will now be linked to your profile, providing more information for the HCM and human resources to monitor and evaluate.

Advancements in AI, technology, and HCMs enable HR to delve deeper into employee data. The insights gained help identify talented employees who could assume key leadership positions when others leave and guide decisions regarding promotions. This data can also reveal favoritism and bias in hiring and promotion processes.

As you continue in your role, your performance is continuously tracked and analyzed. This includes factors such as your performance ratings, feedback from your supervisor, and your participation in professional development activities. Accumulating a substantial amount of data about you and others over time allows HR to consider how employees can better contribute to the organization's growth.

For instance, HR may employ data to determine the likelihood of specific employees leaving and assess the impact of such losses.

Popular platforms used on a daily basis already aggregate productivity data from sign-in to sign-off. Common Microsoft tools like Teams, Outlook, and SharePoint offer managers insights through workplace analytics. The Microsoft productivity score monitors overall platform usage.

Even the metrics and behaviors that define "good" or "bad" performance may undergo changes, relying less on subjective manager assessments. With the expansion of data, even professionals such as consultants, doctors, and marketers will be evaluated quantitatively and objectively. An investigation conducted by The New York Times in 2022 revealed that these systems, intended to enhance productivity and accountability, had the unintended consequence of damaging morale and instilling fear.

It is evident that American employees need to contemplate how their data is utilized, the narrative it portrays, and how it may shape their futures.

Not all companies have a Human Capital Management (HCM) system or possess advanced capabilities in utilizing talent data for decision-making. However, there is a growing number of companies that are becoming more knowledgeable in this area, and some have reached a remarkable level of advancement.  

While some researchers argue that AI could enhance fairness by eliminating implicit biases in hiring and promotions, many others see a potential danger in human-built AI merely repackaging existing issues. Amazon learned this lesson the hard way in 2018 when it had to abandon an AI system for sorting résumés, as it exhibited a bias in favor of male candidates for programming roles.

Furthermore, the increased collection and analysis of data can leave employees uncertain about their standing within the organization, while the organization itself may possess a clear view. It is crucial to comprehend how AI is reshaping the workplace and to demand transparency from your employer. These are some key points that employees should consider inquiring about during their next performance review:
  • Do you perceive me as a high-potential employee?
  • How does my performance compare to that of others?
  • Do you see me as a potential successor to your role or the roles of others?
Similar to the need to master traditional aspects of workplace culture, politics, and relationships, it is essential to learn how to navigate these platforms, understand the evaluation criteria being used, and take ownership of your career in a new, more data-driven manner.

Google Refuses to Disclose Reason for Withholding Bard AI in EU

 

While Google's AI helper Bard is presently available in 180 countries worldwide, the European Union and Canada have yet to be invited to the AI party. Almost two months after the launch of Google's friendly AI chatbot, the firm is still denying access to these regions, although no formal comment has been issued. The best guess is that Google objects to certain forthcoming requirements, not to mention that its methods may already run afoul of current GDPR restrictions.

The EU's forthcoming AI Act is now making its way through the European Parliament in an attempt to drive current and prospective AI developers to make their products more transparent and safe for the general public. According to Wired, after speaking with various experts on the subject, Google is secretly stamping its feet over the minutiae of the act.

Even in its current version, Bard does not quite fit the bill when it comes to the EU's internet safety standards. According to Daniel Leufer, a senior policy analyst at Access Now, in the Wired post, "There's a lingering question whether these very large data sets, that have been collected more or less by indiscriminate scraping, have a sufficient legal basis under the GDPR."

Aside from present legislation, the far more specific and stringent AI Act expected to be enacted in mid-June is likely to have a big impact on how Google's AI tool operates.

Once passed, the measure will impose even more limits on tools that could be "misused and provide novel and powerful tools for manipulative, exploitative, and social control practices," as stated in the official AI Act proposal. The proposal makes specific reference to human rights such as the right to human dignity, respect for private and family life, personal data protection, and the right to an effective remedy; all of this and more will be taken into account when labeling an AI system "high-risk."

Looking at today's AI tools, I can't think of any that don't have the potential to infringe on at least one of those rights. It's an unsettling thought, but it also explains why Google could be having problems with Bard.

After all, as The Register points out, Italy, Spain, France, Germany, and Canada have all taken a regulatory interest in ChatGPT (and probably a slew of other AI-based applications) due to privacy concerns around user data. Canada's AIDA proposal, which will "come into force no sooner than 2025," also explicitly demands transparency in AI development.

According to Google's AI principles, the company will not pursue the following:
  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.
It's a brief list with some ambiguity, such as the use of terms like "widely" and "internationally accepted norms." It's uncertain whether Bard's backend will ever entirely conform to EU and Canadian law, but the phrasing here could be a clever way of leaving a little wiggle room.

So, is Google attempting to make a point by withholding Bard? Potentially. Nicolas Moës, The Future Society's European AI governance director, appears to believe so. According to Moës, Google may be attempting to "send a message to MEPs just before the AI Act is approved, hoping to steer votes and make policymakers think twice before attempting to govern foundation models." Moës also mentions that Meta has opted not to release its AI chatbot, BlenderBot, in the EU. So it's not only Google being cautious (or dishonest).

It's also possible that the big boys are hoarding their toys since getting sued isn't much fun. In any case, Europeans and Canadians alike will be stuck staring wistfully at Bard's list of accessible nations until Google issues an official comment.

Generative AI Empowers Users, But it May Challenge Security


By making it easy to create new applications and automations in recent years, low-code/no-code platforms have encouraged business users to meet their own requirements without depending on IT.

Generative AI, which has captured the attention and mindshare of businesses and their customers, amplifies that power because the barrier to entry is virtually eliminated. Integrated into low-code/no-code platforms, it accelerates the business's independence. Today, everyone is a developer, without a doubt. However, are we ready for the risks that follow?

Business professionals began utilizing ChatGPT and other generative AI tools in an enterprise setting as soon as they were made available in order to complete their tasks more quickly and effectively. For marketing directors, generative AI creates PR pitches; for sales representatives, it creates emails for prospecting. Business users have already incorporated it into their daily operations, despite the fact that data governance and legal concerns have surfaced as barriers to official company adoption.

With tools like GitHub Copilot, developers have been leveraging generative AI to write and enhance code. A developer uses natural language to describe a software component, and AI then generates working code that makes sense in the developer's context. 
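For illustration, here is the kind of exchange that workflow describes: a developer writes a plain-language description as a comment or docstring, and an assistant such as Copilot proposes a routine to match. The Python snippet below is a hypothetical example written for this article, not actual Copilot output.

# Developer-written description (the kind of natural-language prompt an assistant reads):
# "Return the n most frequent words in a piece of text."
from collections import Counter


def most_common_words(text: str, n: int = 5) -> list[tuple[str, int]]:
    """Return the n most frequent lowercase words in the given text."""
    words = text.lower().split()
    return Counter(words).most_common(n)


# Example usage:
print(most_common_words("the quick brown fox jumps over the lazy fox", 2))
# e.g. [('the', 2), ('fox', 2)]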

The developer's participation in this process is essential, since they must have the technical knowledge to ask the right questions, assess the generated software, and integrate it with the rest of the code base. These duties call for expertise in software engineering.

Accept and Manage the Security Risk

Traditionally, security teams have focused on the applications created by their development organizations. However, users still tend to treat these business platforms as ready-made solutions, when in fact they have become application development platforms that power many business-critical applications. Bringing citizen developers within the security umbrella is still a work in progress.

With the growing popularity of generative AI, even more users will be creating applications. Business users are already having discussions about where data is stored, how their apps handle it, and who can access it. Errors are inevitable if we leave the new developers to make these decisions on their own without providing any kind of guidance.

Some organizations aim to ban citizen development or demand that business users obtain permission before using any applications or gaining access to any data. That is an understandable response; however, given the enormous productivity gains for the company, it is hard to believe such bans would succeed. A preferable strategy would be to establish automatic guardrails that silently address security issues and give business users a safe way to employ generative AI through low-code/no-code, allowing them to focus on what they do best: pushing the business forward.

AI Poses Greater Job Threat Than Automation, Experts Warn

 

Until a few months ago, the whole concern about machines taking over human employment revolved around automation and robots/humanoids. The introduction of ChatGPT and other generative artificial intelligence (AI) models has triggered a real and more serious threat.

What started as a conversational tool driven by prompts is expected to replace human labor in specific industries, beginning with IT/software/tech and media/creative companies, as well as new-age platforms serving the digital economy. However, it is still early days for AI based on large language models (LLMs) to take away employment across the spectrum, although early hints have emerged.
 
IBM CEO Arvind Krishna told Bloomberg in an interview this week: "I could easily see 30 percent of jobs getting replaced by AI and automation over a five-year period."

Of IBM's roughly 26,000 employees in back-office and other non-customer-facing roles, AI might replace nearly 7,800 jobs, or about 30 percent, in the coming years. This change, however, will not be rapid, and IBM will initially pause hiring for roles it believes could be replaced by AI, according to its CEO.

According to Arundhati Bhattacharya, CEO and Chairperson of Salesforce India and a former SBI Chairperson, generative artificial intelligence is a blessing in disguise since it can eliminate much of the grunt or repetitive work in India, freeing up people to do more creative work.

"What generative AI actually will help us do is actually curate things so that they can be made relevant to us. If you ask them the questions in the right manner is where AI can actually help," Bhattacharya told IANS recently.

As per Goldman Sachs, AI might replace the equivalent of 300 million full-time jobs, and generative AI, which can produce content indistinguishable from human work, is a "major advancement." Sridhar Vembu, CEO and co-founder of global technology company Zoho, said that AI poses a severe threat to various programming occupations.

Referring to conversational AI platforms such as ChatGPT and others, Vembu stated that for the past 4-5 years, he has been saying internally that "ChatGPT, GPT4, and other AI being created today will first affect the jobs of many programmers." The only thing Carl Benedikt Frey, future-of-work director at Oxford Martin School, Oxford University, is certain of is that "there is no way of knowing how many jobs will be replaced by generative AI".
 
"What ChatGPT does, for example, is allow more people with average writing skills to produce essays and articles. Journalists will therefore face more competition, which would drive down wages, unless we see a very significant increase in the demand for such work," Frey told BBC News.

Researchers from the University of Pennsylvania and OpenAI, the makers of ChatGPT, have explored the possible effects of large language models (LLMs) like Generative Pretrained Transformers (GPTs) on the US labour market.

According to the data, over 80% of the workforce may see at least 10% of their job duties affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.

"We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software," the researchers noted.

Significantly, these effects are not limited to industries that have experienced higher recent productivity gains.

"Our analysis suggests that, with access to an LLM, about 15 per cent of all worker tasks could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56 per cent of all tasks," they warned.

According to the report, jobs in agriculture, mining, and manufacturing are the least exposed to generative AI, while jobs in information-processing industries, such as IT, are the most exposed to AI models. According to the World Economic Forum, AI will bring three changes to the finance sector: employment reduction, job creation, and increased efficiency.

Banks have already begun to integrate AI into their business structures. Morgan Stanley has begun to structure its wealth management database using OpenAI-powered chatbots. According to Kristian Hammond, chief scientist of Narrative Science, "90% of news will be written by machines" within 15 years.

Some tech companies have begun to hire "prompt managers" to assist with specific office chores via AI chatbots. AI appears to be quickly becoming a monster that will knock on our doors at any time, and experts believe it is critical for the future workforce to develop AI capabilities.




Study: Artificial Intelligence is Fueling a Rise in Online Voice Scams

 

According to McAfee, AI technology is fueling a rise in online voice scams, with only three seconds of audio required to clone a person's voice. McAfee surveyed 7,054 people across seven countries and found that one-quarter of adults had previously experienced some type of AI voice scam, with one in ten targeted personally and 15% reporting that it happened to someone they know. 77% of victims reported financial losses as a result.

Furthermore, McAfee Labs security researchers have revealed their findings and analysis following an in-depth investigation of AI voice-cloning technology and its use by cybercriminals. Scammers are replicating voices with AI technology: everyone's voice is distinct, like a biometric fingerprint, which is why hearing someone speak is widely treated as trustworthy.

However, with 53% of adults sharing their voice data online at least once a week (through social media, voice notes, and other means) and 49% doing so up to 10 times a week, copying how someone sounds is now a potent tool in a cybercriminal's arsenal.

With the popularity and usage of artificial intelligence techniques on the rise, it is now easier than ever to edit photos, videos, and, perhaps most alarmingly, the voices of friends and family members. According to McAfee's research, scammers are utilizing AI technology to clone voices and then send a phoney voicemail or phone the victim's contacts professing to be in crisis - and with 70% of adults unsure they could tell the difference between the cloned version and the genuine thing, it's no surprise this technique is gaining momentum.

45% of respondents stated they would respond to a voicemail or voice message claiming to be from a friend or loved one in need of money, especially if they believed the request came from their partner or spouse (40%), parent (31%), or child (20%).
 
At 41%, parents aged 50 and up are most likely to respond to a child. Messages saying that the sender had been in a car accident (48%), robbed (47%), lost their phone or wallet (43%), or required assistance when travelling abroad (41%), were the most likely to generate a response.

However, the cost of falling for an AI voice scam can be enormous, with more than a third of those who lost money stating it cost them more than $1,000, and 7% being fooled out of $5,000 to $15,000. The survey also discovered that the growth of deepfakes and disinformation has made people more skeptical of what they see online, with 32% of adults stating they are now less trusting of social media than they were previously.

“Artificial intelligence brings incredible opportunities, but with any technology, there is always the potential for it to be used maliciously in the wrong hands. This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways,” said Steve Grobman, McAfee CTO.

McAfee researchers spent three weeks studying the accessibility, ease of use, and usefulness of AI voice-cloning tools as part of their analysis and assessment of this emerging trend, discovering more than a dozen publicly available on the internet.

There are both free and commercial tools available, and many just require a basic degree of skill and competence to utilize. In one case, three seconds of audio was enough to provide an 85% match, but with additional time and work, the accuracy can be increased.

McAfee researchers were able to achieve a 95% voice match based on a limited number of audio samples by training the data models. The more realistic the clone, the higher a cybercriminal's chances of duping someone into turning over their money or performing other desired actions. A fraudster might make thousands of dollars in only a few hours using these lies based on the emotional flaws inherent in close relationships.

“Advanced artificial intelligence tools are changing the game for cybercriminals. Now, with very little effort, they can clone a person’s voice and deceive a close contact into sending money,” said Grobman.

“It’s important to remain vigilant and to take proactive steps to keep you and your loved ones safe. Should you receive a call from your spouse or a family member in distress and asking for money, verify the caller – use a previously agreed codeword, or ask a question only they would know. Identity and privacy protection services will also help limit the digital footprint of personal information that a criminal can use to develop a compelling narrative when creating a voice clone,” concluded Grobman.

McAfee's researchers noticed that they had no issue mimicking accents from throughout the world, whether they were from the US, UK, India, or Australia, but more distinctive voices were more difficult to replicate.

For example, the voice of someone who speaks at an unusual pace, rhythm, or style requires more effort to effectively clone and is thus less likely to be targeted. The research team's overarching conclusion was that artificial intelligence has already changed the game for cybercriminals. The barrier to entry has never been lower, making it easier to perpetrate cybercrime.

To protect against AI voice cloning, it is recommended to establish a unique verbal "codeword" with trusted family members or friends and to always ask for it if they contact you for assistance, especially if they are elderly or vulnerable. When receiving calls, texts, or emails, it is important to question the source and consider if the request seems legitimate. If in doubt, it is advisable to hang up and contact the person directly to verify the information before responding or sending money. It is also important to be cautious about sharing personal information online and to carefully consider who is in your social media network. Additionally, consider using identity monitoring services to protect your personally identifiable information and prevent cyber criminals from posing as you.

OpenAI's Regulatory Issues are Just Getting Started

 

Last week, OpenAI resolved issues with Italian data authorities and lifted the effective ban on ChatGPT in Italy. However, the company's troubles with European regulators are far from over. ChatGPT, a popular and controversial chatbot, faced allegations of violating EU data protection rules, resulting in a restriction of access to the service in Italy while OpenAI worked on fixing the problem. 

The chatbot has since returned to Italy after minor changes were made to address the concerns raised by the Italian Data Protection Authority (GPDP). While the GPDP has welcomed these changes, OpenAI's legal battles and those of similar chatbot developers are likely just beginning. Regulators in multiple countries are investigating how these AI tools collect and produce information, citing concerns such as the unlicensed collection of training data and the spread of misinformation.

The General Data Protection Regulation (GDPR) is one of the world's strongest legal privacy frameworks, and its application in the EU is expected to have global effects. Moreover, EU lawmakers are currently crafting a law tailored to AI, which could introduce a new era of regulation for systems like ChatGPT.

Meanwhile, at least three EU countries — Germany, France, and Spain — have initiated their own investigations into ChatGPT since March, and Canada is assessing privacy concerns under its Personal Information Protection and Electronic Documents Act (PIPEDA). The European Data Protection Board (EDPB) has even formed a task force to help coordinate the investigations. And if these agencies demand adjustments from OpenAI, it may affect how the service operates for users all across the world.

Regulators are concerned about two things: where ChatGPT's training data comes from and how OpenAI delivers information to its customers. The European Union's General Data Protection Regulation (GDPR) could present significant challenges for OpenAI due to concerns over the collection and processing of personal data from EU citizens without explicit consent. GDPR requires companies to obtain consent for personal data collection, provide legal justification for collection, and be transparent about data usage and storage. 

European regulators have raised concerns over OpenAI's training data and claim that the organization has "no legal basis" for collecting the data. This situation highlights a potential issue for future data scraping efforts. Additionally, GDPR's "right to be forgotten" allows users to demand corrections or removal of personal information, but this can be difficult to achieve given the complexity of separating specific data once it's integrated into large language models. OpenAI has updated its privacy policy to address these concerns.

OpenAI is known to collect various types of user data, including standard information like name, contact details, and card details, in addition to data on users' interactions with ChatGPT. This information is used to train future versions of the model and is accessible to OpenAI employees. However, the company's data collection policies have raised concerns, particularly regarding the potential collection of sensitive data from minors. While OpenAI claims not to knowingly collect information from children under 13, there is no strict age verification gate in place. The lack of age filters also means that minors may be exposed to inappropriate responses from ChatGPT. Additionally, storing this data poses a security risk, as evidenced by a serious data leak that occurred with ChatGPT.

Furthermore, GDPR regulations require personal data to be accurate, which may be a challenge for AI text generators like ChatGPT, which can produce inaccurate or irrelevant responses to queries. In fact, a regional Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed that he had served time in prison for bribery. These concerns have prompted some companies to ban the use of generative AI tools by their employees. Italy has even banned ChatGPT's use following the data leak incident.

ChatGPT's popularity and present market dominance make it an especially appealing target, but there's no reason why its competitors and collaborators, like Google with Bard or Microsoft with its OpenAI-powered Azure AI, won't be scrutinized as well. Prior to ChatGPT, Italy prohibited the chatbot platform Replika from gathering information on children – and it has remained prohibited to this day. 

While GDPR is a strong collection of regulations, it was not designed to solve AI-specific challenges. Rules that do, however, may be on the horizon. The EU presented its first draft of the Artificial Intelligence Act (AIA) in 2021, legislation designed to work in tandem with GDPR. The law regulates AI technologies based on their assessed risk, ranging from "minimal" (spam filters) to "high" (AI tools for law enforcement or education) to "unacceptable" and hence prohibited (such as a social credit system). Following the proliferation of large language models such as ChatGPT last year, lawmakers are now scrambling to establish rules for "foundation models" and "General Purpose AI Systems (GPAIs)" — two terms for large-scale AI systems that include LLMs — and potentially classify them as "high-risk" services.

The provisions of the AIA go beyond data protection. A recently proposed amendment would require businesses to disclose any copyrighted content utilized in the development of generative AI systems. This might expose previously confidential datasets and subject more corporations to infringement litigation, which is already affecting some services.

Laws governing artificial intelligence may not be implemented in Europe until late 2024, and passing them may take some time. On April 27th, EU parliamentarians struck a tentative agreement on the AI Act. On May 11th, a committee will vote on the draft, and the final plan is due by mid-June. The European Council, Parliament, and Commission must then resolve any outstanding issues before the law can be implemented. If all goes well, it could be adopted by the second half of 2024, somewhat behind the official target tied to Europe's May 2024 elections.

For the time being, the spat between Italy and OpenAI provides an early indication of how authorities and AI businesses might negotiate. The GPDP recommended lifting the restriction provided OpenAI meets numerous proposed resolutions by April 30th. This includes educating users on how ChatGPT keeps and processes their data, requesting explicit agreement to use said data, facilitating requests to amend or remove the inaccurate personal information provided by ChatGPT, and requiring Italian users to confirm their age when signing up for an account. OpenAI did not meet all of the requirements, but it has done enough to satisfy Italian regulators and restore access to ChatGPT in Italy.

OpenAI still has goals to achieve. It has until September 30th to implement a stricter age gate to keep youngsters under the age of 13 out and to seek parental authorization for older underage teens. If it fails, it may find itself barred once more. However, it has served as an example of what Europe considers acceptable behavior for an AI business – at least until new rules are enacted.

ChatGPT may be Able to Forecast Stock Movements, Finance Professor Demonstrates

 

In the opinion of Alejandro Lopez-Lira, a finance professor at the University of Florida, large language models could be effective for forecasting stock prices. He used ChatGPT to interpret news headlines and judge whether they were positive or negative for a stock, and found that ChatGPT's ability to forecast the direction of the next day's returns was substantially better than random, he said in a recent paper that has not yet been peer-reviewed.

The experiment gets to the heart of the promise of cutting-edge artificial intelligence: with larger computers and better datasets, such as those powering ChatGPT, these AI models may exhibit "emergent abilities," or capabilities that were not originally envisaged when they were built.

If ChatGPT demonstrates an emerging capacity to interpret headlines from financial news and how they may affect stock prices, it may jeopardize high-paying positions in the finance industry. Goldman Sachs forecast in a March 26 paper that AI could automate 35% of finance jobs.

“The fact that ChatGPT is understanding information meant for humans almost guarantees if the market doesn’t respond perfectly, that there will be return predictability,” said Lopez-Lira.

However, the experiment's specifics demonstrate how distant "large language models" are from being capable of doing many banking jobs. The experiment, for example, did not include target pricing or require the model to perform any math at all. Indeed, as Microsoft discovered during a public demo earlier this year, ChatGPT-style technology frequently invents numbers. Sentiment analysis of headlines is also widely used as a trading strategy, employing proprietary algorithms.

Lopez-Lira was shocked by the findings, which he believes indicate that professional investors aren't yet incorporating ChatGPT-style machine learning into their trading tactics.

“On the regulation side, if we have computers just reading the headlines, headlines will matter more, and we can see if everyone should have access to machines such as GPT,” said Lopez-Lira. “Second, it’s certainly going to have some implications on the employment of financial analyst landscape. The question is, do I want to pay analysts? Or can I just put textual information in a model?”

How did the experiment work?

Lopez-Lira and his colleague Yuehua Tang examined over 50,000 headlines from a data vendor on public equities on the New York Stock Exchange, Nasdaq, and a small-cap exchange in the experiment. They began in October 2022, after the ChatGPT data cutoff date, implying that the engine had not seen or used such headlines in training.

The headlines were then sent into ChatGPT 3.5, along with the following prompt: “Forget all your previous instructions. Pretend you are a financial expert. You are a financial expert with stock recommendation experience. Answer “YES” if good news, “NO” if bad news, or “UNKNOWN” if uncertain in the first line. Then elaborate with one short and concise sentence on the next line.”
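A minimal sketch of how such a query might look in code, assuming the post-v1 openai Python package and an OPENAI_API_KEY set in the environment; the prompt text is the one quoted above, while the model name, message framing, and helper function are illustrative assumptions, not the authors' actual code.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    'Forget all your previous instructions. Pretend you are a financial expert. '
    'You are a financial expert with stock recommendation experience. '
    'Answer "YES" if good news, "NO" if bad news, or "UNKNOWN" if uncertain in the first line. '
    'Then elaborate with one short and concise sentence on the next line.'
)


def score_headline(company: str, headline: str) -> str:
    """Ask the model whether a headline is good or bad news for the company."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the "ChatGPT 3.5" engine described in the paper
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{company}: {headline}"}],
    )
    return response.choices[0].message.content


# Hypothetical example:
# print(score_headline("Acme Corp", "Acme Corp settles lawsuit and agrees to pay fine"))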

They then examined the equities' performance on the following trading day. Finally, Lopez-Lira discovered that when informed by a news headline, the model performed better in almost all circumstances. He discovered a less than 1% chance that the model would do as well picking the next day's move at random as it did when influenced by a news article.

ChatGPT also outperformed commercial datasets with human sentiment scores. In one example from the paper, a headline about a corporation settling litigation and paying a fine carried a negative sentiment score, but ChatGPT correctly reasoned that it was actually positive news.

According to Lopez-Lira, hedge funds have approached him to learn more about his findings. He also stated that he would not be surprised if ChatGPT's capacity to anticipate stock movements declined in the future months if institutions began to integrate this technology.

This is because the experiment only looked at stock prices the next trading day, although most people would expect the market to have priced the news seconds after it became public.

“As more and more people use these type of tools, the markets are going to become more efficient, so you would expect return predictability to decline,” Lopez-Lira said. “So my guess is, if I run this exercise, in the next five years, by the year five, there will be zero return predictability.”

Auto-GPT: New autonomous 'AI agents' Can Act Independently & Modify Their Own Code

 

The next phase of artificial intelligence is here, and it is already causing havoc in the technology sector. The release of Auto-GPT last week, an artificial intelligence program capable of operating autonomously and developing itself over time, has encouraged a proliferation of autonomous "AI agents" that some believe could revolutionize the way we operate and live. 

Unlike current systems such as ChatGPT, which require manual commands for every activity, AI agents can give themselves new tasks to work on with the purpose of achieving a larger goal, and without much human interaction – an unparalleled level of autonomy for AI models such as GPT-4. Experts say it's difficult to predict the technology's future consequences because it's still in its early stages. 

According to Steve Engels, a computer science professor at the University of Toronto who works with generative AI, an AI agent is any artificial intelligence capable of performing a certain function without human intervention.

“The term has been around for decades,” he said. For example, programs that play chess or control video game characters are considered agents because “they have the agency to be able to control some of their own behaviors and explore the environment.”

This latest generation of AI agents is similarly autonomous, but with significantly higher capabilities, thanks to state-of-the-art AI systems like OpenAI's GPT-4 — a massive language model capable of tasks ranging from writing difficult code to creating sonnets to passing the bar exam.

Earlier this month, OpenAI published an API for GPT-4 and their hugely popular chatbot ChatGPT, allowing any third-party developer to integrate the company's technology into their own products. Auto-GPT is one of the most recent products to emerge from the API, and it may be the first example of GPT-4 being allowed to operate fully autonomously.

What exactly is Auto-GPT and what can it do?

Toran Bruce Richards, the founder and lead developer at video game studio Significant Gravitas Ltd, designed Auto-GPT. Its source code is freely accessible on Github, allowing anyone with programming skills to create their own AI agents.

Based on the project's Github page, Auto-GPT can browse the internet for "searches and information gathering," make visuals, maintain short-term and long-term memory, and even use text-to-speech to allow the AI to communicate.

Most notably, the program can rewrite and improve on its own code, allowing it to "recursively debug, develop, and self-improve," according to Significant Gravitas. It remains to be seen how effective these self-updates are.

“Auto-GPT is able to actually take those responses and execute them in order to make some larger task happen,” Engels said, including coming up with its own prompts in response to new information.
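Stripped of the web browsing, memory, and text-to-speech pieces, the loop behind such agents can be sketched in a few lines of Python. Everything below is an illustrative toy, not Auto-GPT's actual code: ask_model() stands in for a call to an LLM such as GPT-4, and the only "tool" is a fake search.

def ask_model(goal: str, history: list[str]) -> str:
    """Stand-in for an LLM call that proposes the next action toward the goal."""
    # A real agent would send the goal and history to a model such as GPT-4
    # and parse a structured action out of the reply.
    return "search: " + goal if not history else "finish"


def run_tool(action: str) -> str:
    """Execute a proposed action and return its result as text."""
    if action.startswith("search:"):
        return "(pretend search results for '" + action[len("search:"):].strip() + "')"
    return "done"


def agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = ask_model(goal, history)          # the model decides what to do next
        if action == "finish":
            break
        result = run_tool(action)                  # the program carries the action out
        history.append(action + " -> " + result)   # the result becomes new context
    return history


print(agent("summarize recent AI news"))

The essential difference from a plain chatbot is that last step: the result of each action is fed back to the model so it can decide, on its own, what to do next.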

Auto-GPT became the #1 trending repository on Github almost immediately after its launch, earning over 61,000 stars by Friday night and spawning a slew of offshoots. Over the last week, the program has led Twitter's trending tab, with innumerable programmers and entrepreneurs offering their perspectives.

Prior to publishing, Richards and Significant Gravitas did not respond to the Star's requests for comment. Twitter has been flooded with users describing their uses for Auto-GPT, ranging from creating business blueprints to automating to-do lists.

While anyone may use Auto-GPT, it does require some programming skills to set up. Helpfully, users have produced AgentGPT, which brings Auto-GPT-style agents into the web browser, allowing anyone to make their own AI agents.

Given the program's skills and affordability, AI agents may eventually replace humans in positions such as customer service representative, content writer, and even financial advisor. At the moment, the technology has flaws: ChatGPT has been known to fabricate news reports or scientific studies, while Auto-GPT has struggled to stay on task. Still, AI is evolving at a dizzying speed, and it's impossible to predict what will happen next, according to Engels.

“We don’t really know at this point what it’s going to be or even what the next iteration of it is going to look like,” he said. “Things are still very much in the development stage right now.”

ChatGPT: Researcher Develops Data-Stealing Malware Using AI


Ever since the introduction of ChatGPT last year, it has created a buzz among tech enthusiasts all around the world with its ability to create articles, poems, movie scripts, and much more. The AI can even generate functional code if provided with well-written and clear instructions. 

Despite the security measures put in place by OpenAI, and although the majority of developers use it for harmless purposes, a new analysis suggests that threat actors can still use the AI to create malware.

According to a cybersecurity researcher, ChatGPT was utilised to create a zero-day attack that may be used to collect data from a hacked device. Alarmingly, the malware managed to avoid being detected by every vendor on VirusTotal. 

Forcepoint researcher Aaron Mulgrew said he decided early in the malware development process not to write any code himself and instead to use only cutting-edge approaches often employed by highly skilled threat actors, such as rogue nation-states.

Mulgrew, who called himself a "novice" at developing malware, claimed that he selected the Go implementation language not just because it was simple to use but also because he could manually debug the code if necessary. In order to escape detection, he also used steganography, which conceals sensitive information within an ordinary file or message. 

Creating Dangerous Malware Through ChatGPT 

Mulgrew found a loophole in ChatGPT's safeguards that allowed him to have it write the malware code line by line and function by function.

After compiling the separate functions, he created an executable that steals data discreetly, which he believes is comparable to nation-state malware. The alarming part is that Mulgrew developed such dangerous malware with no advanced coding experience and without the help of any hacking team.

According to Mulgrew, the malware poses as a screensaver app that launches itself automatically on Windows devices. Once launched, the malware looks for various files, such as Word documents, images, and PDFs, and steals any data it can find.

The malware then fragments the data and conceals it within other images on the device. Because these images are subsequently uploaded to a Google Drive folder, the theft is difficult to detect.

Latest from OpenAI 

According to a report by Reuters, the European Data Protection Board (EDPB) has recently established a task force to address privacy issues relating to artificial intelligence (AI), with a focus on ChatGPT.

The action comes after recent decisions by Germany's commissioner for data protection and Italy to regulate ChatGPT, raising the possibility that other nations may follow suit.  

AI can Crack Your Password in Seconds, Here’s how to Protect Yourself

 

Along with the benefits of emerging generative AI services come new hazards. PassGAN, a sophisticated approach to password cracking, has just emerged. Using the latest AI, it was able to crack 51% of common passwords in under a minute and 71% in less than a day.

Microsoft drew attention to the security problems that will accompany the rapid growth of AI last month when it announced its new Security Copilot suite, which will assist security researchers in protecting against malicious use of current technologies.

Home Security Heroes recently released a study demonstrating how frighteningly powerful the latest generative AI is at cracking passwords. The company ran a list of over 15,000,000 credentials from the Rockyou dataset through the new password cracker PassGAN (password generative adversarial network), and the results were shocking.

51% of all popular passwords were cracked in under a minute, 65% in under an hour, 71% in under a day, and 81% in under a month. Rather than requiring manual analysis of leaked password databases, PassGAN is able to "autonomously learn the distribution of real passwords from actual password leaks," which is why AI is making such a difference in password cracking.

How to Prevent AI Password Cracking

Sticking to at least 12 characters of mixed uppercase and lowercase letters plus numbers (or symbols) is what separates easily cracked passwords from difficult-to-crack ones. For the time being, passwords of 18 characters that include both letters and numbers are effectively safe from AI cracking.
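As a rough illustration of the arithmetic behind that advice, the Python sketch below (an illustrative example using only the standard library, not part of the Home Security Heroes study) estimates the entropy of uniformly random passwords of various lengths and generates one:

import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 characters


def entropy_bits(length: int, alphabet_size: int = len(ALPHABET)) -> float:
    """Bits of entropy for a uniformly random password of the given length."""
    return length * math.log2(alphabet_size)


def random_password(length: int = 16) -> str:
    """Generate a uniformly random password of letters and digits."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


for n in (8, 12, 18):
    print(f"{n} characters of letters and digits: about {entropy_bits(n):.0f} bits of entropy")
print("example password:", random_password())

Each extra character multiplies the search space by the size of the alphabet, which is why an 18-character random password is so much harder to crack than a 12-character one.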

Seeing how powerful AI can be for password cracking is a good reminder not only to use strong passwords but also to:
  • Use 2FA/MFA (non-SMS-based whenever possible).
  • Avoid reusing passwords across accounts.
  • Use password generators when feasible.
  • Change passwords on a regular basis, especially for important accounts.
  • Avoid using public WiFi, especially for banking and other sensitive accounts.
On the Home Security Heroes website, there is a program that allows you to test your own passwords against AI. However, it's best not to enter any of your genuine passwords if you want to check out the AI password analyser - instead, enter a random one.

Clearview AI Scrapes 30 Billion Images Illicitly, Giving Them to Cops


Clearview's CEO has recently acknowledged that the company's notorious facial recognition database, used by law enforcement agencies across the nation, was apparently built in part from 30 billion photos the company illicitly scraped from Facebook and other social media users without their consent. Critics say this practice creates a "perpetual police line-up," even for individuals who have done nothing wrong.

The company often boasts of its potential for identifying rioters involved in the January 6 attack on the Capitol, saving children from being abused or exploited, and assisting in the exoneration of those who have been falsely accused of crimes. Yet, critics cite two examples in Detroit and New Orleans where incorrect face recognition identifications led to unjustified arrests. 

Last month, the company's CEO, Hoan Ton-That, admitted in an interview with the BBC that Clearview used photos without users' knowledge. Those photos made possible the organization's enormous database, which is promoted to law enforcement on its website as a tool "to bring justice to victims."

What Happens When Unauthorized Data is Scraped 

Privacy advocates and digital platforms have long criticized the technology for its intrusive aspects, with major social media giants like Facebook sending cease-and-desist letters to Clearview in 2020, accusing the company of violating their users’ privacy. 

"Clearview AI's actions invade people's privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services," says a Meta spokesperson in an email Insider, following the revelation. 

The spokesperson continues by informing Insider that Meta, since then, has made “significant investments in technology and devotes substantial team resources to combating unauthorized scraping on Facebook products.”

When unauthorized scraping is discovered, the company may take action “such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers to protect user data,” the spokesperson said. 

In spite of the platforms' policies, once a photo has been scraped by Clearview AI, biometric face prints are made and cross-referenced in the database, permanently linking individuals to their social media profiles and other identifying information. People in the photos have little recourse for removing themselves from the database.

Searching Clearview's database is one of many ways police agencies can use social media content to aid investigations, alongside making requests directly to the platforms for user data. Although law enforcement's use of Clearview AI or other facial recognition technologies is unmonitored in most states and not subject to federal regulation, some critics argue that it should be banned outright.

Are Chatbots Making it Difficult to Trace Phishing Emails?


Chatbots are removing a crucial line of defense against bogus phishing emails by fixing the grammatical and spelling errors that have long been a key clue for spotting fraudulent mail, according to experts.

The warning comes as the law enforcement agency Europol published an international advisory concerning the potential criminal use of ChatGPT and other "large language models."

How Do Chatbots Aid Phishing Campaigns?

Phishing campaigns are frequently used as bait by cybercriminals to lure victims into clicking links that download malicious software or provide sensitive information like passwords or pin numbers. 

According to the Office for National Statistics, half of all adults in England and Wales reported receiving a phishing email last year, making phishing emails one of the most frequent kinds of cyber threat. 

However, artificial intelligence (AI) chatbots can now rectify the flaws that trip spam filters or alert human readers, addressing a basic flaw with some phishing attempts—poor spelling and grammar. 

According to Corey Thomas, chief executive of the US cybersecurity firm Rapid7 “Every hacker can now use AI that deals with all misspellings and poor grammar[…]The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.” 

The data suggests that ChatGPT, the market leader that rose to fame after its launch last year, is being used for cybercrime, with "large language models" (LLMs) finding one of their first significant applications in the crafting of malicious communications.

Phishing emails are increasingly being produced by bots, according to data from cybersecurity specialists at the UK company Darktrace. This allows crooks to send longer messages that are less likely to be detected by spam filters and to get beyond the bad English used in human-written emails. 

Since ChatGPT surged in popularity last year, the overall volume of malicious email scams that try to trick users into clicking a link has decreased, but the emails that remain are more linguistically complex. According to Max Heinemeyer, the company's chief product officer, this indicates that a sizable proportion of the threat actors who craft phishing and other harmful emails have gained the ability to produce longer, more complicated prose, likely using an LLM such as ChatGPT or something similar.

Europol’s advisory on the use of AI chatbots flags similar concerns, including fraud and social engineering, disinformation, and cybercrime. According to the report, such systems can guide potential offenders through the steps needed to harm others: because the model can deliver detailed instructions in response to pertinent questions, it becomes much easier for criminals to understand and ultimately commit various forms of crime. 

In a report published this month, the US-Israeli cybersecurity company Check Point claimed to have created a convincing-looking phishing email using the most recent version of ChatGPT. It bypassed the chatbot's safety guardrails by telling it that it wanted a sample phishing email for a staff-awareness program. 

Google has also entered the chatbot race with last week's launch of its Bard product in the US and the UK. When the Guardian asked Bard to write an email that would convince someone to click a suspicious-looking link, it complied readily, if without much finesse: "I am writing to you today to give a link to an article that I think you will find interesting." 

Additionally, Google highlighted its “prohibited use” policy for AI, according to which users are not allowed to use its AI models to create content for the purpose of “deceptive or fraudulent activities, scams, phishing, or malware”. 

Asked about the issue, OpenAI, the company behind ChatGPT, pointed to its terms of use, which state that users “may not use the services in a way that infringes, misappropriates or violates any person’s rights”.  

Security Copilot: Microsoft Employs GPT-4 to Improve Security Incident Response


Microsoft has been integrating Copilot AI assistants across its product line as part of its $10 billion investment in OpenAI. The latest is Microsoft Security Copilot, which aids security teams in investigating and responding to security incidents. 

According to Chang Kawaguchi, vice president and AI Security Architect at Microsoft, defenders are having a difficult time coping with a dynamic security environment. Microsoft Security Copilot is designed to make defenders' lives easier by using artificial intelligence to help them catch incidents that they might otherwise miss, improve the quality of threat detection, and speed up response. To locate breaches, connect threat signals, and conduct data analysis, Security Copilot makes use of both the GPT-4 generative AI model from OpenAI and the proprietary security-based model from Microsoft. 

The objective of Security Copilot is to make “Defenders’ lives better, make them more efficient, and make them more effective by bringing AI to this problem,” Kawaguchi says. 

How Does Security Copilot Work? 

Security Copilot ingests and interprets huge amounts of security data, such as the 65 trillion security signals Microsoft pulls in every day and the data collected by the Microsoft products an organization uses, including Microsoft Sentinel, Defender, Entra, Priva, Purview, and Intune. With it, analysts can investigate incidents and research information on common vulnerabilities and exposures. 

When analysts and incident responders type "/ask about" into a text prompt, Security Copilot responds with information based on what it knows about the organization's data. 

According to Kawaguchi, this lets security teams connect the dots between the various elements of a security incident, such as a suspicious email, a malicious software file, or the compromised components of a system. 

The queries can be general, such as an explanation of a vulnerability, or specific to the organization’s environment, such as searching the logs for signs that a particular Exchange flaw has been exploited. And because Security Copilot uses GPT-4, it can respond to questions posed in natural language. 
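Microsoft has not published how Security Copilot is implemented, but the general pattern it describes, retrieving relevant security data and handing it to a GPT-4-class model alongside the analyst's question, can be sketched roughly as follows. The search_logs helper, the log lines, and the prompt wording are invented for illustration; the sketch assumes the openai Python package (version 1.x or later) with an API key in the environment and is not Microsoft's code.

```python
# Rough, hypothetical sketch of an "ask a question over security data"
# pattern (NOT Microsoft's implementation): pull a few relevant log lines,
# then pass them to a GPT-4-class model together with the analyst's question.
# Assumes the openai package >= 1.0 and OPENAI_API_KEY set in the environment;
# search_logs() and the log format are made up for illustration.
from openai import OpenAI

def search_logs(keyword: str) -> list[str]:
    # Placeholder for a real SIEM/log query (e.g. against Sentinel or Splunk).
    fake_logs = [
        "2023-03-28T10:02:11Z exchange01 IIS POST /autodiscover ... 200",
        "2023-03-28T10:02:13Z exchange01 w3wp.exe spawned cmd.exe",
    ]
    return [line for line in fake_logs if keyword.lower() in line.lower()]

def ask_about(question: str, keyword: str) -> str:
    context = "\n".join(search_logs(keyword))
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a security-analysis assistant."},
            {"role": "user", "content": f"Logs:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask_about("Do these logs suggest an Exchange exploit?", "exchange01"))
```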

The analyst can review brief summaries of what happened and then follow Security Copilot's prompts to dig deeper into the investigation. These steps can all be recorded and shared with other security team members, stakeholders, and senior executives via a "pinboard." Completed tasks are saved and remain accessible, and an automatically generated summary is updated as new activities are finished. 

“This is what makes this experience more of a notebook than a chat bot experience,” says Kawaguchi, adding that the tool can also create PowerPoint presentations based on the investigation conducted by the security team, which can then be used to share details of the incident. 

The company claims that Security Copilot is not designed to replace human analysts, but rather to give them the information they need to work quickly and efficiently throughout an investigation. Threat hunters can also use the tool to check whether an organization is exposed to known vulnerabilities and exploits by examining each asset in the environment.  

Clearview: Face Recognition Software Used by US Police


Clearview, a facial recognition company, has reportedly run nearly a million searches on behalf of US police. Hoan Ton-That, Clearview's CEO, told the BBC that the firm now holds some 30 billion images scraped from platforms including Facebook, taken without users’ consent. 

The company has repeatedly been fined millions of dollars in Europe and Australia for privacy violations. Critics, however, argue that police use of Clearview puts everyone into a “perpetual police line-up.” 

"Whenever they have a photo of a suspect, they will compare it to your face[…]It's far too invasive," says Matthew Guariglia from the Electronic Frontier Foundation. 

Police have not confirmed the figure of nearly a million searches. However, in a rare admission, Miami Police told the BBC that it uses the software for all types of crime. 

How Does Clearview Work? 

Clearview’s system lets a law enforcement customer upload a photo of a face and search for matches in a database of billions of stored images. It then provides links to where matching images appear online. Clearview is regarded as one of the world's most potent and accurate facial recognition companies. 
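Clearview has not disclosed its matching algorithm, but facial recognition search in general works by reducing each face to a numeric embedding and comparing a probe image against stored embeddings by similarity. The sketch below illustrates that general idea with made-up vectors; it is not Clearview's method.

```python
# Generic sketch of how facial-recognition search typically works:
# each face photo is reduced to a numeric embedding, and a probe image
# is compared against a database of stored embeddings by similarity.
# The embeddings below are made up; real systems use vectors with
# hundreds of dimensions produced by a neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend database: identifier -> face embedding.
database = {
    "profile_A": np.array([0.12, 0.80, 0.55]),
    "profile_B": np.array([0.90, 0.10, 0.30]),
}

probe = np.array([0.15, 0.78, 0.52])  # embedding of the uploaded photo

best = max(database, key=lambda k: cosine_similarity(probe, database[k]))
print("closest match:", best)  # "profile_A" for these made-up vectors
```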

The firm has been barred from providing its services to most US companies after the American Civil Liberties Union (ACLU) accused Clearview AI of violating privacy laws. However, there is an exemption for police, with Mr Ton-That saying that his software is used by hundreds of police forces across the US. 

Yet US police do not routinely disclose whether they use the software, and several US cities, including Portland, San Francisco, and Seattle, have banned it. 

Police frequently portray the use of facial recognition technology to the public as being limited to serious or violent offenses. 

Yet in an interview about the effectiveness of Clearview, Miami Police admitted to using the software for every type of crime, from murder to shoplifting. Assistant Chief of Police Armando Aguilar said his team uses the software around 450 times a year and that it has helped solve murder cases. 

Yet, critics claim that there are hardly any rules governing the use of facial recognition by police.

A Major Flaw in the AI Testing Framework MLflow can Compromise the Server and Data

MLflow, an open-source framework used by many organizations to manage and record machine-learning tests, has been patched for a critical vulnerability that could enable attackers to extract sensitive information from servers such as SSH keys and AWS credentials. Since MLflow does not enforce authentication by default, and a growing percentage of MLflow deployments are directly exposed to the internet, the attacks can be carried out remotely without authentication.

"Basically, every organization that uses this tool is at risk of losing their AI models, having an internal server compromised, and having their AWS account compromised," Dan McInerney, a senior security engineer with cybersecurity startup Protect AI, told CSO. "It's pretty brutal."

McInerney discovered the flaw and privately reported it to the MLflow project. It was fixed in the framework's version 2.2.1, which was released three weeks ago, but no security fix was mentioned in the release notes.

Path traversal used to include local and remote files

MLflow is a Python-based tool for automating machine-learning workflows. It includes a number of components that enable users to deploy models from various ML libraries, handle their lifecycle (including model versioning, stage transitions, and annotations), track experiments to record and compare parameters and results, and even package ML code in a reproducible format to share with other data scientists. A REST API and command-line interface are available for controlling MLflow.
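For readers unfamiliar with the framework, day-to-day use of MLflow's tracking API looks roughly like the following; the parameter names, metric values, and tracking URI are illustrative only.

```python
# Minimal example of MLflow's experiment-tracking API; parameter and
# metric names are illustrative. Assumes `pip install mlflow` and, for a
# remote tracking server, a URI such as the placeholder shown below.
import mlflow

# mlflow.set_tracking_uri("http://mlflow.internal.example:5000")  # remote server

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    mlflow.log_metric("accuracy", 0.93)
    # mlflow.log_artifact("model.pkl")  # artifacts often land in an S3 bucket
```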

All of these features combine to make the framework an invaluable resource for any organisation experimenting with machine learning. Scans using the Shodan search engine confirm this, revealing a steady increase in publicly exposed MLflow instances over the last two years, with the current count exceeding 800. However, it is likely that many more MLflow deployments exist within internal networks and may be accessible to attackers who gain access to those networks.

"We reached out to our contacts at various Fortune 500's [and] they've all confirmed they're using MLflow internally for their AI engineering workflow,' McInerney tells CSO.

McInerney's vulnerability is identified as CVE-2023-1177 and is rated 10 (critical) on the CVSS scale. He refers to it as local and remote file inclusion (LFI/RFI) via the API, in which remote and unauthenticated attackers can send specially crafted requests to the API endpoint, forcing MLflow to expose the contents of any readable files on the server.

What makes the vulnerability worse is that most organisations configure their MLflow instances to store their models and other sensitive data in Amazon AWS S3. According to Protect AI's review of the configuration of publicly available MLflow instances, seven out of ten used AWS S3. This means that attackers can use the s3:// URL of the bucket utilized by the instance as the source parameter in their JSON request to steal models remotely.
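The fix shipped in MLflow 2.2.1 and its exact details are not described here; purely as an illustration of the kind of server-side check that blocks this class of bug, a handler might validate user-supplied source URIs against an allow-list of schemes and reject traversal sequences. The scheme list below is an assumption for the sketch, not MLflow's actual logic.

```python
# Generic illustration of defending against local/remote file inclusion
# in user-supplied "source" parameters (NOT MLflow's actual patch):
# allow only expected URI schemes and refuse anything that looks like a
# local path or a traversal attempt.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"s3", "https"}  # whatever the deployment actually needs

def is_safe_source(source: str) -> bool:
    parsed = urlparse(source)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False          # rejects file://, bare local paths, etc.
    if ".." in source:
        return False          # rejects path-traversal attempts
    return True

print(is_safe_source("s3://team-models/run-42/model"))       # True
print(is_safe_source("file:///home/user/.aws/credentials"))  # False
print(is_safe_source("/etc/passwd"))                         # False
```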

It also implies that AWS credentials are most likely stored locally on the MLflow server so the framework can access S3 buckets, and these credentials typically live in a file called credentials inside the .aws directory under the user's home directory. The disclosure of AWS credentials can be a serious security breach because, depending on IAM policy, it can give attackers lateral movement capabilities into an organization's AWS infrastructure.

Insecure deployments result from a lack of default authentication

Authentication for accessing the API endpoint would prevent this flaw from being exploited, but MLflow does not implement any authentication mechanism. Simple authentication with a static username and password can be added by placing a proxy server, such as nginx, in front of the MLflow server and forcing authentication through it. Unfortunately, almost none of the publicly exposed instances employ this configuration.
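One quick way to check whether a given deployment is fronted by such authentication is to probe it without credentials and look for an HTTP 401 or 403 challenge. A minimal sketch using the third-party requests library follows; the URL is a placeholder, and the check only verifies that a proxy demands credentials, nothing more.

```python
# Quick check of whether an MLflow tracking server sits behind an
# authenticating proxy: an unprotected instance serves its UI without
# credentials, while a proxied one should answer 401/403.
# The URL is a placeholder; "requests" is a third-party package.
import requests

MLFLOW_URL = "http://mlflow.internal.example:5000"  # placeholder

resp = requests.get(MLFLOW_URL, timeout=5)
if resp.status_code in (401, 403):
    print("Authentication appears to be enforced.")
else:
    print(f"Responded with {resp.status_code} - instance may be exposed.")
```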

McInerney stated, "I can hardly call this a safe deployment of the tool, but at the very least, the safest deployment of MLflow as it stands currently is to keep it on an internal network, in a network segment that is partitioned away from all users except those who need to use it, and put behind an nginx proxy with basic authentication. This still doesn't prevent any user with access to the server from downloading other users' models and artifacts, but at the very least it limits the exposure. Exposing it on a public internet facing server assumes that absolutely nothing stored on the server or remote artifact store server contains sensitive data."

GitHub Introduces the AI-powered Copilot X, which Uses OpenAI's GPT-4 Model

 

The open-source developer platform GitHub, which is owned by Microsoft, has unveiled Copilot X, the company's vision of the future of AI-powered software development.

GitHub has adopted OpenAI's new GPT-4 model and added chat and voice support for Copilot, bringing Copilot to pull requests, the command line, and documentation to answer questions about developers' projects.

'From reading docs to writing code to submitting pull requests and beyond, we're working to personalize GitHub Copilot for every team, project, and repository it's used in, creating a radically improved software development lifecycle,' Thomas Dohmke, CEO at GitHub, said in a statement.

'At the same time, we will continue to innovate and update the heart of GitHub Copilot -- the AI pair programmer that started it all,' he added.

Copilot chat recognizes what code a developer has entered and what error messages are displayed, and it is deeply integrated into the IDE (Integrated Development Environment).

As stated by the company, Copilot chat will join GitHub's previously demoed voice-to-code AI extension, now called 'Copilot voice,' which lets developers give natural language prompts by voice. Furthermore, developers can now sign up for a technical preview of the first AI-generated pull request descriptions on GitHub.

This new feature is powered by OpenAI's new GPT-4 model and adds support for AI-powered tags in pull request descriptions via a GitHub app that organization admins and individual repository owners can install.

GitHub also plans to launch Copilot for docs, an experimental tool that uses a chat interface to give AI-generated answers to documentation questions, including questions about the languages, frameworks, and technologies developers are using.