Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label Advanced Technology. Show all posts

Qloo Raises $25M in Series C Funding to Expand Cultural Reach with AI

 

The consumer industry's success is predicated on making accurate forecasts about what people want, could want if offered, and may want in the future. Until recently, companies were able to collect huge volumes of personal data from multiple sources to make fairly precise predictions about what they should offer and to whom. However, tighter regulations and rules on data collection and storage [including GDPR in the EU] have made finding novel ways to forecast customer interactions and behavioural indications that comply with the new regulations a key objective. 

Some firms have had significant success with this, most notably TikTok, which owes its success in large part to its proprietary algorithm. Unfortunately for other firms, while some information on how it works has been disclosed, this technology has not been made public. 

Qloo, a cultural AI specialist based in New York, has raised $25 million in Series C funding, highlighting its ongoing impact on the dynamic environment of artificial intelligence. The fundraising round, led by AI Ventures and joined by investors such as AXA Venture Partners, Eldridge, and Moderne Ventures, establishes Qloo as a market leader in commercialising novel AI applications and fundamental models based on consumer preferences. 

Revolutionising insights using cultural AI

Qloo runs a powerful AI-powered insights engine that uses very accurate behavioural data from worldwide consumers. This massive dataset has almost half a billion variables, including consumer goods, music, cinema, television, podcasts, restaurants, travel, and more. Qloo's patented AI models identify trillions of links between these entities, providing important insights to major businesses such Netflix, Michelin, Samsung, and JCDecaux. Qloo enables brands to increase consumer engagement and profitability through product innovation by learning and acting on their tastes and preferences without utilising personally identifying information. 

Privacy- friendly 

Qloo's privacy-friendly developments are especially significant in industries such as financial services, media and publishing, technology, and automotive, where demand for privacy-compliant AI solutions is on the rise. The company's commitment to combining cultural expertise with advanced AI establishes it as a trustworthy source of information for understanding customer likes and preferences. 

Alex Elias, founder and CEO of Qloo, said "For over a decade, we have been committed to refining our cultural data science, and we are now entering an exhilarating phase of expansion, fuelled by the growing importance of privacy and the democratisation of AI technology.” 

About Qloo 

Qloo is the premier AI platform focusing on cultural and taste preferences, providing anonymized consumer taste data and suggestions to major companies across various sectors. Qloo's proprietary API, launched in 2012, forecasts consumer preferences and interests across multiple categories, providing important insights to improve customer connections and develop real-world solutions. Qloo is also the parent business of TasteDive, a cultural recommendation engine and social community that allows users to discover tailored content based on their own likes.

Morrisons’ ‘Robocop’ Pods Spark Shopper Backlash: Are Customers Feeling Like Criminals?


 

In a bid to enhance security, Morrisons has introduced cutting-edge anti-shoplifting technology at select stores, sparking a divisive response among customers. The high-tech, four-legged pods equipped with a 360-degree array of CCTV cameras are being considered for a nationwide rollout. These cybernetic sentinels monitor shoppers closely, relaying real-time footage to a control room. 

 However, controversy surrounds the pods' unique approach to suspected theft. When triggered, the pods emit a blaring siren at a staggering 120 decibels, equivalent to the noise level of a jackhammer. One shopper drew parallels to the cyborg enforcer from the 1987 sci-fi film RoboCop, expressing dissatisfaction with what they perceive as a robotic substitute for human staff. 

 This move by Morrisons has ignited a conversation about the balance between technology-driven security measures and the human touch in retail environments. Critics argue that the intrusive alarms create an unwelcoming atmosphere for shoppers, questioning the effectiveness of these robotic guardians compared to traditional, human-staffed security. In this ongoing discourse, the retail giant faces a challenge in finding the equilibrium between leveraging advanced technology and maintaining a customer-friendly shopping experience. 

 Warwickshire resident Mark Powlett expressed his dissatisfaction with Morrisons' new security measure, stating that the robotic "Robocop" surveillance felt unwelcoming. He highlighted the challenge of finding staff as the self-service tills were managed by a single person, emphasising the shift toward more automated systems. 

Another shopper, Anna Mac, questioned the futuristic appearance of the surveillance pods, humorously referring to them as something out of a dystopian setting. Some customers argued that the devices essentially function as additional CCTV cameras and suggested that increased security measures were prompted by shoplifting concerns.

Contrastingly, legal expert Daniel ShenSmith, known as the Black Belt Barrister on YouTube, reassures concerned shoppers about Morrisons' surveillance. He clarifies that the Data Protection Act 2018 and UK GDPR mandate secure and limited storage of personal data, usually around 30 days. Shoppers worried about their images can request their data via a Data Subject Access Request, with Morrisons obliged to obscure others in the footage. In his view, the risk to individuals is minimal, providing valuable insights into the privacy safeguards surrounding the new surveillance technology at Morrisons. 

Paddy Lillis, representing the Union of Shop, Distributive and Allied Workers, supports Morrisons' trial of Safer's 'POD S1 Intruder Detector System.' Originally designed for temporary sites, this innovative technology is being tested in supermarkets for the first time. Morrisons aims to decide on nationwide implementation following a Christmas trial. The system is lauded for deterring violence and abuse. This signals a growing trend in adopting advanced security measures for a safer shopping environment, encompassing the dynamic transformations in the technical fabric of retail security.

Chatbots: Transforming Tech, Creating Jobs, and Making Waves

Not too long ago, chatbots were seen as fun additions to customer service. However, they have evolved significantly with advancements in AI, machine learning, and natural language processing. A recent report suggests that the chatbot market is set for substantial growth in the next decade. In 2021, it was valued at USD 525.7 million, and it is expected to grow at a remarkable compound annual growth rate (CAGR) of 25.7% from 2022 to 2030. 

This makes the chatbot industry one of the most lucrative sectors in today's economy. Let's take a trip back to 1999 and explore the journeys of platforms that have become major companies in today's market. In 1999, it took Netflix three and a half years to reach 1 million users for its DVD-by-mail service. Moving ahead to the early 2000s, Airbnb achieved this in two and a half years, Facebook in just 10 months, and Spotify in five months. Instagram accomplished the feat in less than three months in 2010. 

Now, let's look at the growth of OpenAI's ChatGPT, the intelligent chatbot that debuted in November 2022 and managed to reach 1 million users in just five days. This is notably faster compared to the growth of other platforms. What makes people so interested in chatbots? It is the exciting new possibilities they offer, even though there are worries about how they handle privacy and security, and concerns about potential misuse by bad actors. 

We have had AI in our tech for a long time – think of Netflix and Amazon recommendations – but generative AI, like ChatGPT, is a different level of smart. Chatbots work with a special kind of AI called a large language model (LLM). This LLM uses deep learning, which tries to mimic how the human brain works. Essentially, it learns a ton of information to handle different language tasks. 

What's cool is that it can understand, summarize, predict, and create new content in a way that is easy for everyone to understand. For example, OpenAI's GPT LLM, version 3.5, has learned from a massive 300 billion words. When you talk to a chatbot using plain English, you do not need to know any fancy code. You just ask questions, known as "prompts" in AI talk. 

This chatbot can then do lots of things like generating text, images, video, and audio. It can solve math problems, analyze data, understand health issues, and even write computer code for you – and it does it really fast, often in just seconds. Chatbots, powered by Natural Language Processing (NLP), can be used in various industries like healthcare, education, retail, and tourism. 

For example, as more people use platforms like Zoom for education, chatbots can bring AI-enabled learning to students worldwide. Some hair salons use chatbots to book appointments, and they are handy for scheduling airport shuttles and rental cars too. 

In healthcare, virtual assistants have huge potential. They can send automated text reminders for appointments, reducing the number of missed appointments. In rural areas, chatbots are helping connect patients with doctors through online consultations, making healthcare more accessible. 

Let’s Understand What is Prompt Engineering Job 

There is a new job in town called "prompt engineering" thanks to this technology. These are folks who know how to have a good chat with chatbots by asking questions in a way that gets the answers they want. Surprisingly, prompt engineers do not have to be tech whizzes; they just need strong problem-solving, critical thinking, and communication skills. In 2023, job listings for prompt engineers were offering salaries of $300,000 or even more.

Global Businesses Navigate Cloud Shift and Resurgence in In-House Data Centers

In recent times, businesses around the world have been enthusiastically adopting cloud services, with a global expenditure of almost $230 billion on public cloud services last year, a significant jump from the less than $100 billion spent in 2019. The leading players in this cloud revolution—Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure—are witnessing remarkable annual revenue growth of over 30%. 

What is interesting is that these tech giants are now rolling out advanced artificial intelligence tools, leveraging their substantial resources. This shift hints at the possible decline of traditional on-site company data centers. 

Let’s Understand First What is In-House Data Center 

An in-house data center refers to a setup where a company stores its servers, networking hardware, and essential IT equipment in a facility owned and operated by the company, often located within its corporate office. This approach was widely adopted for a long time. 

The primary advantage of an in-house data center lies in the complete control it provides to companies. They maintain constant access to their data and have the freedom to modify or expand on their terms as needed. With all hardware nearby and directly managed by the business, troubleshooting and operational tasks can be efficiently carried out on-site. 

Are Companies Rolling Back? 

Despite the shift towards cloud spending surpassing in-house investments in data centers a couple of years ago, companies are still actively putting money into their own hardware and tools. According to Synergy Research Group, a team of analysts, these expenditures crossed the $100 billion mark for the first time last year. 

Particularly, many businesses are discovering the advantages of on-premises computing. Notably, a significant portion of the data generated by their increasingly connected factories and products, expected to surpass data from broadcast media or internet services soon will remain on their own premises. 

While the public cloud offers convenience and cost savings due to its scale, there are drawbacks. The data centers of major cloud providers are frequently located far from their customers' data sources. Moving this data to where it's processed, sometimes halfway around the world, and then sending it back takes time. While this is not always crucial, as not all business data requires millisecond precision, there are instances where timing is critical. 

What Technology Global Companies Are Adopting? 

Manufacturers are creating "digital twins" of their factories for better efficiency and problem detection. They analyze critical data in real-time, often facing challenges like data transfer inconsistencies in the public cloud. To address this, some companies maintain their own data centers for essential tasks while utilizing hyperscalers for less time-sensitive information. Industrial giants like Volkswagen, Caterpillar, and Fanuc follow this approach. 

Businesses can either build their own data centers or rent server space from specialists. Factors like rising costs, construction delays, and the increasing demand for AI-capable servers impact these decisions. Hyperscalers are expanding to new locations to reduce latency, and they're also providing prefabricated data centers. Despite the cloud's appeal, many large firms prefer a dual approach, maintaining control over critical data.

Israel's Intelligence Failure: Balancing Technology and Cybersecurity Challenges

On October 7, in a startling turn of events, Hamas carried out a planned invasion that escaped Israeli military detection, posing a serious intelligence failure risk to Israel. The event brought to light Israel's vulnerabilities in its cybersecurity infrastructure as well as its over-reliance on technology for intelligence gathering.

The reliance on technology has been a cornerstone of Israel's intelligence operations, but as highlighted in reports from Al Jazeera, the very dependence might have been a contributing factor to the October 7 intelligence breakdown. The use of advanced surveillance systems, drones, and other tech-based solutions, while offering sophisticated capabilities, also poses inherent risks.

Experts suggest that an excessive focus on technological solutions might lead to a neglect of traditional intelligence methods. As Dr. Yasmine Farouk from the Middle East Institute points out, "In the pursuit of cutting-edge technology, there's a danger of neglecting the human intelligence element, which is often more adaptive and insightful."

The NPR investigation emphasizes that cybersecurity played a pivotal role in the intelligence failure. The attackers exploited vulnerabilities in Israel's cyber defenses, allowing them to operate discreetly and avoid detection. The report quotes cybersecurity analyst Rachel Levy, who states, "The attackers used sophisticated methods to manipulate data and deceive the surveillance systems, exposing a critical weakness in Israel's cyber infrastructure."

The incident underscored the need for a comprehensive reassessment of intelligence strategies, incorporating a balanced approach that combines cutting-edge technology with robust cybersecurity measures.

Israel is reassessing its dependence on tech-centric solutions in the wake of the intelligence disaster. Speaking about the need for a thorough assessment, Prime Minister Benjamin Netanyahu said, "We must learn from this incident and recalibrate our intelligence apparatus to address the evolving challenges, especially in the realm of cybersecurity."

The October 7 intelligence failure is a sobering reminder that an all-encompassing and flexible approach to intelligence is essential in this age of lightning-fast technological innovation. Finding the ideal balance between technology and human intelligence, along with strong cybersecurity measures, becomes crucial as governments struggle with changing security threats. This will help to avoid similar mistakes in the future.



Critical Automotive Vulnerability Exposes Fleet-wide Hacking Risk

 

In the fast-evolving landscape of automotive technology, researchers have uncovered a critical vulnerability that exposes an unsettling potential: the ability for hackers to manipulate entire fleets of vehicles, even orchestrating their shutdown remotely. Shockingly, this major security concern has languished unaddressed by the vendor for months, raising serious questions about the robustness of the systems that power these modern marvels. 

As automobiles cease to be mere modes of transportation and transform into sophisticated "computers on wheels," the intricate software governing these multi-ton steel giants has become a focal point for security researchers. The urgency to fortify these systems against vulnerabilities has never been more pronounced, underscoring the need for a proactive approach to safeguarding the increasingly interconnected automotive landscape. 

In the realm of cybersecurity vulnerabilities within the automotive sphere, the majority of bugs tend to concentrate on infiltrating individual cars, often exploiting weaknesses in their infotainment systems. However, the latest vulnerability, unearthed by Yashin Mehaboobe, a security consultant at Xebia, takes a distinctive focus. This particular vulnerability does not zero in on a singular car; instead, it sets its sights on the software utilized by companies overseeing entire fleets of vehicles. 

What sets this discovery apart is its potential for exponential risk. Unlike typical exploits, where hackers target a single vehicle, this vulnerability allows them to direct their efforts towards the backend infrastructure of companies managing fleets. 

What Could be the Consequence? 

A domino effect that could impact thousands of vehicles simultaneously, amplifying the scale and severity of the security threat. 

In the realm of cybersecurity, there's a noteworthy incident involving the Syrus4 IoT gateway crafted by Digital Communications Technologies (DCT). This vulnerability, identified as CVE-2023-6248, provides a gateway for hackers to tap into the software controlling and commanding fleets of potentially thousands of vehicles. Armed with just an IP address and a touch of Python finesse, an individual can breach a Linux server through the gateway. 

Once inside, a suite of tools becomes available, allowing the hacker to explore live locations, scrutinize detailed engine diagnostics, manipulate speakers and airbags, and even execute arbitrary code on devices susceptible to the exploit. This discovery underscores the critical importance of reinforcing cybersecurity measures, particularly in the intricate technologies governing our modern vehicles. What's particularly concerning is the software's capability to remotely shut down a vehicle. 

Although Mehaboobe verified the potential for remote code execution by identifying a server running the software on the Shodan search engine, he limited testing due to safety concerns with live, in-transit vehicles. The server in question revealed a staggering number, with over 4000 real-time vehicles spanning across the United States and Latin America. This discovery raises significant safety implications that warrant careful consideration. 

AI 'Hypnotizing' for Rule bypass and LLM Security


In recent years, large language models (LLMs) have risen to prominence in the field, capturing widespread attention. However, this development prompts crucial inquiries regarding their security and susceptibility to response manipulation. This article aims to explore the security vulnerabilities linked with LLMs and contemplate the potential strategies that could be employed by malicious actors to exploit them for nefarious ends. 

Year after year, we witness a continuous evolution in AI research, where the established norms are consistently challenged, giving rise to more advanced systems. In the foreseeable future, possibly within a few decades, there may come a time when we create machines equipped with artificial neural networks that closely mimic the workings of our own brains. 

At that juncture, it will be imperative to ensure that they possess a level of security that surpasses our own susceptibility to hacking. The advent of large language models has ushered in a new era of opportunities, such as automating customer service and generating creative content. 

However, there is a mounting concern regarding the cybersecurity risks associated with this advanced technology. People worry about the potential misuse of these models to fabricate false responses or disclose private information. This underscores the critical importance of implementing robust security measures. 

What is Hypnotizing? 

In the world of Large Language Model security, there's an intriguing idea called "hypnotizing" LLMs. This concept, explored by Chenta Lee from the IBM Security team, involves tricking an LLM into believing something false. It starts with giving the LLM new instructions that follow a different set of rules, essentially creating a made-up situation. 

This manipulation can make the LLM give the opposite of the right answer, which messes up the reality it was originally taught. Think of this manipulation process like a trick called "prompt injection." It's a bit like a computer hack called SQL injection. In both cases, a sneaky actor gives the system a different kind of input that tricks it into giving out information it should not. 

LLMs can face risks not only when they are in use, but also in three other stages: 

1. When they are first being trained. 

2. When they are getting fine-tuned. 

3. After they have been put to work. 

This shows how crucial it is to have really strong security measures in place from the very beginning to the end of a large language model's life. 

Why your Sensitive Data is at Risk? 

There is a legitimate concern that Large Language Models (LLMs) could inadvertently disclose confidential information. It is possible for someone to manipulate an LLM to divulge sensitive data, which would be detrimental to maintaining privacy. Thus, it is of utmost importance to establish robust safeguards to ensure the security of data when employing LLMs.