Key takeaways:
- AI trust is rising but uneven – Younger, educated, and higher-income individuals trust AI more, while rural and lower-income groups remain skeptical, highlighting a growing digital divide.
- AI is reshaping information discovery – AI-generated search results reduce engagement with traditional sources, raising concerns about misinformation, journalism’s future, and the public’s ability to critically assess AI-generated content.
- AI literacy remains low – Many trust AI without understanding it, emphasizing the need for education to prevent blind reliance on AI-generated information and potential biases.

Introduction: Do People Trust AI?
Artificial intelligence has moved from being a futuristic concept to an everyday reality. From automating tasks to powering recommendation engines, AI has seeped into nearly every industry, shaping how businesses operate and how consumers interact with technology.
But as AI adoption accelerates, public trust in the technology remains divided. A series of studies from KPMG (2023) and Rutgers University (2024-2025) reveals a shifting but cautious attitude toward AI. Some see it as an enabler of efficiency, while others fear its unpredictability, ethical concerns, and misinformation risks.
Notably, AI is also changing how people search for and consume information. The rise of tools like ChatGPT, AI-generated search results, and Google’s move toward a zero-click search experience are redefining how users engage with online content. If AI increasingly answers questions without requiring users to visit a webpage, what does this mean for trust, journalism, and the future of SEO?
This article explores how public trust in AI has evolved, who trusts AI the most, and what this means for the future of information discovery and content creation.
AI Trust Over Time: A Shifting Landscape
Public trust in AI has always been a complex issue, balancing optimism about its potential with concerns over misuse. In 2023, a global study by KPMG found that most people were cautious about trusting AI, with 61% expressing skepticism. Trust levels depended heavily on the application: AI in human resources was met with the most resistance, while AI in healthcare was more widely accepted.
By 2024 and into 2025, surveys from Rutgers University showed a noticeable shift. Trust in AI had not only increased but had also surpassed trust in social media and even government institutions like Congress. This change suggests that as AI becomes more embedded in daily life, people are gradually becoming more comfortable with it, especially younger generations and those with higher education levels.
But this rise in trust doesn’t necessarily mean people are engaging with AI more critically. Many respondents admitted to having limited knowledge of AI, and a significant portion struggled to identify AI-generated content. This growing reliance on AI, paired with low AI literacy, raises an important question: are people trusting AI because they understand it better, or simply because it’s becoming unavoidable?
At the same time, AI is also reshaping how people interact with information. The rise of AI-powered search tools, like ChatGPT and Google’s AI-generated overviews, is changing the way users find answers. Instead of clicking on multiple links, users are increasingly getting responses directly from AI, often without verifying the source. This shift could reinforce trust in AI-generated content, even if users remain uncertain about how AI reaches its conclusions.
Who Trusts AI the Most? Breaking Down the Data
Trust in AI is not distributed evenly across the population. Studies from both KPMG and Rutgers University highlight key demographic trends that indicate who is most likely to embrace AI, and who remains skeptical.
Age plays a significant role. In 2023, AI trust levels were generally low across most age groups. However, by 2025, younger adults, especially those between 18 and 24, were the most likely to trust AI, with 60% expressing confidence in the technology. This makes sense given that younger generations have grown up with AI-driven tools, from social media algorithms to voice assistants. AI is a natural part of their digital environment, making them more comfortable with its presence.
Gender also reveals a divide. Men tend to trust AI more than women, with Rutgers’ 2024 study showing a gap of nearly 10 percentage points. This could be influenced by differences in exposure, industry representation, or concerns over bias in AI systems, which have disproportionately affected women in areas like hiring algorithms and facial recognition.
Education and income levels further shape AI trust. Higher-income individuals and those with advanced degrees are significantly more likely to trust AI, with 65% of high earners expressing confidence in businesses using AI responsibly. These groups also tend to have higher AI literacy, which may contribute to their trust in the technology. However, this also raises concerns about an AI divide: if only those with access to education and resources understand AI well enough to trust it, could this further widen economic and technological disparities?
Beyond demographics, geography plays a role as well. Urban populations tend to trust AI more than rural communities. In the Rutgers study, 53% of urban residents expressed confidence in AI, compared to only 38% in rural areas. This could be due to greater exposure to AI-driven technologies in cities, where industries like finance, healthcare, and logistics are increasingly AI-powered.
These trends in AI trust intersect with broader changes in how people search for and consume information. Google’s increasing reliance on AI-generated answers and ChatGPT’s growing role as an alternative to traditional search engines may reinforce these trust gaps. Those who already trust AI may rely on these tools more frequently, while those who are skeptical may avoid them altogether, deepening the divide between AI adopters and those who remain hesitant.
The 2023 KPMG study adds a global dimension to these patterns: people in emerging economies showed slightly higher acceptance of AI than those in developed nations, even as overall trust remained low across most groups.
While these numbers suggest growing confidence in AI, they also highlight a potential issue: AI is becoming a tool of the educated and affluent, while lower-income and less-educated populations may be left behind. This digital divide isn't just about access; it's about trust, understanding, and the ability to critically engage with AI-driven systems.
At the same time, the way people interact with AI is shifting. Instead of relying on traditional search engines, younger users are turning to tools like ChatGPT to find answers. These AI-driven platforms aren't just influencing trust in AI; they're shaping how people consume and validate information. If AI-generated responses become the default, will people question their accuracy, or will they simply assume AI is always right?
AI vs. Other Institutions: The Rise of AI Trust Over Social Media
A striking shift in public perception has emerged over the past two years: people now trust AI more than social media. In 2023, skepticism toward AI was still dominant, with concerns centered on cybersecurity risks, bias, and job displacement. However, by 2024-2025, Rutgers University’s studies found that 47% of Americans trust AI to act in the public interest, higher than trust in Congress (42%) or social media (39%).
This change is significant. Social media has long been viewed as an unreliable source of information, plagued by misinformation and algorithm-driven echo chambers. AI, on the other hand, is increasingly positioned as a neutral tool capable of automating tasks, improving decision-making, and streamlining access to information.
But this doesn’t mean people fully understand how AI reaches its conclusions. A growing number of users now rely on AI-powered search tools like ChatGPT and Google’s AI Overviews instead of social media to find information. These AI-driven responses often provide quick, concise answers without requiring users to click on a source. While this shift reduces reliance on potentially misleading social media posts, it also creates new risks. If AI is generating responses based on existing biases in training data, people may be blindly trusting a system they don’t fully understand.
Google’s transition to a zero-click search model, where AI-generated snippets answer questions directly in search results, reinforces this trend. Users are engaging less with individual websites, reducing opportunities for deeper research or fact-checking. This raises critical questions about the future of content discovery: will people continue questioning information, or will they simply accept AI-generated answers as fact?
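For publishers and SEO teams wondering how this shift shows up in their own traffic, one rough proxy is the share of search impressions that never turn into a click. The sketch below is a minimal illustration, not a definitive measurement: it assumes a Search Console-style CSV export with query, clicks, and impressions columns, and the file name is a placeholder. Search platforms don't report zero-click outcomes directly, so a rising no-click share at stable rankings is only a hint that AI answers may be absorbing visits.

```python
# Minimal sketch: estimate the share of search impressions that never
# produced a click, as a rough proxy for zero-click behavior.
# Assumes a CSV with "query", "clicks", and "impressions" columns;
# adjust the names to match your actual export.
import csv

def zero_click_share(path: str) -> float:
    total_impressions = 0
    total_clicks = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total_impressions += int(row["impressions"])
            total_clicks += int(row["clicks"])
    if total_impressions == 0:
        return 0.0
    # Impressions without a click are only a proxy: true zero-click
    # outcomes aren't reported directly by search consoles.
    return 1 - total_clicks / total_impressions

if __name__ == "__main__":
    share = zero_click_share("queries.csv")  # hypothetical export file
    print(f"Approximate zero-click share: {share:.1%}")
```

Tracked month over month, especially for queries where rankings hold steady, this ratio gives a first-pass signal of whether zero-click behavior is growing for your content.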
The irony is that while trust in AI is increasing, the need for transparency and explainability in AI-generated content has never been greater.
AI and Journalism: The Preference for Human-Created Content
As AI becomes more sophisticated, its role in journalism and content creation is expanding. News organizations are already using AI to generate reports, summarize events, and even draft articles. But despite these advancements, people still overwhelmingly prefer human-written content.
A 2024 Rutgers study found that 62% of respondents trust mainstream journalists, while only 48% trust AI-generated news. This gap highlights an important reality: while AI can process vast amounts of data quickly, people remain skeptical of its ability to provide reliable, unbiased reporting. The fear of AI-generated misinformation is one reason why human journalists still hold more credibility.
At the same time, many people struggle to distinguish between AI-generated and human-written content. Only 13% of respondents felt “very confident” in identifying AI-created news, while 30% felt “somewhat confident.” As AI-generated articles become more common, this lack of awareness raises concerns about how misinformation might spread without people realizing it.
Another issue is the changing nature of how people access news. Traditionally, users would visit trusted news sites or follow journalists on social media. Now, many are turning to AI-powered tools like ChatGPT or Google’s AI Overviews for quick summaries instead of reading full articles. This shift could reshape journalism itself—if fewer people click on news sources, how will independent journalism survive in a digital ecosystem where AI aggregates and summarizes everything?
With Google moving toward a zero-click search model, AI-driven news summaries may replace traditional headlines as the first (and sometimes only) thing people read. If AI determines what news is surfaced and how it’s framed, the question becomes: who is really in control of the narrative?
Related: Learn more about how to influence LLMs
For now, trust in human journalists remains strong, but as AI-generated content becomes harder to detect, the gap between human and machine credibility may continue to narrow.
AI Literacy: The Growing Knowledge Gap
While AI trust is increasing, understanding of AI remains low. The 2024-2025 Rutgers studies found that half of respondents admitted they don’t fully understand AI or how it’s used. AI literacy also correlates with education and income—those with graduate degrees and higher earnings were more confident in their ability to engage with AI critically.
This gap is concerning as more people rely on AI-generated search results and ChatGPT for information. If users don’t understand how AI works or recognize its limitations, they may unknowingly accept biased or incomplete answers.
Experts argue that AI education should start early, integrating AI literacy into K-12 curricula. Without this, AI will continue to be a tool for the tech-savvy, while others struggle to adapt to an increasingly AI-driven world.
The Future of AI Trust: What’s Next?
AI trust is evolving, but concerns over regulation, misinformation, and governance remain. Public sentiment is clear: 71% of respondents now expect AI to be regulated, signaling a growing demand for oversight and accountability.
However, government policy is moving in a different direction. In January 2025, President Donald Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked previous AI regulations and prioritized minimal government intervention to accelerate innovation. This shift reflects the administration’s belief that fewer restrictions will help the U.S. maintain its leadership in AI.
This divergence between public expectations and government action underscores a critical debate: how do we balance AI innovation with ethical safeguards? While businesses push forward with AI-driven products and services, consumers remain wary of misuse, bias, and lack of transparency.
At the same time, AI’s role in information discovery is shifting. With ChatGPT, AI Overviews, and Google’s zero-click search model reducing the need to visit traditional websites, AI is becoming the primary gatekeeper of knowledge. This means trust in AI-generated content will be more important than ever. If AI is curating, summarizing, and even creating information, who ensures its accuracy and fairness?
For businesses, journalists, and policymakers, the challenge is clear: transparency, responsible AI use, and public education will define AI’s long-term credibility. Those who fail to address these concerns risk losing public trust in an AI-driven future.