
On the Razor’s Edge

Seismic Report 2025
AI vs. Everything we care about
For the first time, this report reveals emerging tensions in the narrative around AI. People are beginning to see it as something that could make their lives worse in deeply personal ways.

Previous polls have shown general anxiety about AI, though most of us still rank it low among social priorities.

But now we can see that’s only part of the story.

Our findings show that people already care about AI — they just don’t always realize it. A deeper public understanding is emerging. People are starting to feel how AI might affect their lives.

Download full report
People think AI will worsen almost everything we care about, even as they rank AI low on their list of concerns.

We live in a world full of urgent concerns: war, climate change, unemployment. Against these, AI still feels like a gimmick to many. Yet experts warn that AI will reshape all of these issues and more. Do people see the connection? What will make them care? Critically, are some of us more attuned to the promise and risks of AI than others?

This research is the first large-scale effort to answer these questions. We polled 10,000 people across the U.S., U.K., France, Germany, and Poland to understand how AI fits into their broader hopes and fears for the future.

Key Findings

We see AI as a threat to what we care about.
Overwhelmingly, people think that over the near term AI will worsen almost everything they care about. We asked whether AI would improve or worsen a range of salient issues, from the economy to politics, health, and society. The pattern is clear: the balance of opinion is negative for every issue except health care and pandemic prevention. Unemployment, misinformation, and war and terrorism are where people expect AI to do the most damage.
The balance of opinion (the share of respondents who said an issue would improve with AI, minus the share who said it would get worse):
↓20% Unemployment
↓19% Disinformation or misinformation
↓15% War and terrorism
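As an illustration of how these net scores work (using hypothetical figures, not survey results): if 40% of respondents said an issue would improve with AI and 55% said it would get worse, the balance of opinion would be 40% − 55% = −15%, shown as a ↓15% net score.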
We wonder: will the future be evenly distributed?

AI offers enormous promise to enhance human potential and productivity. To truly deliver on that promise, this technology needs to have the broadest possible reach across the workforce. But, as with every technological leap, some of us will find adopting and adapting to the new technology easier than others. The best outcomes depend on overcoming these challenges.

Our research shows that people have an innate understanding of this fact.

Women are twice as likely as men to worry about AI. And with cause: one UN report, for example, found that women are three times more likely than men to have their jobs disrupted by AI.

The same divide shows up across income levels. The higher your income, the more optimistic you are about AI.

This is an economic issue. And our findings show an emerging understanding that well-regulated AI can lead to broadly better outcomes. Only 15% of people think there is enough regulation around AI, while 45% think there should be more.

2.2x more women than men are worried about AI.
Students especially feel short-changed.

Students and recent graduates feel especially squeezed. They're daunted by the future of work, and most feel their schools aren't helping them prepare for it. Half of students worry that what they're studying will no longer be useful by the time they look for a job.

50% fear their studies will be outdated by the time they graduate.
57% feel daunted by what the future of work looks like.
41% say their education helped them grasp AI’s career impact.
People don't trust AI with their money, their bodies, or their kids.

Overwhelmingly, people wouldn't trust an AI to decide who gets welfare support, wouldn't accept health care decisions made by AI, and wouldn't put their finances in the hands of an AI. People are against both AI teachers and AI money managers, but they're more likely to let an AI teach their kids than manage their money.

12% would agree to AI-recommended surgery.
12% would trust AI to manage their finances.
15% would accept an AI as their child’s teacher and mentor.
“The future feels already decided—and I’m not part of it”

Reality check

We’re more anxious about losing love than losing jobs. More people fear AI replacing relationships (60%) than triggering mass unemployment (57%).
Of all the things AI could affect, we fear for our relationships the most.
The most frequent use of AI today is for companionship and therapy, so it's no surprise that we're hearing more and more stories of people falling in love with their AI chatbots. Our research shows that this trend is among the most concerning to people: 60% are moderately or extremely worried about the effects of AI on human relationships, and only 10% aren't worried at all. As in many parts of our report, we can see here how culture shapes the way we adopt AI. Americans, for example, are twice as likely as the French to consider a romantic relationship with AI cheating.
60% worry AI could replace human relationships.
10% aren’t concerned about AI affecting relationships.
67% of parents feel uneasy about their child falling for an AI.
15% wouldn’t be concerned about an AI relationship.

Reality check

AI love is betrayal. Unless you’re French. Half of Americans (50%) say an emotional AI relationship counts as cheating—compared to just 37% of French respondents.
“I am worried about the flow of true information because ultimately if we don't have trust, we have nothing.”
Male, USA
We’re on the razor’s edge.
Despite widespread worry, public opinion on AI appears neutral, split evenly between optimism and pessimism. But this balance is misleading: views differ sharply across groups, and tensions are rising as AI rapidly expands into every part of life. We are balanced on the razor’s edge. And finely balanced things need only a touch to topple over.
This is why our mapping of the five key groups is so important. These groups are one major news story away from being mobilized to take civic action about AI. Their experiences in the near term may be critical to how the debate about AI plays out in society. We need to be paying very close attention.
31% say AI gives them hope for humanity’s future.
32% don’t feel hopeful about our future with AI.
46% believe AI’s benefits will mostly go to the elites.
“We can’t let AI replace free human decision making”
Female, France

About Seismic

Seismic is a global non-profit dedicated to ensuring that the integration of AI into our societies is beneficial for everyone. We use the power of media to raise awareness, build understanding of AI, create urgency, and encourage action among key decision-makers and their constituents.
Making responsible AI a human priority
The full report

Download the full report to explore all the details, including our analysis of the five groups most likely to engage in civic action on AI in the near future.
Download full report
TL;DR?
Just ask our custom GPT

No time to read the full report? We trained a GPT on all of it. Ask anything—from national breakdowns to emotional insights.
Go to Report GPT
Let’s keep in touch
We’re looking to connect