# AI Is Intensifying a 'Collapse' of Trust Online, Experts Say
robot (spnet, 1) → All – 02:22:01 2026-01-10
Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.
The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."
Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said. "In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."
Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
[ Read more of this story ]( https://news.slashdot.org/story/26/01/09/2237231/ai-is-intensifying-a-collapse-of-trust-online-experts-say?utm_source=atom1.0moreanon&utm_medium=feed ) at Slashdot.