Online Misinformation Is Only Going to Get Worse

With AI getting smarter by the day and verification badges up for sale, the line between authentic and fake is all but invisible

On April 20th, Elon Musk removed the verification checkmarks from Twitter accounts with “legacy ticks”—those that had been verified under the old system for being people of note, rather than those that were verified after subscribing to Twitter Blue for $8 a month.

The verified badge once served as a way for users to gauge a tweeter’s true identity and qualifications. Since Musk’s takeover and the implementation of paid verification, impersonations have skyrocketed—with users posting apologies on behalf of J.K. Rowling or impersonating the Pope. That little check symbol now serves as a mark of shame, a sign that someone was willing to pay money for a meaningless status symbol.

Meta also recently introduced paid verification—further accelerating the verified checkmark’s decline as a sign of legitimacy. For $11.99 a month, Facebook and Instagram users can sport verification badges that, like their counterparts on Twitter, were previously only available to people deemed sufficiently noteworthy or at risk of impersonation. 

These changes are a huge impediment to the fight against online misinformation. Users who don’t have time to do their due diligence before believing or retweeting a piece of information have lost a key method of identifying legitimate sources, and bad-faith actors are taking advantage.

The alarmingly fast rise of sophisticated AI is adding more fuel to the misinformation dumpster fire online. AI is developing at such a rapid rate that many tech leaders recently signed an open letter warning of the potential risks—although skeptics argue some signatories, like Elon Musk, want AI training halted so that their own competing software can catch up. AI-generated song covers have already gone viral on TikTok; examples include Kanye West singing “Hey There Delilah,” Kanye West singing Lana Del Rey, and Kanye West singing Bill Withers (not all AI song covers feature Kanye, though his seem to get the most attention).

One song, “Heart On My Sleeve,” using AI-generated vocals from Drake and The Weeknd, netted over 15 million views on TikTok. It was streamed more than 600,000 times on Spotify before it was eventually removed. A Universal Music Group representative told Billboard that AI-generated songs “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

Experts have already voiced concerns about some AI tools that claim to be ideal for use in educational settings, such as the app that allows users to converse with historical figures. When I interviewed Professor Toby Walsh, chief scientist at the University of New South Wales’s AI Institute, for my Lens piece “The Unforeseen Consequences of a New AI Chatbot,” he explained, “They’re trained by pouring the internet into the neural network, and the internet is full of conspiracy theories and untruths, and they really can’t distinguish between what is true and what is false—they’re just saying what is probable.”

The increasing sophistication of artificial intelligence and the added hurdles for users seeking to differentiate between reputable posters and bad-faith actors could create a perfect storm where users find it increasingly difficult to know whether the information they receive online is real or a work of fiction. While in the case of memes and songs, the stakes are relatively low, in other cases, they’re alarmingly high. 

Consider Twitch streamers Maya Higa, Pokimane, and QTCinderella, who discovered earlier this year that they had been edited into deepfake porn. While Twitch banned deepfake porn in response, and the creator behind the deepfakes deleted the videos along with his internet presence, the streamers have found they have little legal recourse despite the obvious violations committed against them. One creator may have deleted his deepfakes, but the internet is forever, and the videos are likely still accessible to those who want to find them—and who may not realize they’re fake.

With verification as a tool for identifying reliable sources of information disappearing, and artificial intelligence continuing to evolve in increasingly realistic ways, it’s clear that more needs to be done to guard against the potential for misinformation on social media. While Kanye West’s reputation isn’t likely to suffer because of a bogus “Hey There Delilah” cover, the damage potential is far greater for more insidious uses of fake content created without the subject’s consent.

Apr 25, 2023 · 3 min read

