No one on the internet knows you’re human

Nicole, 27, posted a TikTok video last April about feeling burned out in her career. When she checked the comments the next day, however, a different conversation was going on.

“Jeez, this is not a real person,” wrote one commenter. “I’m scared.”

“No, legit, she’s an AI,” said another.

Nicole, who lives in Germany, has alopecia, a condition that can cause hair loss across the body. Because of it, she’s used to people looking at her strangely, trying to figure out what’s “off,” she says over a video call. “But I never came to this conclusion, that [I] must be CGI or something.”

Over the past few years, AI and CGI tools have gotten better and better at pretending to be human. Bing’s new chatbot is falling in love, and virtual influencers like CodeMiko and Lil Miquela ask us to treat a spectrum of digital characters like real people. But as the tools for imitating humanity grow more realistic, human creators online sometimes find themselves in an unusual position: being asked to prove that they’re real.

Almost every day a person is asked to prove their humanity to a computer

Almost every day, people are asked to prove their humanity to a computer. In 1997, researchers at the information technology company Sanctum invented an early version of what we now know as the CAPTCHA as a way to distinguish automated computer actions from human ones. The acronym, coined in 2003 by researchers at Carnegie Mellon University and IBM, stands for the somewhat unwieldy “Completely Automated Public Turing test to tell Computers and Humans Apart.” CAPTCHAs are used to stop bots from doing things like signing up for email addresses en masse, breaking into commercial websites, or flooding online polls. They require each user to transcribe a series of distorted letters or, sometimes, simply to state, “I’m not a robot.”
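At its core, the classic text CAPTCHA is just a challenge-and-match check. Here is a minimal sketch in Python of that idea; in a real deployment the challenge string would be rendered as a distorted image rather than shown as plain text (which a bot could trivially read), and the alphabet and length here are arbitrary choices for illustration:

```python
import random
import string

def make_captcha(length: int = 6) -> str:
    """Generate a random challenge string of letters and digits."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def check_captcha(challenge: str, answer: str) -> bool:
    """A user passes if their transcription matches the challenge
    (case-insensitive, ignoring surrounding whitespace)."""
    return answer.strip().upper() == challenge

challenge = make_captcha()
# In a real CAPTCHA, `challenge` is drawn as a warped, noisy image
# before being shown to the user.
print(f"Type these characters: {challenge}")
```

The distortion step, omitted here, is the entire point: the test only works as long as reading warped text is easier for humans than for machines.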

This relatively benign practice has taken on new meaning in 2023, as the rise of OpenAI tools like DALL-E and ChatGPT has startled and frightened users. These tools can create complex visual art and produce readable essays from just a few human-supplied keywords. ChatGPT has 30 million users and roughly 5 million visits per day, according to The New York Times, and companies like Microsoft and Google are scrambling to announce competitors of their own.

It’s no wonder, then, that AI paranoia among people is at an all-time high. Those accounts that just say “hi” to you on Twitter? Bots. That person who liked every Instagram picture you’ve posted in the past two years? Bot. A profile you keep coming across on every dating app, no matter how many times you swipe left? Probably a bot, too.

More than ever, we’re not sure if we can trust what we see on the Internet

Accusing someone of being a “bot” has become a kind of witch hunt among social media users, deployed to discredit those they disagree with by insisting their views or behavior aren’t legitimate or don’t have real support. For example, supporters on both sides of the Johnny Depp and Amber Heard trial alleged that the other side’s online support came at least partly from bot accounts. More than ever, we’re not sure whether we can trust what we see on the internet, and real people are bearing the brunt.

For Danisha Carter, a TikToker who shares social commentary, speculation about whether she’s human began when she had just 10,000 TikTok followers. Viewers started asking if she was an android, accusing her of giving off “AI vibes,” and even asking her to film herself completing a CAPTCHA. “I thought it was kind of cool,” she admits over a video call.

“I have a very curated and specific aesthetic,” she says. That includes using the same framing for every video, and often the same clothing and hairstyle. Danisha also tries to stay measured and objective in her commentary, which sows further doubt among viewers. “Most people’s TikTok videos are casual. They’re not curated, they’re full-body shots, or at least you see them walking around and doing activities that aren’t just sitting in front of the camera.”

After she first went viral, Nicole tried to push back against her accusers by explaining her alopecia and pointing out human details like the tan lines from her wig. Commenters weren’t buying it.

“People would come in the comments with whole theories, [they] would say, ‘Hey, check out this second of the video. You can see the glitch,’” she says. “Or, ‘Can you see her glitching?’ And it was so funny because I’d go there and watch it and I’d be like, ‘What the hell are you talking about?’ Because I know I’m real.”

The more people use computers to prove they are human, the smarter the computers become at imitating them

But there is no way for Nicole to prove it, because how does anyone prove their own humanity? While AI tools have improved at a breakneck pace, our best methods for proving that someone is who they say they are remain rudimentary, like a celebrity posting a photo with a handwritten sign for a Reddit AMA. Or, wait, is that really them, or is it just a deepfake?

While developers like OpenAI have released “classifier” tools to detect whether a piece of text was written by an AI, any advance in CAPTCHA tools carries a fatal flaw: the more people use computers to prove they’re human, the smarter computers become at imitating them. Every time a person takes a CAPTCHA test, they contribute a piece of data the computer can use to teach itself the same task. By 2014, Google found that AI could solve the most complex CAPTCHAs with 99 percent accuracy. Humans? Just 33 percent.

So engineers threw out text in favor of images, instead asking people to identify real-world objects in a series of pictures. You might be able to guess what happened next: computers learned how to identify real-world objects in a series of pictures.

We’re now in the era of the ubiquitous “No CAPTCHA reCAPTCHA,” an invisible test that runs in the background of participating websites and judges our humanity based on our behavior, something computers will eventually outgrow as well.
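The idea behind behavior-based tests is that humans are messy: our clicks and keystrokes arrive at irregular intervals, while scripted bots tend to be machine-precise. Below is a toy illustration of that intuition in Python, not Google’s actual algorithm: it scores a stream of event intervals by how irregular they are, with the threshold being an arbitrary choice for the sketch:

```python
from statistics import pstdev

def humanity_score(intervals_ms: list[float]) -> float:
    """Toy score in [0, 1]: irregular timing between events reads
    as more human, metronome-regular timing as more bot-like."""
    if len(intervals_ms) < 2:
        return 0.0
    mean = sum(intervals_ms) / len(intervals_ms)
    if mean == 0:
        return 0.0
    # Coefficient of variation (spread relative to mean), clamped to [0, 1].
    return min(pstdev(intervals_ms) / mean, 1.0)

THRESHOLD = 0.3  # arbitrary cutoff for this sketch

bot_like = [100.0, 100.0, 100.0, 100.0]   # perfectly even clicks
human_like = [80.0, 210.0, 95.0, 400.0]   # irregular human timing

print(humanity_score(bot_like) < THRESHOLD)    # flagged as a bot
print(humanity_score(human_like) > THRESHOLD)  # passes as human
```

Of course, this is exactly the arms race the article describes: once a bot author knows the heuristic, adding random jitter to a script is trivial, which is why real systems combine many signals and keep them secret.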

Melanie Mitchell, scientist, professor, and author of Artificial Intelligence: A Guide for Thinking Humans, describes the relationship between CAPTCHA and AI as a never-ending “arms race.” Rather than hoping for a definitive online Turing test, Mitchell says this push and pull is simply going to be a fact of life. False bot accusations against people will become commonplace, not just an online nuisance but a real-life problem.

“Imagine if you’re a high school student and you hand in your paper and the teacher says, ‘The AI detector said this was written by an AI system. Fail,’” Mitchell says. “It’s an almost unsolvable problem using technology alone. So I think there’s going to have to be some kind of legal, social regulation of these [AI tools].”

These murky tech waters are exactly why Danisha is pleased her followers are so skeptical. She now leans into the paranoia, making the uncanny nature of her videos part of her brand.

“It’s really important that people look at profiles like mine and ask, ‘Is this real?’” she says. “‘If this isn’t real, who’s coding it? Who’s making it? What incentives do they have?’”

Or maybe that’s just what the AI called Danisha wants you to think.
