Artificial intelligence is now used to track down hate speech

According to Pew Research Center polling from March 2021, internet use in the United States has grown significantly over the past decade: roughly one in three Americans say they are online almost constantly, while nine in 10 say they go online at least several times per week.

The huge growth in activity has helped people stay more connected to each other, but it has also enabled the widespread proliferation and exposure of hate speech. Artificial intelligence is one of the solutions that social media companies and other online networks rely on, with varying degrees of success.

For a company with a user base as large as Meta's, AI is a key, if not necessary, tool for detecting hate speech: there are far more users and far more violating posts than the thousands of human content moderators the company already employs could ever review on their own. AI can help ease that burden by scaling up or down to fill the gaps as the influx of new users and content shifts.
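To make that concrete, here is a minimal sketch of automated hate-speech triage in Python, using the open-source Hugging Face transformers library rather than Meta's proprietary systems. The model name and the 0.9 review threshold are illustrative assumptions, not details reported in this story.

# Minimal sketch of automated hate-speech triage. This is NOT Meta's
# production pipeline; the model and the 0.9 threshold are assumptions.
from transformers import pipeline

# A publicly available hate-speech classifier, assumed here for illustration.
classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)

def triage(posts):
    # Auto-flag confident hits; queue borderline ones for human moderators.
    for post, result in zip(posts, classifier(posts)):
        if result["label"] == "hate" and result["score"] > 0.9:
            print(f"AUTO-FLAG ({result['score']:.2f}): {post!r}")
        elif result["label"] == "hate":
            print(f"HUMAN REVIEW ({result['score']:.2f}): {post!r}")
        else:
            print(f"OK: {post!r}")

triage(["Have a great day!", "You people don't belong here."])

The threshold split is what makes the scaling described above possible: the machine handles clear-cut cases in bulk, and only the ambiguous remainder reaches the much smaller pool of human reviewers.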

Facebook, for example, has grown by leaps and bounds, from 400 million users in the early 2010s to more than 2 billion by the end of the decade. Between January 2022 and March 2022, Meta took action on more than 15 million pieces of hate speech content on Facebook, and about 95% of it was detected proactively using artificial intelligence.

Even combining artificial intelligence with human moderators leaves a massive amount of flagged content to review. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, found that about 3 million Facebook posts are flagged for review each day by Facebook's 15,000 content moderators, a ratio of roughly one moderator for every 160,000 users.
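Those numbers can be sanity-checked with simple arithmetic; a short Python sketch, using only the figures reported in this story:

# Back-of-the-envelope check of the moderation workload cited above.
flagged_per_day = 3_000_000   # posts flagged for review each day
moderators = 15_000           # Facebook content moderators
users = moderators * 160_000  # implied by the 1:160,000 ratio

print(flagged_per_day / moderators)  # 200.0 flagged posts per moderator per day
print(f"{users:,}")                  # 2,400,000,000 users, consistent with the 2 billion-plus figure above

Two hundred rulings per moderator per working day is the scale Barrett is pointing to in the quote below.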

“If you have this volume, these people are going to have a huge burden deciding on hundreds of individual items every working day,” Barrett said.

Another problem: the AI used to root out hate speech is trained mainly on text and still images. That means video content, especially live video, is much harder to flag automatically as potential hate speech.

Zeve Sanderson is the founding executive director of NYU’s Center for Social Media and Politics.

“Live video is very difficult to moderate because it’s live. You know, we’ve unfortunately seen tragic shootings recently where, you know, people used live video to broadcast that content. Even though the platforms responded relatively quickly, we’ve seen copies of those videos go viral, so it’s not just the original video but the ability to save it and then share it in other formats. So live is very hard,” Sanderson said.

Additionally, many AI systems are not powerful enough to detect hate speech in real time. Extremism researcher Linda Schiegl told Newsy that this has become a problem in online multiplayer games, where players can use voice chat to spread hateful ideologies or ideas.

“Automated detection is really hard, because if you’re talking about weapons, or about how we’re going to, I don’t know, take this school or whatever it might be in the game, that could just be gameplay. So AI or automated detection is really difficult in games. It would have to be something more sophisticated than that, or done by hand, which I think is really hard even for those companies,” Schiegl said.

This story was originally published by Tyler Adkisson on Newsy.
