Artificial intelligence, new weapon against violent comments on Internet


A subsidiary of Google has developed a program to automatically detect violent or harassing messages online. It will first be tested by the New York Times.

On the Internet, there is no need to dig very far to find the worst: insulting comments, online harassment, hate speech, and so on. When the Web becomes a battlefield, online sites have to arm themselves with sophisticated new tools. The New York Times on Tuesday announced a unique partnership with Jigsaw, a subsidiary of Alphabet, the parent company of Google. The latter has developed an artificial intelligence program dedicated to online comments. It should help the newspaper's moderators by grouping similar comments according to their degree of violence, so that an entire cluster can be validated or rejected at once. "We hope that this project will broaden our horizons, offering a secure platform for different communities and different conversations, and allowing our readers to be part of our work," says the New York Times in a statement. "This new technology will also enable our moderators to have substantive, friendly discussions with our readers."

Detect online harassment

Jigsaw was born in 2010, under the name "Google Ideas". Its goal at first was to organize discussion about online safety, as a think tank and conference organizer. The company now comprises some fifty employees, mostly engineers. Their goal is still online security, this time pursued more concretely. One of its first major initiatives was Project Shield, which protects small platforms and news sites from denial-of-service attacks. Jigsaw also developed a program that can detect searches for pro-jihadist or white-supremacist websites and redirect users from Google toward counter-propaganda pages.

In early September, Jigsaw unveiled its new project in the US magazine Wired: Conversation AI, an artificial intelligence that can detect abusive or harassing messages. This tool should help social networks and website moderators react more quickly to attacks or online hate. It was developed in collaboration with several victims of cyberbullying. Conversation AI is a machine-learning program: it is fed large numbers of examples in order to learn what is acceptable and what is not. To this end, it analyzed nearly 17 million comments posted on the New York Times website, each labeled as abusive or, conversely, respectful. Jigsaw also approached the Wikimedia Foundation, and its software analyzed over 130,000 excerpts of conversations on Wikipedia. A dozen randomly recruited people were shown some of these messages and asked to judge whether or not they constituted harassment.
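The learning process described above can be illustrated with a minimal sketch. This is not Jigsaw's actual code or model (Conversation AI's internals are not public); it is a toy Naive Bayes classifier over a handful of invented, hand-labeled comments, showing in miniature how a system learns "abusive vs. respectful" from labeled examples.

```python
# Illustrative sketch only, NOT Conversation AI: a tiny bag-of-words
# Naive Bayes classifier trained on hypothetical hand-labeled comments.
import math
from collections import Counter

# Hypothetical training data (1 = abusive, 0 = respectful).
TRAIN = [
    ("you are an idiot and everyone hates you", 1),
    ("shut up nobody wants you here", 1),
    ("go away you worthless troll", 1),
    ("thank you for this thoughtful article", 0),
    ("great analysis I learned a lot", 0),
    ("interesting point I respectfully disagree", 0),
]

def train(examples):
    """Count per-class word frequencies, class sizes, and the vocabulary."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def score(text, word_counts, class_counts, vocab):
    """Return P(abusive | text) via Naive Bayes with add-one smoothing."""
    total = sum(class_counts.values())
    log_probs = {}
    for label in (0, 1):
        logp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            logp += math.log((word_counts[label][word] + 1) / denom)
        log_probs[label] = logp
    # Convert the two log scores into a probability for the abusive class.
    m = max(log_probs.values())
    exp = {k: math.exp(v - m) for k, v in log_probs.items()}
    return exp[1] / (exp[0] + exp[1])

word_counts, class_counts, vocab = train(TRAIN)
print(round(score("you are a worthless idiot", word_counts, class_counts, vocab), 3))
```

A real system would of course use far richer models and millions of labeled comments, as the article describes, but the principle is the same: the labels supplied by human raters define what the program learns to flag.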

A recurring problem

Conversation AI is now able to detect online attacks with a success rate of 92%, according to Google. The New York Times is the first site to implement it. Wikipedia is still considering deploying it on its own pages. Jigsaw hopes to open the program to other sites soon. Violence and harassment online are crucial issues for online platforms and social networks. Twitter, for example, is regularly criticized for its poor moderation of hateful messages.

Social networks are deliberately very secretive about their moderation efforts. It is not known how many people are responsible for monitoring content on YouTube, Facebook, or Twitter. Some now use artificial intelligence programs to help control problematic images, particularly in the fight against child pornography and terrorist propaganda. In late May, one of the leaders of Facebook's artificial intelligence efforts said that "more problematic photos are now reported by robots than by humans."

The use of such technologies is often kept secret: social networks are afraid of being accused of an automated dictatorship in which robots enforce their rules. Artificial intelligence is nevertheless a major asset for companies that process billions of pieces of data per day, far more than humans could ever review. But this kind of technology is much more complex to apply to text than to pictures or videos: it is easy to detect whether an image shows a naked body or blood, much less easy to determine whether a sentence is violent. It is this puzzle that Jigsaw intends to solve.


