A year ago, Facebook announced that it had built a text classification engine to help machines interpret words in context. Now Instagram is introducing an enhanced comment filter that automatically detects hateful, harassing, or offensive comments and ensures that no one ever sees them.
The filter is built on an AI system called DeepText, an in-house tool developed by Facebook engineers, with contributions from Instagram. It works as a text classification engine that helps machines interpret words in context, with the aim of fighting abusive content. It is designed to analyze a word the way our brains do, taking into account the surrounding context and what the word could mean. It quickly sorts through huge amounts of data, derives classification rules, and then powers products that help users.
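To give a sense of what "learning classification rules from labeled text" means in practice, here is a minimal sketch of a bag-of-words Naive Bayes classifier. This is purely illustrative: DeepText is a deep-learning system whose internals are not public, and the training phrases below are invented examples, not Instagram data.

```python
from collections import Counter, defaultdict
import math

class TinyTextClassifier:
    """A minimal multinomial Naive Bayes text classifier (illustrative only)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1) / (n + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Train on a handful of invented, hand-labeled comments
clf = TinyTextClassifier()
clf.train("you are awful and stupid", "offensive")
clf.train("get lost you idiot", "offensive")
clf.train("great photo love it", "ok")
clf.train("beautiful shot well done", "ok")

print(clf.predict("you stupid idiot"))        # -> offensive
print(clf.predict("great beautiful photo"))   # -> ok
```

A real system at Instagram's scale would use learned word embeddings rather than raw word counts, which is what lets it handle context and words it has rarely seen; the general train-on-labels, predict-on-new-text loop is the same.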
One of the first things Instagram did was hire people to review comments and classify them as spam or not. After they had sorted through an enormous pile of data covering every variety of abuse, it was fed into DeepText. The engineers then built algorithms to classify spam correctly. Based on that success, they wanted to see whether the same approach could tackle more complicated problems, including hate speech and harassing or bullying comments. The team analyzed about two million comments before sending the filter live, which happened today.
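The evaluation step in the workflow above, checking a filter's verdicts against human labels before launch, can be sketched as follows. The blocklist filter, the terms in it, and the labeled comments are all invented stand-ins for illustration, not Instagram's method or data.

```python
# Hypothetical offensive terms for a toy keyword filter
BLOCKLIST = {"idiot", "stupid", "awful"}

def is_offensive(comment):
    """Flag a comment if any word appears in the blocklist."""
    return any(word in BLOCKLIST for word in comment.lower().split())

# (comment, human_label) pairs; labels are illustrative
labeled = [
    ("you are an idiot", True),
    ("what a stupid take", True),
    ("have an awful day", True),
    ("you're the worst", True),   # missed by the blocklist
    ("great photo", False),
    ("love this so much", False),
    ("nice shot", False),
]

# Compare the filter's verdicts against the human labels
tp = sum(1 for c, y in labeled if is_offensive(c) and y)
fp = sum(1 for c, y in labeled if is_offensive(c) and not y)
fn = sum(1 for c, y in labeled if not is_offensive(c) and y)

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
# -> precision=1.00 recall=0.75
```

The miss on "you're the worst" shows why a fixed keyword list is not enough and why a context-aware classifier is worth the effort: abuse often contains no single banned word.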
If this works out, it could open a new chapter in social media. Instagram is one of the most popular social platforms, with millions of users worldwide. For now, the filter is being released in English only, but Instagram is looking to expand the project to other languages as well.