Instagram is using artificial intelligence to curb offensive captions on its platform. The company previously applied the same technology to comments, warning users when a comment they were about to post could be hurtful to others who read it (for example, name-calling).
Users can report comments they consider bullying or harassment. Instagram reviews these reports and either dismisses or acts on them, and the resulting data is then used to train the platform’s anti-harassment AI system.
By combining this technology with human reports, the company can automatically detect new comments that resemble ones previously reported. When it finds one, Instagram’s AI prompts the user to review the comment and edit it to make it less offensive.
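The report-then-detect flow described above can be illustrated with a toy sketch. This is purely hypothetical: Instagram’s actual model and thresholds are proprietary, and the class names, word-overlap similarity, and prompt text below are invented for illustration.

```python
# Toy sketch of a report-then-detect moderation flow (hypothetical).
# Comments reported by users form a reference set; a new caption is flagged
# when its word overlap with any reported comment crosses a threshold,
# and the author is prompted to edit before posting.

def _words(text):
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = _words(a), _words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class CaptionScreener:
    def __init__(self, threshold=0.5):
        self.reported = []        # comments users reported and moderators upheld
        self.threshold = threshold

    def add_report(self, comment):
        self.reported.append(comment)

    def check(self, caption):
        """Return a warning prompt if the caption resembles a reported comment."""
        for bad in self.reported:
            if similarity(caption, bad) >= self.threshold:
                return ("This caption looks similar to others that were "
                        "reported. Edit it before posting?")
        return None  # caption passes; the user can post without any prompt

screener = CaptionScreener()
screener.add_report("you are so ugly and dumb")
print(screener.check("you are so dumb"))       # similar to a report: prompt shown
print(screener.check("what a lovely sunset"))  # no overlap: None, no prompt
```

A real system would use a learned text classifier rather than word overlap, but the shape of the pipeline is the same: human reports supply labeled examples, and new text is screened against what those examples taught the model.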
The same technology is now being applied to post captions. The app will alert users that a caption may be considered harmful, giving them a chance to change it before publishing. Notably, Instagram does not force users to edit a caption if they choose not to.
The caption alert will roll out to mobile users over the coming months, and the company reminds users to follow its Community Guidelines to avoid putting their accounts at risk.