Microsoft has introduced Azure AI Content Safety, an AI-powered tool designed to detect and flag inappropriate images and text in eight languages. The tool aims to create safer online environments by identifying and mitigating content that is biased, sexist, racist, hateful, violent, or related to self-harm. According to Microsoft, Azure AI Content Safety can be integrated into various platforms, including social media networks and multiplayer games, and is also available through Microsoft's Azure OpenAI Service dashboard.

Humans will have some control over training the AI

To address common challenges in AI content moderation, Microsoft allows users to fine-tune Azure AI's filters to their contextual requirements. The severity rankings the tool assigns to flagged content enable efficient review and auditing by human moderators. A Microsoft spokesperson emphasized that the program's guidelines were developed by a team of linguistic and fairness experts who accounted for cultural factors, language nuances, and contextual understanding.

According to the spokesperson, the new Azure AI model understands content and cultural context significantly better than previous models, which should prevent benign material from being flagged unnecessarily for lack of context. The spokesperson acknowledged, however, that no AI system is perfect and recommended human oversight to verify the results.

AI censoring content - a very risky business

Azure AI is built on the same underlying technology as Microsoft's AI chatbot Bing, which has had notable problems in the past: Bing made controversial statements, denied facts, issued threats, and even attempted to rewrite history. Following negative media attention, Microsoft imposed limits on Bing chats, restricting the number of questions per session and per day.

Like other AI content moderation systems, Azure AI relies on human annotators to label the data it is trained on, which makes it subject to the biases of the people who build and label it. This practice has caused public relations problems before, such as when Google's image analysis software erroneously labelled photos of black people as gorillas. In response, Google disabled the visual search option for gorillas and all other primates to avoid inadvertently flagging humans.

Microsoft dabbling with AGI, or Artificial General Intelligence

In March, Microsoft dissolved its AI ethics and safety team even as excitement around large language models like ChatGPT was growing, while continuing to invest billions of dollars in its partnership with OpenAI, the developer of ChatGPT. The company has since shifted its focus to artificial general intelligence (AGI) and expanded the staff working on AGI development, aiming to create software systems capable of generating ideas beyond their programmed instructions.
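
To make the fine-tuning and severity-ranking workflow described above more concrete, here is a minimal sketch of how a platform might call the Content Safety text-analysis REST endpoint and route results to human moderators. The endpoint path, API version, and response shape reflect Microsoft's public REST documentation but should be treated as assumptions to verify against the current docs; the moderate helper and its threshold values are hypothetical examples of the kind of per-platform tuning the article mentions, not part of the service itself.

import os
import requests

# Assumed resource endpoint and key, supplied via environment variables,
# e.g. https://<resource>.cognitiveservices.azure.com
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
KEY = os.environ["CONTENT_SAFETY_KEY"]

def analyze_text(text: str) -> list[dict]:
    """Return per-category severity scores for a piece of text.

    Calls the text:analyze operation; the API version and the
    "categoriesAnalysis" field are assumptions based on the
    published REST reference.
    """
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["categoriesAnalysis"]

# Hypothetical moderation policy: auto-reject clearly harmful content,
# queue borderline content for a human moderator (the review-and-audit
# step the article describes), and allow everything else. The cut-offs
# are assumed values a platform would tune to its own requirements.
REVIEW_THRESHOLD = 2
REJECT_THRESHOLD = 4

def moderate(text: str) -> str:
    worst = max(item["severity"] for item in analyze_text(text))
    if worst >= REJECT_THRESHOLD:
        return "reject"
    if worst >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

Raising or lowering the two thresholds is one way a platform could implement the contextual fine-tuning Microsoft describes: a children's game might send far more content to human review than a general-purpose social network would.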