Google uses AI to identify online child sexual abuse material (CSAM)

To combat abusive content on the Internet, major companies such as Google, Microsoft, Apple, and Facebook have all taken a zero-tolerance stance.

According to a CNET report, Google has released a free AI tool whose main purpose is to help companies and organizations identify sexual abuse imagery online, especially material involving children, with the goal of driving down the corresponding crime rate. The technology is being provided free of charge to NGOs and industry partners.

Google launched the Content Safety API, which lets developers use deep neural networks to process images and reduce Internet users' exposure to this material. While the technology is free, Google also emphasizes that it can help reviewers identify 700% more child sexual abuse material than they could without it.
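The reported workflow is that the neural network scores candidate images so that human reviewers see the most likely abusive material first. Google has not published the API surface in this article, so the sketch below is purely illustrative: the `classify` function, its scores, and all names are hypothetical stand-ins for the real service, showing only the general idea of a model-ranked review queue.

```python
# Illustrative sketch only — classify() and all identifiers are
# hypothetical, not Google's actual Content Safety API.
from dataclasses import dataclass

@dataclass
class ScoredImage:
    image_id: str
    priority: float  # assumed model confidence that the image needs urgent review

def classify(image_id: str) -> float:
    """Stand-in for the deep-neural-network classifier (assumed interface)."""
    # Hard-coded scores simulate model output for the example.
    fake_scores = {"img_001": 0.97, "img_002": 0.12, "img_003": 0.55}
    return fake_scores.get(image_id, 0.0)

def build_review_queue(image_ids: list[str]) -> list[ScoredImage]:
    """Score each image and return the queue, highest priority first."""
    scored = [ScoredImage(i, classify(i)) for i in image_ids]
    return sorted(scored, key=lambda s: s.priority, reverse=True)

queue = build_review_queue(["img_002", "img_001", "img_003"])
# Reviewers would then work through `queue` from the top down.
```

The design point is simply triage: by surfacing the highest-confidence images first, reviewers spend their limited time where it matters most, which is how a 700% increase in identified material becomes plausible without adding reviewers.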

Google’s move has drawn widespread praise, and industry insiders hope that sharing this new technology will speed up image recognition and make the Internet a safer place.

Another reason Google has taken this step is the heat facing most Internet giants, who stand accused of playing a role in spreading CSAM across the web. On this point, last week U.K. Foreign Secretary Jeremy Hunt sharply attacked Google on Twitter, criticizing its plans to re-enter China. He said Google was willing to compromise and censor its search engine content in order to re-enter China, yet would not help other countries remove child abuse content elsewhere in the world.
