Instagram keeps looking for ways to get better, and this time, it is set to test a new feature that blurs messages containing nudity.
This, Fintech Telex understands, will safeguard teens and prevent potential scammers from reaching them.
This was disclosed by its parent company, Meta, while trying to allay concerns over harmful content on its apps.
It added that it was developing new tools to protect teenage users from “sextortion” scams on its Instagram platform. This comes after US politicians accused the company of damaging the mental health of youngsters.
Gangs run sextortion scams by persuading people to provide explicit images of themselves and then threatening to release them to the public unless they receive money.
According to Meta, the protection feature for Instagram’s direct messages would use on-device machine learning to analyze whether an image sent through the service contains nudity.
The feature will be turned on by default for users under 18, and Meta will notify adults to encourage them to turn it on.
“Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone chooses to report them to us,” the company said.
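The key point in Meta’s statement is that the classification happens locally, so only a blur-or-not verdict is produced and the image itself never has to leave the device. Meta has not published its implementation; the sketch below is purely illustrative, with a hypothetical stand-in for the on-device classifier (a real system would run a trained model locally).

```python
# Illustrative sketch only -- Meta's actual implementation is not public.
# contains_nudity() is a hypothetical stand-in for an on-device model.

def contains_nudity(image_bytes: bytes) -> bool:
    """Stand-in for an on-device classifier.

    Placeholder logic for demonstration: a real system would run a
    trained image model locally and return its verdict.
    """
    return image_bytes.startswith(b"NSFW")


def handle_incoming_image(image_bytes: bytes, protection_on: bool) -> dict:
    """Decide locally whether to blur an incoming image.

    Only the boolean verdict is used by the chat UI, which is why this
    flow can also work inside end-to-end encrypted chats: the image
    stays on the device unless the user chooses to report it.
    """
    blur = protection_on and contains_nudity(image_bytes)
    return {"blurred": blur, "image": image_bytes}
```

Because the decision is a local function of the image bytes, no server ever needs to see the plaintext image, which matches the company’s claim that it “won’t have access to these images”.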
Meta Also Plans To Introduce Encryption On Instagram
Unlike Meta’s Messenger and WhatsApp apps, direct messages on Instagram are not encrypted but the company has said it plans to roll out encryption for the service.
Meta also said that it was developing technology to help identify accounts that might be potentially engaging in sextortion scams and that it was testing new pop-up messages for users who might have interacted with such accounts.
In January, the social media giant said it would hide more content from teens on Facebook and Instagram, adding that this would make it more difficult for them to come across sensitive content such as suicide, self-harm, and eating disorders.
‘Sexploitation’ And Teenagers That Have Fallen Victim
Some 3,000 young people fell victim to sexploitation scams in 2022 in the United States, according to the authorities there.
Separately, more than 40 US states began suing Meta in October in a case that accuses the company of having “profited from children’s pain”.
The legal filing alleged Meta had exploited young users by creating a business model designed to maximize the time they spend on the platform despite harm to their health.
About On-Device Machine Learning
Meta announced in January it would roll out measures to protect under-18s that included tightening content restrictions and boosting parental supervision tools.
The firm also revealed that the latest tools were building on “our long-standing work to help protect young people from unwanted or potentially harmful contact”.
It added that the “nudity protection” tool used “on-device machine learning”, a kind of artificial intelligence, to analyze images.
The firm, which has repeatedly faced accusations of violating its users’ data privacy, stressed that it would not have access to the images unless users reported them.
Meta said it would also use AI tools to identify accounts sending offending material and severely restrict their ability to interact with young users on the platform.
Whistle-blower Frances Haugen, a former Facebook engineer, publicized research in 2021 carried out internally by Meta — then known as Facebook — which showed the company had long been aware of the dangers its platforms posed for the mental health of young people.