May 5, 2024
Google AI flags dad who had photos of his child’s groin infection on his phone to share with doctors

A father was locked out of his Google Photos account after storing images of his child’s infected groin that he intended to share with doctors.

The photos were flagged as potential child sexual abuse material by an artificial intelligence (AI), triggering a police investigation, according to The New York Times.

This incident occurred in February 2021 when some doctors’ offices were still closed due to the COVID-19 pandemic, so consultations were taking place virtually.

Named only as Mark, the concerned parent ended up losing access to his emails, contacts, photos and even his phone number, and his appeal was denied.

It was not until December that year that the San Francisco Police Department found that the incident ‘did not meet the elements of a crime and that no crime occurred.’

The incident highlights the complications of using AI technology to identify abusive digital material, an approach currently employed by Google, Facebook, Twitter and Reddit.

Google scans images and videos uploaded to Google Photos using its Content Safety API AI toolkit, released in 2018. This AI was trained to recognise ‘hashes’, or unique digital fingerprints, of child sexual abuse material (stock image)

Mark tried to appeal the decision but Google denied the request, leaving him unable to access any of his data, and blocked from his mobile provider Google Fi. It wasn’t until months later that he was informed that the San Francisco Police Department had closed the case against him (stock image)

What are ‘hashes’ used by Apple, Facebook, Google and Twitter to locate child abusers?

The technology works by creating a unique fingerprint, called a ‘hash’, for each reported image of abuse.

These fingerprints are then passed on to internet companies so that matching images can be removed from the net automatically.

Once an image has been flagged, an employee will review the content of the file to determine whether it should be handed over to the relevant authorities.

The system uses the same technology that Facebook, Twitter and Google employ to locate child abusers.

Google scans images and videos uploaded to Google Photos using its Content Safety API AI toolkit, released in 2018.

This AI was trained to recognise ‘hashes’, or unique digital fingerprints, of child sexual abuse material (CSAM).

As well as matching hashes to known CSAM on a database, it is able to classify previously unseen imagery.

The tool then prioritises those it thinks are most likely to be deemed harmful and flags them to human moderators.

Any illegal material is reported to the National Center for Missing and Exploited Children (NCMEC), which liaises with the appropriate law enforcement agency, and it is removed from the platform.
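The article does not describe Google’s implementation in detail, but the hash-matching step it refers to can be illustrated with a simplified, hypothetical sketch. The example below uses a plain SHA-256 digest as a stand-in for the perceptual fingerprints production systems rely on, so it only matches exact byte-for-byte copies; the KNOWN_HASHES set and the function names are invented purely for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list: in practice this would be populated from a
# database of fingerprints of known abuse imagery supplied by a body
# such as NCMEC. Left empty here because the values are illustrative only.
KNOWN_HASHES: set[str] = set()


def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of the file's bytes.

    Real systems use perceptual hashes that survive resizing and
    re-encoding; an exact digest like this only catches identical copies.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def should_flag_for_review(path: Path) -> bool:
    """Flag an upload for human review if its hash matches a known entry.

    This mirrors the flow described above: an automated match is not acted
    on directly but escalated to a human moderator.
    """
    return file_hash(path) in KNOWN_HASHES
```

Classifying previously unseen imagery, which the article says the Content Safety API can also do, would require a trained image classifier rather than hash matching and is beyond this sketch.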

Google spokesperson Christa Muldoon told The Verge: ‘Our team of child safety experts reviews flagged content for accuracy and consults with paediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.’

In 2021, Google reported 621,583 cases of CSAM to the NCMEC’s CyberTipLine, which then alerted authorities to over 4,260 potential new child victims.

A Google spokeswoman told The New York Times that the company only scans personal images after the user takes ‘affirmative action’, which includes backing up their material on Google Photos.

In February 2021, stay-at-home dad Mark noticed swelling in his toddler’s genital area and immediately contacted his healthcare provider.

A nurse asked him to send over photos of the infected region so a doctor could review them ahead of a video conference.

Mark took some photos using his Android smartphone, and these were automatically backed up to the Google cloud.

The doctor prescribed the boy antibiotics, which cleared up the swelling. However, two days after taking the images, Mark received a notification informing him that his Google accounts had been locked.

According to The New York Times, the reason for the action was the presence of ‘harmful content’ that was ‘a severe violation of Google’s policies and might be illegal.’

After the AI flagged the photos, a human content moderator for Google would have reviewed them to confirm they met the definition of CSAM before escalating the incident.

Mark tried to appeal the decision but Google denied the request, leaving him unable to access any of his data, and blocked from his mobile provider Google Fi.

It wasn’t until months later that he was informed that the San Francisco Police Department had closed the case against him.

Named only as Mark, the concerned parent ended up losing access to his emails, contacts, photos and even his phone number, and his appeal was denied (stock image)

The incident is an example of why critics regard the monitoring of data stored on personal devices or in the cloud for CSAM as an invasion of privacy. 

Jon Callas, a director of technology projects at the Electronic Frontier Foundation, called Google’s practices ‘intrusive’ in a statement to The New York Times.

He said: ‘This is precisely the nightmare that we are all concerned about.

‘They’re going to scan my family album, and then I’m going to get into trouble.’

In April, Apple announced it was rolling out its Communication Safety tool in the UK.

The tool – which parents can choose to opt in or out of – scans images sent and received by children in Messages for nudity and automatically blurs them. 

It initially raised concerns about privacy when it was announced in 2021, but Apple has since reassured users that it does not have access to photos or messages.

‘Messages uses on-device machine learning to analyse image attachments and determine if a photo appears to contain nudity,’ it explained.

‘The feature is designed so that Apple doesn’t get access to the photos.’

Tech giants face huge fines over online child abuse if they fail to take action

Tech giants that fail to develop ways to scan for online child abusers could face fines of billions of pounds.

Ofcom has been granted powers to require firms to show how they are ‘preventing, identifying and removing’ illegal content.

If they fail to comply, they could be hit with fines of up to £18 million or 10 per cent of their annual global turnover.

It came as a children’s charity revealed a horrifying 80 per cent surge in online sex grooming crimes reported to police over the past four years. 

The NSPCC said it included hundreds of cases just last year on Meta’s social media platforms – Facebook, Instagram and WhatsApp. 
