From nudity to sexualised images of children, social media platforms such as Facebook are being heavily criticised for not doing enough to tackle the stream of unwanted content, or to moderate particular groups and the images shared within them.
According to the BBC, Damian Collins, Chairman of the Commons media committee, said he had “grave doubts” about the effectiveness of Facebook’s content moderation systems.
Facebook states that “nudity or other sexually suggestive content” is not allowed on the platform, and following an investigation by BBC reporters it issued the statement: “It is against the law for anyone to distribute images of child exploitation.”
But are online platforms doing all they can to prevent such illegal content from appearing in the first place? Here Lawyer Monthly hears from Phil Gorski at Blacks Solicitors on what the law says and what the solutions might be.
What does the law say on the subject?
The primary piece of legislation put in place to deal with this type of content is the Protection of Children Act 1978. It makes it a criminal offence to take (or to permit someone to take), to distribute, to show or to possess an indecent photograph of a child. A child is a person under 18, but what is considered indecent is unsurprisingly more difficult to define. The test is imprecise – it is up to the court to decide whether something is indecent “in accordance with recognised standards of propriety.” Importantly, the circumstances in which the photograph came to be taken and the motive of the taker are not relevant; it is not the taker’s conduct which must be indecent but the photograph of the child which results.
Information on convictions specifically for offences under the Act is not publicly available.
What do the social media platforms do?
All prominent platforms provide some form of system for reporting material considered to be indecent. The recent reports surrounding Facebook’s apparently lacklustre response to the BBC’s reporting of 80 images containing indecent material of children have been used to support the argument that these systems don’t work, and there is some weight to that argument. Reliable data is hard to come by, but anecdotal evidence suggests that they are slow and ineffective. The counter-argument is that social media platforms such as Facebook face an almost impossible task: the sheer volume of posts means that human checking of images is, practically speaking, unrealistic, and computer intelligence is notably poor at interpreting the content of images.
Is there a solution?
There certainly isn’t a quick fix. It should go without saying that social media platforms should devote as much time and as many resources as possible to developing systems which can prevent the dissemination of these images – as with cyberbullying, the effects can be severe, they can shape a child’s future development, and physical boundaries are irrelevant. However, there also needs to be increased awareness on the part of parents and children alike of just how easily images put online can be obtained by people who were never intended to see them, and how quickly they can proliferate. A report from a few years ago found that a very large proportion of indecent images of children found online appeared to be self-generated.