Cloud Scan looks for concerning images saved in cloud storage and sorts them into categories that Safeguarding Leads reference when reviewing flagged content.
Cloud Scan categories
| Category | Description |
|---|---|
| Gore | Depictions of graphic violence and injuries resulting from accidents and gruesome acts |
| Pornography | Depictions of sexual activity, nudity, or erotica |
| Risqué | Depictions of sexually suggestive behaviour or entertainment |
| Weapons | Guns, weapons of war, knives, or any device intended to inflict bodily harm or physical damage (e.g. explosives) |

Table 1. Cloud Scan categories
Cloud Scan also provides details about each flagged image, such as the uploader's name and the date it was uploaded. For images it flags for the first time, Cloud Scan also shows the number of users the image has been shared with.
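If it helps to picture the information involved, the following is a minimal, purely illustrative sketch of the details described above. All type and field names here are assumptions made for illustration, not Cloud Scan's actual schema or API.

```typescript
// Illustrative only: these names are assumptions, not Cloud Scan's actual schema.

// The categories listed in Table 1.
type CloudScanCategory = "Gore" | "Pornography" | "Risqué" | "Weapons";

// Details surfaced alongside a flagged image.
interface FlaggedImageDetails {
  category: CloudScanCategory;
  uploaderName: string;     // who uploaded the image
  uploadedAt: Date;         // when it was uploaded
  sharedWithCount?: number; // shown only for first-time flags: users the image was shared with
}

// Example record, for illustration.
const example: FlaggedImageDetails = {
  category: "Weapons",
  uploaderName: "A. Teacher",
  uploadedAt: new Date("2025-01-15"),
  sharedWithCount: 4,
};
```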
Cloud Scan categorises images by their depicted themes and content, so some images may be assigned to unexpected categories. When it identifies potentially unsafe images, Cloud Scan doesn't make inferences about the user's intent or character.
When it flags an image, it does so based only on what the image shows. It doesn't take into account the cultural landscape or prevailing customs where the school operates. For instance, an image may be flagged under Weapons even though it only depicts a recreational activity such as paintballing. That's why it's up to Safeguarding Leads to decide whether to keep or delete images based on community and organisational standards.
Important
Your organisation is responsible for ensuring that its users, and the staff reviewing flagged content, adhere to policies, laws, and regulations on information technology and security, data management, and privacy. Organisations are also responsible for warning their staff about potentially disturbing themes in flagged images and for providing further support where necessary.