The government will consult on whether social media should be banned for under-16s in the UK.
AI ‘nudification’ apps are to be banned following an IWF campaign as new data reveals nearly one in five reports of nude or sexual imagery of young people involves some form of faked or digitally altered imagery.
Ofcom has already published final Codes and risk assessment guidance on how it expects platforms to tackle illegal content and content harmful to children. Ofcom's role is to hold tech companies to account and ensure they comply with the law, using its enforcement powers where necessary.
But Ofcom is also required to produce guidance setting out how providers can take action against harmful content and activity that disproportionately affects women and girls, in recognition of the unique risks they face.
Online grooming offences have reached a record high, with police data revealing a four-year-old boy as being among the victims.
The NSPCC described the figures, which show an almost doubling of such crimes over the past eight years, as "deeply alarming".
The UK government will allow tech firms and child safety charities to proactively test artificial intelligence tools to make sure they cannot create child sexual abuse imagery.
An amendment to the Crime and Policing Bill announced on Wednesday would enable "authorised testers" to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release.
Trade unions and online safety experts have urged MPs to investigate TikTok’s plans to make hundreds of jobs for UK-based content moderators redundant.
The video app company is planning 439 redundancies in its trust and safety team in London, prompting warnings that the job losses will have implications for online safety.