At the moment, when it comes to harmful content, social media largely relies on self-governance. Sites such as YouTube and Facebook have their own rules about what content (video, pictures or text) is unacceptable and about how users are expected to behave towards one another.
This includes content that promotes fake news, hate speech or extremism, or that could harm users' mental health.
If the rules are broken, it is up to social media firms to remove the offending material.
YouTube has defended its record on removing inappropriate content. In a recent blog post it said that, while it faced an “immense challenge”, it had a “core responsibility” to remove content that violates its terms and conditions.
The video-sharing site said that 7.8m videos were taken down between July and September 2018, with 81% of them automatically removed by machines, and three-quarters of those clips never receiving a single view.
Globally, YouTube employs 10,000 people to monitor and remove content, as well as to develop policy.
Facebook, which owns Instagram, told the BBC that it has 30,000 people around the world working on safety and security. It said that it removed 15.4m pieces of violent content between October and December, up from 7.9m in the previous three months.
If illegal content, such as “revenge pornography” or extremist material, is posted on a social media site, it will be the person who posted it, rather than the social media companies, who is most at risk of prosecution.
This is a situation that needs to change, according to Culture Minister Margot James. She says she wants the government to bring in legislation that will force social media platforms to remove illegal content and “prioritise the protection of users, especially children, young people and vulnerable adults”.
That view is echoed by the children’s commissioner for England, Anne Longfield. She has also called for the government to make it clear that social media companies have a duty of care for children using their sites.
“It would mean online providers like Facebook, Snapchat, Instagram and others would owe a duty to take all reasonable and proportionate care to protect children from any reasonably foreseeable harm,” she said.
“Harmful content would include images around bullying, harassment and abuse, discrimination or hate speech, violent or threatening behaviour, glorifying illegal or harmful activity, encouraging suicide or self-harm and addiction or unhealthy body images.”
Facebook has acknowledged there is more it could do and says it is “looking at ways to work with governments and regulation in areas where we don’t think it makes sense for a private company to set the rules on its own”.
So, what do other countries do? Some are taking action to protect users, while others have been criticised for going too far.
Germany’s NetzDG law came into effect at the beginning of 2018, applying to companies with more than two million registered users in the country.
These companies were required to set up procedures to review complaints about content they host and to remove anything that is clearly illegal within 24 hours.
Individuals may be fined up to €5m ($5.6m; £4.4m) and companies up to €50m for failing to comply with these requirements.
The Federal Ministry of Justice confirmed to the BBC that the number of complaints received had been considerably below the 25,000 a year it had been expecting, and that no fines have been issued so far.
Under Russia’s data laws from 2015, social media companies are required to store any data about Russians on servers within the country.
Its communications watchdog is taking action against Facebook and Twitter for not being clear about how they planned to comply with this.
Russia is also considering two laws similar to Germany's, which would require platforms to take down offensive material within 24 hours of being alerted to it and would impose fines on companies that fail to do so.
Sites such as Twitter, Google and WhatsApp are blocked in China. Their services are provided instead by Chinese applications such as Weibo, Baidu and WeChat.
Chinese authorities have also had some success in restricting access to the virtual private networks that some users have employed to bypass the blocks on sites.