Instagram has announced a new feature that will notify parents if their teenage children repeatedly search for terms related to suicide or self-harm within a short period. The initiative comes as pressure mounts on governments to consider measures similar to Australia’s ban on social media use for individuals under 16.
The social media platform, owned by Meta Platforms Inc., stated that it will begin sending alerts to parents who have opted into the supervision setting when their children attempt to access content related to suicide or self-harm. This notification system will be rolled out next week for users in Canada, the United States, Britain, and Australia.
Instagram emphasized that these alerts complement its existing efforts to safeguard teenagers from potentially harmful content on the platform. The company maintains strict policies against any content that promotes or glorifies suicide or self-harm, and it already blocks such searches and directs users to support resources.
Governments worldwide are increasingly focused on protecting children from online harm, particularly following concerns over AI chatbots such as Grok, which have been linked to the creation of non-consensual sexualized images. Britain and Australia have already moved to regulate minors’ access to online platforms, and Spain, Greece, and Slovenia have recently expressed interest in restricting access to certain online content.
In the UK, measures aimed at preventing children from accessing pornography sites have raised privacy concerns for adults and sparked tensions with the US over free speech limits and regulatory reach. Instagram’s “teen accounts” for users under 16 require parental permission to adjust settings, and parents can activate an additional layer of monitoring with their teen’s consent. These accounts also restrict young users from viewing sensitive content, including sexually suggestive or violent material.