Frequently asked questions
The Executive Summary (PDF, 1.4MB) and Factsheet (PDF, 2.2MB) cover the main aspects of the proposals. The answers below address further questions you may have.
What is currently considered illegal content?
The threshold and definition of illegal content already exist. This includes ‘objectionable’ content under the Films, Videos, and Publications Classification Act 1993. The definition of objectionable captures the most extreme types of content: material that describes, depicts, expresses or otherwise deals with matters such as sex, horror, crime, cruelty, or violence in such a manner that the availability of the publication is likely to be injurious to the public good. Examples include terrorist content and child exploitation material.
Illegal content also includes content that may breach other laws that make it illegal to produce and share certain content, such as privacy laws and the Crimes Act 1961.
We are not proposing any changes to definitions of what is considered illegal in New Zealand. Objectionable material is already illegal, and it will remain illegal to produce, publish, possess and share. Under the new framework, criminal and civil penalties will still apply, and prosecutions could continue to be undertaken by government agencies such as Police, Customs, or Internal Affairs.
Will the proposed new regulator be independent from government?
The regulator will be at arm’s length from government, similar to how the Broadcasting Standards Authority and the Classification Office currently work. This means it makes its own independent decisions and Ministers cannot interfere in those decisions. The government can’t tell the regulator what to do, but will hold it accountable for fulfilling its statutory duties.
How will the proposed regulator be held accountable?
Independent regulators, such as Crown entities, are held accountable by Parliament, their Minister at the time, the government agency that monitors them, and the public. This is usually done through regularly publishing reports about performance, finances and complaints.
What powers will the proposed regulator have?
We expect the new regulator to be an ‘industry regulator’, which means the legislation would place requirements on platforms and industry – the regulator and industry would work together to develop and implement the rules.
We propose the regulator’s enforcement powers should include:
- directing a Regulated Platform to take remedial action to address identified gaps or deficiencies in their systems or processes within a specific period;
- issuing formal warnings, which could include issuing public notices citing the platform’s failure to comply, to inform and raise consumer awareness;
- penalising platforms for significant non-compliance; and
- requiring platforms to take down illegal material quickly when directed to do so. This power already exists.
We are proposing that the regulator should also have powers to deal with material that is illegal for other reasons, such as harassment or threats to kill. We want to know what the public thinks about this.
The regulator won’t have enforcement powers over individual pieces of legal content – these are things like an episode of a TV show, a social media post, or a news article.
What does ‘individual pieces of legal content’ mean?
When we talk about individual pieces of legal content, we mean individual social media posts, broadcasts and publications that don’t break existing New Zealand laws.
The threshold for content to be considered illegal is very high. Legal content will continue to be managed by platforms, through the proposed codes of practice, much like what already exists on the internet through platforms’ terms of use. The regulator’s role is to ensure compliance with codes, not to give platforms binding directions on how to deal with specific items of legal content.
How will the proposals impact me?
Overall, the proposals aim to improve users’ experiences of content – especially online. You’re likely to be unintentionally exposed to the most harmful content less often, especially while browsing social media, and platforms would be less likely to recommend harmful content to you. You could see more warning labels and advice while you’re using media platforms. If something goes wrong, the new approach would provide you with better access to complaints processes.
For the most part, though, the changes in the proposals focus on what happens ‘behind the scenes’ in platforms’ processes and systems.
What about people who create content online, like content creators or influencers?
Under this proposed approach, people who share content on social media, like influencers and content creators, shouldn’t see much change while using online platforms. However, this will depend on each platform’s approach to reducing harm on its services. For example, content creators may see more prompts if a platform’s system thinks they are trying to post something that potentially breaches the platform’s community standards.
Content creators will still need to comply with the safety requirements that platforms set, which most major social media platforms already have in place. While some of those settings might be adjusted, the regulator would not have powers over individual pieces of legal content.
Why is it important for us to regulate platforms?
Our proposal is to regulate platforms because they are the ones who control the experiences users have on their services. Platforms can determine what a safe experience on their service looks like for their users, including creators and sharers of content.
Recent events like the livestream of the Christchurch terror attacks have shown why it is important for us to get this right. While sharing the video of the attack was and will continue to be illegal, we know that this footage, and clips of this footage, had already been pushed to New Zealanders across multiple online platforms before it was taken down.
We also know that some easily available online material can have serious mental health impacts, for example, suicide or self-harm related content on social media being promoted to people who are already vulnerable. This is reflected in what young people in New Zealand tell us: algorithms often actively push content to them that makes light of self-harm. These issues are not unique to New Zealand. A coroner in the United Kingdom recently ruled that the negative effects of online content on Instagram and similar sites contributed to the death of a British teenager in 2017.
During the targeted engagement phase of the programme, we also repeatedly heard concerns about how social media is used to harass, bully, or otherwise harm people. This concern was consistent across our engagement with Māori, young people, advocates for children, women, older people, disabled people, LGBTQIA+ communities, and communities that have been or are targets of ethnic, racial and/or religious discrimination.
These are real harms that are happening to New Zealanders. That’s why we need to get these settings right.
What do you mean by ‘harm’? How will you define it in the regulatory framework?
When we talk about harmful content, we are talking about content that negatively impacts users’ physical, social, emotional, and mental wellbeing, or causes loss or damage to rights and property. We are not talking about feeling offended, although content that is harmful will often also cause offence.
We are not planning to define harm, or what ‘counts’ as harm, in the law. Instead, we intend to lift the bar for safety practices on platforms, through clear objectives and expectations for platforms in codes of practice. This is because the types of harm that New Zealanders might experience from content are wide-ranging, so rather than create a catch-all definition, we want a framework that requires platforms to have policies, systems and processes in place that reduce the risk of harm on their services.
The exception to this is for illegal material – a threshold for harm already exists for this type of content, which includes terrorist content and child exploitation material, and we would carry this threshold into the new framework.
Why can’t we just get rid of everything that is harmful?
What’s harmful to one person is not necessarily harmful to another. This means that removing content is unlikely to be the best way to address potentially harmful content and maintain freedom of expression.
It’s also impractical to remove everything that is potentially harmful. The sheer amount of content that is made, and the continued evolution of how it is accessed, mean that content must be managed at a system level by industry; it is not feasible to reduce harm by reacting to individual pieces and types of content, unless that content is, or could be, illegal (such as terrorist content or child exploitation material).