Posted on 10 Mar 2026
The Pacific is often celebrated for its strong community values, rich cultures and close-knit family networks – social structures that play an important role in protecting children offline. But as internet access expands rapidly from urban centres to remote villages and islands, children are entering digital spaces that operate beyond the reach of those traditional safeguards, and faster than legal, technical and institutional protections can be put in place.
Last year, a Fijian mother received a video from her niece that left her stunned. The footage, which had been widely circulated in a Telegram group, showed her sixteen-year-old daughter dancing naked for the camera. The shock was even greater for her daughter, who had not created the video. It later emerged that a group of boys from her school in Nausori had downloaded images she had previously shared on social media and used an AI-powered ‘nudification’ app to generate the fake material. This case highlights the collision between adolescent online behaviour and rapidly developing technologies that can turn everyday digital interactions into sources of harm.
This incident is far from isolated. Recent data from Fiji reveals that thousands of incidents of child sexual abuse material (CSAM) and online grooming involving Pacific users are detected each year. This should serve as a wake-up call – not just for Fiji, but for every Pacific island nation, for development partners and for the technology companies shaping children’s digital experience.
What the Fiji data shows and why it matters
In late 2025, Manoa Kamikamica, then the deputy prime minister of Fiji, raised the alarm about the increased presence of CSAM – a term he urged should replace the misleading ‘child pornography’, as the latter obfuscates the criminal element involved. Earlier in the year, national authorities had spoken of a ‘tsunami of porn’, estimating that about 15 terabytes of pornographic content were consumed daily in the country, a proportion of which involves the circulation of CSAM.
Fiji’s growing involvement in this criminal economy has been evident for several years. In 2023, the country recorded 3 638 cases of online child exploitation. This increase, partly attributable to improved domestic reporting, has been accompanied by a rise in referrals – reports of CSAM shared by online platforms. In 2025, Filipe Batiwale, Fiji’s Online Safety Commissioner, reported that between 1 800 and 8 000 CSAM-related referrals linked to Fiji were being passed to global clearing houses, such as the US National Center for Missing and Exploited Children. These alerts reveal the scale of the risk that children face online, and underline the greater visibility of Pacific nations in global CSAM datasets.
For small island nations, the implications are especially serious. Many global platforms operate beyond local jurisdiction, making it difficult for national authorities to compel action or intervene quickly. Investigative and prosecution capacity are often limited, and child-protection systems are already stretched. The challenge is not only technical but systemic: limited institutional capacity means that referrals do not consistently translate into effective protection and prevention.
These figures matter not only for Fiji, but for the region: Pacific states are becoming increasingly implicated in global CSAM detection systems, but they typically lack the resources needed to convert referrals into child protection outcomes.
New technology, new threats
Across the region, digital connectivity brings opportunity, but its rapid expansion creates new vulnerabilities — from cyberbullying to sexual exploitation. Although smartphone ownership is widespread, legal frameworks, reporting mechanisms and digital forensic capabilities lag behind.
At a 2025 regional summit, the UN Children’s Fund (UNICEF) warned that ‘the levels of violence, abuse and neglect of Pacific children are among the highest in the world’. Save the Children, an international NGO, similarly reported rising levels of physical, sexual, emotional and online violence experienced by children in five Pacific Island countries. Yet national reporting mechanisms for online exploitation and abuse remain weak, and dedicated investigative units are rare. Without reliable data collection, governments cannot accurately assess the scope of online exploitation or design targeted, evidence-based interventions.
The Nausori case encapsulates a rapidly emerging frontier of harm: CSAM created using artificial intelligence (AI) platforms. The Internet Watch Foundation, a charity that aims to remove online records of child sexual abuse, verified 1 286 illegal AI-generated CSAM videos in the first half of 2025, compared to just two in the same period in the previous year. Traditional content-detection tools often fail to identify synthetic imagery. The realism and customisability of AI-generated material may also facilitate grooming. Offenders can create tailored sexual content to desensitize children, falsely suggest that abusive behaviour is normal, or generate synthetic images that can be used to pressure or extort victims.
Fiji has begun responding more strategically. In 2025, the Online Safety Commission intensified efforts to identify AI-generated abuse material and strengthen its participation in global safety conventions. The National Taskforce Against Pornography is advancing strategies to curb digital threats to children, including establishing a dedicated police unit, strengthening partnerships with UNICEF and the Australian government, and prioritizing children’s safety online.
These efforts do not eliminate risk, but they show Fiji building the components of a more coordinated, survivor-orientated response. Other Pacific nations can draw lessons from this approach. Developing survivor-centred services, linking online safety measures with broader child-protection frameworks, and embedding digital safety curricula and culturally grounded awareness campaigns in schools and communities can all help improve children’s safety online.
Designing pragmatic, regional and rights-based policy
Because Pacific Island countries share languages, migration patterns and digital ecosystems, a regional response offers clear advantages. Collective action can help overcome capacity constraints, reduce duplication of efforts, and ensure that child protection matches the borderless nature of digital threats. A coordinated regional mechanism – expanding on initiatives such as the Cyber Safety Pasifika programme, which aims to improve online safety awareness across the Pacific – could pool technical expertise, support legal harmonization and create shared investigative resources.
Such an approach would also strengthen engagement with global technology platforms, which is currently uneven and largely reactive. Individual Pacific island states have limited leverage when dealing with multinational companies whose moderation policies, data-sharing practices and reporting thresholds are determined far beyond national jurisdictions. A regional framework would increase collective bargaining power, enabling Pacific governments to secure faster responses to referrals, request local-language and context-aware moderation, and seek greater transparency in CSAM detection methods and region-specific risks.
At the same time, policy responses must reflect how online child sexual exploitation operates in practice. Abuse is often structured and networked, even where it appears to involve lone offenders. Responses limited to content takedowns are therefore insufficient. Effective policy design must integrate child protection, cybercrime enforcement and platform governance within a rights-based framework. Development partners such as Australia, New Zealand, European countries, the United States and Japan have a critical role to play here. Their support must prioritize sustainable capacity building over one-off technology donations, and should ensure that cooperation with platforms, law enforcement training and survivor support are embedded within broader child-protection systems.
From numbers to action
In the dark world of online child sexual exploitation, Fiji is not an outlier. A surge in AI-generated abuse content and growing digital safety risks reflect a regional challenge in which internet connectivity has outpaced protection, enforcement and awareness.
The response must be equally borderless. Governments, communities, NGOs and technology companies must all recognize online child sexual exploitation and abuse as a child-protection emergency, not just an act of cybercrime.
If acted upon, Fiji’s data could be a catalyst for stronger regional cooperation and better protection for children growing up in an increasingly connected Pacific.