Host Violent Content? In Australia, You Could Go to Jail


SYDNEY, Australia — The video showing the murder of 51 people in Christchurch carries both an offensive title, “New Zealand Video Game,” and a message to “download and save.”

Appearing on 153news.net, an obscure site awash in conspiracy theories, it is exactly the sort of online content that Australia’s new law criminalizing “abhorrent violent material” says must be purged. But that doesn’t mean it’s been easy to get it off the internet.

“Christchurch is a hoax,” the site’s owners replied after investigators emailed them in May. Eventually, they agreed to block access to the entire site, but only in Australia.

A defiant response, a partial victory: Such is the challenge of trying to create a safer internet, link by link.

In an era when mass shootings are live-streamed, denied by online conspiracy theorists and encouraged by racist manifestoes posted to internet message boards, much of the world is grasping for ways to stem the loathsome tide.

Australia, spurred to act in April after one of its citizens was charged in the Christchurch attacks, has gone further than almost any other country.

The government is now using the threat of fines and jail time to pressure platforms like Facebook to be more responsible, and it is moving to identify and block entire websites that hold even a single piece of illegal content.

“We are doing everything we can to deny terrorists the opportunity to glorify their crimes,” Prime Minister Scott Morrison said at the recent Group of 7 summit meeting in France.

But will it be enough? The video of the Christchurch attack highlights the immensity of the challenge.

Hundreds of versions of footage filmed by the gunman spread online soon after the March 15 attack, and even now, clips, stills and the full live-stream can be easily found on scores of websites and some of the major internet platforms.

The video from 153news alone has reached more than six million people on social media.

Australia is pitching its strategy as a model for dealing with the problem, but the limits to its approach have quickly become clear.

Although penalties are severe, enforcement is largely passive and reactive, relying on complaints from internet users, which so far have been just a trickle. Resources are scarce. And experts in online expression argue that the law lacks the transparency that must accompany any effort to restrict speech online.

Of the 30 or so complaints tied to violent crime, terrorism or torture received so far, investigators said, only five have led to notices against site owners and hosts.

“The Australian government wanted to send a message to the social media companies, but also to the public, that it was doing something,” said Evelyn Douek, an Australian doctoral candidate at Harvard Law School who studies online speech regulation. “The point wasn’t so much how the law would work in practice. They didn’t think that through.”

The heart of Australia’s effort sits in an office near Sydney’s harbor that houses the eSafety Commission, led by Julie Inman Grant, an exuberant American with tech industry experience who describes her mission as online consumer protection.

Worldwide, after decades of evolution, the system for identifying and removing child exploitation imagery is robust. Software called PhotoDNA and an Interpol database rapidly identify illegal images. Takedown notices can be deployed through the INHOPE network — a collaboration of nonprofits and law enforcement agencies in 41 countries, including the United States.

In the last fiscal year, the Cyber Report team requested the removal of 35,000 images and videos through INHOPE, and in most cases, takedowns occurred within 72 hours.

“I think we can learn a lot from that,” said Toby Dagg, 43, a former New South Wales detective who oversees the team.

Experts agree, with caveats. Child exploitation is a consensus target, they note. There is far less agreement about what crosses the line when violence and politics are fused. Critics of the Australian law say it gives internet companies too much power to choose what content should be taken down, without having to disclose their decisions.

They argue that the law creates incentives for platforms and hosting services to pre-emptively censor material because they face steep penalties for all “abhorrent violent material” they host, even if they were unaware of it, and even if they take down the version identified in a complaint but other iterations remain.


Mr. Dagg acknowledged the challenge. He emphasized that the new law criminalizes only violent video or audio that is produced by perpetrators or accomplices.

But there are still tough questions. Does video of a beheading by uniformed officers become illegal when it moves from the YouTube channel of a human-rights activist to a website dedicated to gore?

“Context matters,” Mr. Dagg said. “No one is pretending it’s not extremely complicated.”

Immediately after the Christchurch shootings, internet service providers in Australia and New Zealand voluntarily blocked more than 40 websites — including hate hothouses like 4chan — that had hosted video of the attacks or a manifesto attributed to the gunman.

In New Zealand, where Prime Minister Jacinda Ardern is leading an international effort to combat internet hate, the sites gradually returned. But in Australia, the sites have stayed down.

Mr. Morrison, at the G7, said the eSafety Commission was now empowered to tell internet service providers when to block entire sites at the domain level.

In its first act with such powers, the commission announced Monday that around 35 sites had been cleared for revival, while eight unidentified repeat offenders would continue to be inaccessible in Australia.

In a country without a First Amendment and with a deep culture of secrecy in government, there is no public list of sites that were blocked, no explanations, and no publicly available descriptions of what is being removed under the abhorrent-content law.

Officials have promised more transparency in a recent report, and some social media companies have pledged to be more forthcoming. But Susan Benesch, a Harvard professor who studies violent rhetoric, said any effort that limits speech must require clear and regular disclosure "to provoke public debate about where the line should be."

To get a sense of how specific complaints are handled, a reporter for The New York Times submitted three links for investigation in early August.

Investigators said the last item “did not meet the threshold” and was not investigated. For the Christchurch footage, a notice was sent to the site and the hosting service. The first complaint was referred to Facebook, which removed the post.


