Bill C-10 and the myth of free speech

Gord Dimitrieff
May 14, 2021

Most of the public discussion around Bill C-10 has centred on the removal of broad protections for social media companies, which critics have described as an extreme overreach for broadcasting regulation. For Cara Zwibel of the CCLA, this “constitutes an indefensible infringement of the right to freedom of expression.” Saskatchewan Justice Minister Gordon Wyant echoes this, concluding that “the expanded scope of this bill represents a concerning constraint on individuals’ freedom of expression.” Former CRTC chair Konrad von Finckenstein and vice-chair Peter Menzies elaborate on this, asserting that “the purpose of the internet was always to ensure the absence of gates and the liberation of the ‘gate-kept.’”

And therein lies the rub.

Criticism that Bill C-10 would infringe on the right to freedom of expression presupposes that social media sites are open and unregulated spaces in which individuals have a right to freedom of expression in the first place. This understanding of the Internet is perhaps most famously expressed by Electronic Frontier Foundation founder John Perry Barlow in his 1996 manifesto ‘A Declaration of the Independence of Cyberspace:’

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

For Barlow, the Internet is a place in which the “legal concepts of property, expression, identity, movement, and context do not apply.” American science fiction writer Bruce Sterling made a similar assessment in his 1992 essay “A Short History of the Internet”:

Why do people want to be “on the Internet?” One of the main reasons is simple freedom. The Internet is a rare example of a true, modern, functional anarchy. There is no “Internet Inc.” There are no official censors, no bosses, no board of directors, no stockholders.

Google leaders Eric Schmidt and Jared Cohen reiterated this perspective in 2013 when they called the Internet “the world’s largest ungoverned space,” so when von Finckenstein and Menzies posit that the result of C-10 will be the “stifling of that recently gained freedom,” there is no mistaking that they subscribe to the Barlow/Sterling understanding of the Internet, as do others making ‘freedom of speech’ arguments in the context of Bill C-10.

I too was once a subscriber to this viewpoint — I was lucky enough to experience the early and idealistic days of the Internet, when in 1991 I got a co-op placement in a computer science lab at the Ontario Institute for Studies in Education. It was the time of Glasnost and Perestroika, the Berlin Wall had come down, and as I watched Star Trek VI with my OISE colleagues — mostly other computer science co-op students — it really did feel like the Internet was an incredible development that was going to change the world and society for the better.

In the years following the dot-com market crash of March 2000, however, the Internet began to change. MySpace, one of the original social media companies, was purchased in July 2005 by Rupert Murdoch’s News Corporation for $580 million. One year earlier, Google had become a public company via an initial public offering that raised $1.67 billion, and in short order the company acquired YouTube (2006) and DoubleClick (2008), forming the sophisticated digital media company it is today. As told by Jillian York, Facebook and Twitter emerged in this post-bubble Internet as places where early users perceived “a virtual public sphere, in which ideas and information could be exchanged and all had an equal opportunity to contribute to public debate” (York 2021). Though one might argue that no one should hold an expectation of freedom within these corporately owned platforms, such an expectation is precisely what the aforementioned critics of Bill C-10 presuppose.

In stark contrast to the worldview of unrestricted open exchange, free from gatekeepers, shared by Barlow/Sterling and critics of Bill C-10, American constitutional scholar Jack Balkin has labelled our contemporary world of large, multinational Internet and social media companies the ‘Algorithmic Society.’ For Balkin, analysing the regulation of speech within the Algorithmic Society requires shifting from a traditional dyadic model of understanding to a triadic one, in which “individuals may be controlled, censored, and surveilled both by the nation state and by the owners of many different kinds of private infrastructure, who operate across national borders in multiple jurisdictions” (Balkin 2018). The emergence of the Algorithmic Society has occurred in parallel with what Shoshana Zuboff calls “a stark new form of social inequality best understood as ‘epistemic inequality.’ It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself” (Zuboff 2020). In other words, contemporary inequality is based not on what we can earn, but rather on what we can learn.

Drawing on original research into internal company documents, archives, and interviews with platform executives, Kate Klonick concludes that the platforms of YouTube, Facebook, and Twitter “have developed a system that has marked similarities to legal or governance systems. This includes the creation of a detailed list of rules, trained human decision-making to apply those rules, and reliance on a system of external influence to update and amend those rules” (Klonick 2018). Klonick terms these Internet platforms the ‘New Governors’ of speech, which she places into Balkin’s triadic model.

Emily Laidlaw identifies ‘Internet Information Gatekeepers’ (IIGs), which she defines, in contrast to simple gatekeepers, as entities with democratic significance that control the flow of information, deliberation and participation in democratic society. Laidlaw proposes a tiered framework for understanding these IIGs, which places macro-gatekeepers at the top, authority-gatekeepers in the middle, and micro-gatekeepers at the bottom. Macro-gatekeepers are entities such as Internet Service Providers (ISPs) and mobile phone operators that literally enable or disable one’s access to the Internet. Authority-gatekeepers are high-traffic sites that control both traffic and information flows, and this is the category in which Laidlaw places sites like Wikipedia and Facebook. At this level of Laidlaw’s framework, IIGs “impact all aspects of the rights of freedom of expression, privacy and freedom of association and assembly.” At the base level of the framework are micro-gatekeepers, which do not necessarily engage these rights. In this framework, “the less the site is of democratic significance, the less of a gatekeeping obligation is incurred” — this is where Laidlaw places sites that do not enable interactive participation and sites with smaller reach (Laidlaw 2010).

Whether understood through a lens of New Governors or of Internet Information Gatekeepers, the core issue in the removal of broad protections for social media companies in Bill C-10 is the question of how to balance the obligations society might impose on social media platforms in the name of culture, while simultaneously permitting these companies to pursue their commercial interests and protecting individuals’ right to free expression. While critics of Bill C-10 have adopted the pioneering Barlow/Sterling perspective in which social media platforms are an extension of the public sphere with, as Sterling put it, “no official censors, no bosses,” Balkin’s analysis shows these entities cannot be understood as neutral intermediaries. Anupam Chander and Vivek Krishnamurthy point out that although Internet companies have strived to portray themselves as neutral, “the very fact of their existence has utterly transformed how people learn and think about matters ranging from politics to sex” (Chander and Krishnamurthy 2018).

To achieve their commercial objectives, YouTube, Facebook and Twitter employ computational algorithms that determine what any given user will see on their platforms at any given time. As described by Chander and Krishnamurthy in their analysis of platform neutrality:

These algorithms do not simply present you with a chronological set of information provided by everyone in your network. Rather, they have been designed and optimized to maximize the companies’ revenue by maximizing the time you spend on their platform and the number of advertising links you click. Not one political consideration likely went into the development of these algorithms, and yet the creation of “echo chambers” and the spread of “fake news” are not neutral in their impact on politics.
(Chander and Krishnamurthy 2018)

The algorithms of social media companies extend beyond their user interfaces, into a complex web of user tracking across websites and apps. For example, a 2019 report by the Wall Street Journal revealed that Facebook uses third-party apps to acquire sensitive data, including heart rates and menstrual cycles, even from people with no connection to Facebook. Social media companies harvest this data not only to optimize revenue by predicting user behaviour, but also to influence people in subtle but effective ways, amounting to what Zuboff terms “instrumentarian power” (Zuboff 2020). Two articles published in peer-reviewed journals, in 2012 and 2014, detail a series of experiments that Facebook conducted on tens of millions of users, in which the company deployed subliminal messages and manipulated social comparisons to influence users to vote in midterm elections, and later to make people feel sadder or happier. In May 2017, The Guardian reported on a leaked internal Facebook document:

Facebook showed advertisers how it has the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”, according to a leaked document based on research quietly conducted by the social network.

The internal report produced by Facebook executives, and obtained by The Australian, states that the company can monitor posts and photos in real time to determine when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless” and a “failure”.

In … response to the Australian report, Facebook said it “has an established process to review the research we perform”, adding: “This research did not follow that process, and we are reviewing the details to correct the oversight.”

Yes, the truth is painful.

Rather than the unregulated virtual public sphere of free speech and open exchange, the companies that now power the Internet have become the Algorithmic Society, marked by epistemic inequality and the accumulation of instrumentarian power. To the extent that removing broad protections for social media companies would put new regulatory powers in the hands of government, there are currently no rights or freedoms to be enjoyed on social media that C-10 would impede or infringe. Michael Geist concedes that “we should be requiring greater algorithmic transparency from Internet companies,” but criticizes Bill C-10 as an attempt to simply “(substitute) their choices for those crafted through government regulation.” It is a false dichotomy to treat self-regulation and heavy-handed regulation as the only options, and while a direct substitution will likely not arrive at the desired balance, Geist leaves unanswered the question of how companies might be “required” to do anything without a government-imposed imperative.

Why Section 4.1 was in Bill C-10 to begin with remains a valid question, and one that points directly at its drafters. A broad exemption for social media services made Bill C-10 an unworkable proposition in the reality of the digital audio streaming market, and the Department of Canadian Heritage should have identified this before the bill was tabled for first reading. As Emily Laidlaw told the CBC on 30 April 2021, “if they want to explicitly regulate what these platforms are doing, be explicit about it and be narrow about it … But that’s not the way [the bill] is drafted.” Indeed, the Heritage portfolio is one of the most complex in government, and successfully constructing a modernized broadcasting framework requires significant interdisciplinary legal expertise spanning society, technology, entertainment and broadcasting.

Anything less will be doomed to fail.


Balkin, Jack M., “Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation” (September 9, 2017). UC Davis Law Review (2018); Yale Law School, Public Law Research Paper No. 615.

Chander, Anupam and Krishnamurthy, Vivek, “The Myth of Platform Neutrality” (2018). Georgetown Law Technology Review 400.

Klonick, Kate, “The New Governors: The People, Rules, and Processes Governing Online Speech” (March 20, 2017). 131 Harvard Law Review 1598.

Laidlaw, Emily, “A Framework for Identifying Internet Information Gatekeepers” (November 1, 2010). International Review of Law, Computers & Technology, Vol. 24, No. 3, 2010.

York, Jillian C., Silicon Values: The Future of Free Speech Under Surveillance Capitalism. Verso, 2021.

Zuboff, Shoshana, “You Are Now Remotely Controlled”, The New York Times, 24 January 2020.