Literacy and Liberty

Thesis, Intro, & Guiding Q’s
Posted by: , November 18, 2018, 10:18 pm
Filed under: Uncategorized


Our relationship to the internet as a society today is fundamentally different from what it was at the beginning of the century. Rather than being a mere novelty or tool, the internet has become a core part of our lives as well as the primary medium for social discourse. This shift, however, brings with it two major problems.

First, the majority of the internet is composed of “private platforms”: websites owned by private entities such as individuals or companies. The owners of these platforms are thus able to impose on their users whatever terms of service they deem desirable. This is a problem because, whereas our public discourse was previously restricted only by laws established by the government, now that such discourse occurs mostly on private platforms, our speech is further limited by the rules established by the websites hosting it. These rules differ from platform to platform and frequently change, with little accountability to the users affected by them. On private platforms, our right to free speech does not apply.

Second, while the internet has created a more informed public by allowing information to spread instantly and globally, not all of that information is trustworthy. “Fake news,” conspiracies, hate speech, calls for violence, extremist and divisive rhetoric, and other problematic media are shared and received just as easily as trustworthy media. This is a problem because many people fail to distinguish between the two and are susceptible to being misinformed and radicalized. Such media thus becomes “intolerable” and must be combated. Currently, however, the onus of solving the problem has fallen on the private platforms, each acting independently of the others. They have had little success, and their efforts have been mired in controversy.
Private platforms such as Twitter, Facebook, and YouTube have been accused of bias, primarily against right-wing groups or individuals who simply seek to promote “alternative” perspectives. When measures normally intended to stifle extremists are applied to more moderate voices, the affected persons and their followers perceive them as unfair attacks on their right to free speech. To solve the problem effectively, a fair and universal method for classifying “intolerable” speech must be established, along with a standardized way of suppressing it and punishing those who spread it. Furthermore, we as a society should revisit the First Amendment and our right to free speech, and consider “updating” and expanding it to reflect today’s online reality.

Guiding Questions:

Overall Question:

How should free speech be treated online, with specific focus on “intolerable” speech?

In-Depth Questions:

  1. What constitutes “intolerable speech” vs. hate speech?
    • Is all hate speech intolerable?
  2. What measures are appropriate to deal with such speech online?
    • Can a private platform “go too far” when attempting to combat intolerable speech made on its website?
      • If so, what constitutes such an example?
    • Can a “universal standard” be arrived at for classifying “intolerable speech” and applying fitting suppressive measures?
  3. How does this apply differently depending on the type of website in question?
    • “Mainstream” private platforms (e.g., Facebook, Twitter, YouTube, Reddit)
      • Do they have “the right” to censor and ban certain speech they deem unacceptable? After all, it is their own platform, whose terms of service individuals must agree to before using it.
    • Anonymous message/image boards such as 4chan
      • How should these sites be treated differently, given their highly anonymous nature and how they cater to hate speech?
    • Personal platforms/websites (a site you own)
      • How are you responsible for the statements you make on your own website?
      • Those made by others?
      • Other individuals answering your “call to action”?
  4. How far do the protections of the First Amendment reach?
    • Do they apply to speech made on private platforms, or are they only intended to protect a person from the government?
    • Should these protections be expanded or “updated” to meet the reality of our “online society”?
  5. Are suppressive measures truly effective?
    • When certain sites are shut down or become too restrictive, people will often create or flock to other platforms.
      • In this manner, the internet becomes something of a hydra: cut off one head, and another grows.
    • Given this, what measures are realistic?

1 Comment so far

Joseph–Thanks for this thorough post about your ideas and hunches. We talked through some snags in this promising project in class yesterday, but please let me add here: as I read your post, it strikes me that what might really be at the heart of your project is an exploration of a digital “public” (or semblance of one) that is controlled by private corporate interests. Might you dig there, and thus bring focus and arguability to this project?

   Professor Seiler 11.20.18 @ 1:52 pm
