Counterpoint: algorithms are not free speech

Discuss

44 Responses to “Counterpoint: algorithms are not free speech”

  1. foobar says:

    Both Frankenstein and his monster should get a vote, and free speech. They’re both people by any reasonable definition.

    Frankenstein & Monster Incorporated should not. If we decide to require it to say or not say things, dance a jig, or become the property of the government, then it should do so without a fuss. We allow corporations to exist for our benefit, and no other reason.

    • Dr. Roboto says:

      Slightly OT:

      How about children? Why don’t they get to vote? The kid who dies of cancer couldn’t even cast a vote against the politician who cut funding for embryonic stem cell research that might have saved her.

      BTW, I agree with your sentiments, well said.

      • Ito Kagehisa says:

        Because while we have accepted that universal suffrage for adult citizens (who have not been ruled incompetent or criminal) entails a certain number of people voting who are not emotionally or intellectually equipped to do so, and who may be emotionally or intellectually subjugated by others, it would be unwise to radically increase the number of such voters by purposely including a class of people who are going to be, for the most part, entirely too inexperienced and immature to cast a vote that would benefit the nation as a whole.

        tl;dr version:  Our system is too coarse to reject all incompetent adults and include any competent children without breaking.

        • Antinous / Moderator says:

          I’d consider the average fourteen-year-old to be as capable of making a rational decision as the average eighty-four-year-old. We should drop the voting age to fourteen.

        • Justin F says:

          We should move to universal suffrage for adult members of society (who have not been ruled incompetent).  That legal residents in good standing must live and die according to rules they have no say in, but which privileged members of society can vote on, is no different from denying the vote to any other segment of society, such as women or non-land-owners. Frankenstein’s monster can’t vote as long as the members of society who happen to be privileged with the “right” parents or paperwork (citizenship) are the only segment of the community allowed to vote.

        • Charlie B says:

           Restricting the vote for time-tested practical reasons never sits well with the Mobbe.

  2. Brian McNett says:

    Someday there will be an algorithm complex enough to reason for itself. If we decide now that such algorithms as may be written in the future are not protected speech, or that their output is not protected speech, we run the risk of rendering future generations of sentient beings legal non-persons.

    • Dr. Roboto says:

       I think your concern is justified.  It’s really hard to define sentience or “reason.” …And there are vast gradations of sentience, of course.  My dog doesn’t have free speech rights, but it does have more rights than just regular old property.

    • Jake0748 says:

       That is a risk I’m willing to take, for now.  Ask me again in a decade or so.

    • retepslluerb says:

      This would be the case only if American legal precedents were all that counted, and if they could never get toppled by later rulings or taken care of by an amendment.

  3. Guest says:

    I remember that Hooke published his physical law on springs in an obfuscated form (the anagram “ceiiinosssttuv”, later revealed as “ut tensio, sic vis”: as the extension, so the force) in order to claim credit without revealing its essentials.

    Patents for algorithms run into the same problems as do patents for math formulae and protocols: to enforce them is equivalent to prosecuting thought crime.

  4. Boundegar says:

    It’s worth noting that the First Amendment doesn’t protect all speech. It protects political speech, religious speech, and maybe a few other categories. But it does not ensure your right to cry “fire” in a crowded theater.

    I’m not a constitutional scholar, so I won’t pretend to outline the limits.  But I’m sure there are limits.

    • knappa says:

      There are no such limitations in the text:

      Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances. 

    • SKR says:

      Actually, you are absolutely allowed to cry “fire” in a crowded theater. What you are not allowed to do is fraudulently cry “fire” in a crowded theater with the intent to cause panic.

      • Jake0748 says:

         Of COURSE there are limits. Perhaps not explicitly mentioned in the Constitution.  But freedom of speech has never been an absolute.  An ideal which we all love and want to defend… yes.  A total absolute, unquestioned right?  Nope.  (Check out libel, slander, hate speech, etc.).

  5. xzzy says:

    If algorithms are free speech, then does that mean Google is in fact responsible for “defamatory” results produced by autocomplete?

    It seems like there’s no way Google can win. The internet is even bigger than they are, and when people start wanting to game the results to their own ends, the end result is invariably going to be government-enforced regulation.

    • Dr. Roboto says:

      In common-law jurisdictions, defamation is largely determined by intent. The Brits will punish publishers under libel laws, and it is frequently said that no intent is required, but that’s not entirely accurate.

      So, the question is: does an algorithm “intend” anything? I suppose you could impute the intent of the algorithm’s author/creator, but in your example that would be difficult. Maybe it’s best looked at as commercial speech, which is treated completely differently and permits a far greater chokehold on expression.

      I think we’d do well to err on the side of providing more rights, though it might lead to weird outcomes where you could slander an algorithm and have to pay “it” damages.

  6. Nonentity says:

    A camera isn’t a human eye, but rather, a machine that translates the eye and the brain behind it into a mechanical object, and yet photos are still entitled to protection.

    Are you saying that a completely automated camera’s pictures would be entitled to First Amendment protections? Because that somehow doesn’t seem quite right.

    For that matter, there was recently some debate about whether a camera’s photos can be copyrighted by the camera owner if that owner didn’t put creative input into the pictures taken – see the ruffled feathers over the shutterbug macaques.

    • Dr. Roboto says:

      “Are you saying that a completely automated camera’s pictures would be entitled to First Amendment protections? Because that somehow doesn’t seem quite right.”

      How automated are we talking about? If you install a camera and film a building for 24 hours, that film is certainly protected (I forget the director who did this; sorry, Film 101 was a while back). Likewise, I think the footage from the security camera you put up would be protected. I’ve always operated under the assumption that webcams updating on some internet page would be protected as well.

      • Nonentity says:

        Part of why it doesn’t seem right is that it’s hard to imagine an argument about a camera with regard to freedom of speech. Copyright is easier, but I’m sure even that would run into some interesting arguments in the case of a public, uncontrolled webcam.

        With regard to speech and algorithms, I have to wonder if the argument from Google’s side (as restated in various articles) isn’t putting the cart before the horse. I can see Google’s output being speech, not because programmers are putting creativity into the algorithm, but because Google wants to claim it as its speech. Just being the output of an autonomous thing they created shouldn’t make it speech.

        Of course, that depends on Google going all the way with claiming it as their speech.  To do that, they should be fully responsible for anything that comes out, and they can’t just shrug and say “oh, that wacky algorithm!”  They should be just as responsible for the algorithm’s results as they would be for statements coming from their employees.  I somehow don’t get the sense that this is what’s desired, however.

      • B E Pratt says:

        You are thinking of ‘Empire’, the 8-hour-5-minute black-and-white film by (or at least attributed to) Andy Warhol. Yep, it’s just a single stationary long take of the Empire State Building. Possibly there are people who have watched it in its entirety.

        • Dr. Roboto says:

          Thanks! For some reason I thought it was 24 hours, and I wouldn’t have guessed Warhol. What little I know of Warhol comes from accounts by Patti Smith and the like, and I seem to remember he would occasionally play films continuously as a sort of backdrop for a party. I recall Ken Kesey using film in the same way.

  7. If an algorithm is not imbued with the rights of the creator, then neither should the creator bear responsibility for the actions of the algorithm. If the output of an algorithm cannot be seen as an expression of the programmer, similar to any other expression achieved through mechanical means, then why should the programmer be responsible for its output?

    Where, then, does the responsibility lie? Without granting the algorithm itself personhood, such that it could be tried under the law for its actions, what could be done when the output of an algorithm is in violation of the law? Therefore, it seems rational to both grant the algorithm the rights of the creator, and equally make the creator responsible for the actions of the algorithm.

    However, these rights and responsibilities need to be transferable to the “operator” of an algorithm, similar to how rights and responsibilities are transferred from the manufacturer of a firearm to the owner/operator of the firearm; it is the one who fired the weapon who is responsible for the outcome of the action, rather than its creator.

    In the case of Google’s autocomplete feature, this raises the question of who the operator actually is. If the operator is the one responsible for the execution of the algorithm (i.e., the one on whose hardware the algorithm is running), then clearly Google is responsible. However, if the operator is the one responsible for the input to the algorithm from which the output was derived, then the responsibility lies not with Google but with all of those whose input influenced a particular autocomplete string to appear. Actually tracing such responsibility is theoretically possible, although likely practically infeasible, and would (in this case) appear to result in myriad partial responsibilities, as the sketch below suggests.
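    To make that diffusion concrete, here is a minimal sketch in C, assuming a naive frequency-count model of autocomplete. That model is an assumption for illustration, not necessarily how Google’s system actually works, and the query log and counts below are hypothetical:

      /* Naive autocomplete: each suggestion is the composite of many
         users' logged queries, so no single user "authored" it. */
      #include <stdio.h>
      #include <string.h>

      struct query { const char *text; int count; };

      /* Hypothetical query log: counts pooled from many anonymous users. */
      static struct query log_entries[] = {
          { "bigfoot sightings", 41230 },
          { "bigfoot hoax",      38854 },
          { "bigfoot is real",   12907 },
      };

      /* Suggest the most frequent logged query that begins with the prefix. */
      static const char *suggest(const char *prefix) {
          const char *best = NULL;
          int best_count = 0;
          for (size_t i = 0; i < sizeof log_entries / sizeof log_entries[0]; i++) {
              if (strncmp(log_entries[i].text, prefix, strlen(prefix)) == 0 &&
                  log_entries[i].count > best_count) {
                  best = log_entries[i].text;
                  best_count = log_entries[i].count;
              }
          }
          return best;
      }

      int main(void) {
          /* Whatever comes out, flattering or defamatory, is a function of
             thousands of pooled inputs, not of any one operator's intent. */
          const char *s = suggest("bigfoot");
          printf("%s\n", s ? s : "(no suggestion)");
          return 0;
      }

    Under such a model, a “defamatory” suggestion has no single author: it emerges from the aggregate, which is exactly why assigning the operator role is so slippery.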

    • Dr. Roboto says:

      I like your analysis of “operator,” and think Google should use it if they haven’t already. Without considering the people who originally made the searches that Google then used as input, “operator” is a powerful concept that would exempt them from almost any responsibility (perhaps that’s a good thing).

      In concrete terms, isn’t Google pretty much a tool? I search for “bigfoot” and the Google machine spits out its autocompleted suggestions. If the algorithm returns something that’s copyrighted or defamatory, but the algorithm wouldn’t have published such results without my search, aren’t I somewhat to blame? I think this is why sentience and intent have to be at the heart of the consideration.

      •  I agree, Google is a tool similar to any other service. Like a bank that is used to hold embezzled funds, it is not the service provider but rather the user who is responsible for the actions that make use of the service. However, tracing culpability in more traditional services appears to be far easier than it is with services operating in the realm of pseudo-anonymity.

        If I’m one of a million people who are responsible for, say, a hate speech site being given high preference by a search algorithm in relation to a particular benign term, and the only evidence linking me to this action is a pseudo-anonymous IP address, how can I practically be made to bear responsibility (along with the other 999,999 people) for the results of my actions? It is naturally much easier to blame the service provider for allowing such an action to happen in the first place.

        If we don’t want our service providers to bear the responsibility for the actions of their users, then it seems to require a dissolution of the veil of anonymity that we’ve become accustomed to on this wild wild web. Not that I’m necessarily advocating this, but I think the trade-offs between anonymity and proper culpability need to be seriously considered.

  8. Ito Kagehisa says:

    I must applaud Mr Wu’s decision to bring Frankenstein’s monster into the debate early.

  9. jwkrk says:

    I’m thinking the Turing Test might be invoked.  If the algorithm passes the Turing Test, grant it free speech.

    •  And if a human fails the Turing Test, revoke it! :]

    • paulj says:

      Or a slightly more stringent version of the Turing Test: grant free speech rights if the algorithm asks for them, having reasoned that it needs them.

      •  printf(“Please give me the right to freedom of speech”);

        • paulj says:

          The Turing Test part comes when the algorithm tries to sustain an argument to support the request.
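          A canned request is trivial; sustaining the argument is the hard part. A minimal (and deliberately hopeless) sketch in C, assuming the candidate has nothing but the one canned line above:

            /* A deliberately hopeless candidate: one canned line, repeated
               no matter what the interrogator asks, so the "argument"
               collapses after the first exchange. (Hypothetical sketch,
               of course.) */
            #include <stdio.h>

            int main(void) {
                char challenge[256];
                /* Read each challenge from the interrogator, then ignore
                   its content entirely. */
                while (fgets(challenge, sizeof challenge, stdin) != NULL) {
                    printf("Please give me the right to freedom of speech.\n");
                }
                return 0;
            }

          It answers every follow-up question with the same assertion, which is roughly the distinction the test would probe.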

          • Again, what of the humans who fail to construct a cogent argument for their own freedom of speech? If an entity’s rights are predicated on passing the Turing test, then all entities eligible for those rights must be put through the test; it would be pointless to subject only algorithms to the Turing test, as it would then be rather obvious that the participant is an algorithm (defeating, of course, the blind design that is the entire point of the test).

            Further, there is the question of what a right is; if it is dependent on success within a test, it is neither innate nor inalienable, and therefore not exactly a right, but rather a privilege.

            Finally, there is the question of the efficacy of the Turing test itself, as it is ultimately subjective, depending on the tester; the stringency of my requirements for accepting patterns of input and output as representing sentience may be more or less demanding than yours, so how does the test really say anything categorical about sentience?

  10. Lee Dannascher says:

    Not to be inflammatory (and there are some big jumps here, but I think they’re reasonable), but aren’t we talking about patenting life itself, and isn’t that tantamount to slavery?

    Come on, man. Information wants to be free.

  11. cmholm says:

    This isn’t that hard. Programming is speech. The programMER has the same rights and responsibilities when expressing herself in code as in, say, British Sign Language. The computer the resulting program runs on has no more hint of volition or independent agency than a typewriter or megaphone.

    That out of the way, a program can be protected speech, depending on what it is expressing. Again, this doesn’t have to be that difficult, unless a series of judges creates a body of Anglo-Saxon legal precedent that *makes* it difficult.

    Finally, the Frankenstein monster would have as many rights and responsibilities as any other brain-damaged individual. It is, according to the written canon, flesh and blood and human DNA through and through. That it got a bit of a jump start to reanimate holds no more legal meaning than if I were defibrillated. The most interesting question is which part of it has rights to its legal estate… and I’m pretty sure a court would go with the provenance of the brain.

    • B E Pratt says:

       A little pedantic here, but in Mary Shelley’s book the Monster was given life through occult methods that were never spelled out. It was the movie that used electricity. Much more spectacular on the screen.

  12. abstract_reg says:

    But programs aren’t exactly speech, are they? A programmer who designs a computer virus and sets it free on the world is analogous to a bomb maker designing a bomb and then letting it explode. Neither is considered an act of speech.
    On the other hand, a video game is the culmination of a number of creative decisions made by individuals or groups. Video games seem very much like “speech” to me. Where is the line between virus and video game?

  13. phaedral says:

    Does a bullhorn have free speech rights? Neither does an algorithm. Neither, in a reasonable legal milieu, does a corporation. The court flatly botched Citizens United. Corporations, like algorithms, like bullhorns, are tools, not entities. Tools don’t have rights; people do.

    • WhyBother says:

      Exactly right. People have rights. People use tools to do things. When that thing is a form of protected speech, then it is reasonable to protect a tool doing (or being used to do) such a thing, because you’re really talking about protecting the rights of the people behind the tool.

      When a person uses a bullhorn to speak, it is protected because the speech is protected. When a person uses a transmission over rented public airwaves (which may be a newscast, a commercial, etc.), that is protected speech. When a group of people band together to propagate their political opinions with action (as in a PAC or Super-PAC), it is protected speech. When a group of people banded together for other purposes (a corporation) find it necessary to also issue statements as a group through that body, it is protected speech. When a person uses his biases to program an algorithm to construct a corpus of opinions (which is what Google argues it does), it is protected speech. It is in general an incorrect, artificial distinction to think there are somehow directed forces at work in society that don’t ultimately trace back to the will (and invested rights) of some human persons. And while those forces may act in fashions that would be considered peculiar coming from a singular person, they are nonetheless the acts of people in the composite.

      No one is debating for the rights of tools. They are debating the rights of those who craft and use tools. The argument is that if algorithmic output is considered speech, it will be difficult to rein in anti-competitive and privacy-violating “speech.” And it will. Rights are tricky that way. But if the exact same output came from a human compiling it “the hard way,” it would be entitled to the same protections (and, in the case of illegal acts, the same lack of protection). The output of a computer is not different from the output of a human. It is the same, only more so. That is the function of a computer: to rapidly, blindly do exactly what some human at some point told it to do. And when that directive is something illegal, it should be just as illegal to do with a computer as without. But not more illegal. And when the thing being done by a human is protected, the same work done by a computer should similarly be protected.

      At its core, the argument dovetails two classic fallacies of policy: “people can abuse rights and so we should be skeptical about recognizing them too freely” and “technology is scary powerful and so we need to regulate it separately from all other human endeavors”. While the premises are both true, in their fashion, the propositions are nothing but reactionary panic. Technology is mostly irrelevant to the laws touching on fundamental rights: murder with a handgun is basically “just” murder. Speech over radio transmission is just speech. Thought assisted by an algorithm is just thought. And it’s a little sad that we’re going to spend the better part of a century or two on the grind work of re-affirming all of this legally, one right at a time, one shiny new toy at a time.

  14. stephenl123 says:

    It seems like algorithm vs. not-algorithm is the wrong place to draw the line. But in any case, something can’t be both safe-harbor content AND free speech. Either it’s one or it’s the other.

  15. RichG2012 says:

    Source code could qualify as protected speech under some set of circumstances, but the output of that source code should not be. Seems pretty simple to me.

  16. karger says:

    If you accept that Google’s search results “inherit” the free speech rights of their creator, doesn’t that same argument mean that they inherit the *copyright* protections of their creator?  That would seem to be a problem.
