I have frequently written here on the pros and cons of the Communications Decency Act (“CDA”). Without it, no website could permit comments, but by the same token it allows unscrupulous website operators to encourage defamatory postings, and then use those postings to extort payments from the victims.
Because of the latter reality, many have suggested to me that they would like to see the CDA abolished. But a case out of Australia demonstrates just how ridiculous things get without the CDA.
Those Australians are people of few words, so I had to read a number of news accounts to piece together what had occurred. A blogger by the name of Marieke Hardy apparently picked up an anonymous online bully. For reasons never disclosed, Hardy concluded that she had identified her mystery bully, so she posted the following comment on Twitter:
“I name and shame my ‘anonymous’ internet bully. Liberating business! Join me.”
The “tweet” then provided a link back to her blog, and there on the blog she identified Joshua Meggitt as the bully. Problem was, Meggitt was not the bully.
Meggitt sued for defamation. Hardy settled with him, allegedly for around $15,000. But Meggitt wants more. Meggitt is suing Twitter for defamation for the tweet by Hardy.
Do you see how absurd things quickly become without the CDA? If Twitter is responsible for every comment, then to avoid defamation it would have to put a delay on all comments and hire thousands of employees to review them. As each comment passed in front of the reviewer, he or she would need to make a quick decision about whether that comment could possibly be defamatory, and only then clear it for publication.
I want you to imagine that scenario. You are one of the Twitter reviewers. Thankfully Twitter limits each tweet to 140 characters, so there is not much to review, but you must apply your best judgment to each comment to see if anyone could be offended. So up pops the following:
“That J-Lo. She be crazy.”
Do you hit the approve or disapprove button? Was the “crazy” comment meant in a good or bad sense? Even if the person making the comment meant only that the singer Jennifer Lopez is crazy good, if you approve the comment then every person in the world who goes by the name J-Lo could potentially sue for defamation, claiming that the post accuses them of having mental problems.
But the dispute between Hardy and Meggitt takes the scenario to an even more absurd level. Applying those facts to our hypothetical, what you really received was:
“That J-Lo. She be crazy. http://tinyurl.com/48y28m7”
What do you do with THAT?! Twitter requires you to review and approve or deny 120 tweets per hour. To keep your job you have only 30 seconds to make each decision. You quickly click on the link to see why J-Lo is crazy, and you are confronted with a four-and-a-half-minute video! Do you have to watch the entire video to make sure it contains nothing defamatory? You don’t have time for that. REJECTED!
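The arithmetic behind that hypothetical is worth making explicit. Note that the 120-tweets-per-hour quota and the video length are this post's own illustrative figures, not anything Twitter actually does; this is just a sketch of the reviewer's time budget:

```python
# Back-of-the-envelope math for the hypothetical Twitter reviewer.
# The 120-tweets-per-hour quota is the post's invented figure, not a real policy.
TWEETS_PER_HOUR = 120
seconds_per_tweet = 3600 / TWEETS_PER_HOUR  # time budget per decision

VIDEO_SECONDS = 4.5 * 60  # the linked four-and-a-half-minute video

print(f"Time budget per tweet: {seconds_per_tweet:.0f} seconds")      # 30 seconds
print(f"Length of linked video: {VIDEO_SECONDS:.0f} seconds")         # 270 seconds
print(f"Shortfall if you watch it: {VIDEO_SECONDS - seconds_per_tweet:.0f} seconds")
```

The point of the numbers: a single linked video blows through nine tweets' worth of review time, so the only rational move for the reviewer is to reject anything with a link.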
And here, all the tweeter wanted to do was pass along a great video by J-Lo.
Under the best possible circumstances, Twitter would be relegated to approving only the most milquetoast comments with no possible defamatory implication. In reality, though, Twitter could not possibly exist if it could be held liable for every comment posted.
To all of you who just responded with a resounding, “Who cares about Twitter?”, that’s not really the point. I’m talking big picture here.
It will be very interesting to see how the courts in Australia handle this case.
It seems like every few weeks I have to rail against a lawsuit I read about, wherein the attorney representing the plaintiff brings an action that is clearly barred by the Communications Decency Act. In this latest installment, we find a New York attorney who represents plaintiffs who appear to have a solid case against some individual defendants resulting from some truly horrific defamation on the Internet.
But the attorney could not leave it alone. I can almost see his mind working. He thinks to himself, “these individuals will never be able to pay the judgment, so I’d better look around for some deep pockets.” So, in addition to the individual defendants he names ning.com, wordpress.com, twitter.com, and my personal favorite, godaddy.com.
I sometimes use the analogy that naming an Internet service provider in an Internet defamation action is akin to naming Microsoft as a defendant because the defamer used Word to type the defamatory statements. I never thought any attorney would actually go that far, but the attorney in this case surpasses even that far-flung analogy. I know it’s a foreign concept to some attorneys and their clients, but a defendant should only be held liable for damages if he, she or it has done something wrong. Here, twitter.com is named because the defendants sent out “tweets” sending their followers to the defamatory content. Godaddy.com is named because the defendants obtained the domain name there, and then set it to forward to their blog on wordpress.com. How could these companies possibly be liable? Well, according to plaintiffs and their attorney, they are liable because what the defendants did amounted to an “irresponsible use of technology.”
Apparently, in this attorney’s world, we have gone beyond even requiring that the website provider check the content of every web page posted on its server. Now it is also the obligation of twitter.com to review and authorize every tweet that is sent, and godaddy.com must view with suspicion every account that sets a domain name to forward elsewhere. Clearly there could be no Internet if such duty and liability could be imposed.
In (very slight) defense of the attorney, he does allege that these companies were informed of the nefarious use of their services, and did nothing to block the content. Among the public there is an urban legend that a company becomes liable once it is informed that it is being used to distribute the defamatory content, but an attorney should know better.
I’ve explained here several times that the Communications Decency Act is a necessary evil because you could never have open forums for discussion on the Internet if the operators of the websites were required to read and approve every message posted. Perhaps the Amazons of the world would have the resources to hire a huge staff to monitor all postings, but any popular discussion site that started to attract thousands of visitors would likely be required to stop offering a public forum if it became responsible for the things posted by visitors.
Some attorneys still don’t understand this reality. Take the case of Richard M. Berman. Poor Richard was shot by someone using a handgun purchased through a for-sale ad posted on Craigslist. He hired attorney Paul B. Dalnocky, who sued Craigslist for more than $10 million, claiming it was responsible for the handgun ending up in the bad guy’s hands. The civil complaint alleged Craigslist “is either unable or unwilling to allocate the necessary resources to monitor, police, maintain and properly supervise the goods and services” sold on its site. When interviewed for an article on Law.com, attorney Dalnocky said, “We weren’t seeing Craigslist as a publisher — we were seeing it as a regular business that should have monitored its business better. I mean, how can you run a business with millions of ads and have only 25 employees monitoring it?”
No, Mr. Dalnocky, the question is, how would a service like Craigslist be possible if attorneys could sue over things posted in those millions of ads? The answer is it wouldn’t be possible. You allege “millions” of ads are posted on Craigslist. Let’s assume a person could review 1000 ads during a work day. That’s probably optimistic, because it means reviewing more than two ads per minute (assuming an eight-hour work day with two 15-minute breaks), and some ads go on for pages. But let’s go with 1000 just to keep the numbers simple. At that rate, Craigslist would need to hire 1000 employees for every one million ads posted. It’s going to be very difficult for old Craig to maintain the business model that lets me post free ads for my 8-track tapes if he is required to hire thousands of employees.
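The staffing arithmetic above can be laid out directly. The 1000-ads-per-reviewer rate, the eight-hour day with two 15-minute breaks, and the round one-million ad count are all this post's own assumptions, chosen to keep the numbers simple:

```python
# Hypothetical moderation-staffing math, using the post's own assumptions.
ADS_POSTED = 1_000_000           # "millions" of ads; take one million for round numbers
WORKDAY_MINUTES = 8 * 60 - 30    # eight-hour day minus two 15-minute breaks = 450 min
ADS_PER_REVIEWER_DAY = 1000      # assumed (optimistic) review rate per employee

ads_per_minute = ADS_PER_REVIEWER_DAY / WORKDAY_MINUTES  # pace each reviewer must sustain
reviewers_needed = ADS_POSTED // ADS_PER_REVIEWER_DAY    # headcount per million ads

print(f"Required pace: {ads_per_minute:.1f} ads per minute")   # ~2.2 ads per minute
print(f"Reviewers per million ads: {reviewers_needed}")        # 1000
```

Even under these generous assumptions, the headcount scales linearly with ad volume, which is the whole point: a free-listings business model cannot absorb a per-ad review obligation.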
And, Mr. Dalnocky, what would those thousands of employees be looking for, exactly? Guns can be legally sold, and I did not see anything in the court’s decision about any alleged illegality of the gun sale in question. Rather, your complaint alleged that Craigslist was liable because it breached its “duty of care to ensure that inherently hazardous objects, such as handguns, did not come into the hands of . . . individuals, such as Mr. Ortiz.” (Ortiz was alleged to have shot Richard Berman.) What, in that ad, would have put the reviewer on notice that this gun sale was going to end badly?
The attorney representing Craigslist is no doubt a subscriber to the Internet Defamation Blog, and therefore knew that the Communications Decency Act (CDA) is not limited only to claims for defamation. Craigslist moved for dismissal under §230, which states that no “provider or user of an interactive computer service shall be treated as a publisher or speaker of any information provided by another information content provider,” and that no “cause of action may be brought and liability imposed under any State law that is inconsistent with this section.”
The court properly dismissed the case under the CDA because, let’s say it all together, a website operator cannot be held liable for comments (or ads) posted by third parties, and is not liable for failing to somehow monitor those comments (or ads). One of the earliest cases involving the CDA was an action against eBay. Someone sued, claiming that eBay should be held liable for the counterfeit items that were being posted and sold, trying to impose on it an obligation to review and investigate every ad. eBay prevailed in that action, and Craigslist properly prevailed in this one.
The full court decision can be found here.
I think there is little doubt that someday a court will permit a circumvention of the Communications Decency Act. As explained here numerous times, the CDA makes a website or website provider immune from liability for content posted by others. But there are constant skirmishes at the fringe. For example, if the website somehow “highlights” the posting or adds its own editorial comments, does it then become responsible for the content? What if a court orders the poster to remove the defamatory content, but the site refuses to cooperate in the process? Can’t the argument then be made that the website operator is then publishing the content since the original poster has disowned it? And while the CDA contemplates that the original poster will be responsible for the defamatory content, what if the person who posted the content dies and the victim is left with no remedy?
This last hypothetical is precisely the issue that is presented by a case currently pending in Illinois. The mother of US Olympic speedskater Shani Davis is suing Google for refusing to remove a blog posting that was made by a user who has since died. There is no doubt that under normal circumstances, Google would be immune from liability under the CDA. But the blogger, Sean Healy, died of cancer a year after publishing the article in question.
The post by Healy was entitled, “Memo to Cherie Davis,” and claims that the speedskater’s mother made disparaging comments about the views of the US Speedskating Federation. Cherie Davis claims in her suit that she made no such comment. She further claims that because Healy cannot be made to answer in damages and/or remove the content, Google must step up and make things right with this now dormant blog, that just sits on Google’s server, continuing to defame plaintiff.
I’m hopeful that this will be the case that opens a tiny crack in the CDA. I applaud the CDA for protecting websites from liability. As I have explained here before, if website operators became liable for the content posted by others, none could risk having a public discussion board. But I have always contended that the open marketplace of ideas can still exist even if website operators are required to cooperate with court orders. If a court finds that content is defamatory, there is no reason that a site should fight to maintain that content. The website will be protected by the necessity of a plaintiff having to go to court to have that determination made. Website operators contend even that is too onerous, since they will then have to remove the content, but this is belied by the fact that website hosts, including Google, already comply with demands made under the DMCA to remove copyrighted material.
I’ll keep you posted on the results of this case.
I’m surprised I don’t get more of these calls.
A caller to my office today was frustrated because many of her emails are ending up in the recipients’ spam folders. It happens so often that she now has no confidence that any given message has been received. She routinely follows up with a phone call to confirm receipt, and often must lead the intended recipient through the process of checking their spam folder for the missing message. She said it has become enough of a problem that it is interfering with her business.
I occasionally experience this myself. A new client is eager to get me going on their urgent case, so I quickly prepare and email a fee agreement. Days later they call, frustrated because they still haven’t received the agreement, and I have to direct them to the spam folder. As often as not, they were not even aware they had a spam folder. (Yes, I could request receipt notification, but that is imprecise at best.)
Back to the caller. She was frustrated because obviously someone out there in cyberspace is designating her as a spammer, and she wanted to know if she could sue for defamation. Allow me to wax nostalgic, because this exact issue arose in one of my earlier Internet cases.
The Communications Decency Act (“CDA”) immunizes “a provider . . . of an interactive computer service” who makes available to “others the technical means to restrict access to material . . . the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” My client in the earlier case had created a spam filter that was widely used by Internet service providers. A business ended up being designated as a potential spammer, and all of a sudden its messages were being blocked (although it had other email addresses that were not being blocked). The business owner sued my client, and he hired me.
Try as I might, I could not get plaintiff’s counsel to understand the plain meaning of the CDA. He conceded that the creator of a spam filter could be protected, but contended that the spam filter had to be content based. In other words, he claimed that you could block emails containing pornographic pictures, for example, but you could not block the spammer sending the emails, since he might send something other than porn.
This was a nonsensical position. The CDA says that in addition to the obvious stuff like porn, you can also block email you find “harassing or otherwise objectionable.” Anything can be harassing, like those stupid pedi-paw emails that are currently flooding my inbox. Not surprisingly, I got the judge to throw out the case, and the Court of Appeal agreed.
Bottom line: If there is something about your emails that is triggering spam filters (maybe changing your name to Cialis Viagra wasn’t as clever as you thought), figure out which ISPs take exception to you and why. You may be able to fix the problem on your end, and if not, they may voluntarily tweak the filters. But don’t think you can sue them.
There are still many attorneys making money representing clients on Internet defamation cases that can’t be won. They are either ignorant of the law, or ignoring it. My firm has been schooling others on the Communications Decency Act for years. See, for example, Winning the Fight for Freedom of Expression on the Internet and A Victory Against Spam. But a number of firms still need an education. A case just came down in New York, where someone tried to sue a web host for the comments posted on his website.
Let’s all say it together. If a website is created that allows visitors to post their comments, under the Communications Decency Act the host of that website cannot be held liable for any defamatory remarks that others post. The law is very black and white in this area. The myth still continues that if the defamed party makes the website operator aware of the defamatory material, he somehow becomes liable for failing to take it down. That is simply not true.
There is a lot of abuse on the Internet, and ideally a web host should respond to requests to remove defamatory posts, but if that were made the law then the ability to host a community forum would disappear in almost all instances.
Consider a helpful, innocent person who decides to start a restaurant forum, discussing the local businesses. Someone leaves a post claiming that a local sushi restaurant is using old fish. The sushi restaurant contacts the host, and insists that the post be taken down, claiming they use nothing but fresh fish. How would our hypothetical web host go about investigating such a claim? Is he required to go to the restaurant and inspect the receipts to determine the freshness of the fish? Must he insist that the poster provide proof of the old fish?
Most likely, if faced with civil liability, the host would simply take down the post. And when reviewing all the protests became too time consuming, the forum would disappear. The day Congress passes a law requiring website operators to verify all the claims made by visitors to their sites is the day that most free speech ends on the Internet. Many would prefer that, but in my opinion the open approach is the better approach.