Google Autocomplete and the Potential for Defamation

Autocomplete is a feature offered by many search engines that uses an algorithm to display suggested completions as a user types a query. These suggestions are based on the user’s search history, popular search queries, and a number of other objective factors. Autocomplete is an extremely useful search tool, as it can accelerate and refine searches in ways users might not expect. Despite its benefits, however, autocomplete has been the subject of controversy. It can be potentially defamatory if your name or company is autocompleted with something negative, and even if the negative information is false, the suggestion alone has the power to destroy your reputation.

Autocomplete was originally implemented to help people with disabilities increase their typing speed and reduce the number of keystrokes needed to complete a word or sentence. It quickly became clear, however, that autocomplete served a purpose for all Internet users. When a user inputs the first letter or word into the search bar, autocomplete predicts one or more possible words or phrases to complete the query. If the user intends to type what appears in the list, he can select it; if not, he types the next letter of the word. As each additional letter is entered into the search box, autocomplete automatically updates the search suggestions in the drop-down menu. Once the word or phrase the user intends to search appears, he can select it and press “Enter” to complete the search.
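
To make the mechanics concrete, the following is a minimal sketch in Python of how a suggestion list might narrow with each keystroke. The candidate queries and the `suggest` function are hypothetical illustrations under simple assumptions, not Google’s actual implementation.

```python
# A minimal sketch of prefix-based suggestion matching, assuming a simple
# in-memory list of candidate queries. Production systems are far more
# sophisticated; this only illustrates the narrowing behavior described above.
from typing import List

def suggest(prefix: str, candidates: List[str], limit: int = 10) -> List[str]:
    """Return up to `limit` candidate queries that extend the typed prefix."""
    p = prefix.lower()
    return [c for c in candidates if c.lower().startswith(p)][:limit]

candidates = ["acme corp reviews", "acme corp careers", "acme bird seed"]

# As each additional letter is typed, the suggestion list changes:
print(suggest("acme", candidates))    # all three candidates match
print(suggest("acme c", candidates))  # only the "acme corp ..." queries remain
```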

Autocomplete search suggestions are generated by an algorithm that takes into account a number of objective factors, such as a user’s previous searches and popular search queries.  Other criteria are also factored into the ranking, such as the user’s location and a search term’s “freshness.” In addition, the algorithm automatically detects and filters out a small set of search terms related to pornography, violence, hate speech, and copyright infringement.
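
The Python sketch below illustrates how such factors might be combined into a ranking and how a filtered-term list could be applied before suggestions are displayed. The weights, signal names, and blocklist are invented for illustration; Google’s actual algorithm is proprietary and not public.

```python
# A hedged sketch of scoring suggestion candidates by blending several signals
# and filtering a blocklist. All weights, field names, and the blocklist are
# assumptions made for illustration, not Google's algorithm.
from dataclasses import dataclass
from typing import List

BLOCKLIST = {"violence", "hate"}  # hypothetical filtered terms

@dataclass
class Candidate:
    text: str
    popularity: float        # global query frequency, normalized to [0, 1]
    history_affinity: float  # match against this user's previous searches
    local_affinity: float    # popularity of the query near the user's location
    freshness: float         # boost for recently trending queries

def score(c: Candidate) -> float:
    # Weighted blend of the factors named in the text; weights are invented.
    return (0.4 * c.popularity + 0.3 * c.history_affinity
            + 0.2 * c.local_affinity + 0.1 * c.freshness)

def rank(candidates: List[Candidate], limit: int = 10) -> List[str]:
    # Filter out candidates containing blocked terms, then sort by score.
    allowed = [c for c in candidates
               if not any(term in c.text for term in BLOCKLIST)]
    return [c.text for c in sorted(allowed, key=score, reverse=True)[:limit]]
```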

Around the world, Google has been subjected to defamation lawsuits based on the content that automatically appears as Internet users input their search queries into Google’s search box. Even though the content in Google’s search suggestions is based mainly on information inputted by third parties, plaintiffs have sued Google on the grounds that it “controls,” “creates,” or “publishes” the information through autocomplete. Plaintiffs’ arguments rest on the fact that Google uses an algorithm to aggregate, synthesize, and reconstitute input query data before publishing it in its autocomplete search suggestions, and that Google consistently updates and improves that algorithm. Plaintiffs argue that by using artificial intelligence, which Google itself creates and maintains, to actively facilitate searches, Google does more than simply convey third-party information and therefore should be held liable for any defamatory content displayed. Most foreign courts have accepted this argument and found Google liable, forcing it either to remove the defamatory material upon request or to otherwise modify its autocomplete algorithm.

In the most recent case, a Hong Kong court ruled that plaintiff Albert Yeung Sau-shing, founder and chairman of the Hong Kong-based conglomerate Emperor Group, could sue Google for defamation based on its autocomplete suggestions. In that case, a search of Yeung Sau-shing’s name automatically suggested adding the word “triad,” a term associated with organized crime. In response, Google contended that it was not the “publisher” of its autocomplete results but rather a “mere passive facilitator of the information,” since the automatic search processes required no human input, operation, or manipulation. Yeung Sau-shing countered that even if Google’s autocomplete function were automated, Google could still be liable as the “publisher” of the defamatory search suggestions because it actively enabled their publication. The court found that there was a “good arguable case” that Google was a publisher with respect to the contents appearing in its search suggestions and therefore allowed Yeung Sau-shing’s case to proceed to trial.

The United States has yet to adjudicate whether a search engine may be liable for defamation based on content algorithmically generated by its autocomplete function. In the United States, search engines are generally immune from liability pursuant to § 230 of the Communications Decency Act. Section 230 provides: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The statute defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet.” Under the statute, an “information content provider” is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” The former definition has been applied to various Internet providers, websites, and search engines, and § 230 is generally understood to immunize these services from civil liability for claims arising from user-generated content.

Although § 230’s protections are broad in scope, they are not absolute. Section 230 shields service providers from liability for defamatory content produced by third-party information content providers. However, service providers may lose this immunity if a court finds that they are responsible for “creating or developing” the content provided through their services. For example, in 2008, the Ninth Circuit held that the defendant, Roommates.com (hereinafter “Roommates”), acted as a direct publisher of materials when it categorized and directed users to specific information. In that case, Roommates operated a website that helped people find roommates online. To use the site, subscribers had to create a profile based on basic information, including name, location, gender, sexual orientation, etc. They had to provide this information through a series of questions with pre-populated answers. In addition to basic information, the website also allowed users to post any other information about themselves or their roommate preferences in an “Additional Comments” section. Plaintiff Fair Housing Council brought an action against the site, alleging that the site violated fair housing laws by allowing discrimination through its questionnaires and comments.

In response, Roommates argued that under § 230’s safe harbor provision, it could not be liable for content posted on the website by third parties, as it did not create or develop any of the discriminatory information. The lower court agreed, holding that § 230 barred the Fair Housing Council’s claim. On appeal, the Ninth Circuit affirmed the lower court’s decision to grant Roommates immunity for the information provided in the Additional Comments section. It found that Roommates was not at all responsible for the creation or development of the content in that section; it did not alter users’ posts or provide any guidance as to what information the responses should contain. Therefore, it was immune from liability for any defamatory statements made in that section. However, the Ninth Circuit reversed the lower court’s decision regarding the information provided in the drop-down menus. The court held that by generating a list of pre-populated answers and requiring specific information for user profiles, Roommates acted as an information content provider rather than a “passive transmitter” of the information, and therefore could not claim protection under § 230.

Given the vast number of people whom Google’s search suggestions could potentially affect, it is only a matter of time before a suit of this kind reaches a United States court. If a similar lawsuit is brought against Google in the United States, a court will have to decide whether to grant Google immunity. If Google is held responsible for the creation or development of its autocomplete search suggestions, it will either have to rework its autocomplete function to ensure that its suggestions contain no potentially defamatory information or eliminate the autocomplete function altogether. Either alternative would contravene Congress’s purpose behind § 230 and eliminate an extremely useful and popular search tool. Therefore, courts should adopt a broad construction of § 230 that allows Google to avoid liability for the results of its autocomplete algorithm.