Human behavior will always find a way to bend and distort even the most meticulously planned processes. Even Google’s search display algorithm is not immune to the gap between how things should work and how they actually function in reality.
That is why Google applies a human layer of quality assurance to its results. These are real people, conducting real searches, known as ‘Search Quality Evaluators’, and they work from a 164-page guide on how to assess the quality of the results their searches display.
This is a three-part blog series in which we uncover the role of these search raters, how they judge content, and the effect their ratings have on your ranking and reputation.
In part one, we’re reviewing the process Google search raters follow and how they contribute to the ranking system.
In mid-2018, Google released a core algorithm update. There’s nothing surprising in this: Google releases major changes two or three times per year, plus multiple smaller changes every day.
What was of note was that this update roughly coincided with an update to Google’s Quality Rater Guidelines, which included new information regarding the assessment of “low quality” and “lowest quality” pages. Some websites that suffered a dip in their rankings after the core update were convinced that Google had marked their pages as low-quality and dumped them down the rankings.
The algorithm is complicated and dense, but for publishers it was at least consistent and broadly rational. Humans are subjective and emotional and, it seemed, had the power to undo years of work with a single keystroke.
But this was not the case. Danny Sullivan, Google’s public Search Liaison, explicitly stated that ‘rater data’ isn’t a component of the algorithm and that pages that experienced dips did so because previously ‘under-rewarded’ pages were being pushed up the rankings.
The aim is to use real, human search raters to improve the accuracy of the machine-learning algorithm that delivers the results in the future. Humans can understand the intent behind searches and help display more accurate results that take context into account.
One of the first steps raters take is to determine whether a page can be defined as a YMYL — Your Money or Your Life — page.
This category is used for pages that concern financial matters or health topics. The quality of these pages is paramount, and it appears to be a key priority that only the highest-quality content, from the highest-quality sources, appears for YMYL searches.
Google raters will also assess how credible the author of the content is. In fact, the guidelines state that YMYL pages that don’t display any author information are de facto rated “Lowest” quality, and the same applies to pages where the author has a negative reputation.
These ratings are then combined into an overall page quality score, which measures how well Google delivers results for the search in context.
It’s no surprise that Google wants to serve the results that are most relevant to users. But the news regarding low-quality pages seemed to suggest that only high-ranking companies, with a global reputation and teams of experts, would be able to produce content that displays for future searches. It wasn’t instantly clear that any page can be judged to have a high level of E.A.T. and meet the searcher’s needs.
Fortunately, search context remains everything and any type of page can meet a search need and provide quality content, regardless of the subject. It all depends on how the process is applied.
Search raters are given specific searches to perform and then measure the quality of the search results using two sliding scales:
The Needs Met rating is based on both the query and the result, and ratings for this range from “Fully Meets” to “Fails to Meet.” The highest rating for this criterion means that all users would be immediately and fully satisfied by the result and would not need to view other results.
‘In other words, the Fully Meets rating should be reserved for results that are the “complete and perfect response or answer” so that no other results are necessary for all or almost all users to be fully satisfied.’
Any site that is not mobile-friendly will get “Fails to Meet.” Again, if your site is not mobile-friendly, you need to make this an immediate priority.
It’s important to remember that the purpose of this review is to apply the implied context that humans take for granted to these searches.
A user searching for restaurant reviews is most likely going to want results that concern locations in their area. An award-winning review of a restaurant in the New York Times might be expert, authoritative, and trustworthy, but it is not going to meet the needs of someone searching for a good place for lunch in California.
‘Results that completely fail to meet the user intent, such as through a lack of attention to an aspect of the query (or the user’s location), are defined as Fails to Meet. This rating may also be used for results that are extremely low quality, have very stale or outdated information, are nearly impossible to use on a mobile device, etc.’
It’s important to remember that a high-quality page can still fail to meet user expectations. This is what Google wants to address. It is not the goal to de-rank pages, but to make the search results more aligned with the searcher’s intended actions.
As Google handles searches on every corner of every topic, it needs to define page quality in relation to diverse subjects and diverse objectives. It can’t limit search results to a handful of professional pages because the Internet is too varied. What classifies as expert content can change depending on the subject and the objective of the searcher.
For example, a site with meaningful main content, a pleasing layout, and a certain level of authority in the subject can be classed as the highest level of E.A.T. even if the topic is subjective or esoteric.
This is defined by Google as ‘Everyday Expertise’: a high level of knowledge in subjects that may not require qualifications or a well-known author reputation. Entertainment pages, hobbies, satire, personal views, and unique content, such as About pages, can all register the highest rating for E.A.T. As Google puts it,
‘Remember that we are not just talking about formal expertise. High-quality pages involve time, effort, expertise, and talent/skill. Sharing personal experience is a form of everyday expertise.’
By implementing this review, Google is aiming to make its search results more practical by applying a human layer that takes all of the following factors into account:
● The Purpose of the Page
● Expertise, Authoritativeness, Trustworthiness
● Main Content Quality and Amount
● Website Information
● Website Reputation
These factors allow raters to apply an overall page quality score.
It has long been possible to produce content that looks good to web crawlers but serves no real purpose for the searcher. The goal of the search raters is to prevent pages from displaying for inexact searches, so users don’t have to search again to find the information they were actually looking for rather than settling for a ‘close-enough’ result.
Google is explicit that this is not a directive to make changes to your page immediately, but a chance for Google to make your page display for more accurate searches.
Despite these assurances, it’s hard to imagine a scenario in which content—judged by Google to be low quality—will have a successful long-term future. Fortunately, the updates take time and Google is committed to helping websites understand their criteria before rankings are affected.
What concerns publishers is that they will not be able to reach the highest rating, as their content covers a wide range of topics. These topics often fall under YMYL, which carries even tighter definitions of page quality.
However, there are distinctions between content, news, information, and advice. A person searching for news or entertainment has a different intention from a searcher looking for practical advice. By having human reviewers, publisher pages will be judged in relation to people searching for news, entertainment, or opinionated content. This should protect relevant, quality pages from being downgraded in search results.
In this example from Google, you can see how a subject with a high hierarchy of knowledge, such as space travel, can be delivered by a news publisher. The human element, the search rater, understands that the quality of the page will satisfy the needs of the searcher, and that it does not require in-depth technical detail to be marked as high-quality.
What is prioritized in this case is the relevancy of the news. Older news may still be marked as high-quality content, yet it fails to meet the needs of the searcher because it is no longer relevant.
What this update tells publishers is that Google is driving hard to improve its user experience. While content won’t suffer right now, publishers who remain unaware of the search rating criteria may well see their content suffer in coming updates to the search algorithm.
Make sure you present the highest-quality content from highly reputable sources. The higher the perceived value of your site, the higher the quality ratings will be. While this doesn’t translate directly into higher rankings, doing well with regards to these guidelines can translate into the type of content Google wants to serve higher in the search results.
In the next post in the series, we’ll examine the rules Google uses to define the lowest quality pages and what it means if your content receives this rating.
This is part I. Click here to read part II.