What’s the fuss about?
In a recent report, New Scientist mentions that Google might implement a fact-based algorithm that gives preference to websites with accurate and trustworthy information. Google already introduced its Knowledge Graph in mid-2012, which lets it treat your query (a string) as an object, and it has been extracting facts related to these objects. This database will very likely be known as the Knowledge Vault in future, as that is what the paper calls it. The on-page signal this approach evaluates for a page is Knowledge-Based Trust, or simply KBT.
Personally, I was a bit unsure such an update would ever go live, as a large majority of multinationals will not be happy with it. But it is only fair to small businesses and sole traders, given that Google has launched updates in the past that favoured the big brands. In this article, I shall refer to this proposed update as the KBT update and the Knowledge-Based Trust update interchangeably.
A word on the existing form of Search Engine Optimisation (SEO), for those who are not familiar with it
Current SEO relies heavily on link building. It has certainly improved compared to the pre-2012 era, but links still remain one of the key factors in evaluating a web page’s authenticity and authority. A page with a large number of authentic, high-quality backlinks is seen as a reputable source and is given priority by Google. This has allowed big corporations to hire an SEO firm and easily outrank their competition.
Here’s a diagram that illustrates an (extremely) simplified version of the existing SEO ranking signals:
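To make the same point in code, here is a deliberately crude sketch of link-based scoring. The pages, linking domains, and "quality" weights are all made up for illustration, and real ranking combines hundreds of signals; the sketch only shows how a heavily linked page wins regardless of how accurate its content is.

```python
# Toy sketch of link-based ranking: score a page by the summed quality
# of its backlinks. All domains and weights below are hypothetical.

def link_score(backlinks):
    """Sum the quality weights of a page's backlinks."""
    return sum(quality for _, quality in backlinks)

pages = {
    "bigbrand.example/widgets": [("news.example", 0.9),
                                 ("blog.example", 0.4),
                                 ("forum.example", 0.2)],
    "smallbiz.example/widgets": [("local.example", 0.3)],
}

# Rank purely by link score -- content accuracy never enters the picture.
ranked = sorted(pages, key=lambda p: link_score(pages[p]), reverse=True)
print(ranked[0])  # prints "bigbrand.example/widgets"
```

Nothing in this model asks whether either page's information is true, which is exactly the gap the KBT proposal targets.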
How the Knowledge-Based Trust, or truth, update will work
Deciding what counts as an accurate piece of information is a very complicated process, sometimes even for humans. The paper by Google’s researchers (PDF), discussed earlier in this article, proposes in great depth a way this will likely work.
In the paper, the researchers have stated that:
We propose using Knowledge-Based Trust (KBT) to estimate source trustworthiness as follows. We extract a plurality of facts from many pages using information extraction techniques. We then jointly estimate the correctness of these facts and the accuracy of the sources using inference in a probabilistic model. Inference is an iterative process, since we believe a source is accurate if its facts are correct, and we believe the facts are correct if they are extracted from an accurate source. We leverage the redundancy of information on the web to break the symmetry. Furthermore, we show how to initialize our estimate of the accuracy of sources based on authoritative information, in order to ensure that this iterative process converges to a good solution.
The following diagram illustrates a simplified version of this truth-estimation process based on Knowledge-Based Trust:
With the proposed KBT update, an indexed page with accurate information but few exogenous signals could easily outrank a page with inaccurate information but tons of backlinks.
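The iterative process quoted from the paper can be sketched in a few lines. This is only a toy voting model, not the paper’s actual probabilistic inference: the sources, data items, claimed values, and the 0.8 accuracy prior are all invented for illustration. Each round, a value’s probability is estimated from the accuracy of the sources asserting it, and each source’s accuracy is then re-estimated from the probabilities of its facts, exactly the mutual dependence the quote describes.

```python
from collections import defaultdict

# Hypothetical extractions: which value each source claims for each item.
claims = {
    "site-a": {"obama_birthplace": "honolulu", "eiffel_height": "330m"},
    "site-b": {"obama_birthplace": "honolulu", "eiffel_height": "300m"},
    "site-c": {"obama_birthplace": "kenya",    "eiffel_height": "330m"},
}

# Initialise source accuracy from a prior (the paper seeds this from
# authoritative information); 0.8 is an arbitrary illustrative choice.
accuracy = {src: 0.8 for src in claims}

for _ in range(20):  # iterate until the estimates stabilise
    # Step 1: weight each claimed value by the accuracy of its sources.
    votes = defaultdict(float)
    for src, facts in claims.items():
        for item, value in facts.items():
            votes[(item, value)] += accuracy[src]

    # Normalise the votes into per-item probabilities over values.
    truth = {}
    for item in {i for facts in claims.values() for i in facts}:
        candidates = {v: w for (i, v), w in votes.items() if i == item}
        total = sum(candidates.values())
        truth[item] = {v: w / total for v, w in candidates.items()}

    # Step 2: a source's accuracy is the mean probability of its facts.
    for src, facts in claims.items():
        accuracy[src] = sum(truth[i][v] for i, v in facts.items()) / len(facts)

# The most probable value per item is the estimated "truth".
best = {item: max(probs, key=probs.get) for item, probs in truth.items()}
```

Because two of the three sources agree on each item, the redundancy breaks the symmetry, as the paper puts it: the majority values win and the sources holding them end up with higher estimated accuracy.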
The biggest winners
Google will have a huge advantage over its competition. At the moment, search engines have no known way to determine the intent behind some exogenous signals. For example, a backlink from a negative review of a product still contributes towards the perceived authenticity of that product’s brand.
Google users, and people in general, will be the biggest winners from this update, as false information will get buried unless its publishers get their facts right.
Small businesses (not all of course)
The biggest losers
I think it’s best if I leave the guesswork in this section to you 🙂
The good for humanity bit
Humans rely on the internet for decision making more than they have ever relied on any other single source. It only makes sense to dish out nothing but the facts, to help people think critically and make well-informed decisions for themselves. Factual information plays a vital role in complex problem solving.
Google’s KBT factor (if added) will enable humans to access that accurate information more readily, giving us better quality sites and benefiting both site owners and users alike.
Some valid concerns
Apart from some science deniers’ inevitable freak-out, I have some valid concerns regarding this proposed algorithm (most of which came to me while writing this post):
- How easy will it be to change the facts in the Knowledge Vault?
- What happens to an opposing school of thought that has not been able to prove its hypothesis thus far?
- Can someone manipulate these facts by creating fraudulent extraction sources for Google?
- How will an already complex algorithm like this deal with laws like Right to be forgotten (EU)?
I guess only time will tell. If you have any ideas or more questions, please leave a comment below and let us know.