How Google Works
Discover how Google crawls, indexes and ranks your website

Chapter #2: How Google Works (Crawling, Indexing, and Ranking)
We left the beginner stuff behind in chapter #1; now it’s time for a crash course in the technical side of Google. This side can be a little dry, but read on if you want to understand what’s happening behind the scenes. Or, jump straight to chapter #3: Keyword Research.
None of the technical stuff for me thanks – show me how to use keywords to improve my rankings
Technology can feel like magic. You ask Google for a product, service or piece of information and within seconds it’s in front of you. While the process is almost instant, Google’s algorithm is a complex, highly sophisticated system that is discovering, understanding and organising all of the world’s content every day.
When you ask Google a question, it’s Google’s job to find the most relevant answer.
In order for YOUR business to be that answer, you need to be found by Google. If your website hasn’t been optimised for SEO to be discovered by Google, you’ve lost before you even started.
In this chapter we’ll teach you what it takes to be discovered by Google, and break down the mystery behind Google’s ranking algorithm. Settle in, you’re about to become extremely knowledgeable…
How Do Search Engines like Google Work?
All search engines (Google, Bing, Yahoo) have 3 primary functions.
- 1) Crawl: Crawling is when a search engine sends a bot to a URL to “read” the web page.
- 2) Index: Once a URL has been crawled, the content is stored and organised in the search engine’s index. You cannot rank on Google without being crawled and indexed.
- 3) Rank: Each time a search is made, search engines check their index of crawled websites and provide the most relevant content to meet the searcher’s intent.
Let’s Start With Crawling
Every day an incredible amount of new content is added to the internet. Over 2 million blog posts are published each day, with countless more websites, service pages and products going online. To keep up with all of that content, Google sends out a team of robots – known as crawlers or spiders – powered by its crawling software, Googlebot.
These robots crawl the internet to find new and updated content every day, 365 days a year. This content could be in the form of a blog post, a web page, a video, an image or something else entirely. This process is done by moving from link to link between pages. Crawlers need a starting point, and from there they’ll travel between links to gather information about each page and what it is for.
If you’ve got a home page that links to 3 service pages, crawlers will travel to each of them. If your service pages aren’t linked to by ANY page, you make a crawler’s life difficult (we’ll explain how shortly).
Think of this process like going to a supermarket for the first time. To find all the items you’re after, you’ll need to explore each aisle. Crawlers are doing the same thing with millions of websites each day. Once the crawlers have found and understood new and updated content, they add it to Google’s index of stored information. This index consists of a HUGE database of URLs ready to be placed in front of people using Google.
Crawlers tell Google everything they need to know about your website, so that when a potential site visitor starts looking for help, Google will know if your site is a relevant match.
Do I Need to Know Much About Search Engine Indexes?
Not really.
SEO can be confusing enough without getting stuck in minor details. A search engine index is Google’s catalogue of stored information – gathered thanks to the hard work of crawlers. Your goal is to make your website discoverable by Google, create quality content that satisfies the intent of searches, and get yourself indexed.
You don’t have to understand HOW it works, just understand how to get yourself indexed. By the end of this chapter you’ll know exactly what it takes to put your business in Google’s index.
What About Search Engine Rankings?
Now we’re getting warmer.
When someone enters a keyword or keyphrase on Google, they’re looking for the most relevant result. In the blink of an eye, Google scours its index to find the most relevant results and delivers the top 10, with position #1 seen as the most relevant of all. You’ll recognise this as the first page of Google.
The order of these results based on relevance is known as ranking.
In previous years it was possible for SEOs to game the system and drive websites to the top of Google without direct relevance to the query. But as Google’s algorithm has become more advanced, it’s safe to assume the higher a site ranks, the more relevant it is to a query – and that’s where your business needs to be.
Is My Site Being Found by Crawlers?
Worried your website isn’t being found by crawlers? You can check how many of your pages are indexed in under 30 seconds with one simple step.
Head to Google and type in ‘site:yourdomain.com’ and hit enter.
This is known as an advanced search operator. Advanced search operators are special commands that modify traditional Google searches. By requesting specific information – in this case, results from your domain only – you can see which pages are indexed.
If a page doesn’t appear in the results, it’s likely not been indexed yet. Here’s a quick example:
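Something like this (yourdomain.com and the result count are placeholders for illustration):

```
site:yourdomain.com

About 40 results (0.32 seconds)
```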
You’ll be able to see how many pages are indexed by checking the number of results displayed, underneath the advanced search operator you’ve entered. In the above example, there are 40 pages indexed by Google. Keep in mind this isn’t an exact figure, but it will give you a rough guide of whether your site is in Google’s catalogue of URLs or if you need to take extra steps to get your website indexed.
If you’re after exact results, use the Index Coverage Report in Google Search Console – a free web service provided by Google to track the visibility of your site. The Index Coverage Report gives you precise insight into the index status of your website, and it’s free to sign up! (Did we mention free?)
Sign up to Google Search Console here.
My Website Isn’t Showing up on Google, What Should I Do?
Don’t panic. There are a number of reasons why your website is not showing up in the search results. These include:
- Your website is brand sparkling new and hasn’t been crawled yet
- Your website doesn’t have any links from external sites (so crawlers can’t find you)
- Your website’s navigation makes it hard for crawlers to find each page
- Your website has been penalised by Google for black-hat SEO and you’ve been removed from the SERPs
- Your website contains directives in the code blocking access to crawlers (robots.txt files)
Let’s start with robots.txt files and go from there.
Robots.txt Files Explained
Robots.txt files are located in the root directory of a website (for example, yourdomain.com/robots.txt). These files communicate with crawlers, suggesting which parts of your site to avoid crawling and how fast crawling should happen (we’ll sketch an example shortly).
Googlebot is always on the lookout for robots.txt files. For example:
- If there are NO robots.txt files, Googlebot will crawl your site
- If there ARE robots.txt files, Googlebot will typically listen to the suggestion and crawl the accessible pages of your site
- If there is an ERROR with your robots.txt file and Googlebot can’t determine if one exists or not, it will not crawl your site
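Here’s a minimal sketch of what a robots.txt file can look like. The paths are hypothetical, and note that Googlebot ignores the Crawl-delay hint (some other crawlers respect it):

```
User-agent: *
Disallow: /admin/       # ask all crawlers to skip the admin area
Disallow: /cart/        # skip low-value checkout pages

User-agent: bingbot
Crawl-delay: 10         # hint at crawl speed (ignored by Googlebot)

Sitemap: https://yourdomain.com/sitemap.xml
```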
Robots.txt Files Can Be Used to Optimise Your Crawl Budget
“Why would I want Google to avoid crawling every page on my website?”
With so many pages to get through each day, Googlebot has to divide and conquer. Crawl budget is the allocated number of pages Google will crawl on your website before leaving. Using robots.txt files can be a smart SEO tactic to optimise your site and make sure crawlers are indexing the most valuable pages – while ignoring low-quality, low-value and low-converting content.
Crawl budget is crucial for websites with thousands of URLs, but even smaller websites may benefit from blocking pages you don’t want to appear on Google.
It’s worth noting that not ALL crawlers follow the directives of robots.txt files. Spammy marketers looking to scrape email addresses and bombard you with unsolicited emails often build bots that ignore protocol and crawl your page looking for contact information. In the case of Google, you have more control.
What If I Have an eCommerce Store, How Does Google Know Which Page to Show?
eCommerce websites often have the same content available on multiple URLs. To keep things streamlined, this is usually done by adding parameters to individual URLs.
For example, if you’ve shopped online you may have searched for “shoes” and then filtered your results on the website of your choice, browsing by size, colour and brand. Each time you change your filter, the URL changes slightly.
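As a hypothetical example (the store domain and parameter names are made up for illustration):

```
https://yourstore.com/shoes                      ← the primary category page
https://yourstore.com/shoes?size=9               ← filtered by size
https://yourstore.com/shoes?size=9&colour=red    ← filtered by size and colour
```

Each of these URLs shows near-identical content, which is exactly why you’d rather Google focused on the primary page.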
Google’s algorithm will typically serve the most relevant page for someone’s search, but you can tip the scales in your favour using the URL Parameters feature in Google Search Console. This feature lets you tell Googlebot to avoid crawling certain URLs – in this case, those with parameters you don’t want to rank. It’s like asking Google to hide these results from searchers to avoid a mess of duplicates, when you’d rather have people land on a primary product page.
How to Make Sure Crawlers Can Find Your Pages
We’ve covered the ways to keep Google OFF your website, let’s dive into some strategies for ensuring Googlebot IS finding and indexing your site.
It’s a mistake to assume Google will find and index your pages just because you haven’t used a robots.txt file. There are other obstacles to crawlers you’ll need to be aware of. These include:
- ✘ Content hidden behind login forms: If site visitors have to log in, fill out forms or complete surveys to access gated content, Google won’t be able to crawl those protected pages.
- ✘ Content accessible only through search forms: Google loves crawling through simple navigation menus, but website search boxes are a dead end. Site visitors can still use a search bar to look for individual pages, but crawlers can’t.
- ✘ Content in the form of non-text media: Google’s algorithm is pretty sharp, but it has limitations. If you place important content inside non-text media (videos, images, GIFs), crawlers can’t read it. Adding text to the markup of your page is the best way to be found – see the quick sketch below.
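A minimal HTML sketch of the difference (the headings, file name and alt text are placeholders):

```html
<!-- Crawlers can read text in your markup -->
<h1>Kiwi Fruit Smoothie Recipes</h1>
<p>Our three favourite smoothie recipes, step by step.</p>

<!-- Text baked into an image is invisible to crawlers;
     describe the image in the markup instead -->
<img src="smoothie.jpg" alt="A kiwi fruit smoothie in a tall glass">
```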
Navigation Tips to Help Crawlers Move Through Your Site
You’ve optimised your website to be accessible by crawlers, but that’s only half the job done. Now that Googlebot can find you, you need to roll out the red carpet and make sure crawlers can move easily from one page to the next.
IMPORTANT: If you’ve got a page you want search engines to find, but that page isn’t linked to by ANY page on your site, it’s pretty much invisible.
It’s a common mistake for Kiwi business owners to set up their website for humans – creating easy-to-use navigation and search bars – but forget Google needs to move around the site too!
Common navigation menus that obstruct crawlers include:
- Mobile navigation showing different results to desktop navigation
- Non-HTML navigation (such as JavaScript-enabled navigation)
- Forgetting to link to a primary page via website navigation
- Showing unique navigation to certain site visitors (this may appear as cloaking and is a black-hat SEO no-no)
No one wants to have to work hard to navigate a website. But don’t make the mistake of making things so easy for humans you forget about Google. Creating an intuitive and easy-to-use website is a process known as information architecture. If users don’t have to think hard about moving from one page to the next, then you’ve done your job and your information architecture is solid.
Use Sitemaps to Help Crawlers Find and Index Your Content
Of all the SEO terms out there, a sitemap is the most straightforward.
Your sitemap is a list of URLs on your site that crawlers can follow to find and index content – like following a map to buried treasure. To give your website every advantage, you can create a sitemap and submit it through your Google Search Console account. This won’t replace the need for simple, intuitive site navigation (nor is it a guarantee you’ll be indexed) but there’s literally no downside to providing Google with an up-to-date sitemap – only potential.
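For reference, a sitemap is just a plain XML file. Here’s a minimal sketch following the sitemaps.org protocol (the URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/services</loc>
  </url>
</urlset>
```

Most CMS platforms and SEO plugins can generate one of these automatically, so you’ll rarely need to write it by hand.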
I’ve Seen a ‘Crawl Error’ in Google Search Console, What Does That Mean?
With so much happening behind the scenes of a website, it’s natural for mistakes to occur from time to time. In the process of crawling your site, a crawler may encounter errors. These are displayed in the ‘Crawl Errors’ report in your Google Search Console account, which detects both server errors and not found errors. Let’s look at a couple of typical examples of these error types.
Server Error - 5xx Codes
5xx codes occur when crawlers can’t access your content due to a server error. This means the URL is fine, but the host server failed to fulfil the request to access the page. All websites are hosted on servers, so if there’s something wrong at the host server, you’ll see a 5xx error code. The most common cause of this error is a server timeout, causing Googlebot to abandon the request and leaving site visitors with nothing to see.
Not Found Error - 4xx Codes
4xx codes occur when crawlers can’t access your content due to a client error. This means the requested URL contains bad syntax or can’t be fulfilled. One of the most common 4xx errors is the ‘404 – not found’ error. You’ve probably seen these yourself: as you’re browsing a website you click the link to a new page, only…there isn’t one. A 404 error tells you something has gone wrong, possibly because of a typo in the URL, a deleted page or a broken redirect.
Use 301 Redirects to Reduce Not Found Errors
As websites grow it’s natural for certain pages to move, or be deleted. To avoid serving up frustrating error codes, there’s a simple way to tell people and Google your page has moved – the 301 redirect.
For example, let’s say you moved a page from https://www.examplesite.com/rugby to https://examplesite.com/rugby-union. Without creating a bridge from the old page to the new, humans and crawlers will keep hitting a dead end in the form of a frustrating ‘404 – not found’ page. The 301 redirect tells Google a page has permanently moved.
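How you set one up depends on your hosting or CMS, but on an Apache server it can be a single line in your .htaccess file – a minimal sketch using the example URLs above:

```
# .htaccess – permanently redirect the old rugby page to the new one
Redirect 301 /rugby https://examplesite.com/rugby-union
```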
We’ve put together a simple table to show you what happens when you use a 301 redirect, and what benefits you miss out on by failing to use one:

| With a 301 redirect | Without a 301 redirect |
| --- | --- |
| 90-99% of link equity passes to the new page | The link equity your old page built up is lost |
| Visitors and crawlers arrive at the new page | Visitors and crawlers hit a ‘404 – not found’ dead end |
| Google knows the page has permanently moved | Google eventually drops the old URL from its index |
Even though the 301 redirect passes 90-99% of link juice from your old page to a new one, this doesn’t mean you’ll hold onto the rankings the old page has built up. If a page is ranking for a specific query and you 301 it to a URL with new content, the new content may not be as relevant. This can cause rankings for the new page to drop even if the old page was ranking well. The takeaway here? 301 redirects can help you avoid frustrating dead ends, but they’re extremely powerful – so use them wisely.
Google may also find it difficult to reach your page if it has to crawl through multiple redirects – these are known as redirect chains. For example if you 301 redirect from page A to page B, then later again to page C, it’s better to cut out the middle link and redirect page A to page C.
Got time for one more number? The 302 redirect is also an SEO option. This temporarily redirects a page. Think of the 302 as a detour while a certain road is undergoing maintenance. You use the 302 redirect to funnel traffic through a different page, but only until you get your site back the way it was. The 302 redirect is typically used in situations where link equity isn’t a priority.
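Sticking with the hypothetical .htaccess sketch above, a temporary redirect just swaps the status code (the maintenance URL is made up):

```
# .htaccess – temporary detour while the page is under maintenance
Redirect 302 /rugby https://examplesite.com/rugby-maintenance
```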
Now your site is optimised for crawlers, the next step is to make sure it can be indexed.
How Does Google Index My Website?
Googlebot has found your site and crawlers are jumping through each link as we speak. Job done, right?
Not so fast.
Just because your site has been crawled does NOT mean it will be indexed. Crawlers discover your pages, but the index is like a giant library where your pages will be stored so Google can take you off the shelf and place you in front of people when they’re looking for information, products or services.
Read on to discover how indexing works and how you can get your website into this crucial online database.
Is Googlebot Crawling My Page Properly?
You can see a snapshot of what Googlebot has crawled by viewing the cached version of your webpages. With millions of websites to crawl each day, Google crawls and caches websites at different frequencies. Larger sites are crawled more often than smaller, newer websites (don’t worry if you’re looking to optimise a newer site, we’ve got tips and tricks to help you rank so read on).
You can view the cached version of your website by clicking the drop-down arrow next to your URL in the SERPs and choosing ‘Cached’.
Does Google Ever Remove Pages from the Index?
Yes.
Entry to the all-important index is not a lifetime membership. Your website’s pages can be deindexed, for reasons that include:
- Your URL is returning a not found error (4xx) or server error (5xx). This could be done on purpose by deleting a page to remove it from the index, or accidentally by moving a page without adding a 301 redirect to tell Google.
- Your URL has a noindex meta tag. This tag tells Google to leave the page out of its index.
- Your URL has received a manual penalty from Google for violating their Webmaster Guidelines (always a risk when you engage in black-hat SEO).
- Your URL has added a form of gated entry, like a password for users, that stops crawlers from accessing the page.
If you’re feeling nervous one (or more) of your pages has been deindexed, you can use the free URL Inspection Tool to check the status of your pages.
Can I Tell Google How to Index My Site?
You sure can.
Meta directives, or meta tags, are instructions you can give Google about how you’d like your site indexed. For example, you can tell Google things like ‘don’t index this page’ or ‘don’t pass link equity from this page to another’. Passing on these instructions can be a little technical: you’ll need to add robots meta tags in the <head> of your HTML pages, or use the X-Robots-Tag in the HTTP header.
Finding an SEO consultant can help you avoid the headache of technical SEO, but here’s a little more info on telling Google what to do (or what not to do) with your website.
Understanding Robots Meta Tags
Robots meta tags are pieces of code placed within the <head> of your webpage’s HTML. They give crawlers instructions on how to crawl or index your pages. This gives you the power to exclude specific pages, and to target specific search engines or ALL search engines.
Scroll down for an overview of the most common robots meta tag values:
index: Tells Google to add your page to its index (using this meta tag isn’t necessary, as Google assumes all pages can be indexed by default)
noindex: Tells Google not to add your page to its index
Example: <meta name="robots" content="noindex">
follow: Tells Google it may follow links on the page to discover other pages and pass link equity to those pages (all pages are assumed to have the ‘follow’ meta tag by default)
nofollow: Tells Google NOT to follow links or pass link equity through to other pages
Example: <meta name="robots" content="nofollow">
noarchive: Tells Google not to show a cached copy of your page in the SERPs (this is useful if you run an eCommerce site with changing prices and want to avoid outdated pricing appearing on Google)
Example: <meta name="robots" content="noarchive">
These robots meta tag values can be used on their own or together to direct Google and help you control how your pages are seen, indexed, or ignored.
Understanding X-Robots-Tag
The X-Robots-Tag is used in the HTTP response header of your URL. This gives you more flexibility and functionality than robots meta tags if you want to block search engines on a wider scale. Using an X-Robots-Tag you can use regular expressions, block non-HTML files and apply noindex tags across your entire site.
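For example, here’s roughly what the header looks like in an HTTP response, followed by a sketch of how you might set it for every PDF on an Apache server (a common use case, since a PDF has no <head> to hold a meta tag):

```
# What the HTTP response looks like for a noindexed URL:
HTTP/1.1 200 OK
X-Robots-Tag: noindex, nofollow

# Apache config sketch – noindex all PDF files site-wide
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```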
If you’re like most Kiwi business owners you’re here to learn how to grow your rankings and traffic (and in turn revenue), so we won’t drag you too deep down the technical SEO rabbit hole. If you’d like to learn more about Robots Meta Tags or X-Robots-Tags, we’ll leave Google’s official guidelines here for you.
The takeaway for your business is you can direct the way Google crawls and indexes your site to put your most important pages in front of the right people. OK, enough with the overly technical SEO, let’s talk about how Google ranks websites (and how you can move towards the #1 spot).
How Does Google Pick the #1 Website for Each Query?
Google’s goal is to return the most relevant website for every search made by search engine users. And it’s Google’s ranking algorithm that determines the top websites for any given query.
This algorithm has gone through many changes over the years, from singular updates that significantly impacted rankings to thousands of small tweaks each year that constantly refine the ranking process. These changes were often made to tackle black-hat SEO tactics that were gaming the system through low-quality content, spammy backlinks or manipulated relevance. Today, the algorithm is a complex and constantly evolving system that uses AI technology and machine learning.
We could teach a course on the algorithm changes over time, but what’s important to your business isn’t how the algorithm has changed – but how to leverage it today.
We’ll keep this simple: Google’s algorithm finds the most relevant results and lists them in order from #1 to #10 – this is known as ranking.
These Are the Top 3 Ranking Factors for Your SEO
In Chapter #1 we mentioned there are roughly 200 ranking signals used by Google. Now we’re going to distil those down to the 3 MOST important.
While Google is famously quiet, it has let slip clues over the years. When it comes to your SEO, these are the 3 factors you need to get right:
- Backlinks
- Content
- RankBrain
What Does Google Want from the #1 Ranking Website?
In a word – relevance.
Google (and all search engines) want the same thing: to satisfy the search of someone making a query online. The websites occupying those precious top 10 spots on the first page are seen by Google to be the most relevant, and the most likely to provide useful answers to a searcher’s question.
Jump back 15 years and Google’s algorithm wasn’t as advanced as it is now. Black-hat SEOs were able to manipulate the system using sneaky tricks and spammy tactics that directly violated Google’s guidelines. One of the most common (and one of the most crucial to avoid today) was keyword stuffing.
What Is Keyword Stuffing?
In the next chapter you’ll learn how crucial keywords are if you want to successfully rank on Google. To give you the short version, by adding keywords to your content you can increase the odds of your website appearing when people search for those same keywords.
For example, when people search for ‘How to make a kiwi fruit smoothie’ you can optimise your page by using those same keywords and phrases.
The problem was, black-hat SEOs realised Google’s algorithm was using those same keywords as a metric of relevance, and they took advantage by creating hard-to-read text that served only to trigger the algorithm – and provided a horrible user experience.
For example, “If you want to know how to make a kiwi fruit smoothie you need our kiwi fruit smoothie tips and tricks. We know how to make the best kiwi fruit smoothies out of all the kiwi fruit smoothie makers. Sit back and read how to make amazing kiwi fruit smoothies from New Zealand’s best kiwi fruit smoothie makers.”
Not very fun to read, is it?
The keyword stuffing example above is EXACTLY why Google constantly updates and evolves its algorithm – to keep black-hat SEOs from creating a horrible user experience, and to promote websites that go above and beyond to create quality content that solves users’ queries without stuffing keywords in.
The Power of Links in SEO
While keyword stuffing no longer moves the needle on your SEO rankings, there is a proven SEO strategy that New Zealand’s highest ranking websites use – links.
There are two types of links your website needs if it’s to be optimised properly:
- Inbound links (or backlinks)
- Internal links
Backlinks are links from other websites that point to yours. For example, this is a backlink to Google’s Webmaster Guidelines. Clicking on that link would take you to Google’s page, as we’ve created a bridge from our page to theirs.
In contrast, internal links create bridges between pages on the same website. For example, this is an internal link to chapter #3: Keyword Research.
Let’s focus on backlinks…
Backlinks have played a major role in rankings because they are much harder to manipulate than keywords. This is also why links from high-quality websites are more valuable to your SEO results than links from smaller, low-quality websites. It’s easy to place the same keyword in your content over and over. It’s harder to convince another website to create a backlink to your website, and it’s harder still to convince a popular, high-quality website like The New Zealand Herald or Stuff.co.nz to create a backlink to your site.
Since the early days of Google’s ranking algorithm, links have helped Google see which sites are trustworthy, and that trust influences rankings. After all, if you have 100 quality websites linking to you, it’s a sign to Google that you’re high quality and trustworthy yourself.
Think of backlinks as your digital word-of-mouth. The more people are linking to your website, the more they’re talking about you. And because this system is so hard to manipulate, Google incorporated it as one of the most crucial aspects of ranking, with a core algorithm known as PageRank.
Named after one of Google’s founders, Larry Page, PageRank estimates the importance of a webpage based on the quality and quantity of the websites pointing to it, i.e. backlinks. This is why it’s more valuable to receive links from a smaller number of quality websites than from a large number of spammy websites.
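For the curious, the simplified formula from the original PageRank paper gives a feel for how this works (Google’s modern system layers many more signals on top, so treat this as background):

$$PR(A) = (1-d) + d\sum_{i=1}^{n}\frac{PR(T_i)}{C(T_i)}$$

Here $PR(A)$ is the PageRank of page A, $T_1 \dots T_n$ are the pages linking to A, $C(T_i)$ is the number of outbound links on page $T_i$, and $d$ is a damping factor (typically around 0.85). In plain English: a page’s score grows when important pages link to it, and each linking page splits its “vote” across all the links it contains – which is exactly why one link from a trusted site can outweigh dozens from spammy ones.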
If you only take one thing from all this, let it be this – the more natural (not bought) backlinks you earn from high-authority and trusted websites, the better your odds of higher rankings on Google.
What Is RankBrain (And How Does It Work)?
RankBrain is the machine learning component of Google’s core algorithm. Don’t let images of Terminator and Skynet come to mind when you hear about machine learning. This type of AI software improves over time, constantly learning and updating based on previous observations and data analysis. By allowing a constantly evolving algorithm to help decide rankings, it becomes harder (in theory) for black-hat SEOs to trick their way to the top.
More than a simple ranking software, RankBrain is constantly monitoring the behaviour of people using Google. For example, if RankBrain notices more clicks on a website in a lower position, it may adjust the results to reflect the relevance of the (previously) lower ranked site. This is another reason why spammy SEO tactics never work in the long run – with RankBrain always updating and monitoring user activity, you’re better off giving people what they want instead of trying to trick Google.
Now for the million-dollar question…how does RankBrain work?
Well, we’d tell you if we knew every detail. Not only is Google famously secretive about the inner workings of its algorithm (an understandable precaution to stop people abusing the system), but Google has admitted even its own engineers don’t fully know how RankBrain works!
Thankfully, you don’t need to know how RankBrain works, just that it works. Google will continue to use RankBrain to find and promote the most relevant content. As a business owner that’s GREAT NEWS. That means all you have to do is understand your searcher’s intent, create high-quality content that satisfies their intent, and deliver it in an engaging user experience.
If Backlinks Are Key, Does My Content Matter?
You’ve probably heard the expression by now, ‘Content is King’. This is as true as ever.
If you don’t have any amazing, eye-catching and irresistible content, why would anyone create backlinks to your website?
A backlink is a bridge, and bridges have to be worth walking over. No one will link to your website just because you want them to. You need quality content that provides value – this could be in the form of a blog, video, image or service page. This ‘Digital Estate Introduction to SEO’ is an example of content that attracts backlinks. When other websites write about SEO, keyword research or link building, they create backlinks to this resource when quoting statistics or looking for industry thought-leaders to make their own content stronger.
As well as driving backlinks, your rankings will be impacted by how relevant and trustworthy your content is. In other words, does your content satisfy the searcher’s intent and help them solve their problem?
Here at Digital Estate we receive a lot of questions from business owners asking how long their content should be to rank. Unfortunately, this is like asking how long a piece of string is. There are no formal benchmarks for:
- Content length
- Keyword usage (known as keyword density)
Much of the time you can perform some simple competitor analysis to find the benchmark for your target keywords. Start by punching in your target keywords, then go through the top 10 results on the first page.
How long is their content? How many words are used? How often do they use their keywords?
Checking the top ranking pages is a quick (and free) way to gauge the competition and see what you’ll need as a bare minimum if you want to rank. If you’re ever unsure of content length, the most direct route is to put your readers first. If their query requires an in-depth piece of content, that’s what you should give them. If their query could be answered in a short and sharp way, there’s no need to go overboard.
Does Google Care How Long People Visit My Website?
You’ll have realised by now that Google loves to keep a few cards close to its chest. When it comes to the people who visit your website, data studied by our Digital Estate experts (as well as other SEOs) shows that the way people interact with your website – known as engagement – is tied to your rankings, partly through correlation and partly through causation.
What do we mean by this? When we talk about your engagement we mean actions like:
- Clicks (from Google to your website)
- Time on page (how long someone spends on a page before leaving)
- Bounce rate (the percentage of sessions where users visited just one page – see the worked example below)
- Pogo sticking (clicking on an organic result on Google then quickly going back to the SERPs and clicking on another result for the same query)
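As a quick worked example of one of these metrics: if 1,000 people land on a page and 600 of them leave without visiting a second page, that page’s bounce rate is 600 ÷ 1,000 = 60%.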
There’s no shortage of SEOs who claim these engagement metrics impact rankings, and other SEOs who say it’s all unrelated. This comes down to a chicken and the egg scenario – is high engagement directly leading to higher rankings? Or do higher ranking sites naturally receive higher engagement?
Here’s What Google Has to Say About Engagement Metrics...
It wouldn’t be like Google to come out and make a definitive statement on its ranking algorithm, but that hasn’t stopped the search engine giant from speaking on the subject. Google has been on record admitting it DOES use click data to influence certain rankings.
Here’s former Google Chief of Search Quality, Udi Manber:
“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”
This isn’t definitive proof your engagement impacts your rankings, but thinking logically, you want people to spend more time on your website whether it’s good for your SEO or not. So while Google hasn’t officially declared your engagement metrics a ranking signal, it’s beneficial for you to create a website people enjoy spending time on, as that will pay dividends through more clicks, calls and conversions.
To put it simply, your backlinks and content help convince Google you’re worthy of high rankings, and high engagement is proof Google got it right.
A (Quick) History of Google’s Search Results
It’s easier to know where you’re going once you know where you’ve come from.
Before Google’s algorithm became the complex, AI-driven machine it is now, SEO was a simpler place. The phrase ‘10 blue links’ was used to describe the simple structure of SERPs before Google Ads, Local SEO and other SERP features were established.
Back then, spot #1 of those ten blue links was the holy grail.
Until Google started making changes. These occurred in the form of new SERP features. SERP features you’d recognise today include:
- Google Ads (paid traffic)
- Featured Snippets
- Local Pack (Google Map results)
- Knowledge Panel
- ‘People Also Ask’ boxes
- Sitelinks
Naturally, new SERP features pushed organic results down the page. For some queries, organic results could end up below a featured snippet, local pack and People Also Ask boxes – making it harder to stand out.
“Wait a minute, doesn’t Google want what’s best for users? Why would they bump results down the page?”
As frustrating as fewer organic spots are for many New Zealand businesses, this IS good for users. Some searches can be satisfied by a SERP feature alone, so while businesses get less real estate in the traditional organic results, users get a more relevant result.
Each type of query can trigger a different SERP feature – for example, a local search like ‘plumber near me’ can trigger the Local Pack, while a question like ‘how tall is the Sky Tower’ can trigger a Featured Snippet or Knowledge Panel.
Curious about search intent? We’ll deep-dive into intent (and how you can optimise your content to rank across ALL SERP features) in Chapter #3: Keyword Research
How Can I Use Local SEO to Rank for Local Searches?
We touched on Local SEO in Chapter #1: SEO for Beginners.
The ranking strategies for eCommerce SEO searches are different from those for local SEO, which in turn are different from international SEO.
When it comes to local SEO, Google has an index of businesses that it chooses to display for local searches. To appear in this index you’ll need a business that has a physical location (like a restaurant or retail store) or a business that travels to your customers (like a plumber or electrician).
Either of these will qualify your business for a Google My Business (GMB) listing.
I’ve Got My GMB, What Do I Do Next?
Now that you’ve claimed, verified and secured your GMB, appearing in local searches comes down to three main ranking factors.
- 1) Relevance
- 2) Distance
- 3) Prominence
Relevance
Relevance is how closely your business matches what a local searcher is looking for. If you have a store that sells Manuka honey you won’t be a relevant match to someone looking for a store that sells clothes. To help Google understand what your local business is about, you’ll need to fill out your business information within your GMB listing. The more information you add (and the more accurate you are), the easier it is for Google to learn about you and boost your chances of appearing for relevant local searches.
Takeaway: Fill out your GMB information carefully and accurately to boost your local SEO.
Distance
Distance is how far your business is from a searcher’s physical location. Unlike international SEO or eCommerce SEO that can place your website in front of customers across the North and South Island, local SEO is highly dependent on where people are searching from. Proximity has a HUGE effect on local SEO, so if your business operates in Auckland, you won’t appear as a local result when someone searches in Dunedin. It’s worth noting that someone in Dunedin could search for Auckland businesses by adding a different city to their query.
Takeaway: Once you’ve added your location in your GMB, Google will use geo-location to serve you to local searchers.
Prominence
Prominence is how well your business is known in the real world. Google wants to reward businesses that aren’t just optimising their online presence, but are known and loved by people in the real world too. This helps Google protect their local rankings against black-hat SEOs who try to manipulate a business’ presence online. To gauge your real world success Google takes into account your reviews and any citations on local directories.
So I Should Be Getting Reviews for My Business Then?
100% yes.
Having customers leave 5-star reviews on your GMB listing is a proven way to boost your local SEO – just as 1-star reviews can hurt it. We’ve said it before and we’ll say it again: great SEO is for people as much as it is for Google. So don’t think of your reviews as just a way to impress Google. When your GMB is full of 5-star reviews and customers singing your praises, this will help drive new customers. 88% of consumers trust online reviews as much as they trust recommendations from friends and family, so when your reviews pile up, your leads will too.
Digital Estate Tip: The majority of your customers won’t leave a review, even if they love your business. You can build reviews by sending a review link to your best customers and asking them to leave their feedback. It’s a quick, Google-safe way to build your reputation and boost your prominence.
Do I Need to Get Citations Too?
You sure do.
Think of your reviews and citations as the one-two punch for local SEO success.
Your citations, or business listings, are references to your business in local directories. Well-known examples of these directories are the Yellow Pages, Yelp and Neighbourly. It’s not a matter of throwing up your business name and waiting for Google to see you, though. When it comes to citations there is NOTHING (we’ll repeat this…NOTHING) more important than getting your NAP right.
To save you Googling that acronym, your NAP stands for your:
- Name
- Address
- Phone number
Google is looking for mentions of your website across the internet, and the consistency of your citations is key to your prominence. Google assumes a well known and trusted business would be listed using the same name, address and phone number anywhere it’s found online. That means a consistent NAP is a signal of trust, your prominence goes up, and your local SEO improves. On the flipside, if your NAP is constantly different, Google loses trust in your business, your prominence suffers, and your local SEO suffers.
Building citations can be time-consuming and challenging without an SEO consultant steering you clear of potholes and pitfalls, because the smallest inconsistencies can have major effects on your local rankings. For example, writing ‘3’ in one citation then spelling out ‘three’ in another makes Google nervous. Writing ‘Street’ in one citation then abbreviating it to ‘St’ in another makes Google nervous.
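Here’s a quick sketch of the difference, using a made-up business (the details are purely illustrative):

```
Consistent NAP (every directory matches):
  Kiwi Plumbing Ltd | 3 Example Street, Auckland 1010 | 09 123 4567

Inconsistent NAP (makes Google nervous):
  Kiwi Plumbing     | Three Example St, Auckland      | (09) 123-4567
```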
Whether you opt for an SEO expert or build citations yourself, keeping your NAP consistent has to be your #1 goal.
Breaking News: Now You Know How Google Crawls, Indexes and Ranks Your Website
You’ve learned what some SEOs have taken years to master – now it’s up to you to apply your SEO knowledge properly and professionally. Remember, the wrong optimisation strategies can block crawlers, have you deindexed, and keep your rankings from ever breaking into the all-important top 10.
If you’re ready to keep levelling up, the next chapter is waiting for you. We’ll show you the keyword research strategies that help Google AND users find your content. It’s the roadmap to higher rankings and more clicks…and it’s ready for you now.
Chapter #3: Keyword Research