Technical SEO: How Search Engines Rank Web Pages

Technical SEO is the arm of search engine optimization that focuses on how a website is built. Technical optimizations range from improving page speed to better usability. Faster, more user-friendly websites tend to rank better in Google because Google itself strives to improve the search experience for its customer, the searcher. An SEO audit is usually performed at the beginning of an SEO campaign, but an audit can be used whenever necessary to determine why a specific page is not winning in search for a keyword.

What are “Organic” Search Results?

In the image below, the links outlined in green are paid ads. Companies pay Google money in a sort of “digital auction” to show up here for specific searches. The links outlined in blue are “organic” or natural search results.


How does Google rank natural search results?

Without getting too crazy or pretending to know everything, let’s assume that Google is just a large Excel spreadsheet of the internet. Organic search is how Google filters and sorts that spreadsheet, with what’s left being the search results you see.

  • Using the Excel analogy and the image below, Google filters out rows to only show web pages that are relevant to the user’s search
  • From there, Google sorts the relevant rows (URLs) based on ~200 columns of data.
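Sticking with the spreadsheet analogy, the filter-then-sort idea can be sketched in a few lines of Python. The URLs, factor columns and weights below are all invented for illustration; Google's real signals and weights are unknown:

```python
# Toy model of the "spreadsheet" analogy. The URLs, factor columns and
# weights are all invented; Google's real signals and weights are unknown.
index = [
    {"url": "/blue-widgets",  "relevant": True,  "links": 120, "speed": 0.90},
    {"url": "/red-sprockets", "relevant": False, "links": 300, "speed": 0.80},
    {"url": "/widget-guide",  "relevant": True,  "links": 45,  "speed": 0.95},
]

# Step 1: filter out rows that aren't relevant to the query.
candidates = [row for row in index if row["relevant"]]

# Step 2: sort what's left by a weighted blend of the factor columns.
def score(row):
    return 0.7 * row["links"] + 0.3 * 100 * row["speed"]

results = sorted(candidates, key=score, reverse=True)
# results[0] is the URL this toy "engine" would show first
```

Note that the irrelevant row never even enters the sort, no matter how strong its other columns are; relevance filters first, then the remaining factors decide the order.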

Now imagine that those “ranking factor” columns aren’t static; they change in weight and in order, and may even disappear from the equation completely. You may have already seen this in action. The search engine results page (SERP) changes based on what Google knows about you, your browsing history, the type of device you’re using, how big your device screen is and whether or not you’re logged into Google at the time.

Because of all of these things, technical SEO goes much deeper than just improving a page’s load time or creating sitemaps. At its core, tech SEO is about communication. Technical SEO helps Google understand what we want our pages to rank for in search. Properly optimizing a website can help Google find, index, understand and ultimately show our content to people who need it.

Anil Dash recently published an article on Medium that got me thinking. Are we doing the right thing by trying to reverse-engineer Google, Twitter, Facebook and other tech giants? Or should they be catering to our vision of the web instead?

What Does a Technical SEO Do?

At a glance, here are a few things technical SEOs do to help a website’s organic visibility:

  • We provide Google instructions about which URLs are important and which ones aren’t
  • We help Google understand what each URL is about in terms it can understand
  • We help prevent content duplication
  • We help Google avoid getting caught in loops as it browses our website
  • We help Google learn complex or ambiguous ideas using a vocabulary it can understand
  • We help to ensure that every user on any device has a great experience

Technical SEO Site Audit Checklist

There are hundreds of technical SEO tools and checklists available, but none are better than Annie Cushing’s. Her SEO checklist is in-depth yet easy to understand. I recommend you purchase her self-guided SEO audit template, but her Google Doc checklist should be in your favorites right now.

What is The Algorithm, Anyway?

Behind Google’s powerful search engine is an algorithm that stems from a paper written at Stanford University by Google’s founders, Sergey Brin and Larry Page, called The Anatomy of a Large-Scale Hypertextual Web Search Engine. Before it became Google, Brin and Page’s search engine was called BackRub.

Why “BackRub”? The name made sense because of a core component of Google’s algorithm: hyperlinks. Through a system known as PageRank (named for Larry Page, not for web pages), hyperlinks provided Google with a way to measure how popular a web page or URL was in relation to others by counting the number and strength of each link. This system of web pages “voting” for each other is believed to be that special something that made Google stand out from its search competitors.
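The voting idea can be sketched in a few lines. This is a toy version with a made-up three-page link graph and the commonly cited 0.85 damping factor, not Google's production algorithm:

```python
# A toy PageRank sketch. The three-page link graph is made up, and real
# PageRank includes many refinements, but the core "voting" idea is here.
links = {
    "A": ["B", "C"],  # page A links to (votes for) B and C
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85  # damping factor from the original paper

# Start every page with an equal share, then iterate until scores settle.
ranks = {page: 1 / len(links) for page in links}
for _ in range(50):
    ranks = {
        page: (1 - damping) / len(links)
        + damping * sum(ranks[src] / len(outs)
                        for src, outs in links.items() if page in outs)
        for page in links
    }

# C collects votes from both A and B, so it ends up the most "popular" page.
```

Each page passes its own score along to the pages it links to, which is why a vote from a popular page counts for more than a vote from an obscure one.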

Paula Allen and Virginia Nussey of Bruce Clay, Inc. started a conversation about the importance of technical SEO. You can read the full conversation here but the short version is that Virginia believes content should be the focus in 2018. Content is a critical part of the equation but here’s my full response:

I first learned SEO from Bruce Clay over 10 years ago, and much of what I learned is still extremely valuable today. So, thank you, and I can vouch for the efficacy of siloing in particular.

Now to the Technical SEO question. It shouldn’t be about content versus tech. Each has its own audience and different way to communicate.

Content answers the questions people are asking. You’re communicating to humans with it, on their level.

Technical SEO is like content, but for search engines. You are communicating with Google on its own terms to help it understand which content is important for searchers, why it’s important, who it’s for, for what purpose and where to find it.

We had it backwards for many years: we wrote content for search engines à la keyword stuffing. Today, we (hopefully) write original, magnetic and dynamic content for people. But without technical SEO, that content may never get in front of the folks who will read it, share it and link to it.

Will you get to number one with technical SEO alone? Probably not. But will your amazing piece on “How to Find the Perfect Gadget for Grandpa on Black Friday” be seen if Google can’t get to it, understand it or consider it an authority? Also probably not.

Technical SEO also goes beyond communication and beyond SEO. Technical SEO usually involves improving a website’s UX and speed. This can improve paid search efficiencies by elevating Landing Page Experience Scores. Optimal on-page technical SEO can make pages more relevant and boost Quality Score.

With Voice Search, VR, AR and whatever else is going to come out, Technical SEO isn’t just going to be more relevant, it’s going to be much harder.

SEO is an investment all around. Not putting some effort into the technical side will likely hurt your return on the content side.


Modern SEO

Fast-forward to 2017 and a lot has changed. Google is no longer just a search engine and users consume the web more on mobile devices than on desktops. Google’s algorithm continues to get smarter, relying on much more than just links to determine who wins.

I believe there are 5 primary categories of technical SEO.

  1. Discovery: Can Google find your content?
  2. Indexation: Will Google show the right content?
  3. Intent: What are consumers seeing in search?
  4. Relevance: Does Google understand your content?
  5. Authority: Is your website an expert on this topic?


Google uses small applications called spiders to crawl the Web. Think of each spider or bot as a program that clicks on every hyperlink of every website it can find. As they click and browse, they collect information about those web pages and transfer it back to Google to store in its database.
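That click-every-link behavior is essentially a graph traversal. Here is a minimal sketch over a hypothetical link graph (no real HTTP requests), which also shows how a spider avoids getting caught in loops as it browses:

```python
from collections import deque

# A minimal crawl sketch. The link graph is hypothetical; a real spider
# would make HTTP requests and parse hyperlinks out of each page's HTML.
link_graph = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/"],
    "/blog/post-1": ["/blog"],
}

def crawl(start):
    seen = {start}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        # A real spider would fetch and store information about `url` here.
        for next_url in link_graph.get(url, []):
            if next_url not in seen:  # the seen set keeps the bot out of loops
                seen.add(next_url)
                queue.append(next_url)
    return seen

discovered = crawl("/")  # every page reachable by links from the homepage
```

Notice that even though the pages link back and forth to each other, the `seen` set guarantees each URL is visited exactly once.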

Thinking about Google search as a database can make it easier to see how it works. Within a database, or even an Excel sheet, there are rows. Each row is unique, made so with what’s called a unique identifier. No two rows should be identical, but sometimes they are. In the case of Google’s database of the Web, sometimes two or more rows will be exactly the same except for their URLs.


Robots.txt

SEO techs may refer to this document as the “robots dot t-x-t” file. It should be found directly after the root domain name, a location also called the root directory, and the file itself is a simple text file with big SEO implications.

The robots.txt file has 2 primary jobs:

  1. Play traffic cop: it tells search engines and other crawlers where they should and should not go on a website
  2. Provide the first link to the XML sitemap

Sample robots.txt:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

The user agent is another way of saying spider, crawler or any other application attempting to crawl a website. The asterisk is a catch-all, but you can also target specific spiders, such as Googlebot.

Allow and Disallow tell spiders where they should and should not go, respectively. Using the robots.txt file to block certain URLs can assist in reducing potential duplicate content on a site.
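A fuller, hypothetical example shows both jobs at once: rules targeted at one specific spider, a catch-all for everyone else, and the link to the XML sitemap. The paths and domain below are placeholders:

```
# Hypothetical example: rules for one specific spider, a catch-all,
# and a pointer to the XML sitemap
User-agent: Googlebot
Disallow: /internal-search/

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.example.com/sitemap.xml
```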

Learn more about robots.txt

XML Sitemap

The term “sitemap” often causes confusion when dealing with SEO. Many websites offer a sitemap to their human visitors, which is generally a page that looks like a table of contents with links to important pages on the site.

An XML sitemap is different. It’s an XML file, not a web page and it’s built for crawlers. Like the robots.txt file, the XML sitemap is simple in design yet important in helping crawlers get around. It can get complex if needed.

Sample XML Sitemap Structure:
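Here is a minimal sitemap file (the domain and date are placeholders). Each url entry pairs a location with the last time it was changed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2017-11-01</lastmod>
  </url>
</urlset>
```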

XML is a markup language that uses tag pairs, much like HTML. In the example above, the object is a URL, and inside it are its location and the last time it was changed. More options, such as images and priority, are also available, as is the ability to “nest” XML sitemaps within larger sitemap files. This is more useful for very large sites with thousands (or millions) of URLs, where one document would get too large.


Google uses a web page’s URL as a unique identifier in its database of the Web. Because of this, small changes to a web page’s URL can confuse Google. Two web pages may look identical from our perspective, yet the URL, the web address in the bar of your browser, may show two different things.

In SEO, as with coding, details are extremely important. Software can’t yet guess what we mean or make too many assumptions about the code it sees.

Without technical SEO, Google’s database might be filled with multiple URLs that are all trying to rank for the same thing. The problem comes into play when Google needs to decide which of those URLs should actually be shown to searchers.
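A quick illustration of URLs as unique identifiers: the three hypothetical addresses below may all serve the same content, but to software they are three distinct keys.

```python
# Three hypothetical addresses that may serve identical content. To
# software, each string is unique, so each could become its own "row".
urls = [
    "https://www.example.com/widgets",
    "https://www.example.com/widgets/",
    "https://www.example.com/widgets?utm_source=newsletter",
]

distinct_keys = len(set(urls))  # 3 distinct keys, not 1
```

A trailing slash or a tracking parameter is invisible to most visitors, but to a database it creates a brand-new row.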


The search landscape, or search engine results pages (SERPs), has changed a lot over the years. Google’s search listings have probably changed the most dramatically as Google started showing not just links but actual answers to questions. Local listings, photos, videos, product ads, reviews, recipes, Wikipedia entries… the list goes on and on.

Every keyword seems to result in a different type of SERP. But why?

Intent is the answer. Google is no longer just trying to match a keyword entered in search to a keyword on a page in its database. It’s trying to figure out what you really mean by typing in a query.

Google calls these “Moments”. They are the four main pillars of a searcher’s intent when they do a Google search and each one will shift the search landscape accordingly.

Google Moments

I Want to Know Moments

This is your standard search. The consumer is looking for information about a specific subject, even if they don’t actually type “what is…?” Depending on the search, the SERP is likely to be filled with Knowledge Graph results, Wikipedia links, and other authoritative websites.

Also look out for “people also asked” boxes. This is an indication that Google needs a little more info about your needs to figure out what type of content to serve.

I Want to Go Moments

A few years ago an internal training document by Google got leaked online. The document was for a team of Googlers called Quality Raters, humans who helped train the algorithm by assessing web pages. I Want to Go moments described in those guidelines were about branded keywords, such as Nike or Tylenol. In the publicly released guides, Google describes I Want to Go moments as queries related to local businesses and travel.

I feel both definitions are still accurate and each changes the SERP in its own way. It also means that traditionally unbranded words can eventually become branded.

I Want to Do Moments

This moment is pretty straightforward in that Google will assume that you’re looking for directions on how to accomplish a task. The SERPs for this moment are likely to be easy-to-read lists and how-to videos from YouTube.

I Want to Buy Moments

Certain keywords alone will trigger SERPs with products and ads related to products without the word “buy” being in the query. Query-rewriting is that fancy thing Google does when it makes an assumption about what you’re really looking for based on a variety of factors. The most obvious SERP feature for this moment is PLAs, or Product Listing Ads.

Amazon is Google’s primary competitor here, so expect to see Amazon results on page one, if not at number one just below the ads.


Once you have an idea of who your audience is you can create content that answers their questions in the way they expect to see it, whether it be an educational blog post, a video or a product page.

While the content itself should be created by the SEO content strategy team, the tech team should assist in the final output. A web page is filled with opportunities that will ultimately become your page’s “love letter to Google”. As I stated earlier, the details matter, especially since Google doesn’t know how to read as a human might. The first step to winning at SEO is being able to tell Google what you want to win for, hands down.

But Google does read. It looks for clues about what a URL wants to rank for in several places on the page, and it’s debatable as to which elements are more heavily weighted (and which aren’t counted at all):

  • The page title (this is the text that appears in your browser window)
  • URL (the address bar)
  • The content on the page
  • The meta description
  • Schema or other structured data (Don’t worry about this for now. We’ll cover it in another post.)
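As a hypothetical illustration, here is where two of those clues live in a page’s markup (the title and description text are made up):

```html
<head>
  <!-- The page title: shown in the browser window and usually as the blue link in search results -->
  <title>How to Find the Perfect Gadget for Grandpa | Example Store</title>
  <!-- The meta description: often used as the snippet under that link -->
  <meta name="description" content="A holiday gift guide for hard-to-shop-for grandfathers.">
</head>
```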


In my opinion, links are what helped Google stand out from other search engines. As mentioned at the beginning of this post, Google views links as votes between URLs. Links, because of their power in SEO, have spawned legions of spammers, tools, and strategies – all designed to get more.

A website with more links does tend to do better in search results, but Google’s understanding of links has evolved over the years. The days of creating low-quality, spammy and unnatural links are just about gone. “Easy”, “quick” or “mass-produced” link tactics are all dangerous and should be avoided, though many agencies still attempt to game the system for their clients and themselves.

However, links do matter, so gaining ground in Google still means your website needs to be more “popular” than others for highly-competitive searches. Instead of investing time, money and energy into manually building new links every time, invest in creating something that is worth linking to.

It sounds like the tired “create great content” spiel, but it gets results. Results that don’t rub Google the wrong way and ultimately do more for your brand than a manually built link could ever do.

So what defines “great”? That depends on your audience. There is no one-size-fits-all content type that will do well for everyone. Uncover what your audience wants and needs, and what’s missing, then build something they will share. And if it’s worth building, it’s also worth promoting. Promote your great content with paid search and media. Brands that do this best are often rewarded with new links without ever sending an outreach email.

For an idea of how our team at PHM looks at websites, here’s a live SEO audit we did for SEMRush.
