1. What Elements Should Text Links Consist Of To Ensure The Best Possible SEO Performance?
- Anchor text, a-tag with href-attribute
- Nofollow attribute, anchor text
- a-tag with href-attribute, noindex attribute
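A plain, crawlable text link combines exactly the elements named in the first option: an a-tag with an href-attribute plus descriptive anchor text. A minimal sketch, using a placeholder URL and anchor text:

```html
<!-- a crawlable text link: a-tag with href-attribute plus descriptive anchor text -->
<a href="https://example.com/running-shoes/">running shoes</a>
```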
- The number of links pointing at a certain page
- The value a hyperlink passes to a particular webpage
- Optimized website link hierarchy
- Multiple links to a single URL
- Using link hubs
- Meta robots nofollow
- Interlink relevant content with each other
- Internal, link-level rel-nofollow
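The last two options describe two different scopes of nofollow. A minimal sketch, with placeholder URLs, contrasting the page-level meta robots nofollow with an internal, link-level rel-nofollow:

```html
<!-- page-level: asks crawlers not to follow any link on this page -->
<meta name="robots" content="nofollow">

<!-- link-level: applies only to this single internal link -->
<a href="https://example.com/login/" rel="nofollow">Log in</a>
```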
- XML sitemaps must only contain URLs that return an HTTP 200 response
- It is recommended to use gzip compression and UTF-8 encoding
- There can be only one XML sitemap per website
- XML sitemaps should usually be used when a website is very extensive
- It is recommended to have URLs that return non-200 status codes within XML sitemaps
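For reference, a minimal UTF-8 encoded XML sitemap with placeholder URLs; every listed URL should return an HTTP 200 response, and large files are typically served gzip-compressed (e.g. as sitemap.xml.gz):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
  </url>
  <url>
    <loc>https://example.com/category/product/</loc>
  </url>
</urlset>
```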
- Duplicate pages/content
- A well-defined hierarchy of the pages
- Content freshness
- It can be downloaded to your local computer
- It can’t audit desktop and mobile versions of a website separately
- It provides you with a list of issues and ways to fix them
- It allows you to include or exclude certain parts of a website from audit
- Reverse DNS lookup
- User Agent Overrider
- User Agent Switcher
- Less often than ones without noindex
- Never
- Occasionally
- False
- True
- It should point to URLs that serve HTTP 200 status codes
- It is useful to create canonical tag chaining
- Each URL can have several rel-canonical directives
- Pages linked by a canonical tag should have identical or at least very similar content
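A minimal sketch of a rel-canonical directive, using placeholder URLs: a parameterised URL points to its preferred version, which should itself return an HTTP 200 status and not canonicalise further (no chaining):

```html
<!-- placed in the <head> of https://example.com/shoes/?sort=price -->
<link rel="canonical" href="https://example.com/shoes/"/>
```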
- Google prefers them over other pages because they are dynamically generated and thus very fresh
- They do not pass any link juice to other pages
- Those pages are dynamic and thus can create a bad UX for the searcher
- True
- False
- It is important to have all sub-pages of a category indexed
- Proper pagination is required for the overall good performance of a domain in search results
- rel=next and rel=prev attributes tell Google which page in the chain comes next or before the current one
- Pagination is extremely important in e-commerce and editorial websites
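A minimal sketch of the rel=next/rel=prev annotations mentioned above, as they would appear on page 2 of a paginated category (placeholder URLs):

```html
<!-- in the <head> of https://example.com/category/?page=2 -->
<link rel="prev" href="https://example.com/category/?page=1"/>
<link rel="next" href="https://example.com/category/?page=3"/>
```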
- Using the X-robots-tag and the noindex attribute
- Introducing hreflang using X-Robots headers
- Using the X-robots rel=canonical header
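The X-Robots-Tag travels in the HTTP response header rather than in the HTML, which is why it also works for non-HTML resources such as PDFs. A sketch of a response carrying a noindex directive (headers abbreviated):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex
```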
- Server-side errors
- Client-side errors
- Redirects
- The rankings will be fully transferred to the new URL
- Link equity will be passed to the new URL
- To avoid losing important positions without any replacement
- The new URL won’t have any redirect chains
- When there is another page to replace the deleted URL
- If the page can be restored in the near future
- When the page existed and then was intentionally removed, and will never be back
- When you want to delete the page from the index as quickly as possible and are sure it won’t ever be back
- Using the 503 status code with the Retry-After header
- Using the HTTP status code 200
- Using the noindex directive in your robots.txt file
- Using the 500 status code with the Retry-After header
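For planned maintenance, the commonly recommended pattern is a 503 response with a Retry-After header telling crawlers when to come back. A sketch of such a response (the one-hour value is only an example):

```http
HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html; charset=utf-8
```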
- The method of the request (usually GET/POST)
- The request URL
- The server IP/hostname
- Passwords
- The time spent on a URL
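For orientation, a single made-up access-log entry in the widely used combined log format: it records the client IP, the timestamp of the request, the method, the requested URL, the status code and the user agent, but neither passwords nor the time a visitor spent on the URL:

```text
66.249.66.1 - - [10/Feb/2024:13:55:36 +0000] "GET /category/page-2/ HTTP/1.1" 200 5316 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
```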
- True
- False
- 2xx range
- 3xx range
- 5xx range
- 4xx range
- It is not a good idea to combine different data sources for deep analysis; it’s much better to concentrate on just one data source, e.g. logfiles
- Combining data from logfiles and webcrawls helps compare simulated and real crawler behavior
- If you overlay your sitemap with your logfiles, you may see a lack of internal links, which shows that the site architecture is not working properly
- They have strong default geo-targeting features, e.g. .fr for France
- They may be unavailable in different regions/markets
- They need to be registered within the local market, which can make it expensive
- 301 and 303
- 302 and 301
- 302 and 303
- <link rel="alternate" href="http://example.com/" hreflang="x-default"/>
- <link rel="alternate" href="http://example.com/en" hreflang="uk"/>
- <link rel="alternate" href="http://example.com/en" hreflang="en-au"/>
- True
- False
- Avoid using modern formats like WebP
- Asynchronous requests
- Increase the number of CSS files per URL
- Proper compression & metadata removal for images
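Two of the techniques above in markup form, sketched with placeholder file names: a script requested asynchronously so it does not block rendering, and a compressed WebP image with a fallback for browsers that do not support the format:

```html
<!-- non-critical JavaScript requested asynchronously, so it does not block rendering -->
<script src="/js/analytics.js" async></script>

<!-- compressed WebP with a JPEG fallback -->
<picture>
  <source srcset="/img/hero.webp" type="image/webp">
  <img src="/img/hero.jpg" alt="Product photo" width="800" height="450">
</picture>
```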
- True
- False
- HTTP
- HTTPS
- FTP
- The non-critical CSS is required when the site starts to render
- There is an initial view (which is critical) and below-the-fold-content
- CRP on mobile is bigger than on a desktop
- The “Critical” tool on GitHub helps to build CSS for CRP optimisation
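A sketch of the critical rendering path pattern referred to above, with placeholder styles and file names: the CSS needed for the initial, above-the-fold view is inlined, while the non-critical stylesheet is loaded without blocking the first render (tools like Critical on GitHub can generate the inlined part):

```html
<head>
  <style>
    /* critical, above-the-fold CSS inlined so the initial view renders immediately */
    body { margin: 0; font-family: sans-serif; }
    .hero { min-height: 60vh; }
  </style>
  <!-- non-critical CSS loaded without blocking the first render -->
  <link rel="preload" href="/css/main.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```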
- Invalid mark-up still works, so there's no need to monitor it
- Even if GSC says that your mark-up is not valid, Google will still consider it
- Changes in HTML can break the mark-up, so monitoring is needed
- Using AMP is the only way to get into the Google News carousel/box
- AMP implementation is easy; there's no need to rewrite HTML and build a new CSS
- CSS files do not need to be inlined as non-blocking compared to a regular version
- A regular website can never be as fast as an AMP version
- rel=amphtml tags
- hreflang tags
- Canonical tags
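For context, the connection between a regular page and its AMP version is declared from both sides, sketched here with placeholder URLs:

```html
<!-- on the regular (canonical) page -->
<link rel="amphtml" href="https://example.com/article/amp/">

<!-- on the AMP page, pointing back to the regular version -->
<link rel="canonical" href="https://example.com/article/">
```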
- Responsive web design
- Independent/standalone mobile site
- Dynamic serving
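Each of the three mobile configurations announces itself differently; a sketch with placeholder URLs: responsive design relies on a viewport meta tag, a standalone mobile site is tied to its desktop counterpart with alternate/canonical annotations, and dynamic serving signals device-dependent HTML via the Vary HTTP header:

```html
<!-- responsive web design: one URL, one HTML, adapted via the viewport -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- standalone mobile site: on the desktop page ... -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/page/">
<!-- ... and on the mobile page -->
<link rel="canonical" href="https://www.example.com/page/">

<!-- dynamic serving is signalled in the HTTP response instead: Vary: User-Agent -->
```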
- True
- False