Had the opportunity to attend a fantastic SEO Q&A session today! The discussion was led by SEO industry experts John, Gary, and Martin.
Key takeaways:
- To block syndicated content from appearing in Google Discover, use a noindex robots meta tag.
- Having two domains with different TLDs targeting the same keywords can confuse users and may be considered search result manipulation.
- Warnings about JavaScript libraries with known security vulnerabilities don't affect ranking, but they shouldn't be ignored.
- To prevent text from appearing in the search snippet, use the data-nosnippet HTML attribute. If you want to keep a specific section of a page out of the index, you might have to serve it in an iframe or as JavaScript-loaded content.
- Sitemaps do not guarantee indexing; that depends on the quality and popularity of your content.
- Google may have specific requirements for certain structured data types and attributes before using them for product features.
- Adding HSTS headers does not affect Search; Google uses canonicalization to pick the most appropriate version of a page.
- Google compares the current and previous versions of your XML sitemap to determine what's new or removed (see the sitemap diff sketch after this list).
- An HTML sitemap is user-oriented, while an XML sitemap is for crawlers.
- Structured data with parsing errors can't be extracted, so it's ignored.
- Including numbers in URLs doesn't impact SEO.
- URL blocking is often caused by rules in the robots.txt file; server logs and the reports in Search Console can help identify the problem (see the robots.txt check sketched after this list).
- Contrary to some rumors, there is no Google SEO certification.
- URLs with .aspx endings can be crawled and indexed by Googlebot.
- Fake Googlebots can be identified by verifying requests against the IP ranges that Googlebot actually uses (see the verification sketch after this list).
- The NOODP tag used to tell search engines to ignore the description from the DMOZ Open Directory Project, but it's no longer effective.
- For a video thumbnail to show in the SERPs, the video doesn't have to be the first element on the page, but it should be the main content.
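
A few of these points lend themselves to quick illustrations. On the sitemap comparison: the takeaway is only that Google diffs the URL set between submissions, but you can run the same kind of comparison yourself. Below is a minimal Python sketch, assuming two local files named sitemap_previous.xml and sitemap_current.xml (hypothetical names).

```python
import xml.etree.ElementTree as ET

# Standard namespace from the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(path: str) -> set[str]:
    """Return the set of <loc> URLs listed in a sitemap file."""
    tree = ET.parse(path)
    return {
        loc.text.strip()
        for loc in tree.getroot().iter(f"{SITEMAP_NS}loc")
        if loc.text
    }

# Hypothetical file names for the previously submitted and the current sitemap.
old_urls = sitemap_urls("sitemap_previous.xml")
new_urls = sitemap_urls("sitemap_current.xml")

print("Added:  ", sorted(new_urls - old_urls))
print("Removed:", sorted(old_urls - new_urls))
```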
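
On the URL-blocking point: before digging through server logs or Search Console, you can check directly whether a given URL is disallowed for Googlebot by your robots.txt. A minimal sketch using Python's standard urllib.robotparser, with example.com as a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

# Placeholder site; point this at your own robots.txt.
ROBOTS_URL = "https://www.example.com/robots.txt"

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the live robots.txt

for url in ("https://www.example.com/", "https://www.example.com/private/report.aspx"):
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'} for Googlebot")
```

Note that robotparser only understands basic Allow/Disallow rules, so treat it as a first check rather than a full replica of Google's own parser.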
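
And on spotting fake Googlebots: Google publishes the IP ranges its crawlers use as a JSON file, so a request claiming to be Googlebot can be checked against them. The sketch below assumes the googlebot.json location linked from Google's "Verifying Googlebot" documentation (confirm the current URL there) and uses only the Python standard library.

```python
import ipaddress
import json
import urllib.request

# Assumed location of the published Googlebot ranges; confirm against
# Google's "Verifying Googlebot and other Google crawlers" documentation.
GOOGLEBOT_RANGES_URL = "https://developers.google.com/static/search/apis/ipranges/googlebot.json"

def load_googlebot_networks(url: str = GOOGLEBOT_RANGES_URL):
    """Download the published ranges and return them as ip_network objects."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    networks = []
    for prefix in data.get("prefixes", []):
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr:
            networks.append(ipaddress.ip_network(cidr))
    return networks

def is_real_googlebot(ip: str, networks) -> bool:
    """True if the client IP falls inside any published Googlebot range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

if __name__ == "__main__":
    nets = load_googlebot_networks()
    print(is_real_googlebot("66.249.66.1", nets))   # typical Googlebot crawl range
    print(is_real_googlebot("203.0.113.50", nets))  # documentation example IP, not Googlebot
```

An equivalent, also documented, check is a reverse DNS lookup confirming that the requesting host resolves back to googlebot.com or google.com.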