Post by account_disabled on Feb 20, 2024 12:03:10 GMT 5.5
I want to show you what you can do for SEO with the help of R, here on the ZEO Blog. We have previously talked about extracting data from an API with R and using it on the Ads side. Today I will cover crawling a website, running XPath on the crawled pages, and what you can do when you only want to crawl the sitemap. You can find the sources of the packages I used and benefited from at the bottom of the article. Are you ready to make your work a little easier with R? With R, you can crawl a website for free and extract the data you want with XPath commands.
With the steps I will explain, you can also do many of the things that tools such as Screaming Frog SEO Spider or DeepCrawl do. Let's quickly move on to its use. You can find installation details in my article where I explain the installation and other settings of RStudio. First, we install our packages for Rcrawler; then we start the crawl by entering the site address. The duration of this process will vary depending on the site size and your computer.
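A minimal sketch of this first step, assuming the Rcrawler package from CRAN and zeo.org as the example site:

# Install the Rcrawler package (one-time) and load it
install.packages("Rcrawler")
library(Rcrawler)

# Start the crawl; Rcrawler writes its results into the global
# environment, most importantly INDEX (one row per crawled URL)
Rcrawler(Website = "https://zeo.org/")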
With Rcrawler, I crawled 150 URLs from ZEO's website, both as an example and to quickly add images to this article. Let's display the crawl results with the following code: View(INDEX). All pages on the site come back with their status codes and other details.

Let's go a little deeper into the data and extract parts such as the title and H1 using XPath. You can write any XPath expression to pull out the data you want (such as the description or H2). I also recommend taking a look at the presentation by Mert from our team, where he explains XPath in detail.

When the crawl is completed, it shows me the data directly. The data arrived, but I was not happy with the raw output; I needed to make it a little more readable. For this, you can create the data frame shown below. My data is now easier to understand: in the X7 column, I can easily see whether the page type is a blog page or a tool page.
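Here is a rough sketch of that whole flow, crawl plus XPath extraction plus the data frame; the XPath patterns and the data frame step are illustrative assumptions, not the exact code from the original post:

library(Rcrawler)

# Crawl again, this time extracting title and H1 with XPath;
# add further patterns (description, H2, ...) in the same way
Rcrawler(Website = "https://zeo.org/",
         ExtractXpathPat = c("//title", "//h1"),
         PatternsNames = c("title", "h1"))

# INDEX holds the crawl results (URL, status code, level, etc.)
View(INDEX)

# DATA holds the XPath matches per page; bind them into a
# data frame so the output is easier to read
df <- data.frame(do.call(rbind, DATA))
View(df)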