
Extracting Values From Multiple HTML Files

I am new to web scraping. I have 3000+ html/htm files, and I need to extract the 'tr' values from them and transform them into a dataframe for further analysis.

Solution 1:

lapply will output a list of documents, which read_html can't handle. Instead, include all of the rvest actions inside lapply:

library(rvest)

# list all .htm/.html files in the working directory
html <- list.files(pattern = "\\.(htm|html)$")

# for each file, parse it and extract the text of every <tr> row
mydata <- lapply(html, function(file) {
  read_html(file) %>% html_nodes('tr') %>% html_text()
})

Example

Having two test files in my working directory with the content

<html><head></head><body><table><tr><td>Martin</td></tr></table></body></html>

and

<html><head></head><body><table><tr><td>Carolin</td></tr></table></body></html>

would output

> mydata
[[1]]
[1] "Martin"

[[2]]
[1] "Carolin"

In my case I could then format it as a data frame using

data.frame(Content = unlist(mydata))
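
If you also need to know which file each row of text came from (often useful with 3000+ inputs), a minimal variation of the same approach could build a small data frame per file and bind them together. This is just a sketch, assuming the html file vector from above; the File column and the result name are my own additions:

library(rvest)

mydata <- lapply(html, function(file) {
  rows <- read_html(file) %>% html_nodes('tr') %>% html_text()
  if (length(rows) == 0) return(NULL)  # skip files without any <tr>
  # one row per <tr>, tagged with the source file name
  data.frame(File = file, Content = rows, stringsAsFactors = FALSE)
})

result <- do.call(rbind, mydata)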
