I am web scraping http://tickertrak.com/ and am working in Google Colab, since I plan to pull the data out of the table into a DataFrame and then dump it into Google Sheets.
My current problem is that I can't figure out how to proceed from where I am.
import time

!pip install selenium
!apt-get update                      # refresh package lists so apt install works
!apt install chromium-chromedriver
# copying the binary to /usr/bin puts chromedriver on PATH,
# so no sys.path manipulation is needed
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
from selenium import webdriver
from bs4 import BeautifulSoup as soup
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver', options=chrome_options)  # 'chrome_options=' is deprecated; use 'options='
wd.get("http://tickertrak.com/")
time.sleep(2)
html = wd.page_source  # raw HTML string, not a DataFrame
s = soup(html, 'html.parser').find('table', {'class': 'table table-striped table-bordered table-sm tablesorter tablesorter-default hasFilters'})
How can I work from here to get the table data into a pandas DataFrame?
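One way to go from the `<table>` tag to a DataFrame is to pull the header cells and row cells out with BeautifulSoup and hand them to pandas. A minimal sketch; the sample HTML below is a hypothetical stand-in for `wd.page_source`, and the real tickertrak.com table will have different column names:

```python
import pandas as pd
from bs4 import BeautifulSoup

# Hypothetical sample standing in for wd.page_source.
html = """
<table class="table table-striped tablesorter">
  <thead><tr><th>Ticker</th><th>Price</th></tr></thead>
  <tbody>
    <tr><td>AAPL</td><td>150.00</td></tr>
    <tr><td>MSFT</td><td>250.00</td></tr>
  </tbody>
</table>
"""

table = BeautifulSoup(html, "html.parser").find("table")

# header row: the <th> cells become column names
headers = [th.get_text(strip=True) for th in table.find_all("th")]

# body rows: each <tr> contributes a list of its <td> texts
rows = [
    [td.get_text(strip=True) for td in tr.find_all("td")]
    for tr in table.find_all("tr")
]
rows = [r for r in rows if r]  # drop the header row (it has no <td> cells)

df = pd.DataFrame(rows, columns=headers)
print(df)
```

If the table is plain HTML once the page has rendered, `pd.read_html(wd.page_source)` can often produce the same DataFrame in a single call.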
Question from: https://stackoverflow.com/questions/65945600/google-collab-webscraping