Welcome to ShenZhenJia Knowledge Sharing Community for programmer and developer-Open, Learning and Share
I want to extract links from multiple web pages. The extraction itself works fine, but when I loop over multiple URLs, the first URL gets scraped twice and the last one never gets scraped. What is the reason for this?

import re
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import csv
from bs4 import BeautifulSoup

URLs = ["https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/1",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/2",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/3",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/4",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/5",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/6",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/7"]

driver = webdriver.Chrome(ChromeDriverManager().install())

file = open('linkler.csv', 'w+', newline='')
writer = csv.writer(file)
writer.writerow(['linkler'])


for link in URLs:
  driver.get(link)

  html_source = driver.page_source

  soup = BeautifulSoup(html_source, "html.parser")

  for links in soup.findAll('a', attrs={'href': re.compile("^/soccer/turkey/super-lig-2019-2020/")}):
    writer.writerow([links.get('href')])


driver.quit()
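As a side note, the rows go through a buffered file object, so it is worth making sure the csv file actually gets closed and flushed. A minimal stdlib-only sketch of just the writing step (the hrefs here are made-up placeholders for the scraped results), using a `with` block so the file is closed automatically:

```python
import csv

# Made-up hrefs standing in for the links scraped from the page.
hrefs = [
    "/soccer/turkey/super-lig-2019-2020/galatasaray-fenerbahce/",
    "/soccer/turkey/super-lig-2019-2020/besiktas-trabzonspor/",
]

# 'with' closes the file on exit, so every buffered row reaches disk.
with open('linkler.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['linkler'])       # header row
    for href in hrefs:
        writer.writerow([href])        # one href per row
```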

1 Answer

After a lot of digging I found the problem: the site blocks your requests when they arrive back to back with no rest time, so I fixed it by adding a sleep between page loads. I tested it, and your code now works fine.

import re
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import csv
from bs4 import BeautifulSoup
import time

URLs = ["https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/1",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/2",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/3",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/4",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/5",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/6",
        "https://www.oddsportal1.com/soccer/turkey/super-lig-2019-2020/results/#/page/7"]

driver = webdriver.Chrome(ChromeDriverManager().install())

file = open('linkler.csv', 'w+', newline='')
writer = csv.writer(file)
writer.writerow(['linkler'])

for link in URLs:
    driver.get(link)
    time.sleep(5)  # rest time so the site does not block back-to-back requests
    html_source = driver.page_source

    soup = BeautifulSoup(html_source, "html.parser")

    for links in soup.findAll('a', attrs={'href': re.compile("^/soccer/turkey/super-lig-2019-2020/")}):
        writer.writerow([links.get('href')])

file.close()
driver.quit()
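For what it's worth, the `re.compile("^/soccer/turkey/super-lig-2019-2020/")` pattern is anchored at the start of the href, so absolute URLs and links to other leagues are skipped. A small stdlib-only sketch of that filter (the candidate hrefs are made up for illustration):

```python
import re

pattern = re.compile("^/soccer/turkey/super-lig-2019-2020/")

# Made-up hrefs showing what the anchor-tag filter keeps and drops.
candidates = [
    "/soccer/turkey/super-lig-2019-2020/results/",       # kept: relative, right league
    "https://www.oddsportal1.com/soccer/turkey/",        # dropped: absolute URL
    "/soccer/england/premier-league-2019-2020/results/", # dropped: other league
]

matches = [h for h in candidates if pattern.match(h)]
print(matches)  # only the first candidate survives
```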

