The PMDA site uses the same URL for the search confirmation page as for the search form, but I would like to parse the HTML of the search confirmation page.
The code below cannot get the confirmation HTML and instead fetches the search form HTML, so the click on elem_serch_btn1 fails.
What should I do to parse the HTML of the search confirmation page?
Please let me know.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from urllib import request
from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin
import openpyxl as op
import datetime
import time
driver = webdriver.Chrome(r'C:\chromedriver.exe')  # path to the ChromeDriver binary
driver.get("https://www.pmda.go.jp/PmdaSearch/kikiSearch/")
# Enter the search term and submit the form
elem_search_word = driver.find_element_by_id("txtName")
elem_search_word.send_keys("Blood Irradiation Device")
elem_search_btn = driver.find_element_by_name('btnA')
elem_search_btn.click()
cur_url=driver.current_url
html=request.urlopen(cur_url)
soup = BeautifulSoup(html, 'html.parser')
print(soup)
time.sleep(5)
# The locator below is an XPath, so find_element_by_xpath is needed,
# not find_element_by_link_text
elem_serch_btn1 = driver.find_element_by_xpath('//*[@id="ResultList"]/tbody/tr[2]/td[1]/div/a')
elem_serch_btn1.click()
Web servers use sessions to share state between the server and a browser, so even when you access the same URL, different clients may receive different pages. You must therefore retrieve the page with the same browser session that first accessed it.
You first opened the page with driver, but then fetched the URL again with request.urlopen, which acts as a separate client with no session, so it receives the search form instead of the confirmation page.
Instead of request.urlopen, read the HTML directly from the browser that performed the search, as follows:
html=driver.page_source
soup = BeautifulSoup(html, 'html.parser')
print(soup)
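Once driver.page_source holds the confirmation page, you can locate the result link in the parsed HTML the same way the XPath in the question does. A minimal sketch using BeautifulSoup on an inline fragment that mimics the shape of the ResultList table (the fragment and the href value are hypothetical, not real PMDA markup):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment shaped like the ResultList table in the question
html = """
<table id="ResultList">
  <tbody>
    <tr><th>Name</th><th>Company</th></tr>
    <tr><td><div><a href="/PmdaSearch/kikiDetail/0001">Blood Irradiation Device</a></div></td>
        <td>Example Co.</td></tr>
  </tbody>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS equivalent of the XPath //*[@id="ResultList"]/tbody/tr[2]/td[1]/div/a
link = soup.select_one("#ResultList tbody tr:nth-of-type(2) td:nth-of-type(1) div a")
print(link["href"])       # relative URL of the first search hit
print(link.get_text())    # link text of the first search hit
```

In the real script, html would be driver.page_source instead of the inline string, and the extracted relative href could be joined to the site root with urljoin, which the question already imports.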
© 2024 OneMinuteCode. All rights reserved.