pbootcms网站模板|日韩1区2区|织梦模板||网站源码|日韩1区2区|jquery建站特效-html5模板网

    <i id='B8ykc'><tr id='B8ykc'><dt id='B8ykc'><q id='B8ykc'><span id='B8ykc'><b id='B8ykc'><form id='B8ykc'><ins id='B8ykc'></ins><ul id='B8ykc'></ul><sub id='B8ykc'></sub></form><legend id='B8ykc'></legend><bdo id='B8ykc'><pre id='B8ykc'><center id='B8ykc'></center></pre></bdo></b><th id='B8ykc'></th></span></q></dt></tr></i><div class="qaqi3tu" id='B8ykc'><tfoot id='B8ykc'></tfoot><dl id='B8ykc'><fieldset id='B8ykc'></fieldset></dl></div>

    <small id='B8ykc'></small><noframes id='B8ykc'>

    1. <legend id='B8ykc'><style id='B8ykc'><dir id='B8ykc'><q id='B8ykc'></q></dir></style></legend>
        <bdo id='B8ykc'></bdo><ul id='B8ykc'></ul>
      <tfoot id='B8ykc'></tfoot>
      1. 通過 Python 使用 Selenium 進行多處理時,Chrome 在幾

Chrome crashes after several hours while multiprocessing using Selenium through Python

                   This article describes how to deal with Chrome crashing after several hours while multiprocessing using Selenium through Python. It should be a useful reference for anyone running into the same problem.

                   Problem description



                  This is the error traceback after several hours of scraping:

                  The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.
                  


                   This is my Selenium Python setup:

                   #scrape.py
                   from selenium import webdriver
                   from selenium.common.exceptions import *
                   from selenium.webdriver.common.by import By
                   from selenium.webdriver.support import expected_conditions as EC
                   from selenium.webdriver.support.ui import WebDriverWait
                   from selenium.webdriver.chrome.options import Options
                   
                   def run_scrape(link):
                       chrome_options = Options()
                       chrome_options.add_argument('--no-sandbox')
                       chrome_options.add_argument("--headless")
                       chrome_options.add_argument('--disable-dev-shm-usage')
                       chrome_options.add_argument("--lang=en")
                       chrome_options.add_argument("--start-maximized")
                       chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
                       chrome_options.add_experimental_option('useAutomationExtension', False)
                       chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36")
                       chrome_options.binary_location = "/usr/bin/google-chrome"
                       browser = webdriver.Chrome(executable_path=r'/usr/local/bin/chromedriver', options=chrome_options)
                       browser.get(link)  # link passed here
                       try:
                           pass  #scrape process
                       except Exception:
                           pass  #other stuffs
                       browser.quit()
                  

                  #multiprocess.py
                   import time
                  from multiprocessing import Pool
                  from scrape import *
                  
                  if __name__ == '__main__':
                      start_time = time.time()
                      #links = list of links to be scraped
                      pool = Pool(20)
                      results = pool.map(run_scrape, links)
                      pool.close()
                      print("Total Time Processed: "+"--- %s seconds ---" % (time.time() - start_time))
                  


                  Chrome, ChromeDriver Setup, Selenium Version

                  ChromeDriver 79.0.3945.36 (3582db32b33893869b8c1339e8f4d9ed1816f143-refs/branch-heads/3945@{#614})
                  Google Chrome 79.0.3945.79
                  Selenium Version: 4.0.0a3
                  


                   I'm wondering why Chrome is closing while the other processes keep working?

                   Recommended answer


                   I took your code, modified it a bit to suit my Test Environment, and here are the execution results:


                  • Code Block:

                  • multiprocess.py:

                  import time
                  from multiprocessing import Pool
                  from multiprocessingPool.scrape import run_scrape
                  
                  if __name__ == '__main__':
                      start_time = time.time()
                      links = ["https://selenium.dev/downloads/", "https://selenium.dev/documentation/en/"] 
                      pool = Pool(2)
                      results = pool.map(run_scrape, links)
                      pool.close()
                      print("Total Time Processed: "+"--- %s seconds ---" % (time.time() - start_time)) 
                  

                • scrape.py:

                  from selenium import webdriver
                  from selenium.common.exceptions import NoSuchElementException, TimeoutException
                  from selenium.webdriver.common.by import By
                  from selenium.webdriver.chrome.options import Options
                  
                  def run_scrape(link):
                      chrome_options = Options()
                      chrome_options.add_argument('--no-sandbox')
                      chrome_options.add_argument("--headless")
                      chrome_options.add_argument('--disable-dev-shm-usage')
                      chrome_options.add_argument("--lang=en")
                      chrome_options.add_argument("--start-maximized")
                      chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
                      chrome_options.add_experimental_option('useAutomationExtension', False)
                      chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36")
                       chrome_options.binary_location = r'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
                       browser = webdriver.Chrome(executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe', options=chrome_options)
                      browser.get(link)
                      try:
                          print(browser.title)
                      except (NoSuchElementException, TimeoutException):
                          print("Error")
                      browser.quit()
                  

                 • Console Output:

                  Downloads
                  The Selenium Browser Automation Project :: Documentation for Selenium
                  Total Time Processed: --- 10.248600006103516 seconds ---
                  


                  It is pretty much evident your program is logically flawless and just perfect.


                   As you mentioned this error surfaces after several hours of scraping, I suspect this is due to the fact that WebDriver is not thread-safe. Having said that, if you can serialize access to the underlying driver instance, you can share a reference in more than one thread. This is not advisable, but you can always instantiate one WebDriver instance for each thread.
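
                   As a concrete illustration of the "serialize access" option above, here is a minimal sketch (not part of the original answer; the helper name fetch_title and the assumption that chromedriver is on PATH are mine) that guards one shared driver with a threading.Lock so only one thread issues commands at a time:

                   # Minimal sketch, not from the original answer: serialize access to one
                   # shared WebDriver with a lock. Assumes chromedriver is available on PATH.
                   import threading
                   from selenium import webdriver
                   
                   driver_lock = threading.Lock()
                   shared_driver = webdriver.Chrome()
                   
                   def fetch_title(link):
                       # Only one thread may talk to the shared driver at any moment.
                       with driver_lock:
                           shared_driver.get(link)
                           return shared_driver.title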


                   Ideally the issue of thread-safety isn't in your code but in the actual browser bindings. They all assume there will only be one command at a time (e.g. as with a real user). But on the other hand, you can always instantiate one WebDriver instance for each thread, which will launch multiple browsing tabs/windows. Up to this point your program seems perfect.
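
                   For completeness, here is a minimal sketch of the "one WebDriver instance per thread" approach (illustrative only, not from the original answer; it reuses the two documentation URLs from the test run above and assumes chromedriver is on PATH):

                   # Minimal sketch, not from the original answer: one independent WebDriver
                   # instance per thread, so no driver is ever shared.
                   import threading
                   from selenium import webdriver
                   
                   def worker(link):
                       driver = webdriver.Chrome()  # each thread owns its own browser
                       try:
                           driver.get(link)
                           print(driver.title)
                       finally:
                           driver.quit()  # always release the browser, even if scraping fails
                   
                   links = ["https://selenium.dev/downloads/", "https://selenium.dev/documentation/en/"]
                   threads = [threading.Thread(target=worker, args=(link,)) for link in links]
                   for t in threads:
                       t.start()
                   for t in threads:
                       t.join()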


                   Now, different threads can be run on the same WebDriver, but then the results of the tests would not be what you expect. The reason is that when you use multi-threading to run different tests on different tabs/windows, a little bit of thread-safety coding is required, or else the actions you perform, such as click() or send_keys(), will go to the open tab/window that currently has focus, regardless of the thread you expect to be running. This essentially means all the tests will run simultaneously on the same tab/window that has focus, not on the intended tab/window.
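
                   To make the focus problem concrete, the sketch below (illustrative only, not from the original answer; the helper name scrape_in_own_tab is mine) shows the bookkeeping that sharing one browser across threads forces on you: every action has to re-select its own window handle first, under a lock, or it may land in whichever tab currently has focus:

                   # Minimal sketch, not from the original answer: one shared browser with one
                   # tab per task, and explicit window-handle switching under a lock.
                   import threading
                   from selenium import webdriver
                   
                   lock = threading.Lock()
                   driver = webdriver.Chrome()  # assumes chromedriver is on PATH
                   
                   def scrape_in_own_tab(link):
                       with lock:
                           existing = set(driver.window_handles)
                           driver.execute_script("window.open('about:blank', '_blank')")
                           handle = (set(driver.window_handles) - existing).pop()
                           driver.switch_to.window(handle)
                           driver.get(link)
                       # Any later action must re-acquire the lock and re-select this tab,
                       # otherwise it targets whichever tab happens to have focus.
                       with lock:
                           driver.switch_to.window(handle)
                           return driver.title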

                   That concludes this article on Chrome crashing after several hours while multiprocessing using Selenium through Python. We hope the recommended answer above helps you solve the problem.


