
        Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs


                  This article describes how to use Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs; it should be a useful reference for anyone facing the same problem.

                  Problem description



                  I'm trying to complete 100 model runs on my 8-processor 64-bit Windows 7 machine. I'd like to run 7 instances of the model concurrently to decrease my total run time (approx. 9.5 min per model run). I've looked at several threads pertaining to the Multiprocessing module of Python, but am still missing something.

                  Using the multiprocessing module

                  How to spawn parallel child processes on a multi-processor system?

                  Python Multiprocessing queue

                  My Process:

                  I have 100 different parameter sets I'd like to run through SEAWAT/MODFLOW to compare the results. I have pre-built the model input files for each model run and stored them in their own directories. What I'd like to be able to do is have 7 models running at a time until all realizations have been completed. There needn't be communication between processes or display of results. So far I have only been able to spawn the models sequentially:

                  import os,subprocess
                  import multiprocessing as mp
                  
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  files = []
                  for f in os.listdir(ws + r'\fieldgen\reals'):
                      if f.endswith('.npy'):
                          files.append(f)
                  
                  ## def work(cmd):
                  ##     return subprocess.call(cmd, shell=False)
                  
                  def run(f,def_param=ws):
                      real = f.split('_')[2].split('.')[0]
                      print 'Realization %s' % real
                  
                      mf2k = r'c:\modflow\mf2k.1_19\bin\mf2k.exe '
                      mf2k5 = r'c:\modflow\MF2005_1_8\bin\mf2005.exe '
                      seawatV4 = r'c:\modflow\swt_v4_00_04\exe\swt_v4.exe '
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '

                      exe = seawatV4x64
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                  
                      os.system( exe + swt_nam )
                  
                  
                  if __name__ == '__main__':
                      p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes
                      tasks = range(len(files))
                      results = []
                      for f in files:
                          r = p.map_async(run(f), tasks, callback=results.append)
                  

                  I changed the if __name__ == 'main': block to the following in hopes it would fix the lack of parallelism that I feel the for loop is imparting on the script above. However, the model fails to even run (no Python error):

                  if __name__ == '__main__':
                      p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes
                      p.map_async(run,((files[f],) for f in range(len(files))))
                  

                  Any and all help is greatly appreciated!

                  EDIT 3/26/2012 13:31 EST

                  Using the "Manual Pool" method in @J.F. Sebastian's answer below I get parallel execution of my external .exe. Model realizations are called up in batches of 8 at a time, but it doesn't wait for those 8 runs to complete before calling up the next batch and so on:

                  from __future__ import print_function
                  import os,subprocess,sys
                  import multiprocessing as mp
                  from Queue import Queue
                  from threading import Thread
                  
                  def run(f,ws):
                      real = f.split('_')[-1].split('.')[0]
                      print('Realization %s' % real)
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                      subprocess.check_call([seawatV4x64, swt_nam])
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\reals')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                  
                      for _ in threads: q.put_nowait(None) # signal no more files
                      for t in threads: t.join() # wait for completion
                  
                  if __name__ == '__main__':
                  
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  No error traceback is available. The run() function performs its duty when called on a single model realization file just as it does with multiple files. The only difference is that with multiple files it is called len(files) times, yet each of the instances immediately closes and only one model run is allowed to finish, at which point the script exits gracefully (exit code 0).

                  Adding some print statements to main() reveals some information about active thread counts as well as thread status (note that this is a test on only 8 of the realization files to make the screenshot more manageable; in theory all 8 files should run concurrently, but the behavior continues where they are spawned and immediately die, except for one):

                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\test')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count())]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                      print('Active Count a',threading.activeCount())
                      for _ in threads:
                          print(_)
                          q.put_nowait(None) # signal no more files
                      for t in threads: 
                          print(t)
                          t.join() # wait for completion
                      print('Active Count b',threading.activeCount())
                  

                  The line which reads "D:\Data\Users..." is the error information thrown when I manually stop the model from running to completion. Once I stop the model running, the remaining thread-status lines are reported and the script exits.

                  EDIT 3/26/2012 16:24 EST

                  SEAWAT does allow concurrent execution, as I've done this in the past, spawning instances manually using iPython and launching from each model file folder. This time around, I'm launching all model runs from a single location, namely the directory where my script resides. It looks like the culprit may be in the way SEAWAT is saving some of the output. When SEAWAT is run, it immediately creates files pertaining to the model run. One of these files is not being saved to the directory in which the model realization is located, but to the top directory where the script is located. This prevents any subsequent threads from saving the same file name in the same location (which they all want to do, since these filenames are generic and non-specific to each realization). The SEAWAT windows were not staying open long enough for me to read or even see that there was an error message; I only realized this when I went back and tried to run the code using iPython, which directly displays the printout from SEAWAT instead of opening a new window to run the program.

                  I am accepting @J.F. Sebastian's answer as it is likely that once I resolve this model-executable issue, the threading code he has provided will get me where I need to be.

                  FINAL CODE

                  Added the cwd argument in subprocess.check_call to start each instance of SEAWAT in its own directory. Very key.

                  from __future__ import print_function
                  import os,subprocess,sys
                  import multiprocessing as mp
                  from Queue import Queue
                  from threading import Thread
                  import threading
                  
                  def run(f,ws):
                      real = f.split('_')[-1].split('.')[0]
                      print('Realization %s' % real)
                      seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
                      cwd = ws + r'\reals\real%s\ss' % real
                      swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
                      subprocess.check_call([seawatV4x64, swt_nam],cwd=cwd)
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      wdir = os.path.join(ws, r'fieldgen\reals')
                      q = Queue()
                      for f in os.listdir(wdir):
                          if f.endswith('.npy'):
                              q.put_nowait((os.path.join(wdir, f), ws))
                  
                      # start threads
                      threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count()-1)]
                      for t in threads:
                          t.daemon = True # threads die if the program dies
                          t.start()
                      for _ in threads: q.put_nowait(None) # signal no more files
                      for t in threads: t.join() # wait for completion
                  
                  if __name__ == '__main__':
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  Solution

                  I don't see any computations in the Python code. If you just need to execute several external programs in parallel, it is sufficient to use subprocess to run the programs and the threading module to maintain a constant number of processes running, but the simplest code uses multiprocessing.Pool:

                  #!/usr/bin/env python
                  import os
                  import multiprocessing as mp
                  
                  def run(filename_def_param): 
                      filename, def_param = filename_def_param # unpack arguments
                      ... # call external program on `filename`
                  
                  def safe_run(*args, **kwargs):
                      """Call run(), catch exceptions."""
                      try: run(*args, **kwargs)
                      except Exception as e:
                          print("error: %s run(*%r, **%r)" % (e, args, kwargs))
                  
                  def main():
                      # populate files
                      ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                      workdir = os.path.join(ws, r'fieldgen\reals')
                      files = ((os.path.join(workdir, f), ws)
                               for f in os.listdir(workdir) if f.endswith('.npy'))
                  
                      # start processes
                      pool = mp.Pool() # use all available CPUs
                      pool.map(safe_run, files)
                  
                  if __name__=="__main__":
                      mp.freeze_support() # optional if the program is not frozen
                      main()
                  

                  If there are many files then pool.map() could be replaced by for _ in pool.imap_unordered(safe_run, files): pass.
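                  For instance, a minimal sketch of that substitution (same safe_run and files as in main() above; imap_unordered() yields results lazily as workers finish, so memory use stays flat for long file lists):

                  pool = mp.Pool() # use all available CPUs
                  for _ in pool.imap_unordered(safe_run, files):
                      pass # run() returns nothing; iterating just drives the pool
                  pool.close()
                  pool.join()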

                  There is also multiprocessing.dummy.Pool, which provides the same interface as multiprocessing.Pool but uses threads instead of processes; that might be more appropriate in this case.
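                  A minimal sketch of that swap, assuming the same safe_run and files as above; only the import changes and the rest of main() stays as-is:

                  from multiprocessing.dummy import Pool as ThreadPool # threads, same Pool API

                  pool = ThreadPool(8)        # 8 worker threads instead of 8 processes
                  pool.map(safe_run, files)   # identical call to the process-based version
                  pool.close()
                  pool.join()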

                  You don't need to keep some CPUs free. Just use a command that starts your executables with a low priority (on Linux it is the nice program).
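                  For example, here is a sketch of what a low-priority launch could look like on the asker's Windows machine (run_low_priority is a hypothetical helper; 0x4000 is the documented BELOW_NORMAL_PRIORITY_CLASS process-creation flag, creationflags is Windows-only, and on Linux you would prepend ['nice', ...] to the command instead):

                  import subprocess

                  BELOW_NORMAL_PRIORITY_CLASS = 0x4000 # Windows process-creation flag

                  def run_low_priority(cmd):
                      # start the external program at low priority so all CPUs can be
                      # used without starving the OS and other processes
                      subprocess.check_call(cmd, creationflags=BELOW_NORMAL_PRIORITY_CLASS)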

                  ThreadPoolExecutor example

                  concurrent.futures.ThreadPoolExecutor would be both simple and sufficient, but it requires a third-party dependency on Python 2.x (it has been in the stdlib since Python 3.2).

                  #!/usr/bin/env python
                  import os
                  import concurrent.futures
                  
                  def run(filename, def_param):
                      ... # call external program on `filename`
                  
                  # populate files
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  wdir = os.path.join(ws, r'fieldgen\reals')
                  files = (os.path.join(wdir, f) for f in os.listdir(wdir) if f.endswith('.npy'))
                  
                  # start threads
                  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
                      future_to_file = dict((executor.submit(run, f, ws), f) for f in files)
                  
                      for future in concurrent.futures.as_completed(future_to_file):
                          f = future_to_file[future]
                          if future.exception() is not None:
                             print('%r generated an exception: %s' % (f, future.exception()))
                          # run() doesn't return anything so `future.result()` is always `None`
                  

                  Or if we ignore exceptions raised by run():

                  from itertools import repeat
                  
                  ... # the same
                  
                  # start threads
                  with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
                       executor.map(run, files, repeat(ws))
                       # run() doesn't return anything so `map()` results can be ignored
                  

                  subprocess + threading (manual pool) solution

                  #!/usr/bin/env python
                  from __future__ import print_function
                  import os
                  import subprocess
                  import sys
                  from Queue import Queue
                  from threading import Thread
                  
                  def run(filename, def_param):
                      ... # define exe, swt_nam
                      subprocess.check_call([exe, swt_nam]) # run external program
                  
                  def worker(queue):
                      """Process files from the queue."""
                      for args in iter(queue.get, None):
                          try:
                              run(*args)
                          except Exception as e: # catch exceptions to avoid exiting the
                                                 # thread prematurely
                              print('%r failed: %s' % (args, e,), file=sys.stderr)
                  
                  # start threads
                  q = Queue()
                  threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
                  for t in threads:
                      t.daemon = True # threads die if the program dies
                      t.start()
                  
                  # populate files
                  ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
                  wdir = os.path.join(ws, r'fieldgen\reals')
                  for f in os.listdir(wdir):
                      if f.endswith('.npy'):
                          q.put_nowait((os.path.join(wdir, f), ws))
                  
                  for _ in threads: q.put_nowait(None) # signal no more files
                  for t in threads: t.join() # wait for completion
                  


