pyspark mysql jdbc load: An error occurred while calling o23.load, No suitable driver
This post explains how to resolve the "No suitable driver" error raised when a PySpark MySQL JDBC load calls o23.load.

Problem description

I use the docker image sequenceiq/spark on my Mac to study these Spark examples. During the study process I upgraded the Spark inside that image to 1.6.1 according to this answer, and the error occurred when I started the Simple Data Operations example. Here is what happened:

                  當(dāng)我運(yùn)行 df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load() 它引發(fā)錯(cuò)誤,與pyspark控制臺(tái)的完整堆棧如下:

                  when I run df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load() it raise a error, and the full stack with the pyspark console is as followed:

                  Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
                  [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
                  Type "help", "copyright", "credits" or "license" for more information.
                  16/04/12 22:45:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
                  Welcome to
                        ____              __
                       / __/__  ___ _____/ /__
                      _\ \/ _ \/ _ `/ __/  '_/
                     /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
                        /_/
                  
                  Using Python version 2.6.6 (r266:84292, Jul 23 2015 15:22:56)
                  SparkContext available as sc, HiveContext available as sqlContext.
                  >>> url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
                  >>> df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  16/04/12 22:46:05 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:06 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
                  16/04/12 22:46:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
                  16/04/12 22:46:16 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:17 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                    File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
                      return self._df(self._jreader.load())
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
                    File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
                      return f(*a, **kw)
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
                  py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
                  : java.sql.SQLException: No suitable driver
                      at java.sql.DriverManager.getDriver(DriverManager.java:278)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at scala.Option.getOrElse(Option.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
                      at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
                      at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
                      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
                      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                      at java.lang.reflect.Method.invoke(Method.java:606)
                      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
                      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
                      at py4j.Gateway.invoke(Gateway.java:259)
                      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
                      at py4j.commands.CallCommand.execute(CallCommand.java:79)
                      at py4j.GatewayConnection.run(GatewayConnection.java:209)
                      at java.lang.Thread.run(Thread.java:744)
                  
                  >>>
                  

Here is what I have tried so far:

1. Downloaded mysql-connector-java-5.0.8-bin.jar and put it into /usr/local/spark/lib/. It still gives the same error (a quick way to check whether the driver class is actually visible to the JVM is sketched after this list).

2. Created t.py like this:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    # same connection URL as in the pyspark session above
    url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
    df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()

    df.printSchema()
    countsByAge = df.groupBy("age").count()
    countsByAge.show()
    countsByAge.write.format("json").save("file:///usr/local/mysql/mysql-connector-java-5.0.8/db.json")
                  

Then I tried spark-submit --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py. The result is still the same.

3. Then I tried pyspark --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py, both with and without the t.py above; still the same.

During all of this, MySQL was running. Here is my OS info:

                  # rpm --query centos-release  
                  centos-release-6-5.el6.centos.11.2.x86_64
                  

The Hadoop version is 2.6.

Now I don't know where to go next, so I hope someone can give some advice. Thanks!

Recommended answer

                  當(dāng)我嘗試將腳本寫(xiě)入 MySQL 時(shí),我遇到了java.sql.SQLException:沒(méi)有合適的驅(qū)動(dòng)程序".

                  I ran into "java.sql.SQLException: No suitable driver" when I tried to have my script write to MySQL.

Here's what I did to fix that.

In script.py:

                  df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                                    "?user=my_user&password=my_password",
                                table="my_table",
                                mode="append",
                                properties={"driver": 'com.mysql.jdbc.Driver'})
                  

Then I ran spark-submit this way:

                  SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.1/libexec spark-submit --packages mysql:mysql-connector-java:5.1.39 ./script.py
                  

Note that SPARK_HOME is specific to where Spark is installed. For your environment, https://github.com/sequenceiq/docker-spark/blob/master/README.md might help.
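The question itself is about the read path, and the same driver property can be passed there as well, either through .option("driver", ...) as shown further below, or through the properties argument of DataFrameReader.jdbc. A minimal sketch of the latter, reusing the placeholder host, database, and credentials from the write example (adapt them to your setup):

    # Read-side counterpart: name the driver class explicitly via `properties`.
    # Assumes a pyspark shell or script where `sqlContext` is already defined.
    url = ("jdbc:mysql://localhost:3333/my_database"
           "?user=my_user&password=my_password")
    df = sqlContext.read.jdbc(url=url,
                              table="my_table",
                              properties={"driver": "com.mysql.jdbc.Driver"})
    df.show()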

In case all of the above is confusing, try this: in t.py replace

                  sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  
with
                  sqlContext.read.format("jdbc").option("dbtable","people").option("driver", 'com.mysql.jdbc.Driver').load()
                  

Then run:

                  spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py
                  

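Putting the pieces together, a minimal self-contained t.py for this simplified route could look like the sketch below; the URL, database, table, and credentials are placeholders taken from the question and should be adapted:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    # Placeholder connection details from the question; adjust as needed.
    # Parameters are joined with '&', the usual Connector/J separator.
    url = "jdbc:mysql://localhost:3306/test?user=root&password=myPassWord"

    df = (sqlContext.read.format("jdbc")
          .option("url", url)
          .option("dbtable", "people")
          .option("driver", "com.mysql.jdbc.Driver")
          .load())

    df.printSchema()
    df.show()

    sc.stop()

Launched, as above, with spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py so that the connector jar is fetched and placed on both the driver and executor classpaths.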
That concludes this article on the "No suitable driver" error raised when a PySpark MySQL JDBC load calls o23.load; hopefully the recommended answer helps.

