How to connect SQL Workbench to Spark SQL

Steps to set up SQL Workbench for accessing Spark SQL databases:

  1. Start Spark SQL on the NameNode as:
    /opt/spark/bin/spark-sql --verbose --master yarn --driver-memory 5G --executor-memory 5G --executor-cores 2 --num-executors 5
  2. Download SQL Workbench/J; for macOS, download from: http://www.sql-workbench.net/Workbench-Build117-MacJava7.tgz
  3. Extract the downloaded tgz file and launch SQLWorkbenchJ
  4. Copy the jar /opt/spark/lib/spark-assembly-1.2.1-hadoop2.4.0.jar (or the equivalent for your Hadoop version) from the NameNode (the spark-sql server) to the machine running SQL Workbench; see the example copy command after this list.
  5. In SQL Workbench, from the menu go to File -> Manage Drivers.
  6. Click the 'Create new entry' button in the top left corner.
  7. Provide a driver name, such as spark-sql_driver.
  8. In the Library section, select the jar (which provides the JDBC driver) copied from the NameNode in step 4 above.
  9. In the Classname section, click the 'Search' button.
  10. From the pop-up window, select the driver 'org.apache.hive.jdbc.HiveDriver' and click 'OK'.
  11. From the menu, select File -> Connect window.
  12. Select the driver 'spark-sql_driver' created in step 7 above.
  13. Provide the URL as jdbc:hive2://<host>:<port>/<database> (see the notes and example URL after this list).
  14. Provide the username as 'admin' and click 'OK'.
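
The copy in step 4 can be done with scp. A minimal sketch, where user@namenode.example.com is a placeholder for your NameNode login and ~/spark-jdbc/ is an arbitrary local destination directory:

    # Copy the Spark assembly jar from the NameNode to the machine running SQL Workbench.
    # The user, hostname and destination directory are placeholders for your environment.
    scp user@namenode.example.com:/opt/spark/lib/spark-assembly-1.2.1-hadoop2.4.0.jar ~/spark-jdbc/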
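
A note on the JDBC endpoint: the jdbc:hive2:// URL in step 13 is served by a HiveServer2-compatible Thrift endpoint on the Spark SQL server. If your cluster does not already expose one, the Spark SQL Thrift server can be started on the NameNode; this is only a sketch, assuming the same /opt/spark install and the same YARN resource settings used in step 1:

    # Start the Spark SQL Thrift server (HiveServer2-compatible), which accepts
    # jdbc:hive2:// connections; resource options mirror the ones used in step 1.
    /opt/spark/sbin/start-thriftserver.sh --master yarn --driver-memory 5G --executor-memory 5G --executor-cores 2 --num-executors 5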
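
For the URL in step 13, an illustrative example, assuming the Thrift endpoint runs on the NameNode, listens on the default HiveServer2 port 10000, and you want the 'default' database (host, port and database are placeholders for your environment):

    jdbc:hive2://namenode.example.com:10000/default

Connectivity can also be checked outside SQL Workbench with the beeline client shipped with Spark; again a sketch, with a placeholder host and the 'admin' username from step 14:

    # Open a JDBC session against the same endpoint to confirm it is reachable.
    /opt/spark/bin/beeline -u jdbc:hive2://namenode.example.com:10000/default -n admin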
