Download the Spark binary package and unpack it on a *nix machine, then replace '/Users/jilu/Downloads/' in the code below with your own path. This is a standalone (local-mode) SparkSQL program: by the end of it the sqlContext has already been created, and everything after this point is the SparkSQL tutorial proper. This is the version I updated for the 1.3 release; barring surprises, every 1.x release works the same way. PS: note that this is the Python API, not Scala.

import os
import sys
import traceback

# Path to the unpacked Spark distribution
os.environ['SPARK_HOME'] = "/Users/jilu/Downloads/spark-1.3.0-bin-hadoop2.4"

# Append pyspark and py4j to the Python path
sys.path.append("/Users/jilu/Downloads/spark-1.3.0-bin-hadoop2.4/python/")
sys.path.append("/Users/jilu/Downloads/spark-1.3.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip")

# Try to import the needed modules
try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    from pyspark.sql import SQLContext, Row
    print("Successfully imported Spark Modules")
except ImportError:
    print("Can not import Spark Modules {}".format(traceback.format_exc()))
    sys.exit(1)

# Configure the Spark environment: a local master and an app name
conf = SparkConf().setAppName("myApp").setMaster("local")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
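To confirm the setup actually works, here is a minimal smoke test for the sqlContext created above. The sample data and the table name "people" are my own illustration, not part of the original post: it builds a DataFrame from an RDD of Rows, registers it as a temporary table, and queries it with SQL, all using the Spark 1.3 Python API.

# Hypothetical sample data for a quick sanity check of sqlContext
rows = sc.parallelize([Row(name="Alice", age=25), Row(name="Bob", age=30)])
df = sqlContext.createDataFrame(rows)
df.registerTempTable("people")  # register as a temp table so SQL can see it
adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
for r in adults.collect():
    print(r.name)  # prints: Alice, Bob

If this prints the two names without raising an exception, the SPARK_HOME and py4j paths are set correctly and you are ready for the rest of the tutorial.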