Environment: CentOS 7, Spark 2.4.0, Hadoop 2.9.2, Scala 2.12.8, Python 3.6.6

When I launch pyspark with /opt/spark/bin/pyspark, it fails with the following error:

Python 3.6.6 (default, Jan 29 2019, 20:02:39) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
Type "help", "copyright", "credits" or "license" for more information.
2019-01-30 19:47:26 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
/opt/spark/python/pyspark/shell.py:45: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "/opt/spark/python/pyspark/shell.py", line 41, in <module>
    spark = SparkSession._create_shell_session()
  File "/opt/spark/python/pyspark/sql/session.py", line 573, in _create_shell_session
    return SparkSession.builder\
  File "/opt/spark/python/pyspark/sql/session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/opt/spark/python/pyspark/context.py", line 349, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/opt/spark/python/pyspark/context.py", line 118, in __init__
    conf, jsc, profiler_cls)
  File "/opt/spark/python/pyspark/context.py", line 187, in _do_init
    self._accumulatorServer = accumulators._start_update_server(auth_token)
  File "/opt/spark/python/pyspark/accumulators.py", line 291, in _start_update_server
    server = AccumulatorServer(("localhost", 0), _UpdateRequestHandler, auth_token)
  File "/opt/spark/python/pyspark/accumulators.py", line 274, in __init__
    SocketServer.TCPServer.__init__(self, server_address, RequestHandlerClass)
  File "/usr/local/python3/lib/python3.6/socketserver.py", line 453, in __init__
    self.server_bind()
  File "/usr/local/python3/lib/python3.6/socketserver.py", line 467, in server_bind
    self.socket.bind(self.server_address)
socket.gaierror: [Errno -2] Name or service not known
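
The last frame binds a TCP server to ("localhost", 0) in pyspark/accumulators.py, so the failure happens before any real Spark work starts. A minimal plain-Python check of the same bind (my assumption is that the problem is name resolution of "localhost", e.g. a missing /etc/hosts entry) reproduces the error:

import socket

# Mimic pyspark's AccumulatorServer, which binds a TCP server to ("localhost", 0).
# If "localhost" cannot be resolved on this host, bind() raises the same
# socket.gaierror: [Errno -2] Name or service not known.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("localhost", 0))
print("bound to", s.getsockname())
s.close()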

However, spark-shell starts without any problems:

2019-01-30 19:58:49 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://datanode1:4040
Spark context available as 'sc' (master = local[*], app id = local-1548896336336).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
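
Since spark-shell is JVM-only and, as far as I understand, does not start pyspark's Python-side accumulator server, the difference might simply be hostname resolution. A quick check, under that assumption:

import socket

# Try to resolve the names the Python side relies on.
# "localhost" failing here would match the gaierror above.
for name in ("localhost", socket.gethostname()):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as exc:
        print(name, "-> resolution failed:", exc)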

Any help with this issue would be appreciated.
