Hi there, I want to read data from MongoDB Atlas into PySpark.
The read script below is based on an example from this forum:
from Trademe_MongoDB.Credentials.Credentials import uri
from datetime import datetime
# from motor.motor_asyncio import AsyncIOMotorClient
from Trademe_MongoDB.logger_config import Logger_config
from pyspark.sql import SparkSession

logger = Logger_config().get_logger()

# Build the session with the MongoDB Spark connector and the Atlas connection URI.
spark = SparkSession.\
    builder.\
    appName("pyspark-notebook2").\
    config("spark.executor.memory", "1g").\
    config("spark.mongodb.read.connection.uri", uri).\
    config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector:10.0.3").\
    getOrCreate()

# Read one collection into a DataFrame and print it.
df = spark.read.format("mongodb").option("database", "1").option("collection", "2").load()
df.show()
It throws the following errors:
The system cannot find the path specified.
Error: Missing application resource.
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn,
k8s://https://host:port, or local (Default: local[*]).
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--exclude-packages Comma-separated list of groupId:artifactId, to exclude while
resolving the dependencies provided in --packages to avoid
dependency conflicts.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working
directory of each executor. File paths of these files
in executors can be accessed via SparkFiles.get(fileName).
--archives ARCHIVES Comma-separated list of archives to be extracted into the
working directory of each executor.
--conf, -c PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
This argument does not work with --principal / --keytab.
--help, -h Show this help message and exit.
--verbose, -v Print additional debug output.
--version, Print the version of current Spark.
Spark Connect only:
--remote CONNECT_URL URL to connect to the server for Spark Connect, e.g.,
sc://host:port. --master and --deploy-mode cannot be set
together with this option. This option is experimental, and
might change between minor releases.
Cluster deploy mode only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
Spark standalone, Mesos or K8s with cluster deploy mode only:
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.
Spark standalone, Mesos and Kubernetes only:
--total-executor-cores NUM Total cores for all executors.
Spark standalone, YARN and Kubernetes only:
--executor-cores NUM Number of cores used by each executor. (Default: 1 in
YARN and K8S modes, or all available cores on the worker
in standalone mode).
Spark on YARN and Kubernetes only:
--num-executors NUM Number of executors to launch (Default: 2).
If dynamic allocation is enabled, the initial number of
executors will be at least NUM.
--principal PRINCIPAL Principal to be used to login to KDC.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above.
Spark on YARN only:
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
'w' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "C:\Projects\Web projects\Trademe_MongoDB\Data analysis\Load From Spark.py", line 11, in <module>
spark = SparkSession.\
File "C:\Projects\Web projects\venv\lib\site-packages\pyspark\sql\session.py", line 477, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "C:\Projects\Web projects\venv\lib\site-packages\pyspark\context.py", line 512, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "C:\Projects\Web projects\venv\lib\site-packages\pyspark\context.py", line 198, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "C:\Projects\Web projects\venv\lib\site-packages\pyspark\context.py", line 432, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "C:\Projects\Web projects\venv\lib\site-packages\pyspark\java_gateway.py", line 106, in launch_gateway
raise RuntimeError("Java gateway process exited before sending its port number")
RuntimeError: Java gateway process exited before sending its port number
Process finished with exit code 1
Background: I downloaded PySpark only, not Hadoop. The SPARK_HOME environment variable is set up, the Java environment variables are set up, and both are reachable from the system PATH.
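For completeness, this is a small sanity check I can run from the same virtual environment before starting the script (just a sketch; it assumes JAVA_HOME and SPARK_HOME are the variables I set and that java is on the PATH):

import os
import subprocess

# Print the environment variables PySpark relies on when launching the JVM gateway.
print("JAVA_HOME =", os.environ.get("JAVA_HOME"))
print("SPARK_HOME =", os.environ.get("SPARK_HOME"))

# Confirm the java executable can actually be launched from this interpreter.
subprocess.run(["java", "-version"], check=True)

Both variables print the paths I set, and java -version runs without error, yet the script above still fails as shown.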