[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[[["\u003cp\u003eDataproc Serverless allows the execution of Spark workloads without the need to provision and manage a Dataproc cluster, offering two methods: Spark Batch and Spark Interactive.\u003c/p\u003e\n"],["\u003cp\u003eDataproc Serverless for Spark Batch allows users to submit batch workloads via the Google Cloud console, CLI, or API, with the service managing resource scaling and only charging for active workload execution time.\u003c/p\u003e\n"],["\u003cp\u003eDataproc Serverless for Spark Interactive enables the writing and running of code within Jupyter notebooks, accessible through the Dataproc JupyterLab plugin, which also provides functionalities for creating and managing Dataproc on Compute Engine clusters.\u003c/p\u003e\n"],["\u003cp\u003eCompared to Dataproc on Compute Engine, Dataproc Serverless for Spark provides serverless capabilities, faster startup times, and interactive sessions, while Compute Engine offers greater infrastructure control and supports other open-source frameworks.\u003c/p\u003e\n"],["\u003cp\u003eDataproc Serverless adheres to data residency, CMEK, and VPC-SC security requirements and supports various Spark batch workload types including PySpark, Spark SQL, Spark R, and Spark (Java or Scala).\u003c/p\u003e\n"]]],[],null,["# Serverless for Apache Spark overview\n\n| **Dataproc Serverless** is now **Google Cloud Serverless for Apache Spark**. Until updated, some documents will refer to the previous name.\n\n\u003cbr /\u003e\n\nServerless for Apache Spark lets you run Spark workloads without requiring you\nto provision and manage your own Dataproc cluster.\nThere are two ways to run Serverless for Apache Spark workloads:\n\n- [Batch workloads](#spark-batch)\n- [Interactive sessions](#spark-interactive)\n\nBatch workloads\n---------------\n\nSubmit a batch workload to the Serverless for Apache Spark service using the\nGoogle Cloud console, Google Cloud CLI, or Dataproc API. The service\nruns the workload on a managed compute infrastructure, autoscaling resources\nas needed. [Serverless for Apache Spark charges](/dataproc-serverless/pricing) apply\nonly to the time when the workload is executing.\n\nTo get started, see\n[Run an Apache Spark batch workload](/dataproc-serverless/docs/quickstarts/spark-batch).\n| You can schedule a Spark batch workload as part of an [Airflow](https://airflow.apache.org/) or [Cloud Composer](/composer) workflow using an [Airflow batch operator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/dataproc.html#create-a-batch). See [Run Serverless for Apache Spark workloads with Cloud Composer](/composer/docs/composer-2/run-dataproc-workloads) for more information.\n\nInteractive sessions\n--------------------\n\nWrite and run code in Jupyter notebooks during a Serverless for Apache Spark for\nSpark interactive session. 
| You can schedule a Spark batch workload as part of an [Airflow](https://airflow.apache.org/) or [Cloud Composer](/composer) workflow using an [Airflow batch operator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/dataproc.html#create-a-batch). See [Run Serverless for Apache Spark workloads with Cloud Composer](/composer/docs/composer-2/run-dataproc-workloads) for more information.

Interactive sessions
--------------------

Write and run code in Jupyter notebooks during a Serverless for Apache Spark interactive session. You can create a notebook session in the following ways:

- [Run PySpark code in BigQuery Studio notebooks](/bigquery/docs/use-spark). Use the BigQuery Python notebook to create a [Spark-Connect-based](https://spark.apache.org/docs/latest/spark-connect-overview.html) Serverless for Apache Spark interactive session. Each BigQuery notebook can have only one active Serverless for Apache Spark session associated with it.

- [Use the Dataproc JupyterLab plugin](/dataproc-serverless/docs/quickstarts/jupyterlab-sessions) to create multiple Jupyter notebook sessions from templates that you create and manage. When you install the plugin on a local machine or Compute Engine VM, cards that correspond to different Spark kernel configurations appear on the JupyterLab launcher page. Click a card to create a Serverless for Apache Spark notebook session, then start writing and testing your code in the notebook.

  The Dataproc JupyterLab plugin also lets you use the JupyterLab launcher page to take the following actions:

  - Create Dataproc on Compute Engine clusters.
  - Submit jobs to Dataproc on Compute Engine clusters.
  - View Google Cloud and Spark logs.

Serverless for Apache Spark compared to Dataproc on Compute Engine
-------------------------------------------------------------------

If you want to provision and manage infrastructure, and then execute workloads on Spark and other open source processing frameworks, use [Dataproc on Compute Engine](/dataproc/docs). The following table lists key differences between Dataproc on Compute Engine and Serverless for Apache Spark.

| Capability | Serverless for Apache Spark | Dataproc on Compute Engine |
|---|---|---|
| Serverless | Yes | No |
| Startup time | Faster | Slower |
| Interactive sessions | Yes | No |
| Infrastructure control | Managed by Google | Greater control over cluster infrastructure |
| Processing frameworks | Spark | Spark and other open source frameworks |

Security compliance
-------------------

Serverless for Apache Spark adheres to all [data residency](/terms/data-residency), [CMEK](/dataproc-serverless/docs/guides/cmek-serverless), [VPC-SC](/dataproc-serverless/docs/concepts/network#s8s-and-vpc-sc-networks), and other security requirements that Dataproc complies with.

Batch workload capabilities
---------------------------

You can run the following Serverless for Apache Spark batch workload types:

- PySpark
- Spark SQL
- Spark R
- Spark (Java or Scala)

You can specify [Spark properties](/dataproc-serverless/docs/concepts/properties) when you submit a Serverless for Apache Spark batch workload.
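As a hedged illustration of the properties mechanism, the sketch below sets Spark properties on the `runtime_config` of a batch definition built with the `google-cloud-dataproc` Python client library. The property values and file URI are placeholders; see the Spark properties page linked above for the keys and value ranges that Serverless for Apache Spark supports.

```python
from google.cloud import dataproc_v1

# Illustrative values only; consult the Spark properties page for supported
# keys and allowed values before using them in a real workload.
batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://your-bucket/your_job.py"  # placeholder
    ),
    runtime_config=dataproc_v1.RuntimeConfig(
        properties={
            "spark.executor.cores": "4",                   # cores per executor
            "spark.dynamicAllocation.maxExecutors": "20",  # autoscaling upper bound
        }
    ),
)
# Submit the batch with BatchControllerClient.create_batch(), as in the
# earlier batch workload example.
```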