For example, to submit a Spark job to the cluster you just need to send `POST /batches` with a JSON body containing Spark config options mapped to the analogous `spark-submit` script arguments.
```bash
$SPARK_HOME/bin/spark-submit \
--master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
--name SparkPi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=<spark-image> \
local:///path/to/examples.jar
# The following Livy REST API call has a similar effect:
curl -H 'Content-Type: application/json' -X POST \
-d '{
"name": "SparkPi",
"className": "org.apache.spark.examples.SparkPi",
"numExecutors": 5,
"conf": {
"spark.kubernetes.container.image": ""
},
"file": "local:///path/to/examples.jar"
}' "http://livy.endpoint.com/batches"
```
Under the hood, Livy parses the POSTed configs and runs `spark-submit` for you, passing along the other defaults configured for the Livy server.
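The curl call above can be sketched in Python using only the standard library. The endpoint `http://livy.endpoint.com` and the jar path are the placeholders from the example above, not real values:

```python
# Sketch of submitting a Livy batch job from Python (stdlib only).
# The Livy URL and jar path are placeholders from the curl example.
import json
import urllib.request


def build_batch_payload(name, class_name, num_executors, image, jar):
    """Build the JSON body for POST /batches, mirroring spark-submit args."""
    return {
        "name": name,
        "className": class_name,
        "numExecutors": num_executors,
        "conf": {"spark.kubernetes.container.image": image},
        "file": jar,
    }


def submit_batch(livy_url, payload):
    """POST the payload to Livy's /batches endpoint and return the parsed response."""
    req = urllib.request.Request(
        f"{livy_url}/batches",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_batch_payload(
        "SparkPi",
        "org.apache.spark.examples.SparkPi",
        5,
        "<spark-image>",  # replace with your Spark container image
        "local:///path/to/examples.jar",
    )
    # Requires a reachable Livy server:
    # batch = submit_batch("http://livy.endpoint.com", payload)
```

Splitting payload construction from the HTTP call keeps the mapping between `spark-submit` arguments and the JSON body easy to see and test in isolation.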
After the job is submitted, Livy uses the Kubernetes API to discover the Spark driver Pod scheduled to the cluster and starts tracking its state, caching Spark Pod logs and detail descriptions, and making that information available through the Livy REST API. It also builds routes to the Spark UI, Spark History Server and monitoring systems with [Kubernetes Ingress][kubernetes-ingress] resources ([Nginx Ingress Controller][nginx-ingress] in particular) and displays the links in the Livy Web UI.
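Because the tracked state is exposed through the REST API, a client can poll `GET /batches/{id}/state` until the job finishes. A minimal sketch, assuming the standard Livy batch states (the terminal-state set below is an assumption to verify against your Livy version):

```python
# Sketch of polling a Livy batch until it reaches a terminal state.
# Assumes Livy's GET /batches/{id}/state returns {"id": ..., "state": ...}.
import json
import time
import urllib.request

# Assumed terminal batch states; check them against your Livy version.
TERMINAL_STATES = {"success", "dead", "killed", "error"}


def is_terminal(state):
    """Return True when the batch has finished, successfully or not."""
    return state in TERMINAL_STATES


def wait_for_batch(livy_url, batch_id, interval=10.0):
    """Poll the batch state endpoint until the job reaches a terminal state."""
    while True:
        with urllib.request.urlopen(f"{livy_url}/batches/{batch_id}/state") as resp:
            state = json.load(resp)["state"]
        if is_terminal(state):
            return state
        time.sleep(interval)
```

A scheduler or CI pipeline can call `wait_for_batch("http://livy.endpoint.com", batch_id)` after submission and branch on the returned state.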
By providing a REST interface for Spark job orchestration, Livy enables any number of integrations with web/mobile apps and services, and an easy way to set up workflows via job-scheduling frameworks.
Livy has a built-in lightweight Web UI, which makes it genuinely competitive with YARN in terms of navigation, debugging and cluster discovery.