Conversation

@garlandz-db
Contributor

Make the scala-job example a default Scala template. The following changes were made:

  • Renamed scala-job => default-scala
  • Added standard compute support
  • Updated the serverless job config to use the client 4 private preview with java_dependencies
  • Updated the standard job config to use a DBR 17.3 cluster
  • Updated the Main code to create the DBConnect session without specifying .clusterId() or .serverless(). This ensures the generated job only calls DatabricksSession.builder.getOrCreate(), which the REPL then hijacks with its own session; calling .clusterId() or .serverless() would create a new session that the user would not expect.
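A minimal sketch of the session setup described in the last bullet, assuming the DBConnect Scala API (the surrounding template code is not shown in this excerpt):

```scala
import com.databricks.connect.DatabricksSession

object Main {
  def main(args: Array[String]): Unit = {
    // Intentionally no .clusterId(...) and no .serverless() here: the
    // builder resolves the target from the ambient environment, so a
    // hosting REPL can substitute its own session rather than have the
    // job spin up a new, unexpected one.
    val spark = DatabricksSession.builder().getOrCreate()
    spark.range(5).show()
  }
}
```

Running this requires a configured Databricks workspace, so it is only a sketch of the pattern, not a standalone program.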

pietern pushed a commit that referenced this pull request Oct 31, 2025
We will beta preview first with some slight changes to the existing template until we can get the big PR approved (#119).
```scala
import org.apache.spark.sql.functions.udf

object Main {
  def main(args: Array[String]): Unit = {
```
Contributor

This would use the same catalog/schema parameters as seen in the default template.

At this point all the default templates offer a parametrizable catalog and schema.

This could also be done in a follow-up.

Contributor Author

updated

@garlandz-db force-pushed the default_scala_template branch from 405428d to 57b3e64 on November 4, 2025 17:05
@garlandz-db force-pushed the default_scala_template branch from 57b3e64 to ca5a72b on November 4, 2025 17:07
@lennartkats-db left a comment (Contributor)

Please review open comments

```scala
def main(args: Array[String]): Unit = {
  println("Hello, World!")

  val catalog = getFromArgs(args, "catalog").getOrElse("samples")
```
Contributor Author

Removed the default "samples" return.

```scala
println("Hello, World!")

val catalog = getFromArgs(args, "catalog").getOrElse("samples")
val schema = getFromArgs(args, "schema").getOrElse("nyctaxi")
```
Contributor Author

```scala
spark.sql(s"USE SCHEMA $schema")
```
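The getFromArgs helper used above is not shown in this excerpt. A self-contained sketch of how such a "--key value" parser could behave (the implementation here is an assumption, not the template's actual code):

```scala
object ArgUtils {
  // Hypothetical stand-in for the template's getFromArgs: scans the
  // argument array for a "--<key> <value>" pair and returns the value.
  def getFromArgs(args: Array[String], key: String): Option[String] =
    args.sliding(2).collectFirst {
      case Array(k, v) if k == s"--$key" => v
    }

  def main(argv: Array[String]): Unit = {
    val args = Array("--catalog", "main", "--schema", "default")
    val catalog = getFromArgs(args, "catalog").getOrElse("samples")
    val schema = getFromArgs(args, "schema").getOrElse("nyctaxi")
    // With a live session, the suggestion above would then apply:
    // spark.sql(s"USE CATALOG $catalog"); spark.sql(s"USE SCHEMA $schema")
    println(s"catalog=$catalog schema=$schema")
  }
}
```

Missing keys fall back via getOrElse, matching the defaults discussed in this thread.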

@lennartkats-db merged commit 7c1f308 into databricks:main on Nov 12, 2025
1 check passed
github-merge-queue bot pushed a commit to databricks/cli that referenced this pull request Nov 13, 2025
## Changes
This adds a new default Scala DABs template as a follow-up to databricks/bundle-examples#119.

## Why
This provides an off-the-shelf template for customers to start using Scala DBConnect to develop Scala jobs in Databricks.

## Tests
Manually tested the user flow interactively and as a job workload for both standard and serverless compute.
github-merge-queue bot pushed a commit to databricks/cli that referenced this pull request Nov 13, 2025