I have a piece of Scala code that works locally:
```scala
val test = "resources/test.csv"

val trainInput = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .format("com.databricks.spark.csv")
  .load(test)
  .cache()
```
However, when I try to run it on Azure by submitting the job to Spark and adjusting the path as follows:

```scala
val test = "wasb:///tmp/MachineLearningScala/test.csv"
```
it doesn't work. How do I reference files in Azure Blob Storage from Scala? This should be straightforward.
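For reference, here is a sketch of what I understand the fully qualified form to look like. The storage account name `myaccount`, container name `mycontainer`, and the key value are placeholders for illustration; the `fs.azure.account.key.*` setting comes from the `hadoop-azure` connector:

```scala
// Reading a CSV from Azure Blob Storage with Spark (Scala).
// The full URI form is:
//   wasb[s]://<container>@<account>.blob.core.windows.net/<path>
// On HDInsight, wasb:///<path> refers to the cluster's *default* container;
// it only works if the file actually lives there.

// If the cluster is not already configured for the storage account,
// register the account key first (placeholder account name and key):
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.myaccount.blob.core.windows.net",
  "<your-storage-account-key>")

val test = "wasb://mycontainer@myaccount.blob.core.windows.net/tmp/MachineLearningScala/test.csv"

val testInput = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(test) // on Spark 2.x, spark.read.csv replaces format("com.databricks.spark.csv")
  .cache()
```

With this form the container and account are explicit, so it does not depend on which container was set as the cluster default at provisioning time.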