Omer Ozsakarya
06/16/2023, 9:14 AM

Divyansh Jain
06/16/2023, 12:32 PM
I have .SQL files in my repo. Is there any way to execute those SQL files on Databricks via an Azure DevOps CI/CD pipeline? Please help, I'm very new to CI/CD. Thanks!

fukuoka masanobu
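One common approach (a sketch, not an official recipe): keep the .sql files in the repo and have a pipeline step submit each one to a Databricks SQL warehouse through the SQL Statement Execution API. Every name here (the `sql/` folder, the variable names, the secret names) is a placeholder to adapt:

```yaml
# azure-pipelines.yml (illustrative fragment; names are placeholders)
steps:
  - script: |
      for f in sql/*.sql; do
        echo "Running $f"
        curl --fail -s -X POST "$DATABRICKS_HOST/api/2.0/sql/statements" \
          -H "Authorization: Bearer $DATABRICKS_TOKEN" \
          -H "Content-Type: application/json" \
          -d "$(jq -n --arg stmt "$(cat "$f")" --arg wh "$WAREHOUSE_ID" \
                '{statement: $stmt, warehouse_id: $wh}')"
      done
    displayName: Run SQL files on Databricks
    env:
      DATABRICKS_HOST: $(databricksHost)
      DATABRICKS_TOKEN: $(databricksToken)   # store as a secret pipeline variable
      WAREHOUSE_ID: $(warehouseId)
```

Note the API executes statements asynchronously by default, so a real pipeline would poll the returned statement ID for completion; wrapping the SQL in a notebook or job triggered from the pipeline is another common option.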
06/19/2023, 3:53 PM

Abidi Gassen
06/22/2023, 9:50 AM

Thanhtan Le
06/27/2023, 9:06 AM

Martin
06/28/2023, 8:26 PM

Albert Wong
06/28/2023, 10:08 PM

Einat Orr
06/29/2023, 6:28 AM

Martin
06/29/2023, 4:32 PMidentity columns
still not have been donated to OSS; contrary to promises made a year ago:
https://www.databricks.com/wp-content/uploads/2022/06/db-247-blog-img-3.pngโพ
Carly Akerly
07/05/2023, 11:27 PM

shingo
07/06/2023, 5:52 PM

Barny Self
07/06/2023, 7:48 PM

Clemens Schroeer
07/13/2023, 4:55 PM

Carly Akerly
07/19/2023, 3:51 PM

Einat Orr
07/26/2023, 8:06 AM

Abolfazl karimian
08/02/2023, 10:28 AM
Configuration conf = new Configuration();
TableClient myTableClient = DefaultTableClient.create(conf);
Table myTable = Table.forPath("Table/Path");
Snapshot mySnapshot = myTable.getLatestSnapshot(myTableClient);
Scan myScan = mySnapshot.getScanBuilder(myTableClient).build();
Row state = myScan.getScanState(myTableClient);
CloseableIterator<ColumnarBatch> FilesBatchIter = myScan.getScanFiles(myTableClient);
Optional<Expression> exp = Optional.empty();
This is the configuration that seems right (I don't know about Optional<Expression>). Now I want to use them to fetch data from the table:
while (FilesBatchIter.hasNext()) {
    CloseableIterator<Row> files = FilesBatchIter.next().getRows();
    CloseableIterator<DataReadResult> dataResult = Scan.readData(myTableClient, state, files, exp);
    // prints the file names
    while (files.hasNext()) {
        Row file = files.next();
        String column1Value = file.getString(0);
        System.out.println(column1Value);
    }
    // checking if any result is fetched by Scan.readData
    while (dataResult.hasNext()) {
        System.out.println("Data Found!");
    }
}
It prints the file names, but there seems to be nothing in dataResult. Any ideas, or any sample code?

Divyansh Jain
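A plausible cause (an assumption about the Kernel preview API, not a confirmed diagnosis): Scan.readData pulls from the same `files` iterator lazily, so the file-name printing loop drains it before any data is read, leaving `dataResult` empty. The effect can be reproduced with plain lazy iterators; this Python sketch stands in for the Java code above:

```python
def read_data(scan_files):
    # Stands in for Scan.readData: lazily pulls scan-file entries and
    # yields data for each one only when the caller iterates the result.
    for f in scan_files:
        yield f"rows of {f}"

scan_files = iter(["part-000.parquet", "part-001.parquet"])
data_result = read_data(scan_files)   # lazy: nothing consumed yet

# Printing the file names first drains the shared iterator...
for f in scan_files:
    print(f)

# ...so the lazy reader finds nothing left to read.
print(list(data_result))   # []
```

If that is what is happening, iterating only `dataResult` (reading the file names from its rows) or materializing the file list before calling readData would avoid the double consumption.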
08/03/2023, 3:46 AM

Sairam Yeturi
08/17/2023, 9:52 AM

Divyansh Jain
08/20/2023, 7:18 AM

Srinivas Maganti
08/24/2023, 12:28 PM

Martin
08/28/2023, 4:06 PM
df = spark.table("myTable").withColumn("C", col("A") + col("B")).withColumnRenamed("A", "Z")
magicLineageAnalyzer(df)
>> df.Z <-- myTable.A
>> df.B <-- myTable.B
>> df.C <-- myTable.A, myTable.B
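No magicLineageAnalyzer exists out of the box, but the bookkeeping such a column-level lineage analyzer would need can be sketched without Spark. This toy Python class (all names hypothetical) folds withColumn/withColumnRenamed-style operations into a column-to-source mapping; a real implementation would walk the DataFrame's analyzed logical plan instead:

```python
class LineageFrame:
    """Toy column-level lineage tracker (hypothetical; not a Spark API).

    Maps each output column to the set of source columns it derives from.
    """

    def __init__(self, table, lineage):
        self.table = table
        self.lineage = lineage  # dict: column name -> set of "table.col"

    @classmethod
    def from_table(cls, table, columns):
        # Base columns derive from themselves in the source table.
        return cls(table, {c: {f"{table}.{c}"} for c in columns})

    def with_column(self, name, *inputs):
        # A derived column depends on everything its inputs depend on.
        new = {k: set(v) for k, v in self.lineage.items()}
        new[name] = set().union(*(self.lineage[c] for c in inputs))
        return LineageFrame(self.table, new)

    def with_column_renamed(self, old, new_name):
        # Renaming moves the dependency set to the new column name.
        new = {k: set(v) for k, v in self.lineage.items() if k != old}
        new[new_name] = set(self.lineage[old])
        return LineageFrame(self.table, new)


df = LineageFrame.from_table("myTable", ["A", "B"])
df = df.with_column("C", "A", "B").with_column_renamed("A", "Z")
for col_name in sorted(df.lineage):
    print(f"df.{col_name} <-- {', '.join(sorted(df.lineage[col_name]))}")
```

Running the sketch reproduces the mapping above: Z traces to myTable.A, B to myTable.B, and C to both myTable.A and myTable.B.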
rtyler
09/07/2023, 9:24 PM

Srinivas Maganti
09/08/2023, 5:54 AM

S Thelin
09/09/2023, 11:04 AM

Avital Trifsik
09/14/2023, 12:58 PM

Carly Akerly
09/19/2023, 2:58 PM

Steve Russo
09/20/2023, 8:20 PM