The Databricks notebook fails out of the box because the INSERT statement in Cmd 13 is not compatible with the SQL schema.
Either the Databricks notebook or the SQL schema should be fixed. The failing insert is the following:
INSERT INTO jdbcObservationTable
SELECT
  id AS observationid,
  SUBSTRING_INDEX(subject.reference, '/', -1) AS patientid,
  code.coding[0].code AS observationcode,
  code.coding[0].display AS observation,
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
A simple workaround is to insert dummy data into the missing columns:
INSERT INTO jdbcObservationTable
SELECT
  id AS observationid,
  SUBSTRING_INDEX(subject.reference, '/', -1) AS patientid,
  code.coding[0].code AS observationcode,
  '' AS deviceid,
  code.coding[0].display AS observation,
  '',
  '',
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
A better workaround is to fix the SQL schema, or to insert the correct values in the Databricks notebook.
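If the schema of jdbcObservationTable cannot be changed, a cleaner alternative to dummy values is to name the target columns explicitly, so the columns the notebook does not populate fall back to their defaults (or NULL). This is only a sketch: the target column names below are assumed from the SELECT aliases in the failing insert and must be checked against the actual schema.

```sql
-- Sketch only: the column list is assumed from the SELECT aliases;
-- verify the names against the real jdbcObservationTable schema.
INSERT INTO jdbcObservationTable
  (observationid, patientid, observationcode, observation, status, unit, value)
SELECT
  id AS observationid,
  SUBSTRING_INDEX(subject.reference, '/', -1) AS patientid,
  code.coding[0].code AS observationcode,
  code.coding[0].display AS observation,
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
```

Note that an explicit column list in INSERT INTO requires a Spark SQL version that supports it (Spark 3.x); on older runtimes, the dummy-value workaround above is the practical option.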