Exercise - Deploy care management analytics capability

In this exercise, you deploy the care management analytics capability, upload sample data to the Process folder, and run the data transformation pipeline.

Prerequisites

To complete this exercise, you need the Fabric workspace in which you deployed healthcare data solutions with sample data, as described in Exercise - Deploy Healthcare Data Solutions in Microsoft Fabric With Sample Data.

Deploy Care management analytics capability

  1. Navigate to Power BI and sign in.

  2. Select the Fabric workspace FL HC Cloud you used to deploy healthcare data solutions in Fabric.

  3. Locate and select the healthcare data solutions FLHealthcare you deployed.

    Screenshot of the healthcare data solutions FLHealthcare.

  4. In the Solution capabilities and sample data section, select Care management analytics.

    Screenshot of selecting Care management analytics.

  5. Review what is deployed and the prerequisites.

  6. Select Deploy to workspace.

    Screenshot of selecting Deploy to workspace.

  7. Wait for the deployment to complete. When it finishes, a notification appears on the message bar.

  8. Select Manage capability.

    Screenshot of selecting Manage capability.

  9. Here, you can view, configure, and manage the artifacts deployed with the capability.

Upload sample data to Process folder

Note

If you already copied the sample clinical data into the process folder, skip to the next task.

You can copy the sample data to the Process folder by using either a pipeline or a notebook.

Use pipeline to copy the sample data to the Process folder

You can use a pipeline with a Copy data activity to copy the sample data from the SampleData folder to the Process folder in the bronze lakehouse. Set up the pipeline with the following source and destination folders. For detailed steps, see Exercise - Deploy Healthcare Data Solutions in Microsoft Fabric With Sample Data.

Source folder: Root folder | SampleData | Clinical | FHIR-NDJSON | FHIR-HDS | 51KSyntheticPatients

Destination folder: Root folder | Process | Clinical | FHIR-NDJSON | FHIR-HDS

Use notebook to copy the sample data to the Process folder

  1. Select the Fabric workspace FL HC Cloud where you deployed healthcare data solutions.

  2. Locate and select the FLHealthcare_msft_bronze lakehouse.

    Screenshot of selecting the FLHealthcare_msft_bronze lakehouse.

  3. Select Open notebook and select New notebook.

    Screenshot of selecting New notebook.

  4. Paste the following script. This script copies the 51KSyntheticPatients dataset located in the SampleData folder to the Process folder.

    from notebookutils import mssparkutils

    # Copy the sample dataset from the SampleData folder to the Process folder.
    # Replace the placeholders with the ABFS paths you copy in the next steps.
    source_path = '[Source ABFS path]'
    target_path = '[Target ABFS path]'
    mssparkutils.fs.fastcp(source_path, target_path)
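
    # For illustration only: a copied ABFS path typically has the hypothetical shape
    # shown below, with your own workspace and lakehouse GUIDs in place of the placeholders:
    # abfss://<workspace-id>@onelake.dfs.fabric.microsoft.com/<lakehouse-id>/Files/SampleData/Clinical/FHIR-NDJSON/FHIR-HDS/51KSyntheticPatients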
    
  5. Select Lakehouses.

    Screenshot of selecting Lakehouses.

  6. Expand Files | SampleData | Clinical | FHIR-NDJSON | FHIR-HDS, select the … button of the 51KSyntheticPatients node, and then select Copy ABFS path.

    Screenshot of selecting Copy ABFS path of 51KSyntheticPatients.

  7. Replace [Source ABFS path] with the path you copied.

    Screenshot of replacing Source ABFS path with the path you copied.

  8. Expand Process | Clinical | FHIR-NDJSON, select the … button of the FHIR-HDS node, and then select Copy ABFS path.

    Screenshot of selecting Copy ABFS path of the FHIR-HDS node.

  9. Replace [Target ABFS path] with the path you copied.

    Screenshot of replacing Target ABFS path with the path you copied.

  10. Select Run and wait for the run to be completed.

  11. The script should run successfully.

    Screenshot of the script running result.

  12. Refresh the FHIR-HDS node located under Process | Clinical | FHIR-NDJSON.

    Screenshot of refreshing the FHIR-HDS node.

  13. Select the FHIR-HDS node. You can view the copied dataset.

    Screenshot of selecting the copied dataset.

  14. Select the dataset. You should see the copied files.

    Screenshot of the copied files.
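
If you want to verify the copy from code instead of the Lakehouse explorer, you can run one more cell such as the following. This is a minimal sketch; it assumes the FLHealthcare_msft_bronze lakehouse is attached to the notebook as the default lakehouse so that the lakehouse-relative Files path resolves.

    from notebookutils import mssparkutils

    # List the contents of the Process destination folder; the copied
    # 51KSyntheticPatients dataset folder should appear here.
    for item in mssparkutils.fs.ls('Files/Process/Clinical/FHIR-NDJSON/FHIR-HDS'):
        print(item.name, item.size)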

SDoH Report workaround

  1. Select the workspace you created.

  2. Locate and open the msft_bronze lakehouse.

  3. Expand Files | ReferenceData | SDOH and select LocationDatasets.

  4. Select the … button of the ZIPToFIPSMapping.xlsx file and choose Properties.

    Screenshot of selecting the properties button.

  5. Copy the ABFS path and paste it into a notepad. You'll need this path in a later step.

    Screenshot of the ABFS path.

  6. Select the workspace you created.

  7. Locate and open the msft_silver lakehouse.

  8. Select Open notebook and choose New notebook.

  9. Paste the following script. It reads the ZIPToFIPSMapping.xlsx file into a pandas DataFrame, converts the zip and countyFips columns to strings, and writes the result to the ZipToFipsMapping delta table.

    import pandas as pd

    # Read the ZIP-to-FIPS mapping workbook from the bronze lakehouse.
    excel_file_path = "[Excel file path]"
    df_excel = pd.read_excel(excel_file_path)

    # Convert the zip and countyFips columns to strings.
    df_excel['zip'] = df_excel['zip'].astype(str)
    df_excel['countyFips'] = df_excel['countyFips'].astype(str)

    # Convert the pandas DataFrame to a Spark DataFrame.
    df_spark = spark.createDataFrame(df_excel)

    # Write the Spark DataFrame to the delta table.
    delta_table_path = "[Table path]"
    df_spark.write.format("delta").mode("overwrite").save(delta_table_path)

    # Verify by reading the delta table.
    df_read = spark.read.format("delta").load(delta_table_path)
    df_read.show()

  10. Replace [Excel file path] with the ABFS path you copied from the bronze lakehouse.

  11. Expand the lakehouse and then expand Tables.

  12. Locate the ZipToFipsMapping table, select the … button, and choose Copy path.

    Screenshot of selecting the copy path button of the ZipToFipsMapping table.

  13. Replace [Table path] with the table path you copied.

  14. Select Run cell.

    Screenshot of selecting the run cell button.

  15. The DataFrame read from the delta table should be displayed in the output.

    Screenshot of the output.
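
As an optional check, you can add one more cell to the same notebook to confirm that the table was written with string-typed zip and countyFips columns. This minimal sketch reuses the delta_table_path variable from the cell above.

    # Reload the table written above and confirm its schema and row count.
    df_check = spark.read.format("delta").load(delta_table_path)
    df_check.printSchema()            # zip and countyFips should show as string
    print("Row count:", df_check.count())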

Run the clinical data foundation ingestion pipeline

  1. Select the workspace FL HC Cloud where you deployed healthcare data solutions.

  2. Locate and select the FLHealthcare_msft_clinical_data_foundation_ingestion data pipeline.

    Screenshot of selecting the FLHealthcare_msft_clinical_data_foundation_ingestion data pipeline.

  3. Review the pipeline activities.

  4. Select Run and wait for the pipeline run to complete.

    Screenshot of running the pipeline.

    Note

    If the pipeline fails, rerun it; it should succeed on the second attempt.

  5. If the fhir_ndjson_bronze_ingestion notebook fails, open the error message. If you don’t get any errors, continue to the next task.

    Screenshot of the notebook error.

  6. Review the error details. The error directs you to check the BusinessEvents table.

    Screenshot of the notebook error description.

  7. Select the workspace FL HC Cloud.

  8. Locate and open the admin lakehouse.

    Screenshot of the admin lakehouse.

  9. Expand Tables and select the BusinessEvents table.

  10. Check the recently created rows. (An optional notebook query sketch follows these steps.)

    Screenshot of the rows that are created.

  11. In the FL HC Cloud workspace, locate and open the bronze lakehouse.

    Screenshot of the bronze lakehouse.

  12. Expand Files | Failed | Clinical | FHIR-NDJSON | FHIR-HDS and select the 51KSyntheticPatients folder.

  13. If only one file is in the folder, you can proceed. This file is the only one with issues, and it doesn't affect the remaining tasks in this exercise.

    Screenshot of the risk assessment json file.
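
If you prefer to inspect the BusinessEvents table from a notebook rather than the Lakehouse explorer, the following cell is a minimal sketch. It assumes you create a notebook with the admin lakehouse attached as the default lakehouse; the table name comes from the error message, and no specific column names are assumed.

    # Read the BusinessEvents table from the attached admin lakehouse.
    events = spark.read.table("BusinessEvents")
    events.printSchema()                 # check which columns are available
    events.show(20, truncate=False)      # review a sample of the logged events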

Run the data transformation pipeline

  1. Select the workspace FL HC Cloud.

  2. Locate and select the FLHealthcare_msft_cma data pipeline.

    Screenshot of the FLHealthcare_msft_cma data pipeline.

  3. Review the pipeline activities.

  4. Select Run and wait for the pipeline run to complete.
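
Optionally, after the run completes, you can confirm from a notebook that tables were materialized in a lakehouse the pipeline writes to. This generic sketch assumes the lakehouse you want to inspect is attached as the notebook's default lakehouse; it isn't specific to the care management analytics outputs.

    # List the delta tables in the notebook's default lakehouse.
    spark.sql("SHOW TABLES").show(truncate=False)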