\n",
"\n",
"# Exercise 3: Pre-Processing for ICON\n",
"\n",
"This step-by-step exercise will familiarize you with the **necessary input data for the ICON model**. We will prepare initial and boundary data for a limited area model run (ICON-LAM). \n",
"\n",
"In detail, we will cover the following topics:\n",
"\n",
"- grids and external parameters\n",
"- processing raw input data sets into initial and boundary data\n",
"\n",
"We assume that the students are familiar with the contents of the basic exercises (Jupyter, Slurm, namelists). \n",
"\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "eaae410d",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"id": "e8e43632",
"metadata": {},
"source": [
"Let's start with a short examination of the **necessary input data files**. Later on we will generate (parts of) this data with the help of the Climate Data Operators (CDO). Detailed information on the datasets can be found in Chapter 2 of the ICON manual (see the link at the bottom of the page). \n",
"\n",
"The following illustration gives an overview of the different building blocks."
]
},
{
"cell_type": "markdown",
"id": "979e046e",
"metadata": {},
"source": [
"*(Illustration: building blocks of the ICON input data — grids and external parameters, raw initial/boundary data, and pre-processed data.)*"
]
},
{
"cell_type": "markdown",
"id": "ef81d873",
"metadata": {},
"source": [
"First, we link the directory which contains some example files: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a13ae190-f325-41d9-8893-3439021ae720",
"metadata": {},
"outputs": [],
"source": [
"export SCRATCHDIR=/scratch/${USER::1}/$USER"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e1fa1d3",
"metadata": {},
"outputs": [],
"source": [
"cd $SCRATCHDIR\n",
"ln -sf /pool/data/ICON/ICON_training/test/example_data/ ."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89d2743b-4214-469b-9da7-82263eefb922",
"metadata": {},
"outputs": [],
"source": [
"export EXAMPLEDIR=$SCRATCHDIR/example_data"
]
},
{
"cell_type": "markdown",
"id": "c989ef74",
"metadata": {},
"source": [
"The subdirectories therein point to\n",
"- `const`: LAM grid data and external parameters \n",
" *(see the block on the left in the above illustration)*\n",
"- `raw_data`: the initial and boundary data retrieved from DWD's database \n",
" *(see the block at the top of the illustration)*\n",
"- `pre_data`: pre-processed initial and boundary data \n",
" *(see the block in the middle of the figure)*"
]
},
{
"cell_type": "markdown",
"id": "9cc89563",
"metadata": {},
"source": [
"**Exercise:** \n",
"Investigate the file sizes of the raw data input. Which part of the raw data becomes unnecessary if we limit the forecast to, say, 24 hrs?\n",
"\n",
"**Exercise:** \n",
"What would be the respective `pamore` commands to retrieve these data? (Hint: see the ICON Manual. A link to the manual is provided in the References section at the end of this Jupyter notebook.)"
]
},
{
"cell_type": "markdown",
"id": "0df62afd-6c16-4e8d-b849-62829e11002f",
"metadata": {},
"source": [
"\n",
"**Solution**\n",
"\n",
"Initial data: `pamore -d 2021071400 -hstart 0 -hstop 0 -lt a -model iglo -iglo_startdata_0`\n",
"\n",
"Boundary data: `pamore -d 2021071400 -hstart 0 -hstop 48 -hinc 2 -model iglo -hindcast_ilam`\n",
"\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"id": "2069a4ed",
"metadata": {},
"source": [
"## Grids and external parameters\n",
"\n",
"In order to run the ICON model, it is necessary to load the horizontal grid information as an input parameter. This information is stored within so-called **grid files**. Additionally, **external parameter fields** describe properties of the Earth’s surface and atmosphere like the topography and the land-sea mask. Of course, we have already encountered these files in the previous exercises; now let's take another closer look."
]
},
{
"cell_type": "markdown",
"id": "fe279071",
"metadata": {},
"source": [
"### Grid files\n",
"\n",
"Providing the grid files is a one-time process. It only needs to be repeated if the model setup changes. Similarly, for the external parameters there are updates only at longer intervals, for example when updated raw data sets become available.\n",
"\n",
"- For fixed domain sizes and resolutions a [list of grid files](http://icon-downloads.mpimet.mpg.de/) has been pre-built for the ICON model together with the corresponding external parameters. \n",
"- Custom grid files can be generated through an online grid generator tool, see the References section below."
]
},
{
"cell_type": "markdown",
"id": "91fcb265",
"metadata": {},
"source": [
"ICON data files do not completely contain the description of the underlying grid. To answer the question *\"Which grid file is related to my simulation data?\"*, users may compare the **horizontal grid UUID**, a non-human-readable sequence of digits which serves as a fingerprint of the grid."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c50fa22-b104-40e5-9c3f-8718f975278a",
"metadata": {},
"outputs": [],
"source": [
"export GRIDFILENAME=$EXAMPLEDIR/const/iconR3B08_DOM01.nc"
]
},
{
"cell_type": "markdown",
"id": "38488e36",
"metadata": {},
"source": [
"**Exercise:** \n",
"Find out about the meta-data attribute `uuidOfHGrid` in the grid file. To this end, display the NetCDF file header with the `ncdump` utility. The `ncdump` tool generates a text representation of a NetCDF file and is included in the `netcdf-c` module."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23b2b8e5",
"metadata": {},
"outputs": [],
"source": [
"module load netcdf-c\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "d87f930b-b6c6-4af0-9145-f767b0ce02b8",
"metadata": {},
"source": [
"\n",
"**Solution**\n",
"\n",
"```\n",
"module load netcdf-c\n",
"ncdump -h $GRIDFILENAME | grep uuidOfHGrid\n",
"```\n",
"\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"id": "ad98ed47",
"metadata": {},
"source": [
"For ICON grid files the following nomenclature has been established: In general, by **RnBk** we denote a grid that originates from an icosahedron whose edges have been initially divided into **n** parts, followed by **k** subsequent edge bisections."
]
},
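{
"cell_type": "markdown",
"id": "b7e1a2c9",
"metadata": {},
"source": [
"A related quantity is the total number of triangular cells of a *global* **RnBk** grid, which follows from the construction as $20 \\cdot n^2 \\cdot 4^k$ (20 icosahedron faces, $n^2$ triangles after the initial division, 4 children per bisection). As a quick sketch (the helper function name is ours, not part of any ICON tool):\n",
"\n",
"```python\n",
"def n_cells_global(n, k):\n",
"    # 20 faces x n^2 initial triangles x 4^k cells from k bisections\n",
"    return 20 * n**2 * 4**k\n",
"\n",
"print(n_cells_global(3, 7))   # R3B07, DWD's global grid: 2949120 cells\n",
"```"
]
},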
{
"cell_type": "markdown",
"id": "d8dbf5c6",
"metadata": {},
"source": [
"*(Illustration: construction of an RnBk grid — initial division of the icosahedron edges into n parts, followed by k edge bisections.)*"
]
},
{
"cell_type": "markdown",
"id": "6e95b47c",
"metadata": {},
"source": [
"With the information about *n* and *k*, the effective mesh size of a global grid can be estimated as\n",
"\n",
"$$\\overline{\\Delta x} \\approx \\frac{5050}{n \\cdot 2^{k}}\\;\\text{km}.$$\n",
"\n",
"**Exercise:** \n",
"Revisit the `ncdump -h` output for the grid file above. Find out about the root subdivision (`grid_root`) and the grid (bisection) level to estimate the mesh size.\n",
"\n",
"**Exercise:** \n",
"Open an interactive Linux terminal and find out about the meta-data attributes of the external parameter file. Find some proof that this data set matches our LAM grid."
]
},
{
"cell_type": "markdown",
"id": "9258d2fd",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "b9bd5ac7-b8eb-4d3d-876a-9a4506a97962",
"metadata": {},
"source": [
"\n",
"**Solution**\n",
"\n",
"```\n",
"# EXTPARFILENAME: the external parameter file in $EXAMPLEDIR/const\n",
"ncdump -h $EXTPARFILENAME | grep uuid   # the uuidOfHGrid attributes match\n",
"```\n",
"\n",
""
]
},
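{
"cell_type": "markdown",
"id": "c4f8d0e2",
"metadata": {},
"source": [
"The mesh-size estimate from the exercise above can be checked with a few lines of Python (a sketch; the function name is ours). For `grid_root = 3` and bisection level 8, i.e. an `R3B08` grid, the rule of thumb $\\overline{\\Delta x} \\approx 5050/(n \\cdot 2^k)$ km yields roughly 6.6 km:\n",
"\n",
"```python\n",
"def mesh_size_km(n, k):\n",
"    # effective mesh size of a global RnBk grid, ~5050/(n * 2^k) km\n",
"    return 5050.0 / (n * 2**k)\n",
"\n",
"print(round(mesh_size_km(3, 8), 1))   # R3B08 (our LAM grid): 6.6 km\n",
"print(round(mesh_size_km(3, 7), 1))   # R3B07 (DWD's global grid): 13.2 km\n",
"```"
]
},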
{
"cell_type": "markdown",
"id": "a42f34de",
"metadata": {},
"source": [
"Among the various constant data, the field `topography_c` contains the **geometric height of the earth's surface** above sea level. We will visualize the topography with the plot script [scripts/icon_exercise_prepare_lam_plot_hsurf.ipynb](scripts/icon_exercise_prepare_lam_plot_hsurf.ipynb)."
]
},
{
"cell_type": "markdown",
"id": "224c7083",
"metadata": {},
"source": [
"**Exercise:** Run the plot script linked above, then return to this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f37e1ab-ddf3-4f25-bd33-7c5e23d86b1d",
"metadata": {},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "4cea1cd9",
"metadata": {},
"source": [
"This should generate a bitmap file [HSURF.png](HSURF.png); please check this by opening the image via the file browser. It should look like the following plot:"
]
},
{
"cell_type": "markdown",
"id": "61bdfd6a",
"metadata": {},
"source": [
"*(Figure: HSURF.png — geometric height of the earth's surface over the LAM domain.)*"
]
},
{
"cell_type": "markdown",
"id": "460bf813",
"metadata": {},
"source": [
"## Raw input data sets"
]
},
{
"cell_type": "markdown",
"id": "cae1f8e2",
"metadata": {},
"source": [
"Usually, ICON limited area runs are driven by input data originating from DWD's operational NWP process chain, see [here](https://www.dwd.de/EN/ourservices/nwp_forecast_data/nwp_forecast_data.html) for a summary description. \n",
"These GRIB2 formatted raw data sets contain the so-called **initialized analysis** which means that the first guess and analysis fields have already been merged.\n",
"\n",
"Before the data is delivered through DWD's Automatic File Distribution (AFD) service, a **data extraction** of the global forecast data is performed. This step reduces the raw data to a subregion which roughly covers the limited area domain, while retaining the same mesh resolution as DWD’s global driving model."
]
},
{
"cell_type": "markdown",
"id": "e14ede42",
"metadata": {},
"source": [
"**Exercise:** \n",
"Based on the above information on the local grid, we can (roughly) estimate the number of required raw data grid cells for the subregion. Currently, DWD's deterministic global forecast has a mesh size of roughly 13 km.\n",
"Estimate the number of grid cells that would need to be extracted from the DWD dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b471badd-a7eb-4823-a760-70bf1870df71",
"metadata": {},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "b5885dc8",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"\n",
"**Solution**\n",
"\n",
"- calculate the factor between the mesh sizes: 13/6.5 = 2 \n",
" (or we can also use the exact factor 2 when we know that the global DWD grid is an `R03B07` grid).\n",
"- get the `ncdump -h` info on the local cell number: `ncdump -h const/iconR3B08_DOM01.nc | grep cell` (37488)\n",
"- calculate an estimate: `37488 / 2^2` (9372)\n",
"- this is a rough estimate and a lower bound; the cut-out area usually contains significantly more cells\n",
"\n",
"```\n",
"module load netcdf-c\n",
"ncdump -h $EXAMPLEDIR/const/iconR3B08_DOM01.nc | grep -m 1 cell\n",
"\n",
"python3 - << EOF\n",
"mesh_size_global = 13.0\n",
"mesh_size_local = 6.5\n",
"cell = 37488\n",
"\n",
"factor = mesh_size_global/mesh_size_local\n",
"raw_cell = cell/factor**2\n",
"\n",
"print(\"factor =\",factor)\n",
"print(\"raw_cell =\",raw_cell);\n",
"EOF\n",
"```\n",
"\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"id": "b04a44c5",
"metadata": {},
"source": [
"At this point, we shortly introduce the `grib_ls` command, which is a basic command-line tool for displaying GRIB2 data. It is included in the **[ecCodes](https://confluence.ecmwf.int/display/ECC/grib_ls) library and tools**."
]
},
{
"cell_type": "markdown",
"id": "e07065bd",
"metadata": {},
"source": [
"**Exercise:** \n",
"Run the `grib_ls` tool on one of the raw input data sets and determine the number of levels. In ICON, levels are ordered top-down, such that the record with the largest level index corresponds to the level nearest the surface."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf2dcfee-d0d7-4d5d-96aa-ce3832697c48",
"metadata": {},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "02e88581",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"\n",
"**Solution**\n",
"\n",
"The dataset contains 90 levels of the global model, which differs from the ICON-LAM setup in the next hands-on exercise. Therefore, a vertical interpolation step is required as part of the ICON model initialization.\n",
"\n",
"```\n",
"module load eccodes\n",
"export ECCODES_DEFINITION_PATH=/pool/data/ICON/ICON_training/eccodes/definitions.edzw-2.27.0-1:$ECCODES_DEFINITION_PATH\n",
"\n",
"grib_ls -w shortName=T $EXAMPLEDIR/raw_data/init_ML_20210714T000000Z.grb\n",
"```\n",
"\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"id": "e01204b6",
"metadata": {},
"source": [
"The `grib_ls` utility also accepts an option `-P` for displaying additional metadata keys."
]
},
{
"cell_type": "markdown",
"id": "73e7147c",
"metadata": {},
"source": [
"**Exercise:** \n",
"Apply the grib_ls tool once more and find out about the `localCreationDateYear`, `localCreationDateMonth`, and `localCreationDateDay` of the initial data set file in the raw input data set."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9cf78ed3-0564-42bc-abfe-9d5a2298bd85",
"metadata": {},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "f15c4823",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"\n",
"**Solution**\n",
"\n",
"```\n",
"module load eccodes\n",
"export ECCODES_DEFINITION_PATH=/pool/data/ICON/ICON_training/eccodes/definitions.edzw-2.27.0-1:$ECCODES_DEFINITION_PATH\n",
"\n",
"grib_ls -P localCreationDateYear,localCreationDateMonth,localCreationDateDay $EXAMPLEDIR/raw_data/init_ML_20210714T000000Z.grb | head -n 5\n",
"```\n",
"\n",
"This yields:\n",
"\n",
"| key | value |\n",
"| :---------------------- | ----- |\n",
"| localCreationDateYear | 2021 |\n",
"| localCreationDateMonth | 7 |\n",
"| localCreationDateDay | 14 |\n",
"\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"id": "ee1d76a9-b247-466c-883c-3516a9b97f0b",
"metadata": {},
"source": [
"As a final remark on the ICON-LAM data sets, we point out a setting that may turn into a typical pitfall when running ICON-LAM as a locally installed code.\n",
"\n",
"**GRIB definition files** are external text files which constitute a kind of parameter database.\n",
"They describe the decoding rules and the keys which are used to identify the meteorological\n",
"fields, most importantly the field name (`shortName` key). Therefore, the DWD-specific definition files are essential for the read-in process.\n",
"\n",
"The place where the GRIB2 definition files can be found is specified through the `ECCODES_DEFINITION_PATH` environment variable which is preset within our batch scripts."
]
},
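{
"cell_type": "markdown",
"id": "d9a3b5f1",
"metadata": {},
"source": [
"Like `PATH`, `ECCODES_DEFINITION_PATH` is a colon-separated list that is searched left to right, so prepending a directory lets DWD's definitions take precedence over the ecCodes defaults. A minimal sketch, using the definitions path from this training:\n",
"\n",
"```shell\n",
"# prepend the DWD definitions; entries further left take precedence\n",
"export ECCODES_DEFINITION_PATH=/pool/data/ICON/ICON_training/eccodes/definitions.edzw-2.27.0-1:$ECCODES_DEFINITION_PATH\n",
"echo $ECCODES_DEFINITION_PATH\n",
"```"
]
},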
{
"cell_type": "markdown",
"id": "a888fc64-f1e9-4703-afc5-e02422831c17",
"metadata": {},
"source": [
"**Exercise:** \n",
"Print out the contents of any of the raw initial data sets with the `grib_ls` command. \n",
"What happens to the `grib_ls` output when you replace the setting of the `ECCODES_DEFINITION_PATH` environment variable by\n",
"\n",
"1. an empty string, or\n",
"2. the setting that has been used by the Slurm batch jobs so far?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c398e2e1-aa35-4950-a29f-a0656b61680b",
"metadata": {},
"outputs": [],
"source": [
""
]
},
{
"cell_type": "markdown",
"id": "5328408a-6d89-4550-8cc6-ec645896357d",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"\n",
"**Solution**\n",
"\n",
"The setting of `ECCODES_DEFINITION_PATH` used in the Slurm run scripts so far was: \n",
"```\n",
"module load eccodes\n",
"export ECCODES_DEFINITION_PATH=/pool/data/ICON/ICON_training/eccodes/definitions.edzw-2.27.0-1:$ECCODES_DEFINITION_PATH\n",
"echo $ECCODES_DEFINITION_PATH\n",
"```\n",
"\n",
"Overloading this setting with an empty string makes the DWD's settings unknown to the GRIB2 reader. The command\n",
"```\n",
"ECCODES_DEFINITION_PATH=\"\" grib_ls $EXAMPLEDIR/raw_data/forcing_ML_20210714T000000Z.grb\n",
"```\n",
"yields, for example, `snmr` as the short name for the field `QS` (snow mixing ratio).\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "ebd6d96a-3911-4ba7-a3ab-b7ac58abd33c",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"id": "4c57f5aa",
"metadata": {},
"source": [
"## Pre-processing of the data sets"
]
},
{
"cell_type": "markdown",
"id": "8599ca39-956a-4fa8-9e35-a46a816edbfa",
"metadata": {},
"source": [
"Both an **initial state and lateral boundary conditions** have to be provided when running ICON in limited area mode (LAM). The latter are time dependent and are updated periodically by reading input files. \n",
"High-resolution limited area forecasts usually run ICON at horizontal resolutions which differ from those of\n",
"the initial data. Therefore, the analysis data (raw data) has to be **interpolated onto the local target grid**.\n",
"\n",
"In this practical exercise we will start from an initialized analysis provided by DWD's operational deterministic forecast suite.\n",
"These data sets were retrieved from DWD's data base with the `pamore` commands from the beginning of this exercise.\n",
"\n",
"At the end of this practical exercise, all necessary forcing data for a limited area run will be located in the following directory (initial and boundary data):\n",
"\n",
"```bash\n",
"DATADIR_LAM=$SCRATCHDIR/data_lam\n",
"```"
]
},
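{
"cell_type": "markdown",
"id": "e2c7a4b8",
"metadata": {},
"source": [
"The target directory does not exist yet on a fresh scratch space; it can be created up front (a sketch, with `SCRATCHDIR` as defined at the beginning of this notebook):\n",
"\n",
"```shell\n",
"export DATADIR_LAM=$SCRATCHDIR/data_lam\n",
"mkdir -p $DATADIR_LAM   # -p: no error if the directory already exists\n",
"```"
]
},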
{
"cell_type": "markdown",
"id": "473fa497",
"metadata": {},
"source": [
"### Initial data"
]
},
{
"cell_type": "markdown",
"id": "17411f53",
"metadata": {},
"source": [
"The pre-processing tools perform a *horizontal* remapping only. There is no need for *vertical* interpolation as a separate pre-processing step. The ICON model itself will take care of the interpolation onto the model levels, provided that the user has supplied the height level field `HHL`.\n",
"\n",
"The following table lists the content of the initialized analysis product that is pre-processed for ICON-LAM (see also Section 11.4 in the ICON Manual):"
]
},
{
"cell_type": "markdown",
"id": "f9b9438f",
"metadata": {},
"source": [
"| short name | description |\n",
"| :----- | :----- |\n",
"| ALB_SEAICE | sea ice albedo | \n",
"| C_T_LK | shape factor w.r.t. temp. profile in the thermocline (lakes) |\n",
"| EVAP_PL | evaporation of plants |\n",
"| FR_ICE | sea/lake ice fraction |\n",
"| FRESHSNW | age of snow indicator |\n",
"| H_ICE | sea ice depth |\n",
"| H_ML_LK | mixed-layer thickness (lakes) |\n",
"| H_SNOW | snow depth |\n",
"| HSNOW_MAX | maximum snow depth reached within current snow-cover period |\n",
"| HHL | vertical coordinate half level heights |\n",
"| P | pressure | \n",
"| QC QI QR QS QV | mass fractions (cloud liquid water, ...) | \n",
"| QV_S | surface specific humidity | \n",
"| RHO_SNOW | snow density |\n",
"| SMI | soil moisture index |\n",
"| SNOAG | duration of current snow-cover period |\n",
"| T | air temperature |\n",
"| T_BOT_LK | temperature at water-bottom sediment interface (lakes) |\n",
"| T_G | surface temperature |\n",
"| T_ICE | sea ice temperature |\n",
"| T_MNW_LK | mean temperature of the water column (lakes) |\n",
"| T_SNOW | snow temperature |\n",
"| T_SO | soil temperature |\n",
"| T_WML_LK | mixed-layer temperature (lakes) |\n",
"| TKE | turbulent kinetic energy |\n",
"| U, V | horizontal velocity components |\n",
"| W | vertical velocity |\n",
"| W_I | water content of interception layer |\n",
"| W_SNOW | snow water equivalent |\n",
"| W_SO_ICE | soil ice content |\n",
"| Z0 | surface roughness length |\n"
]
},
{
"cell_type": "markdown",
"id": "fda784ba-11ca-46a8-97e3-1be5c85be433",
"metadata": {},
"source": [
"**Exercise (Remapping initial data):** \n",
"Open the notebook [scripts/remap_inidata.ipynb](scripts/remap_inidata.ipynb) and perform the necessary steps to interpolate the initial data set onto the LAM target grid. \n",
"Afterwards, return to this Jupyter notebook."
]
},
{
"cell_type": "markdown",
"id": "ca8fb4b0",
"metadata": {},
"source": [
"### Boundary data"
]
},
{
"cell_type": "markdown",
"id": "a383fab6",
"metadata": {},
"source": [
"The data files which are intended to be used as lateral boundary conditions for the model contain the following set of variables (the so-called COSMO set of variables), see the ICON manual for details. A link to the manual is provided in the References section at the end of this Jupyter notebook:\n",
"\n",
"| name | field |\n",
"| :--- | :---- |\n",
"| U | eastward component of wind |\n",
"| V | northward component of wind |\n",
"| W | vertical wind speed |\n",
"| T | temperature |\n",
"| P | pressure |\n",
"| QV, QC, QI, QR, QS | mixing ratios |\n",
"| HHL | geometric height of the layer limits |"
]
},
{
"cell_type": "markdown",
"id": "1d9a31c4",
"metadata": {},
"source": [
"The following remarks are related to the boundary data:\n",
"- Naturally, the time frequency with which the boundary data are updated has a significant impact on the results.\n",
"- The constant height-level information `HHL` needs to be contained only in the raw data file whose validity date matches the envisaged model start date.\n",
"- For efficient I/O during the model run, the ICON model can use an auxiliary grid, the so-called **boundary grid**. This ring-shaped grid contains only the data points on which the lateral boundary data are defined.\n",
"  _This requires `iconsub` from the `icontools` package, which is not covered in this exercise._"
]
},
{
"cell_type": "markdown",
"id": "d10ecf16-b7c2-4435-8194-2d6e9647e9f5",
"metadata": {},
"source": [
"**Exercise (Remapping boundary data):** \n",
"Open the notebook [scripts/remap_lbcdata.ipynb](scripts/remap_lbcdata.ipynb) and perform the necessary steps to interpolate the lateral boundary forcing data set onto the LAM target grid. \n",
"Afterwards, return to this Jupyter notebook."
]
},
{
"cell_type": "markdown",
"id": "124c8ce8-6308-4698-9b22-237ad4b7904a",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"id": "9c446a4a",
"metadata": {},
"source": [
"## Grid & External Parameter Generation with the Zonda Web Interface"
]
},
{
"cell_type": "markdown",
"id": "939648a3",
"metadata": {},
"source": [
"Zonda is a web interface designed to facilitate the generation of ICON grid files and External Parameter data (ExtPar) on ICON triangular grids for research and on-demand simulations.\n",
"Zonda makes use of containerized versions of the ICON grid generator and ExtPar. It constructs the Fortran and Python namelist setups and runs the containers on a public server.\n",
"\n",
"The invocation of Zonda is realized as a two-step process:\n",
"\n",
"- In the frontend (see the Zonda website for details), the user specifies the domain(s) including the appropriate settings for the external parameter generation.\n",
" Example configurations and an expert mode with additional choices are available to assist the user.\n",
" The user gets a `JSON` code snippet containing the chosen configuration which is required for the second step.\n",
"- In the backend (see Zonda Request for details), the JSON code has to be pasted into a GitHub issue.\n",
"  Then, the GitHub CI triggers the generation of the ICON grid and ExtPar data.\n",
" These files in `NetCDF 4` format are then provided as `.zip` file.\n",
"\n",
"The Zonda web interface comes with Documentation.\n",
"Furthermore, the available options in the frontend contain tooltips with short descriptions and\n",
"links to the documentation of the respective option.\n",
" "
]
},
{
"cell_type": "markdown",
"id": "55ab88a8-1a48-4b83-a995-b46066877388",
"metadata": {
"tags": []
},
"source": [
"**Congratulations!** You have successfully completed Exercise 3.