diff --git a/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json b/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json index ec35716..67e2ad7 100644 --- a/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json +++ b/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json @@ -1,8 +1,8 @@ { - "hash": "30320b3257f9d83b2053cf69ced67973", + "hash": "d3685cb37b776df00c767e09e6854ca0", "result": { "engine": "knitr", - "markdown": "---\ntitle: \"Mapping Land Cover with R\"\nauthor: \"al\"\ndate: \"2024-07-01\"\ndate-modified: \"2024-07-01\"\ncategories: [land cover, R, planetary computer, satellite]\nimage: \"image.jpg\"\nparams:\n repo_owner: \"NewGraphEnvironment\"\n repo_name: \"new_graphiti\"\n post_dir_name: \"2024-06-30-land-cover\"\n update_pkgs: FALSE\n update_gis: FALSE\nexecute:\n warning: false\nformat: \n html:\n code-fold: true\n---\n\n\nWe want to quantifying and visualize remotely sense land cover data.... Here is a first start. We will use the European\nSpace Agency's WorldCover product which provides global land cover maps for the years 2020 and 2021 at 10 meter\nresolution based on the combination of Sentinel-1 radar data and Sentinel-2 imagery. We will use the 2021 dataset\nfor mapping an area of the Skeena watershed near Houston, British Columbia. \n\n
\n\n\nThis post was inspired - with much of the code copied - from a repository on GitHub from the wonderfully talented\n[Milos Popovic](https://github.com/milos-agathon/esa-land-cover). \n\n
\n\nFirst thing we will do is load our packages. If you do not have the packages installed yet you can change the `update_pkgs` param in\nthe `yml` of this file to `TRUE`. Using `pak` is great because it allows you to update your packages when you want to.\n\n\n::: {.cell}\n\n```{.r .cell-code}\npkgs_cran <- c(\n \"usethis\",\n \"rstac\",\n \"here\",\n \"fs\",\n \"terra\",\n \"tidyverse\",\n \"rayshader\",\n \"sf\",\n \"classInt\",\n \"rgl\",\n \"tidyterra\",\n \"tabulapdf\",\n \"bcdata\",\n \"ggplot\",\n \"ggdark\")\n\npkgs_gh <- c(\n \"poissonconsulting/fwapgr\",\n \"NewGraphEnvironment/rfp\"\n )\n\npkgs <- c(pkgs_cran, pkgs_gh)\n\nif(params$update_pkgs){\n # install the pkgs\n lapply(pkgs,\n pak::pkg_install,\n ask = FALSE)\n}\n\n# load the pkgs\npkgs_ld <- c(pkgs_cran,\n basename(pkgs_gh))\ninvisible(\n lapply(pkgs_ld,\n require,\n character.only = TRUE)\n)\n\nsource(here::here(\"scripts/functions.R\"))\n```\n:::\n\n\n# Define our Area of Interest\nHere we diverge a bit from Milos version as we are going to load a custom area of interest. We will be connecting to\nour remote database using Poisson Consulting's `fwapgr::fwa_watershed_at_measure` function which leverages the in database\n[`FWA_WatershedAtMeasure`](https://smnorris.github.io/fwapg/04_functions.html#fwa-watershedatmeasure) function from \n[Simon Norris'](https://github.com/smnorris) wonderful [`fwapg`](https://github.com/smnorris/fwapg)\npackage. \n\n
\n\nWe use a `blue line key` and a `downstream route measure` to define our area of interest which is the Neexdzii Kwa \n(a.k.a Upper Bulkley River) near Houston, British Columbia.\n\n
\n\nAs per the [Freshwater Atlas of BC](https://catalogue.data.gov.bc.ca/dataset/freshwater-atlas-stream-network/resource/5459c8c7-f95c-42fa-a439-24439c11929d) - the `blue line key`:\n\n> Uniquely identifies a single flow line such that a main channel and a secondary channel with the same watershed code would have different blue line keys (the Fraser River and all side channels have different blue line keys).\n\n
\n\nA `downstream route measure` is:\n\n>\tThe distance, in meters, along the route from the mouth of the route to the feature. This distance is measured from the mouth of the containing route to the downstream end of the feature.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# lets build a custom watersehed just for upstream of the confluence of Neexdzii Kwa and Wetzin Kwa\n# blueline key\nblk <- 360873822\n# downstream route measure\ndrm <- 166030.4\n\naoi <- fwapgr::fwa_watershed_at_measure(blue_line_key = blk, \n downstream_route_measure = drm) |> \n sf::st_transform(4326)\n\n#get the bounding box of our aoi\naoi_bb <- sf::st_bbox(aoi)\n```\n:::\n\n\n# Retrieve the Land Cover Data\nFor this example we will retrieve our precipitation data from Microsofts Planetary Computer. We will use `usethis::use_git_ignore` to add the data to our `.gitignore` file so that we do not commit that insano enormous tiff files to our git repository.\n\n usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n \n\n::: {.cell}\n\n```{.r .cell-code}\n# let's create our data directory\ndir_data <- here::here('posts', params$post_dir_name, \"data\")\n\nfs::dir_create(dir_data)\n\n# usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query <- rstac::stac(\"https://planetarycomputer.microsoft.com/api/stac/v1\")\n\nms_collections <- ms_query |>\n rstac::collections() |>\n rstac::get_request()\n```\n:::\n\n\nNow - we want to understand these datasets a bit better so let's make a little function to view the options for datasets \nwe can download.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Function to extract required fields\n# Function to extract required fields\nextract_fields <- function(x) {\n tibble(\n id = x$id,\n title = x$title,\n time_start = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][1],\n time_end = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][2],\n description = x$description\n )\n}\n\n# Apply the function to each element in ms_collections$collections and combine into a dataframe\ndf <- purrr::map_dfr(ms_collections$collections, extract_fields)\n\nmy_dt_table(df, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\n\n
\n\nHere is the description of our dataset:\n\n\n The European Space Agency (ESA) [WorldCover](https://esa-worldcover.org/en) product provides global land cover maps\n for the years 2020 and 2021 at 10 meter resolution based on the combination of\n [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) radar data and\n [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) imagery. The discrete classification maps\n provide 11 classes defined using the Land Cover Classification System (LCCS) developed by the United Nations (UN)\n Food and Agriculture Organization (FAO). The map images are stored in [cloud-optimized\n GeoTIFF](https://www.cogeo.org/) format. The WorldCover product is developed by a consortium of European service\n providers and research organizations. [VITO](https://remotesensing.vito.be/) (Belgium) is the prime contractor of\n the WorldCover consortium together with [Brockmann Consult](https://www.brockmann-consult.de/) (Germany), \n [CSSI](https://www.c-s.fr/) (France), [Gamma Remote Sensing AG](https://www.gamma-rs.ch/) (Switzerland), [International\n Institute for Applied Systems Analysis](https://www.iiasa.ac.at/) (Austria), and [Wageningen\n University](https://www.wur.nl/nl/Wageningen-University.htm) (The Netherlands). Two versions of the WorldCover\n product are available: - WorldCover 2020 produced using v100 of the algorithm - [WorldCover 2020 v100 User\n Manual](https://esa-worldcover.s3.eu-central-1.amazonaws.com/v100/2020/docs/WorldCover_PUM_V1.0.pdf) - [WorldCover\n 2020 v100 Validation\n Report]() - WorldCover\n 2021 produced using v200 of the algorithm - [WorldCover 2021 v200 User\n Manual]() -\n [WorldCover 2021 v200 Validaton\n Report]() Since the\n WorldCover maps for 2020 and 2021 were generated with different algorithm versions (v100 and v200, respectively),\n changes between the maps include both changes in real land cover and changes due to the used algorithms.\n\n\nHere we build the query for what we want. We are specifying `collect_this <- \"esa-worldcover\"`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncollect_this <- \"esa-worldcover\"\n\nms_esa_query <- rstac::stac_search(\n q = ms_query,\n collections = collect_this,\n datetime = \"2021-01-01T00:00:00Z/2021-12-31T23:59:59Z\",\n bbox = aoi_bb,\n limit = 100\n ) |>\n rstac::get_request()\n\nms_esa_query\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n###Items\n- features (2 item(s)):\n - ESA_WorldCover_10m_2021_v200_N54W129\n - ESA_WorldCover_10m_2021_v200_N54W126\n- assets: input_quality, map, rendered_preview, tilejson\n- item's fields: \nassets, bbox, collection, geometry, id, links, properties, stac_extensions, stac_version, type\n```\n\n\n:::\n:::\n\nNext we need to sign in to the planetary computer with `rstac::sign_planetary_computer()`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query_signin <- rstac::items_sign(\n ms_esa_query,\n rstac::sign_planetary_computer()\n )\n```\n:::\n\n\n
\n\nTo actually download the data we are going to put a chunk option that allows us to just execute the code once and \nupdate it with the `update_gis` param in our `yml`. \n\n\n::: {.cell}\n\n```{.r .cell-code}\nrstac::assets_download(\n items = ms_query_signin,\n asset_names = \"map\",\n output_dir = here::here('posts', params$post_dir_name, \"data\"),\n overwrite = TRUE\n)\n```\n:::\n\n\n
\n\nNice. So now let's read in these data, clip them to our area of interest with `terra::crop` then combine them into one tiff using\n`terra::mosaic`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndir_out <- here::here('posts', params$post_dir_name, \"data/esa-worldcover/v200/2021/map\")\n\nrast_files <- list.files(\n dir_out,\n full.names = TRUE\n)\n\nland_cover_raster_raw <- rast_files |>\n purrr::map(terra::rast) \n\n# Clip the rasters to the AOI\nland_cover_raster_clipped <- purrr::map(\n land_cover_raster_raw,\n ~ terra::crop(.x, aoi, snap = \"in\", mask = TRUE)\n)\n\n# combine the rasters\nland_cover_raster <- do.call(terra::mosaic, land_cover_raster_clipped)\n\n# Optionally, plot the merged raster\n# terra::plot(land_cover_raster)\n```\n:::\n\n\n# Digital Elevation Model\n\nLet's grab the digital elevation model using `elevatr::get_elev_raster` so we can downsample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndem <- elevatr::get_elev_raster(\n locations = aoi,\n z = 11,\n clip = \"bbox\"\n) |>\n terra::rast()\n```\n:::\n\n\n# Resample\nHere we resample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_raster_resampled <- terra::resample(\n land_cover_raster,\n dem,\n method = \"near\"\n)\n\n# terra::plot(land_cover_raster_resampled)\n```\n:::\n\n\n# Plot the Land Cover and DEM\n\n## Get Additional Data\nWe could use some data for context such as major streams and the railway. We get the streams and railway from \ndata distribution bc api using the `bcdata` package. Our `rfp` package calls just allow some extra sanity checks on the \n`bcdata::bcdc_query_geodata` function. It's not really necessary but can be helpful when errors occur (ex. the name of \nthe column to filter on is input incorrectly). \n\n
\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# grab all the railways\nl_rail <- rfp::rfp_bcd_get_data(\n bcdata_record_id = stringr::str_to_upper(\"whse_basemapping.gba_railway_tracks_sp\")\n) |> \n sf::st_transform(4326) |> \n janitor::clean_names() \n\n# streams in the bulkley and then filter to just keep the big ones\nl_streams <- rfp::rfp_bcd_get_data(\n bcdata_record_id = stringr::str_to_upper(\"whse_basemapping.fwa_stream_networks_sp\"),\n col_filter = stringr::str_to_upper(\"watershed_group_code\"),\n col_filter_value = \"BULK\",\n # grab a smaller object by including less columns\n col_extract = stringr::str_to_upper(c(\"linear_feature_id\", \"stream_order\"))\n) |> \n sf::st_transform(4326) |> \n janitor::clean_names() |> \n dplyr::filter(stream_order > 4)\n```\n:::\n\n\nNow we trim up those layers. We have some functions to validate and repair geometries and then we clip them to our area of interest.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlayers_to_trim <- tibble::lst(l_rail, l_streams)\n\n# Function to validate and repair geometries\nvalidate_geometries <- function(layer) {\n layer <- sf::st_make_valid(layer)\n layer <- layer[sf::st_is_valid(layer), ]\n return(layer)\n}\n\n# Apply validation to the AOI and layers\naoi <- validate_geometries(aoi)\nlayers_to_trim <- purrr::map(layers_to_trim, validate_geometries)\n\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n layers_to_trim,\n ~ sf::st_intersection(.x, aoi)\n) \n```\n:::\n\n\n\n## Get Legend Values\n\nSo we need to map the values in the raster to the actual land cover classes. We can do this by extracting the cross\nreference table from the pdf provided in the metatdata of the data. We will use the `tabulapdf` package to extract the\ntable and do some work to collapse it into a cross-referenceing tool we can use for land cover classifications and \nsubsequent color schemes.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# extract the cross reference table from the pdf\npdf_file <- \"https://esa-worldcover.s3.eu-central-1.amazonaws.com/v200/2021/docs/WorldCover_PUM_V2.0.pdf\"\npage <- 15\n\n# table_map <- tabulapdf::locate_areas(pdf_file, pages = page)\n# table_coords <- list(as.numeric(unlist(table_map[[1]])))\n\ntable_coords <- list(c(94.55745, 74.66493, 755.06007, 550.41094))\n\n\nxref_raw <- tabulapdf::extract_tables(\n pdf_file,\n pages = page,\n method = \"lattice\",\n area = table_coords,\n guess = FALSE\n)\n\n# ##this is how we make a clean dataframe\nxref_raw2 <- xref_raw |> \n purrr::pluck(1) |>\n tibble::as_tibble() |>\n janitor::row_to_names(1) |>\n janitor::clean_names()\n\nxref_raw3 <- xref_raw2 %>%\n fill(code, .direction = \"down\")\n\n# Custom function to concatenate rows within each group\ncollapse_rows <- function(df) {\n df %>%\n summarise(across(everything(), ~ paste(na.omit(.), collapse = \" \")))\n}\n\n# Group by code and apply the custom function\nxref <- xref_raw3 %>%\n group_by(code) %>%\n group_modify(~ collapse_rows(.x)) %>%\n ungroup() |> \n dplyr::arrange(code) |> \n purrr::set_names(c(\"code\", \"land_cover_class\", \"lccs_code\", \"definition\", \"color_code\")) |> \n tidyr::separate(color_code, into = c(\"r\", \"g\", \"b\"), sep = \",\", convert = TRUE, remove = FALSE) %>%\n dplyr::mutate(color = rgb(r, g, b, maxColorValue = 255))\n\nmy_dt_table(xref, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\nSo - looks like when we combined our tiffs together with `terra::mosaic` we lost the color table associated with the SpatRaster object.\nWe can recover that table with `terra::coltab(land_cover_raster_raw[[1]])`\n\n## Plot \nOk. Let's plot it up. We will use `ggplot2` and `tidyterra` to plot the land cover raster and then add the streams and railway on top of that.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncolor_table <- terra::coltab(land_cover_raster_raw[[1]])[[1]]\n\ncoltab(land_cover_raster_resampled) <- color_table\n\n\nmap <- ggplot() +\n  tidyterra::geom_spatraster(\n    data = as.factor(land_cover_raster_resampled),\n    use_coltab = TRUE,\n    maxcell = Inf\n  ) +\n  tidyterra::scale_fill_coltab(\n    data = as.factor(land_cover_raster_resampled),\n    name = \"ESA Land Cover\",\n    labels = xref$land_cover_class\n  ) +\n  # geom_sf(\n  #   data = aoi,\n  #   fill = \"transparent\",\n  #   color = \"white\",\n  #   linewidth = .5\n  # ) +\n  geom_sf(\n    data = layers_trimmed$l_streams,\n    color = \"blue\",\n    size = 1\n  ) +\n  geom_sf(\n    data = layers_trimmed$l_rail,\n    color = \"yellow\",\n    size = 1\n  ) +\n  ggdark::dark_theme_void()\n\n\n# save the plot\n# ggsave(here::here('posts', params$post_dir_name, \"image.jpg\"), width = 10, height = 10)\n\nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-1.png){width=672}\n:::\n:::\n",
+    "markdown": "---\ntitle: \"Mapping Land Cover with R\"\nauthor: \"al\"\ndate: \"2024-07-01\"\ndate-modified: \"2024-07-01\"\ncategories: [land cover, R, planetary computer, remote sensing]\nimage: \"image.jpg\"\nparams:\n  repo_owner: \"NewGraphEnvironment\"\n  repo_name: \"new_graphiti\"\n  post_dir_name: \"2024-06-30-land-cover\"\n  update_pkgs: FALSE\n  update_gis: FALSE\nexecute:\n  warning: false\nformat: \n  html:\n    code-fold: true\n---\n\n\nVisualize and quantify remotely sensed land cover data. Here is a first start. We will use the European\nSpace Agency's WorldCover product which provides global land cover maps for the years 2020 and 2021 at 10 meter\nresolution based on the combination of Sentinel-1 radar data and Sentinel-2 imagery. We will use the 2021 dataset\nfor mapping an area of the Skeena watershed near Houston, British Columbia. \n\n
\n\n\nThis post was inspired by - with much of the code copied from - a GitHub repository from the wonderfully talented\n[Milos Popovic](https://github.com/milos-agathon/esa-land-cover). \n\n
\n\nFirst thing we will do is load our packages. If you do not have the packages installed yet you can change the `update_pkgs` param in\nthe `yml` of this file to `TRUE`. Using `pak` is great because it allows you to update your packages when you want to.\n\n\n::: {.cell}\n\n```{.r .cell-code}\npkgs_cran <- c(\n  \"usethis\",\n  \"rstac\",\n  \"here\",\n  \"fs\",\n  \"terra\",\n  \"tidyverse\",\n  \"rayshader\",\n  \"sf\",\n  \"classInt\",\n  \"rgl\",\n  \"tidyterra\",\n  \"tabulapdf\",\n  \"bcdata\",\n  \"ggplot2\",\n  \"ggdark\")\n\npkgs_gh <- c(\n  \"poissonconsulting/fwapgr\",\n  \"NewGraphEnvironment/rfp\"\n  )\n\npkgs <- c(pkgs_cran, pkgs_gh)\n\nif(params$update_pkgs){\n  # install the pkgs\n  lapply(pkgs,\n         pak::pkg_install,\n         ask = FALSE)\n}\n\n# load the pkgs\npkgs_ld <- c(pkgs_cran,\n             basename(pkgs_gh))\ninvisible(\n  lapply(pkgs_ld,\n       require,\n       character.only = TRUE)\n)\n\nsource(here::here(\"scripts/functions.R\"))\n```\n:::\n\n\n# Define our Area of Interest\nHere we diverge a bit from Milos' version as we are going to load a custom area of interest. We will be connecting to\nour remote database using Poisson Consulting's `fwapgr::fwa_watershed_at_measure` function which leverages the in-database\n[`FWA_WatershedAtMeasure`](https://smnorris.github.io/fwapg/04_functions.html#fwa-watershedatmeasure) function from \n[Simon Norris'](https://github.com/smnorris) wonderful [`fwapg`](https://github.com/smnorris/fwapg)\npackage. \n\n
\n\nWe use a `blue line key` and a `downstream route measure` to define our area of interest, which is the Neexdzii Kwa \n(a.k.a. the Upper Bulkley River) near Houston, British Columbia.\n\n
\n\nAs per the [Freshwater Atlas of BC](https://catalogue.data.gov.bc.ca/dataset/freshwater-atlas-stream-network/resource/5459c8c7-f95c-42fa-a439-24439c11929d) - the `blue line key`:\n\n> Uniquely identifies a single flow line such that a main channel and a secondary channel with the same watershed code would have different blue line keys (the Fraser River and all side channels have different blue line keys).\n\n
\n\nA `downstream route measure` is:\n\n>\tThe distance, in meters, along the route from the mouth of the route to the feature. This distance is measured from the mouth of the containing route to the downstream end of the feature.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# let's build a custom watershed for the area upstream of the confluence of Neexdzii Kwa and Wetzin Kwa\n# blueline key\nblk <- 360873822\n# downstream route measure\ndrm <- 166030.4\n\naoi <- fwapgr::fwa_watershed_at_measure(blue_line_key = blk, \n                                        downstream_route_measure = drm) |> \n  sf::st_transform(4326)\n\n#get the bounding box of our aoi\naoi_bb <- sf::st_bbox(aoi)\n```\n:::\n\n\n# Retrieve the Land Cover Data\nFor this example we will retrieve our land cover data from Microsoft's Planetary Computer. We will use `usethis::use_git_ignore` to add the data to our `.gitignore` file so that we do not commit those insano enormous tiff files to our git repository.\n\n    usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n    \n\n::: {.cell}\n\n```{.r .cell-code}\n# let's create our data directory\ndir_data <- here::here('posts', params$post_dir_name, \"data\")\n\nfs::dir_create(dir_data)\n\n# usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query <- rstac::stac(\"https://planetarycomputer.microsoft.com/api/stac/v1\")\n\nms_collections <- ms_query |>\n  rstac::collections() |>\n  rstac::get_request()\n```\n:::\n\n\nNow - we want to understand these datasets a bit better so let's make a little function to view the options for datasets \nwe can download.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Function to extract required fields\nextract_fields <- function(x) {\n  tibble(\n    id = x$id,\n    title = x$title,\n    time_start = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][1],\n    time_end = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][2],\n    description = x$description\n  )\n}\n\n# Apply the function to each element in ms_collections$collections and combine into a dataframe\ndf <- purrr::map_dfr(ms_collections$collections, extract_fields)\n\nmy_dt_table(df, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\n\n
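If we already know roughly what we are after we can also filter that collections table directly instead of scrolling the widget - a minimal sketch, assuming the `df` object built above:\n\n::: {.cell}\n\n```{.r .cell-code}\n# subset the STAC collections to those that mention land cover\ndf |>\n  dplyr::filter(stringr::str_detect(id, \"worldcover\"))\n```\n:::\n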
\n\nHere is the description of our dataset:\n\n\n The European Space Agency (ESA) [WorldCover](https://esa-worldcover.org/en) product provides global land cover maps\n for the years 2020 and 2021 at 10 meter resolution based on the combination of\n [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) radar data and\n [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) imagery. The discrete classification maps\n provide 11 classes defined using the Land Cover Classification System (LCCS) developed by the United Nations (UN)\n Food and Agriculture Organization (FAO). The map images are stored in [cloud-optimized\n GeoTIFF](https://www.cogeo.org/) format. The WorldCover product is developed by a consortium of European service\n providers and research organizations. [VITO](https://remotesensing.vito.be/) (Belgium) is the prime contractor of\n the WorldCover consortium together with [Brockmann Consult](https://www.brockmann-consult.de/) (Germany), \n [CSSI](https://www.c-s.fr/) (France), [Gamma Remote Sensing AG](https://www.gamma-rs.ch/) (Switzerland), [International\n Institute for Applied Systems Analysis](https://www.iiasa.ac.at/) (Austria), and [Wageningen\n University](https://www.wur.nl/nl/Wageningen-University.htm) (The Netherlands). Two versions of the WorldCover\n product are available: - WorldCover 2020 produced using v100 of the algorithm - [WorldCover 2020 v100 User\n Manual](https://esa-worldcover.s3.eu-central-1.amazonaws.com/v100/2020/docs/WorldCover_PUM_V1.0.pdf) - [WorldCover\n 2020 v100 Validation\n Report]() - WorldCover\n 2021 produced using v200 of the algorithm - [WorldCover 2021 v200 User\n Manual]() -\n [WorldCover 2021 v200 Validaton\n Report]() Since the\n WorldCover maps for 2020 and 2021 were generated with different algorithm versions (v100 and v200, respectively),\n changes between the maps include both changes in real land cover and changes due to the used algorithms.\n\n\nHere we build the query for what we want. We are specifying `collect_this <- \"esa-worldcover\"`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncollect_this <- \"esa-worldcover\"\n\nms_esa_query <- rstac::stac_search(\n q = ms_query,\n collections = collect_this,\n datetime = \"2021-01-01T00:00:00Z/2021-12-31T23:59:59Z\",\n bbox = aoi_bb,\n limit = 100\n ) |>\n rstac::get_request()\n\nms_esa_query\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n###Items\n- features (2 item(s)):\n - ESA_WorldCover_10m_2021_v200_N54W129\n - ESA_WorldCover_10m_2021_v200_N54W126\n- assets: input_quality, map, rendered_preview, tilejson\n- item's fields: \nassets, bbox, collection, geometry, id, links, properties, stac_extensions, stac_version, type\n```\n\n\n:::\n:::\n\nNext we need to sign in to the planetary computer with `rstac::sign_planetary_computer()`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query_signin <- rstac::items_sign(\n ms_esa_query,\n rstac::sign_planetary_computer()\n )\n```\n:::\n\n\n
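Before downloading anything we can peek at the signed asset URLs to confirm the token was appended (it shows up as a query string). This is a sketch that assumes a recent `rstac` with `assets_url()` available:\n\n::: {.cell}\n\n```{.r .cell-code}\n# look at the signed url for the first map asset\nrstac::assets_url(ms_query_signin, asset_names = \"map\") |>\n  head(1)\n```\n:::\n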
\n\nTo actually download the data we use a chunk option (`eval = params$update_gis`) so the code only executes when \nthe `update_gis` param in our `yml` is set to `TRUE`. \n\n\n::: {.cell}\n\n```{.r .cell-code}\nrstac::assets_download(\n  items = ms_query_signin,\n  asset_names = \"map\",\n  output_dir = here::here('posts', params$post_dir_name, \"data\"),\n  overwrite = TRUE\n)\n```\n:::\n\n\n
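Since the download only runs when `update_gis` is `TRUE`, it is worth confirming the tiffs actually landed on disk before moving on. A quick check with `fs`:\n\n::: {.cell}\n\n```{.r .cell-code}\n# list any tiffs sitting in our data directory\nfs::dir_ls(dir_data, recurse = TRUE, glob = \"*.tif\")\n```\n:::\n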
\n\nNice. So now let's read in these data, clip them to our area of interest with `terra::crop`, then combine them into one tiff using\n`terra::mosaic`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndir_out <- here::here('posts', params$post_dir_name, \"data/esa-worldcover/v200/2021/map\")\n\nrast_files <- list.files(\n  dir_out,\n  full.names = TRUE\n)\n\nland_cover_raster_raw <- rast_files |>\n  purrr::map(terra::rast) \n\n# Clip the rasters to the AOI\nland_cover_raster_clipped <- purrr::map(\n  land_cover_raster_raw,\n  ~ terra::crop(.x, aoi, snap = \"in\", mask = TRUE)\n)\n\n# combine the rasters\nland_cover_raster <- do.call(terra::mosaic, land_cover_raster_clipped)\n\n# Optionally, plot the merged raster\n# terra::plot(land_cover_raster)\n```\n:::\n\n\n# Digital Elevation Model\n\nLet's grab the digital elevation model using `elevatr::get_elev_raster` so we can downsample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndem <- elevatr::get_elev_raster(\n  locations = aoi,\n  z = 11,\n  clip = \"bbox\"\n) |>\n  terra::rast()\n```\n:::\n\n\n# Resample\nHere we resample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_raster_resampled <- terra::resample(\n  land_cover_raster,\n  dem,\n  method = \"near\"\n)\n\n# terra::plot(land_cover_raster_resampled)\n```\n:::\n\n\n# Plot the Land Cover and DEM\n\n## Get Additional Data\nWe could use some data for context such as major streams and the railway. We get the streams and railway from \nthe BC data distribution API using the `bcdata` package. Our `rfp` package calls just allow some extra sanity checks on the \n`bcdata::bcdc_query_geodata` function. It's not really necessary but can be helpful when errors occur (e.g. the name of \nthe column to filter on is input incorrectly). \n\n
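For reference, the raw `bcdata` equivalent of the first `rfp` call below would look roughly like this sketch (untested here - note the returned column names come back in upper case, which is part of what `rfp` and `janitor` smooth over):\n\n::: {.cell}\n\n```{.r .cell-code}\n# approximately what rfp::rfp_bcd_get_data() wraps\nl_rail_raw <- bcdata::bcdc_query_geodata(\"whse_basemapping.gba_railway_tracks_sp\") |>\n  bcdata::collect()\n```\n:::\n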
\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# grab all the railways\nl_rail <- rfp::rfp_bcd_get_data(\n  bcdata_record_id = \"whse_basemapping.gba_railway_tracks_sp\"\n) |> \n  sf::st_transform(4326) |> \n  janitor::clean_names() \n\n# streams in the bulkley and then filter to just keep the big ones\nl_streams <- rfp::rfp_bcd_get_data(\n  bcdata_record_id = \"whse_basemapping.fwa_stream_networks_sp\",\n  col_filter = \"watershed_group_code\",\n  col_filter_value = \"BULK\",\n  # grab a smaller object by including fewer columns\n  col_extract = c(\"linear_feature_id\", \"stream_order\", \"gnis_name\", \"downstream_route_measure\", \"blue_line_key\", \"length_metre\")\n) |> \n  sf::st_transform(4326) |> \n  janitor::clean_names() |> \n  dplyr::filter(stream_order > 4)\n```\n:::\n\n\nNow we trim up those layers. We have some functions to validate and repair geometries, and then we clip the layers to our area of interest.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlayers_to_trim <- tibble::lst(l_rail, l_streams)\n\n# Function to validate and repair geometries\nvalidate_geometries <- function(layer) {\n  layer <- sf::st_make_valid(layer)\n  layer <- layer[sf::st_is_valid(layer), ]\n  return(layer)\n}\n\n# Apply validation to the AOI and layers\naoi <- validate_geometries(aoi)\nlayers_to_trim <- purrr::map(layers_to_trim, validate_geometries)\n\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n  layers_to_trim,\n  ~ sf::st_intersection(.x, aoi)\n) \n```\n:::\n\n\n\n## Get Legend Values\n\nSo we need to map the values in the raster to the actual land cover classes. We can do this by extracting the cross\nreference table from the pdf provided in the metadata of the data. We will use the `tabulapdf` package to extract the\ntable and do some work to collapse it into a cross-referencing tool we can use for land cover classifications and \nsubsequent color schemes.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# extract the cross reference table from the pdf\npdf_file <- \"https://esa-worldcover.s3.eu-central-1.amazonaws.com/v200/2021/docs/WorldCover_PUM_V2.0.pdf\"\npage <- 15\n\n# table_map <- tabulapdf::locate_areas(pdf_file, pages = page)\n# table_coords <- list(as.numeric(unlist(table_map[[1]])))\n\ntable_coords <- list(c(94.55745, 74.66493, 755.06007, 550.41094))\n\n\nxref_raw <- tabulapdf::extract_tables(\n  pdf_file,\n  pages = page,\n  method = \"lattice\",\n  area = table_coords,\n  guess = FALSE\n)\n\n# this is how we make a clean dataframe\nxref_raw2 <- xref_raw |> \n  purrr::pluck(1) |>\n  tibble::as_tibble() |>\n  janitor::row_to_names(1) |>\n  janitor::clean_names()\n\nxref_raw3 <- xref_raw2 |> \n  tidyr::fill(code, .direction = \"down\")\n\n# Custom function to concatenate rows within each group\ncollapse_rows <- function(df) {\n  df |> \n    dplyr::summarise(across(everything(), ~ paste(na.omit(.), collapse = \" \")))\n}\n\n# Group by code and apply the custom function\nxref <- xref_raw3 |>\n  dplyr::group_by(code) |>\n  dplyr::group_modify(~ collapse_rows(.x)) |>\n  dplyr::ungroup() |> \n  dplyr::mutate(code = as.numeric(code)) |> \n  dplyr::arrange(code) |> \n  purrr::set_names(c(\"code\", \"land_cover_class\", \"lccs_code\", \"definition\", \"color_code\")) |> \n  # now we make a list of the color codes and convert to hex. Even though we don't actually need them here...\n  dplyr::mutate(color_code = purrr::map(color_code, ~ as.numeric(strsplit(.x, \",\")[[1]])),\n                color = purrr::map_chr(color_code, ~ rgb(.x[1], .x[2], .x[3], maxColorValue = 255))) |> \n  dplyr::relocate(definition, .after = last_col())\n\n\nmy_dt_table(xref, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\nWe seem to get issues when the values in our tiff do not match our cross-reference table. For this reason we will\nremove any values in the `xref` object that are not in the rasters that we are plotting.\n\n\n
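To see which class codes actually occur in the raster - and therefore which `xref` rows will survive - we can compare the raster values against the table. A small sketch:\n\n::: {.cell}\n\n```{.r .cell-code}\n# codes present in the resampled raster\ncodes_present <- sort(unique(terra::values(land_cover_raster_resampled)))\n\n# xref rows with no matching pixels - these get dropped\nxref |>\n  dplyr::filter(!code %in% codes_present)\n```\n:::\n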
\n\nAlso - looks like when we combined our tiffs together with `terra::mosaic` we lost the color table associated with the SpatRaster object.\nWe can recover that table with `terra::coltab(land_cover_raster_raw[[1]])`\n\n## Plot \nOk. Let's plot it up. We will use `ggplot2` and `tidyterra` to plot the land cover raster and then add the streams and railway on top of that.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncolor_table <- terra::coltab(land_cover_raster_raw[[1]])[[1]]\n\ncoltab(land_cover_raster_resampled) <- color_table\n\nxref_cleaned <- xref |> \n  filter(code %in% sort(unique(terra::values(land_cover_raster_resampled))))\n\n\nmap <- ggplot() +\n  tidyterra::geom_spatraster(\n    data = as.factor(land_cover_raster_resampled),\n    use_coltab = TRUE,\n    maxcell = Inf\n  ) +\n  tidyterra::scale_fill_coltab(\n    data = as.factor(land_cover_raster_resampled),\n    name = \"ESA Land Cover\",\n    labels = xref_cleaned$land_cover_class\n  ) +\n  # geom_sf(\n  #   data = aoi,\n  #   fill = \"transparent\",\n  #   color = \"white\",\n  #   linewidth = .5\n  # ) +\n  geom_sf(\n    data = layers_trimmed$l_streams,\n    color = \"blue\",\n    size = 1\n  ) +\n  geom_sf(\n    data = layers_trimmed$l_rail,\n    color = \"black\",\n    size = 1\n  ) +\n  ggdark::dark_theme_void()\n\n\nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-1.png){width=672}\n:::\n:::\n\n\n# Refine the Area of Interest\n\n\n::: {.cell}\n\n```{.r .cell-code}\nbuffer_size <- 250\nlength_define <- 10000\n```\n:::\n\n\n\nNice. So let's say we want to zoom in on a particular area - the mainstem of the Neexdzii Kwa for the lowest\n10 km of the stream, buffered to approximately 250m either side. We can do that by filtering on the name of the stream, arranging by downstream route measure and keeping the first segments that sum to 10000m of linear length. \n\n\n\n::: {.cell}\n\n```{.r .cell-code}\naoi_neexdzii <- layers_trimmed$l_streams |> \n  dplyr::filter(gnis_name == \"Bulkley River\") |> \n  dplyr::arrange(downstream_route_measure) |> \n  # calculate when we get to 10000m by adding up the length_metre field and filtering out everything up to 10000m\n  dplyr::filter(cumsum(length_metre) <= length_define) \n\naoi_neexdzii_buffered <- sf::st_buffer(aoi_neexdzii, buffer_size, endCapStyle = \"FLAT\")\n```\n:::\n\n\n## Plot Refined Area\n\nClip the resampled raster to the buffered area.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_sample <- terra::crop(land_cover_raster_resampled, aoi_neexdzii_buffered, snap = \"in\", mask = TRUE, extend = TRUE)\n```\n:::\n\n\n
\n\nSo it looks like we can lose our color values with the crop. We can check with `has.colors(land_cover_sample)`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nhas.colors(land_cover_sample)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] TRUE\n```\n\n\n:::\n:::\n\n\n
\n\nLet's add them back in with the `terra::coltab` function.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncoltab(land_cover_sample) <- color_table\nhas.colors(land_cover_sample)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] TRUE\n```\n\n\n:::\n:::\n\n\n
\n\nNow we should be able to plot what we have. Let's re-trim up our extra data layers and add those in as well.\n\n::: {.cell}\n\n```{.r .cell-code}\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n layers_to_trim,\n ~ sf::st_intersection(.x, aoi_neexdzii_buffered)\n) \n\nxref_cleaned <- xref |> \n filter(code %in% sort(unique(terra::values(land_cover_sample))))\n\nmap <- ggplot2::ggplot() +\n tidyterra::geom_spatraster(\n data = as.factor(land_cover_sample),\n use_coltab = TRUE,\n maxcell = Inf\n ) +\n tidyterra::scale_fill_coltab(\n data = as.factor(land_cover_sample),\n name = \"ESA Land Cover\",\n labels = xref_cleaned$land_cover_class\n ) +\n geom_sf(\n data = layers_trimmed$l_rail,\n color = \"black\",\n size = 1\n ) +\n geom_sf(\n data = layers_trimmed$l_streams,\n color = \"blue\",\n size = 1\n ) +\n ggdark::dark_theme_void()\n\n# save the plot\n# ggsave(here::here('posts', params$post_dir_name, \"image.jpg\"), width = 10, height = 10)\n \nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot2-1.png){width=672}\n:::\n:::\n\n\n
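Before wrapping up, here is a rough sketch of the kind of class-area summary we are building toward with `terra::zonal` (untested - the resulting zone column would still need joining back to `xref` for labels):\n\n::: {.cell}\n\n```{.r .cell-code}\n# sum per-cell areas (hectares) within each land cover class\narea_by_class <- terra::zonal(\n  terra::cellSize(land_cover_sample, unit = \"ha\"),\n  land_cover_sample,\n  fun = \"sum\"\n)\n\narea_by_class\n```\n:::\n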
\n\nPretty sweet. Next up is to summarize the land cover classes for different areas to build our understanding of\npotential impacts due to land cover changes. We will likely use the `terra::zonal` function to do this. Stay tuned.\n",
    "supporting": [],
    "filters": [
      "rmarkdown/pagebreak.lua"
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-1.png
index 25004b7..b099984 100644
Binary files a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-1.png and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-1.png differ
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png
new file mode 100644
index 0000000..fc3496b
Binary files /dev/null and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png differ
diff --git a/posts/2024-06-30-land-cover/image.jpg b/posts/2024-06-30-land-cover/image.jpg
index eddf159..7e073b2 100644
Binary files a/posts/2024-06-30-land-cover/image.jpg and b/posts/2024-06-30-land-cover/image.jpg differ
diff --git a/posts/2024-06-30-land-cover/index.qmd b/posts/2024-06-30-land-cover/index.qmd
index c2e2f8f..8b21d9e 100644
--- a/posts/2024-06-30-land-cover/index.qmd
+++ b/posts/2024-06-30-land-cover/index.qmd
@@ -3,7 +3,7 @@ title: "Mapping Land Cover with R"
 author: "al"
 date: "2024-07-01"
 date-modified: "`r format(Sys.time(), '%Y-%m-%d')`"
-categories: [land cover, R, planetary computer, satellite]
+categories: [land cover, R, planetary computer, remote sensing]
 image: "image.jpg"
 params:
   repo_owner: "NewGraphEnvironment"
@@ -18,7 +18,7 @@ format: 
     code-fold: true
 ---
 
-We want to quantifying and visualize remotely sense land cover data.... Here is a first start. We will use the European
+Visualize and quantify remotely sensed land cover data. Here is a first start. We will use the European
 Space Agency's WorldCover product which provides global land cover maps for the years 2020 and 2021 at 10 meter
 resolution based on the combination of Sentinel-1 radar data and Sentinel-2 imagery. We will use the 2021 dataset
 for mapping an area of the Skeena watershed near Houston, British Columbia. 
@@ -50,7 +50,10 @@ pkgs_cran <- c(
   "tabulapdf",
   "bcdata",
-  "ggplot",
-  "ggdark")
+  "ggplot2",
+  "ggdark",
+  "knitr",
+  "DT",
+  "htmlwidgets")
 
 pkgs_gh <- c(
   "poissonconsulting/fwapgr",
@@ -103,7 +106,7 @@ A `downstream route measure` is:
 
 > The distance, in meters, along the route from the mouth of the route to the feature. This distance is measured from the mouth of the containing route to the downstream end of the feature.
 
-```{r aoi-def}
-# lets build a custom watersehed just for upstream of the confluence of Neexdzii Kwa and Wetzin Kwa
+```{r aoi-vector}
+# let's build a custom watershed for the area upstream of the confluence of Neexdzii Kwa and Wetzin Kwa
 # blueline key
 blk <- 360873822
@@ -227,7 +230,7 @@ ms_query_signin <- rstac::items_sign(
-To actually download the data we are going to put a chunk option that allows us to just execute the code once and 
-update it with the `update_gis` param in our `yml`. 
+To actually download the data we use a chunk option (`eval = params$update_gis`) so the code only executes when 
+the `update_gis` param in our `yml` is set to `TRUE`. 
 
-```{r aoi-dl, eval = params$update_gis}
+```{r aoi-raster, eval = params$update_gis}
 rstac::assets_download(
   items = ms_query_signin,
   asset_names = "map",
@@ -308,18 +311,18 @@ the column to filter on is input incorrectly). 
```{r dl-layers, cache = TRUE}
 # grab all the railways
 l_rail <- rfp::rfp_bcd_get_data(
-  bcdata_record_id = stringr::str_to_upper("whse_basemapping.gba_railway_tracks_sp")
+  bcdata_record_id = "whse_basemapping.gba_railway_tracks_sp"
 ) |> 
   sf::st_transform(4326) |> 
   janitor::clean_names() 
 
 # streams in the bulkley and then filter to just keep the big ones
 l_streams <- rfp::rfp_bcd_get_data(
-  bcdata_record_id = stringr::str_to_upper("whse_basemapping.fwa_stream_networks_sp"),
-  col_filter = stringr::str_to_upper("watershed_group_code"),
+  bcdata_record_id = "whse_basemapping.fwa_stream_networks_sp",
+  col_filter = "watershed_group_code",
   col_filter_value = "BULK",
-  # grab a smaller object by including less columns
-  col_extract = stringr::str_to_upper(c("linear_feature_id", "stream_order"))
+  # grab a smaller object by including fewer columns
+  col_extract = c("linear_feature_id", "stream_order", "gnis_name", "downstream_route_measure", "blue_line_key", "length_metre")
 ) |> 
   sf::st_transform(4326) |> 
   janitor::clean_names() |> 
@@ -385,30 +388,40 @@ xref_raw2 <- xref_raw |> 
   janitor::row_to_names(1) |> 
   janitor::clean_names()
 
-xref_raw3 <- xref_raw2 %>%
-  fill(code, .direction = "down")
+xref_raw3 <- xref_raw2 |> 
+  tidyr::fill(code, .direction = "down")
 
 # Custom function to concatenate rows within each group
 collapse_rows <- function(df) {
-  df %>%
-    summarise(across(everything(), ~ paste(na.omit(.), collapse = " ")))
+  df |> 
+    dplyr::summarise(across(everything(), ~ paste(na.omit(.), collapse = " ")))
 }
 
 # Group by code and apply the custom function
-xref <- xref_raw3 %>%
-  group_by(code) %>%
-  group_modify(~ collapse_rows(.x)) %>%
-  ungroup() |> 
+xref <- xref_raw3 |>
+  dplyr::group_by(code) |>
+  dplyr::group_modify(~ collapse_rows(.x)) |>
+  dplyr::ungroup() |> 
+  dplyr::mutate(code = as.numeric(code)) |> 
   dplyr::arrange(code) |> 
   purrr::set_names(c("code", "land_cover_class", "lccs_code", "definition", "color_code")) |> 
-  tidyr::separate(color_code, into = c("r", "g", "b"), sep = ",", convert = TRUE, remove = FALSE) %>%
-  dplyr::mutate(color = rgb(r, g, b, maxColorValue = 255))
+  # now we make a list of the color codes and convert to hex. Even though we don't actually need them here...
+  dplyr::mutate(color_code = purrr::map(color_code, ~ as.numeric(strsplit(.x, ",")[[1]])),
+                color = purrr::map_chr(color_code, ~ rgb(.x[1], .x[2], .x[3], maxColorValue = 255))) |> 
+  dplyr::relocate(definition, .after = last_col())
+
 
 my_dt_table(xref, cols_freeze_left = 0, page_length = 5)
 ```
 
-So - looks like when we combined our tiffs together with `terra::mosaic` we lost the color table associated with the SpatRaster object.
+We seem to get issues when the values in our tiff do not match our cross-reference table. For this reason we will
+remove any values in the `xref` object that are not in the rasters that we are plotting.
+
+
+
+Also - looks like when we combined our tiffs together with `terra::mosaic` we lost the color table associated with the SpatRaster object.
 We can recover that table with `terra::coltab(land_cover_raster_raw[[1]])`
 
 ## Plot 
@@ -420,41 +433,140 @@ color_table <- terra::coltab(land_cover_raster_raw[[1]])[[1]]
 
 coltab(land_cover_raster_resampled) <- color_table
 
+xref_cleaned <- xref |> 
+  filter(code %in% sort(unique(terra::values(land_cover_raster_resampled))))
+
 map <- ggplot() +
+  tidyterra::geom_spatraster(
+    data = as.factor(land_cover_raster_resampled),
+    use_coltab = TRUE,
+    maxcell = Inf
+  ) +
+  tidyterra::scale_fill_coltab(
+    data = as.factor(land_cover_raster_resampled),
+    name = "ESA Land Cover",
+    labels = xref_cleaned$land_cover_class
+  ) +
+  # geom_sf(
+  #   data = aoi,
+  #   fill = "transparent",
+  #   color = "white",
+  #   linewidth = .5
+  # ) +
+  geom_sf(
+    data = layers_trimmed$l_streams,
+    color = "blue",
+    size = 1
+  ) +
+  geom_sf(
+    data = layers_trimmed$l_rail,
+    color = "black",
+    size = 1
+  ) +
+  ggdark::dark_theme_void()
+
+
+map
+```
+
+# Refine the Area of Interest
+
+```{r buffer-define}
+buffer_size <- 250
+length_define <- 10000
+```
+
+
+Nice. So let's say we want to zoom in on a particular area - the mainstem of the Neexdzii Kwa for the lowest
+10 km of the stream, buffered to approximately `r as.character(buffer_size)`m either side. We can do that by filtering on the name of the stream, arranging by downstream route measure and keeping the first segments that sum to `r as.character(length_define)`m of linear length. 
+
+
+```{r zoom-in}
+aoi_neexdzii <- layers_trimmed$l_streams |> 
+  dplyr::filter(gnis_name == "Bulkley River") |> 
+  dplyr::arrange(downstream_route_measure) |> 
+  # calculate when we get to 10000m by adding up the length_metre field and filtering out everything up to 10000m
+  dplyr::filter(cumsum(length_metre) <= length_define) 
+
+aoi_neexdzii_buffered <- sf::st_buffer(aoi_neexdzii, buffer_size, endCapStyle = "FLAT")
+
+```
+
+## Plot Refined Area
+
+Clip the resampled raster to the buffered area.
+
+```{r}
+land_cover_sample <- terra::crop(land_cover_raster_resampled, aoi_neexdzii_buffered, snap = "in", mask = TRUE, extend = TRUE)
+
+```
+
+
+So it looks like we can lose our color values with the crop. We can check with `has.colors(land_cover_sample)`.
+
+```{r colors-check2}
+has.colors(land_cover_sample)
+```
+
+ +Let's add them back in with the `terra::coltab` function. + + +```{r colors-add2} + +coltab(land_cover_sample) <- color_table +has.colors(land_cover_sample) +``` + +
+ +Now we should be able to plot what we have. Let's re-trim up our extra data layers and add those in as well. +```{r plot2} + + +# clip them with purrr and sf +layers_trimmed <- purrr::map( + layers_to_trim, + ~ sf::st_intersection(.x, aoi_neexdzii_buffered) +) + +xref_cleaned <- xref |> + filter(code %in% sort(unique(terra::values(land_cover_sample)))) + +map <- ggplot2::ggplot() + tidyterra::geom_spatraster( - data = as.factor(land_cover_raster_resampled), + data = as.factor(land_cover_sample), use_coltab = TRUE, maxcell = Inf ) + tidyterra::scale_fill_coltab( - data = as.factor(land_cover_raster_resampled), + data = as.factor(land_cover_sample), name = "ESA Land Cover", - labels = xref$land_cover_class + labels = xref_cleaned$land_cover_class ) + - # geom_sf( - # data = aoi, - # fill = "transparent", - # color = "white", - # linewidth = .5 - # ) + + geom_sf( + data = layers_trimmed$l_rail, + color = "black", + size = 1 + ) + geom_sf( data = layers_trimmed$l_streams, color = "blue", size = 1 - ) + - geom_sf( - data = layers_trimmed$l_rail, - color = "yellow", - size = 1 ) + ggdark::dark_theme_void() - # save the plot # ggsave(here::here('posts', params$post_dir_name, "image.jpg"), width = 10, height = 10) - + map + ``` +
+Pretty sweet. Next up is to summarize the land cover classes for different areas to build our understanding of +potential impacts due to land cover changes. We will likely use the `terra::zonal` function to do this. Stay tuned. diff --git a/posts/2024-06-30-land-cover/index_cache/html/__packages b/posts/2024-06-30-land-cover/index_cache/html/__packages index f470837..1bc307c 100644 --- a/posts/2024-06-30-land-cover/index_cache/html/__packages +++ b/posts/2024-06-30-land-cover/index_cache/html/__packages @@ -27,5 +27,6 @@ rgl tidyterra tabulapdf bcdata +ggdark fwapgr rfp diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.RData b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.RData deleted file mode 100644 index 8be8048..0000000 Binary files a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.RData and /dev/null differ diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdx b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdx deleted file mode 100644 index 49d3f02..0000000 Binary files a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdx and /dev/null differ diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData new file mode 100644 index 0000000..7eefc01 Binary files /dev/null and b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData differ diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdb b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb similarity index 82% rename from posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdb rename to posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb index b29807f..53f892c 100644 Binary files a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_12a2939ce932b8b1053d81f81cfeead3.rdb and b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb differ diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdx b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdx new file mode 100644 index 0000000..0bcae50 Binary files /dev/null and b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdx differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/plot-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/plot-1.png index 25004b7..b099984 100644 Binary files a/posts/2024-06-30-land-cover/index_files/figure-html/plot-1.png and b/posts/2024-06-30-land-cover/index_files/figure-html/plot-1.png differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png new file mode 100644 index 0000000..fc3496b Binary files /dev/null and b/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png differ diff --git a/scripts/functions.R b/scripts/functions.R index 394f371..ea6cd19 100644 --- a/scripts/functions.R +++ b/scripts/functions.R @@ -1,11 +1,11 @@ my_dt_table <- function(dat, 
cols_freeze_left = 3, page_length = 10, - col_align = 'dt-right', + col_align = 'dt-center', #'dt-right', font_size = '10px', style_input = 'bootstrap'){ - dat %>% + dat |> DT::datatable( style = style_input, class = 'cell-border stripe', #'dark' '.table-dark', @@ -23,6 +23,8 @@ my_dt_table <- function(dat, lengthMenu = list(c(5,10,25,50,-1), c(5,10,25,50,"All")), colReorder = TRUE, + #https://stackoverflow.com/questions/45508033/adjusting-height-and-width-in-dtdatatable-r-markdown + rowCallback = htmlwidgets::JS("function(r,d) {$(r).attr('height', '100px')}"), #https://stackoverflow.com/questions/44101055/changing-font-size-in-r-datatables-dt initComplete = htmlwidgets::JS(glue::glue( "function(settings, json) {{ $(this.api().table().container()).css({{'font-size': '{font_size}'}}); }}"