From 83a44ea175205ad2bd81abd0386caab5dda3ad8e Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Wed, 6 Nov 2024 12:14:35 +0000 Subject: [PATCH 1/8] [pre-commit.ci] pre-commit autoupdate (#703) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit updates: - [github.com/astral-sh/ruff-pre-commit: v0.6.3 → v0.7.2](https://github.com/astral-sh/ruff-pre-commit/compare/v0.6.3...v0.7.2) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- .pre-commit-config.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 67779e53..7cb5b7b8 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -3,7 +3,7 @@ default_language_version: repos: - repo: https://github.com/astral-sh/ruff-pre-commit # https://beta.ruff.rs/docs/usage/#github-action - rev: v0.6.3 + rev: v0.7.2 hooks: - id: ruff args: [--fix, --exit-non-zero-on-fix] From 4063431cd13e37e8b206dd1dcd6a51e54c376fa8 Mon Sep 17 00:00:00 2001 From: Bryn Pickering <17178478+brynpickering@users.noreply.github.com> Date: Wed, 13 Nov 2024 18:05:31 +0000 Subject: [PATCH 2/8] Add `where` helper function to enable reproduction of fancier group constraints (#698) Co-authored-by: Ivan Ruiz Manuel <72193617+irm-codebase@users.noreply.github.com> --- CHANGELOG.md | 4 + docs/reference/api/helper_functions.md | 3 + docs/user_defined_math/helper_functions.md | 121 +++++++++++++++++++++ docs/user_defined_math/syntax.md | 114 +------------------ mkdocs.yml | 1 + src/calliope/backend/helper_functions.py | 57 ++++++++++ tests/test_backend_expression_parser.py | 29 ++++- tests/test_backend_helper_functions.py | 78 +++++++++++++ 8 files changed, 296 insertions(+), 111 deletions(-) create mode 100644 docs/user_defined_math/helper_functions.md diff --git a/CHANGELOG.md b/CHANGELOG.md index fecb4092..6ef5c98c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,10 @@ ### User-facing changes +|changed| Helper functions are now documented on their own page within the "Defining your own math" section of the documentation (#698). + +|new| `where(array, condition)` math helper function to apply a where array _inside_ an expression, to enable extending component dimensions on-the-fly, and applying filtering to different components within the expression (#604, #679). + |new| Data tables can inherit options from `templates`, like `techs` and `nodes` (#676). |new| dimension renaming functionality when loading from a data source, using the `rename_dims` option (#680). diff --git a/docs/reference/api/helper_functions.md b/docs/reference/api/helper_functions.md index 7ecfc5b0..3c9e575e 100644 --- a/docs/reference/api/helper_functions.md +++ b/docs/reference/api/helper_functions.md @@ -4,3 +4,6 @@ search: --- ::: calliope.backend.helper_functions + options: + docstring_options: + ignore_init_summary: true diff --git a/docs/user_defined_math/helper_functions.md b/docs/user_defined_math/helper_functions.md new file mode 100644 index 00000000..8b08dad8 --- /dev/null +++ b/docs/user_defined_math/helper_functions.md @@ -0,0 +1,121 @@ + +# Helper functions + +For [`where` strings](syntax.md#where-strings) and [`expression` strings](syntax.md#where-strings), there are many helper functions available to use, to allow for more complex operations to be undertaken within the string. 
+Their functionality is detailed in the [helper function API page](../reference/api/helper_functions.md). +Here, we give a brief summary. +Helper functions generally require a good understanding of their functionality, so make sure you are comfortable with them beforehand. + +## inheritance + +Using `inheritance(...)` in a `where` string allows you to grab a subset of technologies / nodes that all share the same [`template`](../creating/templates.md) in the technology's / node's `template` key. +If a `template` also inherits from another `template` (chained inheritance), you will get all `techs`/`nodes` that are children along that inheritance chain. + +So, for the definition: + +```yaml +templates: + techgroup1: + template: techgroup2 + flow_cap_max: 10 + techgroup2: + base_tech: supply +techs: + tech1: + template: techgroup1 + tech2: + template: techgroup2 +``` + +`inheritance(techgroup1)` will give the `[tech1]` subset and `inheritance(techgroup2)` will give the `[tech1, tech2]` subset. + +## any + +Parameters are indexed over multiple dimensions. +Using `any(..., over=...)` in a `where` string allows you to check if there is at least one non-NaN value in a given dimension (akin to [xarray.DataArray.any][]). +So, `any(cost, over=[nodes, techs])` will check if there is at least one non-NaN tech+node value in the `costs` dimension (the other dimension that the `cost` decision variable is indexed over). + +## defined + +Similar to [any](syntax.md#any), using `defined(..., within=...)` in a `where` string allows you to check for non-NaN values along dimensions. +In the case of `defined`, you can check if e.g., certain technologies have been defined within the nodes or certain carriers are defined within a group of techs or nodes. + +So, for the definition: + +```yaml +techs: + tech1: + base_tech: conversion + carrier_in: electricity + carrier_out: heat + tech2: + base_tech: conversion + carrier_in: [coal, biofuel] + carrier_out: electricity +nodes: + node1: + techs: {tech1} + node2: + techs: {tech1, tech2} +``` + +`defined(carriers=electricity, within=techs)` would yield a list of `[True, True]` as both technologies define electricity. + +`defined(techs=[tech1, tech2], within=nodes)` would yield a list of `[True, True]` as both nodes define _at least one_ of `tech1` or `tech2`. + +`defined(techs=[tech1, tech2], within=nodes, how=all)` would yield a list of `[False, True]` as only `node2` defines _both_ `tech1` and `tech2`. + +## sum + +Using `sum(..., over=)` in an expression allows you to sum over one or more dimensions of your component array (be it a parameter, decision variable, or global expression). + +## select_from_lookup_arrays + +Some of our arrays in [`model.inputs`][calliope.Model.inputs] are not data arrays, but "lookup" arrays. +These arrays are used to map the array's index items to other index items. +For instance when using [time clustering](../advanced/time.md#time-clustering), the `lookup_cluster_last_timestep` array is used to get the timestep resolution and the stored energy for the last timestep in each cluster. +Using `select_from_lookup_arrays(..., dim_name=lookup_array)` allows you to apply this lookup array to your data array. + +## get_val_at_index + +If you want to access an integer index in your dimension, use `get_val_at_index(dim_name=integer_index)`. +For example, `get_val_at_index(timesteps=0)` will get the first timestep in your timeseries, `get_val_at_index(timesteps=-1)` will get the final timestep. 
+This is mostly used when conditionally applying a different expression in the first / final timestep of the timeseries.
+
+It can be used in the `where` string (e.g., `timesteps=get_val_at_index(timesteps=0)` to mask all other timesteps) and the `expression string` (via [slices](syntax.md#slices) - `storage[timesteps=$first_timestep]` and `first_timestep` expression being `get_val_at_index(timesteps=0)`).
+
+## roll
+
+We do not use for-loops in our math.
+This can be difficult to get your head around initially, but it means that defining expressions of the form `var[t] == var[t-1] + param[t]` requires shifting all the data in your component array by N places.
+Using `roll(..., dimension_name=N)` allows you to do this.
+For example, `roll(storage, timesteps=1)` will shift all the storage decision variable objects by one timestep in the array.
+Then, `storage == roll(storage, timesteps=1) + 1` is equivalent to applying `storage[t] == storage[t - 1] + 1` in a for-loop.
+
+## default_if_empty
+
+We work with quite sparse arrays in our models.
+So, although your arrays are indexed over e.g., `nodes`, `techs` and `carriers`, a decision variable or parameter might only have one or two values in the array, with the rest being NaN.
+This can play havoc with defining math, with `nan` values making their way into your optimisation problem and then killing the solver or the solver interface.
+Using `default_if_empty(..., default=...)` in your `expression` string allows you to put a placeholder value in, which will be used if the math expression unavoidably _needs_ a value.
+Usually you shouldn't need to use this, as your `where` string will mask those NaN values.
+But if you're having trouble setting up your math, it is a useful function for getting it over the line.
+
+!!! note
+    Our internally defined parameters, listed in the `Parameters` section of our [pre-defined base math documentation][base-math], all have default values which propagate to the math.
+    You only need to use `default_if_empty` for decision variables and global expressions, and for user-defined parameters.
+
+## where
+
+[Where strings](syntax.md#where-strings) only allow you to apply conditions across whole expression equations.
+Sometimes, it's necessary to apply specific conditions to different components _within_ the expression.
+Using the `where(<array>, <condition>)` helper function enables this,
+where `<array>` is a reference to a parameter, variable, or global expression and `<condition>` is a reference to an array in your model inputs that contains only `True`/`1` and `False`/`0`/`NaN` values.
+`<condition>` will then be applied to `<array>`, keeping only the values in `<array>` where `<condition>` is `True`/`1`.
+
+This helper function can also be used to _extend_ the dimensions of `<array>`.
+If `<condition>` has any dimensions not present in `<array>`, `<array>` will be [broadcast](https://tutorial.xarray.dev/fundamentals/02.3_aligning_data_objects.html#broadcasting-adjusting-arrays-to-the-same-shape) to include those dimensions.
+
+!!! note
+    `Where` gets referred to a lot in Calliope math.
+    It always means the same thing: applying [xarray.DataArray.where][].
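As a sketch of how the `where` helper fits into user-defined math, the docstring added later in this patch pairs it with `sum` to cap total flow capacity within named groups of nodes. The `node_grouping` and `node_group_max` parameter names and the constraint name are illustrative only.

Model definition (parameters):

```yaml
parameters:
  node_grouping:  # lookup array: True where a node belongs to a capacity group
    data: True
    index: [[group_1, region1], [group_1, region1_1], [group_2, region1_2]]
    dims: [cap_node_groups, nodes]
  node_group_max:  # upper bound on summed flow capacity per group
    data: [1, 2]
    index: [group_1, group_2]
    dims: cap_node_groups
```

User-defined math:

```yaml
constraints:
  node_group_flow_cap_max:
    foreach: [techs, cap_node_groups]
    equations:
      - expression: sum(where(flow_cap, node_grouping), over=nodes) <= node_group_max
```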
diff --git a/docs/user_defined_math/syntax.md b/docs/user_defined_math/syntax.md index b2266d23..afdf91e6 100644 --- a/docs/user_defined_math/syntax.md +++ b/docs/user_defined_math/syntax.md @@ -37,7 +37,7 @@ When checking the existence of an input parameter it is possible to first sum it - If you want to apply a constraint across all `nodes` and `techs`, but only for node+tech combinations where the `flow_out_eff` parameter has been defined, you would include `flow_out_eff`. - If you want to apply a constraint over `techs` and `timesteps`, but only for combinations where the `source_use_max` parameter has at least one `node` with a value defined, you would include `any(resource, over=nodes)`. (1) - 1. `any` is a [helper function](#helper-functions); read more below! + 1. `any` is a [helper function](helper_functions.md#any)! 1. Checking the value of a configuration option or an input parameter. Checks can use any of the operators: `>`, `<`, `=`, `<=`, `>=`. @@ -50,7 +50,7 @@ Configuration options are any that are defined in `config.build`, where you can - If you want to apply a constraint only for the first timestep in your timeseries, you would include `timesteps=get_val_at_index(dim=timesteps, idx=0)`. (1) - If you want to apply a constraint only for the last timestep in your timeseries, you would include `timesteps=get_val_at_index(dim=timesteps, idx=-1)`. - 1. `get_val_at_index` is a [helper function](#helper-functions); read more below! + 1. `get_val_at_index` is a [helper function](helper_functions.md#get_val_at_index)! 1. Checking the `base_tech` of a technology (`storage`, `supply`, etc.) or its inheritance chain (if using `templates` and the `template` parameter). @@ -58,7 +58,7 @@ Configuration options are any that are defined in `config.build`, where you can - If you want to create a decision variable across only `storage` technologies, you would include `base_tech=storage`. - If you want to apply a constraint across only your own `rooftop_supply` technologies (e.g., you have defined `rooftop_supply` in `templates` and your technologies `pv` and `solar_thermal` define `#!yaml template: rooftop_supply`), you would include `inheritance(rooftop_supply)`. - Note that `base_tech=...` is a simple check for the given value of `base_tech`, while `inheritance()` is a helper function ([see below](#helper-functions)) which can deal with finding techs/nodes using the same template, e.g. `pv` might inherit the `rooftop_supply` template which in turn might inherit the template `electricity_supply`. + Note that `base_tech=...` is a simple check for the given value of `base_tech`, while `inheritance()` is a [helper function](helper_functions.md) which can deal with finding techs/nodes using the same template, e.g. `pv` might inherit the `rooftop_supply` template which in turn might inherit the template `electricity_supply`. 1. Subsetting a set. The sets available to subset are always [`nodes`, `techs`, `carriers`] + any additional sets defined by you in [`foreach`](#foreach-lists). @@ -67,7 +67,7 @@ The sets available to subset are always [`nodes`, `techs`, `carriers`] + any add - If you want to filter `nodes` where any of a set of `techs` are defined: `defined(techs=[tech1, tech2], within=nodes, how=any)` (1). - 1. `defined` is a [helper function](#helper-functions); read more below! + 1. `defined` is a [helper function](helper_functions.md#defined)! To combine statements you can use the operators `and`/`or`. You can also use the `not` operator to negate any of the statements. 
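For instance, a combined `where` string drawing on the checks above might look like the following sketch (the parameter and template names come from the examples in this list, not from a real model):

```yaml
where: "flow_out_eff AND base_tech=supply AND NOT inheritance(rooftop_supply)"
```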
@@ -109,112 +109,6 @@ Behind the scenes, we will make sure that every relevant element of the defined Slicing math components involves appending the component with square brackets that contain the slices, e.g. `flow_out[carriers=electricity, nodes=[A, B]]` will slice the `flow_out` decision variable to focus on `electricity` in its `carriers` dimension and only has two nodes (`A` and `B`) on its `nodes` dimension. To find out what dimensions you can slice a component on, see your input data (`model.inputs`) for parameters and the definition for decision variables in your math dictionary. -## Helper functions - -For [`where` strings](#where-strings) and [`expression` strings](#where-strings), there are many helper functions available to use, to allow for more complex operations to be undertaken. -Their functionality is detailed in the [helper function API page](../reference/api/helper_functions.md). -Here, we give a brief summary. -Some of these helper functions require a good understanding of their functionality to apply, so make sure you are comfortable with them before using them. - -### inheritance - -using `inheritance(...)` in a `where` string allows you to grab a subset of technologies / nodes that all share the same [`template`](../creating/templates.md) in the technology's / node's `template` key. -If a `template` also inherits from another `template` (chained inheritance), you will get all `techs`/`nodes` that are children along that inheritance chain. - -So, for the definition: - -```yaml -templates: - techgroup1: - template: techgroup2 - flow_cap_max: 10 - techgroup2: - base_tech: supply -techs: - tech1: - template: techgroup1 - tech2: - template: techgroup2 -``` - -`inheritance(techgroup1)` will give the `[tech1]` subset and `inheritance(techgroup2)` will give the `[tech1, tech2]` subset. - -### any - -Parameters are indexed over multiple dimensions. -Using `any(..., over=...)` in a `where` string allows you to check if there is at least one non-NaN value in a given dimension (akin to [xarray.DataArray.any][]). -So, `any(cost, over=[nodes, techs])` will check if there is at least one non-NaN tech+node value in the `costs` dimension (the other dimension that the `cost` decision variable is indexed over). - -### defined - -Similar to [any](#any), using `defined(..., within=...)` in a `where` string allows you to check for non-NaN values along dimensions. -In the case of `defined`, you can check if e.g., certain technologies have been defined within the nodes or certain carriers are defined within a group of techs or nodes. - -So, for the definition: - -```yaml -techs: - tech1: - base_tech: conversion - carrier_in: electricity - carrier_out: heat - tech2: - base_tech: conversion - carrier_in: [coal, biofuel] - carrier_out: electricity -nodes: - node1: - techs: {tech1} - node2: - techs: {tech1, tech2} -``` - -`defined(carriers=electricity, within=techs)` would yield a list of `[True, True]` as both technologies define electricity. - -`defined(techs=[tech1, tech2], within=nodes)` would yield a list of `[True, True]` as both nodes define _at least one_ of `tech1` or `tech2`. - -`defined(techs=[tech1, tech2], within=nodes, how=all)` would yield a list of `[False, True]` as only `node2` defines _both_ `tech1` and `tech2`. - -### sum - -Using `sum(..., over=)` in an expression allows you to sum over one or more dimension of your component array (be it a parameter, decision variable, or global expression). 
- -### select_from_lookup_arrays - -Some of our arrays in [`model.inputs`][calliope.Model.inputs] are not data arrays, but "lookup" arrays. -These arrays are used to map the array's index items to other index items. -For instance when using [time clustering](../advanced/time.md#time-clustering), the `lookup_cluster_last_timestep` array is used to get the timestep resolution and the stored energy for the last timestep in each cluster. -Using `select_from_lookup_arrays(..., dim_name=lookup_array)` allows you to apply this lookup array to your data array. - -### get_val_at_index - -If you want to access an integer index in your dimension, use `get_val_at_index(dim_name=integer_index)`. -For example, `get_val_at_index(timesteps=0)` will get the first timestep in your timeseries, `get_val_at_index(timesteps=-1)` will get the final timestep. -This is mostly used when conditionally applying a different expression in the first / final timestep of the timeseries. - -It can be used in the `where` string (e.g., `timesteps=get_val_at_index(timesteps=0)` to mask all other timesteps) and the `expression string` (via [slices](#slices) - `storage[timesteps=$first_timestep]` and `first_timestep` expression being `get_val_at_index(timesteps=0)`). - -### roll - -We do not use for-loops in our math. -This can be difficult to get your head around initially, but it means that to define expressions of the form `var[t] == var[t-1] + param[t]` requires shifting all the data in your component array by N places. -Using `roll(..., dimension_name=N)` allows you to do this. -For example, `roll(storage, timesteps=1)` will shift all the storage decision variable objects by one timestep in the array. -Then, `storage == roll(storage, timesteps=1) + 1` is equivalent to applying `storage[t] == storage[t - 1] + 1` in a for-loop. - -### default_if_empty - -We work with quite sparse arrays in our models. -So, although your arrays are indexed over e.g., `nodes`, `techs` and `carriers`, a decision variable or parameter might only have one or two values in the array, with the rest being NaN. -This can play havoc with defining math, with `nan` values making their way into your optimisation problem and then killing the solver or the solver interface. -Using `default_if_empty(..., default=...)` in your `expression` string allows you to put a placeholder value in, which will be used if the math expression unavoidably _needs_ a value. -Usually you shouldn't need to use this, as your `where` string will mask those NaN values. -But if you're having trouble setting up your math, it is a useful function to getting it over the line. - -!!! note - Our internally defined parameters, listed in the `Parameters` section of our [pre-defined base math documentation][base-math] all have default values which propagate to the math. - You only need to use `default_if_empty` for decision variables and global expressions, and for user-defined parameters. - ## equations Equations are combinations of [expression strings](#expression-strings) and [where strings](#where-strings). 
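A concrete pairing of the two appears in the `flow_capacity_minimum` constraint updated later in this patch series: each equation entry couples a `where` string with the `expression` that applies under it.

```yaml
equations:
  - where: NOT purchased_units
    expression: flow_cap >= flow_cap_min
  - where: purchased_units
    expression: flow_cap >= flow_cap_min * purchased_units
```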
diff --git a/mkdocs.yml b/mkdocs.yml index fa677f27..6b41e180 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -117,6 +117,7 @@ nav: - user_defined_math/index.md - user_defined_math/components.md - user_defined_math/syntax.md + - user_defined_math/helper_functions.md - user_defined_math/customise.md - Example additional math gallery: - user_defined_math/examples/index.md diff --git a/src/calliope/backend/helper_functions.py b/src/calliope/backend/helper_functions.py index 0f1b9505..f8cef607 100644 --- a/src/calliope/backend/helper_functions.py +++ b/src/calliope/backend/helper_functions.py @@ -748,3 +748,60 @@ def as_array(self, var: xr.DataArray, default: float | int) -> xr.DataArray: return xr.DataArray(default) else: return var.fillna(default) + + +class Where(ParsingHelperFunction): + """Apply `where` array _within_ an expression string.""" + + #: + NAME = "where" + #: + ALLOWED_IN = ["expression"] + + def as_math_string(self, array: str, condition: str) -> str: # noqa: D102, override + return rf"({array} \text{{if }} {condition} == True)" + + def as_array(self, array: xr.DataArray, condition: xr.DataArray) -> xr.DataArray: + """Apply a `where` condition to a math array within an expression string. + + Args: + array (xr.DataArray): Math component array. + condition (xr.DataArray): + Boolean where array. + If not `bool` type, NaNs and 0 will be assumed as False and all other values will be assumed as True. + + Returns: + xr.DataArray: + Returns the input array with the condition applied, + including having been broadcast across any new dimensions provided by the condition. + + Examples: + One common use-case is to introduce a new dimension to the variable which represents subsets of one of the main model dimensions. + In this case, each member of `cap_node_groups` is a subset of `nodes` and we want to sum `flow_cap` over each of those subsets and set a maximum value. 
+ + input: + ```yaml + parameters: + node_grouping: + data: True + index: [[group_1, region1], [group_1, region1_1], [group_2, region1_2], [group_2, region1_3], [group_3, region2]] + dims: [cap_node_groups, nodes] + node_group_max: + data: [1, 2, 3] + index: [group_1, group_2, group_3] + dims: cap_node_groups + ``` + + math: + ```yaml + constraints: + my_new_constraint: + foreach: [techs, cap_node_groups] + equations: + - expression: sum(where(flow_cap, node_grouping), over=nodes) <= node_group_max + ``` + """ + if self._backend_interface is not None: + condition = self._input_data[condition.name] + + return array.where(condition.fillna(False).astype(bool)) diff --git a/tests/test_backend_expression_parser.py b/tests/test_backend_expression_parser.py index ded1feb9..b8baf7ec 100644 --- a/tests/test_backend_expression_parser.py +++ b/tests/test_backend_expression_parser.py @@ -39,7 +39,16 @@ def as_array(self, x, y): @pytest.fixture def valid_component_names(): - return ["foo", "with_inf", "only_techs", "no_dims", "multi_dim_var", "no_dim_var"] + return [ + "foo", + "with_inf", + "only_techs", + "no_dims", + "multi_dim_var", + "no_dim_var", + "all_true", + "only_techs_as_bool", + ] @pytest.fixture @@ -567,6 +576,24 @@ def test_function_one_arg_allowed_invalid_string( assert check_error_or_warning(excinfo, "Expected") +class TestEquationParserHelper: + @pytest.mark.parametrize( + ("where", "expected_notnull"), + [ + ("all_true", [[True, True, True, True], [True, True, True, True]]), + ("only_techs_as_bool", [False, True, True, True]), + ], + ) + def test_helper_function_where( + self, helper_function, eval_kwargs, where, expected_notnull + ): + """Test that `where` helper function works as expected when passed a backend interface object.""" + string_ = f"where(no_dims, {where})" + parsed_ = helper_function.parse_string(string_, parse_all=True) + evaluated_ = parsed_[0].eval(**eval_kwargs) + np.testing.assert_array_equal(evaluated_.notnull(), expected_notnull) + + class TestEquationParserArithmetic: numbers = [2, 100, 0.02, "1e2", "2e-2", "inf"] diff --git a/tests/test_backend_helper_functions.py b/tests/test_backend_helper_functions.py index 9f696927..cf77d69d 100644 --- a/tests/test_backend_helper_functions.py +++ b/tests/test_backend_helper_functions.py @@ -63,6 +63,11 @@ def expression_default_if_empty(expression, parsing_kwargs): return expression["default_if_empty"](**parsing_kwargs) +@pytest.fixture(scope="class") +def expression_where(expression, parsing_kwargs): + return expression["where"](**parsing_kwargs) + + class TestAsArray: @pytest.fixture(scope="class") def parsing_kwargs(self, dummy_model_data): @@ -322,6 +327,66 @@ def test_default_if_empty_some_nan_var( result, [[1.0, 1, 1.0, 3], [np.inf, 2.0, True, 1]] ) + def test_expression_where_techs_only(self, expression_where, dummy_model_data): + """Test that applying where array masks expected values without affecting the array dimensions.""" + where_array = xr.DataArray( + [True, True, False, False], coords={"techs": dummy_model_data.techs} + ) + result = expression_where(dummy_model_data.only_techs, where_array) + np.testing.assert_equal(result.values, [np.nan, 1, np.nan, np.nan]) + + def test_expression_where_techs_add_nodes(self, expression_where, dummy_model_data): + """Test that applying where array masks expected values _and_ adds to the new array dimensions with a known dim.""" + where_array = xr.DataArray( + [[True, True, False, False], [False, False, True, True]], + coords={"nodes": dummy_model_data.nodes, "techs": 
dummy_model_data.techs}, + ) + result = expression_where(dummy_model_data.only_techs, where_array) + assert result.nodes.equals(dummy_model_data.nodes) + + expected = xr.DataArray( + [[np.nan, 1, np.nan, np.nan], [np.nan, np.nan, 2, 3]], + coords={"nodes": dummy_model_data.nodes, "techs": dummy_model_data.techs}, + ) + assert result.equals(expected.transpose(*result.dims)) + + def test_expression_where_techs_add_new_dim( + self, expression_where, dummy_model_data + ): + """Test that applying where array masks expected values _and_ adds to the new array dimensions with a new dim.""" + where_array = xr.DataArray( + [[True, True, False, False], [False, False, True, True]], + coords={"new_dim": ["a", "b"], "techs": dummy_model_data.techs}, + ) + result = expression_where(dummy_model_data.only_techs, where_array) + assert "new_dim" not in dummy_model_data.coords + expected = xr.DataArray( + [[np.nan, 1, np.nan, np.nan], [np.nan, np.nan, 2, 3]], + coords={"new_dim": ["a", "b"], "techs": dummy_model_data.techs}, + ) + assert result.equals(expected.transpose(*result.dims)) + + def test_expression_where_no_shared_dim(self, expression_where, dummy_model_data): + """Test that applying where array with no shared dims combines dims in new array.""" + where_array = xr.DataArray([True, False], coords={"new_dim": ["a", "b"]}) + result = expression_where(dummy_model_data.only_techs, where_array) + + expected = xr.DataArray( + [[np.nan, 1, 2, 3], [np.nan, np.nan, np.nan, np.nan]], + coords={"new_dim": ["a", "b"], "techs": dummy_model_data.techs}, + ) + assert result.equals(expected.transpose(*result.dims)) + + def test_expression_where_no_initial_dim(self, expression_where, dummy_model_data): + """Test that applying where array adds a dim where there wasn't one before.""" + where_array = xr.DataArray( + [True, False], coords={"nodes": dummy_model_data.nodes} + ) + result = expression_where(dummy_model_data.no_dims, where_array) + + expected = xr.DataArray([2, np.nan], coords={"nodes": dummy_model_data.nodes}) + assert result.equals(expected.transpose(*result.dims)) + class TestAsMathString: @pytest.fixture(scope="class") @@ -458,3 +523,16 @@ def test_default_if_empty_non_existent_float(self, expression_default_if_empty): r"\text{foo}", default=1.0 ) assert default_if_empty_string == r"(\text{foo}\vee{}1.0)" + + def test_expression_where_no_dims(self, expression_where): + expression_where_string = expression_where(r"\text{foo}", r"\text{bar}") + assert expression_where_string == r"(\text{foo} \text{if } \text{bar} == True)" + + def test_expression_where_with_dims(self, expression_where): + expression_where_string = expression_where( + r"\textbf{foo}_\text{techs}", r"\textit{bar}_\text{nodes}" + ) + assert ( + expression_where_string + == r"(\textbf{foo}_\text{techs} \text{if } \textit{bar}_\text{nodes} == True)" + ) From 3980d5358f07121e12b35f7f5d6740ba1f81fca0 Mon Sep 17 00:00:00 2001 From: Ivan Ruiz Manuel <72193617+irm-codebase@users.noreply.github.com> Date: Thu, 14 Nov 2024 18:00:24 +0100 Subject: [PATCH 3/8] Add simplistic mypy configuration (#707) --- pyproject.toml | 4 ++++ requirements/dev.txt | 3 ++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index c0d5839f..b8635ed9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -75,6 +75,10 @@ count = '' quiet-level = 3 ignore-words-list = "socio-economic" # British english spelling that isn't covered by the inbuilt dictionary +[tool.mypy] +ignore_missing_imports = true +files = "src/" + 
[tool.setuptools.packages.find] where = ["src"] include = ["calliope*"] diff --git a/requirements/dev.txt b/requirements/dev.txt index e471490e..1524f8aa 100644 --- a/requirements/dev.txt +++ b/requirements/dev.txt @@ -6,10 +6,11 @@ mkdocs-jupyter >= 0.24, < 0.24.7 mkdocs-macros-plugin >= 1.0, < 2 mkdocs-material >= 9.5, < 10 mkdocstrings-python >= 1.7, < 2 +mypy >= 1.13.0, < 2 pandas-stubs plotly >= 5, < 6 pre-commit < 4 pytest >= 8, < 9 pytest-cov < 5 pytest-order < 2 -pytest-xdist < 4 # pytest distributed testing plugin \ No newline at end of file +pytest-xdist < 4 # pytest distributed testing plugin From dea1c15bdfa7b28d39aecc32db0c1598bd63d182 Mon Sep 17 00:00:00 2001 From: Ivan Ruiz Manuel <72193617+irm-codebase@users.noreply.github.com> Date: Tue, 19 Nov 2024 12:44:17 +0100 Subject: [PATCH 4/8] Fix pyomo breakage (#713) * Fix broken pyomo docs links * Set pyomo lowerbound to 6.8.2 to avoid gurobi 12 incompatibility --- CHANGELOG.md | 4 ++++ docs/creating/config.md | 2 +- docs/installation.md | 2 +- requirements/base.txt | 4 ++-- 4 files changed, 8 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 6ef5c98c..e6b1c295 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,10 @@ |changed| `data_sources` -> `data_tables` and `data_sources.source` -> `data_tables.data`. This change has occurred to avoid confusion between data "sources" and model energy "sources" (#673). +### Internal changes + +|fixed| Avoided gurobi 12.0 incompatibility with pyomo by setting the lower bound to v6.8.2. + ## 0.7.0.dev4 (2024-09-10) ### User-facing changes diff --git a/docs/creating/config.md b/docs/creating/config.md index ed2a90ac..aff62163 100644 --- a/docs/creating/config.md +++ b/docs/creating/config.md @@ -84,7 +84,7 @@ In fact, you can use a set of results from using `plan` model to initialise both ### `config.solve.solver` Possible options for solver include `glpk`, `gurobi`, `cplex`, and `cbc`. -The interface to these solvers is done through the Pyomo library. Any [solver compatible with Pyomo](https://pyomo.readthedocs.io/en/6.5.0/solving_pyomo_models.html#supported-solvers) should work with Calliope. +The interface to these solvers is done through the Pyomo library. Any [solver compatible with Pyomo](https://pyomo.readthedocs.io/en/latest/reference/topical/appsi/appsi.solvers.html) should work with Calliope. For solvers with which Pyomo provides more than one way to interface, the additional `solver_io` option can be used. In the case of Gurobi, for example, it is usually fastest to use the direct Python interface: diff --git a/docs/installation.md b/docs/installation.md index f32ea654..dca4aaf6 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -62,7 +62,7 @@ However, we recommend to not use this solver where possible, since it performs r Indeed, our example models use the free and open source [CBC](#cbc) solver instead, but installing it on Windows requires an extra step. [CBC](#cbc) (open-source) or [Gurobi](#gurobi) (commercial) are recommended for large problems, and have been confirmed to work with Calliope. The following subsections provide additional detail on how to install a solver. -This list is not exhaustive; any solvers [supported by Pyomo](https://pyomo.readthedocs.io/en/stable/solving_pyomo_models.html#supported-solvers) can be used. +This list is not exhaustive; any solvers [supported by Pyomo](https://pyomo.readthedocs.io/en/latest/reference/topical/appsi/appsi.solvers.html) can be used. 
### CBC diff --git a/requirements/base.txt b/requirements/base.txt index 2bf5f664..03ae47a4 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -9,8 +9,8 @@ natsort >= 8, < 9 netcdf4 >= 1.2, < 1.7 numpy >= 1, < 2 pandas >= 2.1.3, < 2.3 # Minimum bound is 2.1.3 because of a regression in v2.1.0/2.1.1 inflating time/memory consumption on groupby operations with MultiIndex -pyomo >= 6.5, < 6.7.2 +pyomo >= 6.8.2, < 7 pyparsing >= 3.0, < 3.1 ruamel.yaml >= 0.18, < 0.19 typing-extensions >= 4, < 5 -xarray >= 2024.1, < 2024.4 \ No newline at end of file +xarray >= 2024.1, < 2024.4 From dcd9699b3952f370f58a6c8b9f7f80e88a847204 Mon Sep 17 00:00:00 2001 From: Bryn Pickering <17178478+brynpickering@users.noreply.github.com> Date: Wed, 20 Nov 2024 16:25:57 +0000 Subject: [PATCH 5/8] Add `broadcast_param_data` config option and default it to False. (#715) --- CHANGELOG.md | 3 + docs/creating/config.md | 4 ++ docs/creating/parameters.md | 26 +++++++++ docs/examples/national_scale/index.md | 55 +++++++++++++------ docs/examples/urban_scale/index.md | 23 ++------ docs/user_defined_math/syntax.md | 2 +- src/calliope/config/config_schema.yaml | 7 +++ .../national_scale/data_tables/costs.csv | 6 ++ .../example_models/national_scale/model.yaml | 16 +++++- .../national_scale/model_config/techs.yaml | 51 +---------------- .../example_models/urban_scale/model.yaml | 5 ++ .../urban_scale/model_config/techs.yaml | 14 ----- src/calliope/preprocess/model_data.py | 7 +++ tests/common/test_model/model.yaml | 2 + tests/test_preprocess_model_data.py | 13 +++++ 15 files changed, 132 insertions(+), 102 deletions(-) create mode 100644 src/calliope/example_models/national_scale/data_tables/costs.csv diff --git a/CHANGELOG.md b/CHANGELOG.md index e6b1c295..b58be7ca 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,9 @@ ### User-facing changes +|changed| Single data entries defined in YAML indexed parameters will not be automatically broadcast along indexed dimensions. +To achieve the same functionality as in ` Date: Mon, 25 Nov 2024 21:52:12 +0000 Subject: [PATCH 6/8] Bump dawidd6/action-download-artifact from 3 to 6 in /.github/workflows (#718) Bumps [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) from 3 to 6. - [Release notes](https://github.com/dawidd6/action-download-artifact/releases) - [Commits](https://github.com/dawidd6/action-download-artifact/compare/v3...v6) --- updated-dependencies: - dependency-name: dawidd6/action-download-artifact dependency-type: direct:production ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .github/workflows/release.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index cb08d1df..b5e24ddb 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -13,7 +13,7 @@ jobs: steps: - name: Download built package from another workflow - uses: dawidd6/action-download-artifact@v3 + uses: dawidd6/action-download-artifact@v6 with: name: ${{ env.PACKAGENAME }} workflow: pr-ci.yml From 6b60eb79b7ea5222f9c9e7d0e3fd4e4bd0e61e7e Mon Sep 17 00:00:00 2001 From: Bryn Pickering <17178478+brynpickering@users.noreply.github.com> Date: Fri, 29 Nov 2024 10:15:51 +0000 Subject: [PATCH 7/8] Update math lower bound capacity setting (#700) --- CHANGELOG.md | 2 + src/calliope/math/plan.yaml | 68 +++++++++++++++++++++------ tests/common/lp_files/area_use.lp | 17 +++++++ tests/common/lp_files/flow_cap.lp | 9 ++-- tests/common/lp_files/flow_out_max.lp | 2 +- tests/common/lp_files/source_cap.lp | 17 +++++++ tests/common/lp_files/storage_cap.lp | 17 +++++++ tests/test_backend_general.py | 4 +- tests/test_math.py | 68 +++++++++++++++++---------- 9 files changed, 155 insertions(+), 49 deletions(-) create mode 100644 tests/common/lp_files/area_use.lp create mode 100644 tests/common/lp_files/source_cap.lp create mode 100644 tests/common/lp_files/storage_cap.lp diff --git a/CHANGELOG.md b/CHANGELOG.md index b58be7ca..500cc35f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,8 @@ ### User-facing changes +|fixed| Technology capacity lower bound constraints so that `[cap-type]_min` (e.g., `flow_cap_min`) is not always enforced if the `purchased_units` variable is active (#643). + |changed| Single data entries defined in YAML indexed parameters will not be automatically broadcast along indexed dimensions. To achieve the same functionality as in `- Fix the storage capacity of any technology using integer units to define its capacity. - foreach: [nodes, techs, carriers] + foreach: [nodes, techs] where: "storage AND purchased_units AND storage_cap_per_unit" equations: - expression: storage_cap == purchased_units * storage_cap_per_unit @@ -378,7 +378,7 @@ constraints: description: >- Fix the flow capacity of any technology using integer units to define its capacity. foreach: [nodes, techs, carriers] - where: "operating_units AND flow_cap_per_unit" + where: "purchased_units AND flow_cap_per_unit" equations: - expression: flow_cap == purchased_units * flow_cap_per_unit @@ -394,14 +394,18 @@ constraints: - where: NOT flow_cap_max expression: flow_cap <= bigM * purchased_units - flow_capacity_min_purchase_milp: + flow_capacity_minimum: description: >- Set the lower bound on a technology's flow capacity, - for any technology with integer capacity purchasing. + for any technology with a non-zero lower bound, + with or without integer capacity purchasing. 
foreach: [nodes, techs, carriers] - where: "purchased_units AND flow_cap_min" + where: "flow_cap_min" equations: - - expression: flow_cap >= flow_cap_min * purchased_units + - where: NOT purchased_units + expression: flow_cap >= flow_cap_min + - where: purchased_units + expression: flow_cap >= flow_cap_min * purchased_units storage_capacity_max_purchase_milp: description: >- @@ -412,14 +416,44 @@ constraints: equations: - expression: storage_cap <= storage_cap_max * purchased_units - storage_capacity_min_purchase_milp: + storage_capacity_minimum: description: >- - Set the lower bound on a technology's storage capacity, - for any technology with integer capacity purchasing. + Set the lower bound on a technology's storage capacity + for any technology with a non-zero lower bound, + with or without integer capacity purchasing. + foreach: [nodes, techs] + where: "storage_cap_min" + equations: + - where: NOT purchased_units + expression: storage_cap >= storage_cap_min + - where: purchased_units + expression: storage_cap >= storage_cap_min * purchased_units + + area_use_minimum: + description: >- + Set the lower bound on a technology's area use + for any technology with a non-zero lower bound, + with or without integer capacity purchasing. + foreach: [nodes, techs] + where: "area_use_min" + equations: + - where: NOT purchased_units + expression: area_use >= area_use_min + - where: purchased_units + expression: area_use >= area_use_min * purchased_units + + source_capacity_minimum: + description: >- + Set the lower bound on a technology's source capacity + for any supply technology with a non-zero lower bound, + with or without integer capacity purchasing. foreach: [nodes, techs] - where: "purchased_units AND storage_cap_min" + where: "base_tech=supply AND source_cap_min" equations: - - expression: storage_cap >= storage_cap_min * purchased_units + - where: NOT purchased_units + expression: source_cap >= source_cap_min + - where: purchased_units + expression: source_cap >= source_cap_min * purchased_units unit_capacity_max_systemwide_milp: description: >- @@ -496,7 +530,7 @@ variables: unit: power foreach: [nodes, techs, carriers] bounds: - min: flow_cap_min + min: 0 # set in a distinct constraint to handle the integer purchase variable max: flow_cap_max link_flow_cap: @@ -561,7 +595,7 @@ variables: foreach: [nodes, techs] where: "(area_use_min OR area_use_max OR area_use_per_flow_cap OR sink_unit=per_area OR source_unit=per_area)" bounds: - min: area_use_min + min: 0 # set in a distinct constraint to handle the integer purchase variable max: area_use_max source_use: @@ -586,7 +620,7 @@ variables: foreach: [nodes, techs] where: "base_tech=supply" bounds: - min: source_cap_min + min: 0 # set in a distinct constraint to handle the integer purchase variable max: source_cap_max # --8<-- [start:variable] @@ -601,7 +635,7 @@ variables: where: "include_storage=True OR base_tech=storage" domain: real # optional; defaults to real. bounds: - min: storage_cap_min + min: 0 # set in a distinct constraint to handle the integer purchase variable max: storage_cap_max active: true # optional; defaults to true. # --8<-- [end:variable] @@ -662,6 +696,10 @@ variables: description: >- Flow capacity that will be set to zero if the technology is not operating in a given timestep and will be set to the value of the decision variable `flow_cap` otherwise. + This is useful when you want to set a minimum flow capacity for any technology investment, but also want to allow the model to decide the capacity. 
+ It is expected to only be used when `purchased_units_max == 1`, + i.e., the `purchased_units` decision variable is binary. + If `purchased_units_max > 1`, you may get strange results and should instead use the less flexible `flow_cap_per_unit`. default: 0 unit: power foreach: [nodes, techs, carriers, timesteps] diff --git a/tests/common/lp_files/area_use.lp b/tests/common/lp_files/area_use.lp new file mode 100644 index 00000000..83f3f879 --- /dev/null +++ b/tests/common/lp_files/area_use.lp @@ -0,0 +1,17 @@ +\* Source Pyomo model name=None *\ + +min +objectives(foo)(0): ++1 variables(area_use)(a__test_supply_elec) ++1 variables(area_use)(b__test_supply_elec) + +s.t. + +c_l_constraints(area_use_minimum)(a__test_supply_elec)_: ++1 variables(area_use)(a__test_supply_elec) +>= 1.0 + +bounds + 0 <= variables(area_use)(a__test_supply_elec) <= +inf + 0 <= variables(area_use)(b__test_supply_elec) <= 100.0 +end diff --git a/tests/common/lp_files/flow_cap.lp b/tests/common/lp_files/flow_cap.lp index 8b54cc2d..b97e95e0 100644 --- a/tests/common/lp_files/flow_cap.lp +++ b/tests/common/lp_files/flow_cap.lp @@ -7,12 +7,11 @@ objectives(foo)(0): s.t. -c_e_ONE_VAR_CONSTANT: -+1 ONE_VAR_CONSTANT -= 1 +c_l_constraints(flow_capacity_minimum)(a__test_supply_elec__electricity)_: ++1 variables(flow_cap)(a__test_supply_elec__electricity) +>= 1.0 bounds - 1 <= ONE_VAR_CONSTANT <= 1 - 1.0 <= variables(flow_cap)(a__test_supply_elec__electricity) <= +inf + 0 <= variables(flow_cap)(a__test_supply_elec__electricity) <= +inf 0 <= variables(flow_cap)(b__test_supply_elec__electricity) <= 100.0 end diff --git a/tests/common/lp_files/flow_out_max.lp b/tests/common/lp_files/flow_out_max.lp index 31e0c6c0..856d27f6 100644 --- a/tests/common/lp_files/flow_out_max.lp +++ b/tests/common/lp_files/flow_out_max.lp @@ -75,7 +75,7 @@ bounds 0 <= variables(flow_cap)(a__test_link_a_b_heat__heat) <= 5.0 0 <= variables(flow_out)(a__test_link_a_b_heat__heat__2005_01_01_01_00) <= +inf 0 <= variables(flow_out)(a__test_supply_elec__electricity__2005_01_01_00_00) <= +inf - 100.0 <= variables(flow_cap)(a__test_supply_elec__electricity) <= 100.0 + 0 <= variables(flow_cap)(a__test_supply_elec__electricity) <= 10.0 0 <= variables(flow_out)(a__test_supply_elec__electricity__2005_01_01_01_00) <= +inf 0 <= variables(flow_out)(b__test_link_a_b_elec__electricity__2005_01_01_00_00) <= +inf 0 <= variables(flow_cap)(b__test_link_a_b_elec__electricity) <= 10.0 diff --git a/tests/common/lp_files/source_cap.lp b/tests/common/lp_files/source_cap.lp new file mode 100644 index 00000000..7a3b9c52 --- /dev/null +++ b/tests/common/lp_files/source_cap.lp @@ -0,0 +1,17 @@ +\* Source Pyomo model name=None *\ + +min +objectives(foo)(0): ++1 variables(source_cap)(a__test_supply_elec) ++1 variables(source_cap)(b__test_supply_elec) + +s.t. + +c_l_constraints(source_capacity_minimum)(a__test_supply_elec)_: ++1 variables(source_cap)(a__test_supply_elec) +>= 1.0 + +bounds + 0 <= variables(source_cap)(a__test_supply_elec) <= +inf + 0 <= variables(source_cap)(b__test_supply_elec) <= 100.0 +end diff --git a/tests/common/lp_files/storage_cap.lp b/tests/common/lp_files/storage_cap.lp new file mode 100644 index 00000000..12c25edb --- /dev/null +++ b/tests/common/lp_files/storage_cap.lp @@ -0,0 +1,17 @@ +\* Source Pyomo model name=None *\ + +min +objectives(foo)(0): ++1 variables(storage_cap)(a__test_supply_elec) ++1 variables(storage_cap)(b__test_supply_elec) + +s.t. 
+ +c_l_constraints(storage_capacity_minimum)(a__test_supply_elec)_: ++1 variables(storage_cap)(a__test_supply_elec) +>= 1.0 + +bounds + 0 <= variables(storage_cap)(a__test_supply_elec) <= +inf + 0 <= variables(storage_cap)(b__test_supply_elec) <= 100.0 +end diff --git a/tests/test_backend_general.py b/tests/test_backend_general.py index 8b42fa70..9f9d3d81 100644 --- a/tests/test_backend_general.py +++ b/tests/test_backend_general.py @@ -666,11 +666,11 @@ def test_update_variable_single_bound_multi_val(self, caplog, solved_model_func) def test_update_variable_error_update_parameter_instead(self, solved_model_func): """Check that expected error is raised if trying to update a variable bound that was set by a parameter.""" with pytest.raises(calliope.exceptions.BackendError) as excinfo: - solved_model_func.backend.update_variable_bounds("flow_cap", min=1) + solved_model_func.backend.update_variable_bounds("flow_cap", max=1) assert check_error_or_warning( excinfo, "Cannot update variable bounds that have been set by parameters." - " Use `update_parameter('flow_cap_min')` to update the min bound of flow_cap.", + " Use `update_parameter('flow_cap_max')` to update the max bound of flow_cap.", ) def test_fix_variable_before_solve(self, built_model_cls_longnames): diff --git a/tests/test_math.py b/tests/test_math.py index 35aed4e7..174f40cc 100644 --- a/tests/test_math.py +++ b/tests/test_math.py @@ -49,32 +49,54 @@ class TestBaseMath: def base_math(self): return AttrDict.from_yaml(CALLIOPE_DIR / "math" / "plan.yaml") - def test_flow_cap(self, compare_lps): - self.TEST_REGISTER.add("variables.flow_cap") + @pytest.mark.parametrize( + ("variable", "constraint", "overrides"), + [ + ("flow_cap", "flow_capacity_minimum", {}), + ( + "storage_cap", + "storage_capacity_minimum", + {"techs.test_supply_elec.include_storage": True}, + ), + ("area_use", "area_use_minimum", {}), + ("source_cap", "source_capacity_minimum", {}), + ], + ) + def test_capacity_variables_and_bounds( + self, compare_lps, variable, constraint, overrides + ): + """Check that variables are initiated with the appropriate bounds, + and that the lower bound is updated from zero via a separate constraint if required. + """ + constraint_full = f"constraints.{constraint}" + self.TEST_REGISTER.add(f"variables.{variable}") + self.TEST_REGISTER.add(constraint_full) model = build_test_model( { - "nodes.b.techs.test_supply_elec.flow_cap_max": 100, - "nodes.a.techs.test_supply_elec.flow_cap_min": 1, - "nodes.a.techs.test_supply_elec.flow_cap_max": np.nan, + f"nodes.b.techs.test_supply_elec.{variable}_max": 100, + f"nodes.a.techs.test_supply_elec.{variable}_min": 1, + f"nodes.a.techs.test_supply_elec.{variable}_max": np.nan, + **overrides, }, "simple_supply,two_hours,investment_costs", ) - custom_math = { - # need the variable defined in a constraint/objective for it to appear in the LP file bounds - "objectives": { - "foo": { - "equations": [ - { - "expression": "sum(flow_cap[techs=test_supply_elec], over=[nodes, carriers])" - } - ], - "sense": "minimise", - } + # Custom objective ensures that all variables appear in the LP file. + # Variables not found in either an objective or constraint will never appear in the LP. 
+ sum_in_objective = "[nodes]" if variable != "flow_cap" else "[nodes, carriers]" + custom_objective = { + "objectives.foo": { + "equations": [ + { + "expression": f"sum({variable}[techs=test_supply_elec], over={sum_in_objective})" + } + ], + "sense": "minimise", } } - compare_lps(model, custom_math, "flow_cap") - - # "flow_cap" is the name of the lp file + custom_math = AttrDict( + {constraint_full: PLAN_MATH.get_key(constraint_full), **custom_objective} + ) + compare_lps(model, custom_math, variable) def test_storage_max(self, compare_lps): self.TEST_REGISTER.add("constraints.storage_max") @@ -86,13 +108,7 @@ def test_storage_max(self, compare_lps): def test_flow_out_max(self, compare_lps): self.TEST_REGISTER.add("constraints.flow_out_max") - model = build_test_model( - { - "nodes.a.techs.test_supply_elec.flow_cap_min": 100, - "nodes.a.techs.test_supply_elec.flow_cap_max": 100, - }, - "simple_supply,two_hours,investment_costs", - ) + model = build_test_model({}, "simple_supply,two_hours,investment_costs") custom_math = { "constraints": {"flow_out_max": PLAN_MATH.constraints.flow_out_max} From f2e4f4310c8921bd735ce3c6e947d6a1582d6c28 Mon Sep 17 00:00:00 2001 From: Bryn Pickering <17178478+brynpickering@users.noreply.github.com> Date: Fri, 29 Nov 2024 13:59:34 +0000 Subject: [PATCH 8/8] Update area units and `available_area` description. (#705) --- CHANGELOG.md | 2 ++ src/calliope/config/model_def_schema.yaml | 16 ++++++++++------ 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 500cc35f..940052f3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,8 @@ ### User-facing changes +|fixed| Area-based parameters have appropriate documented units of `area` rather than `area^2` (#701). + |fixed| Technology capacity lower bound constraints so that `[cap-type]_min` (e.g., `flow_cap_min`) is not always enforced if the `purchased_units` variable is active (#643). |changed| Single data entries defined in YAML indexed parameters will not be automatically broadcast along indexed dimensions. diff --git a/src/calliope/config/model_def_schema.yaml b/src/calliope/config/model_def_schema.yaml index 3f536642..b1f8d491 100644 --- a/src/calliope/config/model_def_schema.yaml +++ b/src/calliope/config/model_def_schema.yaml @@ -520,7 +520,7 @@ properties: description: >- Sets `area_use` to a parameter in operate mode. NOTE: this parameter cannot be used in `plan` mode as it clashes with the decision variable of the same name. - x-unit: $\text{area}^{2}$. + x-unit: $\text{area}$. area_use_max: $ref: "#/$defs/TechParamNullNumber" @@ -529,7 +529,7 @@ properties: title: Maximum usable area. description: >- If set to a finite value, limits the upper bound of the `area_use` decision variable to this value. - x-unit: $\text{area}^{2}$. + x-unit: $\text{area}$. area_use_min: $ref: "#/$defs/TechParamNullNumber" @@ -538,7 +538,7 @@ properties: title: Minimum usable area. description: >- Limits the lower bound of the `area_use` decision variable to this value. - x-unit: $\text{area}^{2}$. + x-unit: $\text{area}$. area_use_per_flow_cap: $ref: "#/$defs/TechParamNullNumber" @@ -547,7 +547,7 @@ properties: title: Area use per flow capacity description: >- If set, forces `area_use` to follow `flow_cap` with the given numerical ratio (e.g. setting to 1.5 means that `area_use == 1.5 * flow_cap`). - x-unit: $\frac{\text{area}^{2}}{\text{power}}$. + x-unit: $\frac{\text{area}}{\text{power}}$. 
storage_cap: $ref: "#/$defs/TechParamNullNumber" @@ -558,7 +558,7 @@ properties: description: >- Sets `storage_cap` to a parameter in operate mode. NOTE: this parameter cannot be used in `plan` mode as it clashes with the decision variable of the same name. - x-unit: $\text{area}^{2}$. + x-unit: $\text{area}$. storage_cap_max: $ref: "#/$defs/TechParamNullNumber" @@ -950,7 +950,7 @@ properties: title: Cost of area use. description: >- Cost per unit `area_use`. - x-unit: $\text{area}^{-2}$. + x-unit: $\text{area}^{-1}$. cost_source_cap: $ref: "#/$defs/TechCostNullNumber" @@ -1016,3 +1016,7 @@ properties: minimum: 0 default: .inf x-resample_method: mean + title: Available area at the given node. + description: >- + Limits the total area that can be occupied by all technologies which have the `area_use` decision variable activated. + x-unit: $\text{area}$.
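As a brief sketch of how this parameter is used (node and technology names are illustrative), `available_area` is set per node and bounds the combined `area_use` of all technologies defined at that node:

```yaml
nodes:
  region1:
    available_area: 100  # same unit as `area_use`
    techs:
      pv:
        area_use_max: 80
      solar_thermal:
        area_use_max: 50
```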