diff --git a/CHANGELOG.md b/CHANGELOG.md
index e29e9487..fe23bf1b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,8 @@

### User-facing changes

+|changed| To ensure the model configuration always remains in sync with the results, `kwargs` in `model.build()` and `model.solve()` now directly update `model.config`.
+
|changed| `template:` can now be used anywhere within YAML definition files, not just in the `nodes`, `techs` and `data_tables` sections.

|changed| "An overview of the Calliope terminology" information admonition to remove self-references and improve understandability.
@@ -33,6 +35,8 @@ This change has occurred to avoid confusion between data "sources" and model ene

### Internal changes

+|changed| Model configuration now uses `pydantic`.
+
|changed| Model definition reading is now defined in a single place (preprocess/model_definition.py).

|changed| Moved YAML reading/importing functionality out of `AttrDict`. It is now part of our `io` functionality.
diff --git a/docs/creating/config.md b/docs/creating/config.md
index 70b60f9d..307dc9ee 100644
--- a/docs/creating/config.md
+++ b/docs/creating/config.md
@@ -83,7 +83,7 @@ In `plan` mode, capacities are determined by the model, whereas in `operate` mod
In `spores` mode, the model is first run in `plan` mode, then run `N` number of times to find alternative system configurations with similar monetary cost, but maximally different choice of technology capacity and location (node).

In most cases, you will want to use the `plan` mode.
-In fact, you can use a set of results from using `plan` model to initialise both the `operate` (`config.build.operate_use_cap_results`) and `spores` modes.
+In fact, you can use a set of results from a `plan` mode run to initialise both the `operate` (`config.build.operate.use_cap_results`) and `spores` modes.

### `config.solve.solver`
diff --git a/docs/creating/parameters.md b/docs/creating/parameters.md
index 20961937..760e3f4d 100644
--- a/docs/creating/parameters.md
+++ b/docs/creating/parameters.md
@@ -52,7 +52,7 @@ Which will add the new dimension `my_new_dim` to your model: `model.inputs.my_ne
`foreach: [my_new_dim]`.

!!! warning
-    The `parameter` section should not be used for large datasets (e.g., indexing over the time dimension) as it will have a high memory overhead on loading the data.
+    The `parameter` section should not be used for large datasets (e.g., indexing over the time dimension) as it will have a high memory overhead when loading the data.

## Broadcasting data along indexed dimensions
diff --git a/docs/creating/scenarios.md b/docs/creating/scenarios.md
index fb6f2069..b98b7cb2 100644
--- a/docs/creating/scenarios.md
+++ b/docs/creating/scenarios.md
@@ -37,33 +37,6 @@ Scenarios consist of a name and a list of override names which together form tha
Scenarios and overrides can be used to generate scripts that run a single Calliope model many times, either sequentially, or in parallel on a high-performance cluster (see the section on [generating scripts to repeatedly run variations of a model](../advanced/scripts.md)).

-## Importing other YAML files in overrides
-
-When using overrides, it is possible to have [`import` statements](yaml.md#relative-file-imports) for more flexibility.
-This can be useful if many overrides are defined which share large parts of model configuration, such as different levels of interconnection between model zones -The following example illustrates this: - -```yaml -overrides: - some_override: - techs: - some_tech.constraints.flow_cap_max: 10 - import: [additional_definitions.yaml] -``` - -`additional_definitions.yaml`: - -```yaml -techs: - some_other_tech.constraints.flow_out_eff: 0.1 -``` - -This is equivalent to the following override: - -```yaml -overrides: - some_override: - techs: - some_tech.constraints.flow_cap_max: 10 - some_other_tech.constraints.flow_out_eff: 0.1 -``` +???+ warning + Overrides are executed _after_ `imports:` but _before_ `templates:`. + This means it is possible to override template values, but not the files imported in your model definition. diff --git a/docs/hooks/generate_readable_schema.py b/docs/hooks/generate_readable_schema.py index 89ae232e..be72c513 100644 --- a/docs/hooks/generate_readable_schema.py +++ b/docs/hooks/generate_readable_schema.py @@ -14,12 +14,13 @@ import jsonschema2md from mkdocs.structure.files import File +from calliope import config from calliope.util import schema TEMPDIR = tempfile.TemporaryDirectory() SCHEMAS = { - "config_schema": schema.CONFIG_SCHEMA, + "config_schema": config.CalliopeConfig().model_no_ref_schema(), "model_schema": schema.MODEL_SCHEMA, "math_schema": schema.MATH_SCHEMA, "data_table_schema": schema.DATA_TABLE_SCHEMA, diff --git a/docs/migrating.md b/docs/migrating.md index b704ba6b..2ca19821 100644 --- a/docs/migrating.md +++ b/docs/migrating.md @@ -340,9 +340,9 @@ Along with [changing the YAML hierarchy of model configuration](#model-and-run- * `model.subset_time` → `config.init.time_subset` * `model.time: {function: resample, function_options: {'resolution': '6H'}}` → `config.init.time_resample` -* `run.operation.window` → `config.build.operate_window` -* `run.operation.horizon` → `config.build.operate_horizon` -* `run.operation.use_cap_results` → `config.build.operate_use_cap_results` +* `run.operation.window` → `config.build.operate.window` +* `run.operation.horizon` → `config.build.operate.horizon` +* `run.operation.use_cap_results` → `config.build.operate.use_cap_results` We have also moved some _data_ out of the configuration and into the [top-level `parameters` section](creating/parameters.md): @@ -516,8 +516,8 @@ Therefore, `24H` is equivalent to `24` in v0.6 if you are using hourly resolutio init: time_resample: 6H build: - operate_window: 12H - operate_horizon: 24H + operate.window: 12H + operate.horizon: 24H ``` !!! warning diff --git a/docs/running.md b/docs/running.md index 82b90d03..fe4102bc 100644 --- a/docs/running.md +++ b/docs/running.md @@ -25,7 +25,7 @@ The `calliope run` command takes the following options: * `--scenario={scenario}` and `--override_dict={yaml_string}`: Specify a scenario, or one or several overrides, to apply to the model, or apply specific overrides from a YAML string (see below for more information). * `--help`: Show all available options. -Multiple options can be specified, for example, saving NetCDF, CSV, and HTML plots simultaneously. +Multiple options can be specified, for example, saving NetCDF and CSV simultaneously. 
```shell $ calliope run testmodel/model.yaml --save_netcdf=results.nc --save_csv=outputs diff --git a/requirements/base.txt b/requirements/base.txt index 03ae47a4..8e1dc473 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -4,6 +4,7 @@ geographiclib >= 2, < 3 ipdb >= 0.13, < 0.14 ipykernel < 7 jinja2 >= 3, < 4 +jsonref >= 1.1, < 2 jsonschema >= 4, < 5 natsort >= 8, < 9 netcdf4 >= 1.2, < 1.7 @@ -14,3 +15,4 @@ pyparsing >= 3.0, < 3.1 ruamel.yaml >= 0.18, < 0.19 typing-extensions >= 4, < 5 xarray >= 2024.1, < 2024.4 +pydantic >= 2.9.2 diff --git a/src/calliope/attrdict.py b/src/calliope/attrdict.py index b3262f83..2d286fef 100644 --- a/src/calliope/attrdict.py +++ b/src/calliope/attrdict.py @@ -232,7 +232,6 @@ def union( KeyError: `other` has an already defined key and `allow_override == False` """ if not isinstance(other, AttrDict): - # FIXME-yaml: remove AttrDict wrapping in uses of this function. other = AttrDict(other) self_keys = self.keys_nested() other_keys = other.keys_nested() diff --git a/src/calliope/backend/__init__.py b/src/calliope/backend/__init__.py index d37395d8..84929792 100644 --- a/src/calliope/backend/__init__.py +++ b/src/calliope/backend/__init__.py @@ -15,19 +15,19 @@ from calliope.preprocess import CalliopeMath if TYPE_CHECKING: + from calliope import config from calliope.backend.backend_model import BackendModel def get_model_backend( - name: str, data: xr.Dataset, math: CalliopeMath, **kwargs + build_config: "config.Build", data: xr.Dataset, math: CalliopeMath ) -> "BackendModel": """Assign a backend using the given configuration. Args: - name (str): name of the backend to use. + build_config: Build configuration options. data (Dataset): model data for the backend. math (CalliopeMath): Calliope math. - **kwargs: backend keyword arguments corresponding to model.config.build. Raises: exceptions.BackendError: If invalid backend was requested. @@ -35,10 +35,10 @@ def get_model_backend( Returns: BackendModel: Initialized backend object. 
""" - match name: + match build_config.backend: case "pyomo": - return PyomoBackendModel(data, math, **kwargs) + return PyomoBackendModel(data, math, build_config) case "gurobi": - return GurobiBackendModel(data, math, **kwargs) + return GurobiBackendModel(data, math, build_config) case _: - raise BackendError(f"Incorrect backend '{name}' requested.") + raise BackendError(f"Incorrect backend '{build_config.backend}' requested.") diff --git a/src/calliope/backend/backend_model.py b/src/calliope/backend/backend_model.py index f8431513..f556c64e 100644 --- a/src/calliope/backend/backend_model.py +++ b/src/calliope/backend/backend_model.py @@ -26,17 +26,13 @@ import numpy as np import xarray as xr -from calliope import exceptions +from calliope import config, exceptions from calliope.attrdict import AttrDict from calliope.backend import helper_functions, parsing from calliope.exceptions import warn as model_warn from calliope.io import load_config, to_yaml from calliope.preprocess.model_math import ORDERED_COMPONENTS_T, CalliopeMath -from calliope.util.schema import ( - MODEL_SCHEMA, - extract_from_schema, - update_then_validate_config, -) +from calliope.util.schema import MODEL_SCHEMA, extract_from_schema if TYPE_CHECKING: from calliope.backend.parsing import T as Tp @@ -69,20 +65,20 @@ class BackendModelGenerator(ABC): _PARAM_UNITS = extract_from_schema(MODEL_SCHEMA, "x-unit") _PARAM_TYPE = extract_from_schema(MODEL_SCHEMA, "x-type") - def __init__(self, inputs: xr.Dataset, math: CalliopeMath, **kwargs): + def __init__( + self, inputs: xr.Dataset, math: CalliopeMath, build_config: config.Build + ): """Abstract base class to build a representation of the optimisation problem. Args: inputs (xr.Dataset): Calliope model data. math (CalliopeMath): Calliope math. - **kwargs (Any): build configuration overrides. + build_config: Build configuration options. """ self._dataset = xr.Dataset() self.inputs = inputs.copy() self.inputs.attrs = deepcopy(inputs.attrs) - self.inputs.attrs["config"]["build"] = update_then_validate_config( - "build", self.inputs.attrs["config"], **kwargs - ) + self.config = build_config self.math: CalliopeMath = deepcopy(math) self._solve_logger = logging.getLogger(__name__ + ".") @@ -200,6 +196,7 @@ def _check_inputs(self): "equation_name": "", "backend_interface": self, "input_data": self.inputs, + "build_config": self.config, "helper_functions": helper_functions._registry["where"], "apply_where": True, "references": set(), @@ -246,7 +243,7 @@ def add_optimisation_components(self) -> None: # The order of adding components matters! # 1. Variables, 2. Global Expressions, 3. Constraints, 4. 
Objectives self._add_all_inputs_as_parameters() - if self.inputs.attrs["config"]["build"]["pre_validate_math_strings"]: + if self.config.pre_validate_math_strings: self._validate_math_string_parsing() for components in typing.get_args(ORDERED_COMPONENTS_T): component = components.removesuffix("s") @@ -399,7 +396,7 @@ def _add_all_inputs_as_parameters(self) -> None: if param_name in self.parameters.keys(): continue elif ( - self.inputs.attrs["config"]["build"]["mode"] != "operate" + self.config.mode != "operate" and param_name in extract_from_schema(MODEL_SCHEMA, "x-operate-param").keys() ): @@ -606,7 +603,11 @@ class BackendModel(BackendModelGenerator, Generic[T]): """Calliope's backend model functionality.""" def __init__( - self, inputs: xr.Dataset, math: CalliopeMath, instance: T, **kwargs + self, + inputs: xr.Dataset, + math: CalliopeMath, + build_config: config.Build, + instance: T, ) -> None: """Abstract base class to build backend models that interface with solvers. @@ -614,9 +615,9 @@ def __init__( inputs (xr.Dataset): Calliope model data. math (CalliopeMath): Calliope math. instance (T): Interface model instance. - **kwargs: build configuration overrides. + build_config: Build configuration options. """ - super().__init__(inputs, math, **kwargs) + super().__init__(inputs, math, build_config) self._instance = instance self.shadow_prices: ShadowPrices self._has_verbose_strings: bool = False diff --git a/src/calliope/backend/gurobi_backend_model.py b/src/calliope/backend/gurobi_backend_model.py index 2d2e0a48..6dbbde3b 100644 --- a/src/calliope/backend/gurobi_backend_model.py +++ b/src/calliope/backend/gurobi_backend_model.py @@ -14,6 +14,7 @@ import pandas as pd import xarray as xr +from calliope import config from calliope.backend import backend_model, parsing from calliope.exceptions import BackendError, BackendWarning from calliope.exceptions import warn as model_warn @@ -41,19 +42,21 @@ class GurobiBackendModel(backend_model.BackendModel): """gurobipy-specific backend functionality.""" - def __init__(self, inputs: xr.Dataset, math: CalliopeMath, **kwargs) -> None: + def __init__( + self, inputs: xr.Dataset, math: CalliopeMath, build_config: config.Build + ) -> None: """Gurobi solver interface class. Args: inputs (xr.Dataset): Calliope model data. math (CalliopeMath): Calliope math. - **kwargs: passed directly to the solver. + build_config: Build configuration options. """ if importlib.util.find_spec("gurobipy") is None: raise ImportError( "Install the `gurobipy` package to build the optimisation problem with the Gurobi backend." 
) - super().__init__(inputs, math, gurobipy.Model(), **kwargs) + super().__init__(inputs, math, build_config, gurobipy.Model()) self._instance: gurobipy.Model self.shadow_prices = GurobiShadowPrices(self) @@ -144,7 +147,7 @@ def _objective_setter( ) -> xr.DataArray: expr = element.evaluate_expression(self, references=references) - if name == self.inputs.attrs["config"].build.objective: + if name == self.config.objective: self._instance.setObjective(expr.item(), sense=sense) self.log("objectives", name, "Objective activated.") diff --git a/src/calliope/backend/latex_backend_model.py b/src/calliope/backend/latex_backend_model.py index c33229b0..5b901cc3 100644 --- a/src/calliope/backend/latex_backend_model.py +++ b/src/calliope/backend/latex_backend_model.py @@ -12,6 +12,7 @@ import pandas as pd import xarray as xr +from calliope import config from calliope.backend import backend_model, parsing from calliope.exceptions import ModelError from calliope.preprocess import CalliopeMath @@ -305,19 +306,19 @@ def __init__( self, inputs: xr.Dataset, math: CalliopeMath, + build_config: config.Build, include: Literal["all", "valid"] = "all", - **kwargs, ) -> None: """Interface to build a string representation of the mathematical formulation using LaTeX math notation. Args: inputs (xr.Dataset): model data. math (CalliopeMath): Calliope math. + build_config: Build configuration options. include (Literal["all", "valid"], optional): Defines whether to include all possible math equations ("all") or only those for which at least one index item in the "where" string is valid ("valid"). Defaults to "all". - **kwargs: for the backend model generator. """ - super().__init__(inputs, math, **kwargs) + super().__init__(inputs, math, build_config) self.include = include def add_parameter( # noqa: D102, override diff --git a/src/calliope/backend/parsing.py b/src/calliope/backend/parsing.py index 33c9ea47..5cdd0808 100644 --- a/src/calliope/backend/parsing.py +++ b/src/calliope/backend/parsing.py @@ -311,6 +311,7 @@ def evaluate_where( helper_functions=helper_functions._registry["where"], input_data=backend_interface.inputs, backend_interface=backend_interface, + build_config=backend_interface.config, references=references if references is not None else set(), apply_where=True, ) diff --git a/src/calliope/backend/pyomo_backend_model.py b/src/calliope/backend/pyomo_backend_model.py index 5ba41ba0..a0caadfc 100644 --- a/src/calliope/backend/pyomo_backend_model.py +++ b/src/calliope/backend/pyomo_backend_model.py @@ -26,6 +26,7 @@ from pyomo.opt import SolverFactory # type: ignore from pyomo.util.model_size import build_model_size_report # type: ignore +from calliope import config from calliope.exceptions import BackendError, BackendWarning from calliope.exceptions import warn as model_warn from calliope.preprocess import CalliopeMath @@ -58,15 +59,17 @@ class PyomoBackendModel(backend_model.BackendModel): """Pyomo-specific backend functionality.""" - def __init__(self, inputs: xr.Dataset, math: CalliopeMath, **kwargs) -> None: + def __init__( + self, inputs: xr.Dataset, math: CalliopeMath, build_config: config.Build + ) -> None: """Pyomo solver interface class. Args: inputs (xr.Dataset): Calliope model data. math (CalliopeMath): Calliope math. - **kwargs: passed directly to the solver. + build_config: Build configuration options. 
""" - super().__init__(inputs, math, pmo.block(), **kwargs) + super().__init__(inputs, math, build_config, pmo.block()) self._instance.parameters = pmo.parameter_dict() self._instance.variables = pmo.variable_dict() @@ -185,7 +188,7 @@ def _objective_setter( ) -> xr.DataArray: expr = element.evaluate_expression(self, references=references) objective = pmo.objective(expr.item(), sense=sense) - if name == self.inputs.attrs["config"].build.objective: + if name == self.config.objective: text = "activated" objective.activate() else: diff --git a/src/calliope/backend/where_parser.py b/src/calliope/backend/where_parser.py index f434a9bf..9bb81eac 100644 --- a/src/calliope/backend/where_parser.py +++ b/src/calliope/backend/where_parser.py @@ -13,15 +13,16 @@ import xarray as xr from typing_extensions import NotRequired, TypedDict +from calliope import config from calliope.backend import expression_parser from calliope.exceptions import BackendError +from calliope.util import tools if TYPE_CHECKING: from calliope.backend.backend_model import BackendModel pp.ParserElement.enablePackrat() - BOOLEANTYPE = np.bool_ | np.typing.NDArray[np.bool_] @@ -34,6 +35,7 @@ class EvalAttrs(TypedDict): helper_functions: dict[str, Callable] apply_where: NotRequired[bool] references: NotRequired[set] + build_config: config.Build class EvalWhere(expression_parser.EvalToArrayStr): @@ -118,8 +120,8 @@ def as_math_string(self) -> str: # noqa: D102, override return rf"\text{{config.{self.config_option}}}" def as_array(self) -> xr.DataArray: # noqa: D102, override - config_val = ( - self.eval_attrs["input_data"].attrs["config"].build[self.config_option] + config_val = tools.get_dot_attr( + self.eval_attrs["build_config"], self.config_option ) if not isinstance(config_val, int | float | str | bool | np.bool_): diff --git a/src/calliope/cli.py b/src/calliope/cli.py index ae347dbf..30e24fe2 100644 --- a/src/calliope/cli.py +++ b/src/calliope/cli.py @@ -278,9 +278,9 @@ def run( # Else run the model, then save outputs else: click.secho("Starting model run...") - + kwargs = {} if save_logs: - model.config.set_key("solve.save_logs", save_logs) + kwargs["solve.save_logs"] = save_logs if save_csv is None and save_netcdf is None: click.secho( @@ -292,14 +292,13 @@ def run( # If save_netcdf is used, override the 'save_per_spore_path' to point to a # directory of the same name as the planned netcdf - if save_netcdf and model.config.solve.spores_save_per_spore: - model.config.set_key( - "solve.spores_save_per_spore_path", + if save_netcdf and model.config.solve.spores.save_per_spore: + kwargs["solve.spores_save_per_spore_path"] = ( save_netcdf.replace(".nc", "/spore_{}.nc"), ) model.build() - model.solve() + model.solve(**kwargs) termination = model._model_data.attrs.get( "termination_condition", "unknown" ) diff --git a/src/calliope/config.py b/src/calliope/config.py new file mode 100644 index 00000000..3eab9993 --- /dev/null +++ b/src/calliope/config.py @@ -0,0 +1,278 @@ +# Copyright (C) since 2013 Calliope contributors listed in AUTHORS. +# Licensed under the Apache 2.0 License (see LICENSE file). 
+"""Implements the Calliope configuration class.""" + +import logging +from collections.abc import Hashable +from pathlib import Path +from typing import Annotated, Literal, TypeVar + +import jsonref +from pydantic import AfterValidator, BaseModel, Field, model_validator +from pydantic_core import PydanticCustomError +from typing_extensions import Self + +from calliope.attrdict import AttrDict + +MODES_T = Literal["plan", "operate", "spores"] +CONFIG_T = Literal["init", "build", "solve"] + +LOGGER = logging.getLogger(__name__) + +# == +# Taken from https://github.com/pydantic/pydantic-core/pull/820#issuecomment-1670475909 +T = TypeVar("T", bound=Hashable) + + +def _validate_unique_list(v: list[T]) -> list[T]: + if len(v) != len(set(v)): + raise PydanticCustomError("unique_list", "List must be unique") + return v + + +UniqueList = Annotated[ + list[T], + AfterValidator(_validate_unique_list), + Field(json_schema_extra={"uniqueItems": True}), +] +# == + + +class ConfigBaseModel(BaseModel): + """A base class for creating pydantic models for Calliope configuration options.""" + + model_config = { + "extra": "forbid", + "frozen": True, + "revalidate_instances": "always", + "use_attribute_docstrings": True, + } + + def update(self, update_dict: dict, deep: bool = False) -> Self: + """Return a new iteration of the model with updated fields. + + Args: + update_dict (dict): Dictionary with which to update the base model. + deep (bool, optional): Set to True to make a deep copy of the model. Defaults to False. + + Returns: + BaseModel: New model instance. + """ + new_dict: dict = {} + # Iterate through dict to be updated and convert any sub-dicts into their respective pydantic model objects. + # Wrapped in `AttrDict` to allow users to define dot notation nested configuration. + for key, val in AttrDict(update_dict).items(): + key_class = getattr(self, key) + if isinstance(key_class, ConfigBaseModel): + new_dict[key] = key_class.update(val) + else: + LOGGER.info( + f"Updating {self.model_config['title']} `{key}`: {key_class} -> {val}" + ) + new_dict[key] = val + updated = super().model_copy(update=new_dict, deep=deep) + updated.model_validate(updated) + return updated + + def model_no_ref_schema(self) -> AttrDict: + """Generate an AttrDict with the schema replacing $ref/$def for better readability. + + Returns: + AttrDict: class schema. + """ + schema_dict = AttrDict(super().model_json_schema()) + schema_dict = AttrDict(jsonref.replace_refs(schema_dict)) + schema_dict.del_key("$defs") + return schema_dict + + +class Init(ConfigBaseModel): + """All configuration options used when initialising a Calliope model.""" + + model_config = {"title": "Model initialisation configuration"} + name: str | None = Field(default=None) + """Model name""" + + calliope_version: str | None = Field(default=None) + """Calliope framework version this model is intended for""" + + broadcast_param_data: bool = Field(default=False) + """ + If True, single data entries in YAML indexed parameters will be broadcast across all index items. + Otherwise, the number of data entries needs to match the number of index items. + Defaults to False to mitigate unexpected broadcasting when applying overrides. + """ + + time_subset: tuple[str, str] | None = Field(default=None) + """ + Subset of timesteps as an two-element list giving the **inclusive** range. + For example, ["2005-01", "2005-04"] will create a time subset from "2005-01-01 00:00:00" to "2005-04-31 23:59:59". + + Strings must be ISO8601-compatible, i.e. 
of the form `YYYY-mm-dd HH:MM:SS` (e.g., '2005-01', '2005-01-01', '2005-01-01 00:00', ...)
+    """
+
+    time_resample: str | None = Field(default=None, pattern="^[0-9]+[a-zA-Z]")
+    """Setting to adjust time resolution, e.g. '2h' for 2-hourly"""
+
+    time_cluster: str | None = Field(default=None)
+    """
+    Setting to cluster the timeseries.
+    Must be a path to a file where each date is linked to a representative date that also exists in the timeseries.
+    """
+
+    time_format: str = Field(default="ISO8601")
+    """
+    Timestamp format of all time series data when read from file.
+    'ISO8601' means '%Y-%m-%d %H:%M:%S'.
+    """
+
+    distance_unit: Literal["km", "m"] = Field(default="km")
+    """
+    Unit of transmission link `distance` (m - metres, km - kilometres).
+    Automatically derived distances from lat/lon coordinates will be given in this unit.
+    """
+
+
+class BuildOperate(ConfigBaseModel):
+    """Operate mode configuration options used when building a Calliope optimisation problem (`calliope.Model.build`)."""
+
+    model_config = {"title": "Model build operate mode configuration"}
+    window: str = Field(default="24h")
+    """
+    Operate mode rolling `window`, given as a pandas frequency string.
+    See [here](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases) for a list of frequency aliases.
+    """
+
+    horizon: str = Field(default="48h")
+    """
+    Operate mode rolling `horizon`, given as a pandas frequency string.
+    See [here](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases) for a list of frequency aliases.
+    Must be ≥ `window`.
+    """
+
+    use_cap_results: bool = Field(default=False)
+    """If the model already contains `plan` mode results, use those optimal capacities as input parameters to the `operate` mode run."""
+
+
+class Build(ConfigBaseModel):
+    """Base configuration options used when building a Calliope optimisation problem (`calliope.Model.build`)."""
+
+    model_config = {"title": "Model build configuration"}
+    mode: MODES_T = Field(default="plan")
+    """Mode in which to run the optimisation."""
+
+    add_math: UniqueList[str] = Field(default=[])
+    """
+    List of references to files which contain additional mathematical formulations to be applied on top of or instead of the base mode math.
+    If referring to a pre-defined Calliope math file (see documentation for available files), do not append the reference with ".yaml".
+    If referring to your own math file, ensure the file type is given as a suffix (".yaml" or ".yml").
+    Relative paths will be assumed to be relative to the model definition file given when creating a calliope Model (`calliope.Model(model_definition=...)`).
+    """
+
+    ignore_mode_math: bool = Field(default=False)
+    """
+    If True, do not initialise the mathematical formulation with the pre-defined math for the given run `mode`.
+    This option can be used to completely re-define the Calliope mathematical formulation.
+    """
+
+    backend: Literal["pyomo", "gurobi"] = Field(default="pyomo")
+    """Module with which to build the optimisation problem."""
+
+    ensure_feasibility: bool = Field(default=False)
+    """
+    Whether to include decision variables in the model which will meet unmet demand or consume unused supply so that the optimisation solves successfully.
+    This should only be used as a debugging option (as any unmet demand/unused supply is a sign of improper model formulation).
+ """ + + objective: str = Field(default="min_cost_optimisation") + """Name of internal objective function to use, from those defined in the pre-defined math and any applied additional math.""" + + pre_validate_math_strings: bool = Field(default=True) + """ + If true, the Calliope math definition will be scanned for parsing errors _before_ undertaking the much more expensive operation of building the optimisation problem. + You can switch this off (e.g., if you know there are no parsing errors) to reduce overall build time. + """ + + operate: BuildOperate = BuildOperate() + + +class SolveSpores(ConfigBaseModel): + """SPORES configuration options used when solving a Calliope optimisation problem (`calliope.Model.solve`).""" + + model_config = {"title": "Model solve SPORES mode configuration"} + number: int = Field(default=3) + """SPORES mode number of iterations after the initial base run.""" + + score_cost_class: str = Field(default="score") + """SPORES mode cost class to vary between iterations after the initial base run.""" + + slack_cost_group: str = Field(default="monetary") + """SPORES mode cost class to keep below the given `slack` (usually "monetary").""" + + save_per_spore: bool = Field(default=False) + """ + Whether or not to save the result of each SPORES mode run between iterations. + If False, will consolidate all iterations into one dataset after completion of N iterations (defined by `number`) and save that one dataset. + """ + + save_per_spore_path: str | None = Field(default=None) + """If saving per spore, the path to save to.""" + + skip_cost_op: bool = Field(default=False) + """If the model already contains `plan` mode results, use those as the initial base run results and start with SPORES iterations immediately.""" + + @model_validator(mode="after") + def require_save_per_spore_path(self) -> Self: + """Ensure that path is given if saving per spore.""" + if self.save_per_spore: + if self.save_per_spore_path is None: + raise ValueError( + "Must define `save_per_spore_path` if you want to save each SPORES result separately." + ) + elif not Path(self.save_per_spore_path).is_dir(): + raise ValueError("`save_per_spore_path` must be a directory.") + return self + + +class Solve(ConfigBaseModel): + """Base configuration options used when solving a Calliope optimisation problem (`calliope.Model.solve`).""" + + model_config = {"title": "Model Solve Configuration"} + save_logs: str | None = Field(default=None) + """If given, should be a path to a directory in which to save optimisation logs.""" + + solver_io: str | None = Field(default=None) + """ + Some solvers have different interfaces that perform differently. + For instance, setting `solver_io="python"` when using the solver `gurobi` tends to reduce the time to send the optimisation problem to the solver. + """ + + solver_options: dict = Field(default={}) + """Any solver options, as key-value pairs, to pass to the chosen solver""" + + solver: str = Field(default="cbc") + """Solver to use. Any solvers that have Pyomo interfaces can be used. 
Refer to the Pyomo documentation for the latest list."""
+
+    zero_threshold: float = Field(default=1e-10)
+    """On postprocessing the optimisation results, values smaller than this threshold will be considered as optimisation artefacts and will be set to zero."""
+
+    shadow_prices: UniqueList[str] = Field(default=[])
+    """Names of model constraints for which to extract shadow prices. Shadow prices will be added as variables to the model results as `shadow_price_{constraintname}`."""
+
+    spores: SolveSpores = SolveSpores()
+
+
+class CalliopeConfig(ConfigBaseModel):
+    """Calliope configuration class."""
+
+    model_config = {
+        "title": "Model configuration schema",
+        "extra": "forbid",
+        "frozen": True,
+        "revalidate_instances": "always",
+        "use_attribute_docstrings": True,
+    }
+
+    init: Init = Init()
+    build: Build = Build()
+    solve: Solve = Solve()
diff --git a/src/calliope/config/config_schema.yaml b/src/calliope/config/config_schema.yaml
index f6fbeb51..aca88656 100644
--- a/src/calliope/config/config_schema.yaml
+++ b/src/calliope/config/config_schema.yaml
@@ -15,179 +15,16 @@ properties:
  init:
    type: object
    description: All configuration options used when initialising a Calliope model
-    additionalProperties: false
-    properties:
-      name:
-        type: ["null", string]
-        default: null
-        description: Model name
-      calliope_version:
-        type: ["null", string]
-        default: null
-        description: Calliope framework version this model is intended for
-      time_subset:
-        oneOf:
-          - type: "null"
-          - type: array
-            minItems: 2
-            maxItems: 2
-            items:
-              type: string
-              description: ISO8601 format datetime strings of the form `YYYY-mm-dd HH:MM:SS` (e.g, '2005-01', '2005-01-01', '2005-01-01 00:00', ...)
-        default: null
-        description: >-
-          Subset of timesteps as an two-element list giving the **inclusive** range.
-          For example, ['2005-01', '2005-04'] will create a time subset from '2005-01-01 00:00:00' to '2005-04-31 23:59:59'.
-      time_resample:
-        type: ["null", string]
-        default: null
-        description: setting to adjust time resolution, e.g. "2h" for 2-hourly
-        pattern: "^[0-9]+[a-zA-Z]"
-      time_cluster:
-        type: ["null", string]
-        default: null
-        description: setting to cluster the timeseries, must be a path to a file where each date is linked to a representative date that also exists in the timeseries.
-      time_format:
-        type: string
-        default: "ISO8601"
-        description: Timestamp format of all time series data when read from file. "ISO8601" means "%Y-%m-%d %H:%M:%S".
-      distance_unit:
-        type: string
-        default: km
-        description: >-
-          Unit of transmission link `distance` (m - metres, km - kilometres).
-          Automatically derived distances from lat/lon coordinates will be given in this unit.
-        enum: [m, km]
-      broadcast_param_data:
-        type: boolean
-        default: false
-        description:
-          If True, single data entries in YAML indexed parameters will be broadcast across all index items.
-          If False, the number of data entries in an indexed parameter needs to match the number of index items.
-          Defaults to False to mitigate unexpected broadcasting when applying overrides.

  build:
    type: object
    description: >
      All configuration options used when building a Calliope optimisation problem (`calliope.Model.build`).
      Additional configuration items will be passed onto math string parsing and can therefore be accessed in the `where` strings by `config.[item-name]`, where "[item-name]" is the name of your own configuration item.
-    additionalProperties: true
-    properties:
-      add_math:
-        type: array
-        default: []
-        description: List of references to files which contain additional mathematical formulations to be applied on top of or instead of the base mode math.
- uniqueItems: true - items: - type: string - description: > - If referring to an pre-defined Calliope math file (see documentation for available files), do not append the reference with ".yaml". - If referring to your own math file, ensure the file type is given as a suffix (".yaml" or ".yml"). - Relative paths will be assumed to be relative to the model definition file given when creating a calliope Model (`calliope.Model(model_definition=...)`). - ignore_mode_math: - type: boolean - default: false - description: >- - If True, do not initialise the mathematical formulation with the pre-defined math for the given run `mode`. - This option can be used to completely re-define the Calliope mathematical formulation. - backend: - type: string - default: pyomo - description: Module with which to build the optimisation problem - ensure_feasibility: - type: boolean - default: false - description: > - whether to include decision variables in the model which will meet unmet demand or consume unused supply in the model so that the optimisation solves successfully. - This should only be used as a debugging option (as any unmet demand/unused supply is a sign of improper model formulation). - mode: - type: string - default: plan - description: Mode in which to run the optimisation. - enum: [plan, spores, operate] - objective: - type: string - default: min_cost_optimisation - description: Name of internal objective function to use, from those defined in the pre-defined math and any applied additional math. - operate_window: - type: string - description: >- - Operate mode rolling `window`, given as a pandas frequency string. - See [here](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases) for a list of frequency aliases. - operate_horizon: - type: string - description: >- - Operate mode rolling `horizon`, given as a pandas frequency string. - See [here](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases) for a list of frequency aliases. - Must be ≥ `operate_window` - operate_use_cap_results: - type: boolean - default: false - description: If the model already contains `plan` mode results, use those optimal capacities as input parameters to the `operate` mode run. - pre_validate_math_strings: - type: boolean - default: true - description: >- - If true, the Calliope math definition will be scanned for parsing errors _before_ undertaking the much more expensive operation of building the optimisation problem. - You can switch this off (e.g., if you know there are no parsing errors) to reduce overall build time. solve: type: object description: All configuration options used when solving a Calliope optimisation problem (`calliope.Model.solve`). - additionalProperties: false - properties: - spores_number: - type: integer - default: 3 - description: SPORES mode number of iterations after the initial base run. - spores_score_cost_class: - type: string - default: spores_score - description: SPORES mode cost class to vary between iterations after the initial base run. - spores_slack_cost_group: - type: string - description: SPORES mode cost class to keep below the given `slack` (usually "monetary"). - spores_save_per_spore: - type: boolean - default: false - description: Whether or not to save the result of each SPORES mode run between iterations. If False, will consolidate all iterations into one dataset after completion of N iterations (defined by `spores_number`) and save that one dataset. 
- spores_save_per_spore_path: - type: string - description: If saving per spore, the path to save to. - spores_skip_cost_op: - type: boolean - default: false - description: If the model already contains `plan` mode results, use those as the initial base run results and start with SPORES iterations immediately. - save_logs: - type: ["null", string] - default: null - description: If given, should be a path to a directory in which to save optimisation logs. - solver_io: - type: ["null", string] - default: null - description: > - Some solvers have different interfaces that perform differently. - For instance, setting `solver_io="python"` when using the solver `gurobi` tends to reduce the time to send the optimisation problem to the solver. - solver_options: - type: ["null", object] - default: null - description: Any solver options, as key-value pairs, to pass to the chosen solver - solver: - type: string - default: cbc - description: Solver to use. Any solvers that have Pyomo interfaces can be used. Refer to the Pyomo documentation for the latest list. - zero_threshold: - type: number - default: 1e-10 - description: On postprocessing the optimisation results, values smaller than this threshold will be considered as optimisation artefacts and will be set to zero. - shadow_prices: - type: array - uniqueItems: true - items: - type: string - description: Names of model constraints. - default: [] - description: List of constraints for which to extract shadow prices. Shadow prices will be added as variables to the model results as `shadow_price_{constraintname}`. parameters: type: [object, "null"] diff --git a/src/calliope/example_models/national_scale/scenarios.yaml b/src/calliope/example_models/national_scale/scenarios.yaml index 58a3dc81..0e34f8f9 100644 --- a/src/calliope/example_models/national_scale/scenarios.yaml +++ b/src/calliope/example_models/national_scale/scenarios.yaml @@ -70,8 +70,9 @@ overrides: init.time_subset: ["2005-01-01", "2005-01-10"] build: mode: operate - operate_window: 12h - operate_horizon: 24h + operate: + window: 12h + horizon: 24h nodes: region1.techs.ccgt.flow_cap: 30000 diff --git a/src/calliope/example_models/urban_scale/scenarios.yaml b/src/calliope/example_models/urban_scale/scenarios.yaml index 12d114cb..d754496d 100644 --- a/src/calliope/example_models/urban_scale/scenarios.yaml +++ b/src/calliope/example_models/urban_scale/scenarios.yaml @@ -51,8 +51,9 @@ overrides: init.time_subset: ["2005-07-01", "2005-07-10"] build: mode: operate - operate_window: 2h - operate_horizon: 48h + operate: + window: 2h + horizon: 48h nodes: X1: diff --git a/src/calliope/model.py b/src/calliope/model.py index 5d4a36b5..10831347 100644 --- a/src/calliope/model.py +++ b/src/calliope/model.py @@ -12,20 +12,17 @@ import xarray as xr import calliope -from calliope import backend, exceptions, io, preprocess +from calliope import backend, config, exceptions, io, preprocess from calliope.attrdict import AttrDict from calliope.postprocess import postprocess as postprocess_results -from calliope.preprocess.data_tables import DataTable from calliope.preprocess.model_data import ModelDataFactory from calliope.util.logging import log_time from calliope.util.schema import ( CONFIG_SCHEMA, MODEL_SCHEMA, extract_from_schema, - update_then_validate_config, validate_dict, ) -from calliope.util.tools import relative_path if TYPE_CHECKING: from calliope.backend.backend_model import BackendModel @@ -43,7 +40,7 @@ class Model: """A Calliope Model.""" _TS_OFFSET = pd.Timedelta(1, unit="nanoseconds") - 
ATTRS_SAVED = ("_def_path", "applied_math") + ATTRS_SAVED = ("applied_math", "config", "def_path") def __init__( self, @@ -74,11 +71,12 @@ def __init__( **kwargs: initialisation overrides. """ self._timings: dict = {} - self.config: AttrDict + self.config: config.CalliopeConfig self.defaults: AttrDict self.applied_math: preprocess.CalliopeMath - self._def_path: str | None = None self.backend: BackendModel + self.def_path: str | None = None + self._start_window_idx: int = 0 self._is_built: bool = False self._is_solved: bool = False @@ -88,11 +86,15 @@ def __init__( LOGGER, self._timings, "model_creation", comment="Model: initialising" ) if isinstance(model_definition, xr.Dataset): + if kwargs: + raise exceptions.ModelError( + "Cannot apply initialisation configuration overrides when loading data from an xarray Dataset." + ) self._init_from_model_data(model_definition) else: if not isinstance(model_definition, dict): # Only file definitions allow relative files. - self._def_path = str(model_definition) + self.def_path = str(model_definition) self._init_from_model_definition( model_definition, scenario, override_dict, data_table_dfs, **kwargs ) @@ -133,7 +135,7 @@ def is_solved(self): def _init_from_model_definition( self, - model_definition: dict | str, + model_definition: dict | str | Path, scenario: str | None, override_dict: dict | None, data_table_dfs: dict[str, pd.DataFrame] | None, @@ -152,7 +154,7 @@ def _init_from_model_definition( model_definition, scenario, override_dict ) model_def_full.union({"config.init": kwargs}, allow_override=True) - # First pass to check top-level keys are all good + # First pass to check top-level keys are all good. FIXME-config: remove after pydantic is ready validate_dict(model_def_full, CONFIG_SCHEMA, "Model definition") log_time( @@ -161,34 +163,24 @@ def _init_from_model_definition( "model_run_creation", comment="Model: preprocessing stage 1 (model_run)", ) - model_config = AttrDict(extract_from_schema(CONFIG_SCHEMA, "default")) - model_config.union(model_def_full.pop("config"), allow_override=True) - - init_config = update_then_validate_config("init", model_config) - - if init_config["time_cluster"] is not None: - init_config["time_cluster"] = relative_path( - self._def_path, init_config["time_cluster"] - ) + model_config = config.CalliopeConfig(**model_def_full.pop("config")) param_metadata = {"default": extract_from_schema(MODEL_SCHEMA, "default")} attributes = { - "calliope_version_defined": init_config["calliope_version"], + "calliope_version_defined": model_config.init.calliope_version, "calliope_version_initialised": calliope.__version__, "applied_overrides": applied_overrides, "scenario": scenario, "defaults": param_metadata["default"], } - data_tables: list[DataTable] = [] - for table_name, table_dict in model_def_full.pop("data_tables", {}).items(): - data_tables.append( - DataTable( - init_config, table_name, table_dict, data_table_dfs, self._def_path - ) - ) - + # FIXME-config: remove config input once model_def_full uses pydantic model_data_factory = ModelDataFactory( - init_config, model_def_full, data_tables, attributes, param_metadata + model_config.init, + model_def_full, + self.def_path, + data_table_dfs, + attributes, + param_metadata, ) model_data_factory.build() @@ -201,9 +193,9 @@ def _init_from_model_definition( comment="Model: preprocessing stage 2 (model_data)", ) - self._add_observed_dict("config", model_config) + self._model_data.attrs["name"] = model_config.init.name + self.config = model_config - 
self._model_data.attrs["name"] = init_config["name"] log_time( LOGGER, self._timings, @@ -220,15 +212,14 @@ def _init_from_model_data(self, model_data: xr.Dataset) -> None: model_data (xr.Dataset): Model dataset with input parameters as arrays and configuration stored in the dataset attributes dictionary. """ - if "_def_path" in model_data.attrs: - self._def_path = model_data.attrs.pop("_def_path") if "applied_math" in model_data.attrs: self.applied_math = preprocess.CalliopeMath.from_dict( model_data.attrs.pop("applied_math") ) + if "config" in model_data.attrs: + self.config = config.CalliopeConfig(**model_data.attrs.pop("config")) self._model_data = model_data - self._add_model_data_methods() if self.results: self._is_solved = True @@ -240,47 +231,6 @@ def _init_from_model_data(self, model_data: xr.Dataset) -> None: comment="Model: loaded model_data", ) - def _add_model_data_methods(self): - """Add observed data to `model`. - - 1. Filter model dataset to produce views on the input/results data - 2. Add top-level configuration dictionaries simultaneously to the model data attributes and as attributes of this class. - - """ - self._add_observed_dict("config") - - def _add_observed_dict(self, name: str, dict_to_add: dict | None = None) -> None: - """Add the same dictionary as property of model object and an attribute of the model xarray dataset. - - Args: - name (str): - Name of dictionary which will be set as the model property name and - (if necessary) the dataset attribute name. - dict_to_add (dict | None, optional): - If given, set as both the model property and the dataset attribute, - otherwise set an existing dataset attribute as a model property of the - same name. Defaults to None. - - Raises: - exceptions.ModelError: If `dict_to_add` is not given, it must be an attribute of model data. - TypeError: `dict_to_add` must be a dictionary. - """ - if dict_to_add is None: - try: - dict_to_add = self._model_data.attrs[name] - except KeyError: - raise exceptions.ModelError( - f"Expected the model property `{name}` to be a dictionary attribute of the model dataset. If you are loading the model from a NetCDF file, ensure it is a valid Calliope model." - ) - if not isinstance(dict_to_add, dict): - raise TypeError( - f"Attempted to add dictionary property `{name}` to model, but received argument of type `{type(dict_to_add).__name__}`" - ) - else: - dict_to_add = AttrDict(dict_to_add) - self._model_data.attrs[name] = dict_to_add - setattr(self, name, dict_to_add) - def build( self, force: bool = False, add_math_dict: dict | None = None, **kwargs ) -> None: @@ -307,30 +257,26 @@ def build( comment="Model: backend build starting", ) - backend_config = {**self.config["build"], **kwargs} - mode = backend_config["mode"] + self.config = self.config.update({"build": kwargs}) + mode = self.config.build.mode if mode == "operate": if not self._model_data.attrs["allow_operate_mode"]: raise exceptions.ModelError( "Unable to run this model in operate (i.e. dispatch) mode, probably because " "there exist non-uniform timesteps (e.g. 
from time clustering)" ) - start_window_idx = backend_config.pop("start_window_idx", 0) - backend_input = self._prepare_operate_mode_inputs( - start_window_idx, **backend_config - ) + backend_input = self._prepare_operate_mode_inputs(self.config.build.operate) else: backend_input = self._model_data - init_math_list = [] if backend_config.get("ignore_mode_math") else [mode] + init_math_list = [] if self.config.build.ignore_mode_math else [mode] end_math_list = [] if add_math_dict is None else [add_math_dict] - full_math_list = init_math_list + backend_config["add_math"] + end_math_list + full_math_list = init_math_list + self.config.build.add_math + end_math_list LOGGER.debug(f"Math preprocessing | Loading math: {full_math_list}") - model_math = preprocess.CalliopeMath(full_math_list, self._def_path) + model_math = preprocess.CalliopeMath(full_math_list, self.def_path) - backend_name = backend_config.pop("backend") self.backend = backend.get_model_backend( - backend_name, backend_input, model_math, **backend_config + self.config.build, backend_input, model_math ) self.backend.add_optimisation_components() @@ -366,14 +312,14 @@ def solve(self, force: bool = False, warmstart: bool = False, **kwargs) -> None: exceptions.ModelError: Cannot run the model if there are already results loaded, unless `force` is True. exceptions.ModelError: Some preprocessing steps will stop a run mode of "operate" from being possible. """ - # Check that results exist and are non-empty - if not self._is_built: + if not self.is_built: raise exceptions.ModelError( "You must build the optimisation problem (`.build()`) " "before you can run it." ) - if hasattr(self, "results"): + to_drop = [] + if hasattr(self, "results"): # Check that results exist and are non-empty if self.results.data_vars and not force: raise exceptions.ModelError( "This model object already has results. 
" @@ -382,26 +328,25 @@ def solve(self, force: bool = False, warmstart: bool = False, **kwargs) -> None: ) else: to_drop = self.results.data_vars - else: - to_drop = [] - run_mode = self.backend.inputs.attrs["config"]["build"]["mode"] + self.config = self.config.update({"solve": kwargs}) + + shadow_prices = self.config.solve.shadow_prices + self.backend.shadow_prices.track_constraints(shadow_prices) + + mode = self.config.build.mode self._model_data.attrs["timestamp_solve_start"] = log_time( LOGGER, self._timings, "solve_start", - comment=f"Optimisation model | starting model in {run_mode} mode.", + comment=f"Optimisation model | starting model in {mode} mode.", ) - - solver_config = update_then_validate_config("solve", self.config, **kwargs) - - shadow_prices = solver_config.get("shadow_prices", []) - self.backend.shadow_prices.track_constraints(shadow_prices) - - if run_mode == "operate": - results = self._solve_operate(**solver_config) + if mode == "operate": + results = self._solve_operate(**self.config.solve.model_dump()) else: - results = self.backend._solve(warmstart=warmstart, **solver_config) + results = self.backend._solve( + warmstart=warmstart, **self.config.solve.model_dump() + ) log_time( LOGGER, @@ -414,7 +359,7 @@ def solve(self, force: bool = False, warmstart: bool = False, **kwargs) -> None: # Add additional post-processed result variables to results if results.attrs["termination_condition"] in ["optimal", "feasible"]: results = postprocess_results.postprocess_model_results( - results, self._model_data + results, self._model_data, self.config.solve.zero_threshold ) log_time( @@ -431,7 +376,6 @@ def solve(self, force: bool = False, warmstart: bool = False, **kwargs) -> None: self._model_data = xr.merge( [results, self._model_data], compat="override", combine_attrs="no_conflicts" ) - self._add_model_data_methods() self._model_data.attrs["timestamp_solve_complete"] = log_time( LOGGER, @@ -443,12 +387,10 @@ def solve(self, force: bool = False, warmstart: bool = False, **kwargs) -> None: self._is_solved = True - def run(self, force_rerun=False, **kwargs): + def run(self, force_rerun=False): """Run the model. If ``force_rerun`` is True, any existing results will be overwritten. - - Additional kwargs are passed to the backend. """ exceptions.warn( "`run()` is deprecated and will be removed in a " @@ -462,7 +404,9 @@ def to_netcdf(self, path): """Save complete model data (inputs and, if available, results) to a NetCDF file at the given `path`.""" saved_attrs = {} for attr in set(self.ATTRS_SAVED) & set(self.__dict__.keys()): - if not isinstance(getattr(self, attr), str | list | None): + if attr == "config": + saved_attrs[attr] = self.config.model_dump() + elif not isinstance(getattr(self, attr), str | list | None): saved_attrs[attr] = dict(getattr(self, attr)) else: saved_attrs[attr] = getattr(self, attr) @@ -504,28 +448,24 @@ def info(self) -> str: return "\n".join(info_strings) def _prepare_operate_mode_inputs( - self, start_window_idx: int = 0, **config_kwargs + self, operate_config: config.BuildOperate ) -> xr.Dataset: """Slice the input data to just the length of operate mode time horizon. Args: - start_window_idx (int, optional): - Set the operate `window` to start at, based on integer index. - This is used when re-initialising the backend model for shorter time horizons close to the end of the model period. - Defaults to 0. - **config_kwargs: kwargs related to operate mode configuration. + operate_config (config.BuildOperate): operate mode configuration options. 
Returns: xr.Dataset: Slice of input data. """ - window = config_kwargs["operate_window"] - horizon = config_kwargs["operate_horizon"] self._model_data.coords["windowsteps"] = pd.date_range( self.inputs.timesteps[0].item(), self.inputs.timesteps[-1].item(), - freq=window, + freq=operate_config.window, + ) + horizonsteps = self._model_data.coords["windowsteps"] + pd.Timedelta( + operate_config.horizon ) - horizonsteps = self._model_data.coords["windowsteps"] + pd.Timedelta(horizon) # We require an offset because pandas / xarray slicing is _inclusive_ of both endpoints # where we only want it to be inclusive of the left endpoint. # Except in the last time horizon, where we want it to include the right endpoint. @@ -535,11 +475,11 @@ def _prepare_operate_mode_inputs( self._model_data.coords["horizonsteps"] = clipped_horizonsteps - self._TS_OFFSET sliced_inputs = self._model_data.sel( timesteps=slice( - self._model_data.windowsteps[start_window_idx], - self._model_data.horizonsteps[start_window_idx], + self._model_data.windowsteps[self._start_window_idx], + self._model_data.horizonsteps[self._start_window_idx], ) ) - if config_kwargs.get("operate_use_cap_results", False): + if operate_config.use_cap_results: to_parameterise = extract_from_schema(MODEL_SCHEMA, "x-operate-param") if not self._is_solved: raise exceptions.ModelError( @@ -562,10 +502,7 @@ def _solve_operate(self, **solver_config) -> xr.Dataset: """ if self.backend.inputs.timesteps[0] != self._model_data.timesteps[0]: LOGGER.info("Optimisation model | Resetting model to first time window.") - self.build( - force=True, - **{"mode": "operate", **self.backend.inputs.attrs["config"]["build"]}, - ) + self.build(force=True) LOGGER.info("Optimisation model | Running first time window.") @@ -592,11 +529,8 @@ def _solve_operate(self, **solver_config) -> xr.Dataset: "Optimisation model | Reaching the end of the timeseries. " "Re-building model with shorter time horizon." 
)
-            self.build(
-                force=True,
-                start_window_idx=idx + 1,
-                **self.backend.inputs.attrs["config"]["build"],
-            )
+            self._start_window_idx = idx + 1
+            self.build(force=True)
            else:
                self.backend._dataset.coords["timesteps"] = new_inputs.timesteps
                self.backend.inputs.coords["timesteps"] = new_inputs.timesteps
@@ -613,6 +547,7 @@ def _solve_operate(self, **solver_config) -> xr.Dataset:

            step_results = self.backend._solve(warmstart=False, **solver_config)

+            self._start_window_idx = 0
            results_list.append(step_results.sel(timesteps=slice(windowstep, None)))
        results = xr.concat(results_list, dim="timesteps", combine_attrs="no_conflicts")
        results.attrs["termination_condition"] = ",".join(
diff --git a/src/calliope/postprocess/math_documentation.py b/src/calliope/postprocess/math_documentation.py
index ebfb3193..e1210f64 100644
--- a/src/calliope/postprocess/math_documentation.py
+++ b/src/calliope/postprocess/math_documentation.py
@@ -30,7 +30,7 @@ def __init__(
        """
        self.name: str = model.name + " math"
        self.backend: LatexBackendModel = LatexBackendModel(
-            model._model_data, model.applied_math, include, **kwargs
+            model._model_data, model.applied_math, model.config.build, include
        )
        self.backend.add_optimisation_components()
diff --git a/src/calliope/postprocess/postprocess.py b/src/calliope/postprocess/postprocess.py
index 402b928e..327b1ce2 100644
--- a/src/calliope/postprocess/postprocess.py
+++ b/src/calliope/postprocess/postprocess.py
@@ -11,7 +11,7 @@


def postprocess_model_results(
-    results: xr.Dataset, model_data: xr.Dataset
+    results: xr.Dataset, model_data: xr.Dataset, zero_threshold: float
) -> xr.Dataset:
    """Post-processing of model results.
@@ -22,11 +22,11 @@
    Args:
        results (xarray.Dataset): Output from the solver backend.
        model_data (xarray.Dataset): Calliope model data.
+        zero_threshold (float): Numbers below this value will be assumed to be zero.

    Returns:
        xarray.Dataset: input-results dataset.
    """
-    zero_threshold = model_data.config.solve.zero_threshold
    results["capacity_factor"] = capacity_factor(results, model_data)
    results["systemwide_capacity_factor"] = capacity_factor(
        results, model_data, systemwide=True
diff --git a/src/calliope/preprocess/data_tables.py b/src/calliope/preprocess/data_tables.py
index d10a623a..5a6b8acb 100644
--- a/src/calliope/preprocess/data_tables.py
+++ b/src/calliope/preprocess/data_tables.py
@@ -50,22 +50,20 @@ class DataTable:
    def __init__(
        self,
-        model_config: dict,
        table_name: str,
        data_table: DataTableDict,
        data_table_dfs: dict[str, pd.DataFrame] | None = None,
-        model_definition_path: Path | None = None,
+        model_definition_path: str | Path | None = None,
    ):
        """Load and format a data table from file / in-memory object.

        Args:
-            model_config (dict): Model initialisation configuration dictionary.
            table_name (str): name of the data table.
            data_table (DataTableDict): Data table definition dictionary.
            data_table_dfs (dict[str, pd.DataFrame] | None, optional):
                If given, a dictionary mapping table names in `data_table` to in-memory pandas DataFrames.
                Defaults to None.
-            model_definition_path (Path | None, optional):
+            model_definition_path (str | Path | None, optional):
                If given, the path to the model definition YAML file, relative to which data table filepaths will be set.
                If None, relative data table filepaths will be considered relative to the current working directory.
                Defaults to None.
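
To illustrate the slimmed-down `DataTable` interface above — a minimal sketch, assuming a CSV-backed table whose definition keys (`data`, `rows`, `columns`) follow the existing data table YAML structure; the table name and file paths are illustrative:

```python
from calliope.preprocess.data_tables import DataTable

# Hypothetical table definition: the `init` config is no longer passed in;
# only the model definition path is needed, to resolve relative file paths.
table_def = {
    "data": "data_tables/demand.csv",
    "rows": "timesteps",
    "columns": "techs",
}
table = DataTable(
    "demand", table_def, data_table_dfs=None, model_definition_path="model.yaml"
)
```
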
@@ -74,7 +72,6 @@ def __init__( self.input = data_table self.dfs = data_table_dfs if data_table_dfs is not None else dict() self.model_definition_path = model_definition_path - self.config = model_config self.columns = self._listify_if_defined("columns") self.index = self._listify_if_defined("rows") diff --git a/src/calliope/preprocess/model_data.py index b94e1c9f..65597798 100644 --- a/src/calliope/preprocess/model_data.py +++ b/src/calliope/preprocess/model_data.py @@ -5,6 +5,7 @@ import itertools import logging from copy import deepcopy +from pathlib import Path from typing import Literal import numpy as np @@ -15,9 +16,10 @@ from calliope import exceptions from calliope.attrdict import AttrDict +from calliope.config import Init from calliope.preprocess import data_tables, time from calliope.util.schema import MODEL_SCHEMA, validate_dict -from calliope.util.tools import listify +from calliope.util.tools import listify, relative_path LOGGER = logging.getLogger(__name__) @@ -70,9 +72,10 @@ class ModelDataFactory: def __init__( self, - model_config: dict, - model_definition: ModelDefinition, - data_tables: list[data_tables.DataTable], + init_config: Init, + model_definition: AttrDict, + definition_path: str | Path | None, + data_table_dfs: dict[str, pd.DataFrame] | None, attributes: dict, param_attributes: dict[str, dict], ): @@ -81,17 +84,28 @@ def __init__( This includes resampling/clustering timeseries data as necessary. Args: - model_config (dict): Model initialisation configuration (i.e., `config.init`). + init_config (Init): Model initialisation configuration (i.e., `config.init`). model_definition (ModelDefinition): Definition of model nodes and technologies, and their potential `templates`. - data_tables (list[data_tables.DataTable]): Pre-loaded data tables that will be used to initialise the dataset before handling definitions given in `model_definition`. + definition_path (str | Path | None, optional): Path to the main model definition file. Defaults to None. + data_table_dfs (dict[str, pd.DataFrame] | None, optional): Dataframes with model data. Defaults to None. attributes (dict): Attributes to attach to the model Dataset. param_attributes (dict[str, dict]): Attributes to attach to the generated model DataArrays. """ - self.config: dict = model_config + self.config: Init = init_config self.model_definition: ModelDefinition = model_definition.copy() self.dataset = xr.Dataset(attrs=AttrDict(attributes)) self.tech_data_from_tables = AttrDict() - self.init_from_data_tables(data_tables) + self.definition_path: str | Path | None = definition_path + tables = [] + for table_name, table_dict in model_definition.get_key( "data_tables", {} ).items(): + tables.append( + data_tables.DataTable( table_name, table_dict, data_table_dfs, self.definition_path ) + ) + self.init_from_data_tables(tables) flipped_attributes: dict[str, dict] = dict() for key, val in param_attributes.items(): @@ -243,7 +257,7 @@ def update_time_dimension_and_params(self): raise exceptions.ModelError( "Must define at least one timeseries parameter in a Calliope model."
) - time_subset = self.config.get("time_subset", None) + time_subset = self.config.time_subset if time_subset is not None: self.dataset = time.subset_timeseries(self.dataset, time_subset) self.dataset = time.add_inferred_time_params(self.dataset) @@ -251,11 +265,13 @@ def update_time_dimension_and_params(self): # By default, the model allows operate mode self.dataset.attrs["allow_operate_mode"] = 1 - if self.config["time_resample"] is not None: - self.dataset = time.resample(self.dataset, self.config["time_resample"]) - if self.config["time_cluster"] is not None: + if self.config.time_resample is not None: + self.dataset = time.resample(self.dataset, self.config.time_resample) + if self.config.time_cluster is not None: self.dataset = time.cluster( - self.dataset, self.config["time_cluster"], self.config["time_format"] + self.dataset, + relative_path(self.definition_path, self.config.time_cluster), + self.config.time_format, ) def clean_data_from_undefined_members(self): @@ -323,7 +339,7 @@ def add_link_distances(self): self.dataset.longitude.sel(nodes=node2).item(), )["s12"] distance_array = pd.Series(distances).rename_axis(index="techs").to_xarray() - if self.config["distance_unit"] == "km": + if self.config.distance_unit == "km": distance_array /= 1000 else: LOGGER.debug( @@ -656,7 +672,7 @@ def _add_to_dataset(self, to_add: xr.Dataset, id_: str): """ to_add_numeric_dims = self._update_numeric_dims(to_add, id_) to_add_numeric_ts_dims = time.timeseries_to_datetime( - to_add_numeric_dims, self.config["time_format"], id_ + to_add_numeric_dims, self.config.time_format, id_ ) self.dataset = xr.merge( [to_add_numeric_ts_dims, self.dataset], diff --git a/src/calliope/preprocess/model_definition.py b/src/calliope/preprocess/model_definition.py index 8a7f86da..24184867 100644 --- a/src/calliope/preprocess/model_definition.py +++ b/src/calliope/preprocess/model_definition.py @@ -62,8 +62,6 @@ def _load_scenario_overrides( override_dict (dict | None, optional): Overrides to apply _after_ `scenario` overrides. Defaults to None. - **kwargs: - initialisation overrides. Returns: tuple[AttrDict, str]: diff --git a/src/calliope/util/schema.py b/src/calliope/util/schema.py index 86613580..d6ef3692 100644 --- a/src/calliope/util/schema.py +++ b/src/calliope/util/schema.py @@ -25,20 +25,6 @@ def reset(): importlib.reload(sys.modules[__name__]) -def update_then_validate_config( - config_key: str, config_dict: AttrDict, **update_kwargs -) -> AttrDict: - """Return an updated version of the configuration schema.""" - to_validate = deepcopy(config_dict[config_key]) - to_validate.union(update_kwargs, allow_override=True) - validate_dict( - {"config": {config_key: to_validate}}, - CONFIG_SCHEMA, - f"`{config_key}` configuration", - ) - return to_validate - - def update_model_schema( top_level_property: Literal["nodes", "techs", "parameters"], new_entries: dict, diff --git a/src/calliope/util/tools.py b/src/calliope/util/tools.py index 51920d88..e30c854c 100644 --- a/src/calliope/util/tools.py +++ b/src/calliope/util/tools.py @@ -11,7 +11,7 @@ T = TypeVar("T") -def relative_path(base_path_file, path) -> Path: +def relative_path(base_path_file: str | Path | None, path: str | Path) -> Path: """Path standardization. If ``path`` is not absolute, it is interpreted as relative to the @@ -47,3 +47,27 @@ def listify(var: Any) -> list: else: var = [var] return var + + +def get_dot_attr(var: Any, attr: str) -> Any: + """Get nested attributes in dot notation. + + Works for nested objects (e.g., dictionaries, pydantic models). 
+ + Args: + var (Any): Object to extract nested attributes from. + attr (str): Name of the attribute (e.g., "foo.bar"). + + Returns: + Any: Value at the given location. + """ + levels = attr.split(".", 1) + + if isinstance(var, dict): + value = var[levels[0]] + else: + value = getattr(var, levels[0]) + + if len(levels) > 1: + value = get_dot_attr(value, levels[1]) + return value diff --git a/tests/common/test_model/energy_cap_per_storage_cap.yaml index ec1a81f8..4b2d34a8 100644 --- a/tests/common/test_model/energy_cap_per_storage_cap.yaml +++ b/tests/common/test_model/energy_cap_per_storage_cap.yaml @@ -50,5 +50,5 @@ overrides: techs.my_storage.flow_cap_per_storage_cap_min: 1 config: build.mode: operate - solve.operate_window: 24 - solve.operate_horizon: 24 + build.operate.window: 24 + build.operate.horizon: 24 diff --git a/tests/common/test_model/scenarios.yaml index f1531511..40993d20 100644 --- a/tests/common/test_model/scenarios.yaml +++ b/tests/common/test_model/scenarios.yaml @@ -415,8 +415,8 @@ overrides: config.build.mode: operate config.init.time_subset: ["2005-01-01", "2005-01-02"] config.build.ensure_feasibility: true - config.build.operate_window: 6h - config.build.operate_horizon: 12h + config.build.operate.window: 6h + config.build.operate.horizon: 12h investment_costs: templates: diff --git a/tests/common/util.py index 9d658637..5c4a2105 100644 --- a/tests/common/util.py +++ b/tests/common/util.py @@ -95,9 +95,7 @@ def build_lp( math (dict | None, optional): All constraint/global expression/objective math to apply. Defaults to None. backend_name (Literal["pyomo"], optional): Backend to use to create the LP file. Defaults to "pyomo".
""" - math = calliope.preprocess.CalliopeMath( - ["plan", *model.config.build.get("add_math", [])] - ) + math = calliope.preprocess.CalliopeMath(["plan", *model.config.build.add_math]) math_to_add = calliope.AttrDict() if isinstance(math_data, dict): diff --git a/tests/conftest.py b/tests/conftest.py index 3d4694c5..5c3a9e8b 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -5,10 +5,11 @@ import pytest import xarray as xr +from calliope import config from calliope.attrdict import AttrDict from calliope.backend import latex_backend_model, pyomo_backend_model from calliope.preprocess import CalliopeMath -from calliope.util.schema import CONFIG_SCHEMA, MODEL_SCHEMA, extract_from_schema +from calliope.util.schema import MODEL_SCHEMA, extract_from_schema from .common.util import build_test_model as build_model @@ -32,8 +33,8 @@ def foreach(request): @pytest.fixture(scope="session") -def config_defaults(): - return AttrDict(extract_from_schema(CONFIG_SCHEMA, "default")) +def default_config(): + return config.CalliopeConfig() @pytest.fixture(scope="session") @@ -172,7 +173,7 @@ def dummy_model_math(): @pytest.fixture(scope="module") -def dummy_model_data(config_defaults, model_defaults): +def dummy_model_data(model_defaults): coords = { dim: ( ["foo", "bar"] @@ -279,20 +280,6 @@ def dummy_model_data(config_defaults, model_defaults): for param in model_data.data_vars.values(): param.attrs["is_result"] = 0 - dummy_config = AttrDict( - { - "build": { - "foo": True, - "FOO": "baz", - "foo1": np.inf, - "bar": {"foobar": "baz"}, - "a_b": 0, - "b_a": [1, 2], - } - } - ) - dummy_config.union(config_defaults) - model_data.attrs["config"] = dummy_config model_data.attrs["defaults"] = AttrDict( { @@ -344,20 +331,24 @@ def populate_backend_model(backend): @pytest.fixture(scope="module") -def dummy_pyomo_backend_model(dummy_model_data, dummy_model_math): - backend = pyomo_backend_model.PyomoBackendModel(dummy_model_data, dummy_model_math) +def dummy_pyomo_backend_model(dummy_model_data, dummy_model_math, default_config): + backend = pyomo_backend_model.PyomoBackendModel( + dummy_model_data, dummy_model_math, default_config.build + ) return populate_backend_model(backend) @pytest.fixture(scope="module") -def dummy_latex_backend_model(dummy_model_data, dummy_model_math): - backend = latex_backend_model.LatexBackendModel(dummy_model_data, dummy_model_math) +def dummy_latex_backend_model(dummy_model_data, dummy_model_math, default_config): + backend = latex_backend_model.LatexBackendModel( + dummy_model_data, dummy_model_math, default_config.build + ) return populate_backend_model(backend) @pytest.fixture(scope="class") -def valid_latex_backend(dummy_model_data, dummy_model_math): +def valid_latex_backend(dummy_model_data, dummy_model_math, default_config): backend = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math, include="valid" + dummy_model_data, dummy_model_math, default_config.build, include="valid" ) return populate_backend_model(backend) diff --git a/tests/test_backend_latex_backend.py b/tests/test_backend_latex_backend.py index e28b0830..f55da6a3 100644 --- a/tests/test_backend_latex_backend.py +++ b/tests/test_backend_latex_backend.py @@ -9,6 +9,14 @@ from .common.util import check_error_or_warning +@pytest.fixture +def temp_dummy_latex_backend_model(dummy_model_data, dummy_model_math, default_config): + """Function scoped model definition to avoid cross-test contamination.""" + return latex_backend_model.LatexBackendModel( + dummy_model_data, dummy_model_math, 
default_config.build + ) + + class TestLatexBackendModel: def test_inputs(self, dummy_latex_backend_model, dummy_model_data): assert dummy_latex_backend_model.inputs.equals(dummy_model_data) @@ -406,14 +414,9 @@ def test_create_obj_list(self, dummy_latex_backend_model): ), ], ) - def test_generate_math_doc( - self, dummy_model_data, dummy_model_math, format, expected - ): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) - backend_model._add_all_inputs_as_parameters() - backend_model.add_global_expression( + def test_generate_math_doc(self, temp_dummy_latex_backend_model, format, expected): + temp_dummy_latex_backend_model._add_all_inputs_as_parameters() + temp_dummy_latex_backend_model.add_global_expression( "expr", { "equations": [{"expression": "no_dims + 2"}], @@ -421,14 +424,11 @@ def test_generate_math_doc( "default": 0, }, ) - doc = backend_model.generate_math_doc(format=format) + doc = temp_dummy_latex_backend_model.generate_math_doc(format=format) assert doc == expected - def test_generate_math_doc_no_params(self, dummy_model_data, dummy_model_math): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) - backend_model.add_global_expression( + def test_generate_math_doc_no_params(self, temp_dummy_latex_backend_model): + temp_dummy_latex_backend_model.add_global_expression( "expr", { "equations": [{"expression": "1 + 2"}], @@ -436,7 +436,7 @@ def test_generate_math_doc_no_params(self, dummy_model_data, dummy_model_math): "default": 0, }, ) - doc = backend_model.generate_math_doc(format="md") + doc = temp_dummy_latex_backend_model.generate_math_doc(format="md") assert doc == textwrap.dedent( r""" @@ -457,12 +457,9 @@ def test_generate_math_doc_no_params(self, dummy_model_data, dummy_model_math): ) def test_generate_math_doc_mkdocs_features_tabs( - self, dummy_model_data, dummy_model_math + self, temp_dummy_latex_backend_model ): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) - backend_model.add_global_expression( + temp_dummy_latex_backend_model.add_global_expression( "expr", { "equations": [{"expression": "1 + 2"}], @@ -470,7 +467,9 @@ def test_generate_math_doc_mkdocs_features_tabs( "default": 0, }, ) - doc = backend_model.generate_math_doc(format="md", mkdocs_features=True) + doc = temp_dummy_latex_backend_model.generate_math_doc( + format="md", mkdocs_features=True + ) assert doc == textwrap.dedent( r""" @@ -500,13 +499,10 @@ def test_generate_math_doc_mkdocs_features_tabs( ) def test_generate_math_doc_mkdocs_features_admonition( - self, dummy_model_data, dummy_model_math + self, temp_dummy_latex_backend_model ): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) - backend_model._add_all_inputs_as_parameters() - backend_model.add_global_expression( + temp_dummy_latex_backend_model._add_all_inputs_as_parameters() + temp_dummy_latex_backend_model.add_global_expression( "expr", { "equations": [{"expression": "no_dims + 1"}], @@ -514,7 +510,9 @@ def test_generate_math_doc_mkdocs_features_admonition( "default": 0, }, ) - doc = backend_model.generate_math_doc(format="md", mkdocs_features=True) + doc = temp_dummy_latex_backend_model.generate_math_doc( + format="md", mkdocs_features=True + ) assert doc == textwrap.dedent( r""" @@ -558,13 +556,12 @@ def test_generate_math_doc_mkdocs_features_admonition( ) def test_generate_math_doc_mkdocs_features_not_in_md( - self, dummy_model_data, dummy_model_math 
+ self, temp_dummy_latex_backend_model ): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) with pytest.raises(exceptions.ModelError) as excinfo: - backend_model.generate_math_doc(format="rst", mkdocs_features=True) + temp_dummy_latex_backend_model.generate_math_doc( + format="rst", mkdocs_features=True + ) assert check_error_or_warning( excinfo, @@ -679,12 +676,9 @@ def test_get_variable_bounds_string(self, dummy_latex_backend_model): } assert refs == {"multi_dim_var"} - def test_param_type(self, dummy_model_data, dummy_model_math): - backend_model = latex_backend_model.LatexBackendModel( - dummy_model_data, dummy_model_math - ) - backend_model._add_all_inputs_as_parameters() - backend_model.add_global_expression( + def test_param_type(self, temp_dummy_latex_backend_model): + temp_dummy_latex_backend_model._add_all_inputs_as_parameters() + temp_dummy_latex_backend_model.add_global_expression( "expr", { "equations": [{"expression": "1 + flow_cap_max"}], @@ -692,7 +686,7 @@ def test_param_type(self, dummy_model_data, dummy_model_math): "default": 0, }, ) - doc = backend_model.generate_math_doc(format="md") + doc = temp_dummy_latex_backend_model.generate_math_doc(format="md") assert doc == textwrap.dedent( r""" diff --git a/tests/test_backend_module.py b/tests/test_backend_module.py index f220a3b9..93e3ca46 100644 --- a/tests/test_backend_module.py +++ b/tests/test_backend_module.py @@ -2,7 +2,7 @@ import pytest -from calliope import backend +from calliope import AttrDict, backend from calliope.backend.backend_model import BackendModel from calliope.exceptions import BackendError @@ -10,8 +10,9 @@ @pytest.mark.parametrize("valid_backend", ["pyomo", "gurobi"]) def test_valid_model_backend(simple_supply, valid_backend): """Requesting a valid model backend must result in a backend instance.""" + build_config = simple_supply.config.build.update({"backend": valid_backend}) backend_obj = backend.get_model_backend( - valid_backend, simple_supply._model_data, simple_supply.applied_math + build_config, simple_supply._model_data, simple_supply.applied_math ) assert isinstance(backend_obj, BackendModel) @@ -19,7 +20,8 @@ def test_valid_model_backend(simple_supply, valid_backend): @pytest.mark.parametrize("spam", ["not_real", None, True, 1]) def test_invalid_model_backend(spam, simple_supply): """Backend requests should catch invalid setups.""" + invalid_config = AttrDict({"backend": spam}) with pytest.raises(BackendError): backend.get_model_backend( - spam, simple_supply._model_data, simple_supply.applied_math + invalid_config, simple_supply._model_data, simple_supply.applied_math ) diff --git a/tests/test_backend_parsing.py b/tests/test_backend_parsing.py index 8847738c..be4189a2 100644 --- a/tests/test_backend_parsing.py +++ b/tests/test_backend_parsing.py @@ -219,14 +219,18 @@ def _equation_slice_obj(name): @pytest.fixture -def dummy_backend_interface(dummy_model_data, dummy_model_math): +def dummy_backend_interface(dummy_model_data, dummy_model_math, default_config): # ignore the need to define the abstract methods from backend_model.BackendModel with patch.multiple(backend_model.BackendModel, __abstractmethods__=set()): class DummyBackendModel(backend_model.BackendModel): def __init__(self): backend_model.BackendModel.__init__( - self, dummy_model_data, dummy_model_math, instance=None + self, + dummy_model_data, + dummy_model_math, + default_config.build, + instance=None, ) self._dataset = dummy_model_data.copy(deep=True) diff --git 
a/tests/test_backend_pyomo.py b/tests/test_backend_pyomo.py index 710d147a..3e9b4333 100755 --- a/tests/test_backend_pyomo.py +++ b/tests/test_backend_pyomo.py @@ -1636,7 +1636,8 @@ def test_add_run_mode_custom_math(self, caplog, mode): m = build_model({}, "simple_supply,two_hours,investment_costs") math = calliope.preprocess.CalliopeMath([mode]) - backend = PyomoBackendModel(m.inputs, math, mode=mode) + build_config = m.config.build.update({"mode": mode}) + backend = PyomoBackendModel(m.inputs, math, build_config) assert backend.math == math @@ -1647,8 +1648,8 @@ def test_add_run_mode_custom_math_before_build(self, caplog): m = build_model( { - "config.build.operate_window": "12H", - "config.build.operate_horizon": "12H", + "config.build.operate.window": "12H", + "config.build.operate.horizon": "12H", }, "simple_supply,two_hours,investment_costs", ) @@ -2247,7 +2248,9 @@ def validate_math(self): def _validate_math(math_dict: dict): m = build_model({}, "simple_supply,investment_costs") math = calliope.preprocess.CalliopeMath(["plan", math_dict]) - backend = calliope.backend.PyomoBackendModel(m._model_data, math) + backend = calliope.backend.PyomoBackendModel( + m._model_data, math, m.config.build + ) backend._add_all_inputs_as_parameters() backend._validate_math_string_parsing() diff --git a/tests/test_backend_where_parser.py b/tests/test_backend_where_parser.py index def6f621..725e8939 100644 --- a/tests/test_backend_where_parser.py +++ b/tests/test_backend_where_parser.py @@ -83,7 +83,19 @@ def where(bool_operand, helper_function, data_var, comparison, subset): @pytest.fixture -def eval_kwargs(dummy_pyomo_backend_model): +def dummy_build_config(): + return { + "foo": True, + "FOO": "baz", + "foo1": np.inf, + "bar": {"foobar": "baz"}, + "a_b": 0, + "b_a": [1, 2], + } + + +@pytest.fixture +def eval_kwargs(dummy_pyomo_backend_model, dummy_build_config): return { "input_data": dummy_pyomo_backend_model.inputs, "backend_interface": dummy_pyomo_backend_model, @@ -91,6 +103,7 @@ def eval_kwargs(dummy_pyomo_backend_model): "equation_name": "foo", "return_type": "array", "references": set(), + "build_config": dummy_build_config, } @@ -235,7 +248,7 @@ def test_config_missing_from_data(self, config_option, eval_kwargs, config_strin parsed_[0].eval(**eval_kwargs) @pytest.mark.parametrize( - ("config_string", "type_"), [("config.b_a", "list"), ("config.bar", "AttrDict")] + ("config_string", "type_"), [("config.b_a", "list"), ("config.bar", "dict")] ) def test_config_fail_datatype( self, config_option, eval_kwargs, config_string, type_ diff --git a/tests/test_config.py b/tests/test_config.py new file mode 100644 index 00000000..ce18e0dd --- /dev/null +++ b/tests/test_config.py @@ -0,0 +1,223 @@ +import logging + +import numpy as np +import pydantic +import pytest +from pydantic_core import ValidationError + +from calliope import config + + +class TestUniqueList: + @pytest.fixture(scope="module") + def unique_list_model(self): + return pydantic.create_model("Model", unique_list=(config.UniqueList, ...)) + + @pytest.fixture(scope="module") + def unique_str_list_model(self): + return pydantic.create_model("Model", unique_list=(config.UniqueList[str], ...)) + + @pytest.mark.parametrize( + "valid_list", + [[1, 2, 3], [1.0, 1.1, 1.2], ["1", "2", "3"], ["1", 1, "foo"], [None, np.nan]], + ) + def test_unique_list(self, unique_list_model, valid_list): + "When there's no fixed type for list entries, they just have to be unique _within_ types" + model = unique_list_model(unique_list=valid_list) + assert 
model.unique_list == valid_list + + @pytest.mark.parametrize("valid_list", [[1, 2, 3], ["1", "2", "3"], ["foo", "bar"]]) + def test_unique_str_list(self, unique_str_list_model, valid_list): + "When there's a fixed type for list entries, they have to be unique when coerced to that type" + model = unique_str_list_model(unique_list=valid_list) + assert model.unique_list == valid_list + + @pytest.mark.parametrize( + "invalid_list", + [[1, 1, 2], [1, 1.0], ["1", "foo", "foo"], [None, None], [1, True], [0, False]], + ) + def test_not_unique_list(self, unique_list_model, invalid_list): + "When there's no fixed type for list entries, duplicate entries of the _same_ type are not allowed (includes int == bool)" + with pytest.raises(ValidationError, match="List must be unique"): + unique_list_model(unique_list=invalid_list) + + @pytest.mark.parametrize( + "invalid_list", [[1, 1, 2], ["foo", 1, "foo"], ["1", "foo", "foo"]] + ) + def test_not_unique_str_list(self, unique_str_list_model, invalid_list): + "When there's a fixed type for list entries, they have to be unique when coerced to that type" + with pytest.raises(ValidationError, match="List must be unique"): + unique_str_list_model(unique_list=invalid_list) + + +class TestUpdate: + @pytest.fixture(scope="module") + def config_model_flat(self): + return pydantic.create_model( + "Model", + __base__=config.ConfigBaseModel, + model_config={"title": "TITLE"}, + foo=(str, "bar"), + foobar=(int, 1), + ) + + @pytest.fixture(scope="module") + def config_model_nested(self, config_model_flat): + return pydantic.create_model( + "Model", + __base__=config.ConfigBaseModel, + model_config={"title": "TITLE 2"}, + nested=(config_model_flat, config_model_flat()), + top_level_foobar=(int, 10), + ) + + @pytest.fixture(scope="module") + def config_model_double_nested(self, config_model_nested): + return pydantic.create_model( + "Model", + __base__=config.ConfigBaseModel, + model_config={"title": "TITLE 3"}, + extra_nested=(config_model_nested, config_model_nested()), + ) + + @pytest.mark.parametrize( + ("to_update", "expected"), + [ + ({"foo": "baz"}, {"foo": "baz", "foobar": 1}), + ({"foobar": 2}, {"foo": "bar", "foobar": 2}), + ({"foo": "baz", "foobar": 2}, {"foo": "baz", "foobar": 2}), + ], + ) + def test_update_flat(self, config_model_flat, to_update, expected): + model = config_model_flat() + model_dict = model.model_dump() + + new_model = model.update(to_update) + + assert new_model.model_dump() == expected + assert model.model_dump() == model_dict + + @pytest.mark.parametrize( + ("to_update", "expected"), + [ + ( + {"top_level_foobar": 20}, + {"top_level_foobar": 20, "nested": {"foo": "bar", "foobar": 1}}, + ), + ( + {"nested": {"foobar": 2}}, + {"top_level_foobar": 10, "nested": {"foo": "bar", "foobar": 2}}, + ), + ( + {"top_level_foobar": 20, "nested": {"foobar": 2}}, + {"top_level_foobar": 20, "nested": {"foo": "bar", "foobar": 2}}, + ), + ( + {"top_level_foobar": 20, "nested.foobar": 2}, + {"top_level_foobar": 20, "nested": {"foo": "bar", "foobar": 2}}, + ), + ], + ) + def test_update_nested(self, config_model_nested, to_update, expected): + model = config_model_nested() + model_dict = model.model_dump() + + new_model = model.update(to_update) + + assert new_model.model_dump() == expected + assert model.model_dump() == model_dict + + @pytest.mark.parametrize( + "to_update", + [ + {"extra_nested.nested.foobar": 2}, + {"extra_nested": {"nested": {"foobar": 2}}}, + ], + ) + def test_update_extra_nested(self, config_model_double_nested, to_update): + model = 
config_model_double_nested() + model_dict = model.model_dump() + + new_model = model.update(to_update) + + assert new_model.extra_nested.nested.foobar == 2 + assert model.model_dump() == model_dict + + @pytest.mark.parametrize( + "to_update", + [ + {"extra_nested.nested.foobar": "foo"}, + {"extra_nested.top_level_foobar": "foo"}, + ], + ) + def test_update_extra_nested_validation_error( + self, config_model_double_nested, to_update + ): + model = config_model_double_nested() + + with pytest.raises(ValidationError, match="1 validation error for TITLE"): + model.update(to_update) + + @pytest.mark.parametrize( + ("to_update", "expected"), + [ + ({"extra_nested.nested.foobar": 2}, ["Updating TITLE `foobar`: 1 -> 2"]), + ( + {"extra_nested.top_level_foobar": 2}, + ["Updating TITLE 2 `top_level_foobar`: 10 -> 2"], + ), + ( + {"extra_nested.nested.foobar": 2, "extra_nested.top_level_foobar": 3}, + [ + "Updating TITLE `foobar`: 1 -> 2", + "Updating TITLE 2 `top_level_foobar`: 10 -> 3", + ], + ), + ], + ) + def test_logging(self, caplog, config_model_double_nested, to_update, expected): + caplog.set_level(logging.INFO) + + model = config_model_double_nested() + model.update(to_update) + + assert all(log_text in caplog.text for log_text in expected) + + +class TestNoRefSchema: + @pytest.fixture(scope="module") + def config_model(self): + sub_model = pydantic.create_model( + "SubModel", + __base__=config.ConfigBaseModel, + model_config={"title": "TITLE"}, + foo=(str, "bar"), + foobar=(int, 1), + ) + model = pydantic.create_model( + "Model", + __base__=config.ConfigBaseModel, + model_config={"title": "TITLE 2"}, + nested=(sub_model, sub_model()), + ) + return model + + def test_config_model_no_defs(self, config_model): + model = config_model() + json_schema = model.model_json_schema() + no_defs_json_schema = model.model_no_ref_schema() + assert "$defs" in json_schema + assert "$defs" not in no_defs_json_schema + + def test_config_model_no_resolved_refs(self, config_model): + model = config_model() + json_schema = model.model_json_schema() + no_defs_json_schema = model.model_no_ref_schema() + assert json_schema["properties"]["nested"] == { + "$ref": "#/$defs/SubModel", + "default": {"foo": "bar", "foobar": 1}, + } + assert ( + no_defs_json_schema["properties"]["nested"] + == json_schema["$defs"]["SubModel"] + ) diff --git a/tests/test_core_model.py b/tests/test_core_model.py index e16ebfa4..4aae7bfc 100644 --- a/tests/test_core_model.py +++ b/tests/test_core_model.py @@ -9,7 +9,6 @@ import calliope.preprocess from .common.util import build_test_model as build_model -from .common.util import check_error_or_warning LOGGER = "calliope.model" @@ -32,40 +31,6 @@ def test_info(self, national_scale_example): def test_info_simple_model(self, simple_supply): simple_supply.info() - def test_update_observed_dict(self, national_scale_example): - national_scale_example.config.build["backend"] = "foo" - assert national_scale_example._model_data.attrs["config"].build.backend == "foo" - - def test_add_observed_dict_from_model_data( - self, national_scale_example, dict_to_add - ): - national_scale_example._model_data.attrs["foo"] = dict_to_add - national_scale_example._add_observed_dict("foo") - assert national_scale_example.foo == dict_to_add - assert national_scale_example._model_data.attrs["foo"] == dict_to_add - - def test_add_observed_dict_from_dict(self, national_scale_example, dict_to_add): - national_scale_example._add_observed_dict("bar", dict_to_add) - assert national_scale_example.bar == dict_to_add - assert 
national_scale_example._model_data.attrs["bar"] == dict_to_add - - def test_add_observed_dict_not_available(self, national_scale_example): - with pytest.raises(calliope.exceptions.ModelError) as excinfo: - national_scale_example._add_observed_dict("baz") - assert check_error_or_warning( - excinfo, - "Expected the model property `baz` to be a dictionary attribute of the model dataset", - ) - assert not hasattr(national_scale_example, "baz") - - def test_add_observed_dict_not_dict(self, national_scale_example): - with pytest.raises(TypeError) as excinfo: - national_scale_example._add_observed_dict("baz", "bar") - assert check_error_or_warning( - excinfo, - "Attempted to add dictionary property `baz` to model, but received argument of type `str`", - ) - class TestOperateMode: @contextmanager @@ -104,9 +69,11 @@ def operate_model_and_log(self, request): model.build( force=True, mode="operate", - operate_use_cap_results=True, - operate_window=request.param[0], - operate_horizon=request.param[1], + operate={ + "use_cap_results": True, + "window": request.param[0], + "horizon": request.param[1], + }, ) with self.caplog_session(request) as caplog: @@ -116,20 +83,10 @@ def operate_model_and_log(self, request): return model, log - @pytest.fixture(scope="class") - def rerun_operate_log(self, request, operate_model_and_log): - """Solve in operate mode a second time, to trigger new log messages.""" - with self.caplog_session(request) as caplog: - with caplog.at_level(logging.INFO): - operate_model_and_log[0].solve(force=True) - return caplog.text - def test_backend_build_mode(self, operate_model_and_log): """Verify that we have run in operate mode""" operate_model, _ = operate_model_and_log - assert ( - operate_model.backend.inputs.attrs["config"]["build"]["mode"] == "operate" - ) + assert operate_model.backend.config.mode == "operate" def test_operate_mode_success(self, operate_model_and_log): """Solving in operate mode should lead to an optimal solution.""" @@ -146,6 +103,14 @@ def test_not_reset_model_window(self, operate_model_and_log): _, log = operate_model_and_log assert "Resetting model to first time window." not in log + @pytest.fixture + def rerun_operate_log(self, request, operate_model_and_log): + """Solve in operate mode a second time, to trigger new log messages.""" + with self.caplog_session(request) as caplog: + with caplog.at_level(logging.INFO): + operate_model_and_log[0].solve(force=True) + return caplog.text + def test_reset_model_window(self, rerun_operate_log): """The backend model time window needs resetting back to the start on rerunning in operate mode.""" assert "Resetting model to first time window." in rerun_operate_log @@ -153,8 +118,8 @@ def test_reset_model_window(self, rerun_operate_log): def test_end_of_horizon(self, operate_model_and_log): """Check that increasingly shorter time horizons are logged as model rebuilds.""" operate_model, log = operate_model_and_log - config = operate_model.backend.inputs.attrs["config"]["build"] - if config["operate_window"] != config["operate_horizon"]: + config = operate_model.backend.config + if config.operate.window != config.operate.horizon: assert "Reaching the end of the timeseries." in log else: assert "Reaching the end of the timeseries." 
not in log @@ -184,9 +149,18 @@ def test_build_operate_not_allowed_build(self): ): m.build(mode="operate") + def test_build_operate_use_cap_results_error(self): + """Requesting to use capacity results should raise an error if the model has not yet been solved.""" + m = build_model({}, "simple_supply,operate,var_costs,investment_costs") + with pytest.raises( + calliope.exceptions.ModelError, + match="Cannot use plan mode capacity results in operate mode if a solution does not yet exist for the model.", + ): + m.build(mode="operate", operate={"use_cap_results": True}) + class TestBuild: - @pytest.fixture(scope="class") + @pytest.fixture def init_model(self): return build_model({}, "simple_supply,two_hours,investment_costs") diff --git a/tests/test_core_preprocess.py index 0ee2f38c..4d3eeaf6 100644 --- a/tests/test_core_preprocess.py +++ b/tests/test_core_preprocess.py @@ -2,6 +2,7 @@ import pandas as pd import pytest +from pydantic import ValidationError import calliope import calliope.exceptions as exceptions @@ -60,11 +61,11 @@ def override(param): return read_rich_yaml(f"config.init.time_subset: {param}") # should fail: one string in list - with pytest.raises(exceptions.ModelError): + with pytest.raises(ValidationError): build_model(override_dict=override(["2005-01"]), scenario="simple_supply") # should fail: three strings in list - with pytest.raises(exceptions.ModelError): + with pytest.raises(ValidationError): build_model( override_dict=override(["2005-01-01", "2005-01-02", "2005-01-03"]), scenario="simple_supply", @@ -81,7 +82,7 @@ def override(param): ) # should fail: must be a list, not a string - with pytest.raises(exceptions.ModelError): + with pytest.raises(ValidationError): model = build_model( override_dict=override("2005-01"), scenario="simple_supply" ) @@ -94,7 +95,7 @@ def override(param): assert check_error_or_warning( error, - "subset time range ['2005-03', '2005-04'] is outside the input data time range [2005-01-01 00:00:00, 2005-01-05 23:00:00]", + "subset time range ('2005-03', '2005-04') is outside the input data time range [2005-01-01 00:00:00, 2005-01-05 23:00:00]", ) # should fail: time subset out of range of input data @@ -147,19 +148,15 @@ def test_single_timestep(self): class TestChecks: - @pytest.mark.parametrize("top_level_key", ["init", "solve"]) + @pytest.mark.parametrize( + "top_level_key", ["init", "build", "solve", "build.operate", "solve.spores"] + ) def test_unrecognised_config_keys(self, top_level_key): - """Check that the only keys allowed in 'model' and 'run' are those in the - model defaults - """ + """Check that no extra keys are allowed in the configuration.""" override = {f"config.{top_level_key}.nonsensical_key": "random_string"} - with pytest.raises(exceptions.ModelError) as excinfo: + with pytest.raises(ValidationError): build_model(override_dict=override, scenario="simple_supply") - assert check_error_or_warning( - excinfo, - "Additional properties are not allowed ('nonsensical_key' was unexpected)", - ) def test_model_version_mismatch(self): """Model config says config.init.calliope_version = 0.1, which is not what we diff --git a/tests/test_io.py index 802026a1..28ca4c39 100644 --- a/tests/test_io.py +++ b/tests/test_io.py @@ -66,6 +66,19 @@ def model_csv_dir(self, tmpdir_factory, model): def test_save_netcdf(self, model_file): assert os.path.isfile(model_file) + @pytest.mark.parametrize( + "kwargs", + [{"name": "foobar"}, {"calliope_version": "0.7.0", "time_resample": "2h"}], + ) + def 
test_model_from_file_kwarg_error(self, model_file, kwargs): + """Passing kwargs when reading model dataset files should fail.""" + model_data = calliope.io.read_netcdf(model_file) + with pytest.raises( + exceptions.ModelError, + match="Cannot apply initialisation configuration overrides when loading data from an xarray Dataset.", + ): + calliope.Model(model_data, **kwargs) + @pytest.mark.parametrize( ("attr", "expected_type", "expected_val"), [ @@ -186,10 +199,8 @@ def test_save_csv_not_optimal(self): with pytest.warns(exceptions.ModelWarning): model.to_csv(out_path, dropna=False) - @pytest.mark.parametrize("attr", ["config"]) - def test_dicts_as_model_attrs_and_property(self, model_from_file, attr): - assert attr in model_from_file._model_data.attrs.keys() - assert hasattr(model_from_file, attr) + def test_config_reload(self, model_from_file, model): + assert model_from_file.config.model_dump() == model.config.model_dump() def test_defaults_as_model_attrs_not_property(self, model_from_file): assert "defaults" in model_from_file._model_data.attrs.keys() diff --git a/tests/test_preprocess_data_sources.py b/tests/test_preprocess_data_sources.py index a250f04c..abe019a9 100644 --- a/tests/test_preprocess_data_sources.py +++ b/tests/test_preprocess_data_sources.py @@ -6,16 +6,10 @@ import calliope from calliope.preprocess import data_tables -from calliope.util.schema import CONFIG_SCHEMA, extract_from_schema from .common.util import check_error_or_warning -@pytest.fixture(scope="module") -def init_config(): - return calliope.AttrDict(extract_from_schema(CONFIG_SCHEMA, "default"))["init"] - - @pytest.fixture(scope="class") def data_dir(tmp_path_factory): filepath = tmp_path_factory.mktemp("data_tables") @@ -39,12 +33,12 @@ def _generate_data_table_dict(filename, df, rows, columns): class TestDataTableUtils: @pytest.fixture(scope="class") - def table_obj(self, init_config, generate_data_table_dict): + def table_obj(self, generate_data_table_dict): df = pd.Series({"bar": 0, "baz": 1}) table_dict = generate_data_table_dict( "foo.csv", df, rows="test_row", columns=None ) - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", table_dict) ds.input["foo"] = ["foobar"] return ds @@ -130,9 +124,9 @@ def multi_row_multi_col_data(self, generate_data_table_dict): "multi_row_multi_col_file.csv", df, rows="test_row", columns="test_col" ) - def test_multi_row_no_col(self, init_config, multi_row_no_col_data): + def test_multi_row_no_col(self, multi_row_no_col_data): expected_df, table_dict = multi_row_no_col_data - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", table_dict) test_param = ds.dataset["test_param"] assert not set(["test_row"]).symmetric_difference(test_param.dims) pd.testing.assert_series_equal( @@ -147,9 +141,9 @@ def test_multi_row_no_col(self, init_config, multi_row_no_col_data): "multi_row_multi_col_data", ], ) - def test_multi_row_one_col(self, init_config, request, data_table_ref): + def test_multi_row_one_col(self, request, data_table_ref): expected_df, table_dict = request.getfixturevalue(data_table_ref) - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", table_dict) test_param = ds.dataset["test_param"] assert not set(["test_row", "test_col"]).symmetric_difference(test_param.dims) pd.testing.assert_series_equal( @@ -164,14 +158,11 @@ def test_multi_row_one_col(self, init_config, request, data_table_ref): "multi_row_multi_col_data", 
], ) - def test_load_from_df(self, init_config, request, data_table_ref): + def test_load_from_df(self, request, data_table_ref): expected_df, table_dict = request.getfixturevalue(data_table_ref) table_dict["data"] = data_table_ref ds = data_tables.DataTable( - init_config, - "ds_name", - table_dict, - data_table_dfs={data_table_ref: expected_df}, + "ds_name", table_dict, data_table_dfs={data_table_ref: expected_df} ) test_param = ds.dataset["test_param"] assert not set(["test_row", "test_col"]).symmetric_difference(test_param.dims) @@ -179,12 +170,12 @@ def test_load_from_df(self, init_config, request, data_table_ref): test_param.to_series(), expected_df.stack(), check_names=False ) - def test_load_from_df_must_be_df(self, init_config, multi_row_no_col_data): + def test_load_from_df_must_be_df(self, multi_row_no_col_data): expected_df, table_dict = multi_row_no_col_data table_dict["data"] = "foo" with pytest.raises(calliope.exceptions.ModelError) as excinfo: data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"foo": expected_df} + "ds_name", table_dict, data_table_dfs={"foo": expected_df} ) assert check_error_or_warning(excinfo, "Data table must be a pandas DataFrame.") @@ -237,9 +228,9 @@ def multi_row_multi_col_data(self, generate_data_table_dict): columns=["test_col1", "test_col2"], ) - def test_multi_row_no_col(self, init_config, multi_row_no_col_data): + def test_multi_row_no_col(self, multi_row_no_col_data): expected_df, table_dict = multi_row_no_col_data - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", table_dict) test_param = ds.dataset["test_param"] assert not set(["test_row1", "test_row2"]).symmetric_difference(test_param.dims) pd.testing.assert_series_equal( @@ -257,9 +248,9 @@ def test_multi_row_no_col(self, init_config, multi_row_no_col_data): "multi_row_multi_col_data", ], ) - def test_multi_row_one_col(self, init_config, request, data_table_ref): + def test_multi_row_one_col(self, request, data_table_ref): expected_df, table_dict = request.getfixturevalue(data_table_ref) - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", table_dict) test_param = ds.dataset["test_param"] all_dims = table_dict["rows"] + table_dict["columns"] assert not set(all_dims).symmetric_difference(test_param.dims) @@ -273,7 +264,7 @@ def test_multi_row_one_col(self, init_config, request, data_table_ref): class TestDataTableSelectDropAdd: @pytest.fixture(scope="class") - def table_obj(self, init_config): + def table_obj(self): def _table_obj(**table_dict_kwargs): df = pd.DataFrame( { @@ -291,9 +282,7 @@ def _table_obj(**table_dict_kwargs): "columns": "parameters", **table_dict_kwargs, } - ds = data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"df": df} - ) + ds = data_tables.DataTable("ds_name", table_dict, data_table_dfs={"df": df}) return ds return _table_obj @@ -357,7 +346,7 @@ def test_drop_one(self, table_obj): class TestDataTableRenameDims: @pytest.fixture(scope="class") - def multi_row_one_col_data(self, data_dir, init_config, dummy_int): + def multi_row_one_col_data(self, data_dir, dummy_int): """Fixture to create the xarray dataset from the data table, including dimension name mapping.""" def _multi_row_one_col_data( @@ -377,7 +366,7 @@ def _multi_row_one_col_data( "add_dims": {"parameters": "test_param"}, "rename_dims": mapping, } - ds = data_tables.DataTable(init_config, "ds_name", table_dict) + ds = data_tables.DataTable("ds_name", 
table_dict) return ds.dataset return _multi_row_one_col_data @@ -416,7 +405,7 @@ def test_rename(self, dummy_int, multi_row_one_col_data, mapping, idx, col): class TestDataTableMalformed: @pytest.fixture(scope="class") - def table_obj(self, init_config): + def table_obj(self): def _table_obj(**table_dict_kwargs): df = pd.DataFrame( { @@ -433,9 +422,7 @@ def _table_obj(**table_dict_kwargs): "rows": ["test_row1", "test_row2"], **table_dict_kwargs, } - ds = data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"df": df} - ) + ds = data_tables.DataTable("ds_name", table_dict, data_table_dfs={"df": df}) return ds return _table_obj @@ -479,7 +466,7 @@ def test_check_for_protected_params(self, table_obj): class TestDataTableLookupDictFromParam: @pytest.fixture(scope="class") - def table_obj(self, init_config): + def table_obj(self): df = pd.DataFrame( { "FOO": {("foo1", "bar1"): 1, ("foo1", "bar2"): 1}, @@ -491,9 +478,7 @@ def table_obj(self, init_config): "rows": ["techs", "carriers"], "columns": "parameters", } - ds = data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"df": df} - ) + ds = data_tables.DataTable("ds_name", table_dict, data_table_dfs={"df": df}) return ds @pytest.mark.parametrize( @@ -518,13 +503,11 @@ def test_carrier_info_dict_from_model_data_var_missing_dim(self, table_obj): class TestDataTableTechDict: @pytest.fixture(scope="class") - def table_obj(self, init_config): + def table_obj(self): def _table_obj(df_dict, rows="techs"): df = pd.DataFrame(df_dict) table_dict = {"data": "df", "rows": rows, "columns": "parameters"} - ds = data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"df": df} - ) + ds = data_tables.DataTable("ds_name", table_dict, data_table_dfs={"df": df}) return ds return _table_obj @@ -584,13 +567,11 @@ def test_tech_dict_empty(self, table_obj): class TestDataTableNodeDict: @pytest.fixture(scope="class") - def table_obj(self, init_config): + def table_obj(self): def _table_obj(df_dict, rows=["nodes", "techs"]): df = pd.DataFrame(df_dict) table_dict = {"data": "df", "rows": rows, "columns": "parameters"} - ds = data_tables.DataTable( - init_config, "ds_name", table_dict, data_table_dfs={"df": df} - ) + ds = data_tables.DataTable("ds_name", table_dict, data_table_dfs={"df": df}) return ds return _table_obj diff --git a/tests/test_preprocess_model_data.py b/tests/test_preprocess_model_data.py index 06816090..9f5a0b96 100644 --- a/tests/test_preprocess_model_data.py +++ b/tests/test_preprocess_model_data.py @@ -15,26 +15,36 @@ @pytest.fixture -def model_def(): - model_def_path = Path(__file__).parent / "common" / "test_model" / "model.yaml" +def model_path(): + return Path(__file__).parent / "common" / "test_model" / "model.yaml" + + +@pytest.fixture +def model_def(model_path): model_def_override, _ = prepare_model_definition( - model_def_path, scenario="simple_supply,empty_tech_node" + model_path, scenario="simple_supply,empty_tech_node" ) - return model_def_override, model_def_path + # Erase data tables for simplicity + # FIXME: previous tests omitted this. Either update tests or remove the data_table from the test model. 
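+    # (`del_key` is AttrDict's dot-notation removal helper, so the returned definition carries no `data_tables` section at all.)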
+    model_def_override.del_key("data_tables") + return model_def_override @pytest.fixture -def init_config(config_defaults, model_def): - model_def_dict, _ = model_def - config_defaults.union(model_def_dict.pop("config"), allow_override=True) - return config_defaults["init"] +def init_config(default_config, model_def): + updated_config = default_config.update(model_def["config"]) + return updated_config.init @pytest.fixture -def model_data_factory(model_def, init_config, model_defaults): - model_def_dict, _ = model_def +def model_data_factory(model_path, model_def, init_config, model_defaults): return ModelDataFactory( - init_config, model_def_dict, [], {"foo": "bar"}, {"default": model_defaults} + init_config, + model_def, + model_path, + None, + {"foo": "bar"}, + {"default": model_defaults}, ) @@ -201,8 +211,8 @@ def test_add_link_distances_missing_distance( def test_add_link_distances_no_da( self, my_caplog, model_data_factory_w_params: ModelDataFactory, unit, expected ): - _default_distance_unit = model_data_factory_w_params.config["distance_unit"] - model_data_factory_w_params.config["distance_unit"] = unit + new_config = model_data_factory_w_params.config.update({"distance_unit": unit}) + model_data_factory_w_params.config = new_config model_data_factory_w_params.clean_data_from_undefined_members() model_data_factory_w_params.dataset["latitude"] = ( pd.Series({"A": 51.507222, "B": 48.8567}) @@ -217,7 +227,6 @@ def test_add_link_distances_no_da( del model_data_factory_w_params.dataset["distance"] model_data_factory_w_params.add_link_distances() - model_data_factory_w_params.config["distance_unit"] = _default_distance_unit assert "Link distance matrix automatically computed" in my_caplog.text assert ( model_data_factory_w_params.dataset["distance"].dropna("techs") @@ -432,7 +441,8 @@ def test_prepare_param_dict_not_lookup(self, model_data_factory: ModelDataFactor def test_prepare_param_dict_no_broadcast_allowed( self, model_data_factory, param_data ): - model_data_factory.config.broadcast_param_data = False + new_config = model_data_factory.config.update({"broadcast_param_data": False}) + model_data_factory.config = new_config param_dict = {"data": param_data, "index": [["foo"], ["bar"]], "dims": "foobar"} with pytest.raises(exceptions.ModelError) as excinfo: # noqa: PT011, false positive model_data_factory._prepare_param_dict("foo", param_dict) diff --git a/tests/test_tools.py new file mode 100644 index 00000000..6cce8191 --- /dev/null +++ b/tests/test_tools.py @@ -0,0 +1,45 @@ +import pytest + +from calliope.util import tools + + +class TestListify: + @pytest.mark.parametrize( + ("var", "expected"), [(True, [True]), (1, [1]), ("foobar", ["foobar"])] + ) + def test_non_iterable(self, var, expected): + """Listification should work for any kind of object.""" + assert tools.listify(var) == expected + + @pytest.mark.parametrize( + ("var", "expected"), + [([1, 2, 3, 4], [1, 2, 3, 4]), ({"foo": "bar", "bar": "foo"}, ["foo", "bar"])], + ) + def test_iterable(self, var, expected): + """Iterable objects should be returned as lists.""" + assert tools.listify(var) == expected + + @pytest.mark.parametrize(("var", "expected"), [([], []), (None, []), ({}, [])]) + def test_empty(self, var, expected): + """Empty iterables, None and similar objects should be returned as an empty list.""" + assert tools.listify(var) == expected + + +@pytest.mark.parametrize( + ("attr", "expected"), + [ + ("init.time_format", "ISO8601"), + ("build.backend", "pyomo"), + ("build.operate.window", "24h"), 
("build.pre_validate_math_strings", True), + ], +) +class TestDotAttr: + def test_pydantic_access(self, default_config, attr, expected): + """Dot access of pydantic attributes should be possible.""" + assert tools.get_dot_attr(default_config, attr) == expected + + def test_dict_access(self, default_config, attr, expected): + """Dot access of dictionary items should be possible.""" + config_dict = default_config.model_dump() + assert tools.get_dot_attr(config_dict, attr) == expected
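A short sketch of the dot-notation access these changes lean on, assuming a build of this branch (the `Operate`/`Build` classes below are illustrative stand-ins, not Calliope classes):

```python
from calliope.util.tools import get_dot_attr

# Dictionaries are traversed by key lookup...
config_dict = {"build": {"operate": {"window": "24h"}}}
assert get_dot_attr(config_dict, "build.operate.window") == "24h"


# ...while any other object is traversed by attribute access, so one
# dot-notation string works on pydantic models and plain classes alike,
# and dictionary and attribute levels can be mixed within a single path.
class Operate:
    window = "24h"


class Build:
    operate = Operate()


assert get_dot_attr(Build(), "operate.window") == "24h"
```

Judging by the `TestUpdate` cases, `ConfigBaseModel.update` accepts the same dot notation but returns a new, re-validated model rather than mutating in place, so e.g. `default_config.update({"build.operate.window": "12h"})` should leave `default_config` itself unchanged.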