Merged
14 changes: 7 additions & 7 deletions src/muse/constraints.py
@@ -1,23 +1,23 @@
r"""Investment constraints.

Constraints on investements ensure that investements match some given criteria. For
Constraints on investments ensure that investments match some given criteria. For
instance, the constraints could ensure that only so much of a new asset can be built
every year.

Functions to compute constraints should be registered via the decorator
:py:meth:`~muse.constraints.register_constraints`. This registration step makes it
possible for constraints to be declared in the TOML file.

Generally, LP solvers accept linear constraint defined as:
Generally, LP solvers accept linear constraints defined as:

.. math::

A x \leq b

with :math:`A` a matrix, :math:`x` the decision variables, and :math:`b` a vector.
However, these quantities are dimensionless. They do not have timeslices, assets, or
replacement technologies, or any other dimensions that users have set-up in their model.
The crux is to translates from MUSE's data-structures to a consistent dimensionless
replacement technologies, or any other dimensions that users have set up in their model.
The crux is to translate from MUSE's data-structures to a consistent dimensionless
format.
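The generic :math:`A x \leq b` form that LP solvers expect can be exercised directly with SciPy; a minimal sketch, assuming `scipy` is available (the matrix values are illustrative, not taken from MUSE):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative, dimensionless instance of the A x <= b form.
A = np.array([[1.0, 1.0], [2.0, 0.5]])  # constraint matrix
b = np.array([10.0, 8.0])               # constraint bounds
c = np.array([-1.0, -2.0])              # linprog minimises, so negate to maximise

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
assert res.success and np.all(A @ res.x <= b + 1e-9)
```

Everything MUSE hands to the solver must first be flattened into this shape, which is exactly the translation problem the docstring describes.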

In MUSE, users can register constraints functions that return fully dimensional
@@ -44,8 +44,8 @@
- Any dimension in :math:`A_c .* x_c` (:math:`A_p .* x_p`) that is also in :math:`b`
defines diagonal entries into the left (right) submatrix of :math:`A`.
- Any dimension in :math:`A_c .* x_c` (:math:`A_p .* x_p`) and missing from
:math:`b` is reduce by summation over a row in the left (right) submatrix of
:math:`A`. In other words, those dimension do become part of a standard tensor
:math:`b` is reduced by summation over a row in the left (right) submatrix of
:math:`A`. In other words, those dimensions become part of a standard tensor
reduction or matrix multiplication.
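The reduction rule above — dimensions present in :math:`A .* x` but absent from :math:`b` are summed over — can be illustrated with a small hypothetical xarray example (the dimension names are made up for illustration):

```python
import numpy as np
import xarray as xr

# Hypothetical setup: "asset" also appears in b, "timeslice" does not.
A = xr.DataArray(np.ones((2, 3)), dims=("asset", "timeslice"))
x = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("asset", "timeslice"))

# "asset" lines up with rows of b (diagonal entries); "timeslice" is absent
# from b and so is reduced by summation, as in a tensor contraction.
lhs = (A * x).sum("timeslice")
print(lhs.values)  # one value per asset, i.e. one per row of b
```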

There are two additional rules. However, they are likely to be the result of an
@@ -281,7 +281,7 @@ def max_capacity_expansion(
:math:`y=y_1` is the year marking the end of the investment period.

Let :math:`\mathcal{A}^{i, r}_{t, \iota}(y)` be the current assets, before
invesment, and let :math:`\Delta\mathcal{A}^{i,r}_t` be the future investements.
investment, and let :math:`\Delta\mathcal{A}^{i,r}_t` be the future investments.
The constraints on agent :math:`i` are given as:

.. math::
@@ -1,6 +1,6 @@
ProcessName,RegionName,Time,Level,cap_par,cap_exp,fix_par,fix_exp,var_par,var_exp,MaxCapacityAddition,MaxCapacityGrowth,TotalCapacityLimit,TechnicalLife,UtilizationFactor,InterestRate,ScalingSize,Agent2,Type,Fuel,MinimumServiceFactor,Enduse
Unit,-,Year,-,MUS$2010/Mt,-,MUS$2010/Mt,-,MUS$2010/Mt,-,Mt,-,Mt,Years,-,-,-,Retrofit,-,-,-,-
procammonia_1,R1,2010,fixed,100,1,0.5,1,0,1,5,0.03,100,20,0.85,0.1,0.1,1,energy,fuel1,0.01,ammonia
procammonia_1,R1,2050,fixed,100,1,0.5,1,0,1,5,0.03,100,20,0.85,0.1,0.1,1,energy,fuel1,0.9,ammonia
procammonia_1,R1,2050,fixed,100,1,0.5,1,0,1,5,0.03,100,20,0.85,0.1,0.1,1,energy,fuel1,0.85,ammonia
procammonia_2,R1,2010,fixed,97.5,1,0.4875,1,0,1,5,0.03,100,20,0.85,0.1,0.1,1,energy,fuel2,0,ammonia
procammonia_2,R1,2050,fixed,97.5,1,0.4875,1,0,1,5,0.03,100,20,0.85,0.1,0.1,1,energy,fuel2,0,ammonia
46 changes: 41 additions & 5 deletions src/muse/readers/csv.py
@@ -98,9 +98,10 @@ def to_agent_share(name):
data.columns.name = "technodata"
data.index.name = "technology"
data = data.drop(["process_name", "region_name", "time"], axis=1)

data = data.apply(to_numeric, axis=0)

check_utilization_and_minimum_service_factors(data, filename)

result = xr.Dataset.from_dataframe(data.sort_index())
if "fuel" in result.variables:
result["fuel"] = result.fuel.isel(region=0, year=0)
@@ -130,6 +131,7 @@ def to_agent_share(name):

if "year" in result.dims and len(result.year) == 1:
result = result.isel(year=0, drop=True)

return result


@@ -145,7 +147,7 @@ def read_technodata_timeslices(filename: Union[str, Path]) -> xr.Dataset:
data = csv[csv.technology != "Unit"]

data = data.apply(to_numeric)
data = check_utilization_not_all_zero(data, filename)
check_utilization_and_minimum_service_factors(data, filename)

ts = pd.MultiIndex.from_frame(
data.drop(
@@ -269,7 +271,7 @@ def read_technologies(
Arguments:
technodata_path_or_sector: If `comm_out_path` and `comm_in_path` are not given,
then this argument refers to the name of the sector. The three paths are
then determined using standard locations and name. Specifically, thechnodata
then determined using standard locations and names. Specifically, technodata
looks for a "technodataSECTORNAME.csv" file in the standard location for
that sector. However, if `comm_out_path` and `comm_in_path` are given, then
this should be the path to the technodata file.
@@ -920,18 +922,52 @@ def read_finite_resources(path: Union[str, Path]) -> xr.DataArray:
return xr.Dataset.from_dataframe(data).to_array(dim="commodity")


def check_utilization_not_all_zero(data, filename):
def check_utilization_and_minimum_service_factors(data, filename):
if "utilization_factor" not in data.columns:
raise ValueError(
f"""A technology needs to have a utilization factor defined for every
timeslice. Please check file {filename}."""
)

_check_utilization_not_all_zero(data, filename)
_check_utilization_in_range(data, filename)

if "minimum_service_factor" in data.columns:
_check_minimum_service_factors_in_range(data, filename)
_check_utilization_not_below_minimum(data, filename)


def _check_utilization_not_all_zero(data, filename):
utilization_sum = data.groupby(["technology", "region", "year"]).sum()

if (utilization_sum.utilization_factor == 0).any():
raise ValueError(
f"""A technology cannot have a utilization factor of 0 for every
timeslice. Please check file {filename}."""
)
return data


def _check_utilization_in_range(data, filename):
utilization = data["utilization_factor"]
if not np.all((0 <= utilization) & (utilization <= 1)):
raise ValueError(
f"""Utilization factor values must all be between 0 and 1 inclusive.
Please check file {filename}."""
)


def _check_utilization_not_below_minimum(data, filename):
if (data["utilization_factor"] < data["minimum_service_factor"]).any():
raise ValueError(f"""Utilization factors must all be greater than or equal to
their corresponding minimum service factors. Please check
{filename}.""")


def _check_minimum_service_factors_in_range(data, filename):
min_service_factor = data["minimum_service_factor"]

if not np.all((0 <= min_service_factor) & (min_service_factor <= 1)):
raise ValueError(
f"""Minimum service factor values must all be between 0 and 1 inclusive.
Please check file {filename}."""
)
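Put together, the validation pipeline this diff introduces amounts to the following condensed, standalone sketch (a re-statement for illustration only — error messages are shortened and the real functions live in `muse.readers.csv`):

```python
import numpy as np
import pandas as pd

def check_utilization_and_minimum_service_factors(data, filename):
    """Condensed re-statement of the checks added above."""
    if "utilization_factor" not in data.columns:
        raise ValueError(f"utilization_factor column missing; check {filename}")
    uf = data["utilization_factor"]
    if not np.all((0 <= uf) & (uf <= 1)):
        raise ValueError(f"utilization factors outside [0, 1]; check {filename}")
    totals = data.groupby(["technology", "region", "year"])["utilization_factor"].sum()
    if (totals == 0).any():
        raise ValueError(f"all-zero utilization factors; check {filename}")
    if "minimum_service_factor" in data.columns:
        msf = data["minimum_service_factor"]
        if not np.all((0 <= msf) & (msf <= 1)):
            raise ValueError(f"minimum service factors outside [0, 1]; check {filename}")
        if (uf < msf).any():
            raise ValueError(f"utilization below minimum service; check {filename}")

df = pd.DataFrame(
    {
        "technology": ["gas", "gas"],
        "region": ["GB", "GB"],
        "year": [2010, 2010],
        "utilization_factor": [0.5, 1.0],
        "minimum_service_factor": [0.1, 0.0],
    }
)
check_utilization_and_minimum_service_factors(df, "Technodata.csv")  # passes silently
```

Note the minimum-service checks only run when the column is present, which is why the tests below exercise both the with-column and without-column paths.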
32 changes: 18 additions & 14 deletions tests/test_minimum_service.py
@@ -1,36 +1,39 @@
from itertools import permutations
from unittest.mock import patch

import numpy as np
from pytest import mark


def modify_minimum_service_factors(
model_path, sector, process_name, minimum_service_factor
model_path, sector, processes, minimum_service_factors
):
import pandas as pd

technodata_timeslices = pd.read_csv(
model_path / "technodata" / sector / "TechnodataTimeslices.csv"
)

technodata_timeslices.loc[
technodata_timeslices["ProcessName"] == process_name[0], "MinimumServiceFactor"
] = minimum_service_factor[0]

technodata_timeslices.loc[
technodata_timeslices["ProcessName"] == process_name[1], "MinimumServiceFactor"
] = minimum_service_factor[1]
for process, minimum in zip(processes, minimum_service_factors):
technodata_timeslices.loc[
technodata_timeslices["ProcessName"] == process, "MinimumServiceFactor"
] = minimum

return technodata_timeslices


@mark.parametrize("process_name", [("gasCCGT", "windturbine")])
@mark.parametrize(
"minimum_service_factor", [([1, 2, 3, 4, 5, 6], [0] * 6), ([0], [1, 2, 3, 4, 5, 6])]
"minimum_service_factors",
permutations((np.linspace(0, 1, 6), [0] * 6)),
)
def test_minimum_service_factor(tmpdir, minimum_service_factor, process_name):
@patch("muse.readers.csv.check_utilization_and_minimum_service_factors")
def test_minimum_service_factor(check_mock, tmpdir, minimum_service_factors):
import pandas as pd
from muse import examples
from muse.mca import MCA

sector = "power"
processes = ("gasCCGT", "windturbine")

# Copy the model inputs to tmpdir
model_path = examples.copy_model(
@@ -40,8 +43,8 @@ def test_minimum_service_factor(tmpdir, minimum_service_factor, process_name):
technodata_timeslices = modify_minimum_service_factors(
model_path=model_path,
sector=sector,
process_name=process_name,
minimum_service_factor=minimum_service_factor,
processes=processes,
minimum_service_factors=minimum_service_factors,
)

technodata_timeslices.to_csv(
@@ -50,10 +53,11 @@ def test_minimum_service_factor(tmpdir, minimum_service_factor, process_name):

with tmpdir.as_cwd():
MCA.factory(model_path / "settings.toml").run()
check_mock.assert_called()

supply_timeslice = pd.read_csv(tmpdir / "Results/MCAMetric_Supply.csv")

for process, service_factor in zip(process_name, minimum_service_factor):
for process, service_factor in zip(processes, minimum_service_factors):
for i, factor in enumerate(service_factor):
assert (
supply_timeslice[
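The reworked test leans on `unittest.mock.patch` used as a decorator: each `@patch` swaps out the named attribute for the test's duration and hands the mock in as a leading argument (innermost decorator first). A minimal, self-contained illustration of that mechanism, using `math.sqrt` as a stand-in target:

```python
import math
from unittest.mock import patch

@patch("math.sqrt")
def call_with_mocked_sqrt(sqrt_mock):
    # patch injects a fresh MagicMock for the duration of this call.
    sqrt_mock.return_value = 42.0
    result = math.sqrt(9)  # resolves to the mock, not the real sqrt
    sqrt_mock.assert_called_once_with(9)
    return result

assert call_with_mocked_sqrt() == 42.0
assert math.sqrt(9) == 3.0  # the patch is undone once the function returns
```

This is why `check_mock` arrives as the first parameter of `test_minimum_service_factor`, before pytest's fixtures and parameters.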
149 changes: 149 additions & 0 deletions tests/test_readers.py
@@ -1,4 +1,6 @@
from itertools import chain, permutations
from pathlib import Path
from unittest.mock import patch

import toml
import xarray as xr
@@ -410,3 +412,150 @@ def test_read_trade_technodata(tmp_path):
"max_capacity_growth",
"total_capacity_limit",
}


def test_check_utilization_not_all_zero_success():
import pandas as pd
from muse.readers.csv import _check_utilization_not_all_zero

df = pd.DataFrame(
{
"utilization_factor": (0, 1, 1),
"technology": ("gas", "gas", "solar"),
"region": ("GB", "GB", "FR"),
"year": (2010, 2010, 2011),
}
)
_check_utilization_not_all_zero(df, "file.csv")


def test_check_utilization_in_range_success():
import pandas as pd
from muse.readers.csv import _check_utilization_in_range

df = pd.DataFrame({"utilization_factor": (0, 1)})
_check_utilization_in_range(df, "file.csv")


@mark.parametrize(
"values", chain.from_iterable(permutations((0, bad)) for bad in (-1, 2))
)
def test_check_utilization_in_range_fail(values):
import pandas as pd
from muse.readers.csv import _check_utilization_in_range

df = pd.DataFrame({"utilization_factor": values})
with raises(ValueError):
_check_utilization_in_range(df, "file.csv")


def test_check_utilization_not_below_minimum_success():
import pandas as pd
from muse.readers.csv import _check_utilization_not_below_minimum

df = pd.DataFrame({"utilization_factor": (0, 1), "minimum_service_factor": (0, 0)})
_check_utilization_not_below_minimum(df, "file.csv")


def test_check_utilization_not_below_minimum_fail():
import pandas as pd
from muse.readers.csv import _check_utilization_not_below_minimum

df = pd.DataFrame(
{"utilization_factor": (0, 1), "minimum_service_factor": (0.1, 0)}
)
with raises(ValueError):
_check_utilization_not_below_minimum(df, "file.csv")


def test_check_utilization_not_all_zero_fail_all_zero():
import pandas as pd
from muse.readers.csv import _check_utilization_not_all_zero

df = pd.DataFrame(
{
"utilization_factor": (0, 0, 1),
"technology": ("gas", "gas", "solar"),
"region": ("GB", "GB", "FR"),
"year": (2010, 2010, 2011),
}
)

with raises(ValueError):
_check_utilization_not_all_zero(df, "file.csv")


def test_check_minimum_service_factors_in_range_success():
import pandas as pd
from muse.readers.csv import _check_minimum_service_factors_in_range

df = pd.DataFrame({"minimum_service_factor": (0, 1)})
_check_minimum_service_factors_in_range(df, "file.csv")


@mark.parametrize(
"values", chain.from_iterable(permutations((0, bad)) for bad in (-1, 2))
)
def test_check_minimum_service_factors_in_range_fail(values):
import pandas as pd
from muse.readers.csv import _check_minimum_service_factors_in_range

df = pd.DataFrame({"minimum_service_factor": values})

with raises(ValueError):
_check_minimum_service_factors_in_range(df, "file.csv")


@patch("muse.readers.csv._check_utilization_in_range")
@patch("muse.readers.csv._check_utilization_not_all_zero")
@patch("muse.readers.csv._check_utilization_not_below_minimum")
@patch("muse.readers.csv._check_minimum_service_factors_in_range")
def test_check_utilization_and_minimum_service_factors(*mocks):
import pandas as pd
from muse.readers.csv import check_utilization_and_minimum_service_factors

df = pd.DataFrame(
{"utilization_factor": (0, 0, 1), "minimum_service_factor": (0, 0, 0)}
)
check_utilization_and_minimum_service_factors(df, "file.csv")
for mock in mocks:
mock.assert_called_once_with(df, "file.csv")


@patch("muse.readers.csv._check_utilization_in_range")
@patch("muse.readers.csv._check_utilization_not_all_zero")
@patch("muse.readers.csv._check_utilization_not_below_minimum")
@patch("muse.readers.csv._check_minimum_service_factors_in_range")
def test_check_utilization_and_minimum_service_factors_no_min(
min_service_factor_mock, utilization_below_min_mock, *mocks
):
import pandas as pd
from muse.readers.csv import check_utilization_and_minimum_service_factors

df = pd.DataFrame({"utilization_factor": (0, 0, 1)})
check_utilization_and_minimum_service_factors(df, "file.csv")
for mock in mocks:
mock.assert_called_once_with(df, "file.csv")
min_service_factor_mock.assert_not_called()
utilization_below_min_mock.assert_not_called()


@patch("muse.readers.csv._check_utilization_in_range")
@patch("muse.readers.csv._check_utilization_not_all_zero")
@patch("muse.readers.csv._check_utilization_not_below_minimum")
@patch("muse.readers.csv._check_minimum_service_factors_in_range")
def test_check_utilization_and_minimum_service_factors_fail_missing_utilization(*mocks):
Collaborator


I don't think you need the mocks here. Otherwise, everything looks good!

import pandas as pd
from muse.readers.csv import check_utilization_and_minimum_service_factors

# NB: Required utilization_factor column is missing
df = pd.DataFrame(
{
"technology": ("gas", "gas", "solar"),
"region": ("GB", "GB", "FR"),
"year": (2010, 2010, 2011),
}
)

with raises(ValueError):
check_utilization_and_minimum_service_factors(df, "file.csv")