JijModeling 2.4.0 Release Notes#

Performance Improvements#

Significant performance improvements for dictionaries#

We improved the internal processing of dictionaries, achieving a speedup of roughly 30x over the previous implementation. If you have been avoiding dictionaries because of performance concerns, this is a good opportunity to try them.

Breaking Changes#

Protobuf schema changes#

JijModeling 2.4.0 introduces breaking changes to the Protobuf schema for Problem. As a result, Problems serialized to Protobuf with version 2.4.0 or later can no longer be loaded by JijModeling 2.3.x or earlier. Conversely, Problems serialized with 2.3.x or earlier can still be loaded by JijModeling 2.4.0 or later. This may affect data storage and exchange through MINTO; in that case, updating the dependent JijModeling version to 2.4.0 or later will allow both existing and new data to be loaded without issue. Note that this only affects direct use of JijModeling’s Protobuf schema; the OMMX format is unaffected.

Feature Enhancements#

Generating arrays with a shape and generator function#

Starting with this version, the genarray() function can be used to generate arrays by specifying a shape and a generator function. This is similar to fromfunction() in NumPy.

import jijmodeling as jm


problem = jm.Problem("genarray example")
N = problem.Natural("N")
M = problem.Natural("M")
a = problem.Float("a", shape=(N, M))
x = problem.BinaryVar("x", shape=N)
Sums = problem.NamedExpr("Sums", jm.genarray(lambda i, j: a[i, j] * x[i], (N, M)))


problem
\[\begin{array}{rl} \text{Problem}\colon &\text{genarray example}\\\displaystyle \min &\displaystyle 0\\&\\\text{where}&\\&\text{Decision Variables:}\\&\qquad \begin{alignedat}{2}x&\in \mathop{\mathrm{Array}}\left[N;\left\{0, 1\right\}\right]&\quad &1\text{-dim binary variable}\\\end{alignedat}\\&\\&\text{Placeholders:}\\&\qquad \begin{alignedat}{2}a&\in \mathop{\mathrm{Array}}\left[N\times M;\mathbb{R}\right]&\quad &2\text{-dimensional array of placeholders with elements in }\mathbb{R}\\M&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\N&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\\end{alignedat}\\&\\&\text{Named Expressions:}\\&\qquad \begin{alignedat}{2}Sums&={\left( {a}_{i,j}\cdot {x}_{i}\right) }_{\substack{i\in \left\{0,\ldots ,N-1\right\}\\j\in \left\{0,\ldots ,M-1\right\}}}&\quad &\in \mathop{\mathrm{Array}}\left[N\times M;\mathbb{R}\right]\\\end{alignedat}\end{array} \]
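For reference, since genarray() is analogous to NumPy’s fromfunction(), the same array can be built in plain NumPy once concrete data is supplied. The values below are illustrative stand-ins, not data taken from the problem above:

```python
import numpy as np

# Illustrative data standing in for the placeholders/variables above (N=2, M=3):
a = np.array([[1.0, 5.0, 3.0], [4.0, 2.0, 6.0]])  # plays the role of a
x = np.array([1, 0])                               # plays the role of x

# np.fromfunction calls the lambda with integer index arrays i, j of the given
# shape, just as jm.genarray evaluates the expression over all (i, j) pairs.
sums = np.fromfunction(lambda i, j: a[i, j] * x[i], (2, 3), dtype=int)
# sums == [[1., 5., 3.], [0., 0., 0.]]
```

Unlike jm.genarray, the NumPy version requires the shape to be concrete integers, since it evaluates eagerly rather than symbolically.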

When using the Decorator API, you can also use a comprehension syntax with jm.genarray as follows:

@jm.Problem.define("genarray example")
def problem(problem):
    N = problem.Natural()
    M = problem.Natural()
    a = problem.Float(shape=(N, M))
    x = problem.BinaryVar(shape=N)
    Sums = problem.NamedExpr(jm.genarray(a[i, j] * x[i] for i, j in (N, M)))


problem
\[\begin{array}{rl} \text{Problem}\colon &\text{genarray example}\\\displaystyle \min &\displaystyle 0\\&\\\text{where}&\\&\text{Decision Variables:}\\&\qquad \begin{alignedat}{2}x&\in \mathop{\mathrm{Array}}\left[N;\left\{0, 1\right\}\right]&\quad &1\text{-dim binary variable}\\\end{alignedat}\\&\\&\text{Placeholders:}\\&\qquad \begin{alignedat}{2}a&\in \mathop{\mathrm{Array}}\left[N\times M;\mathbb{R}\right]&\quad &2\text{-dimensional array of placeholders with elements in }\mathbb{R}\\M&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\N&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\\end{alignedat}\\&\\&\text{Named Expressions:}\\&\qquad \begin{alignedat}{2}Sums&={\left( {a}_{i,j}\cdot {x}_{i}\right) }_{\substack{i\in \left\{0,\ldots ,N-1\right\}\\j\in \left\{0,\ldots ,M-1\right\}}}&\quad &\in \mathop{\mathrm{Array}}\left[N\times M;\mathbb{R}\right]\\\end{alignedat}\end{array} \]

Only one for .. in ... clause is allowed in a genarray comprehension. The following example raises an error because it uses multiple for clauses:

try:

    @jm.Problem.define("genarray example")
    def problem(problem):
        N = problem.Natural()
        M = problem.Natural()
        a = problem.Float(shape=(N, M))
        x = problem.BinaryVar(shape=N)
        Sums = problem.NamedExpr(jm.genarray(a[i, j] * x[i] for i in N for j in M))
except SyntaxError as e:
    print(str(e))
A genarray comprehension must have exactly one for-clause:

    9  |          Sums = problem.NamedExpr(jm.genarray(a[i, j] * x[i] for i in N for j in M))
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Support for min / max along axes#

Previously, jm.sum and Expression.sum supported taking sums along a specific axis of a multidimensional array via the axis keyword argument. Starting with this version, the same functionality has been added to jm.min and jm.max, as well as to their corresponding Expression methods.

import jijmodeling as jm


@jm.Problem.define("min/max along axes example")
def problem(problem):
    N = problem.Natural()
    M = problem.Natural()
    a = problem.Float(shape=(N, M))
    a_min_0 = problem.NamedExpr(a.min(axis=0), save_in_ommx=True)
    a_max_1 = problem.NamedExpr(jm.max(a, axis=1), save_in_ommx=True)
    a_min_both = problem.NamedExpr(jm.min(a, axis=[1, 0]), save_in_ommx=True)


problem
\[\begin{array}{rl} \text{Problem}\colon &\text{min/max along axes example}\\\displaystyle \min &\displaystyle 0\\&\\\text{where}&\\&\\&\text{Placeholders:}\\&\qquad \begin{alignedat}{2}a&\in \mathop{\mathrm{Array}}\left[N\times M;\mathbb{R}\right]&\quad &2\text{-dimensional array of placeholders with elements in }\mathbb{R}\\M&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\N&\in \mathbb{N}&\quad &\text{A scalar placeholder in }\mathbb{N}\\\end{alignedat}\\&\\&\text{Named Expressions:}\\&\qquad \begin{alignedat}{2}a\_{}max\_{}1&=a.\mathop{\mathtt{max}}\left(\mathtt{axis}=\left[1\right]\right)&\quad &\in \mathop{\mathrm{Array}}\left[N;\mathbb{R}\right]\\&&&\text{\texttt{save\_{}in\_{}ommx=True}}\\&&&\\a\_{}min\_{}0&=a.\mathop{\mathtt{min}}\left(\mathtt{axis}=\left[0\right]\right)&\quad &\in \mathop{\mathrm{Array}}\left[M;\mathbb{R}\right]\\&&&\text{\texttt{save\_{}in\_{}ommx=True}}\\&&&\\a\_{}min\_{}both&=a.\mathop{\mathtt{min}}\left(\mathtt{axis}=\left[1,0\right]\right)&\quad &\in \mathbb{R}\\&&&\text{\texttt{save\_{}in\_{}ommx=True}}\\\end{alignedat}\end{array} \]

Now let’s create an instance and inspect the included Named Functions together with the value of a.

import numpy as np

a_data = np.array([[1, 5, 3], [4, 2, 6]])
compiler = jm.Compiler.from_problem(problem, {"N": 2, "M": 3, "a": a_data})
instance = compiler.eval_problem(problem)

display(instance.named_functions_df)
print(f"a == {a_data}")
       type     function used_ids        name subscripts description parameters.subscripts
id
0  Constant  Function(1)       {}     a_min_0        [0]        <NA>                   [0]
1  Constant  Function(2)       {}     a_min_0        [1]        <NA>                   [1]
2  Constant  Function(3)       {}     a_min_0        [2]        <NA>                   [2]
3  Constant  Function(5)       {}     a_max_1        [0]        <NA>                   [0]
4  Constant  Function(6)       {}     a_max_1        [1]        <NA>                   [1]
5  Constant  Function(1)       {}  a_min_both         []        <NA>                    []
a == [[1 5 3]
 [4 2 6]]

Since the Named Functions in the OMMX Instance are split apart by index, the table above may be a bit hard to read. Let’s regroup them by variable using the compiler, build arrays from them, and compare the results.

First, consider a_min_0 = a.min(axis=0), which takes the minimum along axis 0 (rows). This collapses axis 0 and leaves axis 1 (columns), producing a vector whose entries are the minima of each column.

a_min_0_ids = compiler.get_named_function_id_by_name("a_min_0")
a_min_0_values = [
    instance.get_named_function_by_id(a_min_0_ids[(i,)]).function.constant_term
    for i in range(3)
]
assert np.all(a_min_0_values == np.min(a_data, axis=0))  # Matches NumPy's behavior!
print(f"a.min(axis=0) == {a_min_0_values}")
a.min(axis=0) == [1.0, 2.0, 3.0]

In contrast, a_max_1 = a.max(axis=1) takes the maximum along axis 1 (columns), collapsing that axis and producing a vector whose entries are the maxima of each row.

a_max_1_ids = compiler.get_named_function_id_by_name("a_max_1")
a_max_1_values = [
    instance.get_named_function_by_id(a_max_1_ids[(i,)]).function.constant_term
    for i in range(2)
]
assert np.all(a_max_1_values == np.max(a_data, axis=1))  # Matches NumPy's behavior!
print(f"a.max(axis=1) == {a_max_1_values}")
a.max(axis=1) == [5.0, 6.0]

For a_min_both = a.min(axis=[1, 0]), the minimum is taken along multiple axes. Since the input here is two-dimensional, this simply becomes the overall minimum.

a_min_both_ids = compiler.get_named_function_id_by_name("a_min_both")
a_min_both_value = instance.get_named_function_by_id(
    a_min_both_ids[()]
).function.constant_term
assert a_min_both_value == np.min(a_data)  # Matches NumPy's behavior!
print(f"a.min(axis=[1, 0]) == {a_min_both_value}")
a.min(axis=[1, 0]) == 1.0
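NumPy offers the same multi-axis reduction, passing the axes as a tuple; reducing a two-dimensional array over both axes likewise yields the overall minimum:

```python
import numpy as np

a_data = np.array([[1, 5, 3], [4, 2, 6]])

# Reducing over both axes (NumPy takes a tuple of axes) collapses the
# whole array to a scalar, the same as np.min(a_data) with no axis.
assert np.min(a_data, axis=(1, 0)) == np.min(a_data) == 1
```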

Bugfixes#

Bugfixes in random instance data generation#

We fixed the following two bugs in random instance data generation:

Placeholders that depend on NamedExpr were not handled correctly#

We fixed a bug where placeholders whose shape (length) or key set depends on NamedExpr were not handled correctly. For example, consider the following problem:

import jijmodeling as jm


@jm.Problem.define("My Problem")
def problem(problem: jm.DecoratedProblem):
    a = problem.Float(ndim=1)
    N = problem.NamedExpr(a.len_at(0))
    b = problem.Natural(shape=(N, None))
    M = problem.NamedExpr(b.len_at(1))
    problem += jm.sum(a[i] * b[i, j] for i in N for j in M)


problem
\[\begin{array}{rl} \text{Problem}\colon &\text{My Problem}\\\displaystyle \min &\displaystyle \sum _{i=0}^{N-1}{\sum _{j=0}^{M-1}{{a}_{i}\cdot {b}_{i,j}}}\\&\\\text{where}&\\&\\&\text{Placeholders:}\\&\qquad \begin{alignedat}{2}a&\in \mathop{\mathrm{Array}}\left[(-);\mathbb{R}\right]&\quad &1\text{-dimensional array of placeholders with elements in }\mathbb{R}\\b&\in \mathop{\mathrm{Array}}\left[N\times (-);\mathbb{N}\right]&\quad &2\text{-dimensional array of placeholders with elements in }\mathbb{N}\\\end{alignedat}\\&\\&\text{Named Expressions:}\\&\qquad \begin{alignedat}{2}M&=\mathop{\mathtt{len\_{}at}}\left(b,1\right)&\quad &\in \mathbb{N}\\&&&\\N&=\mathop{\mathtt{len\_{}at}}\left(a,0\right)&\quad &\in \mathbb{N}\\\end{alignedat}\end{array} \]

In previous versions, calling generate_random_dataset() on this problem raised an exception. Starting with this release, the data is generated correctly.

problem.generate_random_dataset(seed=17)
{'b': array([[5, 5, 3, 3, 0],
        [5, 2, 1, 2, 0],
        [0, 1, 5, 3, 5],
        [0, 2, 3, 0, 2]], dtype=object),
 'a': array([1.9051444149700796, 4.388381466224443, 4.6746291952632575,
        1.6632417823227748], dtype=object)}

Fixed a bug where generation failed when unused placeholders were present#

Data generation failed when a problem contained unused placeholders, i.e. placeholders not included in used_placeholders(). For example, in the following code, N is defined but never used, and previous versions raised a runtime exception.

import jijmodeling as jm

problem = jm.Problem("My Problem")
N = problem.Natural("N")

problem.generate_random_dataset(seed=17)
{'N': 3}

Starting with this release, data is generated successfully in cases like the example above.

Fixed a bug where latex specifications were ignored in LaTeX output for decision variable bounds#

We fixed a bug where the latex= keyword argument of placeholders appearing in decision variable bounds was ignored when rendering those bounds in \(\LaTeX\).

import jijmodeling as jm

problem = jm.Problem("LaTeX bugfix example")
L = problem.Float("L", latex=r"\ell")
U = problem.Float("U", latex=r"\mathcal{U}")
x = problem.ContinuousVar("x", lower_bound=L, upper_bound=U)
problem += x

problem
\[\begin{array}{rl} \text{Problem}\colon &\text{LaTeX bugfix example}\\\displaystyle \min &\displaystyle x\\&\\\text{where}&\\&\text{Decision Variables:}\\&\qquad \begin{alignedat}{2}x&\in \mathbb{R}\;\left(\ell\leq x\leq \mathcal{U}\right)&\quad &0\text{-dim continuous variable}\\\end{alignedat}\\&\\&\text{Placeholders:}\\&\qquad \begin{alignedat}{2}\ell&\in \mathbb{R}&\quad &\text{A scalar placeholder in }\mathbb{R}\\\mathcal{U}&\in \mathbb{R}&\quad &\text{A scalar placeholder in }\mathbb{R}\\\end{alignedat}\end{array} \]

In previous releases, the latex specifications were ignored in the code above, and the bounds were displayed as \(L \leq x \leq U\). Starting with this release, the settings are preserved as shown above, and the bounds are displayed as \(\ell \leq x \leq \mathcal{U}\).

Fixed a bug where problem evaluation with constraint detection crashed when decision variables were subscripted by tuples#

We fixed a bug where eval_problem crashed when decision variables were subscripted with tuples and constraint detection was enabled (this is the default; that is, whenever the constraint_detection keyword argument is set to anything other than False). For example, the following code crashed in previous versions:

import jijmodeling as jm


@jm.Problem.define("dict-keyed binary var with tuple subscripts")
def problem(problem: jm.DecoratedProblem):
    N = problem.Natural()
    K = problem.Placeholder(ndim=1, dtype=(jm.DataType.NATURAL, jm.DataType.NATURAL))
    x = problem.BinaryVar(dict_keys=K)

    problem += problem.Constraint(
        "sweeps",
        (jm.sum(x[k] for k in K if k[0] == i) <= 1 for i in jm.range(N)),
    )


instance_data = {
    "N": 3,
    "K": [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)],
}

compiler = jm.Compiler.from_problem(problem, instance_data)
instance = compiler.eval_problem(problem, constraint_detection=True)

Fixed a bug where the sum of binary {0, 1} expressions had type Binary instead of Natural#

We fixed a bug where an expression summing a binary-typed ({0, 1}) expression was itself typed as Binary instead of Natural. For example, the sum \(\sum_i x_i\) of binary variables \(x_0, x_1, \ldots\) can take values of \(2\) or more, so its type should be Natural, not Binary.

import jijmodeling as jm

problem = jm.Problem("Sum of binary example")
N = problem.Natural("N")
x = problem.BinaryVar("x", shape=N)
problem.infer(x.sum())
\[\mathbb{N}\]

Other Changes#

  • Relaxed version bounds to allow installation on any Python version from 3.11 onwards.

  • Error messages for invalid comprehensions used with the Decorator API in sum and similar constructs now report the specific location in the source code.

  • Problem.used_placeholders has been deprecated: its purpose was unclear, and Compiler requires values for all placeholders anyway. Use Problem.placeholders instead.