@@ -0,0 +1,210 @@
r"""
Compressed sparse graph routines (:mod:`scipy.sparse.csgraph`)
==============================================================

.. currentmodule:: scipy.sparse.csgraph

Fast graph algorithms based on sparse matrix representations.

Contents
--------

.. autosummary::
   :toctree: generated/

   connected_components -- determine connected components of a graph
   laplacian -- compute the laplacian of a graph
   shortest_path -- compute the shortest path between points on a positive graph
   dijkstra -- use Dijkstra's algorithm for shortest path
   floyd_warshall -- use the Floyd-Warshall algorithm for shortest path
   bellman_ford -- use the Bellman-Ford algorithm for shortest path
   johnson -- use Johnson's algorithm for shortest path
   yen -- use Yen's algorithm for K-shortest paths between two nodes
   breadth_first_order -- compute a breadth-first order of nodes
   depth_first_order -- compute a depth-first order of nodes
   breadth_first_tree -- construct the breadth-first tree from a given node
   depth_first_tree -- construct a depth-first tree from a given node
   minimum_spanning_tree -- construct the minimum spanning tree of a graph
   reverse_cuthill_mckee -- compute permutation for reverse Cuthill-McKee ordering
   maximum_flow -- solve the maximum flow problem for a graph
   maximum_bipartite_matching -- compute a maximum matching of a bipartite graph
   min_weight_full_bipartite_matching -- compute a minimum weight full matching of a bipartite graph
   structural_rank -- compute the structural rank of a graph
   NegativeCycleError

.. autosummary::
   :toctree: generated/

   construct_dist_matrix
   csgraph_from_dense
   csgraph_from_masked
   csgraph_masked_from_dense
   csgraph_to_dense
   csgraph_to_masked
   reconstruct_path

Graph Representations
---------------------
This module uses graphs which are stored in a matrix format. A
graph with N nodes can be represented by an (N x N) adjacency matrix G.
If there is a connection from node i to node j, then G[i, j] = w, where
w is the weight of the connection. For nodes i and j which are
not connected, the value depends on the representation:

- for dense array representations, non-edges are represented by
  G[i, j] = 0, infinity, or NaN.

- for dense masked representations (of type np.ma.MaskedArray), non-edges
  are represented by masked values. This can be useful when graphs with
  zero-weight edges are desired.

- for sparse array representations, non-edges are represented by
  non-entries in the matrix. This sort of sparse representation also
  allows for edges with zero weights.

As a concrete example, imagine that you would like to represent the following
undirected graph::

              G

             (0)
            /   \
           1     2
          /       \
        (2)       (1)

This graph has three nodes, where node 0 and 1 are connected by an edge of
weight 2, and nodes 0 and 2 are connected by an edge of weight 1.
We can construct the dense, masked, and sparse representations as follows,
keeping in mind that an undirected graph is represented by a symmetric matrix::

    >>> import numpy as np
    >>> G_dense = np.array([[0, 2, 1],
    ...                     [2, 0, 0],
    ...                     [1, 0, 0]])
    >>> G_masked = np.ma.masked_values(G_dense, 0)
    >>> from scipy.sparse import csr_matrix
    >>> G_sparse = csr_matrix(G_dense)

This becomes more difficult when zero edges are significant. For example,
consider the situation when we slightly modify the above graph::

             G2

             (0)
            /   \
           0     2
          /       \
        (2)       (1)

This is identical to the previous graph, except nodes 0 and 2 are connected
by an edge of zero weight. In this case, the dense representation above
leads to ambiguities: how can non-edges be represented if zero is a meaningful
value? In this case, either a masked or sparse representation must be used
to eliminate the ambiguity::

    >>> import numpy as np
    >>> G2_data = np.array([[np.inf, 2,      0     ],
    ...                     [2,      np.inf, np.inf],
    ...                     [0,      np.inf, np.inf]])
    >>> G2_masked = np.ma.masked_invalid(G2_data)
    >>> from scipy.sparse.csgraph import csgraph_from_dense
    >>> # G2_sparse = csr_matrix(G2_data) would give the wrong result
    >>> G2_sparse = csgraph_from_dense(G2_data, null_value=np.inf)
    >>> G2_sparse.data
    array([ 2., 0., 2., 0.])

Here we have used a utility routine from the csgraph submodule in order to
convert the dense representation to a sparse representation which can be
understood by the algorithms in this submodule. By viewing the data array, we
can see that the zero values are explicitly encoded in the graph.
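
As a brief sketch of the reverse direction (using the companion routine
``csgraph_to_dense`` listed above), the sparse graph can be turned back into
a dense array with a null value of our choice, so that the explicit zero
edges survive the round trip while non-edges become ``np.inf`` again::

    >>> from scipy.sparse.csgraph import csgraph_to_dense
    >>> G2_dense = csgraph_to_dense(G2_sparse, null_value=np.inf)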

Directed vs. undirected
^^^^^^^^^^^^^^^^^^^^^^^
Matrices may represent either directed or undirected graphs. This is
specified throughout the csgraph module by a boolean keyword. Graphs are
assumed to be directed by default. In a directed graph, traversal from node
i to node j can be accomplished over the edge G[i, j], but not the edge
G[j, i]. Consider the following dense graph::

    >>> import numpy as np
    >>> G_dense = np.array([[0, 1, 0],
    ...                     [2, 0, 3],
    ...                     [0, 4, 0]])

When ``directed=True`` we get the graph::

      ---1--> ---3-->
    (0)     (1)     (2)
      <--2--- <--4---

In a non-directed graph, traversal from node i to node j can be
accomplished over either G[i, j] or G[j, i]. If both edges are not null,
and the two have unequal weights, then the smaller of the two is used.

So for the same graph, when ``directed=False`` we get the graph::

    (0)--1--(1)--3--(2)

Note that a symmetric matrix will represent an undirected graph, regardless
of whether the 'directed' keyword is set to True or False. In this case,
using ``directed=True`` generally leads to more efficient computation.
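
As a small illustration (a sketch relying only on the ``directed`` keyword of
the :func:`shortest_path` routine listed above), the two interpretations of
the same matrix give different distance matrices::

    >>> from scipy.sparse.csgraph import shortest_path
    >>> shortest_path(G_dense, directed=True)
    array([[0., 1., 4.],
           [2., 0., 3.],
           [6., 4., 0.]])
    >>> shortest_path(G_dense, directed=False)
    array([[0., 1., 4.],
           [1., 0., 3.],
           [4., 3., 0.]])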

The routines in this module accept as input either scipy.sparse representations
(csr, csc, or lil format), masked representations, or dense representations
with non-edges indicated by zeros, infinities, and NaN entries.
"""  # noqa: E501

__docformat__ = "restructuredtext en"

__all__ = ['connected_components',
           'laplacian',
           'shortest_path',
           'floyd_warshall',
           'dijkstra',
           'bellman_ford',
           'johnson',
           'yen',
           'breadth_first_order',
           'depth_first_order',
           'breadth_first_tree',
           'depth_first_tree',
           'minimum_spanning_tree',
           'reverse_cuthill_mckee',
           'maximum_flow',
           'maximum_bipartite_matching',
           'min_weight_full_bipartite_matching',
           'structural_rank',
           'construct_dist_matrix',
           'reconstruct_path',
           'csgraph_masked_from_dense',
           'csgraph_from_dense',
           'csgraph_from_masked',
           'csgraph_to_dense',
           'csgraph_to_masked',
           'NegativeCycleError']

from ._laplacian import laplacian
from ._shortest_path import (
    shortest_path, floyd_warshall, dijkstra, bellman_ford, johnson, yen,
    NegativeCycleError
)
from ._traversal import (
    breadth_first_order, depth_first_order, breadth_first_tree,
    depth_first_tree, connected_components
)
from ._min_spanning_tree import minimum_spanning_tree
from ._flow import maximum_flow
from ._matching import (
    maximum_bipartite_matching, min_weight_full_bipartite_matching
)
from ._reordering import reverse_cuthill_mckee, structural_rank
from ._tools import (
    construct_dist_matrix, reconstruct_path, csgraph_from_dense,
    csgraph_to_dense, csgraph_masked_from_dense, csgraph_from_masked,
    csgraph_to_masked
)

from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
@@ -0,0 +1,562 @@
"""
Laplacian of a compressed-sparse graph
"""

import numpy as np
from scipy.sparse import issparse
from scipy.sparse.linalg import LinearOperator
from scipy.sparse._sputils import convert_pydata_sparse_to_scipy, is_pydata_spmatrix


###############################################################################
# Graph laplacian
def laplacian(
    csgraph,
    normed=False,
    return_diag=False,
    use_out_degree=False,
    *,
    copy=True,
    form="array",
    dtype=None,
    symmetrized=False,
):
    """
    Return the Laplacian of a directed graph.

    Parameters
    ----------
    csgraph : array_like or sparse matrix, 2 dimensions
        compressed-sparse graph, with shape (N, N).
    normed : bool, optional
        If True, then compute symmetrically normalized Laplacian.
        Default: False.
    return_diag : bool, optional
        If True, then also return an array related to vertex degrees.
        Default: False.
    use_out_degree : bool, optional
        If True, then use out-degree instead of in-degree.
        This distinction matters only if the graph is asymmetric.
        Default: False.
    copy : bool, optional
        If False, then change `csgraph` in place if possible,
        avoiding doubling the memory use.
        Default: True, for backward compatibility.
    form : 'array', or 'function', or 'lo'
        Determines the format of the output Laplacian:

        * 'array' is a numpy array;
        * 'function' is a pointer to evaluating the Laplacian-vector
          or Laplacian-matrix product;
        * 'lo' results in the format of the `LinearOperator`.

        Choosing 'function' or 'lo' always avoids doubling
        the memory use, ignoring the `copy` value.
        Default: 'array', for backward compatibility.
    dtype : None or one of numeric numpy dtypes, optional
        The dtype of the output. If ``dtype=None``, the dtype of the
        output matches the dtype of the input csgraph, except for
        the case ``normed=True`` and integer-like csgraph, where
        the output dtype is 'float', allowing accurate normalization
        but dramatically increasing the memory use.
        Default: None, for backward compatibility.
    symmetrized : bool, optional
        If True, then the output Laplacian is symmetric/Hermitian.
        The symmetrization is done by ``csgraph + csgraph.T.conj``
        without dividing by 2 to preserve integer dtypes if possible,
        prior to the construction of the Laplacian.
        The symmetrization will increase the memory footprint of
        sparse matrices unless the sparsity pattern is symmetric or
        `form` is 'function' or 'lo'.
        Default: False, for backward compatibility.

    Returns
    -------
    lap : ndarray, or sparse matrix, or `LinearOperator`
        The N x N Laplacian of csgraph. It will be a NumPy array (dense)
        if the input was dense, or a sparse matrix otherwise, or
        the format of a function or `LinearOperator` if
        `form` equals 'function' or 'lo', respectively.
    diag : ndarray, optional
        The length-N main diagonal of the Laplacian matrix.
        For the normalized Laplacian, this is the array of square roots
        of vertex degrees, or 1 if the degree is zero.

    Notes
    -----
    The Laplacian matrix of a graph is sometimes referred to as the
    "Kirchhoff matrix" or just the "Laplacian", and is useful in many
    parts of spectral graph theory.
    In particular, the eigen-decomposition of the Laplacian can give
    insight into many properties of the graph, and it is commonly used
    for spectral data embedding and clustering.

    The constructed Laplacian doubles the memory use if ``copy=True`` and
    ``form="array"``, which is the default.
    Choosing ``copy=False`` has an effect only if ``form="array"`` and the
    input is a sparse matrix in the ``coo`` format or a dense array; for
    integer input with ``normed=True`` a float copy is always made.

    Sparse input is reformatted into ``coo`` if ``form="array"``,
    which is the default.

    If the input adjacency matrix is not symmetric, the Laplacian is
    also non-symmetric unless ``symmetrized=True`` is used.

    Diagonal entries of the input adjacency matrix are ignored and
    replaced with zeros for the purpose of normalization where ``normed=True``.
    The normalization uses the inverse square roots of row-sums of the input
    adjacency matrix, and thus may fail if the row-sums contain
    negative or complex values with a non-zero imaginary part.
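
    Written informally, with ``A`` the adjacency matrix (diagonal zeroed) and
    ``D`` the diagonal matrix of the selected vertex degrees, the returned
    matrix is ``D - A``, and ``normed=True`` instead returns
    ``I - D^(-1/2) A D^(-1/2)``, with isolated vertices assigned a unit
    scaling factor.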

    The normalization is symmetric, making the normalized Laplacian also
    symmetric if the input csgraph was symmetric.

    References
    ----------
    .. [1] Laplacian matrix. https://en.wikipedia.org/wiki/Laplacian_matrix

    Examples
    --------
    >>> import numpy as np
    >>> from scipy.sparse import csgraph

    Our first illustration is the symmetric graph

    >>> G = np.arange(4) * np.arange(4)[:, np.newaxis]
    >>> G
    array([[0, 0, 0, 0],
           [0, 1, 2, 3],
           [0, 2, 4, 6],
           [0, 3, 6, 9]])

    and its symmetric Laplacian matrix

    >>> csgraph.laplacian(G)
    array([[ 0,  0,  0,  0],
           [ 0,  5, -2, -3],
           [ 0, -2,  8, -6],
           [ 0, -3, -6,  9]])

    The non-symmetric graph

    >>> G = np.arange(9).reshape(3, 3)
    >>> G
    array([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]])

    has different row- and column sums, resulting in two varieties
    of the Laplacian matrix, using an in-degree, which is the default

    >>> L_in_degree = csgraph.laplacian(G)
    >>> L_in_degree
    array([[ 9, -1, -2],
           [-3,  8, -5],
           [-6, -7,  7]])

    or alternatively an out-degree

    >>> L_out_degree = csgraph.laplacian(G, use_out_degree=True)
    >>> L_out_degree
    array([[ 3, -1, -2],
           [-3,  8, -5],
           [-6, -7, 13]])

    Constructing a symmetric Laplacian matrix, one can add the two as

    >>> L_in_degree + L_out_degree.T
    array([[ 12,  -4,  -8],
           [ -4,  16, -12],
           [ -8, -12,  20]])

    or use the ``symmetrized=True`` option

    >>> csgraph.laplacian(G, symmetrized=True)
    array([[ 12,  -4,  -8],
           [ -4,  16, -12],
           [ -8, -12,  20]])

    that is equivalent to symmetrizing the original graph

    >>> csgraph.laplacian(G + G.T)
    array([[ 12,  -4,  -8],
           [ -4,  16, -12],
           [ -8, -12,  20]])

    The goal of normalization is to make the non-zero diagonal entries
    of the Laplacian matrix all equal to one, also scaling off-diagonal
    entries correspondingly. The normalization can be done manually, e.g.,

    >>> G = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    >>> L, d = csgraph.laplacian(G, return_diag=True)
    >>> L
    array([[ 2, -1, -1],
           [-1,  2, -1],
           [-1, -1,  2]])
    >>> d
    array([2, 2, 2])
    >>> scaling = np.sqrt(d)
    >>> scaling
    array([1.41421356, 1.41421356, 1.41421356])
    >>> (1/scaling)*L*(1/scaling)
    array([[ 1. , -0.5, -0.5],
           [-0.5,  1. , -0.5],
           [-0.5, -0.5,  1. ]])

    Or using the ``normed=True`` option

    >>> L, d = csgraph.laplacian(G, return_diag=True, normed=True)
    >>> L
    array([[ 1. , -0.5, -0.5],
           [-0.5,  1. , -0.5],
           [-0.5, -0.5,  1. ]])

    which now instead of the diagonal returns the scaling coefficients

    >>> d
    array([1.41421356, 1.41421356, 1.41421356])

    Zero scaling coefficients are substituted with 1s, where scaling
    has thus no effect, e.g.,

    >>> G = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]])
    >>> G
    array([[0, 0, 0],
           [0, 0, 1],
           [0, 1, 0]])
    >>> L, d = csgraph.laplacian(G, return_diag=True, normed=True)
    >>> L
    array([[ 0., -0., -0.],
           [-0.,  1., -1.],
           [-0., -1.,  1.]])
    >>> d
    array([1., 1., 1.])

    Only the symmetric normalization is implemented, resulting
    in a symmetric Laplacian matrix if and only if its graph is symmetric
    and has all non-negative degrees, like in the examples above.

    The output Laplacian matrix is by default a dense array or a sparse matrix
    inferring its shape, format, and dtype from the input graph matrix:

    >>> G = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]]).astype(np.float32)
    >>> G
    array([[0., 1., 1.],
           [1., 0., 1.],
           [1., 1., 0.]], dtype=float32)
    >>> csgraph.laplacian(G)
    array([[ 2., -1., -1.],
           [-1.,  2., -1.],
           [-1., -1.,  2.]], dtype=float32)

    but can alternatively be generated matrix-free as a LinearOperator:

    >>> L = csgraph.laplacian(G, form="lo")
    >>> L
    <3x3 _CustomLinearOperator with dtype=float32>
    >>> L(np.eye(3))
    array([[ 2., -1., -1.],
           [-1.,  2., -1.],
           [-1., -1.,  2.]])

    or as a lambda-function:

    >>> L = csgraph.laplacian(G, form="function")
    >>> L
    <function _laplace.<locals>.<lambda> at 0x0000012AE6F5A598>
    >>> L(np.eye(3))
    array([[ 2., -1., -1.],
           [-1.,  2., -1.],
           [-1., -1.,  2.]])

    The Laplacian matrix is used for
    spectral data clustering and embedding
    as well as for spectral graph partitioning.
    Our final example illustrates the latter
    for a noisy directed linear graph.

    >>> from scipy.sparse import diags, random
    >>> from scipy.sparse.linalg import lobpcg

    Create a directed linear graph with ``N=35`` vertices
    using a sparse adjacency matrix ``G``:

    >>> N = 35
    >>> G = diags(np.ones(N-1), 1, format="csr")

    Fix a random seed ``rng`` and add a random sparse noise to the graph ``G``:

    >>> rng = np.random.default_rng()
    >>> G += 1e-2 * random(N, N, density=0.1, random_state=rng)

    Set initial approximations for eigenvectors:

    >>> X = rng.random((N, 2))

    The constant vector of ones is always a trivial eigenvector
    of the non-normalized Laplacian to be filtered out:

    >>> Y = np.ones((N, 1))

    Alternating (1) the sign of the graph weights allows determining
    labels for spectral max- and min- cuts in a single loop.
    Since the graph is undirected, the option ``symmetrized=True``
    must be used in the construction of the Laplacian.
    The option ``normed=True`` cannot be used in (2) for the negative weights
    here as the symmetric normalization evaluates square roots.
    The option ``form="lo"`` in (2) is matrix-free, i.e., guarantees
    a fixed memory footprint and read-only access to the graph.
    Calling the eigenvalue solver ``lobpcg`` (3) computes the Fiedler vector
    that determines the labels as the signs of its components in (5).
    Since the sign in an eigenvector is not deterministic and can flip,
    we fix the sign of the first component to be always +1 in (4).

    >>> for cut in ["max", "min"]:
    ...     G = -G  # 1.
    ...     L = csgraph.laplacian(G, symmetrized=True, form="lo")  # 2.
    ...     _, eves = lobpcg(L, X, Y=Y, largest=False, tol=1e-2)  # 3.
    ...     eves *= np.sign(eves[0, 0])  # 4.
    ...     print(cut + "-cut labels:\\n", 1 * (eves[:, 0]>0))  # 5.
    max-cut labels:
     [1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]
    min-cut labels:
     [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

    As anticipated for a (slightly noisy) linear graph,
    the max-cut strips all the edges of the graph, coloring all
    odd vertices into one color and all even vertices into another one,
    while the balanced min-cut partitions the graph
    in the middle by deleting a single edge.
    Both determined partitions are optimal.
    """
    is_pydata_sparse = is_pydata_spmatrix(csgraph)
    if is_pydata_sparse:
        pydata_sparse_cls = csgraph.__class__
        csgraph = convert_pydata_sparse_to_scipy(csgraph)
    if csgraph.ndim != 2 or csgraph.shape[0] != csgraph.shape[1]:
        raise ValueError('csgraph must be a square matrix or array')

    if normed and (
        np.issubdtype(csgraph.dtype, np.signedinteger)
        or np.issubdtype(csgraph.dtype, np.uint)
    ):
        csgraph = csgraph.astype(np.float64)

    if form == "array":
        create_lap = (
            _laplacian_sparse if issparse(csgraph) else _laplacian_dense
        )
    else:
        create_lap = (
            _laplacian_sparse_flo
            if issparse(csgraph)
            else _laplacian_dense_flo
        )

    degree_axis = 1 if use_out_degree else 0

    lap, d = create_lap(
        csgraph,
        normed=normed,
        axis=degree_axis,
        copy=copy,
        form=form,
        dtype=dtype,
        symmetrized=symmetrized,
    )
    if is_pydata_sparse:
        lap = pydata_sparse_cls.from_scipy_sparse(lap)
    if return_diag:
        return lap, d
    return lap


def _setdiag_dense(m, d):
    step = len(d) + 1
    m.flat[::step] = d


def _laplace(m, d):
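    # Matrix-free action of the non-normalized Laplacian:
    # L @ v == diag(d) @ v - m @ v, where d holds the chosen vertex degrees.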
    return lambda v: v * d[:, np.newaxis] - m @ v


def _laplace_normed(m, d, nd):
    laplace = _laplace(m, d)
    return lambda v: nd[:, np.newaxis] * laplace(v * nd[:, np.newaxis])


def _laplace_sym(m, d):
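    # Same matrix-free action, but for the symmetrized graph m + m.conj().T,
    # evaluated without ever materializing the transposed matrix.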
    return (
        lambda v: v * d[:, np.newaxis]
        - m @ v
        - np.transpose(np.conjugate(np.transpose(np.conjugate(v)) @ m))
    )


def _laplace_normed_sym(m, d, nd):
    laplace_sym = _laplace_sym(m, d)
    return lambda v: nd[:, np.newaxis] * laplace_sym(v * nd[:, np.newaxis])


def _linearoperator(mv, shape, dtype):
    return LinearOperator(matvec=mv, matmat=mv, shape=shape, dtype=dtype)


def _laplacian_sparse_flo(graph, normed, axis, copy, form, dtype, symmetrized):
    # The keyword argument `copy` is unused and has no effect here.
    del copy

    if dtype is None:
        dtype = graph.dtype

    graph_sum = np.asarray(graph.sum(axis=axis)).ravel()
    graph_diagonal = graph.diagonal()
    diag = graph_sum - graph_diagonal
    if symmetrized:
        graph_sum += np.asarray(graph.sum(axis=1 - axis)).ravel()
        diag = graph_sum - graph_diagonal - graph_diagonal

    if normed:
        isolated_node_mask = diag == 0
        w = np.where(isolated_node_mask, 1, np.sqrt(diag))
        if symmetrized:
            md = _laplace_normed_sym(graph, graph_sum, 1.0 / w)
        else:
            md = _laplace_normed(graph, graph_sum, 1.0 / w)
        if form == "function":
            return md, w.astype(dtype, copy=False)
        elif form == "lo":
            m = _linearoperator(md, shape=graph.shape, dtype=dtype)
            return m, w.astype(dtype, copy=False)
        else:
            raise ValueError(f"Invalid form: {form!r}")
    else:
        if symmetrized:
            md = _laplace_sym(graph, graph_sum)
        else:
            md = _laplace(graph, graph_sum)
        if form == "function":
            return md, diag.astype(dtype, copy=False)
        elif form == "lo":
            m = _linearoperator(md, shape=graph.shape, dtype=dtype)
            return m, diag.astype(dtype, copy=False)
        else:
            raise ValueError(f"Invalid form: {form!r}")


def _laplacian_sparse(graph, normed, axis, copy, form, dtype, symmetrized):
    # The keyword argument `form` is unused and has no effect here.
    del form

    if dtype is None:
        dtype = graph.dtype

    needs_copy = False
    if graph.format in ('lil', 'dok'):
        m = graph.tocoo()
    else:
        m = graph
        if copy:
            needs_copy = True

    if symmetrized:
        m += m.T.conj()

    w = np.asarray(m.sum(axis=axis)).ravel() - m.diagonal()
    if normed:
        m = m.tocoo(copy=needs_copy)
        isolated_node_mask = (w == 0)
        w = np.where(isolated_node_mask, 1, np.sqrt(w))
        m.data /= w[m.row]
        m.data /= w[m.col]
        m.data *= -1
        m.setdiag(1 - isolated_node_mask)
    else:
        if m.format == 'dia':
            m = m.copy()
        else:
            m = m.tocoo(copy=needs_copy)
        m.data *= -1
        m.setdiag(w)

    return m.astype(dtype, copy=False), w.astype(dtype)


def _laplacian_dense_flo(graph, normed, axis, copy, form, dtype, symmetrized):

    if copy:
        m = np.array(graph)
    else:
        m = np.asarray(graph)

    if dtype is None:
        dtype = m.dtype

    graph_sum = m.sum(axis=axis)
    graph_diagonal = m.diagonal()
    diag = graph_sum - graph_diagonal
    if symmetrized:
        graph_sum += m.sum(axis=1 - axis)
        diag = graph_sum - graph_diagonal - graph_diagonal

    if normed:
        isolated_node_mask = diag == 0
        w = np.where(isolated_node_mask, 1, np.sqrt(diag))
        if symmetrized:
            md = _laplace_normed_sym(m, graph_sum, 1.0 / w)
        else:
            md = _laplace_normed(m, graph_sum, 1.0 / w)
        if form == "function":
            return md, w.astype(dtype, copy=False)
        elif form == "lo":
            m = _linearoperator(md, shape=graph.shape, dtype=dtype)
            return m, w.astype(dtype, copy=False)
        else:
            raise ValueError(f"Invalid form: {form!r}")
    else:
        if symmetrized:
            md = _laplace_sym(m, graph_sum)
        else:
            md = _laplace(m, graph_sum)
        if form == "function":
            return md, diag.astype(dtype, copy=False)
        elif form == "lo":
            m = _linearoperator(md, shape=graph.shape, dtype=dtype)
            return m, diag.astype(dtype, copy=False)
        else:
            raise ValueError(f"Invalid form: {form!r}")


def _laplacian_dense(graph, normed, axis, copy, form, dtype, symmetrized):

    if form != "array":
        raise ValueError(f'{form!r} must be "array"')

    if dtype is None:
        dtype = graph.dtype

    if copy:
        m = np.array(graph)
    else:
        m = np.asarray(graph)

    if dtype is None:
        dtype = m.dtype

    if symmetrized:
        m += m.T.conj()
    np.fill_diagonal(m, 0)
    w = m.sum(axis=axis)
    if normed:
        isolated_node_mask = (w == 0)
        w = np.where(isolated_node_mask, 1, np.sqrt(w))
        m /= w
        m /= w[:, np.newaxis]
        m *= -1
        _setdiag_dense(m, 1 - isolated_node_mask)
    else:
        m *= -1
        _setdiag_dense(m, w)

    return m.astype(dtype, copy=False), w.astype(dtype, copy=False)
@@ -0,0 +1,61 @@
import numpy as np
from scipy.sparse import csr_matrix, issparse
from scipy.sparse._sputils import convert_pydata_sparse_to_scipy
from scipy.sparse.csgraph._tools import (
    csgraph_to_dense, csgraph_from_dense,
    csgraph_masked_from_dense, csgraph_from_masked
)

DTYPE = np.float64


def validate_graph(csgraph, directed, dtype=DTYPE,
                   csr_output=True, dense_output=True,
                   copy_if_dense=False, copy_if_sparse=False,
                   null_value_in=0, null_value_out=np.inf,
                   infinity_null=True, nan_null=True):
    """Routine for validation and conversion of csgraph inputs"""
    if not (csr_output or dense_output):
        raise ValueError("Internal: dense or csr output must be true")

    csgraph = convert_pydata_sparse_to_scipy(csgraph)

    # if undirected and csc storage, then transposing in-place
    # is quicker than later converting to csr.
    if (not directed) and issparse(csgraph) and csgraph.format == "csc":
        csgraph = csgraph.T

    if issparse(csgraph):
        if csr_output:
            csgraph = csr_matrix(csgraph, dtype=DTYPE, copy=copy_if_sparse)
        else:
            csgraph = csgraph_to_dense(csgraph, null_value=null_value_out)
    elif np.ma.isMaskedArray(csgraph):
        if dense_output:
            mask = csgraph.mask
            csgraph = np.array(csgraph.data, dtype=DTYPE, copy=copy_if_dense)
            csgraph[mask] = null_value_out
        else:
            csgraph = csgraph_from_masked(csgraph)
    else:
        if dense_output:
            csgraph = csgraph_masked_from_dense(csgraph,
                                                copy=copy_if_dense,
                                                null_value=null_value_in,
                                                nan_null=nan_null,
                                                infinity_null=infinity_null)
            mask = csgraph.mask
            csgraph = np.asarray(csgraph.data, dtype=DTYPE)
            csgraph[mask] = null_value_out
        else:
            csgraph = csgraph_from_dense(csgraph, null_value=null_value_in,
                                         infinity_null=infinity_null,
                                         nan_null=nan_null)

    if csgraph.ndim != 2:
        raise ValueError("compressed-sparse graph must be 2-D")

    if csgraph.shape[0] != csgraph.shape[1]:
        raise ValueError("compressed-sparse graph must be shape (N, N)")

    return csgraph
@@ -0,0 +1,119 @@
import numpy as np
from numpy.testing import assert_equal, assert_array_almost_equal
from scipy.sparse import csgraph, csr_array


def test_weak_connections():
    Xde = np.array([[0, 1, 0],
                    [0, 0, 0],
                    [0, 0, 0]])

    Xsp = csgraph.csgraph_from_dense(Xde, null_value=0)

    for X in Xsp, Xde:
        n_components, labels = \
            csgraph.connected_components(X, directed=True,
                                         connection='weak')

        assert_equal(n_components, 2)
        assert_array_almost_equal(labels, [0, 0, 1])


def test_strong_connections():
    X1de = np.array([[0, 1, 0],
                     [0, 0, 0],
                     [0, 0, 0]])
    X2de = X1de + X1de.T

    X1sp = csgraph.csgraph_from_dense(X1de, null_value=0)
    X2sp = csgraph.csgraph_from_dense(X2de, null_value=0)

    for X in X1sp, X1de:
        n_components, labels = \
            csgraph.connected_components(X, directed=True,
                                         connection='strong')

        assert_equal(n_components, 3)
        labels.sort()
        assert_array_almost_equal(labels, [0, 1, 2])

    for X in X2sp, X2de:
        n_components, labels = \
            csgraph.connected_components(X, directed=True,
                                         connection='strong')

        assert_equal(n_components, 2)
        labels.sort()
        assert_array_almost_equal(labels, [0, 0, 1])


def test_strong_connections2():
    X = np.array([[0, 0, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [0, 0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 0]])
    n_components, labels = \
        csgraph.connected_components(X, directed=True,
                                     connection='strong')
    assert_equal(n_components, 5)
    labels.sort()
    assert_array_almost_equal(labels, [0, 1, 2, 2, 3, 4])


def test_weak_connections2():
    X = np.array([[0, 0, 0, 0, 0, 0],
                  [1, 0, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 0]])
    n_components, labels = \
        csgraph.connected_components(X, directed=True,
                                     connection='weak')
    assert_equal(n_components, 2)
    labels.sort()
    assert_array_almost_equal(labels, [0, 0, 1, 1, 1, 1])


def test_ticket1876():
    # Regression test: this failed in the original implementation.
    # There should be two strongly-connected components; previously gave one.
    g = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]])
    n_components, labels = csgraph.connected_components(g, connection='strong')

    assert_equal(n_components, 2)
    assert_equal(labels[0], labels[1])
    assert_equal(labels[2], labels[3])


def test_fully_connected_graph():
    # Fully connected dense matrices raised an exception.
    # https://github.com/scipy/scipy/issues/3818
    g = np.ones((4, 4))
    n_components, labels = csgraph.connected_components(g)
    assert_equal(n_components, 1)


def test_int64_indices_undirected():
    # See https://github.com/scipy/scipy/issues/18716
    g = csr_array(([1], np.array([[0], [1]], dtype=np.int64)), shape=(2, 2))
    assert g.indices.dtype == np.int64
    n, labels = csgraph.connected_components(g, directed=False)
    assert n == 1
    assert_array_almost_equal(labels, [0, 0])


def test_int64_indices_directed():
    # See https://github.com/scipy/scipy/issues/18716
    g = csr_array(([1], np.array([[0], [1]], dtype=np.int64)), shape=(2, 2))
    assert g.indices.dtype == np.int64
    n, labels = csgraph.connected_components(g, directed=True,
                                             connection='strong')
    assert n == 2
    assert_array_almost_equal(labels, [1, 0])
@@ -0,0 +1,61 @@
import numpy as np
from numpy.testing import assert_array_almost_equal
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import csgraph_from_dense, csgraph_to_dense


def test_csgraph_from_dense():
    np.random.seed(1234)
    G = np.random.random((10, 10))
    some_nulls = (G < 0.4)
    all_nulls = (G < 0.8)

    for null_value in [0, np.nan, np.inf]:
        G[all_nulls] = null_value
        with np.errstate(invalid="ignore"):
            G_csr = csgraph_from_dense(G, null_value=0)

        G[all_nulls] = 0
        assert_array_almost_equal(G, G_csr.toarray())

    for null_value in [np.nan, np.inf]:
        G[all_nulls] = 0
        G[some_nulls] = null_value
        with np.errstate(invalid="ignore"):
            G_csr = csgraph_from_dense(G, null_value=0)

        G[all_nulls] = 0
        assert_array_almost_equal(G, G_csr.toarray())


def test_csgraph_to_dense():
    np.random.seed(1234)
    G = np.random.random((10, 10))
    nulls = (G < 0.8)
    G[nulls] = np.inf

    G_csr = csgraph_from_dense(G)

    for null_value in [0, 10, -np.inf, np.inf]:
        G[nulls] = null_value
        assert_array_almost_equal(G, csgraph_to_dense(G_csr, null_value))


def test_multiple_edges():
    # create a random square matrix with an even number of elements
    np.random.seed(1234)
    X = np.random.random((10, 10))
    Xcsr = csr_matrix(X)

    # now double-up every other column
    Xcsr.indices[::2] = Xcsr.indices[1::2]

    # normal sparse toarray() will sum the duplicated edges
    Xdense = Xcsr.toarray()
    assert_array_almost_equal(Xdense[:, 1::2],
                              X[:, ::2] + X[:, 1::2])

    # csgraph_to_dense chooses the minimum of each duplicated edge
    Xdense = csgraph_to_dense(Xcsr)
    assert_array_almost_equal(Xdense[:, 1::2],
                              np.minimum(X[:, ::2], X[:, 1::2]))
@@ -0,0 +1,201 @@
import numpy as np
from numpy.testing import assert_array_equal
import pytest

from scipy.sparse import csr_matrix, csc_matrix
from scipy.sparse.csgraph import maximum_flow
from scipy.sparse.csgraph._flow import (
    _add_reverse_edges, _make_edge_pointers, _make_tails
)

methods = ['edmonds_karp', 'dinic']


def test_raises_on_dense_input():
    with pytest.raises(TypeError):
        graph = np.array([[0, 1], [0, 0]])
        maximum_flow(graph, 0, 1)
        maximum_flow(graph, 0, 1, method='edmonds_karp')


def test_raises_on_csc_input():
    with pytest.raises(TypeError):
        graph = csc_matrix([[0, 1], [0, 0]])
        maximum_flow(graph, 0, 1)
        maximum_flow(graph, 0, 1, method='edmonds_karp')


def test_raises_on_floating_point_input():
    with pytest.raises(ValueError):
        graph = csr_matrix([[0, 1.5], [0, 0]], dtype=np.float64)
        maximum_flow(graph, 0, 1)
        maximum_flow(graph, 0, 1, method='edmonds_karp')


def test_raises_on_non_square_input():
    with pytest.raises(ValueError):
        graph = csr_matrix([[0, 1, 2], [2, 1, 0]])
        maximum_flow(graph, 0, 1)


def test_raises_when_source_is_sink():
    with pytest.raises(ValueError):
        graph = csr_matrix([[0, 1], [0, 0]])
        maximum_flow(graph, 0, 0)
        maximum_flow(graph, 0, 0, method='edmonds_karp')


@pytest.mark.parametrize('method', methods)
@pytest.mark.parametrize('source', [-1, 2, 3])
def test_raises_when_source_is_out_of_bounds(source, method):
    with pytest.raises(ValueError):
        graph = csr_matrix([[0, 1], [0, 0]])
        maximum_flow(graph, source, 1, method=method)


@pytest.mark.parametrize('method', methods)
@pytest.mark.parametrize('sink', [-1, 2, 3])
def test_raises_when_sink_is_out_of_bounds(sink, method):
    with pytest.raises(ValueError):
        graph = csr_matrix([[0, 1], [0, 0]])
        maximum_flow(graph, 0, sink, method=method)


@pytest.mark.parametrize('method', methods)
def test_simple_graph(method):
    # This graph looks as follows:
    #     (0) --5--> (1)
    graph = csr_matrix([[0, 5], [0, 0]])
    res = maximum_flow(graph, 0, 1, method=method)
    assert res.flow_value == 5
    expected_flow = np.array([[0, 5], [-5, 0]])
    assert_array_equal(res.flow.toarray(), expected_flow)


@pytest.mark.parametrize('method', methods)
def test_bottle_neck_graph(method):
    # This graph cannot use the full capacity between 0 and 1:
    #     (0) --5--> (1) --3--> (2)
    graph = csr_matrix([[0, 5, 0], [0, 0, 3], [0, 0, 0]])
    res = maximum_flow(graph, 0, 2, method=method)
    assert res.flow_value == 3
    expected_flow = np.array([[0, 3, 0], [-3, 0, 3], [0, -3, 0]])
    assert_array_equal(res.flow.toarray(), expected_flow)


@pytest.mark.parametrize('method', methods)
def test_backwards_flow(method):
    # This example causes backwards flow between vertices 3 and 4,
    # and so this test ensures that we handle that accordingly. See
    # https://stackoverflow.com/q/38843963/5085211
    # for more information.
    graph = csr_matrix([[0, 10, 0, 0, 10, 0, 0, 0],
                        [0, 0, 10, 0, 0, 0, 0, 0],
                        [0, 0, 0, 10, 0, 0, 0, 0],
                        [0, 0, 0, 0, 0, 0, 0, 10],
                        [0, 0, 0, 10, 0, 10, 0, 0],
                        [0, 0, 0, 0, 0, 0, 10, 0],
                        [0, 0, 0, 0, 0, 0, 0, 10],
                        [0, 0, 0, 0, 0, 0, 0, 0]])
    res = maximum_flow(graph, 0, 7, method=method)
    assert res.flow_value == 20
    expected_flow = np.array([[0, 10, 0, 0, 10, 0, 0, 0],
                              [-10, 0, 10, 0, 0, 0, 0, 0],
                              [0, -10, 0, 10, 0, 0, 0, 0],
                              [0, 0, -10, 0, 0, 0, 0, 10],
                              [-10, 0, 0, 0, 0, 10, 0, 0],
                              [0, 0, 0, 0, -10, 0, 10, 0],
                              [0, 0, 0, 0, 0, -10, 0, 10],
                              [0, 0, 0, -10, 0, 0, -10, 0]])
    assert_array_equal(res.flow.toarray(), expected_flow)


@pytest.mark.parametrize('method', methods)
def test_example_from_clrs_chapter_26_1(method):
    # See page 659 in CLRS second edition, but note that the maximum flow
    # we find is slightly different than the one in CLRS; we push a flow of
    # 12 to v_1 instead of v_2.
    graph = csr_matrix([[0, 16, 13, 0, 0, 0],
                        [0, 0, 10, 12, 0, 0],
                        [0, 4, 0, 0, 14, 0],
                        [0, 0, 9, 0, 0, 20],
                        [0, 0, 0, 7, 0, 4],
                        [0, 0, 0, 0, 0, 0]])
    res = maximum_flow(graph, 0, 5, method=method)
    assert res.flow_value == 23
    expected_flow = np.array([[0, 12, 11, 0, 0, 0],
                              [-12, 0, 0, 12, 0, 0],
                              [-11, 0, 0, 0, 11, 0],
                              [0, -12, 0, 0, -7, 19],
                              [0, 0, -11, 7, 0, 4],
                              [0, 0, 0, -19, -4, 0]])
    assert_array_equal(res.flow.toarray(), expected_flow)


@pytest.mark.parametrize('method', methods)
def test_disconnected_graph(method):
    # This tests the following disconnected graph:
    #     (0) --5--> (1)    (2) --3--> (3)
    graph = csr_matrix([[0, 5, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 9, 3],
                        [0, 0, 0, 0]])
    res = maximum_flow(graph, 0, 3, method=method)
    assert res.flow_value == 0
    expected_flow = np.zeros((4, 4), dtype=np.int32)
    assert_array_equal(res.flow.toarray(), expected_flow)


@pytest.mark.parametrize('method', methods)
def test_add_reverse_edges_large_graph(method):
    # Regression test for https://github.com/scipy/scipy/issues/14385
    n = 100_000
    indices = np.arange(1, n)
    indptr = np.array(list(range(n)) + [n - 1])
    data = np.ones(n - 1, dtype=np.int32)
    graph = csr_matrix((data, indices, indptr), shape=(n, n))
    res = maximum_flow(graph, 0, n - 1, method=method)
    assert res.flow_value == 1
    expected_flow = graph - graph.transpose()
    assert_array_equal(res.flow.data, expected_flow.data)
    assert_array_equal(res.flow.indices, expected_flow.indices)
    assert_array_equal(res.flow.indptr, expected_flow.indptr)


@pytest.mark.parametrize("a,b_data_expected", [
    ([[]], []),
    ([[0], [0]], []),
    ([[1, 0, 2], [0, 0, 0], [0, 3, 0]], [1, 2, 0, 0, 3]),
    ([[9, 8, 7], [4, 5, 6], [0, 0, 0]], [9, 8, 7, 4, 5, 6, 0, 0])])
def test_add_reverse_edges(a, b_data_expected):
    """Test that the reversal of the edges of the input graph works
    as expected.
    """
    a = csr_matrix(a, dtype=np.int32, shape=(len(a), len(a)))
    b = _add_reverse_edges(a)
    assert_array_equal(b.data, b_data_expected)


@pytest.mark.parametrize("a,expected", [
    ([[]], []),
    ([[0]], []),
    ([[1]], [0]),
    ([[0, 1], [10, 0]], [1, 0]),
    ([[1, 0, 2], [0, 0, 3], [4, 5, 0]], [0, 3, 4, 1, 2])
])
def test_make_edge_pointers(a, expected):
    a = csr_matrix(a, dtype=np.int32)
    rev_edge_ptr = _make_edge_pointers(a)
    assert_array_equal(rev_edge_ptr, expected)


@pytest.mark.parametrize("a,expected", [
    ([[]], []),
    ([[0]], []),
    ([[1]], [0]),
    ([[0, 1], [10, 0]], [0, 1]),
    ([[1, 0, 2], [0, 0, 3], [4, 5, 0]], [0, 0, 1, 2, 2])
])
def test_make_tails(a, expected):
    a = csr_matrix(a, dtype=np.int32)
    tails = _make_tails(a)
    assert_array_equal(tails, expected)
@@ -0,0 +1,369 @@
import pytest
import numpy as np
from numpy.testing import assert_allclose
from pytest import raises as assert_raises
from scipy import sparse

from scipy.sparse import csgraph
from scipy._lib._util import np_long, np_ulong


def check_int_type(mat):
    return np.issubdtype(mat.dtype, np.signedinteger) or np.issubdtype(
        mat.dtype, np_ulong
    )


def test_laplacian_value_error():
    for t in int, float, complex:
        for m in ([1, 1],
                  [[[1]]],
                  [[1, 2, 3], [4, 5, 6]],
                  [[1, 2], [3, 4], [5, 5]]):
            A = np.array(m, dtype=t)
            assert_raises(ValueError, csgraph.laplacian, A)


def _explicit_laplacian(x, normed=False):
    if sparse.issparse(x):
        x = x.toarray()
    x = np.asarray(x)
    y = -1.0 * x
    for j in range(y.shape[0]):
        y[j, j] = x[j, j+1:].sum() + x[j, :j].sum()
    if normed:
        d = np.diag(y).copy()
        d[d == 0] = 1.0
        y /= d[:, None]**.5
        y /= d[None, :]**.5
    return y


def _check_symmetric_graph_laplacian(mat, normed, copy=True):
    if not hasattr(mat, 'shape'):
        mat = eval(mat, dict(np=np, sparse=sparse))

    if sparse.issparse(mat):
        sp_mat = mat
        mat = sp_mat.toarray()
    else:
        sp_mat = sparse.csr_matrix(mat)

    mat_copy = np.copy(mat)
    sp_mat_copy = sparse.csr_matrix(sp_mat, copy=True)

    n_nodes = mat.shape[0]
    explicit_laplacian = _explicit_laplacian(mat, normed=normed)
    laplacian = csgraph.laplacian(mat, normed=normed, copy=copy)
    sp_laplacian = csgraph.laplacian(sp_mat, normed=normed,
                                     copy=copy)

    if copy:
        assert_allclose(mat, mat_copy)
        _assert_allclose_sparse(sp_mat, sp_mat_copy)
    else:
        if not (normed and check_int_type(mat)):
            assert_allclose(laplacian, mat)
            if sp_mat.format == 'coo':
                _assert_allclose_sparse(sp_laplacian, sp_mat)

    assert_allclose(laplacian, sp_laplacian.toarray())

    for tested in [laplacian, sp_laplacian.toarray()]:
        if not normed:
            assert_allclose(tested.sum(axis=0), np.zeros(n_nodes))
        assert_allclose(tested.T, tested)
        assert_allclose(tested, explicit_laplacian)


def test_symmetric_graph_laplacian():
    symmetric_mats = (
        'np.arange(10) * np.arange(10)[:, np.newaxis]',
        'np.ones((7, 7))',
        'np.eye(19)',
        'sparse.diags([1, 1], [-1, 1], shape=(4, 4))',
        'sparse.diags([1, 1], [-1, 1], shape=(4, 4)).toarray()',
        'sparse.diags([1, 1], [-1, 1], shape=(4, 4)).todense()',
        'np.vander(np.arange(4)) + np.vander(np.arange(4)).T'
    )
    for mat in symmetric_mats:
        for normed in True, False:
            for copy in True, False:
                _check_symmetric_graph_laplacian(mat, normed, copy)


def _assert_allclose_sparse(a, b, **kwargs):
    # helper function that can deal with sparse matrices
    if sparse.issparse(a):
        a = a.toarray()
    if sparse.issparse(b):
        b = b.toarray()
    assert_allclose(a, b, **kwargs)


def _check_laplacian_dtype_none(
    A, desired_L, desired_d, normed, use_out_degree, copy, dtype, arr_type
):
    mat = arr_type(A, dtype=dtype)
    L, d = csgraph.laplacian(
        mat,
        normed=normed,
        return_diag=True,
        use_out_degree=use_out_degree,
        copy=copy,
        dtype=None,
    )
    if normed and check_int_type(mat):
        assert L.dtype == np.float64
        assert d.dtype == np.float64
        _assert_allclose_sparse(L, desired_L, atol=1e-12)
        _assert_allclose_sparse(d, desired_d, atol=1e-12)
    else:
        assert L.dtype == dtype
        assert d.dtype == dtype
        desired_L = np.asarray(desired_L).astype(dtype)
        desired_d = np.asarray(desired_d).astype(dtype)
        _assert_allclose_sparse(L, desired_L, atol=1e-12)
        _assert_allclose_sparse(d, desired_d, atol=1e-12)

    if not copy:
        if not (normed and check_int_type(mat)):
            if type(mat) is np.ndarray:
                assert_allclose(L, mat)
            elif mat.format == "coo":
                _assert_allclose_sparse(L, mat)


def _check_laplacian_dtype(
    A, desired_L, desired_d, normed, use_out_degree, copy, dtype, arr_type
):
    mat = arr_type(A, dtype=dtype)
    L, d = csgraph.laplacian(
        mat,
        normed=normed,
        return_diag=True,
        use_out_degree=use_out_degree,
        copy=copy,
        dtype=dtype,
    )
    assert L.dtype == dtype
    assert d.dtype == dtype
    desired_L = np.asarray(desired_L).astype(dtype)
    desired_d = np.asarray(desired_d).astype(dtype)
    _assert_allclose_sparse(L, desired_L, atol=1e-12)
    _assert_allclose_sparse(d, desired_d, atol=1e-12)

    if not copy:
        if not (normed and check_int_type(mat)):
            if type(mat) is np.ndarray:
                assert_allclose(L, mat)
            elif mat.format == 'coo':
                _assert_allclose_sparse(L, mat)


INT_DTYPES = {np.intc, np_long, np.longlong}
REAL_DTYPES = {np.float32, np.float64, np.longdouble}
COMPLEX_DTYPES = {np.complex64, np.complex128, np.clongdouble}
# use sorted list to ensure fixed order of tests
DTYPES = sorted(INT_DTYPES ^ REAL_DTYPES ^ COMPLEX_DTYPES, key=str)


@pytest.mark.parametrize("dtype", DTYPES)
@pytest.mark.parametrize("arr_type", [np.array,
                                      sparse.csr_matrix,
                                      sparse.coo_matrix,
                                      sparse.csr_array,
                                      sparse.coo_array])
@pytest.mark.parametrize("copy", [True, False])
@pytest.mark.parametrize("normed", [True, False])
@pytest.mark.parametrize("use_out_degree", [True, False])
def test_asymmetric_laplacian(use_out_degree, normed,
                              copy, dtype, arr_type):
    # adjacency matrix
    A = [[0, 1, 0],
         [4, 2, 0],
         [0, 0, 0]]
    A = arr_type(np.array(A), dtype=dtype)
    A_copy = A.copy()

    if not normed and use_out_degree:
        # Laplacian matrix using out-degree
        L = [[1, -1, 0],
             [-4, 4, 0],
             [0, 0, 0]]
        d = [1, 4, 0]

    if normed and use_out_degree:
        # normalized Laplacian matrix using out-degree
        L = [[1, -0.5, 0],
             [-2, 1, 0],
             [0, 0, 0]]
        d = [1, 2, 1]

    if not normed and not use_out_degree:
        # Laplacian matrix using in-degree
        L = [[4, -1, 0],
             [-4, 1, 0],
             [0, 0, 0]]
        d = [4, 1, 0]

    if normed and not use_out_degree:
        # normalized Laplacian matrix using in-degree
        L = [[1, -0.5, 0],
             [-2, 1, 0],
             [0, 0, 0]]
        d = [2, 1, 1]

    _check_laplacian_dtype_none(
        A,
        L,
        d,
        normed=normed,
        use_out_degree=use_out_degree,
        copy=copy,
        dtype=dtype,
        arr_type=arr_type,
    )

    _check_laplacian_dtype(
        A_copy,
        L,
        d,
        normed=normed,
        use_out_degree=use_out_degree,
        copy=copy,
        dtype=dtype,
        arr_type=arr_type,
    )


@pytest.mark.parametrize("fmt", ['csr', 'csc', 'coo', 'lil',
                                 'dok', 'dia', 'bsr'])
@pytest.mark.parametrize("normed", [True, False])
@pytest.mark.parametrize("copy", [True, False])
def test_sparse_formats(fmt, normed, copy):
    mat = sparse.diags([1, 1], [-1, 1], shape=(4, 4), format=fmt)
    _check_symmetric_graph_laplacian(mat, normed, copy)


@pytest.mark.parametrize(
    "arr_type", [np.asarray,
                 sparse.csr_matrix,
                 sparse.coo_matrix,
                 sparse.csr_array,
                 sparse.coo_array]
)
@pytest.mark.parametrize("form", ["array", "function", "lo"])
def test_laplacian_symmetrized(arr_type, form):
    # adjacency matrix
    n = 3
    mat = arr_type(np.arange(n * n).reshape(n, n))
    L_in, d_in = csgraph.laplacian(
        mat,
        return_diag=True,
        form=form,
    )
    L_out, d_out = csgraph.laplacian(
        mat,
        return_diag=True,
        use_out_degree=True,
        form=form,
    )
    Ls, ds = csgraph.laplacian(
        mat,
        return_diag=True,
        symmetrized=True,
        form=form,
    )
    Ls_normed, ds_normed = csgraph.laplacian(
        mat,
        return_diag=True,
        symmetrized=True,
        normed=True,
        form=form,
    )
    mat += mat.T
    Lss, dss = csgraph.laplacian(mat, return_diag=True, form=form)
    Lss_normed, dss_normed = csgraph.laplacian(
        mat,
        return_diag=True,
        normed=True,
        form=form,
    )

    assert_allclose(ds, d_in + d_out)
    assert_allclose(ds, dss)
    assert_allclose(ds_normed, dss_normed)

    d = {}
    for L in ["L_in", "L_out", "Ls", "Ls_normed", "Lss", "Lss_normed"]:
        if form == "array":
            d[L] = eval(L)
        else:
            d[L] = eval(L)(np.eye(n, dtype=mat.dtype))

    _assert_allclose_sparse(d["Ls"], d["L_in"] + d["L_out"].T)
    _assert_allclose_sparse(d["Ls"], d["Lss"])
    _assert_allclose_sparse(d["Ls_normed"], d["Lss_normed"])


@pytest.mark.parametrize(
    "arr_type", [np.asarray,
                 sparse.csr_matrix,
                 sparse.coo_matrix,
                 sparse.csr_array,
                 sparse.coo_array]
)
@pytest.mark.parametrize("dtype", DTYPES)
@pytest.mark.parametrize("normed", [True, False])
@pytest.mark.parametrize("symmetrized", [True, False])
@pytest.mark.parametrize("use_out_degree", [True, False])
@pytest.mark.parametrize("form", ["function", "lo"])
def test_format(dtype, arr_type, normed, symmetrized, use_out_degree, form):
    n = 3
    mat = [[0, 1, 0], [4, 2, 0], [0, 0, 0]]
    mat = arr_type(np.array(mat), dtype=dtype)
    Lo, do = csgraph.laplacian(
        mat,
        return_diag=True,
        normed=normed,
        symmetrized=symmetrized,
        use_out_degree=use_out_degree,
        dtype=dtype,
    )
    La, da = csgraph.laplacian(
        mat,
        return_diag=True,
        normed=normed,
        symmetrized=symmetrized,
        use_out_degree=use_out_degree,
        dtype=dtype,
        form="array",
    )
    assert_allclose(do, da)
    _assert_allclose_sparse(Lo, La)

    L, d = csgraph.laplacian(
        mat,
        return_diag=True,
        normed=normed,
        symmetrized=symmetrized,
        use_out_degree=use_out_degree,
        dtype=dtype,
        form=form,
    )
    assert_allclose(d, do)
    assert d.dtype == dtype
    Lm = L(np.eye(n, dtype=mat.dtype)).astype(dtype)
    _assert_allclose_sparse(Lm, Lo, rtol=2e-7, atol=2e-7)
    x = np.arange(6).reshape(3, 2)
    if not (normed and dtype in INT_DTYPES):
        assert_allclose(L(x), Lo @ x)
    else:
        # Normalized Lo is cast to integer, but L() is not
        pass


def test_format_error_message():
    with pytest.raises(ValueError, match="Invalid form: 'toto'"):
        _ = csgraph.laplacian(np.eye(1), form='toto')
@ -0,0 +1,294 @@
from itertools import product

import numpy as np
from numpy.testing import assert_array_equal, assert_equal
import pytest

from scipy.sparse import csr_matrix, coo_matrix, diags
from scipy.sparse.csgraph import (
    maximum_bipartite_matching, min_weight_full_bipartite_matching
)


def test_maximum_bipartite_matching_raises_on_dense_input():
    with pytest.raises(TypeError):
        graph = np.array([[0, 1], [0, 0]])
        maximum_bipartite_matching(graph)


def test_maximum_bipartite_matching_empty_graph():
    graph = csr_matrix((0, 0))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    expected_matching = np.array([])
    assert_array_equal(expected_matching, x)
    assert_array_equal(expected_matching, y)


def test_maximum_bipartite_matching_empty_left_partition():
    graph = csr_matrix((2, 0))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    assert_array_equal(np.array([]), x)
    assert_array_equal(np.array([-1, -1]), y)


def test_maximum_bipartite_matching_empty_right_partition():
    graph = csr_matrix((0, 3))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    assert_array_equal(np.array([-1, -1, -1]), x)
    assert_array_equal(np.array([]), y)


def test_maximum_bipartite_matching_graph_with_no_edges():
    graph = csr_matrix((2, 2))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    assert_array_equal(np.array([-1, -1]), x)
    assert_array_equal(np.array([-1, -1]), y)


def test_maximum_bipartite_matching_graph_that_causes_augmentation():
    # In this graph, column 1 is initially assigned to row 1, but it should be
    # reassigned to make room for row 2.
    graph = csr_matrix([[1, 1], [1, 0]])
    x = maximum_bipartite_matching(graph, perm_type='column')
    y = maximum_bipartite_matching(graph, perm_type='row')
    expected_matching = np.array([1, 0])
    assert_array_equal(expected_matching, x)
    assert_array_equal(expected_matching, y)


def test_maximum_bipartite_matching_graph_with_more_rows_than_columns():
    graph = csr_matrix([[1, 1], [1, 0], [0, 1]])
    x = maximum_bipartite_matching(graph, perm_type='column')
    y = maximum_bipartite_matching(graph, perm_type='row')
    assert_array_equal(np.array([0, -1, 1]), x)
    assert_array_equal(np.array([0, 2]), y)


def test_maximum_bipartite_matching_graph_with_more_columns_than_rows():
    graph = csr_matrix([[1, 1, 0], [0, 0, 1]])
    x = maximum_bipartite_matching(graph, perm_type='column')
    y = maximum_bipartite_matching(graph, perm_type='row')
    assert_array_equal(np.array([0, 2]), x)
    assert_array_equal(np.array([0, -1, 1]), y)


def test_maximum_bipartite_matching_explicit_zeros_count_as_edges():
    data = [0, 0]
    indices = [1, 0]
    indptr = [0, 1, 2]
    graph = csr_matrix((data, indices, indptr), shape=(2, 2))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    expected_matching = np.array([1, 0])
    assert_array_equal(expected_matching, x)
    assert_array_equal(expected_matching, y)


def test_maximum_bipartite_matching_feasibility_of_result():
    # This is a regression test for GitHub issue #11458
    data = np.ones(50, dtype=int)
    indices = [11, 12, 19, 22, 23, 5, 22, 3, 8, 10, 5, 6, 11, 12, 13, 5, 13,
               14, 20, 22, 3, 15, 3, 13, 14, 11, 12, 19, 22, 23, 5, 22, 3, 8,
               10, 5, 6, 11, 12, 13, 5, 13, 14, 20, 22, 3, 15, 3, 13, 14]
    indptr = [0, 5, 7, 10, 10, 15, 20, 22, 22, 23, 25, 30, 32, 35, 35, 40, 45,
              47, 47, 48, 50]
    graph = csr_matrix((data, indices, indptr), shape=(20, 25))
    x = maximum_bipartite_matching(graph, perm_type='row')
    y = maximum_bipartite_matching(graph, perm_type='column')
    assert (x != -1).sum() == 13
    assert (y != -1).sum() == 13
    # Ensure that each element of the matching is in fact an edge in the graph.
    for u, v in zip(range(graph.shape[0]), y):
        if v != -1:
            assert graph[u, v]
    for u, v in zip(x, range(graph.shape[1])):
        if u != -1:
            assert graph[u, v]


def test_matching_large_random_graph_with_one_edge_incident_to_each_vertex():
    np.random.seed(42)
    A = diags(np.ones(25), offsets=0, format='csr')
    rand_perm = np.random.permutation(25)
    rand_perm2 = np.random.permutation(25)

    Rrow = np.arange(25)
    Rcol = rand_perm
    Rdata = np.ones(25, dtype=int)
    Rmat = coo_matrix((Rdata, (Rrow, Rcol))).tocsr()

    Crow = rand_perm2
    Ccol = np.arange(25)
    Cdata = np.ones(25, dtype=int)
    Cmat = coo_matrix((Cdata, (Crow, Ccol))).tocsr()
    # Randomly permute identity matrix
    B = Rmat * A * Cmat

    # Row permute
    perm = maximum_bipartite_matching(B, perm_type='row')
    Rrow = np.arange(25)
    Rcol = perm
    Rdata = np.ones(25, dtype=int)
    Rmat = coo_matrix((Rdata, (Rrow, Rcol))).tocsr()
    C1 = Rmat * B

    # Column permute
    perm2 = maximum_bipartite_matching(B, perm_type='column')
    Crow = perm2
    Ccol = np.arange(25)
    Cdata = np.ones(25, dtype=int)
    Cmat = coo_matrix((Cdata, (Crow, Ccol))).tocsr()
    C2 = B * Cmat

    # Should get identity matrix back
    assert_equal(any(C1.diagonal() == 0), False)
    assert_equal(any(C2.diagonal() == 0), False)


@pytest.mark.parametrize('num_rows,num_cols', [(0, 0), (2, 0), (0, 3)])
def test_min_weight_full_matching_trivial_graph(num_rows, num_cols):
    biadjacency_matrix = csr_matrix((num_cols, num_rows))
    row_ind, col_ind = min_weight_full_bipartite_matching(biadjacency_matrix)
    assert len(row_ind) == 0
    assert len(col_ind) == 0


@pytest.mark.parametrize('biadjacency_matrix',
                         [
                            [[1, 1, 1], [1, 0, 0], [1, 0, 0]],
                            [[1, 1, 1], [0, 0, 1], [0, 0, 1]],
                            [[1, 0, 0, 1], [1, 1, 0, 1], [0, 0, 0, 0]],
                            [[1, 0, 0], [2, 0, 0]],
                            [[0, 1, 0], [0, 2, 0]],
                            [[1, 0], [2, 0], [5, 0]]
                         ])
def test_min_weight_full_matching_infeasible_problems(biadjacency_matrix):
    with pytest.raises(ValueError):
        min_weight_full_bipartite_matching(csr_matrix(biadjacency_matrix))


def test_min_weight_full_matching_large_infeasible():
    # Regression test for GitHub issue #17269
    a = np.asarray([
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001],
        [0.0, 0.11687445, 0.0, 0.0, 0.01319788, 0.07509257, 0.0,
         0.0, 0.0, 0.74228317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.81087935, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.8408466, 0.0, 0.0, 0.0, 0.0, 0.01194389,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.82994211, 0.0, 0.0, 0.0, 0.11468516, 0.0, 0.0, 0.0,
         0.11173505, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.0, 0.0],
        [0.18796507, 0.0, 0.04002318, 0.0, 0.0, 0.0, 0.0, 0.0, 0.75883335,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.71545464, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02748488,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.78470564, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.14829198,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.10870609, 0.0, 0.0, 0.0, 0.8918677, 0.0, 0.0, 0.0, 0.06306644,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
         0.63844085, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7442354, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09850549, 0.0, 0.0, 0.18638258,
         0.2769244, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.73182464, 0.0, 0.0, 0.46443561,
         0.38589284, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        [0.29510278, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.09666032, 0.0,
         0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    ])
    with pytest.raises(ValueError, match='no full matching exists'):
        min_weight_full_bipartite_matching(csr_matrix(a))


def test_explicit_zero_causes_warning():
    with pytest.warns(UserWarning):
        biadjacency_matrix = csr_matrix(((2, 0, 3), (0, 1, 1), (0, 2, 3)))
        min_weight_full_bipartite_matching(biadjacency_matrix)


# General test for linear sum assignment solvers to make it possible to rely
# on the same tests for scipy.optimize.linear_sum_assignment.
def linear_sum_assignment_assertions(
    solver, array_type, sign, test_case
):
    cost_matrix, expected_cost = test_case
    maximize = sign == -1
    cost_matrix = sign * array_type(cost_matrix)
    expected_cost = sign * np.array(expected_cost)

    row_ind, col_ind = solver(cost_matrix, maximize=maximize)
    assert_array_equal(row_ind, np.sort(row_ind))
    assert_array_equal(expected_cost,
                       np.array(cost_matrix[row_ind, col_ind]).flatten())

    cost_matrix = cost_matrix.T
    row_ind, col_ind = solver(cost_matrix, maximize=maximize)
    assert_array_equal(row_ind, np.sort(row_ind))
    assert_array_equal(np.sort(expected_cost),
                       np.sort(np.array(
                           cost_matrix[row_ind, col_ind])).flatten())


linear_sum_assignment_test_cases = product(
    [-1, 1],
    [
        # Square
        ([[400, 150, 400],
          [400, 450, 600],
          [300, 225, 300]],
         [150, 400, 300]),

        # Rectangular variant
        ([[400, 150, 400, 1],
          [400, 450, 600, 2],
          [300, 225, 300, 3]],
         [150, 2, 300]),

        ([[10, 10, 8],
          [9, 8, 1],
          [9, 7, 4]],
         [10, 1, 7]),

        # Rectangular variant
        ([[10, 10, 8, 11],
          [9, 8, 1, 1],
          [9, 7, 4, 10]],
         [10, 1, 4]),

        # Square, with unreachable (infinite-cost) entries
        ([[10, float("inf"), float("inf")],
          [float("inf"), float("inf"), 1],
          [float("inf"), 7, float("inf")]],
         [10, 1, 7])
    ])


@pytest.mark.parametrize('sign,test_case', linear_sum_assignment_test_cases)
def test_min_weight_full_matching_small_inputs(sign, test_case):
    linear_sum_assignment_assertions(
        min_weight_full_bipartite_matching, csr_matrix, sign, test_case)
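

# Illustrative sketch (not part of the original tests): because the helper
# above depends only on the solver's call signature, the same assertions can
# also be run against the dense solver in scipy.optimize, e.g.:
def _dense_solver_example():
    from scipy.optimize import linear_sum_assignment
    case = ([[400, 150, 400],
             [400, 450, 600],
             [300, 225, 300]],
            [150, 400, 300])
    linear_sum_assignment_assertions(
        linear_sum_assignment, np.asarray, 1, case)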
@ -0,0 +1,149 @@
import pytest

import numpy as np
import scipy.sparse as sp
import scipy.sparse.csgraph as spgraph

from numpy.testing import assert_equal

try:
    import sparse
except Exception:
    sparse = None

pytestmark = pytest.mark.skipif(sparse is None,
                                reason="pydata/sparse not installed")


msg = "pydata/sparse (0.15.1) does not implement necessary operations"


sparse_params = (pytest.param("COO"),
                 pytest.param("DOK", marks=[pytest.mark.xfail(reason=msg)]))


@pytest.fixture(params=sparse_params)
def sparse_cls(request):
    return getattr(sparse, request.param)


@pytest.fixture
def graphs(sparse_cls):
    graph = [
        [0, 1, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0],
    ]
    A_dense = np.array(graph)
    A_sparse = sparse_cls(A_dense)
    return A_dense, A_sparse


@pytest.mark.parametrize(
    "func",
    [
        spgraph.shortest_path,
        spgraph.dijkstra,
        spgraph.floyd_warshall,
        spgraph.bellman_ford,
        spgraph.johnson,
        spgraph.reverse_cuthill_mckee,
        spgraph.maximum_bipartite_matching,
        spgraph.structural_rank,
    ]
)
def test_csgraph_equiv(func, graphs):
    A_dense, A_sparse = graphs
    actual = func(A_sparse)
    desired = func(sp.csc_matrix(A_dense))
    assert_equal(actual, desired)


def test_connected_components(graphs):
    A_dense, A_sparse = graphs
    func = spgraph.connected_components

    actual_comp, actual_labels = func(A_sparse)
    desired_comp, desired_labels = func(sp.csc_matrix(A_dense))

    assert actual_comp == desired_comp
    assert_equal(actual_labels, desired_labels)


def test_laplacian(graphs):
    A_dense, A_sparse = graphs
    sparse_cls = type(A_sparse)
    func = spgraph.laplacian

    actual = func(A_sparse)
    desired = func(sp.csc_matrix(A_dense))

    assert isinstance(actual, sparse_cls)

    assert_equal(actual.todense(), desired.todense())


@pytest.mark.parametrize(
    "func", [spgraph.breadth_first_order, spgraph.depth_first_order]
)
def test_order_search(graphs, func):
    A_dense, A_sparse = graphs

    actual = func(A_sparse, 0)
    desired = func(sp.csc_matrix(A_dense), 0)

    assert_equal(actual, desired)


@pytest.mark.parametrize(
    "func", [spgraph.breadth_first_tree, spgraph.depth_first_tree]
)
def test_tree_search(graphs, func):
    A_dense, A_sparse = graphs
    sparse_cls = type(A_sparse)

    actual = func(A_sparse, 0)
    desired = func(sp.csc_matrix(A_dense), 0)

    assert isinstance(actual, sparse_cls)

    assert_equal(actual.todense(), desired.todense())


def test_minimum_spanning_tree(graphs):
    A_dense, A_sparse = graphs
    sparse_cls = type(A_sparse)
    func = spgraph.minimum_spanning_tree

    actual = func(A_sparse)
    desired = func(sp.csc_matrix(A_dense))

    assert isinstance(actual, sparse_cls)

    assert_equal(actual.todense(), desired.todense())


def test_maximum_flow(graphs):
    A_dense, A_sparse = graphs
    sparse_cls = type(A_sparse)
    func = spgraph.maximum_flow

    actual = func(A_sparse, 0, 2)
    desired = func(sp.csr_matrix(A_dense), 0, 2)

    assert actual.flow_value == desired.flow_value
    assert isinstance(actual.flow, sparse_cls)

    assert_equal(actual.flow.todense(), desired.flow.todense())


def test_min_weight_full_bipartite_matching(graphs):
    A_dense, A_sparse = graphs
    func = spgraph.min_weight_full_bipartite_matching

    actual = func(A_sparse[0:2, 1:3])
    desired = func(sp.csc_matrix(A_dense)[0:2, 1:3])

    assert_equal(actual, desired)
@ -0,0 +1,70 @@
import numpy as np
from numpy.testing import assert_equal
from scipy.sparse.csgraph import reverse_cuthill_mckee, structural_rank
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix


def test_graph_reverse_cuthill_mckee():
    A = np.array([[1, 0, 0, 0, 1, 0, 0, 0],
                  [0, 1, 1, 0, 0, 1, 0, 1],
                  [0, 1, 1, 0, 1, 0, 0, 0],
                  [0, 0, 0, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0, 0, 1, 0],
                  [0, 1, 0, 0, 0, 1, 0, 1]], dtype=int)

    graph = csr_matrix(A)
    perm = reverse_cuthill_mckee(graph)
    correct_perm = np.array([6, 3, 7, 5, 1, 2, 4, 0])
    assert_equal(perm, correct_perm)

    # Test int64 indices input
    graph.indices = graph.indices.astype('int64')
    graph.indptr = graph.indptr.astype('int64')
    perm = reverse_cuthill_mckee(graph, True)
    assert_equal(perm, correct_perm)


def test_graph_reverse_cuthill_mckee_ordering():
    data = np.ones(63, dtype=int)
    rows = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2,
                     2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5,
                     6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9,
                     9, 10, 10, 10, 10, 10, 11, 11, 11, 11,
                     12, 12, 12, 13, 13, 13, 13, 14, 14, 14,
                     14, 15, 15, 15, 15, 15])
    cols = np.array([0, 2, 5, 8, 10, 1, 3, 9, 11, 0, 2,
                     7, 10, 1, 3, 11, 4, 6, 12, 14, 0, 7, 13,
                     15, 4, 6, 14, 2, 5, 7, 15, 0, 8, 10, 13,
                     1, 9, 11, 0, 2, 8, 10, 15, 1, 3, 9, 11,
                     4, 12, 14, 5, 8, 13, 15, 4, 6, 12, 14,
                     5, 7, 10, 13, 15])
    graph = coo_matrix((data, (rows, cols))).tocsr()
    perm = reverse_cuthill_mckee(graph)
    correct_perm = np.array([12, 14, 4, 6, 10, 8, 2, 15,
                             0, 13, 7, 5, 9, 11, 1, 3])
    assert_equal(perm, correct_perm)
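

# Illustrative sketch (not part of the original tests): the permutation
# returned by reverse_cuthill_mckee is typically applied symmetrically to
# relabel the nodes, which tends to reduce the matrix bandwidth.
def _rcm_usage_example():
    A = csr_matrix(np.array([[1, 0, 0, 1],
                             [0, 1, 1, 0],
                             [0, 1, 1, 0],
                             [1, 0, 0, 1]]))
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    A_reordered = A[perm][:, perm]  # same graph, nodes relabelled
    assert A_reordered.shape == A.shape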


def test_graph_structural_rank():
    # Test square matrix #1
    A = csc_matrix([[1, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]])
    assert_equal(structural_rank(A), 3)

    # Test square matrix #2
    rows = np.array([0, 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 5, 5,
                     6, 6, 7, 7])
    cols = np.array([0, 1, 2, 3, 4, 2, 5, 2, 6, 0, 1, 3, 5, 6, 7, 4, 5, 5, 6,
                     2, 6, 2, 4])
    data = np.ones_like(rows)
    B = coo_matrix((data, (rows, cols)), shape=(8, 8))
    assert_equal(structural_rank(B), 6)

    # Test non-square matrix
    C = csc_matrix([[1, 0, 2, 0],
                    [2, 0, 4, 0]])
    assert_equal(structural_rank(C), 2)

    # Test tall matrix
    assert_equal(structural_rank(C.T), 2)
@ -0,0 +1,454 @@
from io import StringIO
import warnings
import numpy as np
from numpy.testing import (assert_array_almost_equal, assert_array_equal,
                           assert_allclose)
from pytest import raises as assert_raises
from scipy.sparse.csgraph import (shortest_path, dijkstra, johnson,
                                  bellman_ford, construct_dist_matrix, yen,
                                  NegativeCycleError)
import scipy.sparse
from scipy.io import mmread
import pytest

directed_G = np.array([[0, 3, 3, 0, 0],
                       [0, 0, 0, 2, 4],
                       [0, 0, 0, 0, 0],
                       [1, 0, 0, 0, 0],
                       [2, 0, 0, 2, 0]], dtype=float)

undirected_G = np.array([[0, 3, 3, 1, 2],
                         [3, 0, 0, 2, 4],
                         [3, 0, 0, 0, 0],
                         [1, 2, 0, 0, 2],
                         [2, 4, 0, 2, 0]], dtype=float)

unweighted_G = (directed_G > 0).astype(float)

directed_SP = [[0, 3, 3, 5, 7],
               [3, 0, 6, 2, 4],
               [np.inf, np.inf, 0, np.inf, np.inf],
               [1, 4, 4, 0, 8],
               [2, 5, 5, 2, 0]]

directed_2SP_0_to_3 = [[-9999, 0, -9999, 1, -9999],
                       [-9999, 0, -9999, 4, 1]]

directed_sparse_zero_G = scipy.sparse.csr_matrix(([0, 1, 2, 3, 1],
                                                  ([0, 1, 2, 3, 4],
                                                   [1, 2, 0, 4, 3])),
                                                 shape=(5, 5))

directed_sparse_zero_SP = [[0, 0, 1, np.inf, np.inf],
                           [3, 0, 1, np.inf, np.inf],
                           [2, 2, 0, np.inf, np.inf],
                           [np.inf, np.inf, np.inf, 0, 3],
                           [np.inf, np.inf, np.inf, 1, 0]]

undirected_sparse_zero_G = scipy.sparse.csr_matrix(([0, 0, 1, 1, 2, 2, 1, 1],
                                                    ([0, 1, 1, 2, 2, 0, 3, 4],
                                                     [1, 0, 2, 1, 0, 2, 4, 3])),
                                                   shape=(5, 5))

undirected_sparse_zero_SP = [[0, 0, 1, np.inf, np.inf],
                             [0, 0, 1, np.inf, np.inf],
                             [1, 1, 0, np.inf, np.inf],
                             [np.inf, np.inf, np.inf, 0, 1],
                             [np.inf, np.inf, np.inf, 1, 0]]

directed_pred = np.array([[-9999, 0, 0, 1, 1],
                          [3, -9999, 0, 1, 1],
                          [-9999, -9999, -9999, -9999, -9999],
                          [3, 0, 0, -9999, 1],
                          [4, 0, 0, 4, -9999]], dtype=float)

undirected_SP = np.array([[0, 3, 3, 1, 2],
                          [3, 0, 6, 2, 4],
                          [3, 6, 0, 4, 5],
                          [1, 2, 4, 0, 2],
                          [2, 4, 5, 2, 0]], dtype=float)

undirected_SP_limit_2 = np.array([[0, np.inf, np.inf, 1, 2],
                                  [np.inf, 0, np.inf, 2, np.inf],
                                  [np.inf, np.inf, 0, np.inf, np.inf],
                                  [1, 2, np.inf, 0, 2],
                                  [2, np.inf, np.inf, 2, 0]], dtype=float)

undirected_SP_limit_0 = np.ones((5, 5), dtype=float) - np.eye(5)
undirected_SP_limit_0[undirected_SP_limit_0 > 0] = np.inf

undirected_pred = np.array([[-9999, 0, 0, 0, 0],
                            [1, -9999, 0, 1, 1],
                            [2, 0, -9999, 0, 0],
                            [3, 3, 0, -9999, 3],
                            [4, 4, 0, 4, -9999]], dtype=float)

directed_negative_weighted_G = np.array([[0, 0, 0],
                                         [-1, 0, 0],
                                         [0, -1, 0]], dtype=float)

directed_negative_weighted_SP = np.array([[0, np.inf, np.inf],
                                          [-1, 0, np.inf],
                                          [-2, -1, 0]], dtype=float)

methods = ['auto', 'FW', 'D', 'BF', 'J']
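# Note (added for clarity, not in the original file): these are the method
# codes accepted by shortest_path(method=...): 'FW' is Floyd-Warshall, 'D' is
# Dijkstra, 'BF' is Bellman-Ford, 'J' is Johnson, and 'auto' lets
# shortest_path pick an algorithm based on the input.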


def test_dijkstra_limit():
    limits = [0, 2, np.inf]
    results = [undirected_SP_limit_0,
               undirected_SP_limit_2,
               undirected_SP]

    def check(limit, result):
        SP = dijkstra(undirected_G, directed=False, limit=limit)
        assert_array_almost_equal(SP, result)

    for limit, result in zip(limits, results):
        check(limit, result)


def test_directed():
    def check(method):
        SP = shortest_path(directed_G, method=method, directed=True,
                           overwrite=False)
        assert_array_almost_equal(SP, directed_SP)

    for method in methods:
        check(method)


def test_undirected():
    def check(method, directed_in):
        if directed_in:
            SP1 = shortest_path(directed_G, method=method, directed=False,
                                overwrite=False)
            assert_array_almost_equal(SP1, undirected_SP)
        else:
            SP2 = shortest_path(undirected_G, method=method, directed=True,
                                overwrite=False)
            assert_array_almost_equal(SP2, undirected_SP)

    for method in methods:
        for directed_in in (True, False):
            check(method, directed_in)


def test_directed_sparse_zero():
    # test directed sparse graph with zero-weight edge and two connected
    # components
    def check(method):
        SP = shortest_path(directed_sparse_zero_G, method=method,
                           directed=True, overwrite=False)
        assert_array_almost_equal(SP, directed_sparse_zero_SP)

    for method in methods:
        check(method)


def test_undirected_sparse_zero():
    def check(method, directed_in):
        if directed_in:
            SP1 = shortest_path(directed_sparse_zero_G, method=method,
                                directed=False, overwrite=False)
            assert_array_almost_equal(SP1, undirected_sparse_zero_SP)
        else:
            SP2 = shortest_path(undirected_sparse_zero_G, method=method,
                                directed=True, overwrite=False)
            assert_array_almost_equal(SP2, undirected_sparse_zero_SP)

    for method in methods:
        for directed_in in (True, False):
            check(method, directed_in)


@pytest.mark.parametrize('directed, SP_ans',
                         ((True, directed_SP),
                          (False, undirected_SP)))
@pytest.mark.parametrize('indices', ([0, 2, 4], [0, 4], [3, 4], [0, 0]))
def test_dijkstra_indices_min_only(directed, SP_ans, indices):
    SP_ans = np.array(SP_ans)
    indices = np.array(indices, dtype=np.int64)
    min_ind_ans = indices[np.argmin(SP_ans[indices, :], axis=0)]
    min_d_ans = np.zeros(SP_ans.shape[0], SP_ans.dtype)
    for k in range(SP_ans.shape[0]):
        min_d_ans[k] = SP_ans[min_ind_ans[k], k]
    min_ind_ans[np.isinf(min_d_ans)] = -9999

    SP, pred, sources = dijkstra(directed_G,
                                 directed=directed,
                                 indices=indices,
                                 min_only=True,
                                 return_predecessors=True)
    assert_array_almost_equal(SP, min_d_ans)
    assert_array_equal(min_ind_ans, sources)
    SP = dijkstra(directed_G,
                  directed=directed,
                  indices=indices,
                  min_only=True,
                  return_predecessors=False)
    assert_array_almost_equal(SP, min_d_ans)
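

# Illustrative sketch (not part of the original tests): with min_only=True,
# dijkstra returns a single distance per node -- the distance from whichever
# of the given source indices is closest -- plus matching predecessor and
# source arrays.
def _dijkstra_min_only_example():
    dist, pred, sources = dijkstra(directed_G, indices=[0, 4], min_only=True,
                                   return_predecessors=True)
    assert dist.shape == (5,)  # one entry per node, not per source
    assert set(sources[sources >= 0]) <= {0, 4}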


@pytest.mark.parametrize('n', (10, 100, 1000))
def test_dijkstra_min_only_random(n):
    np.random.seed(1234)
    data = scipy.sparse.rand(n, n, density=0.5, format='lil',
                             random_state=42, dtype=np.float64)
    data.setdiag(np.zeros(n, dtype=np.bool_))
    # choose some random vertices
    v = np.arange(n)
    np.random.shuffle(v)
    indices = v[:int(n*.1)]
    ds, pred, sources = dijkstra(data,
                                 directed=True,
                                 indices=indices,
                                 min_only=True,
                                 return_predecessors=True)
    for k in range(n):
        p = pred[k]
        s = sources[k]
        while p != -9999:
            assert sources[p] == s
            p = pred[p]


def test_dijkstra_random():
    # reproduces the hang observed in gh-17782
    n = 10
    indices = [0, 4, 4, 5, 7, 9, 0, 6, 2, 3, 7, 9, 1, 2, 9, 2, 5, 6]
    indptr = [0, 0, 2, 5, 6, 7, 8, 12, 15, 18, 18]
    data = [0.33629, 0.40458, 0.47493, 0.42757, 0.11497, 0.91653, 0.69084,
            0.64979, 0.62555, 0.743, 0.01724, 0.99945, 0.31095, 0.15557,
            0.02439, 0.65814, 0.23478, 0.24072]
    graph = scipy.sparse.csr_matrix((data, indices, indptr), shape=(n, n))
    dijkstra(graph, directed=True, return_predecessors=True)


def test_gh_17782_segfault():
    text = """%%MatrixMarket matrix coordinate real general
                84 84 22
                2 1 4.699999809265137e+00
                6 14 1.199999973177910e-01
                9 6 1.199999973177910e-01
                10 16 2.012000083923340e+01
                11 10 1.422000026702881e+01
                12 1 9.645999908447266e+01
                13 18 2.012000083923340e+01
                14 13 4.679999828338623e+00
                15 11 1.199999973177910e-01
                16 12 1.199999973177910e-01
                18 15 1.199999973177910e-01
                32 2 2.299999952316284e+00
                33 20 6.000000000000000e+00
                33 32 5.000000000000000e+00
                36 9 3.720000028610229e+00
                36 37 3.720000028610229e+00
                36 38 3.720000028610229e+00
                37 44 8.159999847412109e+00
                38 32 7.903999328613281e+01
                43 20 2.400000000000000e+01
                43 33 4.000000000000000e+00
                44 43 6.028000259399414e+01
    """
    data = mmread(StringIO(text))
    dijkstra(data, directed=True, return_predecessors=True)


def test_shortest_path_indices():
    indices = np.arange(4)

    def check(func, indshape):
        outshape = indshape + (5,)
        SP = func(directed_G, directed=False,
                  indices=indices.reshape(indshape))
        assert_array_almost_equal(SP, undirected_SP[indices].reshape(outshape))

    for indshape in [(4,), (4, 1), (2, 2)]:
        for func in (dijkstra, bellman_ford, johnson, shortest_path):
            check(func, indshape)

    assert_raises(ValueError, shortest_path, directed_G, method='FW',
                  indices=indices)


def test_predecessors():
    SP_res = {True: directed_SP,
              False: undirected_SP}
    pred_res = {True: directed_pred,
                False: undirected_pred}

    def check(method, directed):
        SP, pred = shortest_path(directed_G, method, directed=directed,
                                 overwrite=False,
                                 return_predecessors=True)
        assert_array_almost_equal(SP, SP_res[directed])
        assert_array_almost_equal(pred, pred_res[directed])

    for method in methods:
        for directed in (True, False):
            check(method, directed)


def test_construct_shortest_path():
    def check(method, directed):
        SP1, pred = shortest_path(directed_G,
                                  directed=directed,
                                  overwrite=False,
                                  return_predecessors=True)
        SP2 = construct_dist_matrix(directed_G, pred, directed=directed)
        assert_array_almost_equal(SP1, SP2)

    for method in methods:
        for directed in (True, False):
            check(method, directed)


def test_unweighted_path():
    def check(method, directed):
        SP1 = shortest_path(directed_G,
                            directed=directed,
                            overwrite=False,
                            unweighted=True)
        SP2 = shortest_path(unweighted_G,
                            directed=directed,
                            overwrite=False,
                            unweighted=False)
        assert_array_almost_equal(SP1, SP2)

    for method in methods:
        for directed in (True, False):
            check(method, directed)


def test_negative_cycles():
    # create a small graph with a negative cycle
    graph = np.ones([5, 5])
    graph.flat[::6] = 0
    graph[1, 2] = -2

    def check(method, directed):
        assert_raises(NegativeCycleError, shortest_path, graph, method,
                      directed)

    for directed in (True, False):
        for method in ['FW', 'J', 'BF']:
            check(method, directed)

        assert_raises(NegativeCycleError, yen, graph, 0, 1, 1,
                      directed=directed)
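
# Note (added for clarity, not in the original tests): Dijkstra ('D') is not
# exercised above because Dijkstra's algorithm does not handle negative edge
# weights, so it cannot be expected to detect a negative cycle.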


@pytest.mark.parametrize("method", ['FW', 'J', 'BF'])
def test_negative_weights(method):
    SP = shortest_path(directed_negative_weighted_G, method, directed=True)
    assert_allclose(SP, directed_negative_weighted_SP, atol=1e-10)


def test_masked_input():
    G = np.ma.masked_equal(directed_G, 0)

    def check(method):
        SP = shortest_path(G, method=method, directed=True,
                           overwrite=False)
        assert_array_almost_equal(SP, directed_SP)

    for method in methods:
        check(method)


def test_overwrite():
    G = np.array([[0, 3, 3, 1, 2],
                  [3, 0, 0, 2, 4],
                  [3, 0, 0, 0, 0],
                  [1, 2, 0, 0, 2],
                  [2, 4, 0, 2, 0]], dtype=float)
    foo = G.copy()
    shortest_path(foo, overwrite=False)
    assert_array_equal(foo, G)


@pytest.mark.parametrize('method', methods)
def test_buffer(method):
    # Smoke test that sparse matrices with read-only buffers (e.g., those from
    # joblib workers) do not cause::
    #
    #     ValueError: buffer source array is read-only
    #
    G = scipy.sparse.csr_matrix([[1.]])
    G.data.flags['WRITEABLE'] = False
    shortest_path(G, method=method)


def test_NaN_warnings():
    with warnings.catch_warnings(record=True) as record:
        shortest_path(np.array([[0, 1], [np.nan, 0]]))
    for r in record:
        assert r.category is not RuntimeWarning


def test_sparse_matrices():
    # Test that lil, csr and csc sparse matrices do not cause errors
    G_dense = np.array([[0, 3, 0, 0, 0],
                        [0, 0, -1, 0, 0],
                        [0, 0, 0, 2, 0],
                        [0, 0, 0, 0, 4],
                        [0, 0, 0, 0, 0]], dtype=float)
    SP = shortest_path(G_dense)
    G_csr = scipy.sparse.csr_matrix(G_dense)
    G_csc = scipy.sparse.csc_matrix(G_dense)
    G_lil = scipy.sparse.lil_matrix(G_dense)
    assert_array_almost_equal(SP, shortest_path(G_csr))
    assert_array_almost_equal(SP, shortest_path(G_csc))
    assert_array_almost_equal(SP, shortest_path(G_lil))


def test_yen_directed():
    distances, predecessors = yen(
        directed_G,
        source=0,
        sink=3,
        K=2,
        return_predecessors=True
    )
    assert_allclose(distances, [5., 9.])
    assert_allclose(predecessors, directed_2SP_0_to_3)


def test_yen_undirected():
    distances = yen(
        undirected_G,
        source=0,
        sink=3,
        K=4,
    )
    assert_allclose(distances, [1., 4., 5., 8.])


def test_yen_unweighted():
    # Ask for more paths than exist; verify that only the available paths are
    # returned.
    distances, predecessors = yen(
        directed_G,
        source=0,
        sink=3,
        K=4,
        unweighted=True,
        return_predecessors=True,
    )
    assert_allclose(distances, [2., 3.])
    assert_allclose(predecessors, directed_2SP_0_to_3)


def test_yen_no_paths():
    distances = yen(
        directed_G,
        source=2,
        sink=3,
        K=1,
    )
    assert distances.size == 0


def test_yen_negative_weights():
    distances = yen(
        directed_negative_weighted_G,
        source=2,
        sink=0,
        K=1,
    )
    assert_allclose(distances, [-2.])
@ -0,0 +1,66 @@
"""Test the minimum spanning tree function"""
import numpy as np
from numpy.testing import assert_
import numpy.testing as npt
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree


def test_minimum_spanning_tree():

    # Create a graph with two connected components.
    graph = [[0, 1, 0, 0, 0],
             [1, 0, 0, 0, 0],
             [0, 0, 0, 8, 5],
             [0, 0, 8, 0, 1],
             [0, 0, 5, 1, 0]]
    graph = np.asarray(graph)

    # Create the expected spanning tree.
    expected = [[0, 1, 0, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 0, 0, 5],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0]]
    expected = np.asarray(expected)

    # Ensure minimum spanning tree code gives this expected output.
    csgraph = csr_matrix(graph)
    mintree = minimum_spanning_tree(csgraph)
    mintree_array = mintree.toarray()
    npt.assert_array_equal(mintree_array, expected,
                           'Incorrect spanning tree found.')

    # Ensure that the original graph was not modified.
    npt.assert_array_equal(csgraph.toarray(), graph,
                           'Original graph was modified.')

    # Now let the algorithm modify the csgraph in place.
    mintree = minimum_spanning_tree(csgraph, overwrite=True)
    npt.assert_array_equal(mintree.toarray(), expected,
                           'Graph was not properly modified to contain MST.')

    np.random.seed(1234)
    for N in (5, 10, 15, 20):

        # Create a random graph.
        graph = 3 + np.random.random((N, N))
        csgraph = csr_matrix(graph)

        # The spanning tree has at most N - 1 edges.
        mintree = minimum_spanning_tree(csgraph)
        assert_(mintree.nnz < N)

        # Set the sub diagonal to 1 to create a known spanning tree.
        idx = np.arange(N - 1)
        graph[idx, idx + 1] = 1
        csgraph = csr_matrix(graph)
        mintree = minimum_spanning_tree(csgraph)

        # We expect to see this pattern in the spanning tree and otherwise
        # have this zero.
        expected = np.zeros((N, N))
        expected[idx, idx + 1] = 1

        npt.assert_array_equal(mintree.toarray(), expected,
                               'Incorrect spanning tree found.')
@ -0,0 +1,81 @@
import numpy as np
import pytest
from numpy.testing import assert_array_almost_equal
from scipy.sparse import csr_array
from scipy.sparse.csgraph import (breadth_first_tree, depth_first_tree,
                                  csgraph_to_dense, csgraph_from_dense)


def test_graph_breadth_first():
    csgraph = np.array([[0, 1, 2, 0, 0],
                        [1, 0, 0, 0, 3],
                        [2, 0, 0, 7, 0],
                        [0, 0, 7, 0, 1],
                        [0, 3, 0, 1, 0]])
    csgraph = csgraph_from_dense(csgraph, null_value=0)

    bfirst = np.array([[0, 1, 2, 0, 0],
                       [0, 0, 0, 0, 3],
                       [0, 0, 0, 7, 0],
                       [0, 0, 0, 0, 0],
                       [0, 0, 0, 0, 0]])

    for directed in [True, False]:
        bfirst_test = breadth_first_tree(csgraph, 0, directed)
        assert_array_almost_equal(csgraph_to_dense(bfirst_test),
                                  bfirst)


def test_graph_depth_first():
    csgraph = np.array([[0, 1, 2, 0, 0],
                        [1, 0, 0, 0, 3],
                        [2, 0, 0, 7, 0],
                        [0, 0, 7, 0, 1],
                        [0, 3, 0, 1, 0]])
    csgraph = csgraph_from_dense(csgraph, null_value=0)

    dfirst = np.array([[0, 1, 0, 0, 0],
                       [0, 0, 0, 0, 3],
                       [0, 0, 0, 0, 0],
                       [0, 0, 7, 0, 0],
                       [0, 0, 0, 1, 0]])

    for directed in [True, False]:
        dfirst_test = depth_first_tree(csgraph, 0, directed)
        assert_array_almost_equal(csgraph_to_dense(dfirst_test),
                                  dfirst)
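
# Note (added for clarity, not in the original tests): for this graph the
# breadth-first tree from node 0 keeps both edges leaving node 0, while the
# depth-first tree follows the single chain 0 -> 1 -> 4 -> 3 -> 2, which is
# why the two expected trees above differ.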


def test_graph_breadth_first_trivial_graph():
    csgraph = np.array([[0]])
    csgraph = csgraph_from_dense(csgraph, null_value=0)

    bfirst = np.array([[0]])

    for directed in [True, False]:
        bfirst_test = breadth_first_tree(csgraph, 0, directed)
        assert_array_almost_equal(csgraph_to_dense(bfirst_test),
                                  bfirst)


def test_graph_depth_first_trivial_graph():
    csgraph = np.array([[0]])
    csgraph = csgraph_from_dense(csgraph, null_value=0)

    bfirst = np.array([[0]])

    for directed in [True, False]:
        bfirst_test = depth_first_tree(csgraph, 0, directed)
        assert_array_almost_equal(csgraph_to_dense(bfirst_test),
                                  bfirst)


@pytest.mark.parametrize('directed', [True, False])
@pytest.mark.parametrize('tree_func', [breadth_first_tree, depth_first_tree])
def test_int64_indices(tree_func, directed):
    # See https://github.com/scipy/scipy/issues/18716
    g = csr_array(([1], np.array([[0], [1]], dtype=np.int64)), shape=(2, 2))
    assert g.indices.dtype == np.int64
    tree = tree_func(g, 0, directed=directed)
    assert_array_almost_equal(csgraph_to_dense(tree), [[0, 1], [0, 0]])