GMN Overview
Network structure is a foundation of complex systems, as demonstrated in genomic (Turner 2014), metabolic (Jeong 2001), physiologic (Bashan 2012), social, and neural networks (Bae 2025, Assaf 2020). Networks often express low-dimensional structure within a high-dimensional system, consistent with the manifold hypothesis (Thibeault 2024), as evidenced in living systems (Eckmann 2021) and their neural processing (Fontenele 2024). The fact that fantastically complex structures such as mammalian brains express function and behavior not as single, ultra-high-dimensional objects, but as interacting networks, suggests that a computational architecture should encompass low-dimensional, multiscale, interacting networks.
Generative manifold networks (GMN) combine these essential features into a new architecture based on interacting dynamical manifolds. GMN networks are discovered through an interaction function between observables that defines an adjacency matrix, from which a network graph for a desired target observable is grown. The interaction function can be a measure of causality such as convergent cross mapping (CCM) (Sugihara 2012), mutual information, nonlinearity (Pao 2021, Sugihara 1994, Smith 2015), or another suitable interaction metric. Each node of the network is a multivariate state-space manifold, leveraging the power of generalized embedding (Deyle 2011). The architecture is therefore simple, low-dimensional, and observable.
GMN can be used to discover manifold networks underlying complex, nonlinear dynamical systems toward understanding and predicting their internal states and behaviors (Pao 2021).
Manifold Representation
Many state-space analytic methods presume that the state space can be represented as an invariant manifold completely encapsulating the system dynamics. Another common presumption is that a diffeomorphic representation of the true manifold can be derived by applying Takens embedding to a univariate time series observed from the system. In practice, univariate observations may not sufficiently encapsulate the underlying dynamics, an issue that is compounded as the dimensionality and complexity of the dynamics increase. Further, it is not always clear in multidimensional complex systems whether univariate observations are solely generated by the presumed dynamics; for example, noise and influences from other related or coupled dynamical systems may be present.
In cases where the dynamics arise from a complex interaction network, the realities of partial observation imply that univariate reconstructions from different time series of the system may not provide diffeomorphic representations of a single underlying manifold; rather, each may yield a manifold expressing different state-space structures. Instead, decomposing the system into an interacting network of manifolds may better represent the underlying dynamics.
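As a concrete illustration of the univariate time-delay (Takens) embedding discussed above, a minimal numpy sketch follows; the function name `delay_embed` and its parameters are illustrative and not part of the GMN package:

```python
import numpy as np

def delay_embed(x, E, tau=1):
    """Illustrative Takens time-delay embedding of a 1-D series x.

    Returns an (n, E) array whose row t is the E-dimensional state
    vector [x[t], x[t+tau], ..., x[t+(E-1)*tau]].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (E - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this E and tau")
    # Column j holds the series lagged by j*tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(E)])
```

Each row of the result is one point on the reconstructed manifold; the caveats above apply when such a univariate reconstruction is taken to represent the full system.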
Network Determination
Manifold network structure is discovered using an interaction matrix quantifying the mutual interaction between all observables.
Interaction Matrix
Given a data set with N feature vectors of length M, where M corresponds to the number of time series observations, the interaction matrix (iMatrix) is an NxN matrix with each entry quantified by application of an interaction function F() between each pair of feature vectors. A low-dimensional state-space manifold corresponding to each feature vector establishes a node in the GMN network.
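The iMatrix construction can be sketched in Python; the function `interaction_matrix`, the placeholder interaction function `abs_corr`, and their signatures are illustrative and not the package's implementation:

```python
import numpy as np

def interaction_matrix(data, F):
    """Build an N x N interaction matrix from N feature vectors.

    data : (N, M) array, one row per feature vector of M observations
    F    : callable F(x, y) -> float quantifying the interaction
    """
    N = data.shape[0]
    iMatrix = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:  # diagonal (self-interaction) left at 0
                iMatrix[i, j] = F(data[i], data[j])
    return iMatrix

# Example placeholder metric: absolute cross correlation
def abs_corr(x, y):
    return abs(np.corrcoef(x, y)[0, 1])
```

In GMN, F() would instead be one of the interaction metrics listed below (CCM, mutual information, nonlinearity, etc.).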
The application program `InteractionMatrix.py` in the `apps/` directory can create a variety of interaction matrices using different interaction functions F(). Results are stored in a Python pickled dictionary of pandas DataFrames or output as .csv files. See the docstring in `InteractionMatrix.py`.
Available methods:
Method | Label | CLI argument |
---|---|---|
All Metrics | | -a, --allMethods |
Cross Correlation | CC | -rho |
Simplex Cross Map | CM | -cmap |
Convergent Cross Map | CCM | -ccm |
rho Diff = max(CM, 0) - abs(CC) | rhoDiff | -rhoDiff |
Mutual Information | MI | -mi |
Mutual Information Non Linearity | MI_NL | -nl |
SMap nonlinearity | SMap | -smap |
CCM : Mutual Information | CMI | -cmi |
A natural choice for the interaction function F() is convergent cross mapping (CCM). The CCM result is the mean prediction skill (Pearson correlation) of an EDM Simplex predictor over an ensemble of predictions across a spectrum of randomly selected state-space library vectors at each library size. Specifically, predictions using small libraries with partial state-space information are compared to predictions using a large library with more complete state-space information. If the difference in prediction skill between the large and small library sizes exceeds a user-defined threshold, CCM is considered to have converged and the positive CCM value is recorded; otherwise, a value of 0 is recorded.
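The convergence rule just described reduces to a simple threshold test. The sketch below is illustrative: the function name, arguments, and default threshold are chosen for exposition, not taken from the package:

```python
def ccm_value(skill_small, skill_large, threshold=0.05):
    """Apply the CCM convergence rule described above.

    skill_small : mean cross-map skill at the small library size
    skill_large : mean cross-map skill at the large library size
    threshold   : user-defined convergence threshold (illustrative default)

    Returns the positive large-library skill if skill increased by more
    than the threshold (CCM converged), otherwise 0.
    """
    if (skill_large - skill_small) > threshold and skill_large > 0:
        return skill_large
    return 0.0
```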
Network Creation
Once an interaction matrix is defined, the application program `CreateNetwork.py` in the `apps/` directory can build the GMN network. The network is created as a `networkx` acyclic directed graph (DiGraph) and stored in a Python dictionary with keys `Graph`: the networkx DiGraph, and `Map`: a Python dictionary of node names. The output is stored as either a Python pickled object or a json file.
Nodes are added recursively starting at the output node(s), adding links according to the network interaction matrix until the limit of node drivers is reached, while disallowing the creation of network cycles.
The number of drivers is a network property and can be specified globally with the command line argument (`-d`, `--numDrivers`) to `CreateNetwork.py`. It can also be assigned to each node individually with the (`-df`, `--driversFile`) argument specifying a .csv or .feather file mapping node names to the number of drivers.
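The recursive growth with cycle avoidance can be sketched as follows, using plain Python dictionaries and an edge set in place of the networkx DiGraph actually used by `CreateNetwork.py`; all names and the ranking rule here are illustrative:

```python
def grow_network(iMatrix, names, outputs, numDrivers):
    """Illustrative recursive network growth from an interaction matrix.

    iMatrix    : dict of dicts, iMatrix[driver][node] = interaction strength
    names      : list of all node names
    outputs    : output node name(s) to start growth from
    numDrivers : global limit on drivers per node
    """
    edges   = set()   # (driver, node) pairs
    visited = set()   # nodes whose drivers have been assigned

    def creates_cycle(driver, node):
        # Edge driver -> node creates a cycle if node already reaches driver
        stack, seen = [node], set()
        while stack:
            n = stack.pop()
            if n == driver:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(t for s, t in edges if s == n)
        return False

    def add_drivers(node):
        if node in visited:
            return
        visited.add(node)
        # Rank candidate drivers of this node by interaction strength
        ranked = sorted((d for d in names if d != node),
                        key=lambda d: iMatrix[d][node], reverse=True)
        added = 0
        for d in ranked:
            if added == numDrivers:
                break
            if not creates_cycle(d, node):
                edges.add((d, node))
                added += 1
                add_drivers(d)   # recurse to assign this driver's drivers

    for out in outputs:
        add_drivers(out)
    return edges
```

The edge set returned here corresponds to the links of the acyclic DiGraph described above; candidate edges that would close a cycle are simply skipped.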
Generative Mode
Once a network is created, GMN can be run to generate time series at the output nodes. First, a `GMN` class object is created from the `gmn` package and initialized according to the configuration file.
```python
import gmn
G = gmn.GMN( configFile = 'config/default.cfg' )
```
The network can then be run in generative mode:
```python
G.Generate()
```