## ISE Final Exam: CAUSAL STRUCTURE OF NETWORKS OF STOCHASTIC PROCESSES

- **Event Type:** Other
- **Topics:** final exam, network, stochastic
- **Sponsor:** Industrial & Enterprise Systems Engineering
- **Location:** 141 Coordinated Science Lab
- **Date:** Apr 3, 2017, 10:00 am
- **Speaker:** Seyed Jalal Etesami, PhD Candidate, Industrial Engineering
- **Originating Calendar:** ISE Seminar Calendar
**Abstract**

We propose an approach to infer influences between agents in a network using only observed time series. The approach comprises graphical models that depict causal relationships in the network and algorithms that identify these graphs in different scenarios, including when only a subset of the agents is observable. We demonstrate the utility of the methods by identifying influences between markets.

We study the relationship between Directed Information Graphs (DIGs) and Linear Dynamical Graphs (LDGs), both graphical models in which nodes represent scalar random processes. DIGs are based on directed information and represent the causal dynamics between processes in a stochastic system; LDGs also capture causal dynamics, but only in linear dynamical systems, and Wiener filtering can identify a subset of them. This study shows that DIGs are a generalization of LDGs and that any strictly causal LDG can be reconstructed by learning the corresponding DIG.
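As a rough illustration of the linear special case, the sketch below tests for a causal edge by comparing one-step linear prediction errors with and without the candidate parent process; for jointly Gaussian linear processes this Granger-style log-variance ratio coincides with the directed information rate. The simulated two-process system and all coefficients here are hypothetical, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strictly causal linear system of two processes:
# x drives y with a one-step delay; y does not drive x.
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def residual_var(target, regressors):
    """Variance of the least-squares one-step prediction error of
    target[t] from the given processes' values at t-1."""
    X = np.column_stack(regressors)[:-1]   # regressor values at t-1
    z = target[1:]                         # target values at t
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.var(z - X @ coef)

# Does adding the other process shrink the prediction error?
# In the linear Gaussian case this ratio is the directed information rate.
gain_x_to_y = 0.5 * np.log(residual_var(y, [y]) / residual_var(y, [y, x]))
gain_y_to_x = 0.5 * np.log(residual_var(x, [x]) / residual_var(x, [x, y]))

print(gain_x_to_y, gain_y_to_x)  # clearly positive vs. near zero: edge x -> y only
```

Running this recovers the one-directional structure: the gain for x → y is substantial while the gain for y → x is close to zero.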

Another contribution is an approach for learning the causal interaction network of mutually exciting linear Hawkes processes. In such processes, the arrival of an event in one process increases the probability that new events occur in some of the other processes, so a natural notion of functional causality exists between the processes. We show that the causal interaction network learned from this definition is equivalent to the DIG of the processes. Furthermore, we present a non-parametric algorithm for learning the support of the excitation matrix (equivalently, the DIG). The performance of the algorithm is evaluated on a synthesized multivariate Hawkes network as well as a stock market dataset.
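To make the excitation mechanism concrete, the following sketch simulates a small Hawkes network via Ogata's thinning algorithm, assuming exponential kernels and a hypothetical excitation matrix with a single edge 0 → 1; it is an illustration of mutual excitation, not the thesis's learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.5, 0.2])        # baseline intensities
alpha = np.array([[0.0, 0.0],    # excitation matrix; its support is the causal graph
                  [0.8, 0.0]])   # only edge 0 -> 1: events in 0 excite process 1
beta = 1.0                       # exponential kernel decay rate
T_end = 2000.0

def intensity(t, events):
    """Conditional intensities at time t given past event times."""
    lam = mu.copy()
    for i in range(2):
        for j in range(2):
            if alpha[i, j] > 0.0:
                past = events[j][events[j] < t]
                lam[i] += alpha[i, j] * np.sum(np.exp(-beta * (t - past)))
    return lam

# Ogata's thinning: propose candidates at an upper-bound rate, accept
# with probability (true total intensity) / (upper bound).
events = [np.array([]), np.array([])]
t = 0.0
while True:
    lam_bar = intensity(t, events).sum() + alpha.sum()  # valid upper bound
    t += rng.exponential(1.0 / lam_bar)
    if t >= T_end:
        break
    lam = intensity(t, events)
    if rng.uniform() * lam_bar < lam.sum():
        i = 0 if rng.uniform() * lam.sum() < lam[0] else 1
        events[i] = np.append(events[i], t)

rate = [len(e) / T_end for e in events]
print(rate)  # process 1's rate well above its 0.2 baseline, due to excitation
```

With a stable excitation matrix (spectral radius of alpha/beta below 1), the stationary rates solve a linear fixed point; here process 1's empirical rate rises to roughly three times its baseline because of the 0 → 1 edge.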

Last, we propose an approach for learning latent directed polytrees whenever an appropriately defined discrepancy measure exists between the observed nodes. Specifically, we apply the approach to learning directed information polytrees when samples are available from only a subset of the processes. We prove that the approach consistently learns minimal latent directed trees, and we analyze the sample complexity of the learning task when the empirical estimator of mutual information is used as the discrepancy measure.
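To illustrate how an empirical mutual-information discrepancy can recover tree structure in the fully observed special case (the latent-node setting of the thesis is more involved), here is a Chow-Liu-style sketch: estimate pairwise mutual information from samples, then take a maximum-weight spanning tree. The chain-structured binary model below is a hypothetical example.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Samples from a hypothetical chain 0 -> 1 -> 2 -> 3 of binary variables:
# each node copies its parent, flipping with probability 0.1.
n = 5000
X = np.zeros((n, 4), dtype=int)
X[:, 0] = rng.integers(0, 2, n)
for k in range(1, 4):
    flip = rng.uniform(size=n) < 0.1
    X[:, k] = np.where(flip, 1 - X[:, k - 1], X[:, k - 1])

def empirical_mi(a, b):
    """Plug-in estimate of mutual information (in nats) for binary samples."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# Maximum-weight spanning tree (Kruskal with union-find) under the MI weights.
edges = sorted(((empirical_mi(X[:, i], X[:, j]), i, j)
                for i, j in combinations(range(4), 2)), reverse=True)
parent = list(range(4))
def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u
tree = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j))

print(sorted(tree))  # recovers the chain skeleton: [(0, 1), (1, 2), (2, 3)]
```

Because mutual information decays along the chain, adjacent pairs carry the largest weights and the spanning tree recovers the true skeleton; orienting edges and handling unobserved nodes is where the thesis's polytree machinery comes in.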