---
title: "Investigation on the Univariate Case"
output: pdf_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
## Stating the Problem
Let $y_{1:p}$ be realizations of random variables $Y_{1:p}$. Let $\mu_0,\mu_1,\ldots,\mu_k\in\mathbb{R}$ with $Y_{1:\tau_1}\sim_{iid}\mathcal{N}(\mu_0,\sigma^2)$, $Y_{\tau_1+1:\tau_2}\sim_{iid}\mathcal{N}(\mu_1,\sigma^2)$, $\ldots$, $Y_{\tau_k+1:p}\sim_{iid}\mathcal{N}(\mu_k,\sigma^2)$.
The problem is to reliably identify the parameters $k$, $\tau_{1:k}$ and $\mu_0,\mu_1,\ldots,\mu_k$, ideally with a quantification of uncertainty or some form of evaluation criterion.
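As a running illustration, here is a minimal simulation of this model; the values of $k$, $\tau_{1:k}$, $\mu_{0:k}$ and $\sigma$ below are arbitrary choices, not part of the problem statement.
```{r simulate-data}
set.seed(1)
p     <- 100            # number of observations
tau   <- c(30, 60)      # change point locations (k = 2)
mu    <- c(0, 2, -1)    # segment means mu_0, mu_1, mu_2
sigma <- 1              # common standard deviation

# assign each index its segment mean and add iid Gaussian noise
segment <- findInterval(seq_len(p), tau + 1) + 1
y <- rnorm(p, mean = mu[segment], sd = sigma)
plot(y, type = "l", main = "Simulated change point data")
```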
## Existing Frameworks
**Introduction to theoretical approaches:**
Niu, Y. S., Hao, N., & Zhang, H. (2016). Multiple change-point detection: a selective overview. *Statistical Science*, 611-623.
Approaches range from hypothesis testing to formulating the problem in a regression framework. However, developing sufficiently effective algorithms to apply the theory is considerably more complicated.
**Introduction to Algorithm Evaluation:**
van den Burg, G. J., & Williams, C. K. (2020). An evaluation of change point detection algorithms. *arXiv preprint arXiv:2003.06222.*
$\implies$ A variety of approaches exist, mainly based on sequential analysis (online) or maximum likelihood estimation.
Example:
(1) CUSUM (cumulative sum control chart) with
$S_0=0$ and $S_{n+1}=\max(0,S_n+y_{n+1}-w_n)$ for samples $y_{1:p}$ and assigned weights $w_n$; changes in the positive direction are detected through the choice of $w_n$ (see the sketch after this list).
Page, E. S. (1957). On problems in which a change in a parameter occurs at an unknown point. *Biometrika*, 44(1/2), 248-252.
(2) Extension of Page's idea in
Basseville, M., & Nikiforov, I. V. (1993). *Detection of abrupt changes: theory and application* (Vol. 104). Englewood Cliffs: Prentice Hall.
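A minimal sketch of the CUSUM recursion from (1), run on the simulated data from above; the constant reference weight `w` and the detection threshold `h` are arbitrary illustrative choices.
```{r cusum}
# one-sided CUSUM: returns the first n at which S_n exceeds the threshold h
cusum_detect <- function(y, w = 0.5, h = 5) {
  S <- 0
  for (n in seq_along(y)) {
    S <- max(0, S + y[n] - w)  # S_{n+1} = max(0, S_n + y_{n+1} - w_n)
    if (S > h) return(n)       # change in the positive direction detected
  }
  NA                           # no change detected
}
cusum_detect(y)
```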
## Univariate Case as a Regression Problem
Adapted from Niu, Hao & Zhang (2016):
Let $\beta_0=\mathbb{E}[Y_1]$ and $\beta_j=\mathbb{E}[Y_{j+1}]-\mathbb{E}[Y_{j}]$ for $j=1,...,p-1$.
Then $Y_i$ for $i=1,\ldots,p$ is given by
$$Y_i=\sum^{i-1}_{j=0}\beta_j + \epsilon_i$$
with $\epsilon_i\sim_{iid}\mathcal{N}(0,\sigma^2)$.
In matrix form this can be rewritten as
$$Y=X\beta+\epsilon$$
with $X$ being the $p\times p$ lower triangular matrix whose nonzero entries are all $1$.
The closed form solution for the given univariate change point problem is then given by $\beta_0=\mu_0$, $\beta_{\tau_1}=\mu_1-\mu_0$, $\ldots$, $\beta_{\tau_k}=\mu_k-\mu_{k-1}$ and $\beta_j=0$ otherwise.
However, as the parameters are unknown, a regularization approach is applied instead in order to remove all columns of the design matrix $X$ which do not correspond to a change point, by minimizing
$$||Y-X_s\hat{\beta}_s||+f(\lambda,s)$$
with $s=(s_0, s_1, \ldots, s_{p-1})\in\{0,1\}^p$, $s_0=1$ and $s_{\tau_i}=1$ for $i=1,\ldots,k$, $X_s=X\cdot\mathrm{diag}(s)$ (zeroing the columns with $s_j=0$), $\hat{\beta}_s$ the MLE given $X_s$, $||\cdot||$ some norm and $f(\lambda,s)$ some penalty.
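To make the regression formulation concrete, a short sketch constructing $X$ for the simulated data and checking that the cumulative sum of $\beta$ reproduces the mean vector:
```{r design-matrix}
# X: p x p lower triangular matrix of ones, so (X beta)_i = sum_{j <= i-1} beta_j
X <- matrix(0, p, p)
X[lower.tri(X, diag = TRUE)] <- 1

# closed form beta for the (noiseless) mean vector of the simulation above
mu_vec <- mu[segment]                       # E[Y_i]
beta   <- c(mu_vec[1], diff(mu_vec))        # beta_0, then successive differences
all.equal(as.numeric(X %*% beta), mu_vec)   # TRUE: mean structure is reproduced
```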
Previous work on this approach exists:
Utilizing LASSO regularization with an $l_1$-type penalty for variable selection.
Harchaoui, Z., & Lévy-Leduc, C. (2010). Multiple change-point estimation with a total variation penalty. *Journal of the American Statistical Association*, 105(492), 1480-1493.
And fused LASSO pattern recovery as a generalization of the problem.
Qian, J., & Jia, J. (2016). On stepwise pattern recovery of the fused lasso. *Computational Statistics & Data Analysis*, 94, 221-237.
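As a hedged sketch of this line of work: the total variation penalized problem of Harchaoui & Lévy-Leduc corresponds to the 1d fused LASSO, which can be solved for instance with the genlasso package (assumed installed; the chunk is not evaluated here, and the value of `lambda` is an ad hoc choice).
```{r fused-lasso, eval=FALSE}
library(genlasso)
fit  <- fusedlasso1d(y)             # full solution path over lambda
bhat <- coef(fit, lambda = 2)$beta  # piecewise constant mean estimate
tau_hat <- which(abs(diff(as.numeric(bhat))) > 1e-8)  # estimated change points
```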
---
Essentially the approach boils down to a set of questions (in no specific order):
(1) Which form of regularization performs best depending on the type of change point problem?
(2) Is an application of such an approach computationally feasible?
(3) What is the relationship of the (unknown) parameters $\sigma^2$, $k$ and a penalty parameter (like $\lambda$)?
While (3) depends on (1) in a way, (1) itself heavily depends on the exact formulation of the change point problem. For the most general case stated here, the $l_2$-norm is the obvious and likely the best choice. However, if we lose the normality assumption, a rank based norm will be more appropriate. If we extend to the multivariate case, we want to keep the $l_2$-norm but might want to utilize generalized distances $d_i$ instead of $Y_i$.
Additionally, $Y$ should be decomposed to remove trend, cycles and seasonality before being inspected for change points. Also, while not required, $Y$ can be centered at $Y_1$ in order to reduce $s$ by one entry, with $s_0=0$.
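A minimal sketch of such a preprocessing step using base R's `stl()`; the series `y_raw` and its frequency are hypothetical placeholders, so the chunk is not evaluated.
```{r detrend, eval=FALSE}
y_ts  <- ts(y_raw, frequency = 12)         # hypothetical monthly series
dec   <- stl(y_ts, s.window = "periodic")  # seasonal-trend decomposition
y_rem <- as.numeric(dec$time.series[, "remainder"])  # inspect this for change points
```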
### Obtaining the vector s with COMBSS
The algorithm for best subset selection introduced with COMBSS promises fast results for $s$. In a way, we answer (2) first by utilizing the strength of COMBSS, to then look into (1) and from that derive (3).
***Idea:***
Define a continuous cost function $c(\lambda,t)$ by adapting the norm to the given type of change point problem (univariate/multivariate, known/unknown variance, normal/non-normal) and adding a penalty based on the sparsity of $t\in [0,1]^p$, the non-binary relaxation of $s$.
*Example:* (multivariate case with known variance)
Let $d$ be the vector of generalized distances of $Y$ and $X$ the design matrix as described above.
$$c(\lambda,t)=||d-X_t\hat{\beta}_t||^2_2+\lambda\sum_{i=0}^{p-1}\lceil t_i\rceil$$
This is exactly question (3): the relationship between the penalty parameter $\lambda$ and the unknown $\sigma^2$ and $k$.
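A direct transcription of $c(\lambda,t)$ as stated above, using the univariate $y$ from the simulation as a stand-in for the distance vector $d$; note that $\sum_i\lceil t_i\rceil$ counts the nonzero entries of $t$, so this penalty is itself not continuous and a smooth surrogate would be needed for gradient based optimization.
```{r combss-cost}
# c(lambda, t) = ||d - X_t beta_hat_t||_2^2 + lambda * sum(ceiling(t_i))
combss_cost <- function(t, d, X, lambda) {
  Xt  <- X %*% diag(t)  # X_t: columns of X scaled by t
  fit <- lm.fit(Xt, d)  # beta_hat_t by (pivoted) least squares
  sum(fit$residuals^2) + lambda * sum(ceiling(t))
}
# evaluate at the binary t selecting only the true change point columns
t_true <- numeric(p)
t_true[c(1, tau + 1)] <- 1
combss_cost(t_true, d = y, X = X, lambda = 1)
```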