Conformalized Kernel Ridge Regression

Evgeny Burnaev (Skoltech, IITP RAS, Moscow, Russia; Email: [email protected])

Ivan Nazarov (IITP RAS, Moscow, Russia; Email: [email protected])

Abstract—General predictive models do not provide a measure of confidence in their predictions without Bayesian assumptions. A way to circumvent potential restrictions is to use conformal methods for constructing non-parametric confidence regions, which offer guarantees regarding validity. In this paper we provide a detailed description of a computationally efficient conformal procedure for Kernel Ridge Regression (KRR), and conduct a comparative numerical study to see how well conformal regions perform against the Bayesian confidence sets. The results suggest that conformalized KRR can yield predictive confidence regions with the specified coverage rate, which is essential in constructing anomaly detection systems based on predictive models.

Keywords: kernel ridge regression, Gaussian process regression, conformal prediction, confidence region.

I. INTRODUCTION

In many applied situations, like anomaly detection in the telemetry of some equipment, online filtering and monitoring of potentially interesting events, or power grid load balancing, it is necessary not only to make optimal predictions with respect to some loss, but also to be able to quantify the degree of confidence in the obtained forecasts. At the same time it is necessary to take into consideration exogenous variables that in certain ways affect the object of study.

The practical importance and difficulty of anomaly detection in general has spurred a great deal of research, which resulted in a large volume of heterogeneous approaches and methods for its solution [1], [2], [3]. There are many approaches: probabilistic methods, which rely on approximating the generative distribution of the observed data [4], [5]; metric-based anomaly detection, which measures similarity between normal and abnormal observations [6], [7], [8]; and predictive modelling approaches, which use the forecast error to measure abnormality [9], [10], [11], [12].

Predictive modelling is concerned with recovering an unobserved relation $x \mapsto f(x)$ from a sample $(x_i, y_i)_{i=1}^n$ of noisy observations
$$y_i = f(x_i) + \varepsilon_i, \qquad (1)$$
where the $\varepsilon_i$ are iid with zero mean. One way of assigning a confidence measure to a prediction $\hat f$ is to use assumptions on the geometric structure of the manifold approximating the sample data, which provide estimates of the likelihood of the prediction error for the test data [13], [14].

Another approach is to quantify the estimation, model, and observational uncertainty by imposing Bayesian assumptions on (1).

Consider Kernel Ridge Regression (KRR), a model that combines ridge regression with the kernel trick. Given a training sample $X = (x_i)_{i=1}^n$, $X \in \mathcal{X}^{n\times 1}$, it learns a function $\hat f$ that solves
$$\|y - f(X)\|^2 + \lambda \|f\|^2 \to \min_{f \in \mathcal{H}},$$
with $f(X) = (f(x_i))_{i=1}^n \in \mathbb{R}^{n\times 1}$ and $y = (y_i)_{i=1}^n \in \mathbb{R}^{n\times 1}$. Here $\mathcal{H}$ is the canonical Reproducing Kernel Hilbert Space associated with a Mercer-type kernel $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. The Representer theorem [15] states that the solution $\hat f : \mathcal{X} \to \mathbb{R}$ is of the form $\hat f(x) = k_X(x)'\beta$, with $k_X : \mathcal{X} \to \mathbb{R}^{n\times 1}$ given by the vector of canonical feature maps $k_X = (K(x_i, \cdot))_{i=1}^n \in \mathcal{H}^{n\times 1}$. If $e_j$ is the $j$-th unit vector in $\mathbb{R}^{n\times 1}$ and $K_{XX}$ is the $n\times n$ Gram matrix of $K$ over $(x_i)_{i=1}^n$, then $f(x_j) = k_X(x_j)'\beta = e_j' K_{XX}\beta$ and $\|f\|^2 = \beta' K_{XX}\beta$. Thus the kernel ridge regression problem is equivalent to the finite-dimensional convex minimization problem
$$\|y - K_{XX}\beta\|^2 + \lambda\, \beta' K_{XX}\beta \to \min_{\beta \in \mathbb{R}^{n\times 1}},$$

which yields the optimal weight vector $\hat\beta = (\lambda I_n + K_{XX})^{-1} y$ and the prediction $\hat y(x^*) = k_X(x^*)'\hat\beta$ at $x^* \in \mathcal{X}$.

Bayesian Kernel Ridge Regression views model (1) as a sample path of an underlying Gaussian Process [16], which makes predictive confidence intervals readily available. A Gaussian Process $(y_x)_{x\in\mathcal{X}}$ with mean $m : \mathcal{X}\to\mathbb{R}$ and covariance kernel $K : \mathcal{X}\times\mathcal{X}\to\mathbb{R}$ is a random process such that for any $n \ge 1$ and any $X = (x_i)_{i=1}^n \in \mathcal{X}^{n\times 1}$ the $n\times 1$ vector $y_X = (y(x_i))_{i=1}^n$ is Gaussian, $y_X \sim \mathcal{N}_n(m_X, K_{XX})$, where $m_X = (m(x_i))_{i=1}^n$. The conditional distribution of the targets in a test sample $y_{X^*} = (y_{x_j^*})_{j=1}^l$ given the train sample $y_X = (y_{x_i})_{i=1}^n$ is
$$y_{X^*}\mid y_X \sim \mathcal{N}_l\bigl(m_{X^*} + K_{X^*X} Q_X (y_X - m_X),\; \Sigma_K(X^*)\bigr), \qquad (2)$$
where $\Sigma_K(X^*) = K_{X^*X^*} - K_{X^*X} Q_X K_{XX^*}$, $Q_X = K_{XX}^{-1}$, and $K_{XX^*} = (K(x_i, x_j^*)) \in \mathbb{R}^{n\times l}$. Gaussian Process Regression, or Kriging, generalizes both linear and kernel regression and assumes linearity of the mean function with respect to $x$ and external factors $h$.
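For concreteness, the conditional distribution (2) amounts to a few lines of linear algebra. The sketch below is our own minimal NumPy rendering of it under the stated formulas (the function names and toy data are ours, not a library API), using a zero mean and the isotropic Gaussian kernel that appears later in sec. IV.

```python
import numpy as np

def gauss_kernel(A, B, theta=100.0):
    """Isotropic Gaussian kernel K(x, x') = exp(-theta * ||x - x'||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def gp_conditional(X, y_X, X_star, theta=100.0):
    """Conditional mean and covariance of y_{X*} | y_X for a zero-mean GP, eq. (2)."""
    K_XX = gauss_kernel(X, X, theta)
    K_sX = gauss_kernel(X_star, X, theta)
    K_ss = gauss_kernel(X_star, X_star, theta)
    # Q_X = K_XX^{-1}: solve linear systems instead of forming the inverse.
    mean = K_sX @ np.linalg.solve(K_XX, y_X)
    cov = K_ss - K_sX @ np.linalg.solve(K_XX, K_sX.T)
    return mean, cov

# toy usage: condition on five noise-free observations of sin(5x) on [0, 1]
X = np.linspace(0.0, 1.0, 5)[:, None]
X_star = np.linspace(0.0, 1.0, 11)[:, None]
mean, cov = gp_conditional(X, np.sin(5.0 * X).ravel(), X_star)
```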

Bayesian KRR assumes a prior on functions $f \sim GP(0, \sigma^2 K)$ and independent Gaussian white noise $\varepsilon_x \sim \mathcal{N}(0, \sigma^2\lambda)$ in model (1), for $\sigma^2 > 0$. In this setting (2) implies that the distribution of a yet unobserved target $y_{x^*}$ at $x^* \in \mathcal{X}$, conditional on the train data $(X, y_X)$, is
$$y_{x^*}\mid y_X \sim \mathcal{N}\bigl(\hat y_{y_X}(x^*),\; \sigma^2 \sigma_K^2(x^*)\bigr), \qquad (3)$$
with $\hat y_{y_X}(x^*) = k_X(x^*)' Q_X y_X$ and
$$\sigma_K^2(x^*) = \lambda + K(x^*, x^*) - k_X(x^*)' Q_X k_X(x^*),$$
where $K_{XX} = (K(x_i, x_j))_{ij}$, $Q_X = (\lambda I_n + K_{XX})^{-1}$, and $k_X = (K(x_i, \cdot))_{i=1}^n$. Thus the $1-\alpha$ confidence interval is given by
$$\Gamma^\alpha_{y_X}(x^*) = \hat y_{y_X}(x^*) + \sigma\sqrt{\sigma_K^2(x^*)} \times [z_{\alpha/2},\, z_{1-\alpha/2}], \qquad (4)$$

where $z_\gamma$ is the $\gamma$ quantile of $\mathcal{N}(0, 1)$. Additionally, this version naturally permits estimation of the parameters of the underlying kernel $K$ through maximization of the joint likelihood of the train data $(X, y_X)$:
$$L = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|R_X| - \frac{1}{2\sigma^2}\, y' R_X^{-1} y, \qquad (5)$$
where $R_X = \lambda I_n + K_{XX}$, and $K_{XX}$ depends on the hyper-parameters of $K$ (shape, precision, etc.). Other approaches to estimating the covariance function's hyper-parameters are reported in [17], properties of the posterior parameter distribution in Bayesian KRR are studied in [18], methods for estimating Gaussian Process Regression on large structured datasets are considered in [19], [20], and the problem of estimation in the non-stationary case with regularization is considered in [21].

It is desirable to have a distribution-free method that measures the confidence of the predictions of a machine learning algorithm. One such method is "conformal prediction", an approach developed in [22], which under standard independence assumptions yields a set in the space of targets that contains yet unobserved data with a pre-specified probability. In this study we provide empirical evidence supporting the claim that, when the model assumptions do hold, the conformal confidence sets constructed over Kernel Ridge Regression with the isotropic Gaussian kernel perform no worse than the predictive confidence intervals of a Bayesian version of the KRR.

The paper is structured as follows: section II gives a concise overview of conformal prediction and of what is required to construct such a confidence predictor. Section III describes the particular steps needed to build a conformal predictor atop kernel ridge regression. The main empirical study is reported in section IV, where we study the properties of the predictor in a batch learning setting for a KRR with a specific kernel.
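To summarize the Bayesian baseline in code, the sketch below assembles the predictive interval (4) and the log-likelihood (5) for a fixed Gaussian kernel. It is a minimal illustration under the formulas above, with our own function names; in the experiments of sec. IV the hyper-parameters $\sigma^2$ and, optionally, $\theta$ are obtained by maximizing (5).

```python
import numpy as np
from scipy.stats import norm

def gauss_kernel(A, B, theta=100.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def bayes_krr_interval(X, y, X_star, lam=0.1, theta=100.0, sigma2=1.0, alpha=0.05):
    """1 - alpha predictive intervals of Bayesian KRR, eqs. (3)-(4)."""
    n = len(X)
    R = lam * np.eye(n) + gauss_kernel(X, X, theta)   # R_X = lambda*I_n + K_XX
    k_star = gauss_kernel(X, X_star, theta)           # columns are k_X(x*)
    y_hat = k_star.T @ np.linalg.solve(R, y)          # predictive mean of (3)
    # sigma_K^2(x*) = lambda + K(x*, x*) - k_X(x*)' Q_X k_X(x*); K(x, x) = 1 here
    s2 = lam + 1.0 - np.sum(k_star * np.linalg.solve(R, k_star), axis=0)
    half = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(sigma2 * s2)
    return y_hat - half, y_hat + half                 # the interval (4)

def log_likelihood(X, y, lam, theta, sigma2):
    """Joint log-likelihood (5) of the train targets; maximize over hyper-parameters."""
    n = len(X)
    R = lam * np.eye(n) + gauss_kernel(X, X, theta)
    _, logdet = np.linalg.slogdet(R)
    quad = y @ np.linalg.solve(R, y)
    return -0.5 * (n * np.log(2.0 * np.pi) + n * np.log(sigma2) + logdet + quad / sigma2)
```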

II. CONFORMAL PREDICTION

Conformal prediction is a distribution-free technique designed to yield statistically valid confidence sets for predictions made by machine learning algorithms. The key advantage of the method is that it offers coverage probability guarantees under standard iid assumptions, even in cases when the assumptions of the underlying prediction algorithm fail to be satisfied. The method was introduced in [22] for online supervised and unsupervised learning.

Let $\mathcal{Z}$ denote the object-target space $\mathcal{X} \times \mathcal{Y}$. At the core of a conformal predictor is a measurable map $A : \mathcal{Z}^* \times \mathcal{Z} \to \mathbb{R}$, a Non-Conformity Measure (NCM), which quantifies how different $z_{n+1} \in \mathcal{Z}$ is relative to a sample $Z_{:n} = (z_i)_{i=1}^n \in \mathcal{Z}^n$. A conformal predictor over $A$ is a procedure which, for every sample $Z_{:n}$, test object $x_{n+1} \in \mathcal{X}$, and level $\alpha \in (0, 1)$, produces a confidence set $\Gamma^\alpha_{Z_{:n}}(x_{n+1})$ for the target value $y_{n+1}$:
$$\Gamma^\alpha_{Z_{:n}}(x_{n+1}) = \bigl\{ y \in \mathcal{Y} : p_{Z_{:n}}(\tilde z^y_{n+1}) \ge \alpha \bigr\}, \qquad (6)$$
where $\tilde z^y_{n+1} = (x_{n+1}, y)$ is a synthetic test observation with target label $y$. The function $p : \mathcal{Z}^* \times (\mathcal{X}\times\mathcal{Y}) \to [0, 1]$ measures the likelihood of $\tilde z$ based on its non-conformity with $Z_{:n}$, and is given by
$$p_{Z_{:n}}(\tilde z) = (n+1)^{-1}\bigl|\{ i : \eta^{\tilde z}_i \ge \eta^{\tilde z}_{n+1} \}\bigr|, \qquad (7)$$
where $i = 1, \ldots, n+1$ and $\eta^{\tilde z}_i = A(S^{\tilde z}_{-i}, S^{\tilde z}_i)$ is the non-conformity of the $i$-th observation with respect to the augmented sample $S^{\tilde z} = (Z_{:n}, \tilde z^y_{n+1}) \in \mathcal{Z}^{n+1}$. For any $i$, $S^{\tilde z}_i$ is the $i$-th element of the sample and $S^{\tilde z}_{-i}$ is the sample with the $i$-th observation omitted.

For every possible value $z$ of the object $Z_{n+1}$ the conformal procedure tests $H_0 : Z_{n+1} = z$ and then inverts the test to obtain a confidence region. The hypothesis tests are designed to have a fixed empirical type-I error rate $\alpha$ based on the observed sample $Z_{:n}$ and the hypothesized $z$. In [22], chapter 2, it is shown that for sequences of iid examples $(z_n)_{n\ge 1} \sim P$ the coverage probability of the prediction set (6) is at least $1-\alpha$, and that successive errors are independent in the online learning and prediction setting. The procedure guarantees unconditional validity: for any $\alpha > 0$
$$P_{Z_{:n}\sim P}\bigl( y_n \notin \Gamma^\alpha_{Z_{:(n-1)}}(x_n) \bigr) \le \alpha, \qquad (8)$$

where $(x_n, y_n) = z_n$. Intuitively, the event $y_n \notin \Gamma^\alpha_{Z_{:(n-1)}}(x_n)$ is equivalent to $\eta_n = A(Z_{-n}, Z_n)$ being among the largest $\lfloor n\alpha\rfloor$ values of $\eta_i = A(Z_{-i}, Z_i)$, an event of probability $n^{-1}\lfloor n\alpha\rfloor$ due to the independence of $Z_{:n}$ (for a rigorous proof see [22], ch. 8).

The choice of NCM affects the size of the confidence sets and the computational burden of the conformal procedure. In the general case computing (6) requires an exhaustive search through the target space $\mathcal{Y}$, which is infeasible in a general regression setting. However, for specific non-conformity measures it is possible to come up with efficient procedures for computing the confidence region, as demonstrated in [22] and sec. III of this paper.
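The definitions (6)-(7) translate directly into a brute-force procedure: for every candidate label on a grid, recompute all non-conformity scores on the augmented sample and keep the candidates whose p-value is at least $\alpha$. The sketch below is our own illustration (the toy NCM is arbitrary); sec. III replaces this exhaustive search with an exact construction for KRR residuals.

```python
import numpy as np

def conformal_set(A, Z, x_new, y_grid, alpha=0.05):
    """Brute-force conformal prediction set (6) for a non-conformity measure A.

    A(sample_without_i, z_i) -> float, Z is a list of (x, y) pairs,
    y_grid is a grid of candidate labels for the new object x_new.
    """
    kept = []
    for y in y_grid:
        S = Z + [(x_new, y)]                      # augmented sample
        eta = [A(S[:i] + S[i + 1:], S[i]) for i in range(len(S))]
        p = np.mean([e >= eta[-1] for e in eta])  # p-value (7)
        if p >= alpha:
            kept.append(y)
    return kept

# toy NCM: distance of the target to the mean of the other targets
def A(rest, z):
    return abs(z[1] - np.mean([y for _, y in rest]))

Z = [(0.1, 1.0), (0.2, 1.1), (0.3, 0.9), (0.4, 1.2)]
region = conformal_set(A, Z, x_new=0.5, y_grid=np.linspace(-1, 3, 401), alpha=0.25)
```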

III. CONFORMALIZED KERNEL RIDGE REGRESSION

In this section we describe the construction of the confidence regions of the conformal procedure (6) for non-conformity measures based on kernel ridge regression. We consider two NCMs defined in terms of regression residuals: the one used in constructing the "Ridge Regression Confidence Machine", proposed in chapter 2 of [22], and a "two-sided" NCM, proposed in [23].

A. Residuals

In each NCM it is possible to use any kind of prediction error, but we focus on the in-sample and the Leave-One-Out, or deleted (LOO), residuals. Consider a sample $(X, y) = (x_i, y_{x_i})_{i=1}^n$, and for any $i = 1, \ldots, n$ write $X = (X_{-i}, x_i)$ and $y = (y_{-i}, y_i)$. The in-sample residuals $\hat r_{in}(X, y)$ are defined for each $i$ as
$$e_i'\hat r_{in}(X, y) = y_i - \hat y_{|(X,y)}(x_i), \qquad (9)$$
and the LOO residuals $\hat r_{loo}(X, y)$ are given by
$$e_i'\hat r_{loo}(X, y) = y_i - \hat y_{|(X_{-i},y_{-i})}(x_i), \qquad (10)$$
where $\hat y_{|(X,y)}$ and $\hat y_{|(X_{-i},y_{-i})}$ denote the predictions of a KRR fit on the whole sample $(X, y)$ and on the sample $(X_{-i}, y_{-i})$ with the $i$-th observation knocked out, respectively. For any $i$ the residuals are related by
$$e_i'\hat r_{in}(X, y) = \lambda m_i^{-1}\, e_i'\hat r_{loo}(X, y), \qquad (11)$$
where $\lambda m_i^{-1} = \lambda e_i' Q_X e_i$ is the KRR "leverage" score of the $i$-th observation, and
$$m_i = \lambda + K(x_i, x_i) - k_{-i}(x_i)' Q_{-i}\, k_{-i}(x_i), \qquad (12)$$
with $k_{-i}(\cdot)$ the $(n-1)\times 1$ vector $(K(x_j, \cdot))_{j\ne i}$, $Q_{-i} = (K_{-i} + \lambda I_{n-1})^{-1}$, and $K_{-i}$ the Gram matrix of the kernel $K$ over the subsample $X_{-i}$.
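The relation (11) is easy to verify numerically: the in-sample residual vector of KRR equals $\lambda Q_X y$ and $m_i^{-1} = e_i' Q_X e_i$, so refitting with the $i$-th point removed must reproduce the in-sample residual after scaling by the leverage. A small self-contained check (our own code, 1-d inputs for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, theta = 20, 0.1, 100.0

X = rng.uniform(0.0, 1.0, size=(n, 1))
y = np.sin(5 * X).ravel() + 0.05 * rng.standard_normal(n)

K = np.exp(-theta * (X - X.T) ** 2)          # Gram matrix of the Gaussian kernel
Q = np.linalg.inv(lam * np.eye(n) + K)       # Q_X = (lambda*I + K_XX)^{-1}

r_in = lam * Q @ y                           # in-sample residuals: y - K Q y = lam * Q y

# deleted (LOO) residuals by explicit refitting, eq. (10)
r_loo = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    Q_i = np.linalg.inv(lam * np.eye(n - 1) + K[np.ix_(keep, keep)])
    r_loo[i] = y[i] - K[i, keep] @ Q_i @ y[keep]

# eq. (11): r_in[i] = lam * Q[i, i] * r_loo[i], since m_i^{-1} = e_i' Q_X e_i
assert np.allclose(r_in, lam * np.diag(Q) * r_loo)
```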

B. Ridge Regression Confidence Machine

In this section we describe a conformal procedure for the NCM proposed in [22], chapter 2, and focus on its in-sample version, bearing in mind that the residuals (9) and (10) are interchangeable. The Ridge Regression Confidence Machine (RRCM) constructs a non-conformity measure from the absolute value of the regression residual. The in-sample NCM, $A_{in}$, is given by
$$A_{in}\bigl((X_{-i}, y_{-i}), (x_i, y_i)\bigr) = |e_i'\hat r_{in}(X, y)|, \qquad (13)$$
and the LOO NCM, $A_{loo}$, is defined similarly using (10). For an NCM $A$ the $1-\alpha$ conformal confidence interval for the $n$-th observation is given by
$$\Gamma^\alpha_{X_{-n},y_{-n}}(x_n) = \bigl\{ z \in \mathbb{R} : p_n(X, \tilde y^z_n) \ge \alpha \bigr\}, \qquad (14)$$
where $\tilde y^z_i = (y_{-i}, z)$ is the target sample $y$ with the $i$-th value replaced by $z$. The "conformal likelihood" of the $i$-th observation in a sample $(X, y)$ is given by
$$p_i(X, y) = n^{-1}\bigl|\{ j = 1, \ldots, n : \eta_j \ge \eta_i \}\bigr|, \quad \text{for } \eta_i = A\bigl((X_{-i}, y_{-i}), (x_i, y_i)\bigr).$$
Efficient construction of the confidence set for the NCM (13) with in-sample (and deleted) residuals relies on the linear dependence of the residuals on the target of the $n$-th observation:
$$\hat r^z_i = e_i'\hat r_{in}(X, \tilde y^z_n) = \lambda c_i + \lambda b_i z, \qquad (15)$$
with $c_i = e_i' C$, where $C = C_{-n}\bigl((X, y), x_n\bigr) \in \mathbb{R}^{n\times 1}$ is given by
$$C = \begin{pmatrix} Q_{-n} y_{-n} \\ 0 \end{pmatrix} - B_{-n}(x_n)\, K_{-n}(x_n)' Q_{-n} y_{-n},$$
where the last component $0$ is a scalar, and the vector $B = B_{-n}(x_n) \in \mathbb{R}^{n\times 1}$ is
$$B_{-n}(x_n) = m_n^{-1} \begin{pmatrix} -Q_{-n} K_{-n}(x_n) \\ 1 \end{pmatrix}. \qquad (16)$$

Since absolute values of the residuals are compared, it is possible to consistently change the signs of the elements of $C$ and $B$ so that $e_i' B \ge 0$ for all $i$. The conformal p-value (7) for $(x_n, y)$ can be re-defined in terms of the regions $S_i = \{ z \in \mathbb{R} : |\hat r^z_i| \ge |\hat r^z_n| \}$, $i = 1, \ldots, n$:
$$p_{X_{-n},y_{-n}}(x_n, y) = n^{-1}\bigl|\{ i : y \in S_i \}\bigr|. \qquad (17)$$
These regions are closed intervals, complements of open intervals, or one-side closed half-rays in $\mathbb{R}$, depending on the values of $C$ and $B$. In particular, with $p_i$ and $q_i$ denoting $-\frac{c_i + c_n}{b_i + b_n}$ and $\frac{c_n - c_i}{b_i - b_n}$, respectively (whenever each is defined), each region $S_i$ has one of the following representations:
1) $b_i = b_n = 0$: $S_i = \mathbb{R}$ if $|c_i| \ge |c_n|$, or $S_i = \emptyset$ otherwise;
2) $b_n = b_i > 0$: $S_i$ is either $(-\infty, p_i]$ if $c_i < c_n$, $[p_i, +\infty)$ if $c_i > c_n$, or $\mathbb{R}$ otherwise;
3) $b_n > b_i \ge 0$: $S_i$ is either $[p_i, q_i]$ if $c_i b_n \ge c_n b_i$, or $[q_i, p_i]$ otherwise;
4) $b_i > b_n \ge 0$: $S_i$ is $\mathbb{R} \setminus (q_i, p_i)$ when $c_i b_n \ge c_n b_i$, or $\mathbb{R} \setminus (p_i, q_i)$ otherwise.
Let $P$ and $Q$ be the sets of all well-defined $p_i$ and $q_i$, respectively, and let $(g_j)_{j=0}^{J+1}$ enumerate the distinct values of $\{\pm\infty\} \cup P \cup Q$, so that $g_j < g_{j+1}$ for all $j$. Then the confidence region is a closed subset of $\mathbb{R}$ constructed from the sets $G^m_j = [g_j, g_{j+m}] \cap \mathbb{R}$ for $m = 0, 1$:
$$\Gamma^\alpha_{X_{-n},y_{-n}}(x_n) = \bigcup_{m \in \{0,1\}}\; \bigcup_{j :\, N^m_j \ge n\alpha} G^m_j, \qquad (18)$$
where $N^m_j = |\{ i : G^m_j \subseteq S_i \}|$ is the coverage frequency of $G^m_j$ by the regions $S_i$ from (17). In general, the resulting confidence set might contain isolated singletons $G^0_j$.

This set can be constructed efficiently in $O(n\log n)$ time with an $O(n)$ memory footprint. Indeed, it is necessary to sort at most $J \le 2n$ distinct endpoints of the $G_j$ and then locate the values $p_i$ and $q_i$ associated with each region $S_i$ ($O(n\log n)$). Then, since the building blocks $G^m_j$ of $\Gamma^\alpha$ are either singletons ($m = 0$) or intervals made up of adjacent singletons ($m = 1$), the coverage numbers $N^m_j$ can be computed in at most $O(n)$ time.
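The construction above is compact enough to state in full. The sketch below is our own reference implementation under the stated formulas (not the authors' code): it computes $B = Q_X e_n$ and $C = Q_X (y_{-n}', 0)'$, which coincide with the vectors in (15)-(16), probes every elementary block $G^m_j$ delimited by the candidate endpoints, and returns the accepted blocks whose union is (18). It is written for clarity in $O(n^2)$; sorting the endpoints once, as described above, gives the $O(n\log n)$ version.

```python
import numpy as np

def rrcm_region(X, y_minus_n, x_new, lam=0.1, theta=100.0, alpha=0.05):
    """RRCM conformal region (18) for the target at x_new, given n-1 labelled points.

    Returns a list of closed pieces [lo, hi] (possibly degenerate or half-infinite)
    whose union is the confidence set.
    """
    Z = np.vstack([np.atleast_2d(X), np.atleast_2d(x_new)])
    n = len(Z)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    Q = np.linalg.inv(lam * np.eye(n) + np.exp(-theta * d2))

    # augmented in-sample residuals are linear in the candidate target z, eq. (15):
    # r_i(z) = lam * (c_i + b_i * z), with B = Q_X e_n and C = Q_X (y_{-n}, 0)'.
    b = Q[:, -1]
    c = Q @ np.append(y_minus_n, 0.0)

    def n_covering(z):
        """Number of regions S_i = {z : |c_i + b_i z| >= |c_n + b_n z|} containing z."""
        r = np.abs(c + b * z)
        return int(np.sum(r >= r[-1]))

    # candidate endpoints: solutions of |c_i + b_i z| = |c_n + b_n z| (the p_i and q_i)
    ends = []
    for ci, bi in zip(c[:-1], b[:-1]):
        if abs(bi + b[-1]) > 1e-12:
            ends.append(-(ci + c[-1]) / (bi + b[-1]))
        if abs(bi - b[-1]) > 1e-12:
            ends.append((c[-1] - ci) / (bi - b[-1]))
    g = np.unique(ends)
    if len(g) == 0:
        return [[-np.inf, np.inf]] if n_covering(0.0) >= n * alpha else []

    pieces = []
    if n_covering(g[0] - 1.0) >= n * alpha:          # unbounded block (-inf, g_1]
        pieces.append([-np.inf, g[0]])
    for j, gj in enumerate(g):
        if n_covering(gj) >= n * alpha:              # singleton block {g_j}
            pieces.append([gj, gj])
        if j + 1 < len(g) and n_covering(0.5 * (gj + g[j + 1])) >= n * alpha:
            pieces.append([gj, g[j + 1]])            # interval block [g_j, g_{j+1}]
    if n_covering(g[-1] + 1.0) >= n * alpha:         # unbounded block [g_J, +inf)
        pieces.append([g[-1], np.inf])
    return pieces

# usage: region for a new input given 30 noisy observations of sin(5x)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.sin(5.0 * X).ravel() + 0.1 * rng.standard_normal(30)
print(rrcm_region(X, y, np.array([[0.5]]), alpha=0.1))
```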

C. Kernel two-sided confidence predictor

Another possibility is to use the two-sided conformal procedure proposed in [23]. The main result of that paper is that, under relaxed Bayesian Ridge Regression assumptions, if the sequence $(x_n)_{n\ge 1} \in \mathcal{X}$ is iid with a non-singular second-moment matrix $\mathbb{E}\, x_1 x_1' \succ 0$, then for all sufficiently large $n$ the conformal confidence regions lose little efficiency: the upper end-points of the Bayesian and conformal prediction intervals deviate by $O_p\bigl(n^{-1/2}\bigr)$.

The two-sided procedure of [23], denoted by CRR, flips the sign in (7) and uses the conformity measure
$$A(Z_{-i}, Z_i) = \bigl|\{ j : \hat r_j \ge \hat r_i \}\bigr| \wedge \bigl|\{ j : \hat r_j \le \hat r_i \}\bigr|, \qquad (19)$$
where $(\hat r_i)_{i=1}^n$ are the in-sample ridge regression residuals. In that paper it was also shown that for any $\alpha \in (0, 1)$ the confidence region $\Gamma^\alpha$ produced by the CRR procedure for the conformity measure (19) is equivalent to the intersection of the confidence sets yielded by conformal procedures with non-conformity measures $\eta_i = \hat r_i$ and $\eta_i = -\hat r_i$ at significance levels $\alpha/2$. Individually, these NCMs define the upper and lower CRR sets, and together they make up the two-sided conformal procedure. Confidence regions based on this NCM, much like the RRCM, can use either kind of residual: leave-one-out or in-sample.

For the upper CRR the regions $U_i = \{ z \in \mathbb{R} : \hat r^z_i \ge \hat r^z_n \}$, $i = 1, \ldots, n$, are either empty, all of $\mathbb{R}$, or one-side closed half-rays. Since $\hat r^z_i = \lambda c_i + \lambda b_i z$, $U_i$ takes one of the following forms:
1) $b_i = b_n$: $U_i = \mathbb{R}$ if $c_i \ge c_n$, and $\emptyset$ otherwise;
2) $b_i \ne b_n$: $U_i = [q_i, +\infty)$ if $b_i > b_n$, or $U_i = (-\infty, q_i]$ otherwise,
with $q_i = \frac{c_n - c_i}{b_i - b_n}$. The forms of the regions $L_i$ for the lower CRR are obtained similarly, but with the signs of $c_i$ and $b_i$ flipped for each $i = 1, \ldots, n$. Both the upper and the lower confidence regions are built similarly to the kernel RRCM region (18) in sec. III-B. The final Kernel CRR confidence set is given by
$$\Gamma^\alpha_{X_{-n},y_{-n}}(x_n) = \Gamma^{\alpha,u}_{X_{-n},y_{-n}}(x_n) \cap \Gamma^{\alpha,l}_{X_{-n},y_{-n}}(x_n). \qquad (20)$$

This intersection can be computed efficiently in $O(n\log n)$, since both regions are built from sets anchored at a finite set $Q$ with at most $n + 2$ values. Therefore, the CRR confidence set for a fixed significance level $\alpha$ has $O(n\log n)$ complexity.
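The same machinery yields the CRR set. With the coefficients $b$, $c$ of $r_i(z) = \lambda(c_i + b_i z)$ computed exactly as in the RRCM sketch above, the upper and lower p-values are step functions whose only breakpoints are the $q_i$, so probing the breakpoints, the midpoints between them, and one point beyond each extreme recovers (20). A short sketch, again our own illustrative code rather than an optimized implementation:

```python
import numpy as np

def crr_region(b, c, alpha=0.05):
    """Kernel CRR confidence set (20), from r_i(z) = lam * (c_i + b_i * z).

    b, c are the vectors B and C of sec. III-B (last entry corresponds to the
    test object x_n).  Returns closed pieces [lo, hi] whose union is the set.
    """
    def accepted(z):
        r = c + b * z
        # one-sided conformal p-values, each compared with alpha / 2, cf. (19)-(20)
        return np.mean(r >= r[-1]) >= alpha / 2 and np.mean(r <= r[-1]) >= alpha / 2

    db = b[:-1] - b[-1]
    dc = c[-1] - c[:-1]
    mask = np.abs(db) > 1e-12
    q = np.unique(dc[mask] / db[mask])          # breakpoints q_i of both p-values
    if len(q) == 0:
        return [[-np.inf, np.inf]] if accepted(0.0) else []

    pieces = []
    if accepted(q[0] - 1.0):
        pieces.append([-np.inf, q[0]])
    for j, qj in enumerate(q):
        if accepted(qj):
            pieces.append([qj, qj])
        if j + 1 < len(q) and accepted(0.5 * (qj + q[j + 1])):
            pieces.append([qj, q[j + 1]])
    if accepted(q[-1] + 1.0):
        pieces.append([q[-1], np.inf])
    return pieces
```

With `b = Q[:, -1]` and `c = Q @ np.append(y_minus_n, 0.0)` from the RRCM sketch, `crr_region(b, c, alpha)` returns the pieces of (20).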

IV. NUMERICAL STUDY

Validity of conformal predictors in the online learning setting has been shown in [22], chapter 2; however, no result of this kind is known in the batch learning setting. Our experiments aim to evaluate the empirical performance of conformal prediction in this setting, with dedicated train and test datasets. In this section we conduct a set of experiments to examine the validity of the regions produced by the conformal Kernel Ridge Regression and compare their efficiency to the Bayesian confidence intervals.

We use the isotropic Gaussian kernel with precision parameter $\theta > 0$, $K(x, x') = \exp\bigl(-\theta\|x - x'\|^2\bigr)$, for both the conformal Kernel Ridge Regression and the Gaussian Process Regression. We experiment on a compact set $\mathcal{X} \subset \mathbb{R}^{d\times 1}$, since the validity of a conformal region is by design unaffected by the NCM $A$, which can be an arbitrary computable function and is oblivious to the structure of the domain. The dimensionality of the input data, however, may affect the width of the constructed confidence region.

The off-line validity and efficiency of the Bayesian and conformal confidence regions are studied in two settings: the fully Gaussian and the non-Gaussian case. In the first case the Bayesian assumptions hold and the experiments are run on a sample path of $GP(0, K + \delta_{x,x'}\gamma)$. In the second the assumptions of Gaussian Process Regression are only partially valid: a deliberately non-Gaussian $f$ in (1) is contaminated by moderate Gaussian white noise with variance $\gamma$.

The following hyper-parameters are controlled: the true noise-to-signal ratio $\gamma \in \{10^{-6}, 10^{-1}\}$, the covariance kernel precision $\theta \in \{10, 10^2, 10^3\}$, the train sample size $n$ (from 25 up to 1600), the NCM, (13) or (19), the kind of residual, (9) or (10), the regularization parameter $\lambda \in \{10^{-1}, 10^{-6}\}$, and either a fixed $\theta$ or the $\theta$ that maximizes (5).

For a given test function $f : \mathcal{X} \to \mathbb{R}$ and a set of hyper-parameters each experiment consists of the following steps:
1) the test inputs $X^*$ are given by a regular grid in $\mathcal{X}$ with constant spacing;
2) the train inputs $X$ are sampled from a uniform distribution over $\mathcal{X}$;
3) for all $x \in X_{pool} = X \cup X^*$, target values $y_x = f(x)$ are generated;
4) for $l = 1, \ldots, L$ independently:
   a) draw a random subsample of size $n$ from the train dataset;
   b) fit a Gaussian Process Regression with zero mean and Gaussian kernel $K$ with the specified precision $\theta$ and regularization $\lambda$;
   c) for each $x^* \in X^*$ construct the Bayesian (4) and conformal (6) confidence regions using the NCM $A$ and residuals $\hat r$, with $\sigma^2$ estimated by maximum likelihood;
   d) estimate the coverage rate and the width of the convex hull of the region over the test sample $X^*$: $p_l(R) = |X^*|^{-1}\sum_{x\in X^*} 1_{y_x \in R_x}$ and $w_l(R) = \inf\{ b - a : R \subseteq [a, b] \}$, where $R$ is a confidence region.
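Step 4d is a one-liner once the regions are represented as lists of closed pieces, as in the sketches of sec. III. A helper with our own argument names follows; the aggregation of the per-point widths into the median, maximum, and quantile curves shown in fig. 3 is left to the caller.

```python
import numpy as np

def coverage_and_widths(y_star, regions):
    """Empirical coverage p_l(R) and per-point widths of the convex hulls, step 4d.

    y_star  : array of test targets, one per test input in X*.
    regions : one confidence region per test input, each a list of [lo, hi] pieces.
    """
    covered = [any(lo <= y <= hi for lo, hi in R) for y, R in zip(y_star, regions)]
    widths = np.array([max(hi for _, hi in R) - min(lo for lo, _ in R) if R else 0.0
                       for R in regions])
    return float(np.mean(covered)), widths
```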

A. Results: 1-d

Figure 1: A sample path of a 1-d Gaussian Process with $\gamma = 10^{-1}$ (top) and $\gamma = 10^{-6}$ (bottom), and the constructed confidence intervals: the forecast "GPR-f" and prediction "GPR-p" bands (left), and the "RRCM" confidence bands (right).

We begin with an examination of the fully Gaussian setup with $\mathcal{X} = [0, 1]$. To illustrate the constructed confidence regions, we generated a sample path of a 1-d Gaussian process with the isotropic Gaussian kernel on a regular grid of 501 knots, and used a subset of 51 knots in $[0.05, 0.95]$ for constructing the Bayesian (GPR) and conformal (RRCM) confidence regions. The confidence regions are depicted in fig. 1.

The conformal regions closely track the Bayesian confidence bands ("GPR-f", (4)), but the latter are too wide in the low-noise case. Near the end-points the confidence regions dramatically increase in width, reflecting the increased uncertainty. In general, the conformal regions necessarily cover the KRR prediction $\hat y_{|(X,y)}(x^*)$ but are not necessarily symmetric around it, whereas the GPR regions (4) are.

It can be argued that the confidence region for any observation $x_n$ sufficiently far away from the bulk of the training dataset has constant size, determined only by the train sample (fig. 2). Indeed, as $\|x_n\|^2 \to \infty$ ($n$ fixed) the vector $B_{-n}$ in (16) approaches the $n$-th unit vector $e_n$, since for the Gaussian kernel $\|K_{-n}(x_n)\|^2 \to 0$. Since the kernel is bounded, the value $m_n$ in (12) is a bounded function of $x_n$, which, in turn, implies that eventually all RRCM (and, similarly, CRR) regions $S_i$ assume the form of closed intervals $[-|q_i|, |q_i|]$, where $q_i = m_n\,(e_i' Q_{-n} y_{-n}) + o(1)$ as $\|x_n\| \to \infty$, $i \ne n$. Therefore, the conformal procedure essentially reverts to a constant-size confidence region, determined by the $\lfloor n(1-\alpha)\rfloor$-th order statistic of $(|q_i|)_{i=1}^{n-1}$. Analogous effects can be observed for the Gaussian Process confidence interval (4).

By construction, the conformal confidence region (6) allows for some uncertainty. Indeed, the residuals (15), and hence the vector of non-conformity scores $(\eta^z_i)_{i=1}^n$, are continuous functions of the targets $y = (y_i)_{i=1}^n$. Thus for small perturbations of $y$ the relative ordering of the $\eta^z_i$ is preserved and the interval remains unchanged. Therefore, the set (6) tends to capture more points $y$ at the same fixed significance level.

Experimental results in the perfectly noiseless case show that both kinds of confidence region are conservative (see fig. 3). In this figure we consider the ML estimate of $\theta$, but the results for other, fixed, choices are qualitatively similar. By construction the procedure (6) adapts to the noise level in the observations through the distribution of the non-conformity scores rather than through the regularization parameter $\lambda$, which makes the Bayesian confidence intervals usually wider than the conformal regions (fig. 3b). Nevertheless, for higher $\lambda$ the coverage rate of the conformal regions gets closer to the specified confidence level.

Figure 2: Limiting out-of-sample behaviour of the GPR (left) and RRCM (right) confidence regions for a sample path of a Gaussian process with negligible (top, $\gamma = 10^{-6}$) and high (bottom, $\gamma = 10^{-1}$) noise-to-signal level.

Figure 3: Coverage rate dynamics (a, b) and asymptotic width (c, d) of the confidence regions in the fully Gaussian low-noise case $\gamma = 10^{-6}$ for $\theta = \hat\theta_{ML}$ and $\lambda = 10^{-6}$ (a, c), and $\lambda = 10^{-1}$ (b, d). Rows from top to bottom: "GPR-f", "RRCM", "RRCM-loo", "CRR", "CRR-loo". In columns c and d upward triangles indicate the 5% sample quantile across the whole test sample, downward triangles indicate the maximal width, and the median width is drawn with a slightly thicker line.

In the non-Gaussian noiseless experiment the coverage rate of the conformal confidence intervals maintains the specified confidence level, and all conformal procedures demonstrate very similar asymptotic validity.

Figure 4: Typical conformal confidence bands for the "Heaviside" step function (train sample size n = 50): left, CRR; right, RRCM.

Figure 5: Coverage dynamics for the "Heaviside" ($\lambda = 10^{-6}$).

For the "Heaviside" step function, typical confidence bands are shown in fig. 4, and the asymptotic coverage rate of the various confidence bands is presented in fig. 5. In the non-Gaussian setting the GPR confidence intervals are not consistently valid, as is evident from the coverage rate dynamics for $\lambda = 10^{-1}$. At the same time the conformal procedures show no significant departures from the claimed validity (results for other measures and residuals were qualitatively similar). The main conclusion is that in the negligible-noise case the conformal confidence intervals for KRR with the Gaussian kernel perform reasonably well, both in terms of validity in the non-Gaussian setting and of efficiency in the fully Gaussian setting.

The performance of the conformal confidence regions in the noisy setting ($\gamma = 10^{-1}$) is qualitatively similar to the negligible-noise case, except that the bands are wider due to the higher observation noise. We report the findings for the MLE of $\theta$ only, but the results for conformal regions with fixed $\theta$ are qualitatively similar. In fig. 6 the conformal confidence regions provide the specified level of validity regardless of the parameter $\lambda$ of the non-conformity measure. As expected, the Bayesian confidence predictions uphold their theoretical guarantees. In the non-Gaussian setting with noise-to-signal ratio $\gamma = 10^{-1}$, fig. 7, all experiments yielded results similar to the negligible-noise case: the conformal confidence sets are asymptotically valid, whereas the Bayesian intervals are not.


B. Results: 2-d

In this section we conduct experiments in the 2-d setting $\mathcal{X} = [-1, 1]^2$; the experimental steps are similar to those in sec. IV-A. Typical sample realisations of the studied 2-d functions $f$ are depicted in fig. 8.

Figure 6: Coverage rate and region size dynamics in the noisy fully Gaussian case with $\gamma = 10^{-1}$ for $\theta = \hat\theta_{ML}$ and $\lambda = 10^{-6}$ (a), and $\lambda = 10^{-1}$ (b). Rows from top to bottom: "GPR-f", "RRCM", "RRCM-loo", "CRR", and "CRR-loo".

Figure 7: Coverage dynamics for the "Heaviside" ($\lambda = 10^{-1}$).

Table I shows the error rates ($y^* \notin B(x^*)$) of the Bayesian confidence intervals on the fixed test sample. Columns 1 and 4 show that the regions are approximately valid when the kernel and noise hyper-parameters are known. The Bayesian intervals are more conservative in the case of low noise ($\gamma = 10^{-6}$) and high regularization ($\lambda = 10^{-1}$). However, the validity of the GPR confidence intervals is sensitive to misspecification of the kernel precision $\theta$.

Figure 8: A sample path of a 2-d Gaussian process (left, $\gamma = 10^{-6}$) and the non-Gaussian function "f2" (right, $\gamma = 10^{-1}$).

Table I: The empirical error rate (%) of the GPR confidence interval for a simulated 2-d Gaussian process with train size n = 1500, for $\gamma, \lambda \in \{10^{-6}, 10^{-1}\}$, $\theta \in \{10, 10^2, 10^3, \hat\theta_{ML}\}$, and $\alpha \in \{1, 5, 10, 25\}\%$.

Table III: The empirical error rate (%) of the GPR confidence interval for the "f2" test function (n = 1500), for the same combinations of $\gamma$, $\lambda$, $\theta$, and $\alpha$ as in Table I.

Table II: The maximal absolute deviation MAD(Γ, A; Θ) (%) of the empirical error rate from the theoretical significance level of the conformal confidence regions for a simulated 2-d Gaussian process, n = 1500.

                          γ:  10^-6   10^-6   10^-1   10^-1
  type        θ           λ:  10^-6   10^-1   10^-6   10^-1
  RRCM        10^1            0.3     0.2     0.8     0.4
              10^2            1.7     1.3     1.1     0.7
              10^3            1.1     2.6     1.2     0.5
              θ_ML            1.7     2.2     0.1     0.5
  RRCM-loo    10^1            1.3     0.3     2.0     0.4
              10^2            2.4     1.7     2.0     0.4
              10^3            2.9     2.6     0.1     0.6
              θ_ML            2.6     2.6     0.8     0.6
  CRR         10^1            0.3     0.1     0.8     0.3
              10^2            1.6     1.2     1.2     0.8
              10^3            0.8     2.2     1.3     0.5
              θ_ML            1.8     2.1     0.1     0.4
  CRR-loo     10^1            1.2     0.3     2.0     0.2
              10^2            2.4     1.7     2.1     0.5
              10^3            2.6     2.4     0.3     0.5
              θ_ML            2.6     2.4     0.8     0.6

In contrast to the GPR confidence intervals, the conformal regions are insensitive to misspecification, as shown in table II, where we report the maximal absolute deviation of the interval error rate from the specified rate $\alpha$ across all studied significance levels:
$$\mathrm{MAD}(\Gamma, A; \Theta) = \max_{\alpha\in A}\, \Bigl| m^{-1}\,\#\{ j : y^*_j \notin \Gamma^\alpha_n(X^*_j; \Theta) \} - \alpha \Bigr|, \qquad (21)$$
where $\Theta$ is the vector of hyper-parameters of the experiment revealed to the conformal procedure, $A = \{1\%, 5\%, 10\%, 25\%\}$, and $(X^*_j, y^*_j)_{j=1}^m$ is the test sample. A typical profile of the test function used in the non-Gaussian experiment is plotted in fig. 8.
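Equation (21) is a direct aggregation of the per-level empirical error rates; a short sketch with our own argument names (the region constructor can be any of the procedures from sec. III):

```python
import numpy as np

def mad_error_rate(y_star, region_fn, alphas=(0.01, 0.05, 0.10, 0.25)):
    """Maximal absolute deviation (21) of the empirical error rate from alpha.

    region_fn(alpha) returns one confidence region per test point, each a list
    of closed pieces [lo, hi]; y_star holds the corresponding test targets.
    """
    devs = []
    for a in alphas:
        regions = region_fn(a)
        err = np.mean([not any(lo <= y <= hi for lo, hi in R)
                       for y, R in zip(y_star, regions)])
        devs.append(abs(err - a))
    return max(devs)
```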

Table IV: The maximal absolute deviation MAD(Γ, A; Θ) (%) of the empirical error rate from the theoretical significance level of the conformal confidence regions for the "f2" test function (n = 1500).

                          γ:  10^-6   10^-6   10^-1   10^-1
  type        θ           λ:  10^-6   10^-1   10^-6   10^-1
  CRR         10^1            1.0     1.3     1.2     0.2
              10^2            0.7     2.7     1.4     0.6
              10^3            0.3     1.1     1.0     0.1
              θ_ML            1.4     2.2     0.6     0.5
  CRR-loo     10^1            0.8     1.4     1.3     0.2
              10^2            3.0     3.2     1.9     0.5
              10^3            1.0     1.0     0.2     0.2
              θ_ML            2.6     2.6     0.5     0.2
  RRCM        10^1            0.9     1.4     1.2     0.2
              10^2            0.7     2.6     1.3     0.6
              10^3            0.4     0.5     0.9     0.3
              θ_ML            1.3     2.3     0.8     0.4
  RRCM-loo    10^1            0.8     1.6     1.2     0.3
              10^2            3.0     3.2     2.0     0.6
              10^3            0.9     0.6     0.2     0.2
              θ_ML            2.6     2.6     0.7     0.1

The performance of the conformal

regions in the non-Gaussian experiments is summarized in table IV. Overall, the error rates do not stray too far from the stated levels, and conformal regions are weakly sensitive to the KRR hyper-parameters. In contrast, table III shows that the empirical error rate of Bayesian confidence intervals depends more prominently on the values of the precision parameter. The MLE θ produces conservatively valid confidence intervals, and for γ = 10−1 the error rate becomes closer to the specified significance level.

V. CONCLUSION

Experiments in sec. IV provide evidence suggesting that the conformal procedures are insensitive to the choice of the core NCM and are asymptotically equivalent in terms of coverage and efficiency, despite being applied in the offline batch learning setting. Furthermore, the results indicate that both the Bayesian and the conformal confidence intervals possess asymptotic validity guarantees when the Gaussian assumptions hold, and that the conformal procedure yields asymptotically efficient regions. Further research shall focus on establishing theoretical foundations for the obtained experimental evidence for the conformalized KRR with the Gaussian kernel, isolating and studying the cases when the guarantees are upheld, and generalizing the efficiency result of [23].

ACKNOWLEDGMENTS

The research presented in Section IV of this paper was supported by the RFBR grants 16-01-00576 A and 16-29-09649 ofi_m; the research presented in the other sections was supported solely by the Russian Science Foundation grant (project 14-50-00150).

REFERENCES

[1] S. Alestra, E. Burnaev, C. Bordry, C. Brand, P. Erofeev, A. Papanov, and C. Silveira-Freixo, "Application of rare event anticipation techniques to aircraft health management," in Mechanical and Aerospace Engineering V, vol. 1016. Trans Tech Publications, 2014, pp. 413–417.
[2] E. Burnaev, P. Erofeev, and D. Smolyakov, "Model selection for anomaly detection," in Proc. SPIE, vol. 9875, 2015.
[3] A. Artemov, E. Burnaev, and A. Lokot, "Nonparametric decomposition of quasi-periodic time series for change-point detection," in Proc. SPIE, vol. 9875, 2015.
[4] C. C. Aggarwal and S. Y. Philip, "Outlier detection with uncertain data," in Proc. of the SIAM International Conference on Data Mining. SIAM, 2008, pp. 483–493.
[5] D. W. Scott, "Kernel density estimators," in Multivariate Density Estimation: Theory, Practice, and Visualization, 2015, pp. 137–217.
[6] V. Hautamaki, I. Karkkainen, and P. Franti, "Outlier detection using k-nearest neighbour graph," in Proc. of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 3. IEEE Computer Society, 2004, pp. 430–433.
[7] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, "LOF: Identifying density-based local outliers," SIGMOD Rec., vol. 29, no. 2, pp. 93–104, May 2000.
[8] H.-P. Kriegel, P. Kröger, E. Schubert, and A. Zimek, "LoOP: Local outlier probabilities," in Proc. of the 18th ACM Conference on Information and Knowledge Management (CIKM '09). ACM, 2009, pp. 1649–1652.

[9] M. F. Augusteijn and B. A. Folkert, "Neural network classification and novelty detection," International Journal of Remote Sensing, vol. 23, no. 14, pp. 2891–2902, 2002.
[10] S. Hawkins, H. He, G. J. Williams, and R. A. Baxter, "Outlier detection using replicator neural networks," in Data Warehousing and Knowledge Discovery: 4th International Conference, DaWaK 2002, Aix-en-Provence, France, September 4–6, 2002, Proceedings. Springer Berlin Heidelberg, 2002, pp. 170–180.
[11] H. Hoffmann, "Kernel PCA for novelty detection," Pattern Recognition, vol. 40, no. 3, pp. 863–874, Mar. 2007.
[12] B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5, pp. 1299–1319, Jul. 1998.
[13] A. Bernstein, A. Kuleshov, and Y. Yanovich, Manifold Learning in Regression Tasks. Springer International Publishing, 2015, pp. 414–423.
[14] A. Kuleshov and A. Bernstein, "Extended regression on manifolds estimation," in Conformal and Probabilistic Prediction with Applications: 5th International Symposium, COPA 2016, Madrid, Spain, April 20–22, 2016, Proceedings. Springer International Publishing, 2016, pp. 208–228.
[15] B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[16] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. University Press Group Limited, 2006.
[17] E. V. Burnaev, A. A. Zaytsev, and V. G. Spokoiny, "The Bernstein-von Mises theorem for regression based on Gaussian processes," Russian Math. Surveys, vol. 68, no. 5, p. 954, 2013.
[18] A. A. Zaitsev, E. V. Burnaev, and V. G. Spokoiny, "Properties of the posterior distribution of a regression model based on Gaussian random fields," Automation and Remote Control, vol. 74, no. 10, pp. 1645–1655, 2013.
[19] M. Belyaev, E. Burnaev, and Y. Kapushev, "Gaussian process regression for structured data sets," in Statistical Learning and Data Sciences: Third International Symposium, SLDS 2015, Egham, UK, April 20–23, 2015, Proceedings. Springer International Publishing, 2015, pp. 106–115.
[20] M. Belyaev, E. Burnaev, and Y. Kapushev, "Computationally efficient algorithm for Gaussian process regression in case of structured samples," Computational Mathematics and Mathematical Physics, vol. 56, no. 4, pp. 499–513, 2016.
[21] E. V. Burnaev, M. E. Panov, and A. A. Zaytsev, "Regression on the basis of nonstationary Gaussian processes with Bayesian regularization," Journal of Communications Technology and Electronics, vol. 61, no. 6, pp. 661–671, 2016.
[22] V. Vovk, A. Gammerman, and G. Shafer, Algorithmic Learning in a Random World. Springer, 2005.
[23] E. Burnaev and V. Vovk, "Efficiency of conformalized ridge regression," in Proc. of the 27th Conference on Learning Theory, COLT 2014, Barcelona, Spain, June 13–15, 2014, pp. 605–622.