
fastcpd() takes in a formula, data, a family and extra parameters, and returns a fastcpd object.

Usage

fastcpd(
  formula = y ~ . - 1,
  data,
  beta = "MBIC",
  cost_adjustment = "MBIC",
  family = NULL,
  cost = NULL,
  cost_gradient = NULL,
  cost_hessian = NULL,
  line_search = c(1),
  lower = rep(-Inf, p),
  upper = rep(Inf, p),
  pruning_coef = 0,
  segment_count = 10,
  trim = 0.02,
  momentum_coef = 0,
  multiple_epochs = function(x) 0,
  epsilon = 1e-10,
  order = c(0, 0, 0),
  p = ncol(data) - 1,
  cp_only = FALSE,
  vanilla_percentage = 0,
  warm_start = FALSE,
  ...
)

Arguments

formula

A formula object specifying the model to be fitted. The (optional) response variable should be on the LHS of the formula, while the covariates should be on the RHS. The naming of variables used in the formula should be consistent with the column names in the data frame provided in data. The response variable is not needed for mean/variance change models and time series models. By default, an intercept column will be added to the data, similar to the lm() function; thus, it is suggested that users remove the intercept term by appending - 1 to the formula. Note that the fastcpd.family functions do not require a formula input.

data

A data frame of dimension \(T \times d\) containing the data to be segmented (where each row denotes a data point \(z_t \in \mathbb{R}^d\) for \(t = 1, \ldots, T\)) is required in the main function, while a matrix or a vector input is also accepted in the fastcpd.family functions.

beta

Penalty criterion for the number of change points. This parameter takes a string value of "BIC", "MBIC", "MDL" or a numeric value. If a numeric value is provided, it will be used directly as the penalty. By default, the MBIC criterion is used, where \(\beta = (p + 2) \log(T) / 2\). This parameter should be used in conjunction with cost_adjustment, described below. Discussions of the penalty criteria can be found in the references.
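
For example, the MBIC default is equivalent to supplying the numeric value directly (a sketch; data and p denote the arguments described on this page, and \(T\) = nrow(data)):

fastcpd(formula = y ~ . - 1, data = data, beta = "MBIC")
# equivalently (sketch):
fastcpd(formula = y ~ . - 1, data = data, beta = (p + 2) * log(nrow(data)) / 2)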

cost_adjustment

Cost adjustment criterion. It can be "BIC", "MBIC", "MDL" or NULL. By default, the cost adjustment criterion is set to "MBIC". The "MBIC" and "MDL" criteria modify the cost function by adding a negative adjustment term to it; "BIC" or NULL leaves the cost function unmodified. Details can be found in the references.

family

Family class of the change point model. It can be "mean" for mean change, "variance" for variance change, "meanvariance" for mean and/or variance change, "lm" for linear regression, "binomial" for logistic regression, "poisson" for Poisson regression, "lasso" for penalized linear regression, "ar" for AR(\(p\)) models, "arma" for ARMA(\(p\), \(q\)) models, "arima" for ARIMA(\(p\), \(d\), \(q\)) models, "garch" for GARCH(\(p\), \(q\)) models, "var" for VAR(\(p\)) models and "custom" for user-specified custom models. Omitting this parameter is the same as specifying the parameter to be "custom" or NULL, in which case, users must specify the custom cost function.
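
For example, a mean-change model has no response variable, so the formula keeps only the covariates (a minimal sketch, assuming x is a numeric vector; the family-specific wrapper needs no formula at all):

fastcpd(formula = ~ . - 1, data = data.frame(x = x), family = "mean")
# equivalently, via the family-specific wrapper:
fastcpd.mean(x)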

cost

Cost function to be used. cost, cost_gradient, and cost_hessian should not be specified at the same time as family, since the built-in families have cost functions implemented in C++ for better performance. If not specified, the default is the negative log-likelihood of the corresponding family. Custom cost functions can be provided in the following two formats:

  • cost = function(data) {...}

  • cost = function(data, theta) {...}

Users can specify a loss function using the second format, which will be used to calculate the cost value. In both formats, the input data is a subset of the original data frame in the form of a matrix (a matrix with a single column in the case of a univariate data set). In the first format, the specified cost function directly calculates the cost value; fastcpd() then performs the vanilla PELT algorithm, and cost_gradient and cost_hessian should not be provided since no parameter updating is necessary for vanilla PELT. In the second format, the loss function \(\sum_{i = s}^t l(z_i, \theta)\) is provided, which has to be optimized over the parameter \(\theta\) to obtain the cost value. A detailed discussion of custom cost function usage can be found in the references.
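
As an illustration of the first format, the following sketch defines a Gaussian mean-change cost with known unit variance; fastcpd() then performs vanilla PELT and no gradient or Hessian is needed (illustrative only; the built-in family = "mean" is preferable in practice):

set.seed(1)
x <- c(rnorm(100, 0), rnorm(100, 5))
mean_cost <- function(data) {
  # negative Gaussian log-likelihood of the segment about its sample mean,
  # up to an additive constant, assuming unit variance
  demeaned <- sweep(data, 2, colMeans(data))
  sum(demeaned^2) / 2
}
result <- fastcpd(formula = ~ . - 1, data = data.frame(x = x), cost = mean_cost)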

cost_gradient

Gradient of the custom cost function. Example usage:

cost_gradient = function(data, theta) {
  ...
  return(gradient)
}

The gradient function takes two inputs, the first being a matrix representing a segment of the data, similar to the format used in the cost function, and the second being the parameter that needs to be optimized. The gradient function returns the value of the gradient of the loss function, i.e., \(\sum_{i = s}^t \nabla l(z_i, \theta)\).

cost_hessian

Hessian of the custom loss function. The Hessian function takes two inputs, the first being a matrix representing a segment of the data, similar to the format used in the cost function, and the second being the parameter that needs to be optimized. The Hessian function returns the Hessian of the loss function, i.e., \(\sum_{i = s}^t \nabla^2 l(z_i, \theta)\).
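
Example usage, mirroring cost_gradient above:

cost_hessian = function(data, theta) {
  ...
  return(hessian)
}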

line_search

If a vector of numeric values is provided, a line search will be performed to find the optimal step size for each update. Detailed usage of line_search can be found in the references.
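
For example, the following sketch lets each update try the step sizes 1, 0.5 and 0.1 and keep the one that best decreases the loss (the candidate values are illustrative):

line_search = c(1, 0.5, 0.1)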

lower

Lower bound for the parameters. Used to specify the domain of the parameters after each gradient descent step. If not specified, the lower bound is set to be -Inf for all parameters. lower is especially useful when the estimated parameters take only positive values, such as the noise variance.

upper

Upper bound for the parameters. Used to specify the domain of the parameters after each gradient descent step. If not specified, the upper bound is set to be Inf for all parameters.
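
For example, if the last of the p parameters is a noise variance, it can be kept strictly positive while the remaining parameters stay unconstrained (a sketch; the parameter ordering is model-specific):

lower = c(rep(-Inf, p - 1), 1e-10)
upper = rep(Inf, p)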

pruning_coef

Pruning coefficient \(c_0\) used in the pruning step of the PELT algorithm, with the default value 0. If cost_adjustment is specified as "MBIC", an adjustment term \(p\log(2)\) will be added to the pruning coefficient. If cost_adjustment is specified as "MDL", an adjustment term \(p\log_2(2)\) will be added to the pruning coefficient. A detailed discussion of the pruning coefficient can be found in the references.

segment_count

An initial guess of the number of segments. If not specified, the initial guess of the number of segments is 10. The initial guess affects the initial estimates of the parameters in SeGD.

trim

Trimming for the boundary change points so that a change point close to the boundary will not be counted as a change point. This parameter also specifies the minimum distance between two change points. If several change points have mutual distances smaller than trim * nrow(data), those change points will be merged into one single change point. The value of this parameter should be between 0 and 1.

momentum_coef

Momentum coefficient to be applied to each update. This parameter is useful when the loss function is poorly shaped, so that maintaining momentum from the previous update is desired. The default value is 0, meaning no momentum is used by default.

multiple_epochs

A function can be specified such that an adaptive number of multiple epochs can be utilized to improve the algorithm's performance. multiple_epochs is a function of the length of the data segment. The function returns an integer indicating how many epochs should be performed apart from the default update. By default, the function returns zero, meaning no multiple epochs will be used to update the parameters. Example usage:

multiple_epochs = function(segment_length) {
  if (segment_length < 100) 1
  else 0
}

This function lets SeGD perform parameter updates with an additional epoch for each segment of length less than 100, and no additional epoch for segments of length greater than or equal to 100.

epsilon

Epsilon to avoid numerical issues. Only used for the Hessian computation in logistic regression and Poisson regression.

order

Order of the AR(\(p\)), VAR(\(p\)) or ARIMA(\(p\), \(d\), \(q\)) model.
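
For example, order = c(1, 1, 1) specifies an ARIMA(1, 1, 1) model (a sketch, assuming x is a univariate series and using the family-specific wrapper):

fastcpd.arima(x, order = c(1, 1, 1))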

p

Number of covariates in the model. If not specified, the number of covariates will be inferred from the data, i.e., p = ncol(data) - 1. This parameter is superseded by order in the case of time series models: "ar", "var", "arima".

cp_only

If TRUE, only the change points are returned. Otherwise, the cost values and the estimated parameters for each segment are also returned. By default the value is set to FALSE, so that plot() can be used to visualize the results for a built-in model. Setting cp_only to FALSE has some performance impact, since the cost values and estimated parameters for each segment need to be calculated and stored. If users are only interested in the change points, setting cp_only to TRUE reduces the computational cost.

vanilla_percentage

The parameter \(v\) is between zero and one. For each segment, when its length is no more than \(vT\), the cost value is computed by performing an exact minimization of the loss function over the parameter; when its length is greater than \(vT\), the cost value is approximated through SeGD. This parameter therefore induces an algorithm that can be interpreted as an interpolation between dynamic programming with SeGD (\(v = 0\)) and the vanilla PELT (\(v = 1\)); the lasso example below compares \(v = 0\) with \(v = 0.2\). Readers are referred to the references for more details.

warm_start

If TRUE, the algorithm will use the estimated parameters from the previous segment as the initial value for the current segment. This parameter is only used for the "glm" families.

...

Other parameters for specific models.

  • include.mean is used to determine if a mean/intercept term should be included in the ARIMA(\(p\), \(d\), \(q\)) or GARCH(\(p\), \(q\)) models.

  • r.clock is used to create an RcppClock object to record the time spent in the C++ code. The default is an empty string. If set to any non-empty string, an object with the specified name will be created. Usage: library(RcppClock); plot(VARIABLE_NAME).

  • r.progress is used to control the progress bar. By default the progress bar will be shown. To disable it, set r.progress = FALSE.

  • p.response is used to specify the number of response variables. This parameter is especially useful for linear models with multivariate responses.
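
For example, these options can be combined with the multivariate linear model from the Examples section below (a sketch):

fastcpd(
  formula = cbind(y.1, y.2) ~ . - 1,
  data = data,
  family = "lm",
  r.progress = FALSE, # disable the progress bar
  p.response = 2 # two response columns
)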

Value

A fastcpd object.

References

Xingchi Li, Xianyang Zhang (2024). "fastcpd: Fast Change Point Detection in R." arXiv:2404.05933, https://arxiv.org/abs/2404.05933.

Xianyang Zhang, Trisha Dawn (2023). "Sequential Gradient Descent and Quasi-Newton's Method for Change-Point Analysis." In Ruiz, Francisco, Dy, Jennifer, van de Meent, Jan-Willem (eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, 1129-1143.

See also

fastcpd.family for the family-specific functions; plot.fastcpd() for plotting the results; summary.fastcpd() for summarizing the results.

Examples

if (requireNamespace("mvtnorm", quietly = TRUE)) {
  set.seed(1)
  n <- 200
  p <- 4
  d <- 2
  x <- mvtnorm::rmvnorm(n, rep(0, p), diag(p))
  theta_1 <- matrix(runif(8, -3, -1), nrow = p)
  theta_2 <- matrix(runif(8, -1, 3), nrow = p)
  y <- rbind(
    x[1:125, ] %*% theta_1 + mvtnorm::rmvnorm(125, rep(0, d), 3 * diag(d)),
    x[126:n, ] %*% theta_2 + mvtnorm::rmvnorm(75, rep(0, d), 3 * diag(d))
  )
  result_mlm <- fastcpd(
    cbind(y.1, y.2) ~ . - 1, cbind.data.frame(y = y, x = x), family = "lm"
  )
  summary(result_mlm)
}
#> 
#> Call:
#> fastcpd(formula = cbind(y.1, y.2) ~ . - 1, data = cbind.data.frame(y = y, 
#>     x = x), family = "lm")
#> 
#> Change points:
#> 125 
#> 
#> Cost values:
#> 501.0668 296.4026 
if (
  requireNamespace("mvtnorm", quietly = TRUE) &&
    requireNamespace("stats", quietly = TRUE)
) {
  set.seed(1)
  n <- 400 + 300 + 500
  p <- 5
  x <- mvtnorm::rmvnorm(n, mean = rep(0, p), sigma = diag(p))
  theta <- rbind(
    mvtnorm::rmvnorm(1, mean = rep(0, p - 3), sigma = diag(p - 3)),
    mvtnorm::rmvnorm(1, mean = rep(5, p - 3), sigma = diag(p - 3)),
    mvtnorm::rmvnorm(1, mean = rep(9, p - 3), sigma = diag(p - 3))
  )
  theta <- cbind(theta, matrix(0, 3, 3))
  theta <- theta[rep(seq_len(3), c(400, 300, 500)), ]
  y_true <- rowSums(x * theta)
  factor <- c(
    2 * stats::rbinom(400, size = 1, prob = 0.95) - 1,
    2 * stats::rbinom(300, size = 1, prob = 0.95) - 1,
    2 * stats::rbinom(500, size = 1, prob = 0.95) - 1
  )
  y <- factor * y_true + stats::rnorm(n)
  data <- cbind.data.frame(y, x)
  huber_threshold <- 1
  # Huber loss summed over a data segment (quadratic near zero, linear in the tails)
  huber_loss <- function(data, theta) {
    residual <- data[, 1] - data[, -1, drop = FALSE] %*% theta
    indicator <- abs(residual) <= huber_threshold
    sum(
      residual^2 / 2 * indicator +
        huber_threshold * (
          abs(residual) - huber_threshold / 2
        ) * (1 - indicator)
    )
  }
  # Gradient of the Huber loss, evaluated at the most recent observation
  huber_loss_gradient <- function(data, theta) {
    residual <- c(data[nrow(data), 1] - data[nrow(data), -1] %*% theta)
    if (abs(residual) <= huber_threshold) {
      -residual * data[nrow(data), -1]
    } else {
      -huber_threshold * sign(residual) * data[nrow(data), -1]
    }
  }
  # (Approximate) Hessian of the Huber loss at the most recent observation
  huber_loss_hessian <- function(data, theta) {
    residual <- c(data[nrow(data), 1] - data[nrow(data), -1] %*% theta)
    if (abs(residual) <= huber_threshold) {
      outer(data[nrow(data), -1], data[nrow(data), -1])
    } else {
      0.01 * diag(length(theta))
    }
  }
  huber_regression_result <- fastcpd(
    formula = y ~ . - 1,
    data = data,
    beta = (p + 1) * log(n) / 2,
    cost = huber_loss,
    cost_gradient = huber_loss_gradient,
    cost_hessian = huber_loss_hessian
  )
  summary(huber_regression_result)
}
#> 
#> Call:
#> fastcpd(formula = y ~ . - 1, data = data, beta = (p + 1) * log(n)/2, 
#>     cost = huber_loss, cost_gradient = huber_loss_gradient, cost_hessian = huber_loss_hessian)
#> 
#> Change points:
#> 418 726 
#> 
#> Parameters:
#>      segment 1    segment 2    segment 3
#> 1 -0.464532607  2.760546489  8.744706508
#> 2 -0.995964940  5.059690163  9.506534878
#> 3 -0.093877079  0.023321737 -0.008908851
#> 4 -0.002000477 -0.090091843 -0.047909865
#> 5  0.009364910 -0.009543696  0.025171681
# \donttest{
set.seed(1)
p <- 5
x <- matrix(rnorm(375 * p, 0, 1), ncol = p)
theta <- rbind(rnorm(p, 0, 1), rnorm(p, 2, 1))
y <- c(
  rbinom(200, 1, 1 / (1 + exp(-x[1:200, ] %*% theta[1, ]))),
  rbinom(175, 1, 1 / (1 + exp(-x[201:375, ] %*% theta[2, ])))
)
data <- data.frame(y = y, x = x)
result_builtin <- suppressWarnings(fastcpd.binomial(data))
# Negative log-likelihood of logistic regression over a data segment,
# with a numerically stable branch for large linear predictors
logistic_loss <- function(data, theta) {
  x <- data[, -1]
  y <- data[, 1]
  u <- x %*% theta
  nll <- -y * u + log(1 + exp(u))
  nll[u > 10] <- -y[u > 10] * u[u > 10] + u[u > 10]
  sum(nll)
}
# Gradient of the logistic loss at the most recent observation
logistic_loss_gradient <- function(data, theta) {
  x <- data[nrow(data), -1]
  y <- data[nrow(data), 1]
  c(-(y - 1 / (1 + exp(-x %*% theta)))) * x
}
# Hessian of the logistic loss at the most recent observation
logistic_loss_hessian <- function(data, theta) {
  x <- data[nrow(data), -1]
  prob <- 1 / (1 + exp(-x %*% theta))
  (x %o% x) * c((1 - prob) * prob)
}
result_custom <- fastcpd(
  formula = y ~ . - 1,
  data = data,
  epsilon = 1e-5,
  cost = logistic_loss,
  cost_gradient = logistic_loss_gradient,
  cost_hessian = logistic_loss_hessian
)
cat(
  "Change points detected by built-in logistic regression model: ",
  result_builtin@cp_set, "\n",
  "Change points detected by custom logistic regression model: ",
  result_custom@cp_set, "\n",
  sep = ""
)
#> Change points detected by built-in logistic regression model: 202
#> Change points detected by custom logistic regression model: 202
result_custom_two_epochs <- fastcpd(
  formula = y ~ . - 1,
  data = data,
  multiple_epochs = function(segment_length) 1,
  epsilon = 1e-5,
  cost = logistic_loss,
  cost_gradient = logistic_loss_gradient,
  cost_hessian = logistic_loss_hessian
)
summary(result_custom_two_epochs)
#> 
#> Call:
#> fastcpd(formula = y ~ . - 1, data = data, cost = logistic_loss, 
#>     cost_gradient = logistic_loss_gradient, cost_hessian = logistic_loss_hessian, 
#>     multiple_epochs = function(segment_length) 1, epsilon = 1e-05)
#> 
#> Change points:
#> 202 
#> 
#> Parameters:
#>    segment 1  segment 2
#> 1 -0.4884741  1.9829469
#> 2 -1.5637585  1.6145734
#> 3 -1.6874162  4.6191745
#> 4 -0.4192355 -0.1568114
#> 5  2.0315690  2.3956803
# }
# \donttest{
if (requireNamespace("mvtnorm", quietly = TRUE)) {
  set.seed(1)
  n <- 480
  p_true <- 6
  p <- 50
  x <- mvtnorm::rmvnorm(n, rep(0, p), diag(p))
  theta_0 <- rbind(
    runif(p_true, -5, -2),
    runif(p_true, -3, 3),
    runif(p_true, 2, 5),
    runif(p_true, -5, 5)
  )
  theta_0 <- cbind(theta_0, matrix(0, ncol = p - p_true, nrow = 4))
  y <- c(
    x[1:80, ] %*% theta_0[1, ] + rnorm(80, 0, 1),
    x[81:200, ] %*% theta_0[2, ] + rnorm(120, 0, 1),
    x[201:320, ] %*% theta_0[3, ] + rnorm(120, 0, 1),
    x[321:n, ] %*% theta_0[4, ] + rnorm(160, 0, 1)
  )
  small_lasso_data <- cbind.data.frame(y, x)
  result_no_vp <- fastcpd.lasso(
    small_lasso_data,
    beta = "BIC",
    cost_adjustment = NULL,
    pruning_coef = 0
  )
  summary(result_no_vp)
  result_20_vp <- fastcpd.lasso(
    small_lasso_data,
    beta = "BIC",
    cost_adjustment = NULL,
    vanilla_percentage = 0.2,
    pruning_coef = 0
  )
  summary(result_20_vp)
}
#> 
#> Call:
#> fastcpd.lasso(data = small_lasso_data, beta = "BIC", cost_adjustment = NULL, 
#>     pruning_coef = 0)
#> 
#> Change points:
#> 79 202 325 
#> 
#> Cost values:
#> 13.72919 225.1447 85.69365 47.51894 
#> 
#> Parameters:
#> 50 x 4 sparse Matrix of class "dgCMatrix"
#>          segment 1     segment 2    segment 3    segment 4
#>  [1,] -2.580352409 -2.2528808428  4.848522259 -2.995657982
#>  [2,] -2.668688731 -2.2387392752  4.183821811  3.100656318
#>  [3,] -4.354248176  1.1867292005  4.878538526  3.711902167
#>  [4,] -4.662566697  0.5043975083  2.724613946 -3.128550136
#>  [5,] -4.820139326  0.3907766479  4.658932261  3.187247302
#>  [6,] -3.860308911 -2.4708867317  3.364992430  4.322270139
#>  [7,]  0.117234534 -0.2337451460  0.037468301 -0.005247904
#>  [8,]  0.117552281 -0.0774905233 -0.171677542 -0.029653585
#>  [9,] -0.188756652  0.4850028633 -0.149864384  0.068109708
#> [10,] -0.074530965 -0.2693267166 -0.099057468  0.007239663
#> [11,] -0.061721013 -0.3929366442  0.057377332 -0.168703949
#> [12,]  0.021874954 -0.4294667473  0.112828002  0.080008740
#> [13,]  0.067777695  0.3501129775  0.065290363  0.081191140
#> [14,] -0.122270920  0.4348764010  0.124026153  0.093730519
#> [15,] -0.014192027 -0.1009201441  0.055676406 -0.022302179
#> [16,] -0.135970691 -0.3228209051 -0.322266525 -0.093584779
#> [17,]  0.007608049 -0.0697777666 -0.173314905  0.003534974
#> [18,] -0.010343806  0.1576855458  0.036483478 -0.169347885
#> [19,]  0.252797503  0.5568680926  0.079891379  0.146785657
#> [20,] -0.044505498 -0.5831313510 -0.136705472  0.008129199
#> [21,]  0.084292730  0.5462807777 -0.201700276 -0.168701992
#> [22,] -0.373913824  0.3760916945  0.061397643  0.032865121
#> [23,]  0.316816197 -0.2806604425 -0.162557542 -0.007149145
#> [24,]  0.008174405  0.2296689003 -0.036906468 -0.133226098
#> [25,] -0.388590727  0.6583142640 -0.162051056  0.017998768
#> [26,] -0.109916673  0.0687814570  0.068395416 -0.060802578
#> [27,] -0.040845604 -0.0868845012 -0.208878583  0.028223323
#> [28,]  0.080696347 -0.1975246890 -0.131401498 -0.070869199
#> [29,] -0.117530646  0.0006050774 -0.034054322 -0.066685344
#> [30,] -0.037935138  0.2909132628  0.054106263  0.030597693
#> [31,]  0.109674339  0.2194249070 -0.124483059  0.076983144
#> [32,]  0.069971569 -0.7939801174  0.065441362  0.040494938
#> [33,]  0.050674415 -0.2109295276  0.002639198  0.069521670
#> [34,]  0.065490413  0.6682523717  0.207420084  0.045711660
#> [35,] -0.045676268 -0.5333420170  0.171445009 -0.109004764
#> [36,]  0.368217440 -0.0662157788 -0.032932140 -0.018136377
#> [37,]  0.037286542 -0.2511805574 -0.363798390  0.117605382
#> [38,]  0.094379823 -0.0542487631  0.164432960 -0.049479163
#> [39,]  0.153890231 -0.0926699875 -0.025621414 -0.121494687
#> [40,] -0.146742186 -0.1892763351 -0.102276477 -0.048861175
#> [41,] -0.274840831 -0.0377048905  0.059701562  0.110453514
#> [42,] -0.008508998 -0.0555288478 -0.052656118  0.006037477
#> [43,]  0.014213371 -0.4104375842 -0.017832766 -0.058840493
#> [44,] -0.080890432  0.0356020878 -0.065307053 -0.003562026
#> [45,]  0.051925330  0.0175766032 -0.016616109 -0.016646624
#> [46,]  0.213031014 -0.0719719280  0.025295461 -0.024318200
#> [47,]  0.175716867  0.1820716574  0.253325834  0.118459579
#> [48,] -0.040155598 -0.1721349330 -0.276607899 -0.058926739
#> [49,]  0.164103981  0.0096690123 -0.109759177 -0.055922804
#> [50,] -0.103164763  0.0179753021  0.228753665 -0.031882016
#> 
#> Call:
#> fastcpd.lasso(data = small_lasso_data, beta = "BIC", cost_adjustment = NULL, 
#>     vanilla_percentage = 0.2, pruning_coef = 0)
#> 
#> Change points:
#> 80 202 320 
#> 
#> Cost values:
#> 15.30678 221.5585 30.87735 48.53091 
#> 
#> Parameters:
#> 50 x 4 sparse Matrix of class "dgCMatrix"
#>          segment 1    segment 2     segment 3    segment 4
#>  [1,] -2.615455985 -2.309283420  4.8750817914 -3.006202485
#>  [2,] -2.678202261 -2.259223507  4.2707257840  3.096081968
#>  [3,] -4.388362187  1.150652841  5.1265068595  3.718141515
#>  [4,] -4.586413963  0.532197076  2.6470978568 -3.130632564
#>  [5,] -4.852159806  0.429244510  4.6494743503  3.175562645
#>  [6,] -3.823465522 -2.436663308  3.3618026781  4.342890940
#>  [7,]  0.116154596 -0.264242115 -0.0727840814 -0.016395368
#>  [8,]  0.160209372 -0.109364822 -0.0084325962 -0.022022294
#>  [9,] -0.254798925  0.467114039  0.1063025022  0.070683538
#> [10,] -0.012955289 -0.319108692  0.0158283310  0.012791767
#> [11,]  0.018525857 -0.400042642  0.0460933309 -0.166042077
#> [12,]  0.015730602 -0.472837555  0.1629334863  0.077820866
#> [13,]  0.020134823  0.345191970 -0.0685630320  0.077436979
#> [14,] -0.019808550  0.469540855  0.0999989714  0.114046507
#> [15,] -0.040890576 -0.151353540  0.0281131032 -0.015966855
#> [16,] -0.213173204 -0.410553236 -0.0786859735 -0.095751983
#> [17,]  0.002688288 -0.026211969 -0.0836390903 -0.010689927
#> [18,] -0.013245030  0.110930058 -0.0648479795 -0.138057246
#> [19,]  0.153187151  0.519156799 -0.1634987155  0.158967697
#> [20,] -0.029584251 -0.562329939  0.0092242673  0.030169316
#> [21,]  0.100615839  0.571051467 -0.1762195594 -0.181503446
#> [22,] -0.313718826  0.415755188  0.1355664556  0.045649997
#> [23,]  0.246017007 -0.316648014 -0.1498815469 -0.025305755
#> [24,]  0.008216104  0.225545561 -0.0316699448 -0.137396624
#> [25,] -0.325075840  0.679118982 -0.0740016267  0.022336212
#> [26,] -0.057726812  0.117150669  0.0487654853 -0.059782090
#> [27,]  0.001998315 -0.033072437  0.0703865789  0.030431412
#> [28,] -0.019721901 -0.226018232  0.0632399484 -0.080706668
#> [29,] -0.100102752  0.034648703  0.0026192106 -0.051542901
#> [30,] -0.002193672  0.273435850  0.1065809370  0.048892128
#> [31,]  0.096046658  0.202400480 -0.1630891704  0.075434045
#> [32,]  0.151150818 -0.756337115  0.0007249262  0.033375101
#> [33,]  0.041988689 -0.224816396  0.0347939295  0.070983107
#> [34,]  0.053090539  0.656857298  0.1766941914  0.032575571
#> [35,]  0.048963552 -0.523861501  0.0843866955 -0.134534869
#> [36,]  0.426376926 -0.057992972  0.0393842237 -0.020673038
#> [37,]  0.090812237 -0.282803290 -0.0408241929  0.102043216
#> [38,]  0.179330109 -0.048614561  0.0909078160 -0.052763017
#> [39,]  0.138776838 -0.045196412 -0.0771155793 -0.120642848
#> [40,] -0.240934510 -0.179273912 -0.2239376855 -0.045775725
#> [41,] -0.343869848 -0.035394612  0.0549633763  0.123910218
#> [42,]  0.010690576 -0.111879602  0.0832100573 -0.005505113
#> [43,]  0.081490861 -0.376798426  0.0415210753 -0.056131976
#> [44,] -0.171008829 -0.007286424  0.0731242211  0.013524984
#> [45,]  0.047604778 -0.015973066  0.0405922084 -0.033628617
#> [46,]  0.127081043 -0.074390064 -0.0201907706 -0.029655833
#> [47,]  0.138821573  0.144223868  0.1718247772  0.119507574
#> [48,] -0.044021065 -0.180536724 -0.2437603101 -0.054349982
#> [49,]  0.198647494  0.008399368  0.0231802136 -0.056092814
#> [50,] -0.092465571  0.015580790  0.0812578961 -0.032491294
# }