Model Comparison
----------------
Data variable: AUTOADJ/CPI
Number of observations = 314
Start index = 1/70            
Sampling interval = 1.0 month(s)
Length of seasonality = 12
Number of periods withheld for validation: 26

Models
------
(A) Constant mean = 1.40164 + 1 regressor
(B) Random walk
(C) ARIMA(0,1,0) with constant
(D) Simple exponential smoothing with alpha = 0.4753
(E) Brown's linear exp. smoothing with alpha = 0.2095
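
   All five candidates are standard univariate forecasters.  The sketch
below shows each one as a simple one-step-ahead forecaster in Python,
assuming y is a NumPy array holding the AUTOADJ/CPI series; the extra
regressor in model A is left out, and the smoothing constants are copied
from the report rather than re-estimated, so this illustrates the model
forms rather than reproducing the exact fits below.

import numpy as np

def constant_mean_forecasts(y):
    # Model A: every point forecast by the sample mean (the report's
    # additional regressor is omitted from this sketch).
    return np.full(len(y), y.mean())

def random_walk_forecasts(y):
    # Model B: forecast y[t] by y[t-1]; the first point has no predecessor.
    f = np.empty(len(y))
    f[0] = y[0]
    f[1:] = y[:-1]
    return f

def random_walk_drift_forecasts(y):
    # Model C: ARIMA(0,1,0) with constant, i.e. a random walk with drift
    # equal to the mean of the first differences.
    drift = np.diff(y).mean()
    f = np.empty(len(y))
    f[0] = y[0]
    f[1:] = y[:-1] + drift
    return f

def ses_forecasts(y, alpha=0.4753):
    # Model D: simple exponential smoothing, one-step-ahead forecasts.
    f = np.empty(len(y))
    level = y[0]
    f[0] = level
    for t in range(1, len(y)):
        f[t] = level                           # forecast uses data through t-1
        level = alpha * y[t] + (1 - alpha) * level
    return f

def brown_forecasts(y, alpha=0.2095):
    # Model E: Brown's linear (double) exponential smoothing.
    f = np.empty(len(y))
    s1 = s2 = y[0]
    f[0] = y[0]
    for t in range(1, len(y)):
        a = 2.0 * s1 - s2                      # current level estimate
        b = alpha / (1.0 - alpha) * (s1 - s2)  # current trend estimate
        f[t] = a + b                           # one-step-ahead forecast
        s1 = alpha * y[t] + (1.0 - alpha) * s1
        s2 = alpha * s1 + (1.0 - alpha) * s2
    return f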

Estimation Period
Model  MSE          MAE          MAPE         ME           MPE
------------------------------------------------------------------------
(A)    4.10166      1.64644      8.3251       -1.43836E-14 -1.041       
(B)    1.88523      0.983196     4.77414      0.0503052    0.0388061    
(C)    1.8893       0.980477     4.76469      -0.00409051  -0.232459    
(D)    1.51792      0.91963      4.50505      0.0997052    0.221245     
(E)    1.57124      0.947813     4.69309      0.0256061    -0.0466592   

Model  RMSE         RUNS  RUNM  AUTO  MEAN  VAR
-----------------------------------------------
(A)    2.02526       **    ***   ***   OK   OK   
(B)    1.37304       OK    ***   ***   OK   ***  
(C)    1.37452       OK    ***   ***   OK   ***  
(D)    1.23204       OK    OK    **    OK   ***  
(E)    1.25349       **    **    **    OK   **   

Validation Period
Model  MSE          MAE          MAPE         ME           MPE
------------------------------------------------------------------------
(A)    10.6607      3.05438      9.9606       3.05438      9.9606       
(B)    1.51437      1.12022      3.7068       0.198035     0.542558     
(C)    1.49579      1.11603      3.69692      0.14364      0.36204      
(D)    1.1757       0.90882      2.97726      0.389334     1.18069      
(E)    1.06318      0.840403     2.78052      -0.0247549   -0.205443    
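
   The error statistics in both tables follow standard definitions (ME
and MPE keep the sign of the errors; the others do not).  A sketch of
how they could be recomputed from a series and its one-step-ahead
forecasts is below, assuming the final 26 observations form the
validation period; note that in the report the model parameters are
estimated from the estimation period only, which this simple split does
not attempt to mirror exactly.

import numpy as np

def error_stats(actual, forecast):
    # The summary statistics reported above (plus RMSE) for one model.
    e = actual - forecast                  # one-step-ahead forecast errors
    pct = 100.0 * e / actual               # percentage errors
    return {"MSE":  np.mean(e ** 2),
            "RMSE": np.sqrt(np.mean(e ** 2)),
            "MAE":  np.mean(np.abs(e)),
            "MAPE": np.mean(np.abs(pct)),
            "ME":   np.mean(e),
            "MPE":  np.mean(pct)}

def estimation_and_validation(y, f, n_hold=26):
    # Score the estimation period and the withheld validation period
    # (the last n_hold observations) separately.
    return (error_stats(y[:-n_hold], f[:-n_hold]),
            error_stats(y[-n_hold:], f[-n_hold:]))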


Key:
RMSE = Root Mean Squared Error
RUNS = Test for excessive runs up and down
RUNM = Test for excessive runs above and below median
AUTO = Box-Pierce test for excessive autocorrelation
MEAN = Test for difference in mean 1st half to 2nd half
VAR = Test for difference in variance 1st half to 2nd half
OK = not significant (p >= 0.10)
* = marginally significant (0.05 < p <= 0.10)
** = significant (0.01 < p <= 0.05)
*** = highly significant (p <= 0.01)



The StatAdvisor
---------------
   This table compares the results of five different forecasting
models.  You can change any of the models by pressing the alternate
mouse button and selecting Analysis Options.  Looking at the error
statistics, model D has the smallest mean squared error (MSE), mean
absolute error (MAE), and mean absolute percentage error (MAPE)
during the estimation period, while model E has the smallest MSE, MAE,
and MAPE during the validation period.  You can use these results to
select the most appropriate model for your needs.
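
   For instance, ranking the candidates by their validation-period MAPE
from the table above singles out model E; a trivial check in Python
(values copied from the table):

val_mape = {"A": 9.9606, "B": 3.7068, "C": 3.69692, "D": 2.97726, "E": 2.78052}
best = min(val_mape, key=val_mape.get)
print(best, val_mape[best])   # -> E 2.78052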

   The table also summarizes the results of five tests run on the
residuals to determine whether each model is adequate for the data. 
An OK means that the model passes the test.  One * means that it fails
at the 90% confidence level.  Two *'s mean that it fails at the 95%
confidence level.  Three *'s mean that it fails at the 99% confidence
level.  Note that the currently selected model, model C, passes 2
tests.  Since one or more tests are statistically significant at the
95% or higher confidence level, you should seriously consider
selecting another model.  
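
   Most of the residual checks keyed above can be approximated with
standard library routines.  A rough sketch follows, assuming resid is
the array of one-step-ahead residuals from the selected model and a
recent statsmodels/scipy are available; the runs up-and-down test
(RUNS) has no ready-made equivalent here and is omitted, and lag
choices and small-sample corrections may differ from the report's.

import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.sandbox.stats.runs import runstest_1samp

def residual_checks(resid, lags=24):
    resid = np.asarray(resid)
    out = {}
    # RUNM: runs above and below the median.
    _, out["RUNM_p"] = runstest_1samp(resid, cutoff="median", correction=True)
    # AUTO: Box-Pierce test for autocorrelation up to `lags` lags.
    bp = acorr_ljungbox(resid, lags=[lags], boxpierce=True)
    out["AUTO_p"] = float(bp["bp_pvalue"].iloc[0])
    # MEAN / VAR: compare the first and second halves of the residuals.
    first, second = resid[: len(resid) // 2], resid[len(resid) // 2 :]
    out["MEAN_p"] = stats.ttest_ind(first, second, equal_var=False).pvalue
    out["VAR_p"] = stats.levene(first, second).pvalue
    return out   # flag p-values below 0.10 / 0.05 / 0.01 as *, **, ***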
