Tips & Demos
Latest tips, video demos, white papers and more, available for all of our products. Learn how to use NumXL and our other products more efficiently and accurately, and discover how to use them to their full potential.
In this document, we start with historical monthly residential electric power demand between Jan 2003 and Dec 2010. Next, we demonstrate the minimal steps needed to process the time series, fit a special seasonal ARIMA model - the Airline model - to the data, and construct a statistical forecast for the following 24 months using only NumXL.
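For readers who want to see the same idea outside Excel, here is a minimal sketch in Python with statsmodels, assuming a hypothetical monthly demand file; the Airline model is the seasonal ARIMA(0,1,1)(0,1,1)12, and the tutorial itself performs these steps with NumXL's worksheet functions.

```python
# Illustrative sketch (not NumXL): fit an "Airline" model, i.e. a seasonal
# ARIMA(0,1,1)x(0,1,1)12, and forecast 24 months ahead with statsmodels.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical file of monthly demand (Jan 2003 - Dec 2010).
demand = pd.read_csv("power_demand.csv", index_col=0, parse_dates=True).squeeze()

model = SARIMAX(demand, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)

forecast = result.get_forecast(steps=24)     # next 24 months
print(forecast.predicted_mean)               # point forecast
print(forecast.conf_int(alpha=0.05))         # 95% forecast interval
```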
This is the fourth issue in our ARMA Unplugged modeling series. In this issue, we start by examining the nature of the input time series and qualifying them as flow- or stock-type series. This distinction has a bearing on their exposure to, or influence by, calendar-based events.
This is the second entry in our ongoing series on volatility modeling. In this issue, we start by defining the various terms related to an asset’s return over time (e.g. the holding period) and explain in detail the multi-period forecast of returns and volatility. Next, we discuss the scaling issue with volatilities computed over different holding periods and establish a common scaling or unit base. Finally, we define the different types of volatility terms (e.g. local volatility, term structure, long-run and forward volatility) that we will come across in future issues of our volatility modeling series.
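The scaling issue mentioned above usually reduces to the square-root-of-time rule. Below is a minimal sketch, with hypothetical volatility figures, of how volatilities computed over different holding periods can be placed on a common annual basis (assuming independent returns):

```python
# Illustrative sketch: the square-root-of-time rule for putting volatilities
# computed over different holding periods on a common (annualized) scale.
import numpy as np

def annualize_vol(vol_per_period: float, periods_per_year: float) -> float:
    """Scale a per-period volatility to an annual basis (i.i.d. returns assumed)."""
    return vol_per_period * np.sqrt(periods_per_year)

daily_vol   = 0.012   # hypothetical 1-day return volatility
monthly_vol = 0.055   # hypothetical 1-month return volatility

print(annualize_vol(daily_vol, 252))   # ~0.19 annualized from daily data
print(annualize_vol(monthly_vol, 12))  # ~0.19 annualized from monthly data
```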
In this issue, we will show how to make a backward forecast using only NumXL functions in Excel. We will also discuss the relationship between a regular time series model and an implied backward/reversed time series model.
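As a rough illustration of the idea (not the NumXL implementation itself), one can fit an ordinary forward model to the time-reversed series and read its forecast backward; the sketch below uses Python's statsmodels with a hypothetical input file:

```python
# Illustrative sketch: a "backcast" built by reversing the series, fitting an
# ordinary forward model, forecasting, and reading the result as the values
# preceding the original sample.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.loadtxt("series.csv")            # hypothetical input series
y_reversed = y[::-1]                    # time-reversed copy

model = ARIMA(y_reversed, order=(1, 0, 1)).fit()
backcast = model.forecast(steps=12)     # 12 steps "before" the first observation
backcast = backcast[::-1]               # flip back into calendar order
print(backcast)
```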
This is the first entry in what will become an ongoing series on volatility modeling. In this paper, we will start with the definition and general dynamics of volatility in financial time series. Next, we will use historical data to develop a few methods to forecast volatility. These methods will pave the way for a more advanced treatment in future issues.
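As a taste of the simplest historical approaches, the sketch below (Python/pandas, with a hypothetical return file) computes a rolling standard deviation and an EWMA volatility estimate; the window length and decay factor are illustrative assumptions:

```python
# Illustrative sketch: two simple historical volatility estimates for a
# return series - a rolling standard deviation and an EWMA (RiskMetrics-style).
import pandas as pd

r = pd.read_csv("returns.csv", index_col=0, parse_dates=True).squeeze()  # hypothetical returns

rolling_vol = r.rolling(window=21).std()          # ~1 trading month window
ewma_vol = r.ewm(alpha=1 - 0.94).std()            # lambda = 0.94 decay factor

print(rolling_vol.iloc[-1], ewma_vol.iloc[-1])    # latest one-step-ahead estimates
```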
In this issue, we will tackle probability distribution inference for a random variable. We'll start with the non-parametric distribution functions: (1) the empirical (cumulative) distribution function and (2) the histogram. In a later issue, we'll also go over the kernel density estimate (KDE).
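Both non-parametric estimates can be computed in a few lines; the sketch below uses NumPy on a simulated sample purely for illustration:

```python
# Illustrative sketch: the empirical CDF and the histogram (as a density
# estimate) for a sample, computed with NumPy.
import numpy as np

x = np.random.default_rng(0).normal(size=500)    # hypothetical sample

# Empirical (cumulative) distribution function: F(x_(i)) = i / n
xs = np.sort(x)
ecdf = np.arange(1, len(xs) + 1) / len(xs)

# Histogram normalized to integrate to one
density, bin_edges = np.histogram(x, bins=20, density=True)

print(ecdf[:5], density[:5])
```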
This is the third entry in our ongoing series on volatility modeling. In this issue, we’ll take the prior discussion further and develop an understanding of autoregressive conditional heteroskedasticity (ARCH) volatility modeling.
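To make the model concrete, the sketch below evaluates an ARCH(1) conditional variance recursion with hypothetical parameter values (in practice the parameters are estimated, e.g. by maximum likelihood):

```python
# Illustrative sketch of an ARCH(1) conditional variance recursion:
# sigma_t^2 = omega + alpha * r_{t-1}^2, with hypothetical parameter values.
import numpy as np

def arch1_variance(returns, omega=1e-5, alpha=0.3):
    sigma2 = np.empty_like(returns, dtype=float)
    sigma2[0] = returns.var()                   # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2
    return sigma2

r = np.random.default_rng(1).normal(scale=0.01, size=250)   # hypothetical returns
print(arch1_variance(r)[-5:])                                # last few conditional variances
```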
This is the third issue in our ARMA Unplugged modeling series. In this issue, we introduce the common patterns often found in real time series data and discuss a few techniques to identify and model those patterns, paving the way for a more elaborate discussion of decomposition and seasonal adjustment methodologies in future issues.
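One common way to surface such patterns is a classical decomposition into trend, seasonal, and irregular components; the sketch below shows the idea in Python (statsmodels) with a hypothetical monthly data file:

```python
# Illustrative sketch: a classical additive decomposition of a monthly series
# into trend, seasonal, and irregular components.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

y = pd.read_csv("monthly.csv", index_col=0, parse_dates=True).squeeze()  # hypothetical series
parts = seasonal_decompose(y, model="additive", period=12)

print(parts.trend.tail())        # smoothed trend
print(parts.seasonal.head(12))   # one full seasonal cycle
print(parts.resid.tail())        # irregular component
```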
In this tutorial, we will use sample data gathered during a clinical trial of a new chemical pesticide on tobacco budworms. Our objective here is to model (and forecast) the effectiveness of the new chemical pesticide at different dosages, and to explain, to some extent, any variation in effectiveness based on the gender of the budworm.
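A typical way to frame such a dose-response problem is a binomial GLM with a logit link; the sketch below illustrates the setup in Python (statsmodels) with hypothetical counts, not the actual trial data:

```python
# Illustrative sketch: a binomial GLM (logit link) relating kill rate to
# log2(dose) and gender. All counts below are hypothetical.
import numpy as np
import statsmodels.api as sm

dose   = np.array([1, 2, 4, 8, 16, 32] * 2, dtype=float)
male   = np.array([1] * 6 + [0] * 6)               # 1 = male, 0 = female
killed = np.array([1, 4, 9, 13, 18, 20,            # hypothetical kill counts
                   0, 2, 6, 10, 12, 16])
n      = np.full(12, 20)                           # 20 budworms per group

X = sm.add_constant(np.column_stack([np.log2(dose), male]))
endog = np.column_stack([killed, n - killed])      # successes / failures

fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```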
In this paper, we will use NumXL to carry out three different normality tests - the Jarque-Bera, the Shapiro-Wilk and the Anderson-Darling - to examine the sensitivity of each test in detecting deviation from normality for different sample sizes. For sample data, we will generate 5 series of random numbers, each from a different distribution. The objective of this exercise is to demonstrate the strengths of each test, and to provide a tutorial for using the NumXL Normality Test function.
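For readers working outside Excel, all three tests are also available in SciPy; the sketch below applies them to a simulated heavy-tailed sample purely to illustrate the mechanics:

```python
# Illustrative sketch: Jarque-Bera, Shapiro-Wilk and Anderson-Darling tests
# applied to a hypothetical heavy-tailed sample with SciPy.
import numpy as np
from scipy import stats

x = np.random.default_rng(2).standard_t(df=4, size=200)   # heavy-tailed sample

jb_stat, jb_p = stats.jarque_bera(x)
sw_stat, sw_p = stats.shapiro(x)
ad = stats.anderson(x, dist="norm")                        # compare stat to critical values

print(f"Jarque-Bera p={jb_p:.4f}  Shapiro-Wilk p={sw_p:.4f}  "
      f"Anderson-Darling stat={ad.statistic:.3f} (5% crit={ad.critical_values[2]:.3f})")
```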
In this paper, we will use NumXL to explain a very common - and sometimes mystifying - tool in econometric and time series analysis: the Autoregressive Conditional Heteroskedasticity test, or ARCH test for short. For sample data, we will use the IBM stock price data set from May 17th, 1961 to November 2nd, 1962. The objective of this exercise is to demonstrate the ARCH test's utility as a tool for examining the time dynamics of the second moments, and to provide a tutorial for using the NumXL ARCH test function.
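As a non-Excel counterpart, Engle's ARCH LM test is also available in statsmodels; the sketch below applies it to log returns computed from a hypothetical price file:

```python
# Illustrative sketch: Engle's ARCH LM test applied to a log-return series
# computed from a hypothetical file of closing prices.
import numpy as np
from statsmodels.stats.diagnostic import het_arch

prices = np.loadtxt("ibm_close.csv")       # hypothetical single-column price file
r = np.diff(np.log(prices))                # log returns

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(r)
print(f"LM statistic = {lm_stat:.2f}, p-value = {lm_pvalue:.4f}")
```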
In this paper, we will use NumXL to explain several different goodness-of-fit functions. For sample data, we will use the time series of the monthly average of the hourly ozone level for the Los Angeles downtown area, from January 1955 to December 1972. We will start with the log-likelihood function, then expand our focus to cover other derivative measures - namely Akaike's Information Criterion (AIC) and the Bayesian/Schwarz Information Criterion (BIC/SIC/SBC). The objective of this exercise is to provide a tutorial for using different goodness-of-fit functions to find a model that best fits your data.
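The comparison itself is mechanical once the candidate models are fitted; the sketch below (Python/statsmodels, with a hypothetical data file) fits a few ARIMA orders and reports their log-likelihood, AIC, and BIC, where lower AIC/BIC indicates a better trade-off between fit and model size:

```python
# Illustrative sketch: comparing candidate ARIMA orders by log-likelihood,
# AIC, and BIC for a hypothetical monthly series.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

y = pd.read_csv("ozone.csv", index_col=0, parse_dates=True).squeeze()  # hypothetical file

for order in [(1, 0, 0), (2, 0, 0), (1, 0, 1)]:
    res = ARIMA(y, order=order).fit()
    print(order, f"logL={res.llf:.1f}  AIC={res.aic:.1f}  BIC={res.bic:.1f}")
```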
This issue is brimming with information about a key tool in our time series toolbox: correlogram analysis. We first take you through the definitions of the autocorrelation and partial autocorrelation functions. Next, we derive and highlight the common patterns in the ACF and PACF plots generated by AR, MA and ARMA type processes. Finally, we draw a number of observations and develop quick intuitions to further help us identify the candidate model(s) and their order using only the ACF/PACF plots.
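For a quick hands-on counterpart outside Excel, the sketch below computes and plots the ACF and PACF with statsmodels for a hypothetical series:

```python
# Illustrative sketch: numeric ACF/PACF values and the corresponding
# correlogram plots for a hypothetical series.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

y = np.loadtxt("series.csv")          # hypothetical input series

print(acf(y, nlags=12))               # autocorrelations
print(pacf(y, nlags=12))              # partial autocorrelations

plot_acf(y, lags=24)                  # correlogram with confidence bands
plot_pacf(y, lags=24)
plt.show()
```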
This issue tackles the time series sampling assumptions: equal spacing and completeness. First, we consider a time series with missing values and discuss how to represent them in Excel and how to use them in our analysis with the aid of NumXL's processing. Next, we look at unequally spaced time series: how they come into existence, how they relate to the missing-values scenario, and how best to deal with them.
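A convenient way to expose the missing observations is to force the sample onto a regular grid, so that the gaps show up explicitly as missing values; the sketch below shows the idea with pandas and a hypothetical data file:

```python
# Illustrative sketch: putting an unequally spaced sample onto a regular
# monthly grid so that the gaps appear explicitly as missing values (NaN).
import pandas as pd

obs = pd.read_csv("observations.csv", index_col=0, parse_dates=True).squeeze()  # hypothetical

regular = obs.resample("MS").mean()      # month-start grid; absent months become NaN
print(regular[regular.isna()])           # the dates with no data
```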
This week's tutorial explores "stationarity," one of the most fundamental assumptions in time series analysis. First, we'll define the stationary process and address the minimum requirements for a time series analysis. Then, we'll use different statistical tests to examine our stationarity assumptions and, finally, delve into some of the intuitions underpinning our analysis. For sample data, we use the closing prices for IBM's stock between January 2, 2012 and April 3, 2012.
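Two commonly used statistical checks are the ADF and KPSS tests; the sketch below (Python/statsmodels, hypothetical price file) applies both to log prices and to log returns, illustrating how their complementary null hypotheses are read together:

```python
# Illustrative sketch: Augmented Dickey-Fuller (null: unit root) and KPSS
# (null: stationary) tests applied to log prices and log returns.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

prices = np.loadtxt("ibm_close.csv")          # hypothetical closing prices
log_prices = np.log(prices)
log_returns = np.diff(log_prices)

for name, x in [("log prices", log_prices), ("log returns", log_returns)]:
    adf_p = adfuller(x)[1]
    kpss_p = kpss(x, regression="c", nlags="auto")[1]
    print(f"{name}: ADF p={adf_p:.4f}  KPSS p={kpss_p:.4f}")
```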
In this paper, we explore one of the fundamental assumptions of data preparation in time series analysis: "homogeneity," or the assumption that a time series sample is drawn from a stable/homogeneous process. We’ll start by defining the homogeneous stochastic process and stating the minimum requirements for our time series analysis. Then we demonstrate how to examine the sample data, draw a few observations, and highlight some underlying intuitions behind them.
In this week's issue, the fifth entry in our series of data preparation tutorials, we discuss outliers and "black swan" events and their influence on financial markets. "Black swans" are rare, unanticipated, yet very influential events that are generally difficult or impossible to predict. In this tutorial, we discuss the problem of outliers, how to detect them, and what we can do with them.
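One simple detection rule, among many, is a robust z-score based on the median and MAD; the sketch below flags candidate outliers in a hypothetical return series:

```python
# Illustrative sketch: flagging candidate outliers with a robust z-score
# (median / MAD), one simple detection rule among many.
import numpy as np

r = np.loadtxt("returns.csv")                    # hypothetical return series

median = np.median(r)
mad = np.median(np.abs(r - median))
robust_z = 0.6745 * (r - median) / mad           # approx. N(0,1) under normality

outliers = np.where(np.abs(robust_z) > 3.5)[0]   # conventional cutoff
print(f"{len(outliers)} candidate outliers at positions {outliers}")
```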
In this week's issue, the first in our series of "Unplugged" tutorials, we dig deep into the ARMA model, one of the most important modeling methods in time series analysis. We'll start by defining the ARMA process, laying out its inputs, outputs, constraints and parameters. This tutorial should give you a rich understanding of the assumptions behind the ARMA model, while helping you develop an understanding of the basic intuitions behind the process.
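To make the inputs and parameters concrete, the sketch below simulates an ARMA(1,1) process with assumed coefficients and re-estimates it with statsmodels:

```python
# Illustrative sketch: simulate y_t = 0.6*y_{t-1} + e_t + 0.4*e_{t-1}
# (an ARMA(1,1) process) and re-estimate its parameters.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

ar = np.array([1, -0.6])   # AR polynomial (sign convention: 1 - 0.6L)
ma = np.array([1, 0.4])    # MA polynomial

y = arma_generate_sample(ar, ma, nsample=500)
fit = ARIMA(y, order=(1, 0, 1)).fit()
print(fit.params)          # estimated constant, AR, MA, and variance terms
```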