Bayesian methods have long attracted the interest of statisticians but have been used only infrequently in most areas of statistical practice. This is due in part to the lack of accessible software. As a recent paper put it, “However, most of this work can be understood and used only by those with a high level of statistical sophistication, a fetish for archaic notation, and a desire for programming and debugging.”
The apparently subjective nature of the required prior distributions has also been an issue.

More recently, computational methods and statistical software improvements have mitigated much of the first concern, and the development and use of “noninformative” objective priors and Bayes factors has addressed the second. Of course, in those cases where there is prior information, perhaps from prior studies, informative priors can be used.

The Bayes factor compares the null and alternative hypotheses, or one model versus another, based on the prior distribution and the data. It measures how much the data shift the prior odds toward the posterior odds. The Bayes factor is a measure of the strength of the evidence and can be used in place of p values to reach a conclusion. A large Bayes factor says that the evidence favors, or strongly favors, the alternative hypothesis over the null, or one model over the other. A BF of 10, for example, says that the data are 10 times more likely under that model than under the comparison model; equivalently, it multiplies the prior odds by 10. Bayes factors can be computed for any pair of models. A Bayes factor larger than 10 is often considered strong or very strong evidence for that model, while a value much smaller than 1 favors the null, but there is no universally accepted scale.
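The odds bookkeeping above can be sketched with a few lines of Python. The numbers here are hypothetical, chosen only to illustrate the arithmetic, not taken from any SPSS output: posterior odds are the Bayes factor times the prior odds, and odds convert to a probability as odds/(1 + odds).

```python
# Bayes factor arithmetic: posterior odds = Bayes factor * prior odds.
# All numbers below are hypothetical, chosen only to show the bookkeeping.

def posterior_probability(bayes_factor, prior_odds=1.0):
    """Posterior probability of the favored model, given prior odds for it."""
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

bf = 10.0                                  # data 10x more likely under this model
print(posterior_probability(bf))           # even prior odds -> about 0.909
print(posterior_probability(bf, 0.25))     # skeptical 1:4 prior odds -> about 0.714
```

Note that the same BF of 10 leads to quite different posterior probabilities depending on the prior odds, which is why the BF itself, not a posterior probability, is reported as the measure of evidence.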

In keeping with this trend, four Bayesian extension commands have been released for SPSS Statistics. They are STATS BAYES TTEST (Analyze > Compare Means > Bayesian T Test), STATS BAYES ANOVA (Analyze > General Linear Model > Bayesian Anova), STATS BAYES REGR (Analyze > Regression > Bayesian Regression), and STATS BAYES CROSSTABS (Analyze > Descriptive Statistics > Bayesian Crosstab). We will demonstrate the Bayesian t test and Bayesian Regression procedures in this post.

The T Test

T tests come in several flavors: one sample, paired two sample, and independent two sample. The traditional t test and its Bayesian equivalent can handle all these cases, but we will look only at the two-sample independent case. Using the creditpromo.sav file shipped with Statistics, we test the mean of the dollars variable grouped by the insert variable. You can read about the traditional analysis of this example in the Case Studies available from the Help menu.
The traditional test output main table looks like this.

It shows a moderately significant difference in dollars spent, with a t value of -2.26 and a significance level of .024.
Now let’s look at the Bayesian test. For this test, all three types of t test are handled in one dialog box. For the independent samples test, it would look like this.

The Group variable values will be determined from the data, so there must be only two distinct, nonmissing values. In Options, we have specified three different values for the prior scale parameter representing different standardized effect sizes.
Here is the table of Bayes factors.

This shows that for the medium effect size prior parameter (.7071), which is the default, there is very slight evidence in favor of the alternative hypothesis of a nonzero difference, while with the other values there is no such evidence and even a little evidence in favor of the null. The posterior effect size (table not shown), which is the standardized mean difference, is between -.361 and -.017 using the first prior value. If we calculate this from the traditional t test output, we get -.203.
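The standardized effect size can be recovered from the t statistic and the group sizes, since for an independent-samples test d = t·√(1/n₁ + 1/n₂). A minimal sketch in Python: the t value comes from the output above, but the equal group sizes of 250 are an assumption for illustration, not shown in the post.

```python
import math

# Standardized mean difference (Cohen's d) recovered from an
# independent-samples t statistic: d = t * sqrt(1/n1 + 1/n2).
# Only t = -2.26 comes from the output; the group sizes are assumed.
t = -2.26
n1 = n2 = 250
d = t * math.sqrt(1.0 / n1 + 1.0 / n2)
print(round(d, 3))   # close to the -.203 reported above
```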
So, using the traditional t test, we reject the null: there is a difference, but the effect size is small. Using the Bayesian test, we report that the evidence in favor of the alternative is very weak, merely anecdotal, along with a range of effect sizes that includes the value from the traditional test.

The Regression Example

For this example we use the employee data.sav file shipped with Statistics and salary as the dependent variable. Change the measurement level of the educ variable to scale. Using the traditional linear regression procedure with educ and jobtime as the predictors, we get this output.

If we use the stepwise method, we get this.

Using Bayesian regression we have a choice of calculating the Bayes factor for all possible regressions, or for various subsets. Since we only have two predictors here, we choose all possible, but with many regressors this might be too many models. The dialog box and the Bayes factor output table look like this.
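“All possible regressions” means one model per nonempty subset of the predictors, so the number of models grows as 2^p − 1. A small Python sketch of the enumeration (the predictor names match this example; the enumeration itself is generic):

```python
from itertools import combinations

def all_subsets(predictors):
    """Every nonempty subset of the predictors: 2**p - 1 candidate models."""
    subsets = []
    for k in range(1, len(predictors) + 1):
        subsets.extend(combinations(predictors, k))
    return subsets

print(all_subsets(["educ", "jobtime"]))
# [('educ',), ('jobtime',), ('educ', 'jobtime')]
print(len(all_subsets([f"x{i}" for i in range(15)])))  # 32767 models for 15 predictors
```

The exponential growth is why the procedure also offers subset strategies instead of the full enumeration when there are many regressors.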

The model using only the educ variable is very strongly favored, which is consistent with the stepwise result, and it has a posterior model probability of .837. The procedure also allows you to compare any model to any other, i.e., compute pairwise Bayes factors. We can also choose a single model for which the posterior distribution of the coefficients is computed. Choosing model 2, we get
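With equal prior probability on every candidate model, a posterior model probability is just each model's Bayes factor (against a common baseline) normalized to sum to 1. A sketch with hypothetical Bayes factor values; these are illustrative and do not reproduce the .837 from the example output:

```python
# Posterior model probabilities from Bayes factors, assuming equal prior
# probability for each model. The BF values below are hypothetical.
bfs = {"educ": 41.0, "educ+jobtime": 7.5, "null": 1.0, "jobtime": 0.5}

total = sum(bfs.values())
posterior = {model: bf / total for model, bf in bfs.items()}
for model, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{model:15s} {p:.3f}")
```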

The estimated education effect is quite close to the traditional regression estimate of 3895.067, and the standard error is also quite close.

Summary

In summary, we have seen two of the four new Bayesian procedures and compared them to the output from the corresponding traditional procedures. There are additional features of these procedures that you can explore for yourself. These procedures help you to do statistical analysis without relying on traditional p value-based procedures and can be especially helpful for model selection. The dialog help for these extensions has some references to Bayesian methods that may help you get started.

You can download and install these procedures using Statistics version 22 or later from the Utilities menu. For earlier versions, you need to download them from the Extension Commands collection (https://www.ibm.com/developerworks/community/files/app?lang=en#/collection/23c2eac7-e524-4393-a4b9-0d224a2a0eda) and install from Utilities. At this writing, these procedures are not yet available on the GitHub Downloads feature of the new Predictive Analytics community, but they will be posted there. These procedures all require the R Essentials. When you install any of these procedures, the R BayesFactor package by Richard D. Morey and Jeffrey N. Rouder that is used by the procedures is also installed.

10 comments on "New Bayesian Extension Commands for SPSS Statistics"

1. Hello, I would like to ask whether Dynamic Bayesian Networks are also included in these new Bayesian extension commands for SPSS Statistics. I would appreciate it if you could provide that information. Thank you.

2. I have been trying to install the Bayesian extensions (ANOVA, t test, etc.) and I get the following error (64-bit version, after installing BayesFactor with RStudio, SPSS v22):

************

Type BFManual() to open the manual.
************
Error in data.frame(as.BFprobability(newPriorOdds(res) * res)) :
  could not find function "as.BFprobability"

What should I do? I cannot find a solution on the internet.

3. Hi, I would like to know if the BAYES ANOVA contained in the new Bayesian extension commands for SPSS Statistics can be used to analyse data obtained via a repeated measures design. Thanks for your response.

4. Hello, JONPECK,
I got your email address from the webpage below. I am interested in using Bayesian analysis and I am using SPSS version 23. However, I could not find the Bayesian t test or regression in my SPSS software. Should I first download the Bayesian extension from somewhere?
Huang, Yue

5. liz hernandez May 22, 2017

Hi, I need the Bayesian regression. Is it included in SPSS? Where can I download it?

6. Bayesian analysis is now native in SPSS Statistics version 25. See more info here: https://developer.ibm.com/predictiveanalytics/2017/07/18/spss-25-subscription-summary/

7. I’ve downloaded the extension for a Bayesian t test, and when I try to run the t test I keep getting an error message stating that I don’t have the R BayesFactor package.

8. The BayesFactor R package should have been installed when you installed the extension, but if there was a problem with Internet access or the site, that may have failed. Try running the following code from the syntax window.

begin program r.
install.packages("BayesFactor")
end program.

It will pop up a list of CRAN sites. Just pick one somewhere near you.

If that doesn’t solve the problem, please post the messages from this code along with the Statistics version you are using and the platform.

9. Andrew Waters October 08, 2018

I am seeking to use the Bayes procedures in SPSS version 24 for teaching purposes in a graduate psychology class (the university does not yet have SPSS version 25), but am experiencing difficulty loading BayesFactor into R 3.2.5. I can install BayesFactor in R 3.2.5, but when I try to load it (using library(BayesFactor)) I get the error below. There appears to be an objection to a file named stringi. I wonder whether you have seen this error before, or if there is a workaround? Needless to say, because BayesFactor does not load, the extension does not function correctly in SPSS. An internet search did not lead to a solution.

This is frustrating as I am looking forward to playing with these procedures

Error in inDL(x, as.logical(local), as.logical(now), …) :
unable to load shared object ‘C:/Users/andre/Documents/R/win-library/3.2/stringi/libs/x64/stringi.dll’:
LoadLibrary failure: The specified procedure could not be found.

Error: package or namespace load failed for ‘BayesFactor’

10. Andrew Waters October 08, 2018

I am getting the error below when trying to load BayesFactor in R 3.2.5 (I am able to install BayesFactor in R 3.2.5). I wonder if you have seen this error before? There seems to be an error with stringi.