Posts

Kurtosis

 Kurtosis is a statistical concept: its value indicates whether a distribution is shaped like the normal curve or departs from it. Compared to the normal curve, kurtotic distributions of data appear either peaked in the middle or flat. In a normal distribution, kurtosis = 0. A peaked distribution has a positive value and is called leptokurtic (think leap). A flatter distribution has a negative value and is called platykurtic (think of the animal, the platypus). There are different formulas for calculating kurtosis. In Excel, the function for kurtosis can be found under Formulas > More Functions; in the drop-down list, choose KURT. Please check out my website www.suttong.com and see my books on AMAZON or GOOGLE STORE. Also, consider connecting with me on FACEBOOK (Geoff W. Sutton) and TWITTER (@Geoff.W.Sutton). You can read many published articles at no charge on Academia (Geoff W Sutton) and ResearchGate.

Santa Clara Strength of Religious Faith Questionnaire

Scale name: Santa Clara Strength of Religious Faith Questionnaire (SCSRF, SCSRFQ); the short form is known as the "Abbreviated" form (ASCSRFQ)
Scale overview: A short, easy-to-score measure of the strength of a person's religious or spiritual faith. It is available in 10-item and 5-item Likert-type scale formats.
Author(s): Thomas G. Plante and Marcus T. Boccaccini introduced the 10-item version in 1997.
Items: 10, and 5 for the short form
Response type: 4-point self-report rating scale
Subscales: None
Sample items:
2. I pray daily.
10. My faith impacts many of my decisions.
The short form uses the following items: 2, 4, 5, 8 (Plante et al., 2002).
Statistics: In the 1997 article, psychology students M = 26.39, SD = 8.55, R = 33, Mdn = 26. A summary of previous studies using the 10-item version (Plante, 2010) found M = 26-33 in college samples with SD = 6 to 8. There were no significant differences between the means of men (M = 17.48, SD = 2.52) and

Coefficient Alpha or Cronbach's Alpha

  Coefficient Alpha (also called "alpha") is a statistical value indicating the degree of internal consistency among the items in a multiple-item scale, such as survey items or Likert-type scales. Internal consistency is one measure of reliability for scores from scales, measures, and survey items. The alpha statistic was developed by Lee Cronbach in 1951; thus it is also called Cronbach's alpha. In research reports, you may just see the Greek lowercase letter alpha, α. The procedure to calculate alpha can be found in SPSS under Analyze > Scale > Reliability Analysis. For research purposes, scales with alpha levels at or above .70 are acceptable. The best scales have alpha values of .90 or higher. The alpha method works best for evaluating unidimensional measures. If a set of items taps two or more dimensions, the alpha value will be lower; so, when alpha values are low, consider which item or items do not support the primary dimension. Cite this Post: Sutton, G.W.
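For readers without SPSS, alpha can be computed directly from its standard formula, α = k/(k−1) × (1 − Σ item variances / variance of total scores). This is a minimal sketch with made-up ratings (each inner list is one item's scores across respondents), not SPSS output:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)      # sum of item variances
    totals = [sum(row) for row in zip(*items)]            # each respondent's total
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical 3-item scale, 4 respondents, items rated 1-4
consistent = [[1, 2, 3, 4],
              [1, 2, 3, 4],
              [1, 2, 3, 4]]   # items move together: maximal internal consistency

print(round(cronbach_alpha(consistent), 2))  # 1.0
```

When every item ranks respondents identically, alpha reaches its ceiling of 1.0; real scales with alphas of .70 to .90 fall between that ceiling and the low values produced by items that do not hang together.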

Normal Distribution or Bell Curve

  The bell curve is also known as the normal curve or normal distribution. The bell curve has mathematical properties that allow researchers to draw conclusions about where scores (or data) are located relative to other scores (or data). Click hyperlinks for more details. The three measures of central tendency (mode, median, mean) fall at the same middle point in a normal curve. The number representing the middle of the bell curve divides the distribution in half. On the x-axis of the normal distribution, the mean is at zero, with standard deviation units marked above and below the mean. The height of the curve indicates the percentage of scores in that area. You can see that a large percentage of the scores lie between +1 and -1 standard deviations: about 68% of scores fall within one standard deviation of the mean. Look at the illustration below to see that about 34% of the scores fall within one standard deviation above the mean and another 34% within one standard deviation below it.
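The 68% figure can be checked with Python's standard library: `statistics.NormalDist` gives the cumulative proportion of scores below any point on the standard normal curve, so subtracting the area below -1 SD from the area below +1 SD recovers the area between them.

```python
from statistics import NormalDist

z = NormalDist(mu=0, sigma=1)          # standard normal curve: mean 0, SD 1

below_mean = z.cdf(0)                  # half of all scores fall below the mean
within_one_sd = z.cdf(1) - z.cdf(-1)   # area between -1 and +1 standard deviations

print(round(below_mean, 2))      # 0.5
print(round(within_one_sd, 4))   # 0.6827
```

The result, about .6827, is the familiar 68%, and by symmetry half of it (about 34%) lies on each side of the mean.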

Correlation coefficient the Pearson r in statistics

  The term correlation can refer to a statistic and to a type of research. Understanding correlations is an important building block for many complex ideas in statistics and research methods. My focus in this post is on the common correlation statistic, also called the Pearson r. The Pearson r is a statistical value that tells the strength and direction of the relationship between two normally distributed variables measured on an interval or ratio scale. Researchers examine the two sets of values and calculate a summary statistic called a correlation coefficient. The longer name for the common correlation statistic is the Pearson Product Moment Correlation Coefficient, but it is often referred to simply as the Pearson r. The symbol for correlation is a lowercase, italicized r. In behavioural research, we normally round values to two decimal places. An example of a moderately strong positive correlation is r = .78. Sometimes, the relationship between the two variables is negative
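Here is a minimal sketch of the Pearson r computed from its definition: the sum of cross-products of deviations, divided by the product of the deviation magnitudes for each variable. The study-hours and exam-score pairs are hypothetical, made up for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation for two paired sets of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # cross-products
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]          # hypothetical hours of study
scores = [52, 60, 63, 71, 79]    # hypothetical exam scores

print(round(pearson_r(hours, scores), 2))  # 0.99
```

Reversing one variable's direction flips the sign: `pearson_r([1, 2, 3], [3, 2, 1])` gives -1.0, a perfect negative correlation.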

Skewed Distributions

  Skewed Distributions* Skewed distributions have one tail that is longer than the other, compared to the "normal" distribution, which is perfectly symmetrical. Skew affects the location of the central values of the mean and median. Positive Skew: Below is an image of positive skew, which is also called right skew. Skew is named for the "tail." If you have had statistics, you may have heard a professor say, "the tail tells the tale." The tail is the extended part of the distribution close to the horizontal axis. The large "hump" to the left represents the location of most of the data. In behavioural science, the high part often marks the location of most of the scores. Thus, in positively skewed distributions, most participants earned low scores and few obtained high scores, as you can see from the low level of the curve, or tail, to the right. Negative Skew: As you might expect, negatively skewed distributions have the long tail on the left
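The effect of skew on the central values can be sketched with made-up scores: in a positively skewed set, the few high scores in the right tail pull the mean above the median, and a simple moment-based skewness coefficient (m3 / m2^1.5, which is 0 for a symmetric distribution) comes out positive.

```python
from statistics import mean, median

def skewness(data):
    """Moment-based skewness: m3 / m2**1.5 (0 for a symmetric distribution)."""
    n = len(data)
    mu = mean(data)
    m2 = sum((x - mu) ** 2 for x in data) / n   # variance
    m3 = sum((x - mu) ** 3 for x in data) / n   # third central moment
    return m3 / m2 ** 1.5

scores = [1, 2, 2, 3, 3, 3, 4, 4, 10]   # hypothetical scores with a long right tail

print(mean(scores) > median(scores))  # True: the right tail pulls the mean up
print(skewness(scores) > 0)           # True: positive (right) skew
```

For a negatively skewed set the pattern reverses: the long left tail pulls the mean below the median and the coefficient is negative.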

Dependent Samples Matched Pairs t test

 The Dependent Samples t test is used to test for significant differences between two sets of numerical data produced by the same organisms, or by organisms matched on all relevant variables. In one example, a group of people who attend a workshop may complete a pretest and a posttest. A Dependent Samples t test can be used to compare the mean difference between the pretest and the posttest. A Matched Pairs t test can be used to compare two groups of people in a reading-method experiment. A relevant variable would be reading ability: a reading test could be used to identify people with similar scores. One member of each pair is then randomly assigned to a new reading-method group, and the matching person is assigned to the traditional reading group. At the end of the study, a Matched Pairs t test can be used to compare the mean scores of the groups. When the same person produces two sets of scores, each person serves as their own control. Because of the level of control,
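As a sketch with hypothetical pretest and posttest scores, the Dependent Samples t statistic is simply the mean of the paired differences divided by the standard error of those differences (the statistic would then be compared against a t distribution with n-1 degrees of freedom to obtain a p value):

```python
from math import sqrt
from statistics import mean, stdev

def dependent_t(pre, post):
    """Dependent (paired) samples t: mean difference / SE of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]   # posttest minus pretest, per person
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

pretest = [10, 12, 11, 9, 14]    # hypothetical workshop pretest scores
posttest = [13, 14, 12, 11, 16]  # the same five people after the workshop

print(round(dependent_t(pretest, posttest), 2))  # 6.32
```

Because each difference is computed within one person, variability between people drops out of the calculation, which is the statistical payoff of the dependent-samples design described above.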