Tuesday, December 3, 2019

Response set and bias in surveys

Response set is a tendency to respond similarly to all or many questions, such as frequently choosing "somewhat agree" on scale options ranging from "strongly agree" to "strongly disagree."

Response bias occurs when respondents give inaccurate or false responses, whether deliberately or unintentionally.

There are many forms of response bias.

Acquiescence bias occurs when respondents tend to agree with items or select only positive answers, regardless of content. This is also called "yea-saying."

Demand characteristics influence answers to survey items when respondents attempt to answer the way they think an ideal participant should respond.

Extreme bias occurs when respondents frequently choose the extreme options on survey items such as the "Strongly Agree" and "Strongly Disagree" options.

Hostility bias occurs when respondents feel provoked by items in the survey. Researchers must take care in wording items that may be sensitive. Explanations and instructions might help.

Nay-saying is the opposite of Acquiescence bias. Respondents select only, or mostly, negative responses.

Nonresponse bias refers to suspected differences between the people who respond to a survey and those who do not.


Ad. Learn more about creating surveys in Creating Surveys, available on AMAZON.

Prestige bias is similar to social desirability bias but focuses on a specific aspect: the tendency to want to appear to have higher social status in terms of a culture's values, such as education, wealth, or social power.

Primacy bias or primacy effects occur when respondents choose the first available response to each item.

Recall bias occurs whenever respondents rely on their memory to respond to survey items. Human memory is fallible and subject to many biases.

Recency bias, or the recency effect, is the tendency of respondents to base a response on their previous (most recent) response. Participants may also disengage toward the end of a long survey.

Response order bias occurs when respondents do not carefully weigh all the options but choose one that comes easily to mind. Context can make a difference. Contrast effects appear when the order of questions produces large differences in the responses obtained. Assimilation effects occur when the order of survey items leads to more similar responses.



Self-selection bias occurs when people choose to participate who were not selected to be part of the sample.

Social response bias, also called social desirability bias, refers to a tendency of respondents to over-report socially desirable or "good" responses.

Sponsorship bias occurs when respondents are aware of who is sponsoring the survey and their perception of that organization influences their responses.

Stereotype bias occurs when items evoke a personal response that activates a respondent's stereotypes. Stereotypes are widely held, simplistic, and relatively fixed beliefs about groups of people, such as "all men" or "all women" and "all Blacks" or "all Whites." Stereotypes can also exist about companies, things, and ideas.

Straight lining occurs when respondents choose the same answer for item after item. Sometimes this can be avoided by reverse scaling some items, or identified by including items that people would rarely endorse, as in the sketch below.
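
Here is a minimal sketch of reverse scoring, assuming a 1-to-5 agreement scale; the reverse-worded item positions are hypothetical.

```python
# Sketch: reverse-score items on an assumed 1-5 scale.
# On a 1-5 scale, a rating r becomes 6 - r for reverse-worded items.
REVERSED_ITEMS = {2, 5}  # hypothetical positions of reverse-worded items

def rescore(responses):
    """responses: list of 1-5 ratings in item order."""
    return [6 - r if i + 1 in REVERSED_ITEMS else r
            for i, r in enumerate(responses)]

straight_liner = [5, 5, 5, 5, 5]
print(rescore(straight_liner))  # [5, 1, 5, 5, 1] -- the straight-line pattern now stands out
```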

Related Issues

Satisficing refers to the degree to which a respondent processes a survey item: some respond quickly with a merely adequate answer, while others think carefully about the item.

Commonly misreported topics in surveys include abilities and skills, personality characteristics, sexual behavior, religion and spirituality, income, and unlawful behavior.

Response bias is difficult to eliminate.

Connections

My Page    www.suttong.com
  
My Books  AMAZON                       GOOGLE STORE

FACEBOOK   Geoff W. Sutton
TWITTER  @Geoff.W.Sutton

Publications (many free downloads)
 
Academia   Geoff W Sutton   (PhD)     

  ResearchGate   Geoffrey W Sutton   (PhD)

If you are a counselor, you may find this book helpful. It is also available on AMAZON.




Monday, November 4, 2019

Dispositional Greed Scale: Measuring Greed



The Dispositional Greed Scale is a 7-item rating scale. Participants rate each item on a scale of 1 = strongly disagree to 5 = strongly agree.
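
A minimal scoring sketch follows; it assumes the seven ratings are simply averaged, so check Seuntjens et al. (2015) for the authors' exact scoring rule.

```python
# Sketch: average the seven 1-5 ratings to get a dispositional greed score.
# Assumes a simple mean; consult Seuntjens et al. (2015) for the exact scoring rule.
ratings = [4, 3, 5, 4, 2, 4, 3]  # made-up responses to the 7 items
greed_score = sum(ratings) / len(ratings)
print(round(greed_score, 2))  # 3.57
```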

Permission:
The test items may be reproduced and used for noncommercial research and educational purposes. The list of items is available from PsycTESTS.

Sample

1. I always want more.
2. Actually, I’m kind of greedy.

Read more about greed in the Psychology of Greed.

Note:
In psychology, a disposition is a relatively durable behavior pattern or trait in contrast to a state, which can vary with situations.

References

For the test items in PsycTESTS, see:

Seuntjens, T. G., Zeelenberg, M., van de Ven, N., & Breugelmans, S. M. (2015). Dispositional Greed Scale [Database record]. Retrieved from PsycTESTS. doi: https://dx.doi.org/10.1037/t41245-000

For the article about dispositional greed, see the following reference:

Seuntjens, T. G., Zeelenberg, M., van de Ven, N., & Breugelmans, S. M. (2015). Dispositional greed. Journal of Personality and Social Psychology, 108(6), 917-933. doi: https://dx.doi.org/10.1037/pspp0000031

Tuesday, September 17, 2019

Impulsiveness - Barratt Impulsiveness Scale-Brief (BIS-Brief)




An 8-item version of the Barratt Impulsiveness Scale is available. The 30-item BIS is a commonly used measure of impulsiveness. The original scale has undergone a number of revisions. In 2013, Lynne Steinberg and her team evaluated an 11-item version. Based on the evidence, they developed an 8-item version, which is known as the BIS-Brief.

Each item is rated on a 4-point scale as follows.

1 = rarely/never
2 = occasionally
3 = often
4 = almost always/always

Items

The items ask the participants about thinking, planning, and self-control.

The items may be used for education and research purposes. The PsycTESTS entry included the following permissions statement.
Test content may be reproduced and used for non-commercial research and educational purposes without seeking written permission. Distribution must be controlled, meaning only to the participants engaged in the research or enrolled in the educational activity. Any other type of reproduction or distribution of test content is not authorized without written permission from the author and publisher. Always include a credit line that contains the source citation and copyright owner when writing about or using any test.
The 8-item list is in PsycTESTS:

Steinberg, L., Sharp, C., Stanford, M. S., & Tharp, A. T. (2013). Barratt Impulsiveness Scale–Brief [Database record]. Retrieved from PsycTESTS. doi: https://dx.doi.org/10.1037/t21455-000

Items are also included in Steinberg et al. (2013).

Reliability

Steinberg et al. (2013) reported reliability values of approximately .80 based on IRT analysis, and Cronbach's alpha was .78.
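
For readers who want to see how a Cronbach's alpha value like .78 is computed from item scores, here is a minimal sketch using made-up ratings, not Steinberg et al.'s data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha for a respondents x items array of scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                           # number of items
    item_variances = items.var(axis=0, ddof=1)   # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up ratings for three respondents on the 8 BIS-Brief items (1-4 scale)
ratings = [
    [1, 2, 1, 2, 1, 1, 2, 1],
    [3, 3, 4, 3, 2, 3, 3, 4],
    [2, 2, 2, 3, 2, 2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))
```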

advertisement

Learn more about test statistics in Applied Statistics: Concepts for Counselors available on AMAZON.com and in many other worldwide markets served by AMAZON.


Validity

Steinberg et al. (2013) reported evidence of construct validity based on three samples of participants in three age groups. They found similar correlations between the 8-item version and the full 30-item version. The article also includes correlations with other measures in clinical samples.

A more recent study supported the utility of the BIS-Brief in an adolescent sample. The authors noted two dimensions of the scale (Charles, Floyd, & Barry, 2019). Link to Sage online publication.

References

Barratt, E. S. (1959). Anxiety and impulsiveness related to psychomotor efficiency. Perceptual and Motor Skills, 9, 191–198. doi:10.2466/pms.1959.9.3.191


Barratt, E. S. (1985). Impulsiveness subtraits: Arousal and information processing. In J. T. Spence & C. E. Izard (Eds.), Motivation, emotion, and personality (pp. 137–146). North Holland, the Netherlands: Elsevier.


Steinberg, L., Sharp, C., Stanford, M. S., & Tharp, A. T. (2013). New tricks for an old measure: The development of the Barratt Impulsiveness Scale–Brief (BIS-Brief). Psychological Assessment, 25, 216-226. doi: https://dx.doi.org/10.1037/a0030550

Steinberg, L., Sharp, C., Stanford, M. S., & Tharp, A. T. (2013). Barratt Impulsiveness Scale–Brief [Database record]. Retrieved from PsycTESTS. doi: https://dx.doi.org/10.1037/t21455-000

If you are working on a survey project, you may also find Creating Surveys helpful. Available on AMAZON.


Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton


Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)


Saturday, September 14, 2019

Sacred Marriage Scales

Two scales examine couples' perspectives on the role of God in their marriages. The Sacred Marriage scales are the work of Mahoney, Pargament, and DeMaris (2009).

The first scale looks at the role of God in their marriage. There are 10 items. Couples are advised that they may substitute another word for God as applicable to their spirituality.

Examples and a reference are included below.

Revised Manifestation of God in Marriage

Following are the instructions:

Directions: Some of the following questions use the word "God." Different people use different terms for God, such as "Higher Power," "Divine Spirit," "Spiritual Force," "Holy Spirit," "Yahweh," "Allah," "Buddha," or "Goddess." Please feel free to substitute your own word for God when answering any of the questions that follow. Also, some people do not believe in God. If this is the case for you, please feel free to choose the "strongly disagree" response when needed.
  Two sample items:
   
     1) God played a role in how I ended up being married to my spouse.


     2) I sense God’s presence in my relationship with my spouse.

Revised Sacred Qualities of Marriage

  Two sample items:

     1) My marriage is holy.


     2) Being with my spouse feels like a deeply spiritual experience.

Scoring
There are 10 items in each scale. Participants are asked to respond on a 1 to 7 scale where 1 = Strongly Disagree, 4 = Neutral, and 7 = Strongly Agree.

Availability

Here's the internet address for the scales:

https://www.bgsu.edu/content/dam/BGSU/college-of-arts-and-sciences/psychology/psy-spirit-fam-mahoney/Original_Revised_Sanctification_Scales.pdf

Reference

Mahoney, A., Pargament, K. I., & DeMaris, A. (2009). Couples viewing marriage and pregnancy
through the lens of the Sacred: A descriptive study. Research in the Social Scientific Study of Religion, 20, 1-45.

Learn more about creating surveys in the book Creating Surveys on AMAZON.







Holy Sex - Measuring Sanctification of Sexuality in Relationships


Two 10-item scales assess the degree to which couples view marital sexuality from a spiritual perspective.

Both scales published by Hernandez, Mahoney, and Pargament (2011) are rated on the same 7-point scale.

The wording is clearly aimed at married couples. Although the items use the word God, note that in a similar scale focused on children, from some of the same authors, participants are instructed to think of their own deity.



Revised Manifestation of God in Marital Sexuality

Two sample items:

     1) God played a role in my decision to have a sexual relationship with my spouse.

     2) Our sexual relationship speaks to the presence of God.

Revised Sacred Qualities of Marital Sexuality

Two sample items:

     1) Being sexually intimate with my spouse feels like a deeply spiritual experience.

     2) Our sexual relationship seems like a miracle to me.


Scoring

There are 10 items in the scale. Participants are asked to respond on a 1 to 7 scale where 1 = Strongly Disagree, 4 = Neutral, and 7 = Strongly Agree.

Availability

Link to the scales online

https://www.bgsu.edu/content/dam/BGSU/college-of-arts-and-sciences/psychology/psy-spirit-fam-mahoney/Original_Revised_Sanctification_Scales.pdf


Reference

Hernandez, K. M., Mahoney, A., & Pargament, K. I. (2011). Sanctification of sexuality: Implications
for newlyweds' marital and sexual quality. Journal of Family Psychology, 25, 775-780.


Learn more about creating surveys in the book Creating Surveys on AMAZON.







Sanctification of Parenting Scale Revised

Mahoney, Pargament, and DeMaris (2009) published a revised scale that examines the beliefs of a mother toward her child.

The authors advise researchers that they may change the word "baby" to other child age labels depending on the age of the children in their study. Thus, researchers may use labels of "toddler," "child," or "teen" in place of "baby."

Spirituality

Although the authors use the word God, the instructions invite participants to use their own word for the deity. Following is a copy of the instructions. Notice the different term for the scale's name.


Revised Manifestation of God in Parenting:

Directions: Some of the following questions use the word "God." Different people use different terms for God, such as "Higher Power," "Divine Spirit," "Spiritual Force," "Holy Spirit," "Yahweh," "Allah," "Buddha," or "Goddess." Please feel free to substitute your own word for God when answering any of the questions that follow. Also, some people do not believe in God. If this is the case for you, please feel free to choose the "strongly disagree" response when needed.
Sample items

     1) God played a role in my baby coming into my life.

     2) I sense God's presence in my relationship with my baby.

Scoring

There are 10 items in the scale. Participants are asked to respond on a 1 to 7 scale where 1 = Strongly Disagree, 4 = Neutral, and 7 = Strongly Agree.


Advertisement
Learn more about integrating faith and parenting in Discipline with Respect: Christian Family Edition. No charge to read for those with Kindle Unlimited.  BUY ON AMAZON



Revised Sacred Qualities of Parenting

The authors include a second set of 10 items, rated on the same 7-point scale, to assess the sacred qualities of parenting. Following are two examples from this scale.

     1) My baby seems like a miracle to me.

     2) Being a mother feels like a deeply spiritual experience.


Availability

You can find the full scale at this link:    https://www.bgsu.edu/content/dam/BGSU/college-of-arts-and-sciences/psychology/psy-spirit-fam-mahoney/Original_Revised_Sanctification_Scales.pdf

Reference

The full reference to the Mahoney et al. (2009) scale is below.

Mahoney, A., Pargament, K. I., & DeMaris, A. (2009). Couples viewing marriage and pregnancy
through the lens of the Sacred: A descriptive study. Research in the Social Scientific Study of Religion, 20, 1-45.

Learn more about creating surveys in the book Creating Surveys on AMAZON.






Wednesday, August 28, 2019

Why Counselors' Tests Are Not Reliable





The reason counselors' tests are not reliable is that reliability is a property of scores, not tests. This isn't a matter of semantics. Think about it this way.

Give all the students in one school an achievement test. The test items don't change, so the test appears stable, consistent, and reliable. However, when publishers report reliability values, they calculate the reliability statistics based on scores. Scores vary from one administration to another. If you ever took a test twice and got a different score, you know what I mean. Individuals change from day to day. And we change from year to year. Also, even a representative sample of students for a nation can be different each year.

Every time we calculate a reliability statistic, the result is slightly different.

Reliability values vary with the sample.

Reliability values also vary with the method used for calculation. You can get high reliability values using coefficient alpha with scores from a one-time administration. This method is common in research articles. But you will see different values from the same research team in different samples in the same article.


If we use a split-half method, which usually calculates reliability based on a correlation between two halves of one test, then we can get a reliability value based on one administration. But that's only half a test! Researchers use the Spearman-Brown formula to correct for the shortened half-test problem, but that's just an estimate of what the full test could be. A sketch of the correction appears below.
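
Here is a minimal sketch of the Spearman-Brown correction; the half-test correlation is a made-up value.

```python
def spearman_brown(half_test_r):
    """Estimate full-test reliability from the correlation between two half-tests."""
    return (2 * half_test_r) / (1 + half_test_r)

print(round(spearman_brown(0.70), 2))  # a half-test correlation of .70 projects to about .82
```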


There's also a test-retest reliability method. Give a test one time, wait a while (maybe a week or several weeks), then retest. That gives you an estimate of stability. But if you have a good memory, you can score higher the second time on some tests, such as measures of intelligence and achievement.


By now you get the point. Any one test can be associated with a lot of reliability values. The problem is not necessarily with counselor tests. The problem can be misunderstanding that tests do not have one reliability value. As with many things in science, there are many variables to consider when answering a question.

Reputable test publishers include reliability values in their test manuals. Counselors, psychologists, and other test users ought to know about test score reliability.

Learn more assessment and statistical concepts in

Applied Statistics: Concepts for Counselors

AMAZON BOOKS




Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton



Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)




Tuesday, June 25, 2019

Measuring Guilt and Shame with the GASP (Guilt and Shame Proneness Scale)





Taya Cohen of Carnegie Mellon University has made the Guilt and Shame Proneness Scale (GASP) available online. Here’s what Dr. Cohen said about the scale in 2011. I’ll include a link to the full scale below.

The Guilt and Shame Proneness scale (GASP) measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains four four-item subscales: Guilt-Negative-Behavior-Evaluation (Guilt-NBE), Guilt-Repair, Shame-Negative-Self-Evaluation (Shame-NSE), and Shame-Withdraw.

Each item on the GASP is rated on a 7-point scale from 1 = very unlikely to 7 = very likely.

Here’s an example of an item from the GASP scale.

_____ 1. After realizing you have received too much change at a store, you decide to keep it because the salesclerk doesn't notice. What is the likelihood that you would feel uncomfortable about keeping the money?
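
As a rough sketch of how the four subscale scores might be computed, the snippet below averages four items per subscale. The item-to-subscale assignment shown is a placeholder, not the published scoring key; see Cohen et al. (2011) for the actual key.

```python
# Sketch: average the four items in each GASP subscale.
# NOTE: the item-to-subscale mapping below is a placeholder, not the published key.
SUBSCALES = {
    "Guilt-NBE": [1, 2, 3, 4],          # placeholder item numbers
    "Guilt-Repair": [5, 6, 7, 8],
    "Shame-NSE": [9, 10, 11, 12],
    "Shame-Withdraw": [13, 14, 15, 16],
}

def gasp_subscale_means(responses):
    """responses: dict mapping item number (1-16) to a 1-7 rating."""
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in SUBSCALES.items()}

example = {i: 4 for i in range(1, 17)}  # made-up ratings
print(gasp_subscale_means(example))
```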

Information about reliability, validity, and factor structure can be found in the 2011 reference below. The article reports the results of several studies. One interesting finding is the relationship of both shame and guilt to morality--they share some common features. People high in both guilt and shame are less likely to engage in unethical business behavior. 

Ad. Learn more about Creating Surveys -- Download a FREE sample.


There’s more to the discussion than I have stated here, so do see the entire article.

Finding the GASP scale

If the link no longer works, see the 2011 reference below.

Ad. Learn more about test statistics in counseling with Applied Statistics. Read a FREE sample before you buy.


References

Cohen, T. R., Wolf, S. T., Panter, A. T., & Insko, C. A. (2011). Introducing the GASP scale: A new measure of guilt and shame proneness. Journal of Personality and Social Psychology, 100(5), 947-966. doi: 10.1037/a0022641 Link: https://psycnet.apa.org/record/2011-08412-001

Wolf, S. T., Cohen, T. R., Panter, A. T., & Insko, C. A. (2010). Shame proneness and guilt proneness: Toward the further understanding of reactions to public and private transgressions. Self & Identity, 9, 337-362. doi: 10.1080/15298860903106843

You may also be interested in a related post about the Test of Self-Conscious Affect (TOSCA).


Getting permission to use the GASP
APA is the copyright owner. Here is the link regarding copyright permission:

Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton

LinkedIN Geoffrey Sutton  PhD



Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)


Measuring Shame and Self-Conscious Emotions with the TOSCA



Psychologists assess shame as one of several self-conscious emotions. In addition to shame, the list includes embarrassment, guilt, humiliation, and pride. As with many measures of personal characteristics, there are measures of traits or dispositions and measures of states.

State shame is a temporary emotion, such as shame following a specific act that has been made public. Trait shame is a durable condition, which means a person experiences shame over a period of time and in multiple settings.

The classic measure of shame is the TOSCA (Test of Self-Conscious Affect). The TOSCA, developed by June P. Tangney, is now in its third edition and includes versions for adolescents (TOSCA-A) and children (TOSCA-C; Tangney & Dearing, 2002).

People taking the TOSCA read a scenario and provide a response. The responses reflect different ways to respond to a situation: shame-proneness, guilt-proneness, externalization, pride in one’s self (alpha pride), pride in one’s behavior (beta pride), and detachment.

The TOSCA scales are widely used. See the references in Watson, Gomez, and Gullone (2016) for a list of recent studies.

Ad. Learn more about test statistics in counseling with Applied Statistics. Read a FREE sample before you buy.



If you would like copies of various measures, contact the psychology lab linked to Professor Tangney’s page at George Mason University. There is a list of scales and an email address. http://mason.gmu.edu/~jtangney/measures.html

Learn more about shame in this interview with June Tangney: https://www.apa.org/pubs/books/interviews/4317264-tangney

References

Tangney, J. P., & Dearing, R. L. (2002). Shame and guilt. New York: Guilford Press.

Watson, S. D., Gomez, R., & Gullone, E. (2016). The Shame and Guilt Scales of the Test of Self-Conscious Affect-Adolescent (TOSCA-A): Psychometric properties for responses from children, and measurement invariance across children and adolescents. Frontiers in Psychology, 7, 635. doi:10.3389/fpsyg.2016.00635

You might also be interested in the Guilt and Shame Proneness scale (GASP).

Ad. Learn more about Creating Surveys -- Download a FREE sample.



Watch Dr. Tangney on YouTube




Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton

LinkedIN Geoffrey Sutton  PhD



Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)


Monday, May 27, 2019

Post-Traumatic Stress Disorder Checklist and DSM-5 (PCL-5)

The PCL-5 is a 20-item self-report checklist of symptoms that can help clinicians screen patients for PTSD (Post-Traumatic Stress Disorder). The scale can assist in making a diagnosis and in monitoring change during and after treatment.


The VA site suggests the scale can be completed in 5-10 minutes.

The scores range from 0 to 80.

The items are organized according to DSM-5 clusters.
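
As a scoring sketch, each of the 20 items is rated from 0 to 4 and the ratings are summed, which is how a total can range from 0 to 80; the ratings below are made up.

```python
# Sketch: 20 items, each rated 0-4, summed for a 0-80 total score.
def pcl5_total(item_ratings):
    assert len(item_ratings) == 20 and all(0 <= r <= 4 for r in item_ratings)
    return sum(item_ratings)

made_up = [1, 0, 2, 1, 0, 3, 1, 2, 0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1]
print(pcl5_total(made_up))  # 22
```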

Scale availability link

Weathers, F.W., Litz, B.T., Keane, T.M., Palmieri, P.A., Marx, B.P., & Schnurr, P.P. (2013). The PTSD Checklist for DSM-5 (PCL-5). Scale available from the National Center for PTSD at www.ptsd.va.gov.

Web link detail: https://www.ptsd.va.gov/professional/assessment/adult-sr/ptsd-checklist.asp


Applied Statistics for Counselors:  Buy on Amazon



References (PCL-5)

Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28, 489-498. doi: 10.1002/jts.22059

Bovin, M. J., Marx, B. P., Weathers, F. W., Gallagher, M. W., Rodriguez, P., Schnurr, P. P., & Keane, T. M. (2015). Psychometric properties of the PTSD Checklist for Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (PCL-5) in Veterans. Psychological Assessment, 28, 1379-1391. doi: 10.1037/pas0000254

Wortmann, J. H., Jordan, A. H., Weathers, F. W., Resick, P. A., Dondanville, K. A., Hall-Clark, B., Foa, E. B., Young-McCaughan, S., Yarvis, J., Hembree, E. A., Mintz, J., Peterson, A. L., & Litz, B. T. (2016). Psychometric analysis of the PTSD Checklist-5 (PCL-5) among treatment-seeking military service members. Psychological Assessment, 28, 1392-1403. doi: 10.1037/pas0000260

Read more about statistics and surveys in Creating Surveys on AMAZON

Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton

LinkedIN Geoffrey Sutton  PhD

Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)



Monday, April 15, 2019

Charting Death and Thinking about Epidemics

What are the leading causes of death in the United States?




Based on what you have read or learned from news sources, what did you expect to see in the top five? If you thought of one that is missing, perhaps it is in the top 10. Still, when you look at the numbers, you may be surprised to learn how few people die in a given year, given the size of the US population.

My point in this post is that we ought to examine total data instead of being guided by the headlines of news stories and misleading charts when we want to understand a health or social condition.

According to the Centers for Disease Control and Prevention, the five leading causes of death in 2017 were heart disease, cancer, unintentional injury, chronic lower respiratory disease, and stroke. The report includes more causes, but I chose the top five based on the deaths per 100,000 U.S. standard population.

When you add the numbers for the five causes, you find the top five causes of death accounted for 445 people out of 100,000 people. Thus, less than 1% of the population died from the top five causes.

The chart illustrates a helpful way to report data. Instead of reporting percentages, just give the numbers of people for any condition in relationship to a population value. In this case, the relevant population value is Americans (actually an estimate of the U.S. population based on the 2010 census).

It is usually better to report data in terms of how many people with a condition out of 100 people, but as you can see, out of a sample of 100 people, we might have no dead people! And, fewer than 2 people died of these causes out of 1,000 people! So, it is important to select a population size that makes sense in terms of all the available data.

About Totals

I would like to know how many people died in the United States. It just seems to make sense. If you are going to tell me how many people died out of some portion of the population, why not tell me the total figure?

Interestingly, there is a total death figure for 2016, which is 2,744,248. That gives us a death rate of 849.3 people out of 100,000 population. The total population estimate for 2016 is 323,071,342. So, less than 1% of the population died in 2016.
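
Here is that arithmetic as a quick sketch, using the 2016 figures above.

```python
deaths_2016 = 2_744_248
population_2016 = 323_071_342

rate_per_100k = deaths_2016 / population_2016 * 100_000
percent_of_population = deaths_2016 / population_2016 * 100

print(round(rate_per_100k, 1))          # about 849.4, close to the reported 849.3 per 100,000
print(round(percent_of_population, 2))  # about 0.85%, i.e., less than 1% of the population
```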

I would also like to know how many people died out of 100,000 people in 2017. The online figure is 731.9 per 100,000, so I will fix that fraction of a person and say 732 people out of 100,000. The 2018 CDC report tells us that 2,813,503 deaths were registered. I see that they only know about registered deaths. Presumably, people could die and not be registered, so it is good to pay attention to the details.

They don't tell us how many people are in their estimate of the population (the last census was in 2010), but they did write that they estimated the 2017 population based on 2010 census data. I did a search and found a census table estimate for 2017 of 327,147,121, according to the Census.gov website.

The statisticians are pretty good, but it is important to know that we are dealing with estimates. We really don't know how many people died in the USA in 2017. Still, I bet the numbers are good estimates.

I don't require absolute certainty when it comes to data about human beings. Anyway, if you are interested, you could estimate the number of people who died in 2017, or you can wait until the data are provided.

ABOUT EPIDEMICS

Another useful lesson to note here is the lack of scary headlines. We just have the facts reported in a responsible way. There are no news media telling us about this epidemic or another in an effort to sell a story.

We know that fewer than 1 out of 100 people died in 2016 (849 out of 100,000, rounded). If a person had 1,000 friends on social media, then 8 or 9 might have died if, and only if, the friends were similar in age and other relevant variables to people in the general U.S. population. My guess is that any friendship group probably does not represent a proportionate sample of the US population, so we will need to be careful in generalizing about all people based on our friendship groups.

It is truly sad for loving family members when people die of any cause. When we look at the total population, we see that even for the leading causes, not many of us die every year. Of course, healthcare personnel and other decision-makers ought to pay attention to trends--especially when we can do something about a particular cause of death.

If you were going to write that we have an epidemic based on the number of people dying, what figure would you say is worthy of the term epidemic? The dictionaries are not helpful because they refer to an epidemic as a "widespread" problem, such as a disease. Truly, 1% of almost 330 million people is a lot of human beings. Of course, the number of people who did not die is the reverse: more than 99% did not die.

We are wise to keep total figures in mind when we want to truly understand the scope of a particular cause of death or other social concern.

HOW TO MAKE THINGS LOOK WORSE

The death rate increases 100% when two people die this year compared to one person who died last year.

Perhaps you already know this? It is a deceptive practice. Suppose one person died after taking drug XYZ this year. Then, next year, two people died after taking the same drug. Two people out of thousands or millions is a very small number, but the increase in deaths equals 100%! We do not know how many people took drug XYZ and are living quite happily. It's good to be careful about how people report data. We really need to know all the relevant data when making informed decisions. The arithmetic is sketched below.
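
Here is that arithmetic as a quick sketch; the counts are the hypothetical ones from the drug XYZ example.

```python
def percent_increase(old, new):
    """Percentage change from an old count to a new count."""
    return (new - old) / old * 100

print(percent_increase(1, 2))  # 100.0 -- "deaths doubled!" even though only one more person died
```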


Read more about Creating Surveys


Read more about Applied Statistics

COMMENTS

Corrections and helpful comments are welcome.


Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton

LinkedIN Geoffrey Sutton  PhD

Publications (many free downloads)
     
  Academia   Geoff W Sutton   (PhD)
     
  ResearchGate   Geoffrey W Sutton   (PhD)







Tuesday, April 2, 2019

7 Tips for Writing Better Survey Items



So many people are creating surveys in schools, government agencies, and major corporations. Some are better than others.

Here are seven tips.

1  Stay focused on your goal. 
Avoid asking everything you can think of on a subject. Unfortunately, I've been on project teams that would not heed this advice. Participants get frustrated and leave surveys incomplete.

2  Ask only one question at a time.
Have someone look at your items to see if they find anything confusing about what you are asking.

3  Use easy-to-understand language.
Know your audience and how they use language. Again, ask a few people to check your wording.

4  Write well.
Some participants will drop out of your survey when they identify misspelled words, common punctuation errors, and problems of grammar.

5  Cover all possible answers.
If you aren't sure you have listed every option, then add an "other" option with a place to write in another response. This may lessen the frustration of participants who don't agree with the available options.

6  Provide a reason and time information for long surveys.
Justify why it is important for people to spend a long time answering your questions. And give them a time estimate based on how long others have taken to complete your survey.

7  Build Trust
Provide contact information and a link to your school or business so people have a way of verifying your credibility. People will help students, but let them know your professor's name and the name of your school.

Learn more about creating surveys for business and school in CREATING SURVEYS.

Download a FREE sample from AMAZON to see if it meets your needs. See why professors recommend Creating Surveys to undergraduate and graduate students.


Creating Surveys


Connections

My Page    www.suttong.com

My Books  
 AMAZON     GOOGLE PLAY STORE

FACEBOOK  
 Geoff W. Sutton

TWITTER  @Geoff.W.Sutton


Publications (many free downloads)
     
  Academia   Geoff W Sutton   (Ph.D.)
     
  ResearchGate   Geoffrey W Sutton   (Ph.D.)






