
Monday, July 11, 2022

Factor Analysis and Assessment: EFA and CFA

 




In testing, factor analysis is a mathematical strategy for analyzing groups of items within a large test to see how well they relate to each other. The goal is to reduce a large number of items to a smaller set of factors that appear to measure different but related constructs; hence, factor analysis is a method of data reduction (Sutton, 2020).

A large test of various abilities may be analyzed for ways to group related abilities. Short tests of vocabulary, verbal analogies, and synonyms might form a factor that a researcher could label "Verbal Abilities."

A factor is a group of variables that are highly correlated with each other and, although different, appear to have something in common. Researchers choose names for factors based on the content of the variables within them. In large research projects, each participant may have scores on a large number of variables, and factor analysis can be used to identify patterns among those variables. Thus, it may be possible to reduce 30 variables to 5 or 6 groups of variables (that is, factors).
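To make the idea of data reduction concrete, here is a minimal Python sketch (my illustration, not from Sutton's book) that simulates 300 participants on 30 variables driven by 5 hidden factors and then recovers factor scores with scikit-learn; all counts and noise levels are arbitrary assumptions.

```python
# A minimal sketch of factor analysis as data reduction (simulated data).
# Assumes NumPy and scikit-learn; every number here is an arbitrary choice.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 participants on 30 variables that secretly depend on
# 5 underlying factors plus noise.
n_people, n_vars, n_factors = 300, 30, 5
latent = rng.normal(size=(n_people, n_factors))   # true factor scores
loadings = rng.normal(size=(n_factors, n_vars))   # how variables load on factors
X = latent @ loadings + rng.normal(scale=0.5, size=(n_people, n_vars))

# Reduce the 30 observed variables to 5 factor scores per participant.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(X)

print(X.shape)       # (300, 30) -- the original variables
print(scores.shape)  # (300, 5)  -- the reduced set of factors
```

The 30 observed scores per person are compressed to 5 factor scores, which is exactly the kind of reduction described above.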

A research database may contain several variables considered relevant to understanding the risk of child sexual abuse. Such variables may include prior abuse by a person in a close relationship to the child, the child's age and sex, family problems, child problems, family structure, parenting difficulties, and so forth. Theoretically, researchers could look for patterns that may suggest ways to identify key risk factors (Sutton, 2020).

 

Exploratory Factor Analysis (EFA)

In the early phases of creating a test or questionnaire, researchers use EFA to explore or discover the structure of the measure. That is, they are looking for the number of factors that best fits the data.
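One common exploratory tactic is to inspect the eigenvalues of the correlation matrix (the numbers behind a scree plot). Here is a hedged sketch, assuming NumPy and simulated data; the "eigenvalue greater than 1" cutoff (the Kaiser rule) is a rule of thumb, not a law.

```python
# Exploring how many factors to retain: count eigenvalues of the
# correlation matrix greater than 1 (the Kaiser rule of thumb).
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 300 people, 30 variables, 5 hidden factors.
latent = rng.normal(size=(300, 5))
X = latent @ rng.normal(size=(5, 30)) + rng.normal(scale=0.5, size=(300, 30))

corr = np.corrcoef(X, rowvar=False)           # 30 x 30 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first
print(int(np.sum(eigenvalues > 1.0)))         # suggested number of factors
```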

 

Confirmatory Factor Analysis (CFA)

After the data have been explored and the number of factors that best fits the data has been determined, researchers perform a CFA on a new sample. The purpose of CFA is to confirm or reject the factor structure previously identified as the best fit for the data.
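As an illustration of the EFA-then-CFA workflow, here is a minimal sketch using the third-party Python package factor_analyzer; the two-factor model, item names, and sample size are all invented for the example.

```python
# A minimal CFA sketch using the third-party `factor_analyzer` package.
# The two-factor model, item names, and sample are invented for illustration.
import numpy as np
import pandas as pd
from factor_analyzer import (ConfirmatoryFactorAnalyzer,
                             ModelSpecificationParser)

rng = np.random.default_rng(2)

# Simulate a "new sample": two abilities, three items each.
latent = rng.normal(size=(400, 2))
true_loadings = np.array([[0.8, 0.7, 0.6, 0.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0, 0.8, 0.7, 0.6]])
items = latent @ true_loadings + rng.normal(scale=0.5, size=(400, 6))
df = pd.DataFrame(items, columns=["v1", "v2", "v3", "q1", "q2", "q3"])

# Hypothesized structure from the earlier EFA: which items belong to which factor.
model = {"Verbal": ["v1", "v2", "v3"], "Quant": ["q1", "q2", "q3"]}
spec = ModelSpecificationParser.parse_model_specification_from_dict(df, model)

cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(df.values)
print(cfa.loadings_)  # estimated loadings to compare against the hypothesized pattern
```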


 Link to an Index of Statistical Concepts in Psychology, Counseling, and Education

Reference

Sutton, G. W. (2020). Applied statistics: Concepts for counselors (2nd ed.). Springfield, MO: Sunflower.

AMAZON Paperback ISBN-10: 168821772X, ISBN-13: 978-1688217720

More information: Book website: counselorstatistics

 

Reference for using scales in research:

Buy Creating Surveys on GOOGLE BOOKS or AMAZON

Reference for clinicians on understanding assessment:

Buy Applied Statistics for Counselors on GOOGLE BOOKS or AMAZON
Resource Link: A-Z Statistical Terms

Resource Link: A – Z Test Index

 

Links to Connections

Check out my website: www.suttong.com

See my books on AMAZON and in the GOOGLE STORE

FOLLOW me on FACEBOOK (Geoff W. Sutton), TWITTER (@Geoff.W.Sutton), and PINTEREST (www.pinterest.com/GeoffWSutton)

Read published articles on Academia (Geoff W Sutton) and ResearchGate (Geoffrey W Sutton)

Tuesday, August 29, 2017

What makes a test valid?


 
“What makes a test valid?” is a tricky question.



The short, and rather obnoxious, response is “nothing.”




Like reliability, validity is a property of test scores rather than of tests themselves, or, more accurately, of an interpretation of the scores.


But it is important to take the question seriously when test-takers and users are wondering how much confidence to place in a test score. As with many aspects of science, the answers can be simply stated but there is a complicated backstory.


Validity Traditions


For many, the traditional views of test score validity will be sufficient. Tests measure constructs. Scientific constructs are ideas with measurable features, such as reading comprehension, dominance, short-term memory, and verbal intelligence.


Construct validity is not a single entity but rather the current state of knowledge about how a test instrument’s scores have functioned in many settings and in relation to criteria. Construct validity primarily includes findings from studies of content validity, convergent validity, and discriminant validity.


Content validity is based on the judgments of experts who mostly agree that the test items measure the construct (e.g., marital satisfaction).

The other types of validity are based on correlations with a criterion. Researchers ask participants to take a specific test X along with other tests Y and Z. Test X is the test of interest, such as a new math achievement test. Test Y represents similar tests, such as other math tests. When test X and test Y yield similar scores, we have evidence of convergent validity.


When test X and test Z yield dissimilar results, such as a weak relationship between our math achievement test X and a vocabulary test Z, we have evidence of discriminant validity—a math test ought not to measure vocabulary aside from the minimal vocabulary used in the instructions and word problems. The strength of the relationship between two tests is expressed in a statistic called the validity coefficient, which will vary any time a group of people takes two tests—even the very same people will get different scores on two different testing dates.
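Because a validity coefficient is typically a Pearson correlation, the logic is easy to demonstrate. Here is a small sketch with simulated scores (all numbers are arbitrary assumptions):

```python
# Validity coefficients as correlations (simulated scores, arbitrary numbers).
import numpy as np

rng = np.random.default_rng(3)
n = 200

math_ability = rng.normal(size=n)                      # the construct of interest
test_x = math_ability + rng.normal(scale=0.5, size=n)  # new math test
test_y = math_ability + rng.normal(scale=0.5, size=n)  # established math test
test_z = rng.normal(size=n)                            # unrelated vocabulary test

r_xy = np.corrcoef(test_x, test_y)[0, 1]  # convergent: should be high
r_xz = np.corrcoef(test_x, test_z)[0, 1]  # discriminant: should be near zero
print(f"convergent r = {r_xy:.2f}, discriminant r = {r_xz:.2f}")
```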


Criterion validity compares test scores to some criterion. The relationship between depression test scores and another measure of depression obtained today is called concurrent validity. The relationship between test scores today and some future measurable performance is called predictive validity—for example, a pre-employment test may be correlated with supervisor ratings after six months on the job.

Aside from content validity, most traditional validity studies examine the strength of the relationship between one set of test scores and another.
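The predictive validity example above can be checked the same way. A hypothetical sketch with simulated pre-employment scores and later supervisor ratings:

```python
# Predictive validity sketch: pre-employment test scores vs supervisor
# ratings six months later. All data and effect sizes are simulated assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 150

aptitude = rng.normal(size=n)                          # latent job aptitude
pretest = aptitude + rng.normal(scale=0.7, size=n)     # hiring test today
ratings = aptitude + rng.normal(scale=1.0, size=n)     # ratings six months later

r = np.corrcoef(pretest, ratings)[0, 1]
print(f"predictive validity coefficient r = {r:.2f}")
```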


Factor analysis is a complex correlational procedure that examines the underlying relationships among test items. For example, a set of vocabulary items may correlate highly with answers to general knowledge questions, and the two sets of items may be grouped as representing an underlying “verbal factor.” These abstract underlying factors are sometimes called latent variables or latent traits.


Read more about the validity of surveys and tests in CREATING SURVEYS, Chapter 18.



Counselors, read more about the validity of test scores in APPLIED STATISTICS: CONCEPTS FOR COUNSELORS, Chapter 20.
Related Post

Structural Equation Modeling (SEM)