Reading #1: The Nomological Network (https://conjointly.com/kb/nomological-network/)

"The basic rule for adding a new construct or relation to a theory is that it must generate laws (nomologicals) confirmed by observation or reduce the number of nomologicals required to predict some observables."

This is Ockham's Razor, and my thesis adviser's statement: the day you discover that what was formerly thought to be two things is actually one is a great day in science. I have had some of those great days, especially in my Theory of Humor (TomVeatch.com/humor) and Bliss Theory (https://tomveatch.com/bliss/MathPsych.php).

----------------------------------------------------------------------

Reading #2: Multitrait-Multimethod Matrix (https://conjointly.com/kb/multitrait-multimethod-matrix/)

Convergent validity: multiple measures of the same thing should converge. Discriminant validity: measures of different things should be uncorrelated, discriminable by the different measures.

R = COV(X,Y) / (SD(X) * SD(Y)): what part of the variation is covariation.

The MTMM shows R values in a matrix for a number of measures: ostensibly the same underlying concept measured in different ways, alongside different underlying concepts. The matrix is symmetric, since R(a,b) = R(b,a) (true: COV(a,b) = COV(b,a)).

(A) Multiple measures (methods) of the same trait ought to be correlated strongly.
(B) Measures of different traits taken with the same method ought to have little correlation.
(C) Measures of different traits taken with different methods ought to be the least correlated of all.

The validity diagonals show (A). The off-diagonal entries within the diagonal (same-method) blocks show (B). The off-diagonal entries within the off-diagonal (different-method) blocks show (C).

In single-method MTMM, skipping the Methods factor, we have ... a problem. The Modified MTMM "does not explicitly include a methods factor as a true MTMM would", yet it does indeed show 2 or 3 measurements, i.e. methods, for each of two traits. So I think the author is confused, as if reducing 3 traits to 2 made a conceptual difference, which it does not. They show a 2x2, 4-block table with the symmetric above-diagonal values filled in, instead of a 3x3, 9-block table with the implied symmetric above-diagonal values left empty. But the proofs are the same. The only difference is that the reliability diagonal is not used; the self-correlation on the main diagonal is simply 1.
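To make the MTMM pattern concrete, here is a minimal, purely illustrative Python sketch (simulated data and made-up trait/method labels, nothing taken from the reading itself) that builds a small MTMM-style correlation matrix for two traits, each measured by two methods, and reads off the (A)-(C) checks:

```python
# Illustrative sketch only: simulated data, hypothetical trait/method labels.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # number of "persons"

trait1 = rng.normal(size=n)               # two latent traits
trait2 = rng.normal(size=n)

# Each measure = trait + independent noise (so there is no shared methods factor here).
measures = {
    "T1_M1": trait1 + 0.5 * rng.normal(size=n),
    "T1_M2": trait1 + 0.5 * rng.normal(size=n),
    "T2_M1": trait2 + 0.5 * rng.normal(size=n),
    "T2_M2": trait2 + 0.5 * rng.normal(size=n),
}
names = list(measures)
X = np.column_stack([measures[k] for k in names])

# R[a,b] = COV(a,b) / (SD(a) * SD(b)); symmetric since COV(a,b) = COV(b,a).
R = np.corrcoef(X, rowvar=False)
print(names)
print(np.round(R, 2))

print("(A) same trait, different methods      :", round(R[0, 1], 2))  # should be high
print("(B) different traits, same method      :", round(R[0, 2], 2))  # should be low
print("(C) different traits, different methods:", round(R[0, 3], 2))  # should be lowest
```

Run this way, the validity-diagonal entries come out high and the heterotrait entries near zero; a real methods factor would show up as inflated correlations inside the same-method blocks, which is exactly what the full MTMM design is meant to expose.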
----------------------------------------------------------------------

Reading #3: Pattern Matching (https://conjointly.com/kb/construct-validity-pattern-matching/)

"The pattern match is accomplished by a test of significance such as the t-test or ANOVA."

This considers the basic question: is there a real thing/idea/abstraction/category here? (I.e., those inside and those outside the category form two groups, and if it is a real thing then those inside should differ in some respect from those outside.) The Fisher Exact Test likewise shows whether two nominal groupings are associated in a 2x2 table of counts, which is nearly the same idea. (A small sketch of both tests appears at the end of these notes, after Reading #4.)

This is also the Structuralist Method, which may relate an underlying feature to observed features (the "emic" vs. "etic" levels). That shows the methodological family relationship between Psychology and Linguistics.

But "pattern match" is a much broader concept than a 2x2 table for ANOVA or Fisher or the Structuralist Method, or multivariate logistic regression, etc.

Question: What is the pattern that is matched if we show that a moderating variable has an influence on a (possibly long list of) dependent-to-independent variable relationship(s)?

----------------------------------------------------------------------

Reading #4: Another take on the topic: “The Validation Crisis in Psychology” (https://replicationindex.com/category/nomological-networks/)

Contrary to current practice, Cronbach and Meehl assumed that most users of measures would be interested in a “construct validity coefficient.” My question is: could that be the same as the fraction of all the measures' variance that is predicted from a single factor? (E.g., in IQ, g: take a multivariate model in which IQ measures for different types of intelligence are modelled as g[p] + t[p], with p for person, t for a type of intelligence, and g an unobservable general factor.) After fitting, g may in general account for zero or more of the total variance; if > 0, would that be a construct validity coefficient, since it is measured to that degree by all the measures?

Finally, wow: "maybe the 2030s may produce the first replicable studies with valid measures". Everyone likes to be a curmudgeon.
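On that Reading #4 question, one way to make "fraction predicted from a single factor" concrete is a sketch like the following. This is my reading of the question, not Cronbach and Meehl's definition: simulated scores generated as g[p] + t[p], with the first principal component of the correlation matrix standing in for g.

```python
# Illustrative sketch only: simulated "IQ-type" scores generated as g[p] + t[p];
# the first principal component of the correlation matrix stands in for g.
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5                         # n persons, k types of intelligence measure

g = rng.normal(size=n)                 # unobservable general factor g[p]
t = rng.normal(size=(n, k))            # type-specific part t[p] for each measure
scores = g[:, None] + t                # each measure = g[p] + t[p]

R = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]  # eigenvalues of R, largest first
share = eigvals[0] / eigvals.sum()     # variance carried by the single common factor
print(f"fraction of total variance on the first factor: {share:.2f}")
```

With these simulation settings every pair of measures correlates near 0.5, and the first factor carries roughly 0.6 of the total variance; whether that number deserves the name "construct validity coefficient" is exactly the question above.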
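And here is the small sketch promised under Reading #3: the in-group vs. out-group question run two ways, as a t-test on a continuous outcome and as a Fisher Exact Test on a 2x2 table of nominal counts (made-up data throughout; requires numpy and scipy).

```python
# Illustrative sketch only: made-up data for the "is there a real category here?"
# question in Reading #3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Continuous outcome: do category members differ from non-members on some measure?
in_group = rng.normal(loc=0.5, size=40)
out_group = rng.normal(loc=0.0, size=40)
t_stat, p_t = stats.ttest_ind(in_group, out_group)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")

# Nominal outcome: 2x2 counts of (in/out of category) x (has/lacks some feature).
table = np.array([[30, 10],    # in-group:  30 with the feature, 10 without
                  [12, 28]])   # out-group: 12 with the feature, 28 without
odds_ratio, p_f = stats.fisher_exact(table)
print(f"Fisher exact: odds ratio = {odds_ratio:.2f}, p = {p_f:.4f}")
```

If the category is real in the sense of Reading #3, both tests should come out significant; if membership is arbitrary, neither should.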