Falsifiability of Causal Theories of Dyslexia
-
Visual problems, auditory deficits, impaired attention: there is a plethora of theories about the neurocognitive causes of poor reading ability in developmental dyslexia. Such theories are often controversial, and the evidence behind them is frequently contradictory. In this project, we examine the theoretical, methodological, and statistical properties of such controversial theories. For example, relevant concepts such as visual attention are often ill-defined, which may lead to mixed findings because different researchers rely on different definitions of the concept.
This project is part of the DFG Priority Programme "META-REP" (https://www.psy.lmu.de/soz/meta-rep/index.html), which focuses on the replicability and reproducibility of research findings.
-
Project Title: Falsifiability of Causal Theories of Dyslexia
Building falsifiable theories is particularly important for unravelling the scientific underpinnings of neurodevelopmental disorders such as developmental dyslexia, because falsifiable theories provide a testable, logical skeleton for hypothesis testing (Popper, 2005; Wacker, 1998). Yet in psychological science several factors impede the construction of falsifiable theories: ill-defined terminology and indeterminate operational indicators of psychological constructs (e.g., Meehl, 1978), psychological tests of dubious validity (e.g., Eronen & Bringmann, 2021), flexibility in data analysis (Simmons et al., 2011), and publication bias (e.g., Francis, 2013). These factors are plausibly linked to the replication crisis in the psychological sciences, which manifests itself in conflicting evidence and small effect sizes across empirical studies (Open Science Collaboration, 2015). To date, however, the relative contribution of these theoretical, methodological, and systemic factors to low replicability is unknown. Before sound theories can be built in psychological science, the relationship between these factors and replicability must be understood and assessed.
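To make the publication-bias factor concrete, here is a minimal, purely illustrative Python sketch in the spirit of excess-success tests such as Francis (2013): given rough power estimates for each experiment in a multi-experiment paper, one can ask how probable it is that every experiment came out statistically significant. The effect sizes, sample sizes, and the normal-approximation power formula are assumptions made for the example, not data or methods from this project.

```python
# Illustrative excess-success check (hypothetical numbers, not project data).
from scipy.stats import norm

ALPHA = 0.05
Z_CRIT = norm.ppf(1 - ALPHA / 2)  # two-sided critical value of the z statistic

def approx_power(d, n_per_group):
    """Normal-approximation power of a two-sided, two-sample test
    with standardized effect size d and n participants per group."""
    delta = d * (n_per_group / 2) ** 0.5  # approximate noncentrality
    return norm.cdf(delta - Z_CRIT) + norm.cdf(-delta - Z_CRIT)

# Hypothetical multi-experiment paper: (reported effect size, n per group).
studies = [(0.45, 20), (0.50, 18), (0.40, 22), (0.55, 16)]
powers = [approx_power(d, n) for d, n in studies]

# Probability that all experiments are significant if each was run once.
p_all_significant = 1.0
for p in powers:
    p_all_significant *= p

print("estimated power per experiment:", [round(p, 2) for p in powers])
print(f"P(all {len(studies)} experiments significant) = {p_all_significant:.3f}")
```

If a set of modestly powered experiments is reported as uniformly successful even though the joint probability of such an outcome is small, this is commonly read as a warning sign of selective reporting or publication bias.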
In light of the above, this meta-science project evaluates the effects of theory underspecification, questionable measurement, and biased research practices on replicability in the psychological sciences. Taking developmental dyslexia research as a case study, it quantifies (1) the extent to which a theory is underspecified, (2) the extent to which psychological measurements are invalid or unreliable, (3) the extent to which research designs and statistical models are ill-suited and reported with bias, and (4) how well these factors predict effect-size variability and replicability. The project comprises three work packages:
WP1 develops methods to quantify theory specificity, in particular the precision with which key terminology is defined and hypotheses are derived.
WP2 develops ways to quantify the methodological strength of the empirical studies behind each dyslexia theory, as indicated by the psychometric properties of the cognitive tasks, the reproducibility and robustness of the studies, and publication bias.
WP3 explores the relative importance of these factors, including theory underspecification, poor measurement, and publication bias, as correlates of low replicability.
The resulting framework for quantifying and weighting potential correlates of low replicability, and in particular the role of theory specification, is expected to generalize to other subfields of psychology.
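As a purely illustrative sketch of the kind of analysis WP3 points toward (not a description of the project's actual models), the following Python snippet fits an inverse-variance weighted meta-regression in which hypothetical study-level codes for theory specificity, task reliability, and a publication-bias indicator predict observed effect sizes. Every variable name and number is invented for the example.

```python
# Illustrative meta-regression sketch (simulated data, not project results).
import numpy as np

rng = np.random.default_rng(1)
k = 40                                     # number of coded primary studies

# Hypothetical study-level codes, stand-ins for WP1/WP2-style coding schemes.
theory_specificity = rng.uniform(0, 1, k)      # 0 = vague, 1 = precisely specified
reliability = rng.uniform(0.5, 0.95, k)        # reliability of the cognitive task
bias_indicator = rng.binomial(1, 0.3, k)       # 1 = signs of selective reporting

# Simulated standardized effect sizes and their sampling variances.
v = rng.uniform(0.01, 0.06, k)                 # within-study variances
true_effect = 0.2 + 0.3 * bias_indicator       # assumed data-generating pattern
effect = true_effect + rng.normal(0, np.sqrt(v))

# Inverse-variance weighted meta-regression via weighted least squares.
X = np.column_stack([np.ones(k), theory_specificity, reliability, bias_indicator])
w = 1.0 / v
Xw = X * np.sqrt(w)[:, None]
yw = effect * np.sqrt(w)
coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

for name, b in zip(["intercept", "theory specificity", "task reliability",
                    "bias indicator"], coef):
    print(f"{name:>18}: {b:+.3f}")
```

In the actual project, such moderators would come from the WP1 and WP2 coding work, and a random-effects meta-regression would be the more natural choice; the fixed-effect weighting above is used only to keep the sketch short and dependency-free.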
References:
Eronen, M. I., & Bringmann, L. F. (2021). The theory crisis in psychology: How to move forward. Perspectives on Psychological Science, 16(4), 779–788. https://doi.org/10.1177/1745691620970586
Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, 57(5), 153–169. https://doi.org/10.1016/j.jmp.2013.02.003
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806–834. https://doi.org/10.1037/0022-006X.46.4.806
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Popper, K. (2005). The Logic of Scientific Discovery. Routledge.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Wacker, J. G. (1998). A definition of theory: Research guidelines for different theory-building research methods in operations management. Journal of Operations Management, 16(4), 361–385. https://doi.org/10.1016/S0272-6963(98)00019-9
-
Staff
Dr. Xenia Kudláčková Schmalz
Yi Leung (doctoral student, phone 4400 55955)
Prof. Moritz Heene (contact person)
-
Funded by the DFG, Priority Programme META-REP