Data Abstraction and Quality Assessment

The 2 authors used a standardized coding manual (available from the authors on request) to extract the following data from each article: author names, publication year, sample size, period of assessment, study sample risk status (0 or 1; high risk was coded when a study clearly denoted this, including medically assisted pregnancies and infants with feeding, sleeping, or crying problems), location, response rate, number of fathers identified as depressed, number of mothers identified as depressed (when assessed), and the correlation between maternal and paternal depressive symptoms. The coding manual was developed a priori and modified after use in several studies. Coding was done independently and then aggregated, with disagreements resolved through discussion and consensus.

Although quality assessment can be conducted reliably in meta-analyses of experimental studies, its use in observational research is controversial, with no clear consensus on rating methods or their appropriate role in analysis. We therefore used a simple objective rating system (based on the meta-analysis of similar data by Bennett et al2) that scored studies on a scale of 0 to 10, assigning 2 points each for sampling method (systematic or probability vs convenience or not reported), clearly stated inclusion criteria, racial/ethnic diversity (≥20% minority participants), educational diversity (≤80% at any 1 educational level), and response rate (reported and ≥60%). Studies that did not report on these methodological issues received lower scores. Because evidence on the validity of quality ratings in observational research is lacking, we adopted the approach of Stroup et al23 of including studies broadly and using sensitivity analysis to determine the incremental effect of lower-quality studies.
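To make the rubric concrete, the following is a minimal sketch in Python of how the 2-points-per-criterion rating could be computed. The thresholds mirror the description above, but the Study structure, field names, and sampling labels are illustrative assumptions, not the authors' actual coding instrument.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Study:
    # All fields default to "not reported", which earns 0 points.
    sampling: Optional[str] = None                  # "systematic", "probability", "convenience", or None
    inclusion_criteria_stated: bool = False
    minority_fraction: Optional[float] = None       # proportion of racial/ethnic minority participants
    largest_education_share: Optional[float] = None # largest share at any one educational level
    response_rate: Optional[float] = None

def quality_score(s: Study) -> int:
    """Score a study 0-10, assigning 2 points per criterion met."""
    score = 0
    if s.sampling in ("systematic", "probability"):
        score += 2  # systematic or probability sampling
    if s.inclusion_criteria_stated:
        score += 2  # clearly stated inclusion criteria
    if s.minority_fraction is not None and s.minority_fraction >= 0.20:
        score += 2  # racial/ethnic diversity
    if s.largest_education_share is not None and s.largest_education_share <= 0.80:
        score += 2  # educational diversity
    if s.response_rate is not None and s.response_rate >= 0.60:
        score += 2  # adequate, reported response rate
    return score

# Example: a convenience sample with stated inclusion criteria and a 65% response rate
print(quality_score(Study(sampling="convenience",
                          inclusion_criteria_stated=True,
                          response_rate=0.65)))  # -> 4
```

Under this scheme, unreported items simply fail their criterion, which reproduces the stated rule that studies not reporting these methodological details receive lower scores.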