Editor-in-Chief
The Science of Diabetes Self-Management and Care 2025, Vol. 51(5) 445-446
© The Author(s) 2025
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/26350106251377013
journals.sagepub.com/home/tde
The Science of Diabetes Self-Management and Care is dedicated to publishing high-quality, peer-reviewed research on the science of self-management related to diabetes and comorbid conditions, as well as the expanding body of knowledge on diabetes technology, population health, and public health. Although the journal has published research reports for many years, the number of quantitative research reports has grown significantly since the launch of the new title in 2021, reflecting a commitment to science and evidence-based practice (EBP). Alongside that commitment, scientific rigor, the meticulous application of the scientific method to ensure the accuracy, reliability, and validity of study findings, is critical. Rigor demands particular attention to design, data collection, outcome measures, analyses, and interpretation of study findings. Although every section of a manuscript must be well written to ensure a high-quality research report, the methods section is critical because it presents clear, specific details about how the study was conducted and should provide sufficient detail to allow replication. Components of a methods section include research design, ethical approval, description of the setting, sampling procedures, data collection procedures, outcome measures, intervention or treatment (if applicable), and data analyses.
The type of research design employed in a study needs to be clearly stated at the beginning of the methods section. The design sets the stage, providing an overall plan or blueprint for the research process. It is a framework that informs the reader how participants will be sampled, how data will be collected, and how the data will be analyzed.1 Research reports should identify, in a sentence or two, the type of research design chosen for the study, along with a brief rationale for selecting it.
The methods section must include a statement of approval from an institutional review board (IRB) or research ethics committee. Include the name of the approving institution along with the approval number assigned to the study.
The setting of a study refers to the location and environment where the research was conducted. Describing the setting gives the reader a sense of what the location and environment look like and how they may affect the results, interpretation, and generalization of study findings. Most reports cite only the geographical location. It is also important to convey the science behind the choice of location: Why was the research conducted in this setting? How was the location determined?
Sampling procedures are one of the most important components of a methods section. The researcher begins by identifying a target population: the population to which study findings will be generalized. Having defined the target population, the next step is to identify a suitable sampling technique, choosing between probability and nonprobability sampling methods.2 The distinguishing feature of probability sampling is random selection, in which every individual has an equal chance of being selected to participate in the study. Probability techniques include simple random sampling, stratified random sampling, cluster sampling, and systematic sampling. When probability sampling is not feasible, nonprobability sampling is an acceptable alternative. Nonprobability techniques lack random selection and include purposive, snowball, network, and quota sampling.
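To make the distinction concrete, the difference between simple random and stratified random sampling can be sketched in a few lines of Python. The sampling frame, strata, and sizes below are hypothetical, chosen only to illustrate the two techniques:

```python
import random

# Hypothetical sampling frame: 200 patients, each labeled by clinic site.
random.seed(42)  # fixed seed so the draw is reproducible
frame = [{"id": i, "site": "urban" if i < 120 else "rural"} for i in range(200)]

# Simple random sampling: every individual has an equal chance of selection.
simple_sample = random.sample(frame, k=20)

# Stratified random sampling: draw within each stratum in proportion to its
# share of the frame (here 60% urban, 40% rural). Note that proportional
# allocation with rounding may not always sum exactly to k.
def stratified_sample(frame, strata_key, k):
    strata = {}
    for person in frame:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for members in strata.values():
        n = round(k * len(members) / len(frame))  # proportional allocation
        sample.extend(random.sample(members, n))
    return sample

stratified = stratified_sample(frame, "site", k=20)
urban_n = sum(1 for p in stratified if p["site"] == "urban")
print(len(stratified), urban_n)  # 20 drawn, 12 of them urban by construction
```

The stratified draw guarantees that the urban/rural split in the sample mirrors the frame, whereas simple random sampling only achieves that balance on average.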
When writing up sampling procedures, discuss how and where participants were recruited and invited to participate in the study. How many participants were initially selected, and what were the criteria for inclusion and exclusion? What, if anything, was done to ensure the sample was representative of the larger target population? What was the sample size at the beginning and end of the study, and how many participants dropped out? Procedures for handling missing data also deserve attention; provide a clear rationale for decisions about missing data, including any data imputation method used.
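As a minimal illustration of why the rationale matters, the sketch below contrasts listwise deletion with simple mean imputation on hypothetical A1C values (the data and variable are invented for this example, and mean imputation is shown only as the simplest case, not as a recommendation):

```python
from statistics import mean

# Hypothetical A1C measurements; None marks a missing value.
a1c = [7.2, 6.8, None, 8.1, None, 7.5]

# Listwise deletion: drop cases with missing data (shrinks the sample).
complete_cases = [x for x in a1c if x is not None]

# Mean imputation: replace missing values with the observed mean
# (preserves the sample size but understates variability).
observed_mean = mean(complete_cases)
imputed = [x if x is not None else observed_mean for x in a1c]

print(len(complete_cases), len(imputed), round(observed_mean, 2))  # 4 6 7.4
```

The two approaches yield different sample sizes and different variance estimates, which is exactly why a report must state which was used and why.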
In quantitative research, data collection typically involves surveys, questionnaires, or measurement instruments. Surveys and questionnaires are used to gather sociodemographic data, while scales and instruments are used to measure specific phenomena or constructs.3 Data collection procedures should be explained in detail. Were surveys and questionnaires administered to individuals or to groups? Who administered them? Were scripted instructions used, under conditions that were similar for all participants?3
Scales and instruments contain items analyzed as a total score or as separate subscales. State whether the scale or instrument was developed by the author or is an existing tool, and provide its name. Indicate whether published estimates of reliability and validity are available, and include an assessment of reliability and validity from the current study. Describe each scale or instrument separately, stating the total number of items along with who administered it, where, and how. Outline the types of questions included, such as Likert-type, visual analogue, or multiple-choice. A classic Likert-type scale comprises several statements that address agreement, frequency, or evaluation.3,4 A frequent practice is to use a four-, five-, or six-point response format. For example, a 5-point agreement scale may include options such as 1 (Strongly Agree), 2 (Agree), 3 (Uncertain), 4 (Disagree), and 5 (Strongly Disagree).
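To show how total scores and an internal-consistency estimate fit together, here is a short sketch that totals a hypothetical 4-item, 5-point Likert scale and computes Cronbach's alpha, one common reliability estimate. The responses are invented for illustration:

```python
from statistics import variance

# Hypothetical responses from 5 participants to a 4-item, 5-point scale
# (1 = Strongly Agree ... 5 = Strongly Disagree); rows are participants.
responses = [
    [1, 2, 2, 1],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 3, 2],
    [2, 1, 2, 1],
]

# Total score per participant: the sum across items.
totals = [sum(row) for row in responses]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = len(responses[0])
item_vars = [variance([row[i] for row in responses]) for i in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / variance(totals))
print(totals, round(alpha, 2))  # [6, 9, 17, 11, 6] 0.96
```

Reporting an alpha computed from the current sample, alongside published estimates, is what the guidance above asks of each scale.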
Clearly describe the intervention or treatment in experimental research to ensure validity and enable replication, using these key points to guide the description. State whether a control (comparison) group was used. Explain how participants were assigned to each group. Specify where the intervention or treatment was conducted and its duration, including whether it was delivered over several sessions. Describe the format (e.g., face-to-face lecture or online module). Identify who delivered the intervention or treatment, and explain what instructions were given to participants.
Researchers must clearly describe the statistical tests used to analyze the data, including the specific tests (e.g., t test, analysis of variance, regression models) and the software used (e.g., SPSS, SAS, R, Stata). State the independent, dependent (outcome), and potential extraneous variables that could affect results. Specify the level of measurement for each variable, the assumptions assessed (such as normality of the distribution), any data transformations, and the significance or confidence interval levels.
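As one example of what naming a specific test implies, an independent-samples (Welch's) t test can be computed directly from the group data. The sketch below uses only the Python standard library with hypothetical intervention and control scores; in practice, software such as SPSS, SAS, R, or Stata would also report the accompanying p value:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical outcome scores for intervention and control groups.
intervention = [7.1, 6.9, 7.4, 6.5, 7.0, 6.8]
control = [7.8, 7.5, 8.0, 7.6, 7.9, 7.7]

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

t, df = welch_t(intervention, control)
print(round(t, 2), round(df, 1))  # -5.52 8.4
```

Welch's variant is shown because it does not assume equal group variances, one of the assumptions a methods section should state it assessed.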
James A. Fain https://orcid.org/0000-0002-6287-1413
1. Fain JA. Selecting a quantitative research design. In: Fain JA, ed. Reading, Understanding, and Applying Nursing Research. 7th ed. FA Davis Company; 2025:167-195.
2. Fain JA. Selecting the sample and setting. In: Fain JA, ed. Reading, Understanding, and Applying Nursing Research. 7th ed. FA Davis Company; 2025:147-166.
3. Fain JA. Data collection methods. In: Fain JA, ed. Reading, Understanding, and Applying Nursing Research. 7th ed. FA Davis Company; 2025:239-263.
4. DeVellis RF, Thorpe CT. Guidelines in scale development. In: DeVellis RF, Thorpe CT, eds. Scale Development: Theory and Application. 5th ed. Sage Publications; 2022:91-135.