Evaluation of Experimental Studies Published in the Jundishapur Journal of Chronic Disease Care: A Brief Report


Kourosh Zarea 1 , Shahnaz Rostami 2 , Mahin Gheibizadeh 1 , Leila Roohi Balasi 3 , Pouran Tavakoli 3 , Samira Beiranvand 3 , *

1 Department of Nursing, Nursing Care Research Center in Chronic Disease, School of Nursing and Midwifery, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran

2 Nursing Care Research Center in Chronic Disease, Nursing and Midwifery School, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran

3 Student Research Committee, School of Nursing and Midwifery, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran

How to Cite: Zarea K, Rostami S, Gheibizadeh M, Roohi Balasi L, Tavakoli P, et al. Evaluation of Experimental Studies Published in the Jundishapur Journal of Chronic Disease Care: A Brief Report. Jundishapur J Chronic Dis Care. Online ahead of Print; 8(2):e84757. doi: 10.5812/jjcdc.84757.


Jundishapur Journal of Chronic Disease Care: 8 (2); e84757
Published Online: March 9, 2019
Article Type: Brief Report
Received: September 29, 2018
Accepted: October 10, 2018




Background: The quality with which interventional studies are reported determines their usefulness. Reporting guidelines are effective tools in this field.

Objectives: The current study aimed to assess the reporting quality of experimental studies published in the Jundishapur Journal of Chronic Disease Care.

Methods: In the current descriptive, cross-sectional study, 66 randomized (RCT) and nonrandomized clinical trials published from July 2012 to July 2018 were evaluated against the CONSORT and TREND checklists.

Results: The study identified 43 RCTs and 23 nonrandomized trials. The percentage of adherence to both checklists was more than 50% since 2014, but no article met all the criteria of the CONSORT and TREND statements.

Conclusions: The quality of reporting improved over time but did not reach the optimum level. Although a link to these checklists has been available on the JJCDC home page since 2014, the link alone appears insufficient; therefore, persuading authors and reviewers to use the checklists can be helpful.


Keywords: CONSORT; Experimental; Quality; Randomized Controlled Trial; TREND

Copyright © 2019, Jundishapur Journal of Chronic Disease Care. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits copying and redistribution of the material only for noncommercial purposes, provided the original work is properly cited.

1. Background

Evidence synthesis and informed decision-making require correct reporting of interventional studies in health research. Based on evidence, study validity and the usability of data in secondary research are undermined by incomplete and defective reporting (1). Poor reporting of study findings eliminates the possibility of replicating the results, comparing them with existing knowledge, generalizing them to other populations, or using them in reviews and/or meta-analyses (2). The employment of reporting guidelines has led to improved precision, transparency, completeness, value, and quality of publications in the field of health research (3). Reporting guidelines exist for many study designs. Examples of the most commonly used ones include the CONSORT statement (consolidated standards of reporting trials) (4), the TREND statement (transparent reporting of evaluations with nonrandomized designs) (5), the PRISMA statement (preferred reporting items for systematic reviews and meta-analyses) (6), the STARD statement (standards for reporting diagnostic accuracy) (7), and the STROBE statement (strengthening the reporting of observational studies in epidemiology) (8).

Randomized controlled trials (RCTs) are considered the gold standard of scientific evidence, and current clinical decision-making should be mainly based on their results (9). High-quality systematic reviews, which are mainly composed of RCTs, are major factors affecting policy making and clinical practice in health care.

Based on evidence, reports of RCTs often do not have the optimum quality. Incomplete and ambiguous reports distort readers' judgment about the reliability and validity of trial results and make it difficult for researchers to extract information and conduct systematic reviews (10). In the early 1990s, two international groups of experts recognized the problems with the reporting of RCTs, generated the impetus to improve it, and formulated the first sets of RCT reporting guidelines. Shortly afterward, the work of these two groups, the Asilomar Working Group's Recommendations for Reporting of Clinical Trials in the Biomedical Literature and the Standardized Reporting of Trials statement by Canadian experts, was merged under the leadership of the Journal of the American Medical Association to produce the first CONSORT statement in 1996. To eliminate the existing concerns, the CONSORT statement, developed by the CONSORT Group, delineates the items that should be included in RCT publications to ensure appropriate and complete reporting. Subsequent revisions of the CONSORT statement were published in 2001 and 2010. The CONSORT statement contains a 25-item checklist that describes how to report the title, abstract, introduction, methods, results, and discussion, as well as trial registration, access to the study protocol, and sources of research funding (11, 12).

Although RCTs are considered the best design for examining causal relationships and intervention effectiveness, they are not always appropriate or feasible; instead, studies with nonrandomized designs are frequently used (1). The TREND statement was published by Des Jarlais et al. in 2004 (5) to improve the reporting quality of nonrandomized trials in behavioral and public health research. The CONSORT (2001) guideline was the basis for designing and developing the TREND checklist, which focuses on experimental studies with nonrandomized designs and on reporting elements such as the intervention and comparison conditions, the research design, and the methods of adjusting for possible biases. The TREND checklist includes 22 items (59 subitems) addressing the different parts of an article: title, abstract, introduction, methods, results, and discussion (5). On both the CONSORT and TREND checklists, many items have two or more subitems, and response options are dichotomous (yes or no).

The adoption of reporting guidelines by professional journals is a publication trend that is likely to become more evident in the future; it assists authors in composing their submissions and reviewers in assessing the merits of manuscripts. Ultimately, as published studies meet more rigorous standards of review, their contribution to the body of knowledge in specialty areas of clinical practice is enhanced. Standardization of published studies in health care also facilitates the synthesis of findings in systematic reviews and meta-analyses, which are vital to the development of evidence-based approaches to care (13). The editorial team of the Jundishapur Journal of Chronic Disease Care (JJCDC) therefore decided to adopt publication guidelines for all types of research papers.

2. Objectives

The current study aimed to evaluate the reporting quality of experimental studies published in the Jundishapur Journal of Chronic Disease Care.

3. Methods

In the current cross-sectional study, all issues of the journal were reviewed, from the first (July 2012) to the latest (July 2018) at the time of the study. Three investigators separately examined each article and determined whether the authors had reported the items of the relevant checklist. Each investigator's response to each checklist question was recorded as yes or no. To evaluate the quality of RCT reporting, the most recent version of the CONSORT statement (CONSORT 2010), with 25 items, was used (11). For studies with a nonrandomized design, the 22-item TREND checklist was used. Descriptive statistics were computed with Microsoft Excel 2010.
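The dichotomous scoring and tallying described above can be sketched in a few lines. This is a minimal illustration only; the article identifiers, checklist items, and yes/no values below are hypothetical, not the journal's actual data.

```python
# Sketch of dichotomous checklist scoring: each article receives a yes/no
# (True/False) for every checklist item, and item-level adherence is the
# percentage of articles that reported that item.
# All records below are hypothetical, for illustration only.

responses = {
    "article_1": {"1a": True,  "1b": True, "8a": True},
    "article_2": {"1a": False, "1b": True, "8a": False},
    "article_3": {"1a": False, "1b": True, "8a": True},
}

def item_adherence(responses, item):
    """Percentage of articles that reported a given checklist item."""
    reported = sum(1 for r in responses.values() if r[item])
    return round(100 * reported / len(responses), 1)

for item in ["1a", "1b", "8a"]:
    print(item, item_adherence(responses, item))
```

The same counts-and-percentages layout appears in the "Reported, No. (%)" columns of Tables 1 and 2; in the study itself this tabulation was done in Microsoft Excel 2010.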

4. Results

The study identified 43 RCTs and 23 nonrandomized trials. The percentage of adherence to both checklists exceeded 50% from 2014 onward, but no article met all the criteria of the CONSORT and TREND statements. The results are shown in Tables 1 and 2 and Figures 1 - 3.

Table 1. The CONSORT Checklist to Assess RCTs

Section/Topic, then per item: Item No. | CONSORT Item | Reported, No. (%) | Unreported, No. (%)

Title and abstract
1a | Identification as a randomized trial in the title | 19 (44.1) | 24 (55.8)
1b | Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts) | 43 (100) | 0
Background and objectives
2a | Scientific background and explanation of rationale | 43 (100) | 0
2b | Specific objectives or hypotheses | 43 (100) | 0
Trial design
3a | Description of trial design (such as parallel, factorial) including allocation ratio | 40 (93.1) | 3 (6.9)
3b | Important changes to methods after trial commencement (such as eligibility criteria), with reasons | 0 | 43 (100)
Participants
4a | Eligibility criteria for participants | 43 (100) | 0
4b | Settings and locations where the data were collected | 41 (95.3) | 2 (4.65)
Interventions
5 | The interventions for each group with sufficient details to allow replication, including how and when they were actually administered | 43 (100) | 0
Outcomes
6a | Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed | 43 (100) | 0
6b | Any changes to trial outcomes after the trial commenced, with reasons | 0 | 43 (100)
Sample size
7a | How sample size was determined | 35 (81.39) | 8 (18.60)
7b | When applicable, explanation of any interim analyses and stopping guidelines | 0 | 43 (100)
Sequence generation
8a | Method used to generate the random allocation sequence | 28 (65.1) | 15 (34.8)
8b | Type of randomization; details of any restriction (such as blocking and block size) | 13 (30.2) | 30 (69.7)
Allocation concealment mechanism
9 | Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned | 5 (11.6) | 38 (88.3)
Implementation
10 | Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions | 4 (9.3) | 39 (90.6)
Blinding
11a | If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how | 2 (4.65) | 41 (95.3)
11b | If relevant, description of the similarity of interventions | 0 | 43 (100)
Statistical methods
12a | Statistical methods used to compare groups for primary and secondary outcomes | 33 (76.7) | 10 (23.2)
12b | Methods for additional analyses, such as subgroup analyses and adjusted analyses | 17 (39.5) | 26 (60.4)
Participant flow (a diagram is strongly recommended)
13a | For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analyzed for the primary outcome | 41 (95.3) | 2 (4.65)
13b | For each group, losses and exclusions after randomization, together with reasons | 12 (27.9) | 31 (72)
Recruitment
14a | Dates defining the periods of recruitment and follow-up | 24 (55.8) | 19 (44.1)
14b | Why the trial ended or was stopped | 0 | 43 (100)
Baseline data
15 | A table showing baseline demographic and clinical characteristics for each group | 40 (93) | 3 (6.9)
Numbers analyzed
16 | For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups | 14 (32.5) | 29 (67.4)
Outcomes and estimation
17a | For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) | 31 (72) | 12 (27.9)
17b | For binary outcomes, presentation of both absolute and relative effect sizes is recommended | 0 | 43 (100)
Ancillary analyses
18 | Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory | 24 (55.8) | 19 (44.1)
Harms
19 | All important harms or unintended effects in each group (for specific guidance see CONSORT for harms) | 0 | 43 (100)
Limitations
20 | Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses | 21 (48.8) | 22 (51.1)
Generalizability
21 | Generalizability (external validity, applicability) of the trial findings | 41 (95.3) | 2 (4.65)
Interpretation
22 | Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence | 43 (100) | 0
Other information
Registration
23 | Registration number and name of trial registry | 35 (81.39) | 8 (18.6)
Protocol
24 | Where the full trial protocol can be accessed, if available | 0 | 43 (100)
Funding
25 | Sources of funding and other support (such as supply of drugs), role of funders | 43 (100) | 0
Table 2. The TREND Checklist to Assess Nonrandomized Trials

Section/Topic (Item No.), then per subitem: Descriptor | Reported, No. (%) | Unreported, No. (%)

Title and abstract (1)
Information on how units were allocated to interventions | 23 (100) | 0
Structured abstract recommended | 23 (100) | 0
Information on target population or study sample | 23 (100) | 0
Background (2)
Scientific background and explanation of rationale | 23 (100) | 0
Theories used in designing behavioral interventions | 14 (60.8) | 9 (39.1)
Participants (3)
Eligibility criteria for participants, including criteria at different levels in recruitment/sampling plan (e.g., cities, clinics, subjects) | 22 (95.6) | 1 (4.3)
Method of recruitment (e.g., referral, self-selection), including the sampling method if a systematic sampling plan was implemented | 21 (91.3) | 2 (8.6)
Recruitment setting | 23 (100) | 0
Settings and locations where the data were collected | 23 (100) | 0
Interventions (4)
Details of the interventions intended for each study condition and how and when they were actually administered, specifically including: | 23 (100) | 0
Content: what was given? | 23 (100) | 0
Delivery method: how was the content given? | 23 (100) | 0
Unit of delivery: how were the subjects grouped during delivery? | 20 (86.9) | 3 (13)
Deliverer: who delivered the intervention? | 17 (73.9) | 6 (26)
Setting: where was the intervention delivered? | 21 (91.3) | 2 (8.6)
Exposure quantity and duration: how many sessions or episodes or events were intended to be delivered? How long were they intended to last? | 19 (82.6) | 4 (17.39)
Time span: how long was it intended to take to deliver the intervention to each unit? | 9 (39.1) | 14 (60.8)
Activities to increase compliance or adherence (e.g., incentives) | 9 (39.1) | 14 (60.8)
Objectives (5)
Specific objectives and hypotheses | 23 (100) | 0
Outcomes (6)
Clearly defined primary and secondary outcome measures | 21 (91.3) | 2 (8.6)
Methods used to collect data and any methods used to enhance the quality of measurements | 20 (86.9) | 3 (13)
Information on validated instruments such as psychometric and biometric properties | 15 (65.2) | 8 (34.7)
Sample size (7)
How sample size was determined and, when applicable, explanation of any interim analyses and stopping rules | 10 (43.4) | 13 (56.5)
Assignment method (8)
Unit of assignment (the unit being assigned to study condition, e.g., individual, group, community) | 23 (100) | 0
Method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, minimization) | 21 (91.3) | 2 (8.6)
Inclusion of aspects employed to help minimize potential bias induced due to non-randomization (e.g., matching) | 5 (21.7) | 18 (78.26)
Blinding (masking) (9)
Whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to study condition assignment; if so, statement regarding how the blinding was accomplished and how it was assessed | 0 | 23 (100)
Unit of analysis (10)
Description of the smallest unit that is being analyzed to assess intervention effects (e.g., individual, group, or community) | 23 (100) | 0
If the unit of analysis differs from the unit of assignment, the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis) | 23 (100) | 0
Statistical methods (11)
Statistical methods used to compare study groups for primary outcome(s), including complex methods for correlated data | 23 (100) | 0
Statistical methods used for additional analyses, such as subgroup analyses and adjusted analyses | 8 (34.7) | 15 (65.2)
Methods for imputing missing data, if used | 0 | 23 (100)
Statistical software or programs used | 23 (100) | 0
Participant flow (12)
Flow of participants through each stage of the study: enrollment, assignment, allocation and intervention exposure, follow-up, analysis (a diagram is strongly recommended) | 10 (43.4) | 13 (56.5)
Enrollment: the numbers of participants screened for eligibility, found to be eligible or not eligible, declined to be enrolled, and enrolled in the study | 0 | 23 (100)
Assignment: the numbers of participants assigned to a study condition | 11 (47.8) | 12 (52.17)
Allocation and intervention exposure: the number of participants assigned to each study condition and the number of participants who received each intervention | 16 (69.5) | 7 (30.4)
Follow-up: the number of participants who completed the follow-up or did not complete the follow-up (i.e., lost to follow-up), by study condition | 5 (21.7) | 18 (78.26)
Analysis: the number of participants included in or excluded from the main analysis, by study condition | 3 (13) | 20 (86.9)
Description of protocol deviations from study as planned, along with reasons | 2 (8.6) | 21 (91.3)
Recruitment (13)
Dates defining the periods of recruitment and follow-up | 11 (47.8) | 12 (52.17)
Baseline data (14)
Baseline demographic and clinical characteristics of participants in each study condition | 11 (47.8) | 12 (52.17)
Baseline characteristics for each study condition relevant to specific disease prevention research | 23 (100) | 0
Baseline comparisons of those lost to follow-up and those retained, overall and by study condition | 15 (65.2) | 8 (34.7)
Comparison between study population at baseline and target population of interest | 21 (91.3) | 2 (8.6)
Baseline equivalence (15)
Data on study group equivalence at baseline and statistical methods used to control for baseline differences | 21 (91.3) | 2 (8.6)
Numbers analyzed (16)
Number of participants (denominator) included in each analysis for each study condition, particularly when the denominators change for different outcomes; statement of the results in absolute numbers when feasible | 12 (52.1) | 11 (47.8)
Indication of whether the analysis strategy was intention to treat or, if not, description of how non-compliers were treated in the analyses | 23 (100) | 0
Outcomes and estimation (17)
For each primary and secondary outcome, a summary of results for each study condition, and the estimated effect size and a confidence interval to indicate the precision | 23 (100) | 0
Inclusion of null and negative findings | 0 | 23 (100)
Inclusion of results from testing pre-specified causal pathways through which the intervention was intended to operate, if any | 2 (8.6) | 21 (91.3)
Ancillary analyses (18)
Summary of other analyses performed, including subgroup or restricted analyses, indicating which are pre-specified or exploratory | 9 (39.1) | 14 (60.8)
Adverse events (19)
Summary of all important adverse events or unintended effects in each study condition (including summary measures, effect size estimates, and confidence intervals) | 0 | 23 (100)
Interpretation (20)
Interpretation of the results, taking into account study hypotheses, sources of potential bias, imprecision of measures, multiplicative analyses, and other limitations or weaknesses of the study | 16 (69.5) | 7 (30.4)
Discussion of results taking into account the mechanism by which the intervention was intended to work (causal pathways) or alternative mechanisms or explanations | 18 (78.2) | 5 (21.7)
Discussion of the success of and barriers to implementing the intervention, fidelity of implementation | 13 (56.5) | 10 (43.4)
Discussion of research, programmatic, or policy implications | 11 (47.8) | 12 (52.1)
Generalizability (21)
Generalizability (external validity) of the trial findings, taking into account the study population, the characteristics of the intervention, length of follow-up, incentives, compliance rates, specific sites/settings involved in the study, and other contextual issues | 18 (78.2) | 5 (21.7)
Overall evidence (22)
General interpretation of the results in the context of current evidence and current theory | 15 (65.2) | 8 (34.7)
Figure 1. The percentage of adherence to CONSORT and TREND checklists by articles per year (N = 66)
Figure 2. The percentage of CONSORT items reported by articles (N = 43)
Figure 3. The percentage of TREND items reported by articles (N = 23)

5. Discussion

Accurate reporting of experimental studies such as RCTs is a significant dimension of good research and essential for health providers and other researchers to appraise the findings (14). The current study evaluated the reporting quality of experimental studies published in JJCDC using the CONSORT and TREND statement checklists. According to the results, adherence to the CONSORT and TREND statements has been acceptable since 2014, but no article met all the criteria. In other words, reporting quality improved over time but has not yet reached the optimum level. Similar results are reported in other studies, which indicated that the desired reporting items are not thoroughly followed by authors (14-17).

A large part of the low reporting quality may be due to authors' lack of knowledge about the standard checklists. Therefore, informing and training authors and reviewers in their use would be helpful.

On the other hand, journal editors should endorse the guidelines and checklists appropriate to each study design, and authors should report their research findings according to the existing guidelines. Reporting according to guidelines and checklists has been shown to affect reporting quality (6, 18).

In this regard, links to these guidelines are available on the JJCDC home page for authors, reviewers, and readers. The CONSORT and TREND statement guidelines have also been adopted for the publication of randomized and nonrandomized designs in JJCDC.



  • 1.

    Fuller T, Pearson M, Peters JL, Anderson R. Evaluating the impact and use of transparent reporting of evaluations with non-randomised designs (TREND) reporting guidelines. BMJ Open. 2012;2(6). e002073. doi: 10.1136/bmjopen-2012-002073.

  • 2.

    Altman DG, Simera I. Responsible reporting of health research studies: Transparent, complete, accurate and timely. J Antimicrob Chemother. 2010;65(1):1-3. doi: 10.1093/jac/dkp410. [PubMed: 19900949]. [PubMed Central: PMC2793689].

  • 3.

    EQUATOR Network Group. Guidelines for reporting health research: How to promote their use in your journal. [cited 2018 Aug 20]. Available from: www.equator-network.org.

  • 4.

    Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276(8):637-9. doi: 10.1001/jama.1996.03540080059030. [PubMed: 8773637].

  • 5.

    Des Jarlais DC, Lyles C, Crepaz N; Trend Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. Am J Public Health. 2004;94(3):361-6. doi: 10.2105/AJPH.94.3.361. [PubMed: 14998794]. [PubMed Central: PMC1448256].

  • 6.

    Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009;6(7). e1000097. doi: 10.1371/journal.pmed.1000097. [PubMed: 19621072]. [PubMed Central: PMC2707599].

  • 7.

    Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative. BMJ. 2003;326(7379):41-4. doi: 10.1136/bmj.326.7379.41. [PubMed: 12511463]. [PubMed Central: PMC1124931].

  • 8.

    Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration. PLoS Med. 2007;4(10). e297. doi: 10.1371/journal.pmed.0040297. [PubMed: 17941715]. [PubMed Central: PMC2020496].

  • 9.

    Falci SG, Marques LS. CONSORT: When and how to use it. Dental Press J Orthod. 2015;20(3):13-5. doi: 10.1590/2176-9451.20.3.013-015.ebo. [PubMed: 26154451]. [PubMed Central: PMC4520133].

  • 10.

    Pandis N, Chung B, Scherer RW, Elbourne D, Altman DG. CONSORT 2010 statement: Extension checklist for reporting within person randomised trials. BMJ. 2017;357:j2835. doi: 10.1136/bmj.j2835. [PubMed: 28667088]. [PubMed Central: PMC5492474].

  • 11.

CONSORT Group. How CONSORT began. [cited 2011 Jan 31]. Available from: http://www.consort-statement.org/about-consort/history/.

  • 12.

    Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185(5):263-7. [PubMed: 16948622].

  • 13.

Peters JPM, Hooft L, Grolman W, Stegeman I. Assessment of the quality of reporting of randomised controlled trials in otorhinolaryngologic literature - adherence to the CONSORT statement. PLoS One. 2015;10(3). e0122328. doi: 10.1371/journal.pone.0122328.

  • 14.

    Moradi MT, Asadi-Samani M, Mobasheri M. [Evaluating the quality of materials and methods for writings of final proposal in clinical trial studies in Shahrekord University of Medical Sciences based on Consort checklist]. J Clin Nurs Midwifery. 2014;2(4):1-7. Persian.

  • 15.

    Talachi H, Jamshidi Orak R, Ravaghi H, Amanollahi A. [Assessment of the quality of methodology reporting in the randomized trials]. J Health Admin. 2012;15(48):81-92. Persian.

  • 16.

    Thoma A, Chew RT, Sprague S, Veltri K. Application of the CONSORT statement to randomized controlled trials comparing endoscopic and open carpal tunnel release. Can J Plast Surg. 2006;14(4):205-10. doi: 10.1177/229255030601400401. [PubMed: 19554136]. [PubMed Central: PMC2686052].

  • 17.

    Ayoobi F, Rahmani MR, Assar S, Jalalpour S, Rezaeian M. [The consort (consolidated standards of reporting trials)]. J Rafsanjan Univ Med Sci. 2017;15(10):977-94. Persian.

  • 18.

    Turner L, Shamseer L, Altman DG, Weeks L, Peters J, Kober T, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11:MR000030. doi: 10.1002/14651858.MR000030.pub2. [PubMed: 23152285].