Evidence-based decision-making: On the use of systematicity cases to check the compliance of reviews with reporting guidelines such as PRISMA 2020

Published: 01 May 2023

Highlights

Systematic reviews aim to provide high-quality evidence-based syntheses.
They help inform recommendations for practice and policy but are often poorly reported.
Reporting guidelines have been developed to support the reporting of systematic reviews.
To assure that a review adheres to a reporting guideline, we use assurance cases.
We rely on expert systems to implement assurance cases and support decision-making.

Abstract

Background and context

Systematic reviews aim to provide high-quality, evidence-based syntheses of efficacy under real-world conditions and to clarify the correlations between exposures and outcomes. They are increasingly popular and serve several stakeholder groups (e.g., healthcare providers, researchers, educators, students, journal editors, policy makers, managers), whom they help make informed recommendations for practice or policy.

Problem

Systematic reviews usually exhibit low methodological and reporting quality. To tackle this, reporting guidelines have been developed to support the reporting and assessment of systematic reviews. Following such guidelines is crucial to ensure that a review is transparent, complete, trustworthy, reproducible, and unbiased. However, systematic reviewers often fail to adhere to existing reporting guidelines. This can significantly decrease the quality of the reviews they report and may result in systematic reviews that lack methodological rigor, yield findings of low credibility, and mislead decision-makers.

Methods

To assure that a review complies with reporting guidelines, we rely on assurance cases: an emerging way of arguing, in an extensive manner, that a safety-critical system satisfies its requirements, and of checking the compliance of such systems with standards to support their certification. Since the nature of assurance cases makes them applicable to various domains and requirements/properties, we propose a new type of assurance case called a systematicity case. Systematicity cases focus on the systematicity property and allow arguing that a review is systematic, i.e., that it sufficiently complies with the targeted reporting guideline. The most widespread reporting guidelines include PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses). We measure the confidence in the systematicity case representing a review as a means to quantify the systematicity of that review, i.e., the extent to which that review is systematic. We rely on rule-based artificial intelligence to create a knowledge-based system that automates the inference mechanism a given systematicity case embodies and thereby supports a decision regarding the systematicity of a given review.
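The idea of a rule-based check of guideline compliance can be sketched as follows. This is a minimal illustration, not the paper's knowledge-based system: the five PRISMA item identifiers, the equal weighting, and the 70% decision threshold are all assumptions made here for the example; the actual PRISMA 2020 checklist has 27 items and the paper's confidence measurement is richer than a simple percentage.

```python
# Hypothetical sketch of a rule-based systematicity check.
# Item identifiers, weighting, and threshold are illustrative assumptions.

PRISMA_ITEMS = {
    "title": "Identify the report as a systematic review",
    "eligibility_criteria": "Specify inclusion and exclusion criteria",
    "information_sources": "Specify all databases and registers searched",
    "search_strategy": "Present the full search strategies",
    "risk_of_bias": "Specify the methods used to assess risk of bias",
}

def systematicity(reported_items: set) -> float:
    """Percentage of PRISMA items the review reports (equal weighting)."""
    satisfied = [item for item in PRISMA_ITEMS if item in reported_items]
    return 100.0 * len(satisfied) / len(PRISMA_ITEMS)

def verdict(score: float, threshold: float = 70.0) -> str:
    """Toy decision rule: deem the review systematic above a threshold."""
    return "systematic" if score >= threshold else "not sufficiently systematic"

# A fictitious review that reports three of the five items above.
review = {"title", "eligibility_criteria", "information_sources"}
score = systematicity(review)
print(f"{score:.1f}% -> {verdict(score)}")  # 60.0% -> not sufficiently systematic
```

In the full approach, each checklist item would correspond to one or more rules in the knowledge base, and the confidence in the overall argument, rather than a flat percentage, would drive the decision.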

Results

An empirical evaluation of 25 reviews (self-identifying as systematic) showed that these reviews exhibit suboptimal systematicity. More specifically, the systematicity of the analyzed reviews varies between 32.96% and 66.49%, with an average of 54.42%. More effort is therefore needed to report systematic reviews of higher quality, and further experiments are needed to explore the factors hindering and/or assuring the systematicity of reviews.
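The reported aggregates (range and mean over per-review systematicity scores) follow directly from summary statistics over the 25 scores. A sketch with made-up sample scores (the study's per-review data are not reproduced here):

```python
# Illustrative only: hypothetical per-review systematicity scores (percent).
# The paper reports a range of 32.96%-66.49% and a mean of 54.42% over 25 reviews;
# these five values are invented for the example.
scores = [32.96, 48.10, 55.30, 61.75, 66.49]

low, high = min(scores), max(scores)
mean = sum(scores) / len(scores)
print(f"range: {low:.2f}%-{high:.2f}%, mean: {mean:.2f}%")
# range: 32.96%-66.49%, mean: 52.92%
```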

Audience

The main beneficiaries of our work are journal editors, peer reviewers, managers, policymakers, researchers, students, insurers, evidence users, and organizations developing reporting guidelines.


Cited By

• (2023) Applications of convolutional neural networks in education. Expert Systems with Applications: An International Journal, 231(C). https://doi.org/10.1016/j.eswa.2023.120621. Online publication date: 30-Nov-2023.


Published In

Expert Systems with Applications: An International Journal, Volume 217, Issue C, May 2023, 1039 pages

Publisher

Pergamon Press, Inc., United States


          Author Tags

          1. Artificial intelligence
          2. Knowledge representation and reasoning
          3. Knowledge-based systems
          4. Assurance cases (systematicity cases)
          5. Reporting guideline adherence
          6. PRISMA (Preferred Reporting Items for Systematic reviews and meta-Analyses) statement

          Qualifiers

          • Research-article
