
WHAT IS A JOURNAL CLUB?

With its September 2002 issue, the American Journal of Critical Care debuts a new
feature, the AJCC Journal Club. Each issue of the journal will now feature an AJCC
Journal Club article that provides questions and discussion points to stimulate a journal
club discussion in which participants can evaluate new research and its applicability to
clinical nursing practice. We encourage critical care nurses to use the AJCC Journal Club
to assist them in critically analyzing research to promote a better understanding of the
research process and to advocate evidence-based nursing practice.
The general purpose of a journal club is to facilitate the review of a specific research
study and to discuss implications of the study for clinical practice. A journal club has been
defined as an educational meeting in which a group of individuals discuss current articles,
providing a forum for a collective effort to keep up with the literature.1,2 There are many
advantages of participating in a journal club, including keeping abreast of new knowledge,
promoting awareness of current nursing research findings, learning to critique and appraise
research, becoming familiar with the best current clinical research, and encouraging
research utilization.3,4

How to Begin
The steps to beginning a journal club at your school, hospital, or medical institution
are simple:
1. Post and distribute copies of the research article and the journal club discussion
questions to interested persons
2. Set up a convenient meeting time and location (eg, monthly)
3. Identify a facilitator for the meeting (initially, this could be a clinical educator,
clinical nurse specialist, nurse practitioner, nurse manager, or senior staff member,
with journal club members then taking turns to lead subsequent journal club
sessions)
4. Hold the journal club (encourage active participation of those attending by using
the discussion questions)
5. Evaluate the journal club (eg, at the end of the session, gather feedback from
participants). Determine how the next journal club meeting could be made more
beneficial (eg, encourage more attendance, hold more than one session, or
tape-record the session for those unable to attend)
6. Schedule the next meeting, using the AJCC Journal Club feature article

Several factors are key to a successful journal club, including promoting interest,
attendance, and involvement. Having a session leader to start and facilitate discussion
can help ensure that journal club meetings are productive. Scheduling the journal club
at a time and location convenient for staff to attend is also important.
The value of a journal club is that it can promote a better understanding of the research
process and an improved ability to critically appraise research. Reading and critiquing
research is especially beneficial for critical care nurses because it facilitates the
evaluation of research for use in clinical practice.

REFERENCES
1. Dwarakanath LS, Khan KS. Modernizing the journal club. Hosp Med. 2000;16:425-427.
2. Sidorov J. How are internal medicine residency journal clubs organized, and what makes them successful? Arch Intern Med. 1995;155:1193-1197.
3. Shearer J. The Nursing Research Journal Club: an ongoing program to promote nursing research in a community hospital. J Nurs Staff Dev. 1995;11:104-105.
4. Kirchhoff KT, Beck SL. Using the journal club as a component of the research utilization process. Heart Lung. 1995;24:246-250.

GUIDELINES FOR CRITIQUING RESEARCH

The overall goal of a research critique is to formulate a general evaluation of the merits
of a study and to evaluate its applicability to clinical practice. A research critique goes
beyond a review or summary of a study and carefully appraises a study’s strengths and
limitations. The critique should reflect an objective assessment of a study’s validity and
significance. A research study can be evaluated by its component parts, and a thorough
research critique examines all aspects of a research study. Some common questions used to
guide a research critique include:

A. Description of the Study
• What was the purpose of the research?
• Does the problem have significance to nursing?
• Why is the problem significant/important?
• Identify the research questions, objectives, or hypothesis(es)

B. Literature Evaluation
• Does the literature review seem thorough?
• Does the review include recent literature?
• Does the content of the review relate directly to the research problem?
• Evaluate the research cited in the literature review and the argument developed
to support the need for this study.

C. Conceptual Framework
• Does the research report use a theoretical or conceptual model for the study?
• Does the model guide the research and seem appropriate?
• How did it contribute to the design and execution of the study?
• Are the findings linked back to the model or framework?

D. Sample
• Who were the subjects?
• What were the inclusion criteria for participation in the study?
• How were subjects recruited?
• Are the size and key characteristics of the sample described?
• How representative is the sample?

E. Methods and Design
• Describe the study methods
• How were the data collected?
• Are the data collection instruments clearly described?
• Were the instruments appropriate measures of the variables under study?
• Describe and evaluate the reliability of the instruments. (Reliability refers to
the consistency of the measures: will the same results be found with subsequent
testing?)
• Describe and evaluate the validity of the instruments. (Validity refers to the
ability of the instrument to measure what it proposes to measure.)

F. Analysis
• How were the data analyzed?
• Do the selected statistical tests appear appropriate?
• Is a rationale provided for the use of selected statistical tests?
• Were the results significant?

G. Results
• What were the findings of the research?
• Are the results presented in a clear and understandable way?
• Discuss the interpretations of the study by the authors
• Are the interpretations consistent with the results?
• Were the conclusions accurate and relevant to the problem the authors identified?
• Were the authors’ recommendations appropriate?
• Are study limitations addressed?

H. Clinical Significance
• How does the study contribute to the body of knowledge?
• Discuss implications related to practice/education/research
• What additional questions does the study raise?

GLOSSARY OF RESEARCH TERMS
Abstract—a brief summary of the research study

Analysis—the process of synthesizing data to answer the research question

Alpha—in tests of statistical significance, the alpha level indicates the probability of
committing a Type I error; in estimates of internal consistency, a reliability
coefficient, as in the Cronbach alpha

Analysis of variance—a statistical test for comparing mean scores among 3 or more groups

Attrition—loss of study participants during a study. Attrition can be a threat to the
internal validity of a study, and it can change the composition of the study sample.

Beta—in statistical testing, the probability of a Type II error; in multiple regression,
the standardized coefficients indicating the relative weights of the independent
variables

Bias—any influence that can distort the results of a study

Case study—a study design that provides an in-depth review of a single subject or case

Causal relationship—a relationship between 2 variables in which the presence or absence
of one variable determines the presence or absence of the other

Chi-square test—a nonparametric statistical test used to determine relationships between
2 nominal-level variables

Cluster sampling—selecting a random sample from clustered groups

Coefficient alpha (Cronbach alpha)—a reliability index that estimates the internal
consistency of a measure with several items or subparts

Conceptual map—a diagram representing the relationships among variables

Concurrent validity—the degree to which scores on an instrument are correlated with some
external criterion measured at the same time

Confidence interval—a range of values within which a parameter is estimated to fall

Confounding variable—a variable that might affect the dependent variable; also termed
"extraneous variable"

Consent form—a written document reflecting agreement between a researcher and subject
and outlining the subject's participation in a study

Construct validity—the degree to which an instrument measures the construct intended

Content analysis—the process of organizing narrative qualitative information according
to themes and concepts

Content validity—the degree to which items in an instrument reflect the underlying
concept

Control group—subjects in a research study who do not receive the experimental treatment

Convergent validity—a type of validity that reflects the degree to which scores from an
instrument resemble scores from a different measure of the same construct

Correlation coefficient—an index that reflects the degree of relationship between 2
variables. A perfect positive relationship is +1, no relationship is 0, and a perfect
negative relationship is -1.

Criterion validity—the degree to which scores on an instrument are correlated with some
external criterion

Cronbach alpha—a reliability index that reflects the internal consistency of a measure

Cross-sectional study—a study design that collects data at a single point in time for
the purpose of inferring trends over time

Data cleaning—the process of finding and correcting errors in the data set

Degrees of freedom—a concept used with statistical tests that refers to the number of
sample values that are free to vary. In a sample, all but one value is free to vary,
so the degrees of freedom is often N - 1.

Descriptive study—a study that defines or describes a population or phenomenon

Descriptive statistics—methods used to describe or summarize the characteristics of
data in a sample

Dependent variable—the outcome variable of interest

Dichotomous variable—a variable with only 2 categories

Effect size—a statistical expression of the magnitude of the relationship between 2
variables

Experimental group—subjects in a research study who receive the experimental treatment
or intervention

Exploratory study—a type of study design used to explore or gain insights into a
phenomenon

Ex post facto—a type of research design that studies something after it has occurred

Experiment—a research study in which the independent variables are manipulated and
subjects are randomly assigned to different conditions

External validity—how representative the results of the study are (generalizability)

Face validity—the degree to which a test appears to measure a concept, based on the
judgment of experts

Factor analysis—a statistical procedure for reducing a large set of variables into
smaller sets of related variables

Focused interview—an interview that is partially structured or semistructured

Frequency distribution—a display of data values from the lowest to the highest, along
with a count of the number of times each value occurred

Grounded theory—a qualitative research method in which categories, theories, and
propositions about their relationships are developed from the data

Halo effect—the tendency for an observer to rate certain subjects high or low because
of the overall impression the subject gives the observer

Hawthorne effect—changes that occur in people's behavior because they know they are
being studied

Histogram—a graphic display of data frequency using rectangular bars with heights equal
to the frequency count

Hypothesis—a statement of the relationship between 2 or more study variables

Independent variable—the conditions or factors that are explored in relationship to
their influence on the dependent variable

Indirect (inverse) relationship—a negative correlation between 2 variables

Internal consistency reliability—the degree to which all items in a scale measure the
same dimension of a concept

Internal validity—the degree to which the independent variable, rather than extraneous
factors, is responsible for an observed effect

Inter-rater reliability—the reliability of measures across different raters

Interval scale—a scale that rank orders a variable with equal distances between
measurement points (eg, temperature data)

Instruments—devices or techniques used to collect data

Likert scale—a scale of measurement in which respondents are asked to respond to
statements based on how much they agree or disagree

Literature review—the process of searching published work to find out what is known
about a research topic

Longitudinal study—a research study that is conducted over time and measures the same
variables

Mean—the average value; a measure of central tendency. The mean is obtained by dividing
the sum of values by the total number of values.

Median—the middle score

Mode—the value that occurs most frequently
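These three measures of central tendency can be computed with Python's built-in statistics module. The sample below is a hypothetical set of 8 scores, invented for illustration only:

```python
import statistics

# Hypothetical sample of 8 scores (for illustration only)
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean_score = statistics.mean(scores)      # sum of values / number of values
median_score = statistics.median(scores)  # middle score
mode_score = statistics.mode(scores)      # the value occurring most often

print(mean_score, median_score, mode_score)  # 5 4.5 4
```

With an even number of values there is no single middle score, so the median is the average of the 2 middle values (4 and 5 here, giving 4.5).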

Multiple regression—a statistical procedure for understanding the effects of 2 or more
independent variables on a dependent variable

N—used to designate the total sample size

n—used to designate the number of subjects in a subgroup

Nominal scale—a scale that measures data by assigning characteristics to categories
(eg, male = 1, female = 2)

Nonparametric statistics—tests that can be used to analyze nominal and ordinal data,
or data that are not normally distributed

Null hypothesis—a statement that no relationship exists between study variables

Ordinal scale—a scale that rank orders values

Parametric statistics—tests used to analyze interval-level data that are normally
distributed

Pearson's r—a correlation coefficient that designates the magnitude of a relationship
between 2 variables

Phenomenology—a qualitative research method that focuses on the lived experience of
subjects

Pilot study—a small-scale study conducted to test the plan and methods of a research
study

Power analysis—a way of calculating the number of subjects needed for the results of a
study to be considered statistically significant

Quasi-experimental—a type of research design in which subjects are not randomly assigned
to treatment conditions, but manipulation of the independent variable does occur

Qualitative analysis—a method of analyzing non-numerical data (such as words or
statements from subjects)

r—the symbol used to designate a correlation coefficient

R2—the symbol for the squared multiple correlation coefficient, which indicates the
amount of variance in the dependent variable accounted for or explained by the
independent variables

Random sample—a sample selected in a way that ensures that every subject has an equal
chance of being included

Range—the dispersion of data; the difference between the smallest and largest values

Ratio scale—a scale that has a true zero point and equal distances between scores

Regression—a statistical procedure for predicting values of a dependent variable based
on the values of one or more independent variables

Reliability—the consistency of a measure; a reliable instrument produces consistent
results or data with repeated use

Research utilization—implementation of research findings in practice

Response rate—the rate of participation in a study

Scatter diagram (scatter plot)—a graphic presentation of the correlation between 2
variables

Significance level—the probability that an observed relationship could be caused by
chance. A significance level of .05 indicates that the relationship would be found by
chance only 5 times out of 100.

Standard deviation—a measure of the variability of data, computed as the square root of
the variance; it indicates the typical amount by which values deviate from the mean

Standard score (z-score)—how many standard deviations away from the mean a particular
score is located
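As a sketch of how a standard deviation and a z-score relate, the Python fragment below uses the standard library's statistics.pstdev, which computes the population standard deviation (dividing by N; many texts instead use the sample formula, dividing by N - 1). The scores are hypothetical, invented for illustration:

```python
import statistics

# Hypothetical sample of 8 scores (for illustration only)
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean_score = statistics.mean(scores)  # 5
sd = statistics.pstdev(scores)        # population standard deviation: 2.0

# z-score: how many standard deviations a score of 9 lies from the mean
z = (9 - mean_score) / sd

print(sd, z)  # 2.0 2.0
```

A z-score of 2.0 means the score of 9 sits 2 standard deviations above the mean of 5.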

Test-retest reliability—a method for determining the reliability of an instrument by
administering it on 2 or more occasions to the same respondents

Triangulation—the use of several methods to collect data on the same concept

T-test—a statistical test used to determine whether the means of 2 groups are
significantly different

Type I error (alpha error)—occurs when it is concluded that a difference between groups
is not due to chance when in fact it is (rejecting a true null hypothesis)

Type II error (beta error)—occurs when it is concluded that differences between groups
were due to chance when in fact they were due to the effects of the independent
variable (accepting a false null hypothesis)

Variable—a characteristic, attribute, or outcome

Variability—the degree to which values are widely different or dispersed

Validity—the ability of an instrument to measure what it proposes to measure

Variance—a measure of the dispersion of scores, computed as the average of the squared
deviations from the mean

Z-score—a standard score, expressed in terms of standard deviations from the mean

