
Histogram

A histogram is an approximate representation of the distribution of numerical data. The term was first introduced by Karl Pearson.[1]
To construct a histogram, the first step is to "bin" (or "bucket") the
range of values—that is, divide the entire range of values into a
series of intervals—and then count how many values fall into each
interval. The bins are usually specified as consecutive, non-
overlapping intervals of a variable. The bins (intervals) must be
adjacent and are often (but not required to be) of equal size.[2]
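As an illustration of this binning-and-counting step, here is a minimal sketch in Python, assuming NumPy is available and using made-up sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)  # 500 sample values

# Divide the entire range of values into 7 equal-width bins ("buckets")
bin_edges = np.linspace(data.min(), data.max(), num=8)  # 8 edges -> 7 bins

# Count how many values fall into each interval
counts, edges = np.histogram(data, bins=bin_edges)

for left, right, c in zip(edges[:-1], edges[1:], counts):
    print(f"{left:6.2f} to {right:6.2f}: {c}")
```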

If the bins are of equal size, a bar is drawn over the bin with height proportional to the frequency—the number of cases in each bin. A histogram may also be normalized to display "relative" frequencies, showing the proportion of cases that fall into each of several categories, with the sum of the heights equaling 1.

However, bins need not be of equal width; in that case, the erected rectangle is defined to have its area proportional to the frequency of cases in the bin.[3] The vertical axis is then not the frequency but the frequency density—the number of cases per unit of the variable on the horizontal axis. Examples of variable bin width are displayed on Census Bureau data below.

As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.[4]

Histograms give a rough sense of the density of the underlying distribution of the data, and are often used for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.

[Infobox: Histogram. One of the Seven Basic Tools of Quality. First described by: Karl Pearson. Purpose: To roughly assess the probability distribution of a given variable by depicting the frequencies of observations occurring in certain ranges of values.]
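Both normalizations can be computed directly; a short sketch assuming NumPy (the density argument is a standard np.histogram option):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=1000)
edges = np.linspace(data.min(), data.max(), 9)  # 8 equal-width bins

counts, _ = np.histogram(data, bins=edges)

# Relative frequencies: the bar heights sum to 1
rel_freq = counts / counts.sum()

# Frequency density: count per unit of the variable per observation,
# so the total AREA of the histogram is 1
density, _ = np.histogram(data, bins=edges, density=True)

widths = np.diff(edges)
print(rel_freq.sum())            # 1.0
print((density * widths).sum())  # 1.0
```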

The histogram is one of the seven basic tools of quality control.[5]

Histograms are sometimes confused with bar charts. A histogram is used for continuous data, where the
bins represent ranges of data, while a bar chart is a plot of categorical variables. Some authors recommend
that bar charts have gaps between the rectangles to clarify the distinction.[6][7]

A bar graph and a histogram are two common types of graphical representations of data. While they may
look similar, there are some key differences between the two that are important to understand.

A bar graph is a chart that uses bars to represent the frequency or quantity of different categories of data. The bars can be drawn either vertically or horizontally and are arranged so that the different categories are easy to compare. Bar graphs are useful for displaying data that can be divided into discrete categories, such as the number of students in different grade levels at a school.
A histogram, on the other hand, is a graph that shows the distribution of numerical data. It is a type of bar
chart that shows the frequency or number of observations within different numerical ranges, called bins.
The bins are usually specified as consecutive, non-overlapping intervals of a variable. The histogram
provides a visual representation of the distribution of the data, showing the number of observations that fall
within each bin. This can be useful for identifying patterns and trends in the data, and for making
comparisons between different datasets.[8]
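To make the distinction concrete, here is a brief sketch assuming Matplotlib and NumPy, with made-up illustrative data: a bar chart of categories drawn with gaps, next to a histogram of continuous values drawn with none.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)

# Bar chart: categorical variable, gaps between the rectangles
grades = ['9th', '10th', '11th', '12th']
students = [120, 115, 98, 103]
ax1.bar(grades, students, width=0.6)   # width < 1 leaves gaps
ax1.set_title('Bar chart (categories)')

# Histogram: continuous variable, adjacent bins with no gaps
heights = np.random.default_rng(8).normal(170, 10, size=500)
ax2.hist(heights, bins=10, edgecolor='black')
ax2.set_title('Histogram (continuous data)')

plt.show()
```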

Examples
This is the data for the histogram to the right, using 500 items:

Bin/Interval     Count/Frequency
−3.5 to −2.51    9
−2.5 to −1.51    32
−1.5 to −0.51    109
−0.5 to 0.49     180
0.5 to 1.49      132
1.5 to 2.49      34
2.5 to 3.49      4
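A histogram like this one can be drawn directly from the pre-counted bins; a minimal sketch assuming Matplotlib, using the edges and counts from the table above:

```python
import matplotlib.pyplot as plt

edges = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]
counts = [9, 32, 109, 180, 132, 34, 4]

# Bars of equal width with no gaps, heights proportional to the counts
plt.bar(edges[:-1], counts, width=1.0, align='edge', edgecolor='black')
plt.xlabel('Value')
plt.ylabel('Count/Frequency')
plt.title('Histogram of 500 items')
plt.show()
```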

The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "skewed right", "unimodal", "bimodal" or "multimodal".

[Figures: symmetric unimodal; skewed right; skewed left; bimodal; multimodal; symmetric.]
It is a good idea to plot the data using several different bin widths to learn more about it. Here is an example on tips given in a restaurant.

[Figures: Tips using a $1 bin width: skewed right, unimodal. Tips using a 10c bin width: still skewed right, multimodal with modes at $ and 50c amounts, which indicates rounding; also some outliers.]

The U.S. Census Bureau found that there were 124 million people who work outside of their homes.[9] Using their data on the time occupied by travel to work, the table below shows that the absolute number of people who responded with travel times of "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.

[Figure: Histogram of travel time (to work), US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.]

Data by absolute numbers
Interval  Width  Quantity  Quantity/width
0         5      4180      836
5         5      13687     2737
10        5      18618     3723
15        5      19634     3926
20        5      17981     3596
25        5      7190      1438
30        5      16369     3273
35        5      3212      642
40        5      4122      824
45        15     9200      613
60        30     6461      215
90        60     3435      57

This histogram shows the number of cases per unit interval as the height of each block, so that the area of
each block is equal to the number of people in the survey who fall into its category. The area under the
curve represents the total number of cases (124 million). This type of histogram shows absolute numbers,
with Q in thousands.
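The Quantity/width column and the variable-width bars can be reproduced with a short sketch, assuming Matplotlib and using the table's values (quantities in thousands):

```python
import matplotlib.pyplot as plt

# Travel-time intervals from the census table: left edge, width, quantity (thousands)
left     = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 60, 90]
width    = [5, 5,  5,  5,  5,  5,  5,  5,  5, 15, 30, 60]
quantity = [4180, 13687, 18618, 19634, 17981, 7190,
            16369, 3212, 4122, 9200, 6461, 3435]

# Height is the frequency density Q/width, so each bar's AREA equals Q
height = [q / w for q, w in zip(quantity, width)]

plt.bar(left, height, width=width, align='edge', edgecolor='black')
plt.xlabel('Travel time to work (minutes)')
plt.ylabel('Quantity per minute (thousands)')
plt.show()
```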

[Figure: Histogram of travel time (to work), US 2000 census. Area under the curve equals 1. This diagram uses Q/total/width from the table.]

Data by proportion
Interval  Width  Quantity (Q)  Q/total/width
0         5      4180          0.0067
5         5      13687        0.0221
10        5      18618        0.0300
15        5      19634        0.0316
20        5      17981        0.0290
25        5      7190         0.0116
30        5      16369        0.0264
35        5      3212         0.0052
40        5      4122         0.0066
45        15     9200         0.0049
60        30     6461         0.0017
90        60     3435         0.0005

This histogram differs from the first only in the vertical scale. The area of each block is the fraction of the
total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all").
The curve displayed is a simple density estimate. This version shows proportions, and is also known as a
unit area histogram.

In other words, a histogram represents a frequency distribution by means of rectangles whose widths
represent class intervals and whose areas are proportional to the corresponding frequencies: the height of
each is the average frequency density for the interval. The intervals are placed together in order to show
that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is
possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of
10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.)[10]

Mathematical definitions
The data used to construct a histogram are generated via a function m_i that counts the number of observations that fall into each of the disjoint categories (known as bins). Thus, if we let n be the total number of observations and k be the total number of bins, the histogram data m_i meet the following condition:

$n = \sum_{i=1}^{k} m_i.$

[Figure: An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.]
A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently.
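For comparison, a minimal sketch assuming SciPy, NumPy, and Matplotlib that overlays a Gaussian kernel density estimate on a normalized histogram of made-up data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
data = rng.normal(size=1000)

# Normalized histogram: total area 1, comparable to a density
plt.hist(data, bins=30, density=True, alpha=0.5, label='histogram')

# Kernel density estimate: smooths frequencies with a Gaussian kernel
kde = gaussian_kde(data)
xs = np.linspace(data.min(), data.max(), 200)
plt.plot(xs, kde(xs), label='kernel density estimate')
plt.legend()
plt.show()
```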

An alternative to kernel density estimation is the average shifted histogram,[11] which is fast to compute and
gives a smooth curve estimate of the density without using kernels.

Cumulative histogram

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_j is defined as:

$M_i = \sum_{j=1}^{i} m_j.$
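A short sketch of the cumulative histogram as a running sum of the ordinary bin counts, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=10_000)

# Ordinary histogram: m_j = count in bin j
m, edges = np.histogram(data, bins=50)

# Cumulative histogram: M_i = sum of m_j for j <= i
M = np.cumsum(m)

assert M[-1] == len(data)  # the last cumulative count equals n
```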

Number of bins and width

There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping
data is at least as old as Graunt's work in the 17th century, but no systematic guidelines were given[12] until
Sturges' work in 1926.[13]

Using wider bins where the density of the underlying data points is low reduces noise due to sampling
randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater
precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial.
Nonetheless, equal-width bins are widely used.

Some theoreticians have attempted to determine an optimal number of bins, but these methods generally
make strong assumptions about the shape of the distribution. Depending on the actual data distribution and
the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to
determine an appropriate width. There are, however, various useful guidelines and rules of thumb.[14]

The number of bins k can be assigned directly or can be calculated from a suggested bin width h as:

$k = \left\lceil \frac{\max x - \min x}{h} \right\rceil.$

The braces indicate the ceiling function.

Square-root choice

$k = \lceil \sqrt{n}\, \rceil$

which takes the square root of the number of data points in the sample (used by Excel's Analysis Toolpak histograms and many others) and rounds to the next integer.[15]

Sturges' formula

Sturges' formula[13] is derived from a binomial distribution and implicitly assumes an approximately normal distribution:

$k = \lceil \log_2 n \rceil + 1.$

Sturges' formula implicitly bases bin sizes on the range of the data, and can perform poorly if n  <  30 ,
because the number of bins will be small—less than seven—and unlikely to show trends in the data well.
On the other extreme, Sturges' formula may overestimate bin width for very large datasets, resulting in
oversmoothed histograms.[16] It may also perform poorly if the data are not normally distributed.

When compared to Scott's rule and the Terrell-Scott rule, two other widely accepted formulas for histogram
bins, the output of Sturges' formula is closest when n ≈ 100 .[16]

Rice rule

The Rice rule[17] is presented as a simple alternative to Sturges' rule:

$k = \lceil 2 \sqrt[3]{n}\, \rceil.$
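A brief sketch comparing the bin counts these three rules give for a few sample sizes, written in plain Python directly from the formulas above:

```python
import math

def sqrt_choice(n):   # square-root choice
    return math.ceil(math.sqrt(n))

def sturges(n):       # Sturges' formula
    return math.ceil(math.log2(n)) + 1

def rice(n):          # Rice rule
    return math.ceil(2 * n ** (1 / 3))

for n in (30, 100, 1000, 10_000):
    print(n, sqrt_choice(n), sturges(n), rice(n))
```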

Doane's formula

Doane's formula[18] is a modification of Sturges' formula which attempts to improve its performance with non-normal data:

$k = 1 + \log_2(n) + \log_2\!\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)$

where $g_1$ is the estimated 3rd-moment-skewness of the distribution and

$\sigma_{g_1} = \sqrt{\frac{6(n-2)}{(n+1)(n+3)}}.$

Scott's normal reference rule

Bin width is given by

$h = \frac{3.49\,\hat\sigma}{\sqrt[3]{n}}$

where $\hat\sigma$ is the sample standard deviation. Scott's normal reference rule[19] is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate.[12]

Freedman–Diaconis' choice

The Freedman–Diaconis rule gives bin width as:[20][12]

$h = 2\,\frac{\operatorname{IQR}(x)}{\sqrt[3]{n}}$

which is based on the interquartile range, denoted by IQR. It replaces the 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data.
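NumPy implements several of these rules directly; a hedged sketch comparing Scott's rule and the Freedman–Diaconis choice on made-up data (the 'scott' and 'fd' strings are standard options of np.histogram_bin_edges):

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(size=1000)

# Bin widths from the explicit formulas
h_scott = 3.49 * data.std(ddof=1) / len(data) ** (1 / 3)
q75, q25 = np.percentile(data, [75, 25])
h_fd = 2 * (q75 - q25) / len(data) ** (1 / 3)
print(h_scott, h_fd)

# NumPy's built-in rules produce the corresponding bin edges
edges_scott = np.histogram_bin_edges(data, bins='scott')
edges_fd = np.histogram_bin_edges(data, bins='fd')
print(len(edges_scott) - 1, len(edges_fd) - 1)  # resulting numbers of bins
```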

Minimizing cross-validation estimated squared error

This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions, by using leave-one-out cross-validation:[21][22]

$\hat{J}(h) = \frac{2}{(n-1)h} - \frac{n+1}{n^2 (n-1) h} \sum_{k} N_k^2$

Here, $N_k$ is the number of datapoints in the kth bin, and choosing the value of h that minimizes $\hat{J}(h)$ will minimize integrated mean squared error.
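A minimal sketch of this criterion, scanning a grid of candidate bin widths and picking the one with the smallest estimated score (NumPy assumed, made-up data):

```python
import numpy as np

def cv_score(data, h):
    """J(h) = 2/((n-1)h) - (n+1)/(n^2 (n-1) h) * sum_k N_k^2."""
    n = len(data)
    edges = np.arange(data.min(), data.max() + h, h)
    counts, _ = np.histogram(data, bins=edges)
    return 2 / ((n - 1) * h) - (n + 1) / (n ** 2 * (n - 1) * h) * np.sum(counts ** 2)

rng = np.random.default_rng(5)
data = rng.normal(size=1000)

widths = np.linspace(0.05, 1.0, 96)          # candidate bin widths
best_h = min(widths, key=lambda h: cv_score(data, h))
print(best_h)
```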

Shimazaki and Shinomoto's choice

The choice is based on minimization of an estimated L2 risk function[23]

$\underset{h}{\operatorname{arg\,min}} \; \frac{2\bar{m} - v}{h^2}$

where $\bar{m}$ and $v$ are the mean and the biased variance of a histogram with bin-width $h$:

$\bar{m} = \frac{1}{k} \sum_{i=1}^{k} m_i \quad \text{and} \quad v = \frac{1}{k} \sum_{i=1}^{k} (m_i - \bar{m})^2.$
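A sketch of this cost function, minimized over candidate bin widths in the same way as above (NumPy assumed, made-up data):

```python
import numpy as np

def ss_cost(data, h):
    """Shimazaki-Shinomoto cost: (2*mean - biased variance) / h^2 of the bin counts."""
    edges = np.arange(data.min(), data.max() + h, h)
    m, _ = np.histogram(data, bins=edges)
    mean = m.mean()
    var = m.var()          # biased variance (divides by k, not k-1)
    return (2 * mean - var) / h ** 2

rng = np.random.default_rng(6)
data = rng.normal(size=2000)

widths = np.linspace(0.05, 1.0, 96)
best_h = min(widths, key=lambda h: ss_cost(data, h))
print(best_h)
```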

Variable bin widths

Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to choose equiprobable bins, where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has approximately n/k samples. When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution.

For equiprobable bins, the following rule for the number of bins is suggested:[24]

$k = 2 n^{2/5}.$

This choice of bins is motivated by maximizing the power of a Pearson chi-squared test testing whether the bins do contain equal numbers of samples. More specifically, for a given confidence interval $\alpha$ it is recommended to choose between 1/2 and 1 times the following equation:[25]

$k = 4 \left( \frac{2 n^2}{\left(\Phi^{-1}(\alpha)\right)^2} \right)^{1/5}$

where $\Phi^{-1}$ is the probit function. Following this rule for $\alpha = 0.05$ would give between $1.88\, n^{2/5}$ and $3.77\, n^{2/5}$ bins; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum.
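A sketch of equiprobable (variable-width) bins built from sample quantiles, using the suggested k = 2n^(2/5) and plotting heights as frequency densities (NumPy assumed, made-up data):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=1000)
n = len(data)

# Suggested number of equiprobable bins: k = 2 * n^(2/5)
k = int(round(2 * n ** (2 / 5)))

# Bin edges at evenly spaced sample quantiles, so each bin holds ~n/k points
edges = np.quantile(data, np.linspace(0, 1, k + 1))

counts, _ = np.histogram(data, bins=edges)
density = counts / (n * np.diff(edges))  # frequency density for the dependent axis

print(k, counts.min(), counts.max())     # the counts should be nearly equal
```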

Remark

A good reason why the number of bins should be proportional to $\sqrt[3]{n}$ is the following: suppose that the data are obtained as $n$ independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as $n$ tends to infinity. If $s$ is the "width" of the distribution (e.g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order $n h / s$ and the relative standard error is of order $\sqrt{s / (n h)}$. Compared to the next bin, the relative change of the frequency is of order $h / s$ provided that the derivative of the density is non-zero. These two are of the same order if $h$ is of order $s / \sqrt[3]{n}$, so that $k$ is of order $\sqrt[3]{n}$. This simple cubic root choice can also be applied to bins with non-constant widths.

Applications
In hydrology the histogram and estimated density function of rainfall and river discharge data, analysed with a probability distribution, are used to gain insight into their behaviour and frequency of occurrence.[27] An example is shown in the blue figure.

In many digital image processing programs there is a histogram tool, which shows the distribution of the contrast/brightness of the pixels.

[Figure: Histogram and density function for a Gumbel distribution.[26]]
See also
Data and information visualization
Data binning
Density estimation
Kernel density estimation, a smoother but more complex method of density estimation
Entropy estimation
Freedman–Diaconis rule
Image histogram
Pareto chart
Seven basic tools of quality
V-optimal histograms
References
1. Pearson, K. (1895). "Contributions to the Mathematical Theory of Evolution. II. Skew
Variation in Homogeneous Material" (https://zenodo.org/record/1432104). Philosophical
Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
186: 343–414. Bibcode:1895RSPTA.186..343P (https://ui.adsabs.harvard.edu/abs/1895RS
PTA.186..343P). doi:10.1098/rsta.1895.0010 (https://doi.org/10.1098%2Frsta.1895.0010).
2. Howitt, D.; Cramer, D. (2008). Introduction to Statistics in Psychology (Fourth ed.). Prentice
Hall. ISBN 978-0-13-205161-3.
3. Freedman, D.; Pisani, R.; Purves, R. (1998). Statistics (Third ed.). W. W. Norton. ISBN 978-0-
393-97083-8.
4. Charles Stangor (2011) "Research Methods For The Behavioral Sciences". Wadsworth,
Cengage Learning. ISBN 9780840031976.
5. Nancy R. Tague (2004). "Seven Basic Quality Tools" (http://www.asq.org/learn-about-quality/
seven-basic-quality-tools/overview/overview.html). The Quality Toolbox. Milwaukee,
Wisconsin: American Society for Quality. p. 15. Retrieved 2010-02-05.
6. Naomi, Robbins. "A Histogram is NOT a Bar Chart" (https://www.forbes.com/sites/naomirobb
ins/2012/01/04/a-histogram-is-not-a-bar-chart/#345b9c746d77). Forbes. Retrieved 31 July
2018.
7. M. Eileen Magnello (December 2006). "Karl Pearson and the Origins of Modern Statistics:
An Elastician becomes a Statistician" (http://www.rutherfordjournal.org/article010107.html).
The New Zealand Journal for the History and Philosophy of Science and Technology. 1
volume. OCLC 682200824 (https://www.worldcat.org/oclc/682200824).
8. "Histogram maker" (https://histogrammaker.co/). histogram maker.
9. US 2000 census (https://www.census.gov/prod/2004pubs/c2kbr-33.pdf).
10. Dean, S., & Illowsky, B. (2009, February 19). Descriptive Statistics: Histogram. Retrieved
from the Connexions Web site: http://cnx.org/content/m16298/1.11/
11. David W. Scott (December 2009). "Averaged shifted histogram" (https://www.researchgate.n
et/publication/229760716). Wiley Interdisciplinary Reviews: Computational Statistics. 2:2
(2): 160–164. doi:10.1002/wics.54 (https://doi.org/10.1002%2Fwics.54). S2CID 122986682
(https://api.semanticscholar.org/CorpusID:122986682).
12. Scott, David W. (1992). Multivariate Density Estimation: Theory, Practice, and Visualization.
New York: John Wiley.
13. Sturges, H. A. (1926). "The choice of a class interval". Journal of the American Statistical
Association. 21 (153): 65–66. doi:10.1080/01621459.1926.10502161 (https://doi.org/10.108
0%2F01621459.1926.10502161). JSTOR 2965501 (https://www.jstor.org/stable/2965501).
14. e.g. § 5.6 "Density Estimation", W. N. Venables and B. D. Ripley, Modern Applied Statistics
with S (2002), Springer, 4th edition. ISBN 0-387-95457-0.
15. "EXCEL Univariate: Histogram" (http://cameron.econ.ucdavis.edu/excel/ex11histogram.htm
l).
16. Scott, David W. (2009). "Sturges' rule". WIREs Computational Statistics. 1 (3): 303–306.
doi:10.1002/wics.35 (https://doi.org/10.1002%2Fwics.35). S2CID 197483064 (https://api.se
manticscholar.org/CorpusID:197483064).
17. Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/).
Project Leader: David M. Lane, Rice University (chapter 2 "Graphing Distributions", section
"Histograms")
18. Doane DP (1976) Aesthetic frequency classification. American Statistician, 30: 181–183
19. Scott, David W. (1979). "On optimal and data-based histograms". Biometrika. 66 (3): 605–
610. doi:10.1093/biomet/66.3.605 (https://doi.org/10.1093%2Fbiomet%2F66.3.605).
20. Freedman, David; Diaconis, P. (1981). "On the histogram as a density estimator: L2 theory"
(http://bayes.wustl.edu/Manual/FreedmanDiaconis1_1981.pdf) (PDF). Zeitschrift für
Wahrscheinlichkeitstheorie und Verwandte Gebiete. 57 (4): 453–476.
CiteSeerX 10.1.1.650.2473 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.650.
2473). doi:10.1007/BF01025868 (https://doi.org/10.1007%2FBF01025868).
S2CID 14437088 (https://api.semanticscholar.org/CorpusID:14437088).
21. Wasserman, Larry (2004). All of Statistics. New York: Springer. p. 310. ISBN 978-1-4419-
2322-6.
22. Stone, Charles J. (1984). "An asymptotically optimal histogram selection rule" (http://digitala
ssets.lib.berkeley.edu/sdtr/ucb/text/34.pdf) (PDF). Proceedings of the Berkeley conference in
honor of Jerzy Neyman and Jack Kiefer.
23. Shimazaki, H.; Shinomoto, S. (2007). "A method for selecting the bin size of a time
histogram". Neural Computation. 19 (6): 1503–1527. CiteSeerX 10.1.1.304.6404 (https://cite
seerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.304.6404).
doi:10.1162/neco.2007.19.6.1503 (https://doi.org/10.1162%2Fneco.2007.19.6.1503).
PMID 17444758 (https://pubmed.ncbi.nlm.nih.gov/17444758). S2CID 7781236 (https://api.se
manticscholar.org/CorpusID:7781236).
24. Jack Prins; Don McCormack; Di Michelson; Karen Horrell. "Chi-square goodness-of-fit test"
(https://itl.nist.gov/div898/handbook/prc/section2/prc211.htm). NIST/SEMATECH e-
Handbook of Statistical Methods. NIST/SEMATECH. p. 7.2.1.1. Retrieved 29 March 2019.
25. Moore, David (1986). "3". In D'Agostino, Ralph; Stephens, Michael (eds.). Goodness-of-Fit
Techniques. New York, NY, USA: Marcel Dekker Inc. p. 70. ISBN 0-8247-7487-6.
26. A calculator for probability distributions and density functions (https://www.waterlog.info/cumf
req.htm)
27. An illustration of histograms and probability density functions (https://www.waterlog.info/den
sity.htm)

Further reading
Lancaster, H.O. An Introduction to Medical Statistics. John Wiley and Sons. 1974. ISBN 0-
471-51250-8

External links
Exploring Histograms (https://tinlizzie.org/histograms/), an essay by Aran Lunzer and Amelia
McNamara
Journey To Work and Place Of Work (https://www.census.gov/population/www/socdemo/jour
ney.html) (location of census document cited in example)
Smooth histogram for signals and images from a few samples (http://www.mathworks.com/m
atlabcentral/fileexchange/30480-histconnect)
Histograms: Construction, Analysis and Understanding with external links and an
application to particle Physics. (http://quarknet.fnal.gov/toolkits/ati/histograms.html)
A Method for Selecting the Bin Size of a Histogram (http://2000.jukuin.keio.ac.jp/shimazaki/r
es/histogram.html)
Histograms: Theory and Practice (https://web.archive.org/web/20150501071703/http://www.
stat.rice.edu/~scottdw/stat550/HW/hw3/c03.pdf), some great illustrations of some of the Bin
Width concepts derived above.
Histograms the Right Way (http://www.astroml.org/user_guide/density_estimation.html)
Interactive histogram generator (http://www.shodor.org/interactivate/activities/histogram/)
Matlab function to plot nice histograms (http://www.mathworks.com/matlabcentral/fileexchan
ge/27388-plot-and-compare-nice-histograms-by-default)
Dynamic Histogram in MS Excel (http://excelandfinance.com/histogram-in-excel/)
Histogram construction (http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Model
erActivities_MixtureModel_1) and manipulation (http://wiki.stat.ucla.edu/socr/index.php/SOC
R_EduMaterials_Activities_PowerTransformFamily_Graphs) using Java applets, and charts
(http://www.socr.ucla.edu/htmls/SOCR_Charts.html) on SOCR
Toolbox for constructing the best histograms (http://www.ton.scphys.kyoto-u.ac.jp/~shino/hist
ograms/)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Histogram&oldid=1162463217"
