Statistics Assignment Full


NAME: G. KAVYA SHRI
PROGRAMME: MASTER OF BUSINESS ADMINISTRATION (MBA)
SEMESTER: I
COURSE CODE & NAME: DMBA103 - STATISTICS FOR MANAGEMENT
ROLL NO.: 2314514198

Assignment 1
Q1:
Define statistics. Explain the various functions of statistics and also the key limitations of statistics.

Statistics:
Data collection, analysis, interpretation, presentation, and organization are all part of the
mathematical field of statistics. It offers techniques for drawing conclusions about
populations from the analysis of a representative sample. In many disciplines, such as public
health, commerce, economics, social sciences, and science, statistics is essential for
condensing data and producing insightful findings.

Functions of Statistics:


Gathering of Data:
Data collection via surveys, experiments, observations, and other techniques is made easier
by statistics. The statistical analysis uses this data as its starting point.
Data Organization and Presentation:
In order to meaningfully arrange data, statistics is used to create tables, charts, graphs, and
summary measures. The understanding of patterns and trends is aided by this visual
portrayal.
Analyzing Data:
Statistical analysis uses mathematical methods to investigate trends, correlations, and variability in data. This covers both descriptive statistics (mean, median, and mode) and inferential statistics (for example, regression analysis); a small worked example appears after this list of functions.
Generalization and Inference:
With the aid of statistics, researchers can extrapolate conclusions about a population from a sample. This generalization is needed to make inferences about the characteristics of a larger group from a subset of that group.
Forecasting:
Future patterns or results can be predicted using statistical modeling and analysis. This is
very helpful for financial planning, forecasting, and making decisions.
Quality Assurance:
Statistics are used for quality control in manufacturing and other businesses, making sure that
processes or products adhere to established requirements.
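
To make the descriptive measures mentioned under "Analyzing Data" concrete, here is a minimal sketch in Python; the sales figures are made up purely for illustration.

```python
import statistics

# Hypothetical monthly sales figures (made-up numbers, for illustration only)
sales = [12, 15, 15, 18, 22, 15, 30, 18]

print("Mean:  ", statistics.mean(sales))    # arithmetic average -> 18.125
print("Median:", statistics.median(sales))  # middle value of the sorted data -> 16.5
print("Mode:  ", statistics.mode(sales))    # most frequent value -> 15
```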
Limitations of Statistics:
Limited Scope:
Statistics can handle only quantitative and measurable data. It may not be effective for analyzing qualitative factors that cannot be quantified.
Sensitivity to Outliers:
Extreme values (outliers) in a dataset can have a disproportionate impact on statistical measures such as the mean, which can lead to incorrect interpretations.
Assumption of Normality:
Many statistical techniques assume a normal distribution. In reality, data do not always satisfy this assumption, which can affect the validity of statistical tests.
Dependency on the Quality of the Data:
The quality of the data gathered has a significant impact on the dependability and accuracy of
statistical results. Biased or inaccurate data can produce incorrect findings.
Correlation versus Causation:
While correlations between variables can be established statistically, further evidence is
needed to demonstrate causality. It can be deceptive to depend only on statistical
relationships, as correlation does not necessarily indicate causality.
Difficulties in Interpretation:
It can be difficult to appropriately understand statistical results, which can result in the
oversimplification or misinterpretation of complicated relationships within the data.
Representativeness of the Sample:
The accuracy of statistical inferences depends on how representative the sample is. Generalizations may not be trustworthy if the sample does not accurately represent the population.

Q2:
Define measurement scales. Discuss qualitative and quantitative data in detail with examples.
Measurement scales, sometimes called data scales or levels of measurement, are the various schemes used to classify and measure variables. Measurement scales fall into four categories: nominal, ordinal, interval, and ratio. These scales offer a framework for understanding the type and properties of the information gathered.
Nominal Scale:
This is the most basic type of measurement scale.
It involves naming or labeling categories without any natural hierarchy or order.
Examples: gender (female, male), marital status (single, married, divorced), and color (red, blue, green).

Ordinal Scale:
The data in this scale are categorized and ordered, but the gaps between the categories are not uniform or known.
It displays the variables' relative rankings or order.
Examples: Level of Education (High School, Undergraduate, Graduate, and Doctorate),
position in a competition (first, second, or third).

Interval Scale:
On the interval scale, the variables are ordered and the intervals between them are consistent.
It does not, however, have a real zero point.
Examples include IQ scores and temperature in degrees Celsius or Fahrenheit.

Ratio Scale:
Like the interval scale, the ratio scale has consistent intervals between values.
Additionally, it has a real zero point, meaning that zero denotes the lack of the measured
property entirely.
Examples are age, income, weight, and height.
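
As a small illustrative sketch (the variable names and scale assignments below are assumptions chosen for the example), the four scales can be thought of as labels attached to each variable in a dataset:

```python
# Illustrative mapping of example variables to their measurement scales
measurement_scales = {
    "gender": "nominal",                # categories with no natural order
    "race_position": "ordinal",         # ordered, but gaps are not uniform
    "temperature_celsius": "interval",  # equal intervals, no true zero
    "height_cm": "ratio",               # equal intervals and a true zero point
}

for variable, scale in measurement_scales.items():
    print(f"{variable} is measured on a {scale} scale")
```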
Let's now talk about both quantitative and qualitative data:
Definition of Qualitative Data: Qualitative data is non-numerical information that
characterizes attributes or traits. It is frequently categorized and can be separated into many
groups according to certain characteristics.

Features:
Non-Numerical: Rather than using numerical values, qualitative data is expressed using
words, labels, or categories.
Subjective: Because qualitative data interpretation frequently incorporates opinions, feelings,
or perceptions, it may be subjective.
Labels or groups: Typically, data is arranged into classes or groups according to shared
characteristics.
Qualitative Data Examples:
Gender: Classifications such as male and female.
Color: Characteristics like green, blue, or red.
Categories for marital status include single, married, and divorced.
Level of Education: Terms like high school, bachelor's, master's, and doctorate.

Definition of Quantitative Data: Quantitative data is information that can be measured and expressed numerically. It is objective and amenable to mathematical analysis.
Features:
Numerical: Mathematical procedures are possible because quantitative data is expressed in
numerical values.
Objective: Since quantitative data interpretation involves precise measurements, it is
typically more objective.
Discrete or Continuous: Quantitative data can take countable, distinct values (e.g., the number of automobiles) or an unlimited number of possible values within a range (e.g., height).
Quantitative Data Examples:
Height: Expressed in centimeters or inches.
Weight: Expressed in kilograms or pounds.
Temperature: Expressed in Fahrenheit or Celsius.
Income: Measured in monetary units (dollars, for example).
Number of Students: A discrete count of the students in a class.

Q3:
Discuss the basic laws of sampling theory. Define the stratified sampling technique with the help of examples.
Sampling theory is a subfield of statistics that examines how to choose a sample (a subset of a larger population) from which to draw conclusions about the population as a whole. Among the fundamental laws of sampling theory are:

The Law of Statistical Regularity postulates that a random sample taken from a population will, on average, have the same features as the population it is drawn from.

The Law of Inertia of Large Numbers asserts that the features of the sample will progressively resemble those of the population as the sample size grows (a small simulation of this appears below).

The Law of Unintended Consequences notes that biases or unanticipated events may prevent random samples from adequately representing the population.
The Law of Sampling Errors recognizes that errors of some degree will always exist in every sample and provides a framework for estimating and measuring those errors.
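
The Law of Inertia of Large Numbers can be illustrated with a small simulation; the population below and the sample sizes are assumptions made purely for the sketch.

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 100,000 incomes (assumed figures for illustration)
population = [random.gauss(50_000, 12_000) for _ in range(100_000)]
population_mean = statistics.mean(population)

# As the sample size grows, the sample mean settles near the population mean
for n in (10, 100, 1_000, 10_000):
    sample = random.sample(population, n)
    print(f"n = {n:>6}: sample mean = {statistics.mean(sample):>9,.0f} "
          f"(population mean = {population_mean:,.0f})")
```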

Let's explore stratified sampling now:

Stratified Sampling: This sampling technique involves randomly selecting samples from each
stratum once the population is split up into subgroups or strata according to specific features.
When there are notable variances or differences within the population that must be
represented in the sample, this approach is employed.
As an example, let's say you wish to survey students on their academic achievement in a
school. Rather than choosing pupils at random from the entire school, you opt to employ
grade-based stratified sampling. Here is how the strata are defined:

Stratum 1: Students in Grade A


Stratum 2: Students in Grade B
Stratum 3: Students in Grade C
Next, a certain number of pupils from each stratum is chosen at random, usually in proportion to the stratum's share of the student body. For example, if the three grades are of roughly equal size, you might choose ten students at random from each grade for your survey. The combined sample will give a more realistic picture of the academic performance of the entire student body.
By ensuring that every subgroup is fairly represented in the final sample, stratified sampling
makes it possible to conduct more accurate analysis.
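
Here is a minimal sketch of that grade-based stratified sampling in Python; the roster size, the grade labels, and the choice of ten students per stratum are assumptions made for the example.

```python
import random

random.seed(0)

# Hypothetical roster: 40 students in each of three grades (the strata)
students = [
    {"name": f"{grade}_{i}", "grade": grade}
    for grade in ("A", "B", "C")
    for i in range(1, 41)
]

def stratified_sample(population, stratum_key, per_stratum):
    """Randomly draw a fixed number of units from every stratum."""
    strata = {}
    for unit in population:
        strata.setdefault(unit[stratum_key], []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(random.sample(units, per_stratum))
    return sample

survey_sample = stratified_sample(students, "grade", per_stratum=10)
print(len(survey_sample), "students sampled, 10 from each grade")
```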

Cluster Sampling:
Using the cluster sampling technique, a random sample of the population's clusters is chosen
after the population has been divided into groups or clusters. Every person in the selected
clusters is part of the sample. When the population is organically organized or when
compiling a comprehensive list of the population is challenging or costly, this approach is
especially helpful.
As an illustration, let's look at a study that aims to evaluate the academic achievement of
schools in a big city. Rather than selecting students for sampling, the researchers choose to
employ cluster sampling. Here's one way they could go about it:
Define the clusters:
In a city, schools form natural clusters. Each school is a group of students with comparable resources, environments, and educational programs.
Choose Clusters at Random:
Select a certain number of schools at random from the city's list of all its schools. Assume they select ten schools from a pool of fifty.
Incorporate Every Student in the Chosen Schools:
After the schools have been chosen, include all of the pupils enrolled there in the sample.
This implies that each pupil in the designated schools joins the study.
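
A minimal sketch of this school-based cluster sampling follows; the 50 schools, 100 students per school, and the choice of ten clusters are assumptions made for the example.

```python
import random

random.seed(0)

# Hypothetical city: 50 schools, each a natural cluster of 100 students
schools = {
    f"School_{i}": [f"School_{i}_Student_{j}" for j in range(1, 101)]
    for i in range(1, 51)
}

# Step 1: randomly select 10 of the 50 schools (the clusters)
chosen_schools = random.sample(list(schools), 10)

# Step 2: every student in a chosen school becomes part of the sample
cluster_sample = [
    student for school in chosen_schools for student in schools[school]
]

print(len(chosen_schools), "schools chosen,", len(cluster_sample), "students in the sample")
```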

Multi-Stage Sampling:


Multi-stage sampling is a sophisticated sampling technique in which multiple stages of sampling are used to choose the units included in a study. This approach is frequently employed when the target population is large and geographically dispersed. The procedure involves dividing the population into smaller, more manageable groups or stages and then sampling at each stage.

Typical Situation:
Let's say you are in the midst of a nationwide schooling survey.
States could serve as Primary Sampling Units (PSUs).
SSUs, or secondary sampling units, can be any number of cities in a state.
Within each city, there may be particular school districts or communities that serve as
Tertiary Sampling Units (TSUs).
Finally, the units picked for data gathering could be specific children or schools within those
areas or districts.
Through the use of multi-stage sampling, researchers can effectively examine sizable and
heterogeneous populations while upholding a reasonable and practical survey size at each
stage.
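
The nationwide schooling survey above can be sketched as code; the number of states, cities, and schools sampled at each stage are assumptions chosen only to show the staged selection.

```python
import random

random.seed(0)

# Hypothetical nested population: states -> cities -> schools
population = {
    f"State_{s}": {
        f"State_{s}_City_{c}": [f"State_{s}_City_{c}_School_{k}" for k in range(1, 21)]
        for c in range(1, 11)
    }
    for s in range(1, 6)
}

# Stage 1: sample the primary sampling units (states)
chosen_states = random.sample(list(population), 2)

# Stage 2: within each chosen state, sample the secondary units (cities)
chosen_cities = {state: random.sample(list(population[state]), 3) for state in chosen_states}

# Stage 3: within each chosen city, sample the final units (schools)
chosen_schools = [
    school
    for state in chosen_states
    for city in chosen_cities[state]
    for school in random.sample(population[state][city], 4)
]

print(len(chosen_schools), "schools selected across", len(chosen_states), "states")
```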

Assignment 2
Q1:
Define business forecasting. Explain the various methods of business forecasting.
The technique of projecting future trends or results in a company environment is known as
business forecasting. To develop well-informed predictions regarding company activities, it
entails studying past data, present circumstances, and several variables that could affect the
future. Business forecasting's main objective is to lessen uncertainty and help decision-
makers make better decisions.

There are several approaches used in business forecasting, and the method selected is
determined by a number of factors, including the type of data being forecasted, the degree of
precision needed, and the forecast's time horizon. These are a few typical techniques for
business forecasting:

1. Qualitative Forecasting:


Expert Judgment: This entails getting advice and ideas from professionals in the field.
Delphi Method: A structured approach to communication that relies on a panel of experts to reach a consensus.

2. Time Series Analysis:


Moving Averages: Identifying trends by averaging historical data over predetermined intervals of time.
Exponential Smoothing: Giving more weight to recent data by exponentially decreasing the weights of older observations.
Trend Analysis: Identifying long-term trends in past data and extending them forward. (A small sketch of the first two techniques appears after this list.)

3. Causal Models:
Regression Analysis: Identifying relationships between variables in order to forecast future values.
Econometric Models: Predicting business variables using statistical techniques and economic theory.

4. Scenario Analysis and Simulation:


Monte Carlo Simulation: Modeling the range of possible outcomes by generating many random scenarios.
Scenario Analysis: Analyzing potential outcomes and their effects on business performance.

5. Technology-Oriented Forecasting:
Machine Learning and Artificial Intelligence: Using sophisticated algorithms to analyze vast amounts of data and generate forecasts.
Big Data Analytics: Using vast amounts of data to find trends and patterns.

6. Market Analysis:
Surveys and Questionnaires: Gathering feedback and expectations from suppliers, consumers, and other stakeholders.
Focus Groups: Assembling a varied group of people to discuss and offer predictions about upcoming trends.

7. Leading Indicators:
Leading Indicator Analysis: Tracking particular economic indicators that typically shift before the economy as a whole, offering insight into likely future developments.

8. Combination Forecasting:
Combining Methods: Combining several forecasting techniques to increase accuracy and reliability.
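
The moving-average and exponential-smoothing techniques listed under time series analysis can be sketched in a few lines; the monthly sales figures, the three-month window, and the smoothing constant alpha = 0.3 are all assumptions made for the example.

```python
# Hypothetical monthly sales figures (assumed for illustration)
sales = [120, 135, 128, 150, 162, 158, 170, 181]

def moving_average(series, window=3):
    """Average each consecutive block of 'window' observations."""
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]

def exponential_smoothing(series, alpha=0.3):
    """Weight recent observations more heavily, decaying older ones exponentially."""
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

print("3-month moving average:", moving_average(sales))
print("Exponentially smoothed:", [round(v, 1) for v in exponential_smoothing(sales)])
```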

Q2:
What is an index number? Discuss the utility of index numbers.
A statistical metric known as an index number is used to show how a variable or set of
related variables has changed relative to a base period or base value. It is employed to
express and quantify changes over time in a group of variables. Index numbers are a helpful
tool for data comparison and analysis, particularly when working with big datasets with
several variables.

Usually, the following formula is used to calculate an index number:

Index = (Value in Current Period / Value in Base Period) × 100
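
Applying the formula to a small, made-up price series (the prices and the choice of 2020 as the base period are assumptions for the example):

```python
# Hypothetical average prices of a commodity by year; 2020 is the base period
prices = {2020: 50.0, 2021: 54.0, 2022: 57.5, 2023: 61.0}
base_value = prices[2020]

# Index = (value in current period / value in base period) x 100
for year, value in prices.items():
    print(f"{year}: price index = {value / base_value * 100:.1f}")
```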
Here are some salient characteristics and uses of index numbers:
Comparative Analysis:
Index numbers make it possible to compare variables or events in relation to one another
across several categories, geographies, or time periods. When evaluating changes in
economic indicators, pricing, production, or other variables, this is especially helpful.

Comparison of Base Periods:


Index numbers designate a base period or base value, which serves as a standard for
comparison. Analysts can spot patterns and departures from the baseline by measuring
changes in relation to this reference point.

Data Simplification:
Large data sets may be intricate and difficult to understand. Index numbers streamline the presentation of the information and allow patterns to be understood and analyzed more succinctly.

Pricing and Inflation Calculation:


Examples of index numbers used to track changes in prices over time are the Producer Price
Index (PPI) and the Consumer Price Index (CPI). These indicators assist in estimating
inflation rates, enabling companies and policymakers to modify their plans of action.

Financial Measures:
Index numbers, which show shifts in economic activity including industrial production,
employment, and commerce, are frequently employed as economic indicators. They offer insight into the general state and trajectory of an economy.

Cost-of-Living Adjustment (COLA):


Index numbers are used to calculate cost-of-living adjustments in a variety of situations, including pension adjustments and changes in wages and salaries. COLAs help people preserve their purchasing power in the face of inflation.

Comparative Studies Abroad:


Index numbers standardize data, making cross-border comparisons easier. The Human
Development Index (HDI), for example, analyzes nations according to variables including
income, education, and life expectancy.
Stock Market Indices:
Indexes such as the S&P 500 and the Dow Jones Industrial Average (DJIA) are used to
monitor and assess stock market performance. They offer a brief overview of the general
market trends.

Business Performance Metrics:


Indexes are used by businesses to monitor key performance indicators (KPIs) and evaluate
their overall business performance. Indexes measuring sales, output, or efficiency may be
examples of this.

Forecasting:
In order to predict future trends based on historical data, index numbers are frequently
utilized in forecasting. Businesses and policymakers can make well-informed judgments
regarding future plans by examining index patterns.

Q3:
Discuss the various types of estimators. Also explain the criteria of a good estimator.
Estimators are statistical tools that use sample data to determine approximate values of population parameters. A parameter is a numerical characteristic of a population, such as the mean or variance. Estimators are computed from sample data and yield an approximation of a population parameter. The properties of different types of estimators can be evaluated against a number of criteria.

Various Estimator Types:

Point Estimators:
A point estimator produces a single number as its estimate of the population parameter. As an illustration, the sample mean (x̄) can be used to estimate the population mean (μ).

Interval Estimators:
An interval estimator provides a range within which the true parameter is expected to fall. Confidence intervals are typical interval estimators (see the sketch after this list).

Bayesian Estimators:
Bayes Estimators: These estimators combine the likelihood function with prior knowledge about the parameters to produce a posterior distribution for the parameter of interest.

Robust Estimators:
M-Estimators: These robust estimators minimize a chosen objective function and are less susceptible to outliers in the data.

Shrinkage Estimators:
Ridge Regression Estimator: Used in regression problems, the ridge estimator shrinks the parameter estimates by adding a penalty term to the ordinary least squares (OLS) objective.
Lasso Estimator: Like ridge regression, the lasso adds a penalty term, but it penalizes the absolute values of the coefficients.
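
A minimal sketch contrasting a point estimator (the sample mean) with an interval estimator (an approximate 95% confidence interval based on the normal approximation); the sample values are made up for the example.

```python
import math
import statistics

# Hypothetical sample of monthly household expenses
sample = [420, 455, 390, 510, 470, 435, 480, 445, 500, 410]

# Point estimator: the sample mean as a single-number estimate of the population mean
mean = statistics.mean(sample)

# Interval estimator: approximate 95% confidence interval (normal approximation)
std_error = statistics.stdev(sample) / math.sqrt(len(sample))
lower, upper = mean - 1.96 * std_error, mean + 1.96 * std_error

print(f"Point estimate of the mean: {mean:.1f}")
print(f"Approximate 95% confidence interval: ({lower:.1f}, {upper:.1f})")
```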

Criteria of a Good Estimator:


Unbiasedness: An estimator is considered unbiased if, on average, it yields estimates equal to the true parameter value. This can be written mathematically as E(θ̂) = θ, where θ is the true parameter value and θ̂ is the estimator.

Efficiency: An efficient estimator has a small sampling variance, so its estimates are not overly dispersed. Among unbiased estimators, the one with the lowest variance is regarded as the most efficient.

Consistency: An estimator is considered consistent if it converges in probability to the true parameter value as the sample size grows. Consistency ensures that the estimate becomes more accurate as more data are gathered (see the simulation sketch after this list).

Sufficiency: A sufficient statistic contains all of the information in the sample that is needed to draw conclusions about a parameter.
Robustness: A robust estimator is not unduly affected by outliers or by departures from its underlying assumptions. Robustness is especially crucial when working with real-world data that may contain anomalies.

Minimum Mean Squared Error (MMSE): The unbiased estimator with the least mean squared error is deemed ideal. The mean squared error is the average of the squared differences between the estimator and the true parameter value.
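
To illustrate unbiasedness and consistency for the sample mean (the population, sample sizes, and number of repetitions below are assumptions made for the sketch):

```python
import random
import statistics

random.seed(0)

# Hypothetical skewed population with a known mean
population = [random.expovariate(1 / 40) for _ in range(50_000)]
true_mean = statistics.mean(population)

# Unbiasedness: averaged over many repeated samples, the sample mean is close to the true mean
repeated_estimates = [statistics.mean(random.sample(population, 30)) for _ in range(2_000)]
print(f"True mean: {true_mean:.2f}, "
      f"average of 2,000 sample means: {statistics.mean(repeated_estimates):.2f}")

# Consistency: a single sample mean moves toward the true mean as the sample size grows
for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:>6}: sample mean = {statistics.mean(random.sample(population, n)):.2f}")
```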
