Experimental research is a collection of research designs which use manipulation and
controlled testing to understand causal processes. Generally, one or more
variables are manipulated to determine their effect on a dependent variable.
The experimental method is a systematic and scientific
approach to research in which the researcher manipulates one or more
variables, and controls and measures any change in other variables.
Experimental research is often used where:
There is time priority in a causal relationship (the cause precedes the effect)
There is consistency in a causal relationship (a cause will always lead to
the same effect)
The magnitude of the correlation is large.
(Reference: en.wikipedia.org)
The term experimental research covers a range of definitions. In the strict
sense, experimental research is what we call a true experiment.
This is an experiment where the researcher manipulates one variable
and controls/randomizes the rest of the variables. It has a control group,
the subjects have been randomly assigned between the groups, and the
researcher only tests one effect at a time. It is also important to know
what variable(s) you want to test and measure.
A very wide definition of experimental research, or a quasi-experiment,
is research where the scientist actively influences something to observe the
consequences. Most experiments tend to fall in between the strict and the wide
definition.
A rule of thumb is that physical sciences, such as physics, chemistry and
geology, tend to define experiments more narrowly than social sciences, such as
sociology and psychology, which conduct experiments closer to the wider
definition.
Aims of Experimental Research.
Experiments are conducted to be able to predict phenomena. Typically, an
experiment is constructed to be able to explain some kind of causation.
Experimental research is important to society because it helps us improve
our everyday lives.
Identifying the Research Problem.
After deciding the topic of interest, the researcher tries to define the
research problem. This helps the researcher to focus on a narrower research
area so that it can be studied appropriately. Defining the research problem
helps you to formulate a research hypothesis, which is tested against the null
hypothesis.
The research problem is often operationalized, to
define how to measure the research problem. The results will depend on the
exact measurements that the researcher chooses, and the problem may be
operationalized differently in another study to test the main conclusions
of the study.
An ad hoc analysis
is a hypothesis invented after testing is done, to try to explain contrary
evidence. A poor ad hoc analysis may be seen as the researcher's
inability to accept that his/her hypothesis is wrong, while a great ad hoc
analysis may lead to more testing and possibly a significant discovery.
Constructing the Experiment.
There are various aspects to remember when constructing an experiment.
Planning ahead ensures that the experiment is carried out properly and that the
results reflect the real world as closely as possible.
Sampling Groups to Study
Sampling groups correctly
is especially important when we have more than one condition in the experiment.
One sample group often serves
as a control group,
whilst others are tested under the experimental conditions.
Deciding the sample groups can be done using many different sampling
techniques. Population sampling may be chosen by a number of methods, such as
randomization, "quasi-randomization" and pairing.
Reducing sampling errors is vital for getting valid results from experiments.
Researchers often adjust the sample size to minimize the chance of random errors.
Here are some common sampling techniques (a short sampling sketch follows the list):
1. probability sampling
2. non-probability sampling
3. simple random sampling
4. convenience sampling
5. stratified sampling
6. systematic sampling
7. cluster sampling
8. sequential sampling
9. disproportional sampling
10. judgmental sampling
11. snowball sampling
12. quota sampling
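As a rough illustration only, here is a minimal Python sketch of two of the techniques above, simple random sampling and stratified sampling, applied to a hypothetical participant pool; the function names and the pool itself are assumptions made for the example.

    import random

    def simple_random_sample(population, n, seed=None):
        """Draw n participants at random, without replacement."""
        rng = random.Random(seed)
        return rng.sample(population, n)

    def stratified_sample(population, strata_of, n_per_stratum, seed=None):
        """Draw the same number of participants from each stratum (e.g. age group)."""
        rng = random.Random(seed)
        strata = {}
        for person in population:
            strata.setdefault(strata_of(person), []).append(person)
        sample = []
        for members in strata.values():
            sample.extend(rng.sample(members, n_per_stratum))
        return sample

    # Hypothetical participant pool: (id, age_group) pairs.
    pool = [(i, "young" if i % 2 == 0 else "old") for i in range(100)]
    print(simple_random_sample(pool, 10, seed=1))
    print(stratified_sample(pool, strata_of=lambda p: p[1], n_per_stratum=5, seed=1))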
Creating the Design
The research design is chosen based on a range of factors. Important
factors when choosing the design are feasibility, time, cost, ethics,
measurement problems and what you would like to test. The design of the experiment
is critical for the validity
of the results.
Typical Designs and Features in Experimental Design.
Pretest-Posttest Design.
Checks whether the groups differ before the manipulation starts, and measures
the effect of the manipulation. Pretests sometimes influence the effect.
Control Group.
Control groups are designed to measure research bias and
measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control
group is a group not receiving the same manipulation as the experimental group.
Experiments frequently have 2 conditions, but rarely more than 3 conditions at
the same time.
Randomized Controlled Trials.
Randomized sampling, comparison between an experimental group and a control
group, and strict control/randomization of all other variables.
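The random assignment at the heart of a randomized controlled trial can be sketched in a few lines of Python; the participant list and function name below are hypothetical.

    import random

    def randomize_assignment(participants, seed=None):
        """Randomly split participants into a control group and an experimental group."""
        rng = random.Random(seed)
        shuffled = participants[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {"control": shuffled[:half], "experimental": shuffled[half:]}

    groups = randomize_assignment(list(range(1, 21)), seed=42)
    print(groups["control"])
    print(groups["experimental"])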
Solomon Four-Group Design.
With two control groups and two experimental groups. Half the groups have a
pretest and half do not. This tests both the effect itself and the effect of
the pretest.
Between Subjects Design.
Grouping participants into different conditions.
Within Subjects Design.
Participants take part in the different conditions. See also: Repeated Measures Design.
Counterbalanced Measures Design.
Testing the effect of the order of treatments when no control group is
available/ethical.
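One common way to counterbalance is to cycle participants through every possible order of the treatments. Here is a minimal Python sketch of that idea; the treatment and participant labels are invented for the example.

    from itertools import permutations

    treatments = ["A", "B", "C"]

    # Every possible presentation order; participants are cycled through these
    # orders so that order effects average out across the sample.
    orders = list(permutations(treatments))

    participants = [f"P{i}" for i in range(1, 13)]
    schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
    for p, order in schedule.items():
        print(p, order)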
Matched Subjects Design.
Matching participants to create similar experimental and control groups.
Double-Blind Experiment.
Neither the researcher nor the participants know which is the control group.
The results can be affected if the researcher or participants know this.
Bayesian Probability.
Using Bayesian probability to "interact" with participants is a more
"advanced" experimental design. It can be used for settings where
there are many variables which are hard to isolate. The researcher starts with
a set of initial beliefs, and tries to adjust them to how participants have
responded.
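As a simple illustration of updating beliefs from responses, here is a Beta-Binomial sketch in Python; the prior and the yes/no responses are invented for the example and do not represent any particular study.

    # Prior belief about the probability that a participant responds "yes",
    # expressed as a Beta(a, b) distribution; updated after each response.
    a, b = 1.0, 1.0          # uniform prior: no initial preference

    responses = [1, 1, 0, 1, 0, 1, 1]   # hypothetical yes/no answers

    for r in responses:
        a += r               # a "yes" strengthens belief in a high probability
        b += 1 - r           # a "no" strengthens belief in a low probability
        mean = a / (a + b)   # current best estimate after this response
        print(f"after response {r}: estimated P(yes) = {mean:.2f}")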
Pilot Study
It may be wise to conduct a pilot study or two before
you run the real experiment. This ensures that the experiment measures what it
should, and that everything is set up right.
Minor errors, which could potentially destroy the experiment, are often
found during this process. With a pilot study, you can get information about
errors and problems, and improve the design, before putting a lot of effort
into the real experiment.
If the experiments involve humans, a common strategy is to first have a
pilot study with someone involved in the research, but not too closely, and
then arrange a pilot with a person who resembles the subject(s). Those two
different pilots are likely to give the researcher good information about any
problems in the experiment.
Conducting the Experiment
An experiment is typically carried out by manipulating a variable, called
the independent variable, which affects the experimental group. The effect
that the researcher is interested in, the dependent variable(s), is measured.
Identifying and controlling non-experimental factors that the researcher
does not want to influence the effects is crucial to drawing a valid
conclusion. This is often done by controlling variables, if
possible, or randomizing variables to minimize effects that can be traced back
to third variables. Researchers only want to measure the effect of the
independent variable(s) when conducting an experiment,
allowing them to conclude that this was the reason for the effect.
Analysis and Conclusions.
In quantitative
research, the amount of data measured can be enormous. Data not
prepared to be analyzed is called "raw data". The raw data is often
summarized as something called "output data", which typically
consists of one line per subject (or item). A cell
of the output data is, for example, an average of an effect in many trials for
a subject. The output data is used for statistical analysis, e.g. significance
tests, to see if there really is an effect.
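To make the raw data / output data distinction concrete, here is a small Python sketch, assuming SciPy is available for the significance test; all subject scores are invented for the example.

    from statistics import mean
    from scipy import stats   # assumes SciPy is installed

    # Hypothetical raw data: several trial scores per subject, per condition.
    raw = {
        "control":      {"s1": [4, 5, 6], "s2": [5, 5, 4], "s3": [6, 5, 5]},
        "experimental": {"s4": [7, 8, 6], "s5": [8, 7, 7], "s6": [6, 7, 8]},
    }

    # Output data: one value per subject (here, the mean across that subject's trials).
    output = {cond: [mean(trials) for trials in subjects.values()]
              for cond, subjects in raw.items()}

    # Independent-samples t-test on the per-subject means.
    t, p = stats.ttest_ind(output["experimental"], output["control"])
    print(f"t = {t:.2f}, p = {p:.3f}")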
The aim of an analysis is to draw a conclusion,
together with other observations. The researcher might generalize the results to
a wider phenomenon if there is no indication of confounding variables
"polluting" the results.
If the researcher suspects that the effect stems from a different variable
than the independent variable, further investigation is needed to gauge the
validity of the results.
An experiment is often conducted because the scientist wants to know if the
independent variable is having any effect upon the dependent variable.
Correlation between variables is not proof of causation.
Experiments are more often quantitative than qualitative in nature, although
qualitative experiments do occur.