
I am trying to figure out the name for the study design in the following scenario. Imagine a hospital where patients with a certain condition (say, a bacterial infection) are treated with a certain medication (say, antibiotic A). At some point, a decision was made to change the antibiotic used to treat the condition to another medication (antibiotic B). One sample was drawn from time 1 (while antibiotic A was in use) and another sample from time 2 (after the change to antibiotic B). The samples can be considered independent, and differences can be investigated using either an independent samples t-test or a chi-square test. I am not sure what type of study design this is.
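To make the comparison concrete, here is a minimal sketch (with entirely invented numbers) of how the two time-period samples might be compared: an independent samples t-test for a continuous outcome such as length of stay, and a chi-square test for a binary outcome such as cure rate.

```python
# Hypothetical illustration: comparing two independent samples drawn
# from the antibiotic A period and the antibiotic B period.
from scipy import stats

# Invented example data: length of stay in days for each period
los_period_a = [7, 9, 8, 10, 6, 9, 11, 8, 7, 10]
los_period_b = [6, 7, 5, 8, 6, 7, 9, 6, 5, 7]

# Independent samples t-test for the continuous outcome
t_stat, p_val = stats.ttest_ind(los_period_a, los_period_b)

# Chi-square test for a binary outcome (e.g., cured vs. not cured)
#                 cured  not cured
table = [[40, 10],   # period A
         [46,  4]]   # period B
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
print(f"chi2 = {chi2:.2f}, p = {p_chi:.3f}")
```

Whatever the design is ultimately called, the analysis itself is straightforward once the two samples are treated as independent.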

The following are my thoughts so far:

- Causal-comparative or ex post facto [but since I can change the treatment (antibiotic), I am not sure this is right].
- Quasi-experimental: my doubt here is that this is an observational study, and the change in practice (antibiotics) was due to a change in operations, not to experimentation.
- A combination of retrospective and prospective study, as stated here; however, that page and every example of this design that I have seen seem to apply it to THE SAME cohort.

I saw a question about a similar design here, but there was no mention of the study design.

I also asked this question here, and the responses seem to be split between natural experiment and quasi-experimental. However, I am not sure it is either of the two.

All assistance is welcome.

You need one dependent variable that is measured on an interval or ratio scale (see our Types of Variable guide if you need clarification). You also need one categorical variable that has only two related groups.

A dependent t-test is an example of a "within-subjects" or "repeated-measures" statistical test. This indicates that the same participants are tested more than once. Thus, in the dependent t-test, "related groups" indicates that the same participants are present in both groups. The reason that it is possible to have the same participants in each group is because each participant has been measured on two occasions on the same dependent variable. For example, you might have measured the performance of 10 participants in a spelling test (the dependent variable) before and after they underwent a new form of computerised teaching method to improve spelling. You would like to know if the computer training improved their spelling performance. Here, we can use a dependent t-test because we have two related groups. The first related group consists of the participants at the beginning (prior to) the computerised spell training and the second related group consists of the same participants, but now at the end of the computerised training.
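As a sketch of how this spelling example might be analysed (the scores below are invented for illustration), a dependent t-test pairs each participant's before and after measurements:

```python
# Hypothetical spelling scores for 10 participants before and after
# the computerised training (invented numbers for illustration).
from scipy import stats

before = [12, 15, 11, 14, 13, 10, 16, 12, 14, 13]
after  = [15, 17, 14, 16, 15, 13, 18, 14, 17, 15]

# Dependent (paired) t-test: each participant contributes one pair,
# so the test is run on the within-subject differences.
t_stat, p_val = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```

The pairing is what distinguishes this from the independent samples case: the same 10 people appear in both lists, in the same order.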

## What is the study design for 2 samples measured at different times that can be considered independent?

©Richard Lowry, 1999-

All rights reserved.

Chapter 11.

t-Test for the Significance of the Difference between the Means of Two Independent Samples

This is probably the most widely used statistical test of all time, and certainly the most widely known. It is simple, straightforward, easy to use, and adaptable to a broad range of situations. No statistical toolbox should ever be without it.

Its utility is occasioned by the fact that scientific research very often examines the phenomena of nature two variables at a time, with an eye toward answering the basic question: Are these two variables related? If we alter the level of one, will we thereby alter the level of the other? Or alternatively: If we examine two different levels of one variable, will we find them to be associated with different levels of the other?

Here are three examples to give you an idea of how these abstractions might find expression in concrete reality. On the left of each row of cells is a specific research question, and on the right is a brief account of a strategy that might be used to answer it. The first two examples illustrate a very frequently employed form of experimental design that involves randomly sorting the members of a subject pool into two separate groups, treating the two groups differently with respect to a certain independent variable, and then measuring both groups on a certain dependent variable with the aim of determining whether the differential treatment produces differential effects. (Variables: Independent and Dependent.) A quasi-experimental variation on this theme, illustrated by the third example, involves randomly selecting two groups of subjects that already differ with respect to one variable, and then measuring both groups on another variable to determine whether the different levels of the first are associated with different levels of the second.

In each of these cases, the two samples are **independent** of each other in the obvious sense that they are separate samples containing different sets of individual subjects. The individual measures in group A are in no way linked with or related to any of the individual measures in group B, and vice versa. The version of a t-test examined in this chapter will assess the significance of the difference between the means of two such samples, providing: (i) that the two samples are randomly drawn from normally distributed populations and (ii) that the measures of which the two samples are composed are equal-interval.

To illustrate the procedures for this version of the t-test, imagine we were actually to conduct the experiment described in the second of the above examples. We begin with a fairly homogeneous subject pool of 30 college students, randomly sorting them into two groups, A and B, of sizes N_{a}=15 and N_{b}=15. (It is not essential for this procedure that the two samples be of the same size.) We then have the members of each group, one at a time, perform a series of 40 mental tasks while one or the other of the music types is playing in the background. For the members of group A it is music of type-I, while for those of group B it is music of type-II. The following table shows how many of the 40 components of the series each subject was able to complete. Also shown are the means and sums of squared deviates for the two groups.
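A hedged sketch with invented scores (the chapter's original table is not reproduced here) shows both the manual pooled-variance computation this chapter describes and a cross-check against `scipy.stats.ttest_ind`:

```python
import math
from scipy import stats

# Invented scores: tasks completed (out of 40) by each of 15 subjects
group_a = [30, 28, 34, 32, 29, 31, 33, 27, 35, 30, 32, 28, 31, 33, 29]  # type-I music
group_b = [26, 29, 25, 28, 27, 24, 30, 26, 28, 25, 27, 29, 24, 26, 28]  # type-II music

n_a, n_b = len(group_a), len(group_b)
mean_a = sum(group_a) / n_a
mean_b = sum(group_b) / n_b

# Sums of squared deviates for each group
ss_a = sum((x - mean_a) ** 2 for x in group_a)
ss_b = sum((x - mean_b) ** 2 for x in group_b)

# Pooled-variance estimate and standard error of the mean difference
pooled_var = (ss_a + ss_b) / (n_a + n_b - 2)
se_diff = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
t_manual = (mean_a - mean_b) / se_diff

# Cross-check against scipy's pooled-variance implementation
t_scipy, p = stats.ttest_ind(group_a, group_b)
print(f"manual t = {t_manual:.3f}, scipy t = {t_scipy:.3f}")
```

With equal-variance pooling (scipy's default), the manual and library values agree to floating-point precision.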

Recall from Chapter 7 that whenever you perform a statistical test, what you are testing, fundamentally, is the null hypothesis. In general, the null hypothesis is the logical antithesis of whatever hypothesis it is that the investigator is seeking to examine. For the present example, the research hypothesis is that the two types of music have different effects, so the null hypothesis is that they do not have different effects. Its immediate implication is that any difference we find between the means of the two samples should not significantly differ from zero.

The groundwork for the following points is laid down in Chapter 9.

Figure 11.1 shows the sampling distribution of **t** for df=28. Also shown is the portion of the table of critical values of **t** (Appendix C) that pertains to it. The designation "**t**_{obs}" refers to our observed value of **t**. We started out with the directional research hypothesis that task performance would be better for group A than for group B, and as our observed result proved consistent with that hypothesis, the relevant critical values of **t** are those that pertain to a directional test of significance: 1.70 for the .05 level of significance, 2.05 for the .025 level, 2.47 for the .01 level, and so on.

**Figure 11.1. Sampling Distribution of t for df=28**

If our observed value of **t** had ended up smaller than 1.70, the result of the experiment would be non-significant vis-à-vis the conventional criterion that the mere-chance probability of a result must be equal to or less than .05. If it had come out at precisely 1.70, we would conclude that the result is significant **at** the .05 level. As it happens, the observed **t** meets and somewhat exceeds the 1.70 critical value, so we conclude that our result is significant somewhat **beyond** the .05 level. If the observed **t** had been equal to or greater than 2.05, we would have been able to regard the result as significant at or beyond the .025 level and so on.
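The critical values quoted above can be reproduced from the quantile function of the t distribution with 28 degrees of freedom, for example:

```python
# One-tailed critical values of t for df = 28, as quoted in the text.
from scipy import stats

df = 28
crit_05  = stats.t.ppf(1 - 0.05,  df)   # .05 level
crit_025 = stats.t.ppf(1 - 0.025, df)   # .025 level
crit_01  = stats.t.ppf(1 - 0.01,  df)   # .01 level
print(round(crit_05, 2), round(crit_025, 2), round(crit_01, 2))
```

Rounded to two decimals these come out as 1.70, 2.05, and 2.47, matching the table entries the chapter cites.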

The same logic would have applied to the left tail of the distribution if our initial research hypothesis had been in the opposite direction, stipulating that task performance would be better with music of type-II than with music of type-I. In this case we would have expected **M**_{Xa} to be smaller than **M**_{Xb}, which would have entailed a negative sign for the resulting value of **t**.

If, on the other hand, we had begun with no directional hypothesis at all, we would in effect have been expecting either **M**_{Xa}>**M**_{Xb} or **M**_{Xa}<**M**_{Xb}, and that disjunctive expectation ("either the one or the other") would have required a non-directional, two-tailed test. Note that for a non-directional test our observed value of **t** (which, for a two-tailed test, would have to be regarded as ±**t**_{obs}) would **not** be significant at the minimal .05 level. (The distinction between directional and non-directional tests of significance is introduced in Chapter 7.)

In this particular case, however, we did begin with a directional hypothesis, and the obtained result as assessed by a directional test is significant beyond the .05 level. The practical, bottom-line meaning of this conclusion is that the likelihood of our experimental result having come about through mere random variability— mere chance coincidence, "sampling error," the luck of the scientific draw— is somewhat less than 5%; hence, we can have about 95% confidence that the observed result reflects something **more** than mere random variability. For the present example, this "something more" would presumably be a genuine difference between the effects of the two types of music on the performance of this particular type of task.

**¶Step-by-Step Computational Procedure: t-Test for the Significance of the Difference between the Means of Two Independent Samples**

**Step 3.** Estimate the standard deviation of the sampling distribution of sample-mean differences (the "standard error" of **M**_{Xa}−**M**_{Xb}) as

est. SE = sqrt{ [(SS_{a} + SS_{b}) / (N_{a} + N_{b} − 2)] × (1/N_{a} + 1/N_{b}) }

**End of Chapter 11.**

Return to Top of Chapter 11

Go to Subchapter 11a [Mann-Whitney Test]

Go to Chapter 12 [**t**-Test for Two Correlated Samples]


Researchers often use charts or graphs to visualize the results of their studies. The norm is to place the independent variable on the “x”or horizontal axis and the dependent variable on the “y” or vertical axis.

For instance, how might a graph look from our example study on the impact of a new medication on blood pressure?

In psychological research and other types of social research, experimenters typically rely on a few different sampling methods.

### 1. Probability Sampling

Probability sampling means that every individual in a population stands a chance of being selected. Because probability sampling involves random selection, it ensures that every subset of the population has an equal chance of being represented in the sample. This makes probability samples more representative, and researchers are better able to generalize their results to the group as a whole.

There are a few different types of probability sampling:

- **Simple random sampling** is, as the name suggests, the simplest type of probability sampling. Researchers take every individual in a population and randomly select their sample, often using some type of computer program or random number generator.
- **Stratified random sampling** involves separating the population into subgroups and then taking a simple random sample from each of these subgroups. For example, researchers might divide the population up into subgroups based on race, gender, or age and then take a simple random sample of each of these groups. Stratified random sampling often provides greater statistical accuracy than simple random sampling and helps ensure that certain groups are accurately represented in the sample.
- **Cluster sampling** involves dividing a population into smaller clusters, often based upon geographic location or boundaries. A random sample of these clusters is then selected, and all of the subjects within the cluster are measured. For example, imagine that you are trying to do a study on school principals in your state. Collecting data from every single school principal would be cost-prohibitive and time-consuming. Using a cluster sampling method, you randomly select five counties from your state and then collect data from every subject in each of those five counties.
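As an illustration of the first two methods, here is a small sketch using Python's standard library with an invented population of 300 people tagged by age group:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical population: (id, age_group) pairs
population = [(i, random.choice(["18-34", "35-54", "55+"])) for i in range(300)]

# Simple random sample of 30 individuals drawn from the whole population
simple_sample = random.sample(population, 30)

# Stratified random sample: a simple random draw of 10 from each stratum
strata = defaultdict(list)
for person in population:
    strata[person[1]].append(person)
stratified_sample = [p for group in strata.values() for p in random.sample(group, 10)]

print(len(simple_sample), len(stratified_sample))
```

The stratified draw guarantees 10 members per age group, whereas the simple random sample may over- or under-represent any group by chance.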

### 2. Nonprobability Sampling

Nonprobability sampling, on the other hand, involves selecting participants using methods that do not give every subset of a population an equal chance of being represented. For example, a study may recruit participants from volunteers. One problem with this type of sample is that volunteers might differ from non-volunteers on certain variables, which might make it difficult to generalize the results to the entire population.

There are also a couple of different types of nonprobability sampling:

- **Convenience sampling** involves using participants in a study because they are convenient and available. If you have ever volunteered for a psychology study conducted through your university's psychology department, then you have participated in a study that relied on a convenience sample. Studies that rely on asking for volunteers or on clinical samples that are available to the researcher are also examples of convenience samples.
- **Purposive sampling** involves seeking out individuals that meet certain criteria. For example, marketers might be interested in learning how their products are perceived by women between the ages of 18 and 35. They might hire a market research firm to conduct telephone interviews that intentionally seek out and interview women that meet their age criteria.
- **Quota sampling** involves intentionally sampling specific proportions of each subgroup within a population. For example, political pollsters might be interested in researching the opinions of a population on a certain political issue. If they use simple random sampling, they might miss certain subsets of the population by chance. Instead, they establish criteria to assign each subgroup a certain percentage of the sample. Unlike stratified sampling, researchers use non-random methods to fill the quotas for each subgroup.
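A minimal sketch of quota sampling (with invented respondents and quotas) shows the non-random, first-come-first-served way quotas are typically filled:

```python
# Quota sampling sketch: fill a fixed quota per subgroup by taking
# respondents in arrival order (a non-random method), unlike the
# random within-stratum draws of stratified sampling.
respondents = [
    ("r1", "18-34"), ("r2", "35-54"), ("r3", "18-34"), ("r4", "55+"),
    ("r5", "35-54"), ("r6", "18-34"), ("r7", "55+"), ("r8", "35-54"),
    ("r9", "55+"), ("r10", "18-34"),
]
quotas = {"18-34": 2, "35-54": 2, "55+": 2}

sample, counts = [], {g: 0 for g in quotas}
for rid, group in respondents:       # first come, first served
    if counts[group] < quotas[group]:
        sample.append(rid)
        counts[group] += 1

print(sample)  # the first two arrivals from each subgroup
```

Because arrival order rather than random selection fills each quota, later arrivals in a full subgroup are simply skipped.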

Learn more about some of the ways that probability and nonprobability samples differ.

## Module 2: Study Design and Sampling

**Cross-sectional studies** are simple in design and are aimed at finding out the prevalence of a phenomenon, problem, attitude or issue by taking a snap-shot or cross-section of the population. This obtains an overall picture as it stands at the time of the study. For example, a cross-sectional design would be used to assess demographic characteristics or community attitudes. These studies usually involve one contact with the study population and are relatively cheap to undertake.

**Pre-test/post-test studies** measure the change in a situation, phenomenon, problem or attitude. Such studies are often used to measure the efficacy of a program. These studies can be seen as a variation of the cross-sectional design as they involve two sets of cross-sectional data collection on the same population to determine if a change has occurred.

**Retrospective studies** investigate a phenomenon or issue that has occurred in the past. Such studies most often involve secondary data collection, based upon data available from previous studies or databases. For example, a retrospective study would be needed to examine the relationship between levels of unemployment and street crime in NYC over the past 100 years.

**Prospective studies** seek to estimate the likelihood of an event or problem in the future. Thus, these studies attempt to predict what the outcome of an event is to be. General science experiments are often classified as prospective studies because the experimenter must wait until the experiment runs its course in order to examine the effects. Randomized controlled trials are always prospective studies and often involve following a “cohort” of individuals to determine the relationship between various variables.

**Longitudinal studies** follow study subjects over a long period of time with repeated data collection throughout. Some longitudinal studies last several months, while others can last decades. Most are observational studies that seek to identify a correlation among various factors. Thus, longitudinal studies do not manipulate variables and are not often able to detect causal relationships.

## Different Types of Sampling Design in Research Methodology

There are different types of sample designs based on two factors viz., the representation basis and the element selection technique. On the representation basis, the sample may be probability sampling or it may be non-probability sampling. Probability sampling is based on the concept of random selection, whereas non-probability sampling is ‘non-random’ sampling. On element selection basis, the sample may be either unrestricted or restricted. When each sample element is drawn individually from the population at large, then the sample so drawn is known as an ‘unrestricted sample’, whereas all other forms of sampling are covered under the term ‘restricted sampling’. The following chart exhibits the sample designs as explained above.

Thus, sample designs are basically of two types viz., non-probability sampling and probability sampling. We take up these two designs separately.

**CHART SHOWING BASIC SAMPLING DESIGNS**

**Non-probability sampling:** Non-probability sampling is that sampling procedure which does not afford any basis for estimating the probability that each item in the population has of being included in the sample. Non-probability sampling is also known by different names such as deliberate sampling, purposive sampling and judgement sampling. In this type of sampling, items for the sample are selected deliberately by the researcher; his choice concerning the items remains supreme. In other words, under non-probability sampling the organisers of the inquiry purposively choose the particular units of the universe for constituting a sample on the basis that the small mass that they so select out of a huge one will be typical or representative of the whole. For instance, if economic conditions of people living in a state are to be studied, a few towns and villages may be purposively selected for intensive study on the principle that they can be representative of the entire state. Thus, the judgement of the organisers of the study plays an important part in this sampling design.

## When to use a cross-sectional design

When you want to examine the prevalence of some outcome at a certain moment in time, a cross-sectional study is the best choice.

Example: You want to know how many families with children in New York City are currently low-income so you can estimate how much money is required to fund a free lunch program in public schools. Because all you need to know is the current number of low-income families, a cross-sectional study should provide you with all the data you require.

Sometimes a cross-sectional study is the best choice for practical reasons – for instance, if you only have the time or money to collect cross-sectional data, or if the only data you can find to answer your research question was gathered at a single point in time.

As cross-sectional studies are cheaper and less time-consuming than many other types of study, they allow you to easily collect data that can be used as a basis for further research.

### Descriptive vs analytical studies

Cross-sectional studies can be used for both analytical and descriptive purposes.

## Managing the Challenges of Repeated Measures Designs

Repeated measures designs have some disadvantages compared to designs that have independent groups. The biggest drawbacks are known as order effects, and they are caused by exposing the subjects to multiple treatments. Order effects are related to the order that treatments are given but not due to the treatment itself. For example, scores can decrease over time due to fatigue, or increase due to learning. In taste tests, a dry wine may get a higher rank if it was preceded by a drier wine and a lower rank if preceded by a sweeter wine. Order effects can interfere with the analysis’ ability to correctly estimate the effect of the treatment itself.

There are various methods you can use to reduce these problems in repeated measures designs. These methods include randomization, allowing time between treatments, and counterbalancing the order of treatments among others. Finally, it’s always good to remember that an independent groups design is an alternative for avoiding order effects.
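A small sketch of counterbalancing (with hypothetical subject labels) assigns half the subjects to each treatment order so that order effects cancel across groups:

```python
import random

random.seed(1)
subjects = [f"S{i}" for i in range(1, 9)]
random.shuffle(subjects)

# Counterbalancing: half the subjects receive treatments in A-then-B
# order, the other half in B-then-A, so that any fatigue or learning
# effect applies equally often to each treatment position.
half = len(subjects) // 2
assignments = {s: ("A", "B") for s in subjects[:half]}
assignments.update({s: ("B", "A") for s in subjects[half:]})

ab = sum(1 for order in assignments.values() if order == ("A", "B"))
print(f"A-first: {ab}, B-first: {len(assignments) - ab}")
```

The random shuffle before splitting ensures the order assignment itself is random, combining two of the remedies listed above.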

Below is a very common crossover repeated measures design. Studies that use this type of design are as diverse as assessing different advertising campaigns, training programs, and pharmaceuticals. In this design, subjects are randomly assigned to the two groups and you can add additional treatments and a control group as needed.

There are many different types of repeated measures designs and it’s beyond the scope of this post to cover all of them. Each study must carefully consider which design meets the specific needs of the study.

For more information about different types of repeated measures designs, how to arrange the worksheet, and how to perform the analysis in Minitab, see Analyzing a repeated measures design. Also, learn how to use Minitab to analyze a Latin square with repeated measures design. Now, let’s use Minitab to perform a complex repeated measures ANOVA!

## What is a Cohort Study and its Types

**There are 2 types of this analysis:** retrospective and prospective. If a group of subjects is formed at the present time and the observation will take place in the future, it is a prospective cohort study.

In sociology, this option is used quite often. A cohort can also be created by starting from information about the influence of risk factors and analyzing it up to the present moment; in this case, it is a retrospective cohort study. The most striking example of a prospective study is the research on nurses' health. In the framework of this study, all nurses are asked the same carefully designed questions, which help to track how one or another pathology develops.

After the collection of information, the subjects are observed over a certain time, from which scientists reveal the connection between way of life and the development of disease. A retrospective study, on the other hand, uses information about the disease collected during some period in the past; therefore, retrospective studies are also called historical. Retrospective studies ask what events and experiences from a person's life could have affected his current state, for example, the impact of unemployment on the resumption of criminal activity by a former prisoner.

### Cohort Study Advantages and Disadvantages

#### Advantages:

This type of research has a lot of advantages. First of all, it is connected with the possibility of obtaining reliable information about the source of risk factors. At the same time, it is possible to determine in advance what data is needed and to collect these data in full. A cohort study also allows simultaneous identification of several risk factors for the effects studied. For instance, risk factors for cardiovascular disease and cancer in the study of nursing health.

Also, it allows assessing a wide range of outcomes associated with the effect of a single factor, as well as a wide range of factors for one outcome.

#### Disadvantages:

However, a cohort study may be ineffective and expensive if the outcome is rare, since it involves a multitude of subjects in whom the outcome is not found; this method is therefore not suitable for rare diseases, for example. In addition, the results take a long time to emerge. This is less true of historical cohort studies, but in that case the quality of the data may suffer, since retrospective research requires the availability of reliable and sufficiently detailed information on the impact of risk factors.

#### A Few Words about a Case-Control Example

Case-control studies are a retrospective comparison of the two groups. For example, people who have fallen ill are compared with a group that does not suffer from a disease.

The study investigates the existence of a difference between past exposures to possible risk factors in the two groups. This type of research is suitable for studying the risk factors of rare diseases, and it is often used to develop new hypotheses. One of the most famous case-control studies is the research establishing a connection between smoking and the development of lung cancer. Although for many years this method of research was called into question, scientists managed to prove the existence of a cause-and-effect relationship between smoking and the disease.
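The usual summary statistic in a case-control analysis is the odds ratio from a 2×2 exposure table; the sketch below uses invented counts in the spirit of the smoking example:

```python
# A hedged sketch of the usual case-control summary statistic: the
# odds ratio from a 2x2 exposure table (all counts invented).
#            exposed  unexposed
cases    = [80, 20]   # e.g., lung-cancer patients who smoked / did not
controls = [40, 60]   # comparison group without the disease

a, b = cases
c, d = controls
odds_ratio = (a * d) / (b * c)   # cross-product ratio
print(f"odds ratio = {odds_ratio}")
```

An odds ratio well above 1 suggests the exposure is more common among cases than controls, which is the kind of signal the smoking studies pursued.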

### Cohort Study vs Case Control

#### Cohort study

#### Case-control study

Both cohort study and case-control research are observational studies of risk factors. Sometimes they are confused with each other. But as we see, the distinctive feature of the method of case-control research is that by the time the investigation began, all the outcomes studied had already occurred. In a cohort study at the beginning of the observation, when risk factors are evaluated, the participants do not yet have the disease being studied. Since the existence of a connection in time between the intended cause and the result serves as an important criterion for evaluating cause-effect relationships, cohort research provides more accurate information.


In psychological research and other types of social research, experimenters typically rely on a few different sampling methods.

### 1. Probability Sampling

Probability sampling means that every individual in a population stands a chance of being selected. Because probability sampling involves random selection, it ensures that every subset of the population has an equal chance of being represented in the sample. This makes probability samples more representative, and researchers are better able to generalize their results to the group as a whole.

There are a few different types of probability sampling:

**Simple random sampling**is, as the name suggests, the simplest type of probability sampling. Researchers take every individual in a population and randomly select their sample, often using some type of computer program or random number generator.**Stratified random sampling**involves separating the population into subgroups and then taking a simple random sample from each of these subgroups. For example, research might divide the population up into subgroups based on race, gender, or age and then take a simple random sample of each of these groups. Stratified random sampling often provides greater statistical accuracy than simple random sampling and helps ensure that certain groups are accurately represented in the sample.**Cluster sampling**involves dividing a population into smaller clusters, often based upon geographic location or boundaries. A random sample of these clusters is then selected, and all of the subjects within the cluster are measured. For example, imagine that you are trying to do a study on school principals in your state. Collecting data from every single school principal would be cost-prohibitive and time-consuming. Using a cluster sampling method, you randomly select five counties from your state and then collect data from every subject in each of those five counties.

### 2. Nonprobability Sampling

Nonprobability sampling, on the other hand, involves selecting participants using methods that do not give every subset of a population an equal chance of being represented. For example, a study may recruit participants from volunteers. One problem with this type of sample is that volunteers might differ from non-volunteers on certain variables, which might make it difficult to generalize the results to the entire population.

There are also a couple of different types of nonprobability sampling:

**Convenience sampling**involves using participants in a study because they are convenient and available. If you have ever volunteered for a psychology study conducted through your university's psychology department, then you have participated in a study that relied on a convenience sample. Studies that rely on asking for volunteers or by using clinical samples that are available to the researcher are also examples of convenience samples.**Purposive sampling**involves seeking out individuals that meet certain criteria. For example, marketers might be interested in learning how their products are perceived by women between the ages of 18 and 35. They might hire a market research firm to conduct telephone interviews that intentionally seek out and interview women that meet their age criteria.**Quota sampling**involves intentionally sampling specific proportions of each subgroup within a population. For example, political pollsters might be interested in researching the opinions of a population on a certain political issue. If they use simple random sampling, they might miss certain subsets of the population by chance. Instead, they establish criteria to assign each subgroup a certain percentage of the sample. Unlike stratified sampling, researchers use non-random methods to fill the quotas for each subgroup.



Researchers often use charts or graphs to visualize the results of their studies. The norm is to place the independent variable on the "x" or horizontal axis and the dependent variable on the "y" or vertical axis.

For instance, how might a graph look from our example study on the impact of a new medication on blood pressure?
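One possible version of such a graph can be sketched with matplotlib. The dosage and blood-pressure numbers are invented for illustration; the point is only the axis convention described above.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical data: dosage (independent variable) on the x-axis,
# blood pressure (dependent variable) on the y-axis.
dosage_mg = [0, 10, 20, 30, 40]
systolic_bp = [145, 138, 131, 126, 122]

fig, ax = plt.subplots()
ax.plot(dosage_mg, systolic_bp, marker="o")
ax.set_xlabel("Dosage (mg)")            # independent variable on x
ax.set_ylabel("Systolic BP (mm Hg)")    # dependent variable on y
fig.savefig("dose_response.png")
```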

## What is the study design for 2 samples measured at different times that can be considered independent? - Psychology

©Richard Lowry, 1999-

All rights reserved.

Chapter 11.

t-Test for the Significance of the Difference between the Means of Two Independent Samples

This is probably the most widely used statistical test of all time, and certainly the most widely known. It is simple, straightforward, easy to use, and adaptable to a broad range of situations. No statistical toolbox should ever be without it.

Its utility is occasioned by the fact that scientific research very often examines the phenomena of nature two variables at a time, with an eye toward answering the basic question: Are these two variables related? If we alter the level of one, will we thereby alter the level of the other? Or alternatively: If we examine two different levels of one variable, will we find them to be associated with different levels of the other?

Here are three examples to give you an idea of how these abstractions might find expression in concrete reality. On the left of each row of cells is a specific research question, and on the right is a brief account of a strategy that might be used to answer it. The first two examples illustrate a very frequently employed form of experimental design that involves randomly sorting the members of a subject pool into two separate groups, treating the two groups differently with respect to a certain independent variable, and then measuring both groups on a certain dependent variable with the aim of determining whether the differential treatment produces differential effects. (Variables: Independent and Dependent.) A quasi-experimental variation on this theme, illustrated by the third example, involves randomly selecting two groups of subjects that already differ with respect to one variable, and then measuring both groups on another variable to determine whether the different levels of the first are associated with different levels of the second.

In each of these cases, the two samples are **independent** of each other in the obvious sense that they are separate samples containing different sets of individual subjects. The individual measures in group A are in no way linked with or related to any of the individual measures in group B, and vice versa. The version of a t-test examined in this chapter will assess the significance of the difference between the means of two such samples, providing: (i) that the two samples are randomly drawn from normally distributed populations and (ii) that the measures of which the two samples are composed are equal-interval.

To illustrate the procedures for this version of the t-test, imagine we were actually to conduct the experiment described in the second of the above examples. We begin with a fairly homogeneous subject pool of 30 college students, randomly sorting them into two groups, A and B, of sizes N_{a}=15 and N_{b}=15. (It is not essential for this procedure that the two samples be of the same size.) We then have the members of each group, one at a time, perform a series of 40 mental tasks while one or the other of the music types is playing in the background. For the members of group A it is music of type-I, while for those of group B it is music of type-II. The following table shows how many of the 40 components of the series each subject was able to complete. Also shown are the means and sums of squared deviates for the two groups.

Recall from Chapter 7 that whenever you perform a statistical test, what you are testing, fundamentally, is the null hypothesis. In general, the null hypothesis is the logical antithesis of whatever hypothesis it is that the investigator is seeking to examine. For the present example, the research hypothesis is that the two types of music have different effects, so the null hypothesis is that they do not have different effects. Its immediate implication is that any difference we find between the means of the two samples should not significantly differ from zero.

The groundwork for the following points is laid down in Chapter 9.

Figure 11.1 shows the sampling distribution of **t** for df=28. Also shown is the portion of the table of critical values of **t** (Appendix C) that pertains to this case. The designation "**t**_{obs}" refers to our observed value of **t**. We started out with the directional research hypothesis that task performance would be better for group A than for group B, and since our observed result proved consistent with that hypothesis, the relevant critical values of **t** are those that pertain to a directional test of significance: 1.70 for the .05 level of significance, 2.05 for the .025 level, 2.47 for the .01 level, and so on.

**Figure 11.1. Sampling Distribution of t for df=28**

If our observed value of **t** had ended up smaller than 1.70, the result of the experiment would be non-significant vis-à-vis the conventional criterion that the mere-chance probability of a result must be equal to or less than .05. If it had come out at precisely 1.70, we would conclude that the result is significant **at** the .05 level. As it happens, the observed **t** meets and somewhat exceeds the 1.70 critical value, so we conclude that our result is significant somewhat **beyond** the .05 level. If the observed **t** had been equal to or greater than 2.05, we would have been able to regard the result as significant at or beyond the .025 level and so on.
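The table-lookup logic just described can be expressed as a small function. It uses only the one-tailed critical values for df=28 quoted in the text; the function name is an illustrative choice.

```python
# One-tailed critical values of t for df = 28, as quoted above,
# ordered from most to least stringent.
critical = [(2.47, ".01"), (2.05, ".025"), (1.70, ".05")]

def significance_level(t_obs):
    """Return the smallest tabulated level the observed t reaches, else None."""
    for cutoff, level in critical:
        if t_obs >= cutoff:
            return level
    return None
```

For example, an observed t of 1.83 would be significant beyond the .05 level but not the .025 level, while a t below 1.70 would be non-significant by the conventional criterion.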

The same logic would have applied to the left tail of the distribution if our initial research hypothesis had been in the opposite direction, stipulating that task performance would be better with music of type-II than with music of type-I. In this case we would have expected **M**_{Xa} to be smaller than **M**_{Xb}, which would have entailed a negative sign for the resulting value of **t**.

If, on the other hand, we had begun with no directional hypothesis at all, we would in effect have been expecting either **M**_{Xa}>**M**_{Xb} or **M**_{Xa}<**M**_{Xb}, and that disjunctive expectation ("either the one or the other") would have required a non-directional, two-tailed test. Note that for a non-directional, two-tailed test our observed value of **t** would **not** be significant at the minimal .05 level. (The distinction between directional and non-directional tests of significance is introduced in Chapter 7.)

In this particular case, however, we did begin with a directional hypothesis, and the obtained result as assessed by a directional test is significant beyond the .05 level. The practical, bottom-line meaning of this conclusion is that the likelihood of our experimental result having come about through mere random variability— mere chance coincidence, "sampling error," the luck of the scientific draw— is somewhat less than 5%; hence, we can have about 95% confidence that the observed result reflects something **more** than mere random variability. For the present example, this "something more" would presumably be a genuine difference between the effects of the two types of music on the performance of this particular type of task.

**Step-by-Step Computational Procedure: t-Test for the Significance of the Difference between the Means of Two Independent Samples**

**Step 3.** Estimate the standard deviation of the sampling distribution of sample-mean differences (the "standard error" of **M**_{Xa}−**M**_{Xb}) as

est. σ_{M−M} = sqrt[ ((SS_{a}+SS_{b}) / (N_{a}+N_{b}−2)) × (1/N_{a} + 1/N_{b}) ]

**End of Chapter 11.**
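The whole computation, including the Step 3 standard error, can be sketched in plain Python. The scores below are hypothetical stand-ins (two groups of 15, as in the experiment described above), since the chapter's own data table is not reproduced here.

```python
import math

# Hypothetical scores: tasks completed out of 40 by each of 15 subjects
# per group (group A heard type-I music, group B type-II).
group_a = [26, 25, 28, 30, 27, 24, 29, 31, 26, 28, 25, 27, 30, 29, 26]
group_b = [24, 23, 26, 25, 22, 27, 24, 25, 23, 26, 24, 22, 25, 26, 23]

n_a, n_b = len(group_a), len(group_b)
mean_a = sum(group_a) / n_a
mean_b = sum(group_b) / n_b

# Sums of squared deviates for each group
ss_a = sum((x - mean_a) ** 2 for x in group_a)
ss_b = sum((x - mean_b) ** 2 for x in group_b)

# Step 3: estimated standard error of the difference between sample means,
# using the pooled variance estimate with df = (n_a - 1) + (n_b - 1) = 28
pooled_var = (ss_a + ss_b) / (n_a + n_b - 2)
se_diff = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))

# Observed t: mean difference divided by its standard error
t_obs = (mean_a - mean_b) / se_diff
```

With these invented scores the observed t comfortably exceeds the df=28 directional critical value of 2.47 quoted earlier, so the (hypothetical) result would be significant beyond the .01 level.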

Return to Top of Chapter 11

Go to Subchapter 11a [Mann-Whitney Test]

Go to Chapter 12 [**t**-Test for Two Correlated Samples]

You need one dependent variable that is measured on an interval or ratio scale (see our Types of Variable guide if you need clarification). You also need one categorical variable that has only two related groups.

A dependent t-test is an example of a "within-subjects" or "repeated-measures" statistical test. This indicates that the same participants are tested more than once. Thus, in the dependent t-test, "related groups" indicates that the same participants are present in both groups. The reason that it is possible to have the same participants in each group is that each participant has been measured on two occasions on the same dependent variable. For example, you might have measured the performance of 10 participants in a spelling test (the dependent variable) before and after they underwent a new form of computerised teaching method to improve spelling. You would like to know if the computer training improved their spelling performance. Here, we can use a dependent t-test because we have two related groups. The first related group consists of the participants prior to the computerised spelling training, and the second related group consists of the same participants, but now at the end of the computerised training.
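The before/after example can be sketched as a hand computation of the dependent t: take each participant's difference score, then divide the mean difference by its standard error. The scores below are invented for illustration.

```python
import math
import statistics

# Hypothetical spelling scores for the same 10 participants, measured
# before and after the computerised training (paired by position).
before = [12, 15, 11, 14, 13, 10, 16, 12, 14, 13]
after  = [15, 17, 13, 16, 14, 12, 18, 14, 16, 15]

# Within-subject change scores -- the pairing is what makes the test "dependent"
diffs = [a - b for a, b in zip(after, before)]

mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)                 # sample SD of the differences
t = mean_d / (sd_d / math.sqrt(len(diffs)))    # dependent t, df = n - 1 = 9
```

A positive t here indicates improvement after training; whether it is significant would be judged against the critical values of t for df=9.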

## Module 2: Study Design and Sampling

**Cross-sectional studies** are simple in design and are aimed at finding out the prevalence of a phenomenon, problem, attitude or issue by taking a snap-shot or cross-section of the population. This obtains an overall picture as it stands at the time of the study. For example, a cross-sectional design would be used to assess demographic characteristics or community attitudes. These studies usually involve one contact with the study population and are relatively cheap to undertake.

**Pre-test/post-test studies** measure the change in a situation, phenomenon, problem or attitude. Such studies are often used to measure the efficacy of a program. These studies can be seen as a variation of the cross-sectional design as they involve two sets of cross-sectional data collection on the same population to determine if a change has occurred.

**Retrospective studies** investigate a phenomenon or issue that has occurred in the past. Such studies most often involve secondary data collection, based upon data available from previous studies or databases. For example, a retrospective study would be needed to examine the relationship between levels of unemployment and street crime in NYC over the past 100 years.

**Prospective studies** seek to estimate the likelihood of an event or problem in the future. Thus, these studies attempt to predict what the outcome of an event is to be. General science experiments are often classified as prospective studies because the experimenter must wait until the experiment runs its course in order to examine the effects. Randomized controlled trials are always prospective studies and often involve following a "cohort" of individuals to determine the relationship between various variables.

**Longitudinal studies** follow study subjects over a long period of time with repeated data collection throughout. Some longitudinal studies last several months, while others can last decades. Most are observational studies that seek to identify a correlation among various factors. Thus, longitudinal studies do not manipulate variables and are not often able to detect causal relationships.

## Different types of Sampling Design in Research Methodology - Research Methodology

There are different types of sample designs based on two factors viz., the representation basis and the element selection technique. On the representation basis, the sample may be probability sampling or it may be non-probability sampling. Probability sampling is based on the concept of random selection, whereas non-probability sampling is 'non-random' sampling. On element selection basis, the sample may be either unrestricted or restricted. When each sample element is drawn individually from the population at large, then the sample so drawn is known as an 'unrestricted sample', whereas all other forms of sampling are covered under the term 'restricted sampling'. The following chart exhibits the sample designs as explained above.

Thus, sample designs are basically of two types viz., non-probability sampling and probability sampling. We take up these two designs separately.

**CHART SHOWING BASIC SAMPLING DESIGNS**

**Non-probability sampling:** Non-probability sampling is that sampling procedure which does not afford any basis for estimating the probability that each item in the population has of being included in the sample. Non-probability sampling is also known by different names such as deliberate sampling, purposive sampling and judgement sampling. In this type of sampling, items for the sample are selected deliberately by the researcher; his choice concerning the items remains supreme. In other words, under non-probability sampling the organisers of the inquiry purposively choose the particular units of the universe for constituting a sample on the basis that the small mass that they so select out of a huge one will be typical or representative of the whole. For instance, if the economic conditions of people living in a state are to be studied, a few towns and villages may be purposively selected for intensive study on the principle that they can be representative of the entire state. Thus, the judgement of the organisers of the study plays an important part in this sampling design.

## When to use a cross-sectional design

When you want to examine the prevalence of some outcome at a certain moment in time, a cross-sectional study is the best choice.

Example: You want to know how many families with children in New York City are currently low-income so you can estimate how much money is required to fund a free lunch program in public schools. Because all you need to know is the current number of low-income families, a cross-sectional study should provide you with all the data you require.

Sometimes a cross-sectional study is the best choice for practical reasons – for instance, if you only have the time or money to collect cross-sectional data, or if the only data you can find to answer your research question was gathered at a single point in time.

As cross-sectional studies are cheaper and less time-consuming than many other types of study, they allow you to easily collect data that can be used as a basis for further research.

### Descriptive vs analytical studies

Cross-sectional studies can be used for both analytical and descriptive purposes: