New South Wales Public Health Bulletin
Supporting public health practice in New South Wales
EDITORIAL

Doing good qualitative research in public health: not as easy as it looks

Stacy M. Carter A B G , Jan E. Ritchie C D E and Peter Sainsbury B F

A Centre for Values, Ethics and the Law in Medicine, University of Sydney

B School of Public Health, University of Sydney

C School of Public Health and Community Medicine, University of New South Wales

D School of Public Health, Griffith University

E International Union for Health Promotion and Education

F Population Health, Sydney South West Area Health Service

G Corresponding author. Email: carters@med.usyd.edu.au

NSW Public Health Bulletin 20(8) 105-111 https://doi.org/10.1071/NB09018
Published: 7 September 2009

Abstract

In this paper, we discuss qualitative research for public health professionals. Quality matters in qualitative research, but the principles by which it is judged are critically different from those used to judge epidemiology. Compared to quantitative research, good quality qualitative studies serve different aims, answer distinct research questions and have their own logic for sampling, data collection and analysis. There is, however, no need for antagonism between qualitative research and epidemiology; the two are complementary. With theoretical and methodological guidance from experienced qualitative researchers, public health professionals can learn how to make the most of qualitative research for themselves.

On qualitative research and public health

This issue of the NSW Public Health Bulletin presents examples of qualitative enquiry in public health. To introduce these papers, we will make some arguments about qualitative enquiry. What is ‘good’ qualitative research? What is ‘poor’ qualitative research? How can we tell the difference? Why does it matter? How can you improve the quality of the qualitative research you commission or conduct?

Qualitative research is at a high-point of popularity in public health in Australia. As a rough and limited metric, we searched Medline on 19 June 2009 using the search string ((qualitative research.mp. OR Qualitative Research/ OR qualitative method*.mp. OR qualitative stud*.mp.) AND exp Public Health/ AND (australia.mp. or exp Australia/)). This search returned no hits before 1990, 57 papers published between 1991 and 2000, and 640 papers for the period 2001 to 2009. You might expect that, as qualitative researchers, we would be celebrating! Rather, we have shared concerns that the new-found popularity of qualitative research in public health and health services might be its downfall. We worry that it may produce so much slipshod qualitative research that audiences lose faith in it as a genre, either because the work self-evidently fails to be useful or illuminating, or because its authors are unable to defend it.
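For readers who wish to reproduce or update this kind of crude citation count, a broadly similar query can be run programmatically against PubMed. The short Python sketch below (using Biopython’s Entrez module) is illustrative only and is not the search reported above: the query string and date ranges are a simplified approximation of the Ovid Medline syntax, so the counts it returns will differ.

# A minimal sketch, assuming Biopython is installed (pip install biopython).
# The query is a simplified approximation of the Ovid Medline search quoted in
# the text; PubMed does not accept Ovid syntax, so counts will not match exactly.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks callers to supply a contact address

QUERY = ('("qualitative research"[MeSH Terms] OR "qualitative research"[Title/Abstract] '
         'OR "qualitative study"[Title/Abstract]) '
         'AND "public health"[MeSH Terms] '
         'AND ("australia"[MeSH Terms] OR australia[Title/Abstract])')

def count_records(mindate, maxdate):
    """Return the number of PubMed records matching QUERY in the given publication-date range."""
    handle = Entrez.esearch(db="pubmed", term=QUERY, retmax=0,
                            datetype="pdat", mindate=mindate, maxdate=maxdate)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for period in [("1900", "1990"), ("1991", "2000"), ("2001", "2009")]:
    print(period, count_records(*period))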

Danger lurks in the illusion that ‘anyone can do’ qualitative research. Epidemiological research is difficult for novices to do unsupervised. Complex statistics are more or less unapproachable without formal training, as are the sophisticated epidemiological designs required for publication in mainstream public health journals. In contrast, anyone who speaks a language can have a conversation with someone, write about it and call it research. This can lead to a proliferation of work calling itself qualitative research that bears little resemblance to the best practices in the field.

In this editorial, we describe what we mean by good qualitative research. As most of the studies the Bulletin publishes are epidemiological, we will organise our discussion by comparing epidemiological and qualitative principles. We will also focus on particular problems we have observed in the public health and health services literatures.


The papers in this issue

This special issue of the Bulletin contains three peer-reviewed papers and the reflections of a participant in one of the reported studies. The authors were invited because each was working in a different substantive area of public health, and in a different methodological style. We are not arguing that these are the best or only ways of working. However, the resulting papers provide opportunities to draw out some important issues in qualitative research practice.

Julie Mooney-Somers and Lisa Maher detail a community-based participatory research (CBPR) project about bloodborne viruses and sexually transmissible infections. This project was conducted in collaboration with young Aboriginal and Torres Strait Islander people and their networks in three communities. CBPR seeks immediate benefit for participants: in this case, through developing research capacity, building new links between community organisations and research institutions, and prioritising ethical and social considerations.1 CBPR also prioritises a two-way learning process between researchers and participants. In a commentary attached to the paper, Robert Scott, a participant in the CBPR project, reflects on his experience of the process and its impact in his community.

In the second paper, Julie Leask reports on a project using role play to examine a critical moment in GP-patient communication: when a parent refuses immunisation for their child.2

In the final paper, Jenny Lewis combines qualitative and quantitative methods to ask: ‘Who is regarded as influential and what issues are considered important or difficult in health policy?’3 Already you can see some of the diversity in qualitative research practice, diversity that is highly relevant to our next question.


What is good quality qualitative research?

For epidemiologists, gold standards for good quality research are clear. Population-based random samples, random double-blind allocation in intervention trials, valid and reliable instruments, appropriate statistical tests – all of these are shared ideals. Study types are clearly defined: case-control studies, cohort studies and randomised controlled trials each follow a well-known formula and conform to an increasingly well-articulated set of rules. However, the ‘rules’ for assessing the quality of qualitative research are less straightforward. There is a large, divided body of work on this subject.4–10 Some seek to develop standardised rules for qualitative research and/or its reporting; others emphasise the need for flexibility and accountability from researchers rather than adherence to rigid principles.4,11,12 It would be simplistic to attempt to provide a standard ‘formula’ for conducting qualitative enquiry here: instead we will outline some basic principles.

Qualitative aims, research questions and general approach

Qualitative research achieves aims different from and complementary to those addressed in epidemiology.13 It does this by approaching enquiry differently: through a less controlled, more open study design, by asking different kinds of research questions and by employing different ways of thinking.

Descriptive epidemiology asks questions about prevalence and its patterning. How many children are immunised? Are they unequally distributed by region? Is immunisation associated with level of education? Qualitative researchers attempt to understand what happens in participants’ everyday lives, how things work and what things mean to participants. Leask’s study, for example, asks about a process: ‘How do doctors deal with a parent who is refusing immunisation?’ Another qualitative study might ask: ‘What does it mean to a parent to have their child immunised?’ Epidemiological research examines variables pre-determined by the researcher: variables of interest must be clearly defined before data collection starts. Qualitative researchers rarely presume which variables are important, but rather seek to discover what is relevant by speaking with participants, reading texts or observing behaviours. Qualitative studies are typically far less controlled than epidemiological studies, and certainly markedly less controlled than a randomised controlled trial. Qualitative researchers seek to study the social world in its ordinary, complicated, changing state.

Epidemiological logic emphasises linearity and deductive thinking; in its idealised form, epidemiology begins with hypotheses and makes observations to test these hypotheses.14 Qualitative researchers begin with induction: making observations to build theory, rather than to test theory. Then, as analysis progresses, they rely on abduction (moments of inspiration in which a hunch, clue, metaphor, explanation or pattern is imagined or recalled from existing theory to make sense of the data) and deduction (when the analyst goes back to the data to test these emerging ideas).14,15 These forms of thinking create a continuous cycle of data collection and analysis.

In short, because qualitative researchers generally do not know what is important before they start, their studies are likely to be a lot more flexible than epidemiological studies, evolving to pursue new leads as they emerge in data collection and continuous analysis.

Qualitative sampling strategies

A misunderstanding of the aims of qualitative research often leads to poor sampling in qualitative studies. In epidemiology, we wish to report prevalence of or association between variables in a defined population. We need to isolate those variables to prevent confounding. To achieve this, we ideally randomly select participants from the population; in intervention studies, we also randomise participants into different study arms. We collect and tabulate data on many variables, including demographic variables. The purpose is two-fold. The first purpose is to demonstrate that the participants could have been anybody in the population under study. They had the same chance of being selected or ending up in the intervention arm as everyone else; there was nothing special about them that could have confounded the results. The second purpose is to allow the researcher to statistically control for everything other than the variable of interest.

This is precisely the opposite of the logic of qualitative sampling: in fact, some qualitative researchers talk about participant ‘selection’ to distinguish it more clearly from probability sampling.16 In good qualitative research, participants are not ‘average’ or ‘typical’. They are special. They are selected because they are uniquely positioned to help the researcher understand what happens or what things mean. Thus, qualitative sampling is often described as ‘purposive’; that is, chosen to serve an analytic purpose. Qualitative researchers can learn as much from atypical cases (by comparison and contrast) or from unexpected sources as they can from central cases or obvious sources. A cleaner may be able to tell you as much about pandemic control as a nurse, albeit from a different perspective. Someone who comes to work with influenza may help you understand the process of staying home when infected. In Leask’s study, for example, GPs known to have an interest in immunisation or expected to have unusual views about immunisation were included, as were parents of young children.2 Lewis describes using an empirically generated map of policy makers’ reputations as a basis for selecting interviewees.3 She identified eight groups of influential people. Some groups were widely considered important, others marginal. Lewis’s qualitative sampling included people from each group, thus providing a range of central and peripheral players with different kinds of expertise or disciplinary focus. Such sampling (along with the style of data collection) allows for a wide range of relevant concepts to emerge, and for examination, rather than control, of the relationship between them.

It is a terrible waste of qualitative research resources to hear exactly the same thing from 30 ‘average’ people who are, for the purposes of the study, identical. This does little to advance the complexity or depth of the researchers’ understanding. The best qualitative samples are often determined in a dynamic way as the study progresses, the researcher constantly asking themselves questions such as: ‘Which new participants could help me better understand this important idea or process that I am starting to see in my analysis? What new questions might I ask my existing participants to help me understand? What might I need to observe to understand? What documents might help me understand?’ This dynamism requires ongoing modification of ethics approval, but in our experience Human Research Ethics Committees increasingly expect such modifications in qualitative studies, and are efficient in processing them.

Qualitative data collection methods

If qualitative research is to understand what happens and what things mean, generate new and relevant concepts, and find out what is important to participants (rather than impose pre-determined variables), then data must be collected in a relatively open way. A large number of highly structured questions will generally produce yes/no or one-line answers that yield little insight. Mooney-Somers and Maher’s description of the data collection in their CBPR project provides one alternative.1 Peer researchers spent time in the participating communities getting to know people, and this yielded important information despite being relatively informal and unstructured. Interviews were flexible and personal, commencing with the origins of both the peer researcher’s and participant’s families and with the participant’s history, proceeding to the participant’s own stories about their experience. This kind of open data gathering maximises the chance that important, unexpected insights will be developed.

Qualitative data analysis

Analysis is a neglected area of qualitative research in public health and health services. There is generally scant description of analytic methods and reasoning in published papers. Researchers often appear to do nothing more than magically intuit and then list ‘themes’ from their data. Leask provides one alternative in her paper, making a detailed account of her analytic processes. Rather than simply stating that she generated ‘themes’, she specifies that she attended to the rhetorical styles used by the doctors (e.g. giving ‘yes but’ responses, or engaging in ‘scientific ping pong’).2 Rather than focusing on counting the number of doctors who used each strategy, her analysis explains the detail of each strategy, including how they worked rhetorically in the simulated consultation.

We would argue that the best qualitative research is oriented less toward generating theme lists and counting occurrence, and more toward understanding what things mean and how they work. Experienced qualitative researchers generally use more subtle indicators of importance than counting. How passionately was something spoken of? What was unspoken or unable to be said? Who said what? How can we better understand the differences? What might these differences tell us about the process we are studying? How rich and complex was a concept? What consequences did participants describe in relation to it? If, for example, only a small number of people described a problem in a health service, but they described it as so profoundly undermining their faith in clinicians and the system that they would no longer attend, this may be a problem worth exploring with more participants, in order to better understand it.


Box 1.  Suggested references for beginning qualitative research

One qualitative alternative to an emphasis on frequency counts is the concept of ‘saturation’. Experienced qualitative researchers generally seek to ‘saturate’ concepts: that is, to ensure that they have enough data to make a full and detailed account of the concepts that are central in their analysis.17,18 Flexibility in sampling allows qualitative researchers to return to the field to collect more data until they reach this point. The logic underpinning this strategy is: keep talking with the most informative people until you have a good understanding of how things work and what they mean. This differs from the alternative logic: list the topics that most people agreed with. Exploratory analytic logic is a good match for purposive sampling; frequency count logic is better matched to well-designed quantitative research using probability sampling.

Reporting and methodology in qualitative research

It is important in any research to distinguish between methodology and methods. Methods are the actions you take in a research project: your sampling, your data collection, your analysis. Methodology is the justification of your methods.19 You engage in methodology for yourself throughout a study, examining each choice you make and thinking about whether it is justified in relation to your study as it evolves. You also engage in methodology when you report a study for an audience and justify the methods you have used to them.

There is rarely adequate attention given to methodology in qualitative research papers, a problem widely acknowledged and not confined to public health or health services research. If authors do not justify their methods, it is difficult to determine the quality of their work. The critical question to ask oneself when engaging in methodology for others is: ‘What would a reader need to know to be able to evaluate my research for themselves? Which parts of my thinking and methods do I need to explain?’

This is not a matter of apologising for one’s research; on the contrary, it means arguing for its usefulness. This goes to the heart of the debate about what good quality qualitative research is. It is often a difficult argument for epidemiologically trained people to make, because the methodology of epidemiology is so different from the methodology of qualitative research. However, as Lucy Yardley argues:

While traditional criteria for research quality are often inappropriate, and the ethos and plurality of many qualitative methods are incompatible with fixed, universal procedures and standards, some way of evaluating the quality of research employing qualitative methods is absolutely necessary, in both senses of the word – both imperative and unavoidable. All interpretations contain an implicit claim of authority; it makes no sense to engage in a process of analysis and then deny that it has any validity!4

Qualitative research is time-consuming. Why would you recruit participants, collect data and go through the lengthy agonies of analysis, only to say apologetically, in keeping with epidemiological principles: ‘but of course the sample size is very small and you can’t generalise’? Many novices make these apologies and attempt to make their qualitative research look as ‘epidemiological’ as possible. Think about sampling. We sometimes see tables of standard demographics in methods sections of qualitative papers, purporting to demonstrate how much like the general population the sample were. The fault for this does not always lie with authors: sometimes editors or reviewers demand such details as a condition of publication. Not only are such demographics unlikely to satisfy the requirements of epidemiology, but also, as you will remember, they are inconsistent with the principles of purposive participant selection. If you succeed in ‘proving’ that your participants were ‘average’ or ‘typical’, rather than especially relevant to your research question and analysis, you will probably thereby demonstrate that your sampling was misdirected.

Rather than engaging in a doomed attempt to conform to epidemiological standards, a qualitative methodologist should justify, in detail, aims, research questions and how they evolved, assumptions made and theories drawn on, sample selected, data collection and analysis procedures, and the evolving ethical aspects of a study. In relation to sampling, there should be a detailed account of exactly who was included and, critically, an explanation of how each group was relevant to the research question and the analysis.20 The contributors to this issue have provided some illustrations of this logic. When Leask, for example, provides a detailed account of her analytic methods, and presents and explains a ‘negative case’ – a doctor who had a different approach to dealing with the mother who refused immunisation – she is doing methodological work for you as the reader.2 Mooney-Somers and Maher, similarly, do methodological work when they explain that their interview questions were developed in conversation with participants and were designed to respect cultural protocols, and that this was guided by the principles underlying the study.1

A brief note about existing qualitative methodologies. There are a number of methodological traditions in qualitative research – coherent ways of working that have been honed and reiterated over time. They include ethnography, grounded theory, phenomenology and narrative methodology.21 CBPR, illustrated in this issue, is another of these extant methodologies. Each of them is a terrific set of resources that can be used to guide a research project. Each of them has existed and been evolving for decades – sometimes more than a century. Each of them has considerable, complex theoretical substance. There is a tendency to slap methodological labels – especially the label ‘grounded theory’ – on anything qualitative, as a kind of badge of authenticity.12,22 This is a little like going on a harbour cruise for palaeontologists and claiming to be an expert on the Permian–Triassic extinction event, when in fact you have just read a pamphlet about dinosaurs from the Australian Museum. It will become obvious fairly quickly that you do not know your marine organisms from your terrestrial invertebrates, and you will not be able to get off the boat for at least 4 hours. Traditions such as grounded theory are only useful if used actively and coherently throughout a study – to help one engage in methodology for oneself. It is only then that it makes sense to use the label when engaging in methodology for others.


The conceptual underpinnings of research: reclaiming theory

Karl Popper, the great philosopher of science responsible for the notion of falsification, famously said that he did not care where scientists got their ideas from: the origin of ideas was a matter for psychology.14 All that mattered to science was the transformation of ideas into hypotheses and the deductive testing that followed. This may help explain a somewhat unfavourable view of theory among some public health researchers.

We think ‘theoretical’ should be reclaimed as a compliment! ‘Being theoretical’ or ‘doing theory’ means contributing to a cohesive explanation of some aspect of our world. This is the highest possible purpose of research – far greater than the distillation of lonely facts. Theory is also inescapable, along with the baggage of values that theory carries. In fact, the variables in an epidemiological study are a reduction of complex values and theoretical concepts. If, in epidemiology, we classify a person according to their ‘race’ rather than their ‘ethnicity’, their ‘culture’, their ‘language spoken at home’ or the amount of ‘cultural capital’ they have access to, a theoretical choice has been made, whether or not it is acknowledged. When we treat an individual as independent in analysis, measuring nothing to do with the society, communities or cultures of which they are a part, we are making a theoretically loaded choice.

Because of its open, inductive approach to the world, qualitative research is extremely good at generating new theories. The best qualitative research will also be knowingly informed by theories of many kinds. Theories provide concepts to use in analysis. They guide study design: encouraging focus on groups (like cultures or subcultures) or on individuals; describing in detail or building a conceptual model.21 Theories inform data creation. When you record an interview, for example, what have you recorded? People’s experiences? Their attitudes? Their beliefs? Their perceptions? Their performances?23 Would these be the same in any interview, or would they be different at different times and with different interviewers? What effect do you have in the study, and how should you best be accountable for this effect? Even the way we write is a theoretically loaded choice. Our use of an active first person voice and of authors’ first names in this editorial, for example, reveals our belief that researchers should present themselves as real live human individuals, rather than ‘objective’, distant and inscrutable, as any piece of research or writing is a product of the people who have crafted it. Theories are everywhere, and good researchers of all kinds acknowledge them and use them as resources.24

Lewis argues that the theories about policy that you bring to a study of policy influence will change what you look at.3 If you use a theory that suggests that influence rests in institutions, you will examine institutions; if in conflicting interests, you will study interests; if in contests of ideas, you will study the movement of ideas. These are not right or wrong, but different, and it is possible to be open to participants’ perspectives within each frame. Mooney-Somers and Maher’s paper, like most CBPR, also begins with normative theoretical commitments about what research should be.1 Because of its theoretical orientation, CBPR defines good research as that which includes participants as equals and achieves concrete change in participants’ communities, a theoretical commitment that prompted Scott’s contribution to the issue.


In conclusion: does the qualitative/quantitative distinction matter?

Do we need to make a distinction between qualitative and quantitative research? We would argue that we need distinction without antagonism: a kind of cross-cultural understanding and mutual respect. Qualitative and quantitative research can contribute differently and equally to knowledge in public health and health services.13 However, if qualitative research is to keep its end of this bargain, it may need to be protected from its new-found popularity and allowed to assert and follow its own principles. We would urge those with a nascent interest in qualitative research not to attempt to take it up as a straightforward, instrumental toolbox of methods. To public health audiences, qualitative research may seem new; in fact, the ideas at its heart go back centuries, some say as far as Aristotle.25,26 The methods of contemporary qualitative research were initiated in anthropology and sociology at the turn of the 20th century and have been evolving ever since.27,28 Good qualitative research requires careful thought about methodology and theory in the context of this history, which is difficult for beginners to achieve without support and training. We advise public health professionals to work with experienced qualitative researchers until they have established themselves in this new world.

Qualitative enquiry is a fractured, rich and potentially highly rewarding field of endeavour: this issue of the Bulletin is a tiny part of it. Public health, we believe, needs both epidemiology and qualitative research. Without epidemiology we cannot answer questions about the prevalence of and association between health determinants and outcomes. Without qualitative enquiry, it is difficult to explain how individuals interpret health and illness in their everyday lives, or to understand the complex workings of the social, cultural and institutional systems that are central to our health and wellbeing. We hope that this issue of the Bulletin will stimulate debate about the place of qualitative enquiry in public health and health services research in Australia. At the very least, it might prevent you from getting stuck on a metaphoric harbour cruise with only a pamphlet for company.



Acknowledgments

Our sincere thanks to the authors for their contributions to this issue of the Bulletin.


References


[1] Mooney-Somers JD,  Maher L. The Indigenous Resiliency Project: a worked example of community-based participatory research. N S W Public Health Bull 2009; 20(7–8): 112–8.


[2] Leask J. How do general practitioners persuade parents to vaccinate their children? A study using standardised scenarios. N S W Public Health Bull 2009; 20(7–8): 119–24.


[3] Lewis JM. Understanding policy influence and the public health agenda. N S W Public Health Bull 2009; 20(7–8): 125–9.


[4] Yardley L. Dilemmas in qualitative health research. Psychol Health 2000; 15: 215–28.

[5] Kitto SC,  Chesters J,  Grbich C. Quality in qualitative research: Criteria for authors and assessors in the submission and assessment of qualitative research articles for the Medical Journal of Australia. Med J Aust 2008; 188(4): 243–6.

[6] Mays N,  Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000; 320(7226): 50–2.

[7] Pope C, Mays N. Qualitative research in health care. 3rd ed. London: Blackwell Publishing; 2006.

[8] Kuper A,  Lingard L,  Levinson W. Qualitative research: critically appraising qualitative research. BMJ 2008; 337: a1035.

[9] Seale C. The quality of qualitative research. London: Sage Publications; 1999.

[10] Flick U, editor. Managing quality in qualitative research. London: Sage Publications; 2007.

[11] Tong A,  Sainsbury P,  Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007; 19(6): 349–57.

[12] Barbour RS. Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ 2001; 322(7294): 1115–7.

[13] Popay J,  Williams G. Public health research and lay knowledge. Soc Sci Med 1996; 42(5): 759–68.

[14] Daly KJ. Paths of inquiry for qualitative research. In: Daly KJ, editor. Qualitative methods for family studies and human development. Thousand Oaks, CA: Sage Publications; 2007. pp. 43–60.

[15] Shank G. The extraordinary ordinary powers of abductive reasoning. Theory Psychol 1998; 8(6): 841–60.

[16] Maxwell JA. Qualitative research design: An interactive approach. 2nd ed. Thousand Oaks, CA: Sage Publications; 2005.

[17] Bowen G. Naturalistic inquiry and the saturation concept: a research note. Qual Res 2008; 8(1): 137–52.

[18] Morse JM. The significance of saturation. Qual Health Res 1995; 5(2): 147–9.

[19] Carter SM,  Little M. Justifying knowledge, justifying method, taking action: epistemologies, methodologies and methods in qualitative research. Qual Health Res 2007; 17(10): 1316–28.

[20] Morse JM. “What’s your favorite color?” Reporting irrelevant demographics in qualitative research. Qual Health Res 2008; 18(3): 299–300.

[21] Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. Thousand Oaks, CA: Sage Publications; 2007.

[22] Barbour RS. The newfound credibility of qualitative research? Tales of technical essentialism and co-option. Qual Health Res 2003; 13(7): 1019–27.

[23] Mason J. Qualitative researching. 2nd ed. London: Sage Publications; 2002.

[24] Reeves S,  Albert M,  Kuper A,  Hodges B. Why use theories in qualitative research? BMJ 2008; 337(7670): 631–4.

[25] Buchanan DR. An Ethic for Health Promotion: Rethinking the Sources of Human Well-Being. New York: Oxford University Press; 2000.

[26] Flyvbjerg B. Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge, UK: Cambridge University Press; 2001.

[27] Eriksen TH, Nielsen FS. A history of anthropology. London: Pluto Press; 2001.

[28] Bulmer M. The Chicago School of Sociology: Institutionalization, Diversity, and the Rise of Sociological Research. Chicago, IL: University of Chicago Press; 1986.