Dedoose Publications

Dedoose has been field-tested and journal-proven by leading academic institutions and market researchers worldwide. Thousands of prominent researchers across the US and abroad have benefited from early versions of Dedoose in their qualitative and mixed methods work and have laid an outstanding publication and report trail along the way.

Policy Based Publications

Higher Ground: New Hope for the Working Poor and Their Children

Duncan, Greg, Huston, Aletha, & Weisner, Thomas (2007)

New York: Russell Sage Foundation

During the 1990s, growing demands to end chronic welfare dependency culminated in the 1996 federal “welfare-to-work” reforms. But regardless of welfare reform, the United States has always been home to a large population of working poor—people who remain poor even when they work and do not receive welfare. In a concentrated effort to address the problems of the working poor, a coalition of community activists and business leaders in Milwaukee, Wisconsin, launched New Hope, an experimental program that boosted employment among the city’s poor while reducing poverty and improving children’s lives. In Higher Ground, Greg Duncan, Aletha Huston, and Thomas Weisner provide a compelling look at how New Hope can serve as a model for national anti-poverty policies via their qualitative, quantitative, and mixed method research approaches. New Hope was a social contract—not a welfare program—in which participants were required to work a minimum of thirty hours a week in order to be eligible for earnings supplements and health and child care subsidies. All participants had access to career counseling and temporary community service jobs. Drawing on evidence from surveys, public records of employment and earnings, in-depth interviews, and ethnographic observation, Higher Ground tells the story of this ambitious three-year social experiment and evaluates how participants fared relative to a control group. The results were highly encouraging. Poverty rates declined among families that participated in the program. Employment and earnings increased among participants who were not initially working full-time, relative to their counterparts in a control group. For those who had faced just one significant barrier to employment (such as a lack of access to child care or a spotty employment history), these gains lasted years after the program ended. 
Increased income, combined with New Hope’s subsidies for child care and health care, brought marked improvements to the well-being and development of participants’ children. Enrollment in child care centers increased, and fewer medical needs went unmet. Children performed better in school and exhibited fewer behavioral problems, and gains were particularly dramatic for boys, who are at the greatest risk for poor academic performance and behavioral disorders. As America takes stock of the successes and shortcomings of the Clinton-era welfare reforms, the authors convincingly demonstrate why New Hope could be a model for state and national policies to assist the working poor. Evidence based and insightfully written, Higher Ground illuminates how policymakers can make work pay for families struggling to escape poverty.
Sociology Based Publications

Cultural Consensus Theory: Applications and Frequently Asked Questions

Weller, Susan C. (2007)

Field Methods, 19(4): 339-368

Describes the use of consensus theory to estimate culturally appropriate or "correct" answers to questions and to assess individual differences in cultural knowledge, including the assumptions, appropriate interview materials, and analytic procedures for carrying out a consensus analysis.
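As a rough illustration of the kind of procedure the article describes, the sketch below estimates respondent "competence" and a consensus answer key from true/false data. All data are hypothetical, and this simplified eigenvector approach merely stands in for the formal minimum-residual method used in the consensus-analysis literature:

```python
import numpy as np

# Hypothetical data: 5 respondents x 8 true/false questions (1 = "true").
answers = np.array([
    [1, 0, 1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 1],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 0, 1, 0],   # a low-agreement respondent
    [1, 0, 1, 1, 0, 1, 0, 1],
])

# 1. Raw agreement: proportion of matching answers for each pair.
match = (answers[:, None, :] == answers[None, :, :]).mean(axis=2)

# 2. Chance-correct for a two-alternative format: m* = 2m - 1.
corrected = 2 * match - 1
np.fill_diagonal(corrected, 0)

# 3. Estimate each respondent's cultural competence from the first
#    eigenvector of the corrected agreement matrix.
vals, vecs = np.linalg.eigh(corrected)
v = vecs[:, np.argmax(vals)]
v = v * np.sign(v.sum())           # orient so most loadings are positive
competence = np.clip(v, 0, None)   # negative loadings violate the model

# 4. Competence-weighted "answer key": weighted majority per question.
key = (competence @ answers) / competence.sum() > 0.5
print("competence:", np.round(competence, 2))
print("estimated answer key:", key.astype(int))
```

Respondents who agree with the group get high weights; the deviant respondent's weight drops to zero, so the estimated key follows the majority cluster.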
Sociology Based Publications

“They Will Post a Law About Playing Soccer” and Other Ethnic/Racial Microaggressions in Organized Activities Experienced by Mexican-Origin Families

Alex R. Lin; Cecilia Menjívar; Andrea Vest Ettekal; Sandra D. Simpkins; Erin R. Gaskin; and Annelise Pesch (2015)

Organized activities have been found to provide positive experiences for Latino adolescents to develop confidence and learn critical life skills; however, these programs are sometimes a context where youth encounter negative experiences related to ethnic/racial microaggressions (ERMs). This qualitative study explores the types of ERMs that Mexican-origin parents and adolescents encountered in their organized activities experience. Parents were mainly concerned about SB-1070 and the associated law enforcement practices that posed a threat to transporting their children to and from the organized activity site. Adolescents reported that they encountered overt (e.g., ethnic teasing) as well as covert forms of discriminatory behavior (e.g., implicit ethnic stereotypes) from peers and adult leaders. Attention to the processes of ERM is critical to helping practitioners promote positive intergroup relations so that more Latinos will participate and stay active in organized activities.
Education Based Publications

Managing Data in CAQDAS

Fielding, Nigel & Lee, Ray M. (1998)

Chapter 4 in Fielding & Lee, Computer Analysis and Qualitative Research, pp. 86-118

from COMPUTER ASSISTED QUALITATIVE DATA ANALYSIS SOFTWARE: A PRACTICAL PERSPECTIVE FOR APPLIED RESEARCH, JOSEPH B. BAUGH, ANNE SABER HALLCOM, and MARILYN E. HARRIS

Computer assisted qualitative data analysis software (CAQDAS) holds a chequered reputation to date in academia, but can be useful for developing performance metrics in the field of corporate social and environmental responsibility and other areas of contemporary business. Proponents of CAQDAS cite its ability to save time and effort in data management by extending the researcher's ability to organize, track and manage data. Opponents decry the lack of rigor and robustness in the resultant analyses. Research reveals that these opinions tend to be divided by “the personal biography and the philosophical stance of the analyst” (Catterall & Maclaran, 1998, p. 207), as well as “age, computer literacy, and experience as a qualitative researcher” (Mangabeira, Lee & Fielding, 2004, p. 170).

A more recent article (Atherton & Elsmore, 2007) discussed the continuing debate on CAQDAS in qualitative research: “The two perspectives both indicate that CAQDAS should be used with care and consideration; in ways that explicitly demonstrate a ‘fit’ between the ethos and philosophical perspective(s) underpinning a research study, on the one hand, and the means of ordering and manipulating the data within CAQDAS on the other” (p. 75).

Despite the ongoing literary debate on the merits of CAQDAS, the use of computer-aided qualitative data analysis has become acceptable to most qualitative researchers (Lee & Esterhuizen, 2000; Morison & Moir, 1998; Robson, 2002). However, writers advise that researchers avoid the trap of letting the software control the data analysis (Catterall & Maclaran, 1998). Morison and Moir counseled that CAQDAS is merely one tool in the qualitative data analysis toolbox.
No tool should replace the researcher's capacity to think through the data and develop his or her emergent conclusions (Atherton & Elsmore, 2007). On the other hand, Morison and Moir, among others (e.g., Blank, 2004; Catterall & Maclaran, 1998; Mangabeira et al., 2004), found that qualitative data analysis software can also free up significant amounts of time formerly spent on data management and encoding, allowing the researcher to spend more time in deeper and richer data evaluation.

Qualitative research studies to develop performance metrics can create huge amounts of raw data (Miles & Huberman, 1994; Robson, 2002). Organizing, tracking, encoding, and managing the data are not trivial tasks, and the effort should not be underestimated by the applied researcher. Two methodologies exist to handle these activities and manage the data during the data analysis phase. The first is a manual process, which must be done at times to avoid missing critical evidence and to provide trustworthiness in the process (Malterud, 2001); the second uses technology to manage the data and avoid being overwhelmed by the sheer amount of raw data (Lee & Esterhuizen, 2000). It is the experience of the authors that some manual processing must be interspersed with CAQDAS. This provides an intimacy with the data that leads to the drawing of credible and defensible conclusions. Thus, a mixed approach that melds manual and automated data analyses seems most appropriate.

A basic approach for applying traditional qualitative research methodologies lies in the ability of CAQDAS to support data reduction through the use of a “provisional start list” (Miles & Huberman, 1994, p. 58) of data codes that are often developed manually from the research question. A rise in the use of CAQDAS in applied research and other nonacademic research fields has been identified (Fielding & Lee, 2002).
Since CAQDAS is becoming more prevalent in nonacademic researcher populations and can be useful for developing performance metrics for corporate social and environmental responsibility and solving other complex business issues, it seems prudent at this juncture to discuss how to use the software appropriately rather than rehash the argument for or against using CAQDAS. Selection of and training with an appropriate CAQDAS package can help the researcher manage the mountains of data derived from qualitative research data collection methods (Lee & Esterhuizen, 2000).
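As a minimal illustration of the “provisional start list” idea, the sketch below flags excerpts with candidate codes for the researcher to review. All code names, patterns, and excerpts are hypothetical, and simple keyword matching is only a first pass; real CAQDAS packages do far more than this:

```python
import re

# Hypothetical provisional start list (Miles & Huberman sense):
# code mnemonics mapped to indicator patterns drafted from the research question.
start_list = {
    "BARRIER-CHILDCARE": r"\bchild ?care\b",
    "BARRIER-TRANSPORT": r"\btransport|\bbus\b|\bride\b",
    "SUPPORT-COUNSEL":   r"\bcounsel(or|ing)?\b",
}

excerpts = [
    "I lost shifts because I had no childcare for the baby.",
    "The counselor helped me plan a route back to full-time work.",
    "Without a bus pass I simply could not get to the job site.",
]

# First-pass auto-coding: flag each excerpt with every code whose pattern
# matches; the researcher then reviews, refines, and recodes by hand.
coded = {
    i: [code for code, pat in start_list.items()
        if re.search(pat, text, re.IGNORECASE)]
    for i, text in enumerate(excerpts)
}
for i, codes in coded.items():
    print(i, codes)
```

The resulting excerpt-by-code index is the kind of data reduction the passage describes: it narrows the raw corpus to segments worth the manual reading that the authors insist must still happen.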
Medical Based Publications

Codebook Development for Team-Based Qualitative Analysis

MacQueen, Kathleen M., McLellan, Eleanor, Kay, Kelly, & Milstein Bobby (1998)

Cultural Anthropology Methods, 10(2): 31-36

One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin, 1990; Miles and Huberman, 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development. On the one hand, we must often explain basic methods such as this in clear terms to a wide range of scientists who have little or no experience with qualitative research and who may express a deep skepticism of the validity of our results. On the other, our codebook development strategy must be responsive to the teamwork approach that typifies the projects we undertake at CDC, where coding is generally done by two or more persons who may be located at widely dispersed sites. We generally use multiple coders so that we can assess the reliability and validity of the coded data through intercoder agreement measures (e.g., Carey et al. 1996) and, for some projects, as the only reasonable way to handle the sheer volume of data generated. The standardized structure and dynamic process used in our codebook development strategy reflect these concerns. This paper describes (1) how a structured codebook provides a stable frame for the dynamic analysis of textual data; (2) how specific codebook features can improve intercoder agreement among multiple researchers; and (3) the value of team-based codebook development and coding.

Origins of the Codebook Format

Our codebook format evolved over the course of several years and a variety of projects. The conceptual origins took shape in 1993 during work on the CDC-funded Prevention of HIV in Women and Infants Project (WIDP) (Cotton et al.
1998), which generated approximately 600 transcribed semistructured interviews. One research question pursued was whether women’s narratives about their own heterosexual behavior could help us understand general processes of change in condom use behavior (Milstein et al. 1998). The researchers decided to use the processes of change (POC) constructs from the Transtheoretical Model (Prochaska 1984; DiClemente and Prochaska 1985) as a framework for the text analysis. However, the validity of the POC constructs for condom-use behavior was unknown, and a credible and rigorous text coding strategy was needed to establish their applicability and relevance for this context. To do this, the analysts had to synthesize all that was known about each POC construct, define what it was, what it was not, and, most importantly, learn how to recognize one in natural language. Several years earlier, O’Connell (1989) had confronted a similar problem while examining POCs in transcripts of psychotherapy sessions. Recognizing that "coding processes of change often requires that the coder infer from the statement and its context what the intention of the speaker was," O’Connell (1989:106) developed a coding manual that included a section for each code titled "Differentiating (blank) from Other Processes." Milstein and colleagues used O’Connell’s "differentiation" section in a modified format in their analysis of condom behavior change narratives. They conceptualized the "differentiation" component as "exclusion criteria," which complemented the standard code definitions (which then became known as "inclusion criteria"). To facilitate on-line coding with the software program Tally (Bowyer 1991; Trotter 1993), components were added for the code mnemonic and a brief definition, as well as illustrative examples. 
Thus, the final version of the analysis codebook contained five parts: the code mnemonic, a brief definition, a full definition of inclusion criteria, a full definition of exclusion criteria to explain how the code differed from others, and example passages that illustrated how the code concept might appear in natural language. During the code application phase, information in each of these sections was supplemented and clarified (often with citations and detailed descriptions of earlier work), but the basic structure of the codebook guidelines remained stable.
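The five-part codebook entry, together with the intercoder agreement checks mentioned above, can be sketched as follows. The code, entry content, and coder data are hypothetical, and Cohen's kappa stands in here for whichever agreement measure a team actually adopts:

```python
from dataclasses import dataclass, field

@dataclass
class CodebookEntry:
    """One code in the five-part format described above."""
    mnemonic: str
    brief_definition: str
    inclusion_criteria: str
    exclusion_criteria: str
    examples: list = field(default_factory=list)

# Hypothetical entry for a processes-of-change style code.
entry = CodebookEntry(
    mnemonic="SELF-REEVAL",
    brief_definition="Self-reevaluation of risk behavior.",
    inclusion_criteria="Speaker appraises her own behavior against her values.",
    exclusion_criteria="Exclude appraisals of others' behavior.",
    examples=["I started asking myself what I was really doing."],
)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' code assignments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    expected = sum((coder_a.count(l) / n) * (coder_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Two coders assign codes to the same six text segments.
a = ["SELF-REEVAL", "OTHER", "SELF-REEVAL", "OTHER", "SELF-REEVAL", "OTHER"]
b = ["SELF-REEVAL", "OTHER", "SELF-REEVAL", "SELF-REEVAL", "SELF-REEVAL", "OTHER"]
print(round(cohens_kappa(a, b), 3))
```

Low kappa on a code is a signal to tighten that entry's inclusion and exclusion criteria and recode, which is exactly the dynamic supplement-and-clarify process the passage describes.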
Education Based Publications

Qualitative Interviewing

Patton, Michael Quinn (1980)

Thousand Oaks: Sage Publications, In Michael Quinn Patton, Qualitative Evaluation Methods, pp. 195-263

We interview people to find out from them those things we cannot directly observe. The issue is not whether observational data are more desirable, valid, or meaningful than self-report data. The fact is that we cannot observe everything. We cannot observe feelings, thoughts, intentions, behaviors that took place at some previous point in time, situations that preclude the presence of an observer, or how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things. Thus, the purpose of interviewing is to allow us to enter into the other person's perspective. Qualitative interviewing begins with the assumption that the perspective of others is meaningful, knowable, and able to be made explicit. We interview to find out what is in and on someone else's mind, to gather their stories. Program evaluation interviews, for example, aim to capture the perspectives of program participants, staff, and others associated with the program. What does the program look and feel like to the people involved? What are their experiences? What thoughts do people knowledgeable about the program have concerning the program? What are their expectations? What changes do participants perceive in themselves as a result of their involvement in the program? It is the responsibility of the evaluator to provide a framework within which people can respond comfortably, accurately, and honestly to these kinds of questions. Evaluations can enhance the use of qualitative data by generating relevant and high-quality findings. As Hermann Sudermann said in Es Lebe das Leben I, ‘I know how to listen when clever men are talking. That is the secret of what you call my influence.’ Evaluators must learn how to listen when knowledgeable people are talking. That may be the secret of their influence.
An evaluator, or any qualitative or mixed methods interviewer, faces the challenge of making it possible for the person being interviewed to bring the interviewer into his or her world. The quality of the information obtained during an interview is largely dependent on the interviewer. This chapter discusses ways of obtaining high-quality information by talking with people who have that information. We’ll be delving into the ‘art of hearing’ (Rubin and Rubin 1995). This chapter presents three different types of interviews. Later sections consider the content of interviews: what questions to ask and how to phrase questions. The chapter ends with a discussion of how to record the responses obtained during interviews. This chapter emphasizes skill and technique as ways of enhancing the quality of interview data, but no less important is a genuine interest in and caring about the perspectives of other people. If what people have to say about the world is generally boring to you, then you will never be a great interviewer. On the other hand, a deep and genuine interest in learning about people is insufficient without disciplined and rigorous inquiry based on skill and technique.
Geography Based Publications

Health geography II ‘Dividing’ health geography

Rosenberg, Mark (2015)

Over the years, various observers of health geography have sought to ‘divide’ the sub-discipline mainly along theoretical lines or to argue for a broadening of its theoretical base. Paralleling the growing theoretical pluralism within health geography has been a growing methodological pluralism. As in other parts of human geography, health geographers have embraced historical research, quantitative and qualitative methods, and computer mapping and geographic information science (GIS). Analysing recent contributions by health geographers, the question I seek to answer is whether the growing theoretical and methodological pluralism has paradoxically led to increasing divisions in the topics of study based mainly, but not solely, on what methods are employed in the research. While there are topical overlaps (e.g. quantitative and qualitative studies of particular vulnerable groups), it is less obvious as to how research using one methodology is informing research using the other methodology.
Policy Based Publications

Impacts of Children with Troubles on Working Poor Families: Experimental and Mixed Methods Evidence

Bernheimer, L., Weisner, T.S., & Lowe, E. (2003)

Mental Retardation, 41(6): 403-419

Mixed-method and experimental data on working poor families and children with troubles participating in the New Hope anti-poverty experimental initiative in Milwaukee are described. Sixty percent of these families had at least one child who had significant problems (learning, school achievement and/or behavior, home behavior, retardation, other disabilities). Control group families with children who had troubles had more difficulties in sustaining their family routine than did New Hope experimental families.
Sociology Based Publications

Children of the 1960s at Midlife: Generational Identity and the Family Adaptive Project

Weisner, T. S., & Bernheimer, L. P. (1998)

Chicago: University of Chicago Press, In R. Shweder (Ed.), Welcome to middle age! and Other Cultural Fictions, pp. 211-257

Many of us believe we recognize the symptoms of middle age: lower back pain, mortgages, and an aversion to loud late-night activities. This particular construction of midlife, most often rendered in chronological, biological, and medical terms, has become an accepted reality to European-Americans and has recently spread to such non-Western capitals as Tokyo and New Delhi. Welcome to Middle Age! (And Other Cultural Fictions) explores the significance of this pervasive cultural representation alongside the alternative "fictions" that represent the life course in other regions of the world where middle age does not exist. In this volume, anthropologists, behavioral scientists, and historians explore topics ranging from the Western ideology of "midlife decline" to cultural representations of mature adulthood that operate without the category of middle age. The result is a fascinating, panoramic collection that explores the myths surrounding and the representations of mature adulthood and of those years in the life span from thirty to seventy. Weisner and Bernheimer's chapter draws on qualitative, ethnographic, and mixed methods to describe the outcomes for a counter-culture group of the 1960s who had been studied longitudinally, with attention to their childrearing practices, lifestyle, and their children's later social and psychological adaptation.
Education Based Publications

Students' Perceptions of Characteristics of Effective College Teachers: A Validity Study of a Teaching Evaluation Form Using a Mixed Methods Analysis

Onwuegbuzie, A. J., Witcher, A. E., Collins, K. M. T., Filer, J. D., Wiedmaier, C. D., & Moore, C. W. (2007)

American Educational Research Journal, 44(1): 113-160

This study used a multistage mixed-methods analysis to assess the content-related validity (i.e., item validity, sampling validity) and construct-related validity (i.e., substantive validity, structural validity, outcome validity, generalizability) of a teaching evaluation form (TEF) by examining students’ perceptions of characteristics of effective college teachers. Participants were 912 undergraduate and graduate students (10.7% of student body) from various academic majors enrolled at a public university. A sequential mixed-methods analysis led to the development of the CARE-RESPECTED Model of Teaching Evaluation, which represented characteristics that students considered to reflect effective college teaching—comprising four meta-themes (communicator, advocate, responsible, empowering) and nine themes (responsive, enthusiast, student centered, professional, expert, connector, transmitter, ethical, and director). Three of the most prevalent themes were not represented by any of the TEF items; also, endorsement of most themes varied by student attribute (e.g., gender, age), calling into question the content- and construct-related validity of the TEF scores.

Also cited by Harris, Ingle, & Rutledge (2014), ‘How Teacher Evaluation Methods Matter for Accountability: A Comparative Analysis of Teacher Effectiveness Ratings by Principals and Teacher Value-Added Measures.’ Abstract: Policymakers are revolutionizing teacher evaluation by attaching greater stakes to student test scores and observation-based teacher effectiveness measures, but relatively little is known about why they often differ so much. Quantitative analysis of thirty schools suggests that teacher value-added measures and informal principal evaluations are positively, but weakly, correlated.
Qualitative analysis suggests that some principals give high value-added teachers low ratings because the teachers exert too little effort and are “lone wolves” who work in isolation and contribute little to the school community. The results suggest that the method of evaluation may not only affect which specific teachers are rewarded in the short term, but shape the qualities of teacher and teaching students experience in the long term.