The interview method of data collection
Survey researchers employ a variety of techniques to collect survey data. People can be contacted and surveyed using several different modes: by an interviewer in person or on the telephone (either a landline or a cellphone), via the internet, or by paper questionnaires (delivered in person or by mail). The choice of mode can affect who can be interviewed in the survey, the availability of an effective way to sample people in the population, how people can be contacted and selected to be respondents, and who responds to the survey. In addition, factors related to the mode, such as the presence of an interviewer and whether information is communicated aurally or visually, can influence how people respond.

Surveyors are increasingly conducting mixed-mode surveys, in which respondents are contacted and interviewed using a variety of modes. Survey response rates can vary by mode and are affected by aspects of the survey design (e.g., number of calls or contacts, length of the field period, use of incentives, and survey length). In recent years surveyors have faced declining response rates for most surveys, which we discuss in more detail in the section on the problem of declining response rates.

In addition to landline and cellphone surveys, Pew Research Center also conducts web surveys and mixed-mode surveys, in which people can be surveyed by more than one mode. We discuss these types of surveys in the following sections and provide examples from polls that used each method. Some of our surveys also involve reinterviewing people we have previously surveyed to see whether their attitudes or behaviors have changed. For example, in presidential election years we often reinterview voters after the election who were first surveyed earlier in the fall, in order to understand how their opinions may have changed since they were first interviewed.
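As a rough illustration of how response rates enter such mode comparisons, the sketch below computes a simple response rate per mode along the lines of AAPOR's RR1 (completed interviews divided by completes plus eligible non-interviews plus cases of unknown eligibility). This is a minimal sketch under stated assumptions; the mode names and case counts are hypothetical and are not taken from any actual survey.

```python
# Minimal, hypothetical sketch: an RR1-style response rate per survey mode.
# The dispositions below are invented for illustration only.

def response_rate(completes: int, eligible_noninterviews: int,
                  unknown_eligibility: int) -> float:
    """RR1-style rate: completes / (completes + eligible non-interviews
    + cases of unknown eligibility)."""
    denominator = completes + eligible_noninterviews + unknown_eligibility
    return completes / denominator if denominator else 0.0

# Hypothetical case dispositions for the same questionnaire fielded three ways.
modes = {
    "telephone": {"completes": 900,  "eligible_noninterviews": 7500, "unknown": 600},
    "web":       {"completes": 1200, "eligible_noninterviews": 2300, "unknown": 500},
    "mail":      {"completes": 700,  "eligible_noninterviews": 1800, "unknown": 0},
}

for mode, d in modes.items():
    rr = response_rate(d["completes"], d["eligible_noninterviews"], d["unknown"])
    print(f"{mode}: RR1 = {rr:.1%}")
```

Comparing such rates across modes is one way to see how design choices (number of contacts, incentives, field period) translate into who actually ends up in the respondent pool.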
There is a long tradition of using tests as the data collection approach for assessing student achievement and aptitude. Since the early 20th century, tests have been used extensively to gather information on individual aptitude in schools and the armed forces. Monahan 1998 effectively traces the controversial use of tests for measuring aptitude and intelligence, as well as the history of the development of other standardized tests. Kaplan and Saccuzzo 2013 and Nitko and Brookhart 2011 provide comprehensive overviews of standardized aptitude as well as achievement testing, as do Thorndike and Thorndike-Christ 2010 (cited under General Quantitative Overviews) and Miller, et al. 2013 (cited under Technical Properties). These texts are excellent resources for learning how to develop, administer, and collect psychometric data from tests, as well as how to score measures of knowledge, understanding, and cognitive skills. Kaplan and Saccuzzo 2013 and Thorndike and Thorndike-Christ 2010 provide a technical summary from a psychological perspective, while Nitko and Brookhart 2011 and Miller, et al. 2013 are more introductory, with an emphasis on applications to educational research. Kranzler and Floyd 2013 also cover aptitude assessment in children and adolescents, information used extensively for placement into special services and advanced programs.

The extent to which aptitude tests are unfair or biased constitutes an ongoing debate. The Joint Committee on Testing Practices 2004 has provided the Code of Fair Testing Practices in Education, an important summary of principles of fair testing. Bias in testing is also addressed in more general texts such as Kaplan and Saccuzzo 2013 and Thorndike and Thorndike-Christ 2010. Alternate conceptions about what a test of aptitude should include are presented by Stemler and Sternberg 2013, who provide a contemporary argument for broadening what such tests measure.
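The texts above discuss how test data are scored and evaluated. As a minimal, hypothetical sketch (not drawn from any of the cited works), the snippet below sums raw scores for a short dichotomously scored test and computes Cronbach's alpha as a basic internal-consistency check; the response matrix is invented for illustration.

```python
# Minimal, hypothetical sketch: raw scoring and Cronbach's alpha for a
# short test. Item responses are invented for illustration.
import numpy as np

# Rows = examinees, columns = items scored 0 (incorrect) or 1 (correct).
items = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
])

total_scores = items.sum(axis=1)          # each examinee's raw score

k = items.shape[1]                        # number of items
item_vars = items.var(axis=0, ddof=1)     # sample variance of each item
total_var = total_scores.var(ddof=1)      # sample variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Raw scores:", total_scores)
print(f"Cronbach's alpha: {alpha:.2f}")
```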
Common data collection methods, with their descriptions, pros, and cons:

Method: Archival
Description: Data that have already been collected by an agency or organization and are in their records or archives.
Pros: Low cost; relatively rapid; unobtrusive; can be highly accurate; often good to moderate validity; usually allows for historical comparisons or trend analysis; often allows for comparisons with larger populations.
Cons: May be difficult to access local data; often out of date; changes in recordkeeping rules make trend analysis difficult or invalid; need to learn how records were compiled to assess validity; may not include data on knowledge, attitudes, and opinions; may not provide a complete picture of the situation.

Method: Key Informant Interviews
Description: Structured or unstructured one-on-one directed conversations with key individuals or leaders in a community.
Pros: Low cost (assuming relatively few interviews); respondents define what is important; rapid data collection; possible to explore issues in depth; opportunity to clarify responses through probes; source of leads to other data sources and other key informants.
Cons: Can be time consuming to set up interviews with busy informants; requires skilled and/or trained interviewers; accuracy (generalizability) limited and difficult to specify; produces limited quantitative data; may be difficult to analyze and summarize findings.

Method: Focus Groups
Description: Structured interviews with small groups of like individuals using standardized questions, follow-up questions, and exploration of other topics that arise, to better understand participants.
Pros: Low cost; rapid data collection; participants define what is important; some opportunity to explore issues in depth; opportunity to clarify responses through probes.
Cons: Can be time consuming to assemble groups; produces limited quantitative data; requires trained facilitators; less control.
