CAB: For Past Participants
We know a lot about what kids and families do, but you probably don't know very much about what we do! On this page you'll find information explaining how CAB works.
Description
The Childhood and Beyond (CAB) research project has been going on for 30 years now. It is a longitudinal, cross-sectional study of students and their achievement in school. We study achievement mainly through surveys administered periodically over those 30 years, supplemented by interviews (conducted mainly by graduate students) and by collection of school record data (i.e., semester grades, standardized test scores, and special education placement).
The surveys cover a broad range of activities, behaviors, and beliefs. For example, we ask about school activities and beliefs (e.g., what kinds of classes students are taking and how they feel about them), family atmosphere (e.g., decision making in the home and sibling relationships), extra-curricular activities (e.g., participation in and importance of sports and clubs), and mental health (e.g., how often students feel good about themselves). We use these factors to form a more complete picture of each child and to determine what helps students achieve in school.
Surveys
There are several different aspects to survey research. First, there's survey development, which is handled mostly by the upper-level graduate students and faculty members. It consists of collecting questions to ask on the survey, arranging them into a coherent format, and testing and re-testing that format (usually with willing staff members' children who are around the age of those in our subject pool) to make sure it is clear and fits our time limits.
Once we have a suitable survey ready, we print it and prepare to administer it. There are two ways to do this: proctored administration in the schools, or self-administration of a mailed survey. In the past we have most often administered surveys in the schools, for several reasons. First, it promotes a higher response rate, since the students are already at school and often willing to take some time out of class. Being in the school also lets us answer any questions the students might have about the survey itself or the study in general, and lets us see the atmosphere in which they go to school. When we administer surveys during the summer, to schools outside our area, or to students who are out of school, we mass-mail surveys to students for self-administration at home. We have also used this method for the few parent surveys we have done. Though a mass mailing takes less time, we try to get into the schools as much as we can, both to collect more surveys and to head off misunderstandings about the questions.
Once the surveys have been administered and are back in the office, we check them into our database and, as we have in recent years, issue a check to the participant. For our most recent in-school administration, we handled this procedure at the school so we could hand each student a check as he or she finished. This process had to be closely monitored so that we could account for all the surveys and the accompanying checks.
While the surveys are coming into the office, we begin coding development. Coding is the process of turning respondents' answers into numbers so that they can be used in statistics. At this point we have codes developed for most of the questions on the survey, but a few questions have been added, updated, or re-worked, and those require a new coding scheme. Developing one often consists of several people sitting down with some of the new surveys, collecting the answers, separating them into like types, and assigning them numbers to be coded later.
When the majority of the surveys are in, we can begin actually coding them. This consists of individual people sitting down with stacks of surveys and deciding, using the previously developed coding scheme, which number to assign to each answer. Coding can be both tedious and interesting: students are often creative and sometimes bored, so they come up with very hard-to-code answers. Usually, each person involved in coding is assigned a certain set of like questions and becomes one of our "experts" in coding that type of question. This improves our coding reliability and helps ensure that the same question with the same answer will be coded the same way every time.
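To give a concrete sense of what a coding scheme looks like, here is a minimal sketch in Python. The categories and numeric codes are invented for illustration; they are not CAB's actual codes.

```python
# Hypothetical coding scheme mapping free-text answers to numeric codes.
# A catch-all code (9 here) is reserved for unrecognized or blank answers.
CODING_SCHEME = {
    "sports": 1,
    "music": 2,
    "clubs": 3,
}
UNCODEABLE = 9

def code_answer(answer: str) -> int:
    """Normalize a free-text answer and map it to its numeric code."""
    return CODING_SCHEME.get(answer.strip().lower(), UNCODEABLE)
```

In practice a coder applies a scheme like this by hand, but the logic is the same: every distinct answer type gets one number, and anything that doesn't fit gets a catch-all code so no response is silently dropped.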
After the coding of a section is finished, we check-code it. This consists of one of our experts in the section sitting down with another expert's books and, essentially, re-coding them. There are several reasons to do this: primarily to make sure that nothing was missed the first time around, but also to verify that the answers were coded correctly. Often, in the process of coding, we come across an answer that appears often enough to deserve its own code, or some sort of error in the coding system. Instead of having each person go back over their own drawer numerous times to check for each and every mistake, we wait until everyone is finished and then pay special attention to those specific errors.
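The check-coding step above can be summarized with a simple percent-agreement figure: line up two coders' codes item by item and count the matches. This is a sketch under that assumption, not CAB's actual reliability procedure.

```python
def agreement_rate(coder_a: list, coder_b: list) -> float:
    """Fraction of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("code lists must cover the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)
```

A low agreement rate on a particular question would flag it for exactly the kind of follow-up pass described above.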
Once the surveys have been coded and check-coded, they are sent to an outside keypunching company, which creates the data file from the numbers we have written in the margins. To do this, the company requires us to specify how the data file should be formatted, and they return the completed, formatted file to us within approximately one month. When the data file comes back, we "clean" it, meaning that we check for errors made in coding or keypunching. Once the data file is clean, graduate students and faculty use it to research specific questions and to write papers for conferences and publications.
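"Cleaning" can be pictured as a range check on every coded field: any value outside the codes allowed for that question must be a coding or keypunching error. The field names and valid ranges below are hypothetical, chosen only to show the idea.

```python
# Hypothetical fields and allowed code ranges; a real codebook defines these.
VALID_RANGES = {
    "grade_code": range(0, 14),    # e.g., 0-13
    "self_esteem": range(1, 8),    # e.g., 1-7 scale
}

def find_errors(record: dict) -> list:
    """Return the names of fields whose coded value is missing or out of range."""
    return [field for field, valid in VALID_RANGES.items()
            if record.get(field) not in valid]
```

Each flagged field sends us back to the paper survey to see whether the coder or the keypuncher made the mistake.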
Interviews
For the most part, interviews are handled by graduate students and staff. First, they select a pool to interview, which for CAB has been families with siblings in the study and, recently, some special-interest groups (e.g., students with a special talent in sports, academics, etc.). Next, they train interviewers in a specific protocol for the interview and contact the families to set up an interview time. Interviews are taped and transcribed, and each interviewer fills out an "Interviewer Observation Sheet" to record the atmosphere in which the interview was conducted (for the most part, the family home).
Occasionally the interviews are transcribed by temporary workers in the office, but more recently we have been sending the tapes out to a transcribing agency to free our temps' time for other tasks. After the transcripts come back, they go through a coding process broadly similar to that used for surveys, so that their data can be used in statistics as well.
With our most recent sets of interviews, we also used a tool called an Eco-map: a large diagram that describes the relationships within the family and between the family as a unit and areas outside it (e.g., the mother's extended family, school, work). This diagram is coded similarly, and since it is already on paper, it is ready before the transcripts and gives us an early peek at some of the data we will get later.
Record Data
Record data consists of semester grades, standardized test scores, and special-education or gifted classifications as recorded by the school. Collecting it involves going into the school, usually during the summer, and copying the information from the students' records. Since we are working directly from the records, we often code the data on the spot rather than writing down the grades and test scores verbatim. Occasionally the school gives us a limited amount of time, in which case we use our time there to write the grades down and then code them back in the office. Once the data is collected, it goes through essentially the same coding process as the surveys.
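Coding grades on the spot amounts to mapping each letter grade to a number as it is read off the record. The mapping below is hypothetical; CAB's actual grade codes may differ.

```python
# Hypothetical letter-grade-to-code mapping; 0 marks anything unrecognized
# so odd entries in a record can be caught during cleaning.
GRADE_CODES = {
    "A": 12, "A-": 11, "B+": 10, "B": 9, "B-": 8,
    "C+": 7, "C": 6, "C-": 5, "D+": 4, "D": 3, "D-": 2, "E": 1,
}

def code_grades(letter_grades: list) -> list:
    """Convert a list of letter grades into numeric codes."""
    return [GRADE_CODES.get(g, 0) for g in letter_grades]
```

Coding at the record means only numbers leave the school, which is faster to key in later and keeps less identifiable detail on our worksheets.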