Conducting Educational Research
Step 10: Write the Instruments Section

The purpose of the Instruments section is to give readers a detailed explanation of how the key variables were conceptualized and measured in the study. Because the instruments were adopted, adapted, or developed in an earlier step, this section simply describes instruments that already exist. Remember from previous chapters that you have an important responsibility to report the methods used in your study clearly so other researchers can evaluate and replicate your work. Clearly and completely reporting on the instruments used in the study is an essential part of that responsibility.

What is reported in the Instruments section will differ depending on how the instrument was developed. If you are writing a project for UniJos, the Instruments section is typically divided into three components: a description of the instrument itself, an explanation of how the instrument was developed, and the reliability and validity of the instrument. If the instrument measures many variables, this structure can become awkward, so I recommend discussing with your supervisor whether to combine the description of the instrument and its development into the same section. If you are writing a journal article, these three components are integrated into a holistic description of the instrument, its development, and its reliability and validity, presented for each variable separately. In other words, the description, development, and reliability/validity information for intrinsic motivation will be explained in one paragraph, the description, development, and reliability/validity information for self-esteem will be explained in the next paragraph, and so on.


Description of the Instrument

The Instruments section should start with a general overview of the instrument(s) used: the type of instrument (e.g., self-report questionnaire, achievement test, interview, behavioral checklist) and the general format of the instrument (e.g., Part A for personal information, and Parts B, C, and D for the key variables). An example introduction is below:
The key variables in this study were measured by a self-report questionnaire. The first part of the instrument included demographic characteristics of gender, age, year in school, and type of school (public/private). The rest of the questionnaire assessed the five variables in the research hypotheses.

After a brief introduction, the Instruments section should explain how each variable was measured separately: one paragraph per variable. Oftentimes it is best to label each paragraph with the name of the variable it describes (see the example below). Each paragraph should contain the following components:

  • A brief explanation of the construct that the items were designed to measure. If there is no Operational Definitions section in Chapter 1, you must give the operational definition here. If there is a separate section for Operational Definitions, then a brief description will suffice.
  • " How participants responded. For example, some items are open-ended where participants are free to write any response. Other close-ended items may use a Likert Scale (e.g., Strongly Agree (4), Agree (3), etc.), tick yes/no, or indicate the frequency of a behavior (e.g., 5=Daily, 4=Weekly, etc.). Still other variables may have unique responses. Regardless, this section must very clearly explain how participants responded to the item.
  • At least one sample item from the instrument itself.
  • How the variable was scored. More about scoring Likert scale variables will be explained in Coding the Data. Briefly, in quantitative research, each participant must be assigned a number on each variable, and the process of assigning that number must be clearly explained. For Likert-type items, the scores on the items that measure a variable are typically either summed or averaged (a brief scoring sketch follows this list). Multiple-choice items on achievement tests are typically summed, and how scores are assigned to essay items must be clearly explained. The marking scheme for all achievement tests or examinations of knowledge must be attached in an Appendix.
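For students who will enter and score their data with statistical software, the sketch below illustrates the summing and averaging described above. It is a minimal Python example; the item names and responses are hypothetical, not data from any actual study.

import pandas as pd

# Hypothetical responses from three participants to a four-item Likert-type scale
# (1 = Strongly Disagree ... 4 = Strongly Agree).
responses = pd.DataFrame({
    "item1": [4, 3, 2],
    "item2": [3, 3, 1],
    "item3": [4, 2, 2],
    "item4": [4, 4, 3],
})

# Summing the items gives a total score for each participant;
# averaging gives a mean score on the original 1-4 response scale.
total_score = responses.sum(axis=1)
mean_score = responses.mean(axis=1)

print(total_score.tolist())  # [15, 12, 8]
print(mean_score.tolist())   # [3.75, 3.0, 2.0]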
An example description of the instrument is provided below:

Socioeconomic Status. Socioeconomic status (SES) is defined as a person's economic standing based on lifestyle, prestige, power, and control of resources (Liu, Ali, Soleck, Hopps, Dunston, & Pickett, 2004). In this study, students who attended two private schools were classified as middle SES and students who attended two public schools were classified as low SES. The school name was noted by the researcher on the students' completed exams.

Intrinsic Motivation. Intrinsic motivation is a person's interest and enjoyment in a specific task. In this research study, the specific task is mathematics. Participants responded to seven items, indicating how true each statement was of them on a Likert scale from 1 (not at all true) to 7 (very true). A sample item is "I enjoyed doing mathematics very much." A total score for intrinsic motivation was calculated for each participant by averaging the responses on the seven items.

Positive Affect. Positive affect is a state of high energy, full concentration, and pleasurable engagement. Participants rated how well each of ten adjectives described how they feel while doing mathematics, on a five-point Likert scale from 1 (very slightly or not at all) to 5 (extremely). "Excited" and "enthusiastic" are examples of the positive affect adjectives. Positive affect was scored by averaging the responses on the ten adjectives.

Note how the paragraph for each variable starts with the variable name, which helps clarify which variable is being measured. An explanation of each variable is then given, as well as exactly how the variable was measured, including both the number of items and how participants responded. The sample item helps the reader evaluate whether the items reflect the definition of the variable stated at the beginning. Finally, explaining how each variable was scored aids in interpreting the Results section.
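As an illustration only, the scoring described in the example paragraphs above could be carried out in software as sketched below. The school types, item labels, and responses are hypothetical and are invented purely for the example.

import pandas as pd

# Hypothetical records for three participants; none of these data come from the study.
df = pd.DataFrame({
    "school_type": ["private", "public", "private"],
    # Seven intrinsic motivation items, each on the 1-7 scale described above.
    "im1": [6, 3, 7], "im2": [5, 4, 6], "im3": [6, 2, 7], "im4": [7, 3, 6],
    "im5": [5, 3, 7], "im6": [6, 4, 6], "im7": [6, 2, 7],
})

# Socioeconomic status: private schools classified as middle SES, public schools as low SES.
df["ses"] = df["school_type"].map({"private": "middle", "public": "low"})

# Intrinsic motivation: the average of the seven item responses.
im_items = ["im" + str(i) for i in range(1, 8)]
df["intrinsic_motivation"] = df[im_items].mean(axis=1)

print(df[["ses", "intrinsic_motivation"]])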


Development of the Instrument

The Instruments section should also report how the instrument was developed. This component depends on whether the instrument was adopted, adapted, or developed by the researcher. Again, it is clearest for the reader if the development procedures for each variable are explained separately. If the same instrument was used to measure multiple variables, make it clear which variables were assessed by that instrument.

Adopted
If the instrument was adopted verbatim from an original instrument, explain:

  • Who developed the measure with specific citation information (include the reference in the References section)
  • Other studies that have used the instrument.

An example description of an instrument that was adopted is given below:

Intrinsic Motivation. A subtest from the Intrinsic Motivation Inventory (Ryan, 1982) was used to assess intrinsic motivation. This instrument has been used in many other educational research studies, including Plant and Ryan (1985); Nix, Ryan, Manly, and Deci (1999); and Vansteenkiste and Deci (2003).

Adapted
If the instrument was adapted because substantial changes were made from the original, explain:

  • Who developed the measure with specific citation information (include the reference in the References section)
  • Exactly what was changed on the instrument, how the changes were made, and why
  • How the revision was judged to be a valid amendment

An example description of an instrument that was adapted is given below:

Positive Affect. Positive affect was assessed using an adaptation of the Positive and Negative Affect Schedule (PANAS; Watson, Clark, & Tellegen, 1988). The original instrument was pilot tested with a group of 15 university students, who were asked to give a definition of each of the ten adjectives. If two or more students gave an incorrect definition of an adjective, it was judged that participants in the main study would also not understand what the word meant. Of the ten adjectives, only one was unfamiliar to the pilot sample: jittery. A close synonym, stressed, was identified and replaced the original word. The revised version of the PANAS was then pilot tested on a new group of 15 university students. Each adjective in the revised version was defined correctly by 14 or 15 of the pilot participants, so it was judged to be an adequate adaptation of the PANAS.

New Instrument
If a new instrument was created for the purpose of the study, explain:

  • The procedure and resources used to develop the measure

An example description of an instrument that was developed by the researcher is given below:

Socioeconomic Status. SES is typically defined by a family's level of income, parental occupation, and the parents' level of education. Operationally, SES tends to be measured in the United States by qualification for free or reduced-price lunches in the public school system (e.g., Gonzales et al., 2008). However, Nigeria does not have a nationwide structure of support for students from low SES backgrounds. Therefore, SES was defined in this study by the type of school that children attended. Public education in Nigeria has many infrastructure problems, including a lack of funding and frequent teacher strikes that oftentimes cause 12 years of formal schooling to take 13 or more years to complete. As a result, most parents in Nigeria try to raise the money necessary to pay the relatively more expensive fees for private schools.


Reliability and Validity

As mentioned previously, reliability and validity evidence are required for each variable of interest. Therefore, a reliability coefficient should be reported for each variable that is measured by more than one item. Likewise, validity evidence should also be reported. This is easiest when an instrument was adopted because the researcher can simply summarize the reliability and validity evidence gathered by other researchers, as demonstrated below.

The coefficient alpha, a measure of internal consistency reliability, for the positive affect scale of the PANAS is .89. Validity for the instrument has been established by correlating scores on the instrument with situations that should change affect; for example, positive affect has been found to be related to social activity (Watson et al., 1988).
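When reliability must be computed from the researcher's own data rather than quoted from previous studies, coefficient alpha can be calculated directly from the item responses. The sketch below applies the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores), to a hypothetical set of responses; it is a minimal illustration, not the procedure used in any particular study.

import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a participants-by-items array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_variances = items.var(axis=0, ddof=1)   # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: five participants, four Likert-type items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

print(round(cronbach_alpha(scores), 2))  # prints 0.94 for these hypothetical responses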


Conclusion

Once the Instruments section is finished, the actual instruments themselves must be placed in an Appendix. Note that the instrument in the appendix must be the exact instrument that participants completed. Sometimes I suspect that students change an instrument after participants have completed it so that it looks better in the appendix. This is unethical because it misrepresents the data that participants provided.


Copyright 2012, Katrina A. Korb, All Rights Reserved