IB Psychology Internal Assessment: Experimental Report
The Internal Assessment (IA) is not just another assignment; it is your opportunity to think and work like a real psychologist. By designing, conducting, and reporting your own experiment, you bridge the gap between theoretical concepts and empirical practice. This project demands a careful balance of scientific rigor, ethical responsibility, and clear communication, ultimately teaching you how psychological knowledge is systematically built and evaluated.
Understanding the Task: Replication with a Purpose
Your IA requires you to conduct a simple experiment, which is a research method where one variable is manipulated to observe its effect on another, while controlling for extraneous influences. You will not invent a brand new study. Instead, you will perform a replication, which is an exact repeat of a published study's method, or a modification, where you change one key element (like the sample or a procedure) of an existing study. This approach ensures your study is grounded in established research.
Choosing a study to replicate is your first critical step. Look for classic, straightforward experiments in cognitive psychology or the biological approach, as these often translate well to a school setting. A study on the Stroop effect (where naming the color of a word is slowed if the word itself is a different color name) or one on memory recall under different conditions is an excellent candidate. The goal is to demonstrate your understanding of the experimental process, not to produce groundbreaking results. Your modification, if you choose one, should be purposeful—for instance, testing if the Stroop effect differs between bilingual and monolingual participants.
Designing Your Experiment: From Hypothesis to Procedure
With a base study selected, you must formulate a focused, testable hypothesis. This is a precise, measurable prediction about the relationship between your variables. For a replication of a memory study, your operationalized hypothesis might be: "Participants who study a list of words using the chunking method will recall a significantly higher number of words (measured by a free recall test) than participants who study using simple repetition."
Your experimental design must be appropriate. You will almost certainly use an independent measures design, where different participants are used in each condition of the independent variable. This avoids order effects such as practice and fatigue. The independent variable (IV) is what you manipulate (e.g., study technique: chunking vs. repetition). The dependent variable (DV) is what you measure (e.g., number of words correctly recalled). You must identify and control key extraneous variables, such as noise, time of day, or participants' prior knowledge, to ensure that any change in the DV is likely due to your IV.
Ethical approval is a non-negotiable foundation. Before any contact with participants, you must complete a thorough ethical consideration. This involves obtaining informed consent (participants understand what they will do and can withdraw at any time), ensuring the right to withdraw, maintaining confidentiality (using participant numbers, not names), and providing a full debriefing after the study to explain its true aims and address any concerns. Your teacher must approve your ethical proposal before you proceed.
Executing the Study and Collecting Data
Once approved, you collect data systematically. You need a minimum of 20 total participants, with at least 10 in each condition for an independent measures design. Your procedure must be standardized: every participant in the same condition receives identical instructions and experiences the same experimental environment. This standardization is crucial for reliability. Use materials that are easy to administer and score, like word lists, simple puzzles, or perception tasks. Record your raw data immediately in a well-organized table, ready for analysis.
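Random allocation of participants to conditions is one simple way to keep the two groups comparable in an independent measures design. A minimal sketch of how this could be done (the function name, group labels, and participant IDs below are illustrative assumptions, not part of any specific study):

```python
# Hypothetical sketch: randomly assigning 20 participants to two
# conditions for an independent measures design (10 per condition).
import random

def assign_conditions(participant_ids, seed=None):
    """Shuffle participant IDs and split them evenly into two groups."""
    ids = list(participant_ids)
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"chunking": ids[:half], "repetition": ids[half:]}

# Participant numbers (not names) preserve confidentiality.
groups = assign_conditions(range(1, 21), seed=42)
print(len(groups["chunking"]), len(groups["repetition"]))  # 10 10
```

Using participant numbers rather than names in the allocation also supports the confidentiality requirement described above.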
Analyzing Results: Descriptive Statistics and Presentation
The results section is where you transform raw data into meaningful information. You must use descriptive statistics to summarize and present your data clearly. For each condition of your IV, calculate the measure of central tendency (mean, median, or mode) and a measure of dispersion (range, or ideally, standard deviation). The mean is the average score, while the standard deviation tells you how spread out the scores are around the mean—a low standard deviation indicates scores are clustered tightly together.
Always present your calculations. For example:
- Condition A (Chunking): Mean = ___, Standard Deviation = ___
- Condition B (Repetition): Mean = ___, Standard Deviation = ___
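The mean and standard deviation can be computed by hand or with any spreadsheet; as a cross-check, a short sketch using Python's standard library (the recall scores below are invented placeholder data, not real results):

```python
# Descriptive statistics for two conditions of a hypothetical memory study.
# The scores are made-up placeholder data for illustration only.
import statistics

chunking = [14, 16, 12, 15, 17, 13, 16, 14, 15, 18]
repetition = [9, 11, 8, 12, 10, 7, 13, 9, 10, 11]

for label, scores in [("Chunking", chunking), ("Repetition", repetition)]:
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{label}: Mean = {mean:.2f}, SD = {sd:.2f}")
# Chunking: Mean = 15.00, SD = 1.83
# Repetition: Mean = 10.00, SD = 1.83
```

Reporting both the mean and the standard deviation for each condition, as here, gives the reader the central tendency and the spread at a glance.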
These statistics must be accompanied by a clear graphical display. For comparing the means of two independent groups, a bar chart with error bars (representing standard deviation) is the most appropriate and expected format. The chart should be properly labeled (title, axes, units) and allow for a visual comparison of your groups. Your text should narrate what the graph and statistics show without interpreting why. For instance: "As shown in Figure 1, the mean recall score for the chunking group was higher than for the repetition group. The repetition group also showed greater variability in scores, as indicated by the larger standard deviation."
Writing the Discussion: Evaluation and Reflection
The discussion is your chance to demonstrate critical thinking. Begin by stating whether your results support your hypothesis. Then, link your findings back to the background theory and the study you replicated. Do your results match the original study? If not, propose reasoned explanations.
The core of the discussion is a detailed evaluation of your methodology. You must discuss strengths and limitations in terms of internal validity (did you actually test what you intended?), reliability (would the study produce consistent results?), and generalizability (can the findings be applied to other people/settings?). For example, a strength might be effective control of an extraneous variable through standardization. A limitation might be a small, convenience sample of fellow students, which limits population validity.
Suggest modifications for a future replication. If participant variability was high, you might suggest using a matched pairs design. If instructions were confusing, you would propose a clearer script. This shows you understand how to improve scientific inquiry.
Common Pitfalls
- Poor Operationalization: Defining your variables vaguely (e.g., "memory" or "stress") instead of measurably (e.g., "score on a 20-item free recall test" or "heart rate in beats per minute"). This makes your study impossible to replicate accurately.
- Correction: Always ask "How will this be measured?" Your DV must produce a number.
- Ignoring Ethical Practicalities: Treating the ethical consideration as a formality. Failing to properly debrief participants or storing data with names attached are serious issues.
- Correction: Plan your ethics from the start. Create an anonymous data collection sheet and a script for the debrief. Get signed consent forms.
- Confusing Description with Interpretation in Results: Writing "The chunking group did better because the method is more effective" in the results section. The "why" belongs in the discussion.
- Correction: The results section should only state the facts: what the means, standard deviations, and graph show. Save all explanations, links to theory, and reasons for the next section.
- Superficial Evaluation: Stating a limitation like "our sample was small" without explaining how it specifically impacts the validity or generalizability of your conclusions.
- Correction: Deepen the analysis. For example: "The use of a convenience sample of 16-year-old IB students limits the population validity of the findings, as we cannot generalize the effect of chunking to older adults or to non-academic populations."
Summary
- The Psychology IA is a structured exercise in conducting and reporting a simple experimental study, typically through the replication or modification of published research.
- A successful project rests on a testable, operationalized hypothesis, a sound experimental design (usually independent measures), and rigorous attention to ethical guidelines from conception to debriefing.
- Data analysis requires the correct use of descriptive statistics (mean and standard deviation) presented in a clear, appropriate graph, with narrative that describes but does not interpret the findings.
- The discussion must explicitly connect results to the hypothesis and background research, followed by a critical evaluation of methodological strengths and limitations, concluding with practical suggestions for improving a future study.