Writing up the results section of your dissertation
(Last updated: 12 May 2021)
When asked why doing a dissertation can be such a headache, students usually give one of two answers. Either they simply don't like writing enormous volumes of text, or – and you may relate here – they categorically do not enjoy analysing data. "It's so long and boring!", they wail.
Well, students wail, and we answer. We have put together this very comprehensive, very useful guide on how to write up the results section of your dissertation. To help you further, we've broken the information down into both quantitative and qualitative results, so you can focus on what applies to you most.
Writing up your quantitative results
Understanding the basics of your research
First, you need to recall what you have assessed – or what your main variables are.
All quantitative research has at least one independent and one dependent variable, and, at this point, you should define them explicitly. An independent variable is one that you manipulate or measure in order to test its effect on the dependent variable. The dependent variable is thus your outcome variable.
Second, you need to determine if your variables were categorical or continuous.
A categorical variable is one with a fixed number of possible values (categories), and a continuous variable is one that can take any value within a range of scores. Finally, you need to recall whether you have used a so-called covariate or confounding variable. This is a variable that could have influenced the relationship between your independent and dependent variable, and that you control for statistically in order to accurately estimate the relationship between your main variables.
Let’s explain all this with an example. Suppose that your research was to assess whether height is associated with self-esteem. Here, participants’ height is an independent variable and self-esteem is a dependent variable. Because both height and scores on a measure of self-esteem can have a wide range, you have two continuous variables. You might have also wanted to see if the relationship between height and self-esteem exists after controlling for participants’ weight. In this case, weight is a confounding variable that you need to control for.
Here is another example. You might have assessed whether more females than males want to read a specific romantic novel. Here, your independent variable is gender and your dependent variable is the determination to read the book. Since gender has categories (male and female), this is a categorical variable. If you have assessed the determination to read the book on a scale from 1 to 10 (e.g. 1 = no determination at all to read the book, all the way to 10 = extremely strong determination to read it), then this is a continuous variable; however, if you have asked your participants to say whether they do or do not want to read the book, then this is a categorical variable (since there are two categories: “yes” and “no”).
Lastly, you might have wanted to see if the link between gender and the determination to read the book exists after controlling for participants’ current relationship status. Here, relationship status is your confounding variable.
We will return to these examples throughout this blog post. At this point, it is important to remember that outlining your research in this way helps you to write up your results section in the easiest way possible.
Let’s move on to the next step.
Outlining descriptive and frequency statistics
In order to report descriptive and/or frequency statistics, you need to outline all the variables that you have used in your research and note whether those variables are continuous or categorical.
For continuous variables, you use descriptive statistics and report the measure of central tendency (mean) and the measure of variability or spread (standard deviation). For categorical variables, you use frequency statistics and report the number (or frequency) of participants per category and the associated percentages. In both cases you should present the statistics in a table and comment upon them.
How does all of this look in practice? Recall the two examples that were outlined above. If you have assessed the association between participants’ height and self-esteem, while controlling for participants’ weight, then your research consists of three continuous variables. You need to make a table, as in TABLE 1 below, which identifies means and standard deviations for all these variables. When commenting upon the results, you can say:
Participants were on average 173.50 cm tall (SD = 5.81) and their mean weight was 65.31 kg (SD = 4.44). On average, participants had moderate levels of self-esteem (M = 5.55, SD = 2.67).
Note that, in this example, you are concluding that participants had moderate self-esteem levels because self-esteem was assessed on a 1 to 10 scale and the mean value of 5.55 falls in the middle of that range. If the mean value were higher (e.g., M = 8.33), you would conclude that participants' self-esteem was, on average, high; and if it were lower (e.g., M = 2.44), you would conclude that average self-esteem scores were low.
TABLE 1. Descriptive statistics for all variables used in research:
| | M | SD |
|---|---|---|
| Height (cm) | 173.50 | 5.81 |
| Weight (kg) | 65.31 | 4.44 |
| Self-esteem | 5.55 | 2.67 |
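This guide assumes you are reading your statistics off SPSS-style output, but if you happen to be working in Python, a minimal sketch with pandas produces the same numbers. The file name and column names below are placeholders for your own data, not values from this example:

```python
import pandas as pd

# Load your dataset (placeholder file and column names).
df = pd.read_csv("study_data.csv")

# Means and standard deviations for the continuous variables,
# rounded to two decimal places as in TABLE 1.
descriptives = df[["height", "weight", "self_esteem"]].agg(["mean", "std"]).T.round(2)
print(descriptives)
```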
Let’s now return to our second research example and say that you want to report the degree to which males and females want to read a romantic novel, where this determination was assessed on a 1-10 (continuous) scale. This would look as shown in TABLE 2.
TABLE 2. Descriptive statistics for the determination to read the book, by gender:
| | Males | Females |
|---|---|---|
| Determination to read the book | M = 3.20 | M = 6.33 |
| | SD = .43 | SD = 1.36 |
We can see how to report frequency statistics for different groups by referring to our second example about gender, determination to read a romantic novel, and participants' relationship status.
Here, you have three categorical variables (assuming determination to read the novel was assessed by having participants reply with "yes" or "no"). Thus, you are not reporting means and standard deviations, but frequencies and percentages.
To put this another way, you are noting how many males versus females wanted to read the book and how many of them were in a relationship, as shown in TABLE 3. You can report these statistics in this way:
Twenty (40%) male participants wanted to read the book and 35 (70%) female participants wanted to read the book. Moreover, 22 (44%) males and 26 (52%) females indicated that they are currently in a relationship.
TABLE 3. Frequency statistics for all variables used in research:
| | Males | Females |
|---|---|---|
| Determination to read the book | | |
| Yes | 20 (40%) | 35 (70%) |
| No | 30 (60%) | 15 (30%) |
| Relationship status | | |
| Yes | 22 (44%) | 26 (52%) |
| No | 28 (56%) | 24 (48%) |
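Again, if you are working in Python rather than SPSS, a crosstab in pandas gives you the counts and percentages for a table like TABLE 3. This is only a rough sketch with placeholder file and column names:

```python
import pandas as pd

# Placeholder file and column names standing in for your own data.
df = pd.read_csv("study_data.csv")

# Counts per category, split by gender (mirrors the layout of TABLE 3).
counts = pd.crosstab(df["wants_to_read"], df["gender"])

# The same table expressed as percentages within each gender.
percentages = pd.crosstab(df["wants_to_read"], df["gender"], normalize="columns") * 100

print(counts)
print(percentages.round(1))
```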
Reporting the results of a correlation analysis
Correlation analysis is used when you want to establish whether one or more (continuous, independent) variables relate to another (continuous, dependent) variable. For instance, you may want to see if participants' height correlates with their self-esteem levels.
The first step here is to report whether your variables are normally distributed. You do this by looking at a histogram of your data. If the histogram shows a bell-shaped curve, your data is normally distributed and you should use a Pearson correlation analysis. Here, you report the obtained r value (correlation coefficient) and p value (which needs to be lower than .05 in order to establish significance). If you find a correlation, you can say something like:
The results of the Pearson correlation analysis revealed that there was a positive correlation between people’s height and their self-esteem levels (r = .44, p < .001).
One final thing to note, which is important for all analyses, is that when your output displays a p value of .000, you never report it as "p = .000", but as "p < .001". This is because a p value can never be exactly equal to .000. In all other cases, you report the exact p value, for example "p = .011".
If your data is skewed rather than normally distributed, then you should use a Spearman correlation analysis. Here, you report the results by saying:
Spearman correlation analysis revealed a positive relationship between people's height and their self-esteem (rs = .44, p < .001).

Finally, if you have used a covariate, such as participants' weight, you have used a partial correlation. Your results now tell you the extent to which participants' height and self-esteem correlate after controlling for participants' weight. You report the results by saying something like:

There was a significant positive correlation between height and self-esteem after controlling for participants' weight (r = .39, p = .034).
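If you are running your analyses in Python rather than SPSS, here is a minimal, illustrative sketch of the three correlations using scipy and statsmodels. The data are simulated stand-ins, and the residual-based partial correlation is just one common way of computing it:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated stand-in data; replace with your own height, weight and self-esteem scores.
rng = np.random.default_rng(0)
height = rng.normal(173.5, 5.8, 60)
weight = 0.4 * height + rng.normal(0, 4, 60)
self_esteem = 0.05 * height + rng.normal(0, 2, 60)

# Pearson correlation (use when both variables are roughly normally distributed).
r, p = stats.pearsonr(height, self_esteem)
print(f"Pearson: r = {r:.2f}, p = {p:.3f}")

# Spearman correlation (use when the data are skewed).
rs, p_s = stats.spearmanr(height, self_esteem)
print(f"Spearman: rs = {rs:.2f}, p = {p_s:.3f}")

# Partial correlation controlling for weight: correlate the residuals that remain
# after regressing height and self-esteem on weight. (The p value printed here is
# not adjusted for the covariate; a dedicated routine such as pingouin's
# partial_corr handles the degrees of freedom for you.)
covariate = sm.add_constant(weight)
resid_height = sm.OLS(height, covariate).fit().resid
resid_esteem = sm.OLS(self_esteem, covariate).fit().resid
r_partial, p_partial = stats.pearsonr(resid_height, resid_esteem)
print(f"Partial: r = {r_partial:.2f}, p = {p_partial:.3f}")
```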
You also need to make a table that will summarise your main results. If you didn’t use a covariate, you will have a fairly simple table, such as that shown in TABLE 4. If you have used a covariate, your table is slightly more complex, such as that shown in TABLE 5. Note that both tables use “-” to indicate correlations that have already been noted within the table. Also note how “*”, “**”, and “***” are used to annotate correlations that are significant at different levels.
TABLE 4. Correlations between all variables used in research:
| | Height (cm) | Self-esteem |
|---|---|---|
| Height (cm) | 1.00 | – |
| Self-esteem | .44*** | 1.00 |

***p < .001
TABLE 5. Correlations between all variables, with and without controlling for weight:

| Control variables | | Height (cm) | Self-esteem | Weight (kg) |
|---|---|---|---|---|
| None | Height (cm) | 1.00 | – | – |
| | Self-esteem | .44*** | 1.00 | – |
| | Weight (kg) | .38** | .32** | 1.00 |
| Weight (kg) | Height (cm) | 1.00 | – | – |
| | Self-esteem | .39* | 1.00 | -.44 |

*p < .05; **p < .01; ***p < .001
Reporting the results of a regression
Before reporting the results of a regression analysis, you first need to check that its assumptions have been met. These are the specific points that you need to address:
(1) for the assumption of no multicollinearity (i.e., a lack of high correlation between your independent variables), you need to establish that none of your Tolerance statistics are below 0.1 and none of the VIF statistics are above 10;
(2) for the assumption of no autocorrelation of residuals (i.e., a lack of correlation between the residuals of successive observations), you need to consult a table of Durbin-Watson critical values and confirm that your Durbin-Watson statistic falls within the desirable range, given your number of participants and the number of predictors (independent variables); and,
(3) for the assumptions of linearity (i.e., a linear relationship between independent and dependent variables) and homoscedasticity (i.e., a variance of error terms that should be similar across the independent variables), you need to look at the scatterplot of standardised residual on standardised predicted value and conclude that your graph doesn’t funnel out or curve.
All of this may sound quite complex. But in reality it is not: you just need to look at your results output to note the Tolerance and VIF values, Durbin-Watson value, and the scatterplot. Once you conclude that your assumptions have been met, you write something like:
Since none of the Tolerance values were below 0.1 and none of the VIF values were above 10, the assumption of no multicollinearity has been met. The Durbin-Watson statistic fell within the expected range, thus indicating that the assumption of no autocorrelation of residuals has been met as well. Finally, the scatterplot of standardised residual on standardised predicted value did not funnel out or curve, and thus the assumptions of linearity and homoscedasticity have been met as well.
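For readers working in Python rather than SPSS, the same assumption checks can be sketched with statsmodels and matplotlib. The data below are simulated placeholders, not a definitive implementation:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Simulated stand-in data; replace with your own variables.
rng = np.random.default_rng(1)
df = pd.DataFrame({"height": rng.normal(173.5, 5.8, 60),
                   "weight": rng.normal(65.3, 4.4, 60)})
df["self_esteem"] = 0.1 * df["height"] - 0.05 * df["weight"] + rng.normal(0, 2, 60)

X = sm.add_constant(df[["height", "weight"]])
model = sm.OLS(df["self_esteem"], X).fit()

# Multicollinearity: VIF for each predictor (Tolerance is simply 1 / VIF).
for i, name in enumerate(X.columns):
    if name != "const":
        vif = variance_inflation_factor(X.values, i)
        print(f"{name}: VIF = {vif:.2f}, Tolerance = {1 / vif:.2f}")

# Autocorrelation of residuals: Durbin-Watson statistic (values close to 2 are desirable).
print(f"Durbin-Watson = {durbin_watson(model.resid):.2f}")

# Linearity and homoscedasticity: the scatterplot of standardised residuals against
# standardised predicted values should show no funnelling or curvature.
std_resid = model.get_influence().resid_studentized_internal
std_pred = (model.fittedvalues - model.fittedvalues.mean()) / model.fittedvalues.std()
plt.scatter(std_pred, std_resid)
plt.xlabel("Standardised predicted value")
plt.ylabel("Standardised residual")
plt.show()
```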
If your assumptions have not been met, you need to dig a bit deeper and understand what this means. A good idea would be to read the chapter on regression (and especially the part about assumptions) in Andy Field's statistics textbook. It explains all you need to know about the assumptions of a regression analysis, how to test them, and what to do if they have not been met.
Now let’s focus on reporting the results of the actual regression analysis. Let’s say that you wanted to see if participants’ height predicts their self-esteem, after controlling for participants’ weight. You have entered height and weight as predictors in the model and self-esteem as a dependent variable.
First, you need to report whether the model reached significance in predicting self-esteem scores. Look at the results of an ANOVA analysis in your output and note the F value, degrees of freedom for the model and for residuals, and significance level. These values are shown in PICTURE 2.
PICTURE 2. Results of ANOVA for regression:
PICTURE 3. Model summary for regression:
The significance value tells you whether each predictor reached significance – for example, whether participants' height predicted self-esteem scores. You also need to comment upon the β value (the regression coefficient). This value represents the change in the outcome associated with a one-unit change in the predictor. Thus, if the β value for participants' height (predictor/independent variable) is .35, then for every increase in height of 1 cm, self-esteem increases by .35 points. You need to report the same thing for your other predictor – that is, participants' weight.
Finally, since your model included both height and weight as predictors, and height acted as a significant predictor, you can conclude that participants’ height influences their self-esteem after controlling for weight.
PICTURE 4. Regression coefficients:
The model reached significance, meaning that it successfully predicted self-esteem scores (F(1,40) = 99.59, p < .001). The model explained 33.5% of the variance in self-esteem scores. Participants' self-esteem was predicted by their weight (β = -.35, t = -8.13, p < .001): for every increase in weight of 1 kg, self-esteem decreased by .35 points. Participants' self-esteem was also predicted by their height (β = .58, t = 17.80, p < .001), after controlling for their weight: for every increase in height of 1 cm, self-esteem increased by .58 points.
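Purely as an illustration for Python users, here is a rough statsmodels sketch that produces the F value, degrees of freedom, variance explained and coefficients you would report. The data are simulated placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; swap in your own height, weight and self-esteem scores.
rng = np.random.default_rng(2)
df = pd.DataFrame({"height": rng.normal(173.5, 5.8, 60),
                   "weight": rng.normal(65.3, 4.4, 60)})
df["self_esteem"] = 0.1 * df["height"] - 0.05 * df["weight"] + rng.normal(0, 2, 60)

X = sm.add_constant(df[["height", "weight"]])
model = sm.OLS(df["self_esteem"], X).fit()

# Overall model fit: F value, degrees of freedom, p value and variance explained.
print(f"F({int(model.df_model)},{int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.3f}, R-squared = {model.rsquared:.3f}")

# Coefficients, t values and p values for each predictor.
print(model.summary().tables[1])
```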
Reporting the results of a chi-square analysis
You use a chi-square analysis when both your independent and dependent variables are categorical. For instance, you would do a chi-square analysis when you want to see whether gender (categorical independent variable with two levels: males/females) affects the determination to read a book, when this variable is measured with yes/no answers (categorical dependent variable with two levels: yes/no).
When reporting your results, you should first make a table as shown in TABLE 3 above. Then you need to report the results of a chi-square test, by noting the Pearson chi-square value, degrees of freedom, and significance value. You can see all these in your output.
Finally, you need to report the strength of the association, for which you need to look at the Phi and Cramer's V values. When each of your variables has only two categories, as in the present example, the Phi and Cramer's V values are identical and it doesn't matter which one you report. However, when one of your variables has more than two categories, it is better to report the Cramer's V value. You report these values by indicating the actual value and the associated significance level.
Note that Cramer’s V value can range from 0 to 1. The closer the value is to 1, the higher the strength of the association. You can report the results of the chi-square analysis in the following way:
There was a significant association between gender and the determination to read the romantic novel (χ²(1) = 25.36, p < .001). Cramer's V was significant (Cramer's V = .75, p < .001), indicating a strong association.
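If you prefer Python to SPSS, a short scipy sketch reproduces the chi-square value, degrees of freedom, p value and Cramer's V from the counts in TABLE 3. Treat it as an illustration rather than a recipe:

```python
import numpy as np
from scipy import stats

# Contingency table based on TABLE 3: rows = wants to read (yes/no), columns = males/females.
table = np.array([[20, 35],
                  [30, 15]])

# correction=False gives the uncorrected Pearson chi-square (scipy applies Yates'
# continuity correction to 2x2 tables by default).
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")

# Cramer's V: sqrt(chi-square / (N * (min(rows, columns) - 1))).
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"Cramer's V = {cramers_v:.2f}")
```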
Reporting the results of a t-test analysis
You use a t-test when you want to compare two groups on a continuous dependent variable – for instance, whether males and females differ in their determination to read the romantic novel, assessed on a 1-10 scale. Recall that you have previously outlined descriptive statistics for these variables, noting the means and standard deviations of males' and females' scores (see TABLE 2 above). Now you need to report the obtained t value, degrees of freedom, and significance level – all of which you can see in your results output.
You can say something like:
Males reported a lower determination to read the novel (M = 3.20, SD = .43) when compared to females (M = 6.33, SD = 1.36). The results of a t-test analysis revealed that this difference reached significance (t(54) = 4.47, p < .001).
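For Python users, an equivalent sketch with scipy looks like this; the scores are simulated stand-ins for your own data:

```python
import numpy as np
from scipy import stats

# Simulated stand-in scores on the 1-10 determination scale.
rng = np.random.default_rng(3)
males = rng.normal(3.2, 0.4, 28)
females = rng.normal(6.3, 1.4, 28)

print(f"Males: M = {males.mean():.2f}, SD = {males.std(ddof=1):.2f}")
print(f"Females: M = {females.mean():.2f}, SD = {females.std(ddof=1):.2f}")

# Independent-samples t-test; pass equal_var=False for Welch's t-test
# if the group variances differ noticeably.
t, p = stats.ttest_ind(males, females)
print(f"t({len(males) + len(females) - 2}) = {t:.2f}, p = {p:.3f}")
```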
Reporting the results of one-way ANOVA
In the t-test example, you had two conditions of a categorical independent variable, which corresponded to whether a participant was male or female. You would have three conditions of an independent variable when assessing whether relationship status (independent variable with three levels: single, in a relationship, and divorced) affects the determination to read a romantic novel (as assessed on a 1-10 scale).
Here, you would report the results in a similar manner to that of a t-test. You first report the means and standard deviations on the determination to read the book for all three groups of participants, by saying who had the highest and lowest mean. Then you report the results of the ANOVA test by reporting the F value, degrees of freedom (for within-subjects and between-subjects comparisons), and the significance value.
There are two things to note here. First, before reporting your results, you need to look at your output to see whether the so-called Levene’s test is significant. This test assesses the homogeneity of variance – the assumption being that all comparison groups should have the same variance. If the test is non-significant, the assumption has been met and you are reporting the standard F value.
However, if the test is significant, the assumption has been violated and you should instead report the Welch statistic, its associated degrees of freedom, and significance value (all of which you will see in your output).
The second thing to note is that ANOVA tells you only whether there were significant differences between groups – but if there are differences, it doesn’t tell you where these differences lie. For this, you need to conduct a post-hoc comparison (Tukey’s HSD test). The output will tell you which comparisons are significant.
You can report your results in the following manner:
Single participants were the most determined to read the book (M = 7.11, SD = .45), followed by divorced participants (M = 5.11, SD = .55) and participants in a relationship (M = 4.95, SD = .44). ANOVA revealed significant between-groups differences (F(2,12) = 5.12, p = .004). Post-hoc comparisons revealed significant differences between single participants and those in a relationship (p = .003) and between single and divorced participants (p = .004). There were no significant differences between divorced participants and those in a relationship (p = .067).
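Here is a rough Python sketch of the same sequence – Levene's test, the one-way ANOVA and Tukey's HSD – using scipy and statsmodels on simulated placeholder data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated stand-in determination scores (1-10 scale) for the three groups.
rng = np.random.default_rng(4)
single = rng.normal(7.1, 0.5, 20)
relationship = rng.normal(5.0, 0.5, 20)
divorced = rng.normal(5.1, 0.6, 20)

# Levene's test for homogeneity of variance (should be non-significant).
lev_stat, lev_p = stats.levene(single, relationship, divorced)
print(f"Levene: W = {lev_stat:.2f}, p = {lev_p:.3f}")

# One-way ANOVA.
f_value, p_value = stats.f_oneway(single, relationship, divorced)
df_within = len(single) + len(relationship) + len(divorced) - 3
print(f"F(2,{df_within}) = {f_value:.2f}, p = {p_value:.3f}")

# Tukey's HSD post-hoc comparisons to locate the group differences.
scores = np.concatenate([single, relationship, divorced])
groups = ["single"] * 20 + ["in a relationship"] * 20 + ["divorced"] * 20
print(pairwise_tukeyhsd(scores, groups))
```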
Reporting the results of ANCOVA
For instance, you will use ANCOVA when you want to test whether relationship status (categorical independent variable with three levels: single, in a relationship, divorced) affects the determination to read a romantic novel (continuous dependent variable, assessed on a 1-10 scale) after controlling for participants’ general interest in books (continuous covariate, assessed on a 1-10 scale).
To report the results, you need to look at the “test of between-subjects effects” table in your output. You need to report the F values, degrees of freedom (for each variable and error), and significance values for both the covariate and the main independent variable. As with ANOVA, a significant ANCOVA doesn’t tell you where the differences lie. For this, you need to conduct planned contrasts and report the associated significance values for different comparisons.
You can report the results in the following manner:
The covariate, general interest in books, was significantly related to the determination to read the romantic novel (F(1,26) = 4.96, p < .001). There was also a significant effect of relationship status on the determination to read the romantic novel, after controlling for the effect of the general interest in books (F(2,26) = 4.14, p < .001). Planned contrasts revealed that being single significantly increased the determination to read the book when compared to being in a relationship (t(26) = 2.77, p = .004) and when compared to being divorced (t(26) = 1.89, p = .003).
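For Python users, an ANCOVA can be sketched with statsmodels' formula interface. The data below are simulated placeholders and the column names are only illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data: relationship status, general interest in books (covariate)
# and determination to read the novel, both on 1-10 scales.
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "status": np.repeat(["single", "in a relationship", "divorced"], 20),
    "interest": rng.uniform(1, 10, 60),
})
group_effect = df["status"].map({"single": 7.0, "in a relationship": 5.0, "divorced": 5.2})
df["determination"] = group_effect + 0.3 * df["interest"] + rng.normal(0, 1, 60)

# ANCOVA: determination predicted by relationship status, controlling for interest.
model = ols("determination ~ C(status) + interest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F, degrees of freedom and p for covariate and factor
```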
Reporting the results of MANOVA
For instance, you would use MANOVA when testing whether male versus female participants (independent variable) show a different determination to read a romantic novel (dependent variable) and a determination to read a crime novel (dependent variable).
When reporting the results, you first need to notice whether the so-called Box’s test and Levene’s test are significant. These tests assess two assumptions: that there is an equality of covariance matrices (Box’s test) and that there is an equality of variances for each dependent variable (Levene’s test).
Both tests need to be non-significant in order for these assumptions to be met. If either test is significant, you need to dig deeper and understand what this means. Once again, you may find it helpful to read Andy Field's chapter on MANOVA.
Following this, you need to report your descriptive statistics, as outlined previously. Here, you are reporting the means and standard deviations for each dependent variable, separately for each group of participants. Then you need to look at the results of “multivariate analyses”.
You will notice that you are presented with four statistic values and associated F and significance values. These are labelled as Pillai’s Trace, Wilks’ Lambda, Hotelling’s Trace, and Roy’s Largest Root. These statistics test whether your independent variable has an effect on the dependent variables. The most common practice is to report only the Pillai’s Trace. You report the results in the same manner as reporting ANOVA, by noting the F value, degrees of freedom (for hypothesis and error), and significance value.
However, you also need to report the statistic value of one of the four statistics mentioned above. You can label the Pillai's Trace statistic with V, the Wilks' Lambda statistic with Λ, the Hotelling's Trace statistic with T, and the Roy's Largest Root statistic with Θ (but you need to report only one of them).
Finally, you need to look at the results of the Tests of Between-Subjects Effects (which you will see in your output). These tests tell you how your independent variable affected each dependent variable separately. You report these results in exactly the same way as in ANOVA.
Here’s how you can report all results from MANOVA:
Males were less determined to read the romantic novel (M = 4.11, SD = .58) when compared to females (M = 7.11, SD = .43). Males were, however, more determined to read the crime novel (M = 8.12, SD = .55) than females (M = 5.22, SD = .49). Using Pillai's Trace, there was a significant effect of gender on the determination to read the romantic and crime novels (V = 0.32, F(4,54) = 2.56, p = .004). Separate univariate ANOVAs on the outcome variables revealed that gender had a significant effect on both the determination to read the romantic novel (F(2,27) = 9.73, p = .003) and the determination to read the crime novel (F(2,27) = 5.23, p = .038).
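Finally, for anyone working in Python rather than SPSS, statsmodels' MANOVA class prints all four multivariate statistics from a single call – again a sketch on simulated placeholder data:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated stand-in data: gender plus two dependent variables
# (determination to read a romantic novel and a crime novel, 1-10 scales).
rng = np.random.default_rng(6)
df = pd.DataFrame({"gender": np.repeat(["male", "female"], 30)})
df["romantic"] = np.where(df["gender"] == "male", 4.1, 7.1) + rng.normal(0, 0.5, 60)
df["crime"] = np.where(df["gender"] == "male", 8.1, 5.2) + rng.normal(0, 0.5, 60)

# The mv_test() output reports Pillai's Trace, Wilks' Lambda, Hotelling's Trace
# and Roy's Largest Root for the gender effect, each with its F and p value.
manova = MANOVA.from_formula("romantic + crime ~ gender", data=df)
print(manova.mv_test())
```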
Writing up your qualitative results
Understanding the basics of your research
Before reporting the results of your qualitative research, you need to recall what type of research you have conducted. The most common types of qualitative research are interviews, observations, and focus groups – and your research is likely to fall into one of these types.
All three types of research are reported in a similar manner. Still, it may be useful if we focus on each of them separately.
Reporting the results of interviews
For example, let’s say that your qualitative research focused on young people’s reasons for smoking. You have asked your participants questions that explored why they started smoking, why they continue to smoke, and why they wish to quit smoking. Since your research was organised in this manner, you already have three major themes: (1) reasons for starting to smoke, (2) reasons for continuing to smoke, and (3) reasons for quitting smoking. You then explore particular reasons why your participants started to smoke, why they continue to smoke, and why they want to quit. Each reason that you identify will act as a subtheme.
When reporting the results, you should organise your text in subsections. Each section should refer to one theme. Then, within each section, you need to discuss the subthemes that you discovered in your data.
Let’s say that you found that young people started smoking because: (1) they thought smoking was cool, (2) they experienced peer pressure, (3) their parents modelled smoking behaviour, (4) they thought smoking reduces stress, and (5) they wanted to try something new. Now you have five subthemes within your “reasons for starting to smoke” theme. What you need to do now is to present the findings for each subtheme, while also reporting quotes that best describe your subtheme. You do that for each theme and subtheme.
It is also good practice to make a table that lists all your themes, subthemes, and associated quotes.
Here’s an example of how to report a quote within a text:
Several participants noted that they started smoking because they thought smoking was cool. One participant said: “I was only 15 at the time and I was looking at these older boys whom everybody considered as cool. I was shy and I always wanted to be more noticed. So I thought that, if I start smoking, I will be more like these older boys” (interview 1, male).
Reporting the results of observations
For instance, suppose you have observed a series of sessions between a therapist and a patient. You might have noticed that the therapist finds it important to discuss: (1) the origin of the problem, (2) the absence of any medical difficulties on the patient's part, (3) the experience of stress, (4) the link between stress and the problem, and (5) a new understanding of the problem. You can consider these as themes in your observations.
Accordingly, you will want to report each theme separately. You do this by outlining your observation first (this can be a conversation or a behaviour that you observed), and then commenting upon it.
Here’s an example:
Therapist: Was there something that stressed you out during the last few months?
Patient: Yes, of course. I thought I would lose my job, but that passed. After that, I was breaking up with my girlfriend. But between those things, I was fine.
Therapist: And was there any difference in your symptoms while you were and while you were not stressed?
Patient: Hmmm. Actually yes. Now that I think of it, they were mostly present when I went through those periods.
Therapist: Could it be that stress intensifies your symptoms?
Patient: I never thought of it. I guess it seems logical. Is it?
In this example, the therapist has tried to make a connection between the patient’s symptoms and stress. She did not explicitly tell the patient “your symptoms are caused by stress”. Instead, she has guided him, through questions, to connect his symptoms to stress. This seems beneficial because the patient has arrived at the link between stress and symptoms himself.
Reporting the results of focus groups
As an example, let’s say that your focus group dealt with identifying reasons why some people prefer Coca-Cola over Schweppes, and vice versa. You have transcribed your focus group sessions and have extracted themes from the data. You have discovered a wide variety of reasons why people prefer one of the two drinks.
When reporting your results, you should have two sections: one listing reasons for favouring Coca-Cola, and the other listing reasons for favouring Schweppes. Within each section, you need to identify specific reasons for these preferences. You should connect these specific reasons to particular quotes.
Here’s an example of how this may look:
The first reason why some participants favoured Schweppes over Coca-Cola is that Schweppes is considered less sweet. Several participants agreed on this point. One said: “I don’t like Coca-Cola because it is simply too sweet. Schweppes has a much more bitter taste, and I don’t feel like I am getting stuffed with sugar” (participant 2, female). Another participant agreed by noting: “I completely agree with what she said. Because Coca-Cola is sweet, I feel like I have taken a candy, and this doesn’t refresh me. A glass of cold Schweppes is much more refreshing. I don’t feel like needing water after it” (participant 4, male).
In conclusion…
As we have seen, writing up qualitative results is easier than writing up quantitative results. Yet even reporting statistics is not that hard, especially if you have a good guide to help you.
Hopefully, this guide has reduced your worries and increased your confidence that you can write up the results section of your dissertation without too many difficulties.