To keep, to break or to make the ‘rules’… that is a question

What are the ‘rules’ of dissertation writing? They almost certainly differ between fields. In the visual arts, for example, a dissertation would typically include text as well as images, textiles, and design elements of a particular shape and form. In Mathematics, a dissertation would have to include mathematical ‘language’ and forms as well as more conventional text, as arguments are made in equations as well as in other forms of writing. In the social sciences in general, the ‘rules’ seem to be fairly loose. But it seems broadly agreed that there should be around 80-100,000 words of text, and that there must be an introduction, a literature review, a methodology and methods chapter, findings and discussion of these, and a conclusion that could include recommendations. These were certainly the ‘rules’ as I understood them when I started out with my own PhD process. By rules, I mean ‘the agreed upon (usually quite tacit) format, style and content of a dissertation within a particular field or discipline’.

A big question for me, throughout my own doctoral journey, was about these rules, and whether I was breaking them, keeping them or making new ones as I went. There are risks associated with breaking rules – you could be misunderstood, or make your argument more convoluted and confusing by trying to create a new way of producing a dissertation, or find yourself having to do many corrections. But I think there are also risks associated with keeping a set of rules, especially if you don’t have a clear understanding of what the rules are and why they are there. In this case you could end up producing a thesis that conforms but underneath the surface may be ‘thin’ or unsatisfying, and this lack of understanding of what you are writing and why could make it harder for you to move successfully into a research and writing career post-PhD. Further, making new rules is tough, and doesn’t often happen (not for scholars just starting out, anyway). You may make small dents in a structure you disagree with or think could be improved, but it’ll take a number of you to really start making people question whether a specific rule, process or outcome needs to be changed or updated.

I feel like I bent, rather than broke, some of the rules of dissertation writing, and the experience was mostly an anxious one, even though the creativity it allowed me to bring to my writing was exciting, and satisfying. I think visually, and my research journal is full of pictures and scribbles as well as more conventional forms of text. I like metaphors, and I use these a lot in my writing and my teaching. I had a metaphor for the argument I was making, and I was using this to think through my ‘theory chapter’, until a friend, Carol, listened to my idea and suggested that it might be a useful metaphor for the whole dissertation. The metaphor was that of an archaeological dig, and it structured the way I organised my chapters, the headings I gave to them, and what I included in them. I really loved it, but I worried that it was too creative for a PhD in Education, and that it would somehow detract from the seriousness of my argument, or be seen as flippant by my examiners. I think this is one of the more common fears, perhaps, about using a visual and creative tool like metaphor in a field that, unlike the fine or creative arts, is not conventionally visual. In the end, none of my examiners commented on it (which was disappointing), so I needn’t have worried so much. But I still think what I did was important, for me, even if it was not noted by the people who were ultimately responsible for passing or failing my work.

We write, when we do, for others – for our readers and colleagues – but more importantly, we write for ourselves, for our own personal, emotional and intellectual growth and edification. I think when we’re doing a PhD and we’re focused so much on what our supervisor/s think and what our examiners will think and what parts we’ll be able to publish in journals and what the wider academic community in our field will think, we forget to ask ourselves what we think about what we are writing. The question about making, breaking (bending) or keeping the rules then becomes a question about balance – to what extent do we consider our own desires and aims as creators of our own work, and how do we balance these with what we are asked to produce for our external audience? What kinds of risks do we accept and grapple with when we choose to bend and break generally accepted rules of thesis or article writing? What if what we are doing feeds our own souls, but falls on deaf ears in terms of examiners and peer reviewers? Is that too much of a risk, and do we then tone down our creativity in order to create something more conventional and less risky? For me, this is a risk: I’ll be able to get my article published (please, editors) but I may not be really happy with what I have put out there connected with my name.

Perhaps we need to make these kinds of conversations a more recognised and conscious part of PhD supervision, and academic writing for publication. Why do the ‘rules’ as we tend to know them exist and who do they serve? Can they be bent, broken and remade? Who carries the risks here, and what indeed are these risks? I don’t yet have any clear answers**, but I think these are important questions to be asking, talking about, and finding answers to as scholarly communities of practice and as PhD students and supervisors.

 

**A new edited collection takes on the notion of risk in doctoral writing from a range of perspectives: Thesen, L. and Cooper, L. (2013) Risk in Academic Writing: Postgraduate Students, Their Teachers and the Making of Knowledge. Bristol: Multilingual Matters.


Iterativity in data analysis: part 2

This post follows on from last week’s post on the iterative process of doing qualitative data analysis. Last week I wrote a more general musing on the challenges inherent in doing qualitative analysis; this week’s post is focused more on the ‘tools’ or processes I used to think and work my way through my iterative process. I drew quite a lot on Rainbow Chen’s own PhD tools as well as others, and adapted these to suit my research aims and my study (reference at the end).

The first tool was a kind of ‘emergent’ or ‘ground up’ form of organisation, and it really helps you to get to know your data quite well. It’s really just a form of thematic organisation – before you begin to analyse anything, you have to sort, organise and ‘manage’ your mountain of data so that you can see the wood for the trees, as it were. I didn’t want to be overly prescriptive. I knew what I was looking for, broadly, as I had generated specific kinds of data and my methodology and theory were very clearly aligned. But I didn’t really know what exactly all my data was trying to tell me, and I really wanted it to tell its story rather than me telling it what it was supposed to be saying. I wanted, in other words, for my data to surprise me as well as to show me what I had broadly hoped to find in terms of my methodology and my theoretical framework. So, the ‘tool’ I used organised the data ‘organically’, I suppose – creating very descriptive categories for what I was seeing and not trying to overthink this too much. As I read through my field notes, interview transcripts, video transcripts and documents, I created categories like ‘focusing on correct terminology’ and ‘teacher direction of classroom space’ and ‘focus on specific skills’. The theory is always informing the researcher’s gaze, as Chen notes in her paper (written with Karl Maton), but to rush too soon to theory can be a mistake and can narrow your findings. So my theory was here, underpinning my reading of the data, but I did not want to rush to organise my data into theoretical and analytical ‘codes’ just yet. There was a fair bit of repetition as I did this over a couple of weeks, reading through all my data at least twice for each of my two case studies. I put the same chunks of text into different categories (a big plus of using data analysis software), and I made time to scribble in my research journal at the end of each day during this process, noting emerging patterns or interesting insights that I wanted to come back to in more depth in the analysis.

An example of my first tool in action
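
To make the shape of this first tool concrete, here is a minimal sketch in Python of the underlying idea – chunks of data assigned to descriptive, ‘ground up’ categories, with the same chunk allowed to sit under more than one category at once. This is an illustration of the logic only, not of how NVivo works internally; the chunk IDs and excerpts are hypothetical stand-ins, though the category names are the ones mentioned above:

```python
from collections import defaultdict

# Each 'chunk' is an excerpt from field notes, a transcript or a document.
# The IDs and excerpts below are hypothetical stand-ins.
chunks = {
    "fieldnotes_day3_p2": "Lecturer corrects students' everyday wording, "
                          "insisting on the precise disciplinary terms.",
    "video_wk2_seg5": "Lecturer moves between groups, directing students "
                      "back to the worked example on the board.",
}

# Descriptive, 'emergent' categories: named for what is seen, not for theory.
coding = defaultdict(list)

def code_chunk(chunk_id, categories):
    """Assign one chunk of data to one or more descriptive categories."""
    for category in categories:
        coding[category].append(chunk_id)

# The same chunk can legitimately sit under several categories at once -
# the 'big plus' of software over scribbling on paper.
code_chunk("fieldnotes_day3_p2", ["focusing on correct terminology",
                                  "focus on specific skills"])
code_chunk("video_wk2_seg5", ["teacher direction of classroom space"])

for category, ids in sorted(coding.items()):
    print(f"{category}: {ids}")
```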

The second process was what a quantitative researcher might call ‘cleaning’ the data. There was, as I have noted, repetition in my emergent categories. I needed to sort that out and also begin to move closer to my theory by doing what I called ‘super-coding’ – beginning to code my data more clearly in terms of my analytical tools. There were two stages here: the first was to go carefully through all my categories and merge very similar ones, delete unnecessary categories left over after the merging, and make sure that there were no unnecessary or confusing repetitions. I felt like the data was indeed ‘cleaner’ after this first stage. The second stage was then to super-code by creating six overarching categories, named after the analytical tools I developed from the theory. For example, using LCT gave me ‘Knowers’, ‘Knowledge’, ‘Gravity’ and ‘Density’. I was still not that close to the theory here, so I used looser terms than the theory asks researchers to use (for example, we always write ‘semantic gravity’ rather than just ‘gravity’). I then organised my ‘emergent’ categories under these headings, ending up with two levels of coded data, and coming a step closer to analysis using the theoretical and analytical tools I had developed to guide the study.
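
A similarly hedged sketch of these two stages: near-duplicate emergent categories are folded into one canonical name (‘cleaning’), and the survivors are then nested under a handful of overarching, theory-derived headings (‘super-coding’), giving the two levels of coded data described above. The sample data, merge pairs and category-to-heading assignments here are invented for illustration:

```python
# Emergent categories after the first pass (hypothetical sample data).
coding = {
    "focusing on correct terminology": ["fieldnotes_day3_p2"],
    "focus on specific skills": ["fieldnotes_day3_p2"],
    "focusing on skills": ["interview1_q4"],
    "teacher direction of classroom space": ["video_wk2_seg5"],
}

# Stage 1: 'cleaning' - fold near-duplicate categories into one canonical name.
merges = {"focus on specific skills": "focusing on skills"}

cleaned = {}
for category, chunk_ids in coding.items():
    canonical = merges.get(category, category)
    cleaned.setdefault(canonical, []).extend(chunk_ids)

# Stage 2: 'super-coding' - nest the cleaned categories under overarching
# headings named after the analytical tools (looser than the theory's own
# terms, e.g. 'Gravity' rather than 'semantic gravity').
super_codes = {
    "Knowers": ["teacher direction of classroom space"],
    "Knowledge": ["focusing on correct terminology", "focusing on skills"],
}

# Two levels of coded data: heading -> emergent category -> chunk IDs.
two_level = {
    heading: {category: cleaned.get(category, []) for category in categories}
    for heading, categories in super_codes.items()
}

print(two_level)
```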

By this stage, you really do know your data quite well, and clearer themes, patterns and even answers to your questions begin to bubble up and show themselves. However, it was too much of a leap for me to go from this coding process straight into writing the chapter; I needed a bridge. So I went back to my research journal for the third ‘tool’ and started drawing webs, maps and plans for parts of my chapters. I planned to write chunks, and then connect these together later into a more coherent whole. This felt easier than sitting myself down to write Chapter Four or Chapter Five all in one go. I could just write the bit about the classroom environment, or the bit about the specialisation code, and that felt a lot less overwhelming. I spent a couple of days thinking through these maps, drawing and redrawing them until I felt I could begin to write with a clearer sense of where I was trying to end up. I did then start writing, and working on the chapters, and found myself (to my surprise, actually) doing what looked and felt like – and indeed was – analysis. It was exciting, and so interesting – after being in the salt mines of data generation, and enduring what was often quite a tedious process of sitting in classrooms and making endless notes and transcribing everything, to see beautiful and relevant shapes, answers and insights emerging from the pile of salt was very gratifying. I really enjoyed this part of the PhD journey – it made me feel like a real researcher, and not a pretender to the title.

One of my ‘maps’ for chapter writing

Another ‘map’ for writing

This part of the PhD is often where we can make a more noticeable contribution to the development, critique and generation of new knowledge in and of our fields of study. We can tell a different or new part of a story others are also busy telling, and join a scholarly conversation and community. It’s important to really connect your data and the analysis of it with the theoretical framework and the analytical tools that have emerged from that. If too disconnected, your dissertation can become a tale of two halves, and can risk not making a contribution to your field, but rather becoming an isolated and less relevant piece of research. One way to be more conscious of making these connections clear to yourself and your readers is to think carefully about and develop a series of connected steps in your data analysis process that bring you from your data towards your theory in an iterative and rich, rather than linear and overly simplistic, way. Following and trying to trust a conscious process is tough, but should take you forward towards your goal. Good luck!


 

Reference: Chen, T-S. and Maton, K. (2014) ‘LCT and Qualitative Research: Creating a language of description to study constructivist pedagogy’. Draft chapter (forthcoming).

 

Iterativity in data analysis: part 1

This post is a 2-parter and follows on from last week’s post about generating data.

The one thing I did not know, at all, during my PhD was that qualitative data analysis is a lot more complex, messy and difficult than it looks. I had never done a study of this magnitude or duration before, so I had never worked with this much data before. I had written papers, and done some analysis of much smaller and less messy data sets, so I was not a complete novice, but I must say I was quite taken aback by the mountain of data I found I had once the data generation was complete. What to do now? Where to start? Help!

The first thing I did, on my supervisor’s advice, was get a licence for NVivo 10 and upload all my documents, interview and video recordings and field notes into its clever little software brain so that I could organise the data into folders, and so that I could start reading and coding it. This was invaluable. Software that enables you to store, organise and code your data is a must, I think, for a study as large and long as a PhD. This is not an advert for NVivo, so I won’t get into all its features, and I am sure that other free and paid-for qualitative data analysis packages, like ATLAS.ti or the Coding Analysis Toolkit from UMass, would do the job just as well. However, I will say that being able to keep everything in one place, and being able to put similar chunks of text into different folders without mixing koki colours or scribbling all over paper to the point of confusion, was really useful. I felt organised, and that made a big difference to my mental ability to cope with the data analysis and sense-making process.

The second thing I did was keep very detailed notes in my research journal on my process as it unfolded. This was essential, as I needed to narrate my analysis process to my readers in as much detail as possible in my methodology chapter. If a researcher cannot tell you how they ended up with the insights and conclusions they did, it is much harder to trust their research or believe what they are asking you to accept. I wanted to be believable and convincing – I think all researchers do. Bernstein (2000) wrote about needing two ‘languages of description’ (LoD) in research: the internal (InLoD), which is essentially where you create a theoretical framework for your study that coheres and explains how you are going to understand your problem in a more abstract way; and the external (ExLoD), where you analyse and explain the data using that framework, outlining clearly the process of bringing theory to data and discovering answers to your questions. The stronger and clearer the InLoD and ExLoD, the greater chance other researchers then have of using, adapting and learning from your study, and building on it in their own work. When too much of your process of organising, coding, recoding, reading, analysing and connecting the data is hidden from the reader, or tacit in your writing about it, there is a real risk that your research can become isolated. By this I mean that no one will be able to replicate your study, or adapt your tools or framework to their own study while referencing yours, and therefore your research cannot readily be built on or incorporated into a greater understanding of the problems you are interested in solving (and the possible solutions).

This was the first reason for keeping detailed notes. The second was to trace what I was doing, and what worked and what did not so that I could learn from mistakes and refine my process for future research projects. As I had never worked with a data set this large or varied before, I really didn’t know what to do, and the couple of qualitative research ‘textbooks’ I looked at were quite mechanical or overly instrumental in their approach, which didn’t make complete sense to me. I wanted a more ‘ground-up’ process, which I felt would increase the validity and reliability of my eventual claims. I also wanted to be surprised by my data, as much as I wanted to find what I thought I was looking for. The theory I was using further required that I not just ‘apply’ theory to data (which really can limit your analysis and even lead to erroneous conclusions), but rather engage in an open, multiple and iterative reading of the data in successive stages. Detailed notes were key in keeping track of what I was doing, what confused me, what made sense and so on. Doing this consciously has made me feel more confident in taking on similarly sized research projects in future, and I feel I can keep building and learning from this foundation.

This post is a more conceptual musing about the nature of qualitative data analysis and lays the groundwork for next week’s post, where I’ll get into some of the ‘tools’ or approaches I took in actually doing my analysis. Stay tuned… 🙂

 

Data: collecting, gathering or generating?

I’m thinking about data again – mostly because I am still in the process of collecting/gathering/generating it for my postdoctoral research. At a conference recently I had a conversation with a colleague who talks about ‘generating’ his data – colleagues of mine in my PhD group use this term too – but the default term I use when I am not thinking about it is still ‘collecting’ data. I’m sure this is true for many PhD scholars and even established researchers. I don’t think this is a simple issue of synonyms. I think the term we use can also indicate a stance towards our research, and how we understand our ethical roles as researchers.

Collect (as other PhD bloggers and methods scholars have said) implies a kind of linear, value-free (or at least value-light) approach to data. The data is out there – you just need to go and find it and collect it up. Then you can analyse it and tell your readers what it all means. Collect doesn’t really capture adequately, for me, the ethical dilemmas that can arise, large and small, when you are working in the ‘field’. And one has to ask: is the data just there to be collected up? Does the data pre-exist the study we have framed, the questions we are asking, and the conceptual and analytical lenses we are peering through? I don’t think it does. Scientists in labs don’t just ‘collect’ pre-existing data – experiments often create data. In the social sciences I think the process looks quite different – we don’t have a lab and test tubes, etc. – but even if we are observing teaching or reading documents, we are not collecting – we are creating. Gathering seems like a less deterministic word than collecting, but it has, for me, the same implications. I used this word in my dissertation, and if I could go back I would change it now, having thought some more about all of this.

Generating seems like a better word to use. It implies ‘making’ and ‘creating’ the data – not out of nothing, though; it can carry within it notions of the agency of the researcher as well as the research participants, and notions of the kinds of values, gazes, lenses and interests that the parties to the research bring to bear on the process. When we generate data we do so with a particular sense in mind of what we might want to find or see. We have a question we are asking and need to try and answer as fully as possible, and we have already (most of the time) developed a theoretical or conceptual gaze or framework through which we are looking at the data and the study as a whole. We bring particular interests to bear, too. If, as in my study, you are doing research in your own university, with people who are also your colleagues in other parts of your and their working life, there are very particular interests and concerns involved that impact not just on what data you decide to generate, but also how you look at it and write about it later on. You don’t want to offend these colleagues, or uncover issues that might make them look bad or make them uncomfortable. BUT, you also have a responsibility, ethically, to protect not just yourself but also the research you are doing. Uncomfortable data can also be very important data to talk about – it can push and stretch us in our learning and growth even as it discomforts us. But this is not an easy issue, and it has to be thought about carefully when we decide what to look at, how and why.

These kinds of considerations, as one example, definitely influence a researcher’s approach to generating, reading and analysing their data, and it can help to have a term for this part of the research process that captures at least some of the complexity of doing empirical work. For now, I am going to go with others on this and use ‘generating’. Collecting and gathering are too ‘thin’, and capture very little if any of the values, interests, gazes and so forth that researchers and research participants can bring to bear on a study. Making and creating – well, these are synonyms for generating, but at the moment my thinking is that they make it sound too much like we are pulling the data out of nothing, and this is not the case either. The data is not there to be gathered up, nor is it completely absent prior to us doing the research. In generating data, we look at different sources – people, documents, situations – but we bring to bear our own vested interests, values, aims, questions, frameworks and gazes in order to make of what we see something different and hopefully a bit new. We exercise our agency as researchers, not just alone, but in relation to our data as well. Being aware of this, and making this a conscious rather than a mechanical or instrumental ‘collection’ process, can have a marked impact, for the better I think, on how ethically and responsibly we generate data, analyse it and write about it down the line.