Teaching Critical Analysis in Cancel Culture

Content Warning: Discussion of trans exclusion narratives

They say 2020 is the year of the blog (They being two or three people in my Twitter timeline), and I’m full of new year’s resolutions so let’s talk about Cancel Culture. 

Cancel Culture, if you are blissfully naive, is an internet practice where someone who says something controversial or ‘problematic’ is repeatedly attacked and boycotted in an attempt to minimise the dangerous opinion’s platform. You can think of it as a reaction to the Paradox of Tolerance, in that it is most often something the political left does to protect those viewed as being less powerful. 

The fabulous YouTuber ContraPoints (Natalie Wynn) started January with a 100-minute-long video discussing cancel culture after she herself was ‘cancelled’ for a recent video where she invited Buck Angel to provide a voiceover for a quote. ContraPoints put a huge amount of effort into this video and it’s really one of her best, so I won’t explain the whole drama here, I’ll just point you toward her video. It’s here. You should really consider watching it.

This video really appealed to me for a lot of reasons. As you may well be aware, the University of Edinburgh has seen a lot of outrage regarding feminism and trans rights. The rest of this post, and indeed all of my content, comes from my perspective as a cisgender bisexual woman, and I would encourage you to do your own research on these topics, and to remember that I am no expert here. There are two important concepts here: trans-exclusionary radical feminism (TERF), which many people prefer to call ‘gender-critical feminism’, and trans-inclusive feminism. In a nutshell, the trans-exclusionary feminists are concerned that trans women will endanger female spaces and female safety. Trans-inclusive feminism holds that trans women are women, and have the same need for female spaces as cis women. I use the words ‘trans-exclusionary’ and ‘trans-inclusive’ in this blog because at present I see that as the crux of the conflict. Many years ago, I understood gender-critical feminism to be about breaking down imposed boundaries of gender, but that’s not how the term is now used. It is now fairly universally used to mean trans-exclusionary philosophies. Maybe I misunderstood the term at first, or maybe it has changed, but for me it’s clearer to talk in exclusion/inclusion terms.  

The New York Times recently argued that TERFs are a particularly British problem, which I’m not sure I agree with, but there is certainly a type of older British feminist that you almost ‘expect’ to come out with trans-exclusionary views, such as JK Rowling. Recently Rowling tweeted in support of Maya Forstater, a woman who took her employers to tribunal for not renewing her fixed-term contract after she repeatedly made trans-exclusionary tweets. Her complaint was not upheld. I was irritated by people defending Forstater, and made this tweet, jumping on the cancel culture bandwagon for JK Rowling. 

The Halo Reach realisation is real, incidentally. I remember being on the Halo fan forums at the time and enthusiastically telling a Bungie developer I’d be happy to have my multiplayer character grunt in a female voice, even if the model was the same. That memory, sparked after seeing some of the Reach remake trailers, struck me as a really clear example of why being correctly gendered in conversation is so important to people, in a way that cis folks like myself might find easier to relate to. On reflection, I wish I hadn’t said ‘fuck TERFs’ in the tweet, because that’s not going to invite any reasonable debate or encourage anyone to listen to my ideas. And this is the issue with cancel culture that ContraPoints’ video discusses. Cancel culture prohibits discussion.

While I was watching ContraPoints’ video, I was uncomfortably reminded of something my dad used to say to me when we were having political debates. My dad frequently calls me a ‘Trotskyite’ who’s too busy feuding with my fellows to pull together for the left as a whole. Much like myself, my dad will also say almost anything in a debate to win, but his observation about the left’s tendency to schism is a pertinent one. In the nineties, around our kitchen table, he was eerily prescient of the left’s challenges in the 2010s.

The Economist recently posted a brilliant article about why the Conservatives are the most successful political party in the world. If it’s behind a paywall for you, the answer is that the Tories are very flexible in morality and policy if it keeps them in power. They will be the party they need to be to get elected. The left, by contrast, will fight within itself about how exactly it should respect all members (see this Atlantic article, or this Guardian article which seems to suggest that Labour should have done everything differently in the last GE). 

We live in a world where it is very easy to express an opinion, and where you can be cancelled for that opinion just as quickly. Take Joan Meiners, who tweeted about the inaccuracy of people’s bee tattoos, not realising that those tattoos were a specific reference to the Manchester Evening News Arena bombing. 

I liked Joan’s original post when I saw it, and then I liked the tweets criticising it. Where do I fall in the outrage league?

How do we teach the academics of the future to critically analyse ideas when they see trusted, powerful, and adored figures like JK Rowling being told to die for what is an opinion? Especially when we say we’ll assess them, and the perceived penalty of getting it wrong is so high? In my first year science course, I tell my students I expect compassionate science from them, but I don’t tell them how I expect them to critique a piece of science compassionately. I tell them their best exposure to critical analysis will be in scientific papers, but we all know how bad peer review is for encouraging this kind of discourse.

Trans rights may seem like a strange road to thinking about how I teach critical analysis, but the concern and trepidation I felt about these issues must be a reflection of how it feels to critique something in a modern classroom. Our classrooms will have students who are aware of these debates, aware of these controversies, and whom we’re asking to critically analyse a piece of work. It is our responsibility to teach with an awareness of this culture. 

So this year I’m going to work on actively teaching compassionate critical analysis. I’m not totally sure how this will go yet, so I started drafting an outline. Have a look through the slides and see what you think, feel free to adapt or hit me up on Twitter to give me any feedback.

Cancel culture is cancelled. 

100 Papers

This is as close as I’ll come to an academic year in review

In 2019 I took part in the #100papers challenge. The idea is that you aim to read (fully) 100 scientific papers in a year. 

As I understand it, the challenge was born from the #365papers challenge. Some fools (sorry, well-intentioned folks) aim for averaging a paper a day for a year, and others thought “I’ll be lucky if I manage a third of that”. With both #365papers and #100papers, the idea is that you’ll commit to reading more if you’re publicly tracking it, and maybe also read more widely. I knew that #365papers would not be achievable for me, but #100papers might have been within my grasp (spoilers, it wasn’t). 

I really like setting myself challenges. I’ve done a variety of photography and reading challenges over the years. Tracking the papers that I read on Twitter is innately appealing to me. I also wanted to put a potted summary or key outcome from each paper onto my tweets to force me to read the papers instead of cheating the essence of the challenge by skimming. 

I have a pretty good work-life balance. I set aside a day a week to devote to research and I manage to keep that day protected about 60% of the time I’d say. How many papers did I read in their entirety in 2019? 

40. I read 40 papers cover to cover. 

I have some thoughts about this exercise. Firstly, I don’t think this is The Way to read papers. Something I noticed about reading whole papers was how pointless it often is. I teach students to be selective about how they approach papers, and when I was trying to find out how someone set up a study or I wanted an overview of a particular field, I wasn’t sitting down to read a whole paper, I was flicking to the relevant parts of various papers. So my first big takeaway is that reading whole papers isn’t something that I would prioritise over strategic paper skimming. 

With that being said, there is something quite meditative and indulgent about reading a whole paper. There were some very fun papers like Jenny Scoles’ one on messy boundary objects where the narrative itself is enjoyable. 

(There was also this deeply enjoyable rant where you could feel the authors’ visceral hatred of the right-brain-thinking myth.)

And I also really liked having the Twitter thread of all the papers I’d read, and the ability to jump back into that thread to share with people was massively useful. Bauer et al 2017, alongside reading Invisible Women, has changed my research practice quite considerably this year: 

The performative aspect of talking about the papers I’m reading online was also interesting. I think you can track what projects I was working on with this Twitter thread. You can see when I started reading up on our Widening Participation cluster for example, and I like some of the conversations that spawned from the thread. 


In 2020, I’m probably not going to do the challenge again, but I’ll certainly be posting a papers thread, maybe #paperswotiread or something along those lines. The target of 100 fully read papers is not feasible for me, and if it’s not feasible for me, I’m not comfortable advertising it to those academics who may be following me. I’ve been thinking a lot this year about how I model what I view as ‘good’ academic practice, and I’m trying to make positive choices. So I’ll be doing something like this in 2020, just without the targets.  

Data Literacy

Fancy getting a private blood test? I did . . .

This post talks about my health in a somewhat vague way because the specifics aren’t important here. I want to preface the whole thing by saying I’m absolutely fine and you don’t need to be worrying about me.

For the past couple of years I’ve been dealing with an underlying health concern. It might be nothing, but it doesn’t feel right to me, and after a particularly bad time in late 2018, I asked for a referral to a consultant. Fifty weeks later, the appointment came through. 

Once I got a date for the appointment, I started collecting data. I used a (research-led design) health app to track my mood, my pain, my triggers, and my symptoms. But I also sent away for private blood tests. 

There are lots of companies now which will run a range of screening tests on your blood, spit, or whatever else you’d like to send off in a biohazard bag through the post. There’s also a lot of debate about whether this is a Good Thing or not. 

To me this is a question of data literacy. 

With many long-running conditions, healthcare providers expect individuals to collect data. I have spoken about keeping a note of Athena’s allergies before:

I recorded this data in a little leather notebook I wrote in every evening. The app I use to record my current symptoms is just a more advanced version of the same thing. Both will help a physician to make decisions about treatment. The rub is that the vet and the consultant have been trained for years to make sense of that data, whereas I, as a non-specialist, might draw the wrong conclusion. What is meaningful information, and what’s just noise?

We all know the danger of googling symptoms, such as that time you had a bit of a sore neck and then you discover you have neck cancer and you should get your affairs in order. Extremes are noticeable and attractive. We propagate stories such as that time an infra-red camera scan detected breast cancer, because they’re interesting – and because they’re unusual. I’ve seen lots of hot patches on people using infra-red cameras, and it’s never once been a serious underlying condition. Because as a screening tool, it’s not that useful. 

Screening tests are problematic. We have two measures that are useful when we think about screening: sensitivity and specificity. Let’s imagine we’d come up with a blood test that aimed to diagnose whether you were in fact a bit of a dick at times. A highly sensitive test would read ‘positive’ for everyone who had ever been a bit of a dick. A highly specific test would read ‘negative’ for everyone who had never been a bit of a dick. The issue is that the two don’t have to go together. You can have a highly sensitive test that rightly tells all the dicks they’re dicks, but if it has low specificity it will also tell a lot of people who are not dicks that they are, in fact, dicks. And this will upset a lot of nice people. Conversely, a super-specific test will never frighten the good people with a wrong dick diagnosis, but if it has low sensitivity it will also tell a lot of dicks that they’re good people. We want both sensitivity and specificity in our tests. 
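To make the dick-test arithmetic concrete, here’s a minimal sketch in R with entirely made-up numbers (nothing to do with any real test), showing how sensitivity and specificity fall out of a simple two-by-two table:

```r
# Invented numbers for illustration: 1000 people, 100 of whom really are
# a bit of a dick, cross-tabulated against what the blood test says.
test_results <- matrix(
  c(90, 40,    # test says "dick":     90 true positives, 40 false positives
    10, 860),  # test says "not dick": 10 false negatives, 860 true negatives
  nrow = 2, byrow = TRUE,
  dimnames = list(test = c("dick", "not dick"), truth = c("dick", "not dick"))
)

# Sensitivity: of the people who really are dicks, how many does the test catch?
sensitivity <- test_results["dick", "dick"] / sum(test_results[, "dick"])              # 90 / 100 = 0.90

# Specificity: of the people who are not dicks, how many does the test clear?
specificity <- test_results["not dick", "not dick"] / sum(test_results[, "not dick"])  # 860 / 900 ≈ 0.96
```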

Alongside the issue of how accurate the test is, there’s the question of ‘do you need to know?’ Here’s a great thread by Rachel Horton on the risks of DNA testing for cancer. One of the lines that stands out to me is: “The scarier a result, the more likely it is to be a false positive”.
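That line is essentially a base-rate argument, and it’s worth seeing the numbers. A back-of-the-envelope sketch, again with invented figures: even a test with 95% sensitivity and 95% specificity gives mostly false positives when the thing it screens for is rare.

```r
# Invented numbers purely to show the base-rate effect.
prevalence  <- 0.001  # 1 in 1000 people actually has the scary condition
sensitivity <- 0.95   # the test catches 95% of true cases
specificity <- 0.95   # and correctly clears 95% of healthy people

true_positives  <- prevalence * sensitivity              # 0.00095
false_positives <- (1 - prevalence) * (1 - specificity)  # 0.04995

# Chance you actually have the condition, given a positive result:
true_positives / (true_positives + false_positives)      # ~0.02, i.e. about 2%
```

The scarier the condition, the rarer it usually is, and so the worse those odds get.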

The human body is not something that can be easily categorised. Our biology is a spectrum. Not only may a test be wrong, but you might just be tootling along quite happily with any number of potentially scary numbers attached to you. You may have a perfectly benign cyst in your brain. You may just live with low iron levels. In fact, many GPs are incredibly concerned about how these tests can worry people for no reason. 

Just like you’ve got that weird fleck in your eye, or the way your finger bends at an odd angle, there are lots of little oddities in your body that are perfectly fine. The body has such a great ability to just manage with so many things that don’t quite fit the concept of the medically fit person. 

So with all these sensible caveats in mind: why have I paid for the tests? Like I say, this to me is a data literacy issue. There are rightful concerns about these tests, and healthy people should absolutely not spend money on them unless they really enjoy looking at meaningless numbers (hey – that’s why I have a FitBit, no judgement). But they did give me more information. Specifically, that there was nothing obviously wrong other than being highly deficient in Vitamin D like all Scottish people. 

As a semi-biology-literate person, I did deliberately select the tests that would be likely to be affected by the health concerns I have, and deselected a number of the tests that I did not think were necessary (but that the company were keen on pushing). I also have a reasonable guess at what’s wrong with me from my GP, and that was later confirmed by the consultant.

Personalised healthcare is fast becoming a ‘selling point’ for a lot of services. I don’t think we can stem that tide, so we need to think about how we create more data literate people in all sectors, so others can think critically about what’s being offered them. But health is also scary. The health of our loved ones is particularly scary. Can we ever really be objective or analytical when it comes to health? Is that what these companies are preying on?

I don’t have the answers to these questions, but experiencing it myself has given me a new perspective. And I wonder if this is something we need to start teaching simply as a matter of course: how to deal with large amounts of your own personal data. Probably.

Oh – and after I went to the consultant? I was called to the GP for yet more blood tests.

How you doin’?

The peer observation cycle at the R(D)SVS is approaching its end, so it’s time to find a buddy and ask each other “how am I doing?” Feedback on practice that’s dear to my heart – my favourite thing!

The peer observation cycle at the R(D)SVS is approaching its end, which means we’re all hurriedly looking around for someone to come give us some feedback on how we teach. My teaching load has changed hugely since I started (hello new course organising responsibilities!) and I was feeling quite blasé about the peer obs process, mentally putting myself in reserve for those people who were undoubtedly going to run out of time and need a buddy last minute. 

That was until a colleague approached me and asked if we could buddy up, because we teach very similar subjects. Why not do it right, after all? 

This is the first time in all my peer observation/feedback on teaching sessions that I’ve ever been observed by a more experienced colleague, and a considerably more experienced colleague at that (Dear Peer, if you’re reading this, your experience is simply a reflection of your very hard work, and in no way a commentary on years of teaching 😉 ). And to my surprise, I found myself nervous about it. 

When I’m talking to people about our peer observation sessions I give lots of advice that I did not follow myself. For example, you have complete control over your peer observation sessions. If things are stressful or you’re not feeling it, you can always reschedule. But of course the time I’d scheduled with my peer just happened to fall over another colleague’s sick leave that I was having to unexpectedly cover for, a period of feeling under the weather myself, and a very stressful busy work period. But we went for it anyway. 

How did the session go? Well like many of my teaching sessions, I walked away thinking there was so much more I could have done, and my peer picked up on some of those in our debrief. But what was interesting was that the thing I’d asked my peer to focus on was what my peer considered to be the strongest part of the lecture. And this is a recurring theme in all my peer observations. The things I’m fretting about are usually not the things the peer picks up on. 

In this particular session I’d been asking about the engagement, and worrying about how the students were responding to the more ‘active learning’ parts of the session. My peer helped me see how positively they responded, and then was able to share some of their practice with me that I am definitely stealing (sorry, drawing from) next week when I continue the session. 

My peer also asked me a few questions, like why I didn’t scaffold in breaks, that made me think a little bit more about my approach to preparing teaching. Many of the questions my peer asked could have been answered “Oh I usually do that but…” and that alone is a fascinating observation. I spent a crazy amount of time designing these courses, and working hard on programmatic level innovation. Despite all that hard work, when I’m under pressure I default to teaching in the way I’m most comfortable with, the way I was taught. 

This realisation has also highlighted for me that building engaged and active learning opportunities actually costs me more preparation time than a more traditional lecture, despite the fact it seems like the student is doing most of the work. None of these observations are new to the field, I’ll point out – people have been making this discovery for years, and I suspect I’ve discovered this before too. I think the value of the peer observation session is helping to catch out those little bad habits you can slip back into. 

My final observation on the peer observation is that I’m really proud of myself for accepting the feedback. Working on feedback has been something of a project for me over the last few years. It reminds me of a time when a peer review came back on a paper I’d submitted, and I’d been grumping over the reviewer comments as you’re wont to do, until I got to the end and saw the reviewer’s name on this completely open journal. The reviewer was somebody I highly respected, and suddenly my entire perspective on the feedback changed. Having my peer be a more experienced colleague who I really respect was a great way for me personally to become more open to the feedback I was receiving. 

Ultimately lots to work on, and lots to be proud of, and all for a little bit of an uncomfortable and un-British conversation where we asked each other “how am I doing?”

24601

She works hard for her money . . . ?

This week, in between trying to get through an email backlog from two weeks of conferences, I’ve been trying to write up a piece of research we’ve done looking at academic identity. Curiously, on Twitter there’s also been a provocative Tweet suggesting:

Which has incited discussion in my Twitter timeline about how much scientists should be investing in their career. 

In between those two weeks of conference, in between trying to work on this identity paper on the train, I went surfing for the first time. On Belhaven beach, in a borrowed winter wetsuit, I eagerly became A Surfer. I have none of the accoutrements (although I always find shopping the most fun part of identity formation), and I so far have only one lesson under my belt, but I am firmly convinced this was the identity I was born to carry. Sitting on a sandy white beach and wrapping up warm while I watch the waves is, at present, part of how I conceptualise myself. 

I pick up identities easily. For a while I was a diver (until I couldn’t face another open water dive in Scotland), I am occasionally a knitter, always a writer even when I haven’t written anything original in months, and for a long time now, I have been a scientist. 

During those two weeks of conference I was tired, and a little homesick, and grumpy about not being able to catch up with my workload on horrible trains. I was resenting every one of those 37 hours I’m contracted to work. I’m also a big proponent of taking conferences at your own pace, but at conferences three and four of my year, I found myself pushing my limits to go to more talks, hear more about higher education, and talk to more people.

I care about my identity as an education researcher far more than I ever did about my animal behaviour researcher identity. This was a little surprising to me a couple of years ago when I went into this field. And it gives me very mixed feelings about the discussions regarding ‘science as a job’. 

On the one hand, scientists are more productive when they are happy and healthy – and we need to change the conversation around busyness and workload. The way we glorify exhaustion and working out of hours is doing our colleagues a great disservice. 

And yet, this is a job that I genuinely love and will work very hard to protect. It’s a job that has a lot of benefits, and it’s a job that other people might want. Can I really say it’s ‘just’ a job?

In the end, I know that I can. I can because, despite my love for this job, I’m not here because I work harder or better than everyone else. I maybe work harder than some, but I was luckier than others, and I was in the right place at the right time. Beyond this idea that ‘real scientists work even harder’ is an even more pernicious lie. The idea that we get what we deserve. 

Academia is not a fair place. We are discriminatory, we judge people on implicit criteria, our metrics are meaningless. Peer review is broken, and every way we recognise and reward success in a scientific career is much more about whether you fit the traditional academic mould than about any intrinsic value you have. 

I would caution all of us who love our jobs, who think that we work harder and better and faster, to just check our privileges. And just hang loose, bro.

Complexity

I have the beginnings of some thoughts about teaching statistical modelling

One of my fabulous colleagues has started a book club on campus where a group of us work through Advanced R by Hadley Wickham. Since the day I learned about the tidyverse, this Advanced R book club has given me the biggest leaps in my R skills, and I’m probably only understanding about a fifth of it.

This week we began the chapter on functional programming – and Ian’s code and examples are on GitHub. I went home and spent the evening doing this:

There was one example that Ian drew up that I can’t stop thinking about from a teaching perspective. Teaching stats is really, really intimidating, because the more you know about it, the more you recognise how subjective it can be. I often see people take refuge in complexity where they refuse to answer a learner’s question in favour of reiterating the memorised textbook response. I’ve done this myself! At the same time, I’ve had a really intriguing stats challenge with a colleague where I’ve gone around the houses trying to make sure I can justify our choices.

This comes down to model selection, which is one of the most Fun(™) conversations you can ever have about statistics. The more I learn about statistics the more I feel that model selection is the personification of this tweet from my colleague:

You see, there really are no ‘right’ answers in model selection, just ‘less wrong’ ones. This is the subject of a lot of interesting blogs. One of them is David Robinson’s excellent ‘Variance Explained’.

Another of @drob’s posts that I’m sure I’ve linked to before is this one: Teach tidyverse to beginners. This idea fascinates me. David (and I feel I can call him David because I once asked him a question at a demo and he said it was a good question and it was honestly one of the highlights of my life) suggests that students should have goals, and that they should be working towards those goals as soon as possible.

I don’t know how much educational training the Data Camp/RStudio folks have but I’m always really impressed with the way they teach.

(It’s important here to take a moment to acknowledge the problems Data Camp is having at the moment regarding how they addressed a sexual harassment complaint. I have the utmost sympathy for all involved, and at the moment I don’t feel that boycotting Data Camp is the answer, but it’s worth pointing towards blog posts like this one to give a different opinion.)

‘Doing’ as soon as possible is something we struggle with in higher education. I’ve just had to rewrite a portion of a paper to defend why I think authentic assessment is so vital for science. We put ‘doing’ at the top of our assessment pyramids, and talk about how it takes us a long time to get there.

During this week’s bookclub, my colleague Ian had a great example of using the broom and purrr packages in R to fit multiple models to a dataset quickly and easily. And I had to derail the conversation in the room for a bit. Why don’t we teach this to our students straight away? At present, the way I teach model selection is a laborious process of fitting each model one by one, examining the results individually, and then trying to get those results into some kind of comparable format. After some brief discussion, with all the usual sciencey caveats, our Advanced R bookclub was all keen to use this as a way of introducing model selection to students.
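To give a flavour of the pattern (this is my own rough reconstruction using a built-in dataset, not Ian’s actual code), you can fit a handful of candidate models in one pass and use broom to line their fit statistics up in a single comparable table:

```r
library(purrr)
library(broom)
library(dplyr)

# Candidate models for fuel efficiency in the built-in mtcars data
formulas <- list(
  weight_only   = mpg ~ wt,
  weight_and_hp = mpg ~ wt + hp,
  kitchen_sink  = mpg ~ wt + hp + cyl + disp
)

# Fit every model, then gather the fit statistics into one data frame
model_comparison <- formulas %>%
  map(~ lm(.x, data = mtcars)) %>%
  map_dfr(glance, .id = "model") %>%
  select(model, r.squared, adj.r.squared, AIC, BIC) %>%
  arrange(AIC)

model_comparison
```

Compared with fitting each model by hand, summarising it, and copy-pasting the numbers into a table, the whole comparison sits in front of the student at once, and the conversation can move straight on to which ‘less wrong’ model to prefer.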

I feel as though this is tickling at the edge of something quite important for higher education, especially for the sciences. Something about empowering students, and getting them to ask me about things I don’t know the answer to more quickly. I also feel just a little irate about the fact I can’t formalise this as nicely as I know David Robinson and the RStudio lot can. I kind of feel like some of the most useful stuff I’m doing lately is in the Open Educational Resources range, such as my Media Hopper channels and my GitHub. There’s a freedom in OERs to push the boat out, and to start teaching the complex things first.

And ultimately, my disjointed ramblings might just help someone else connect a few dots. Happy spring, people!

If We Should Dress for Sun or Snow

Despite feeling pretty good about my work-life balance last year, I’ve been a little humbled by 2019 so far. My personal life has needed more attention than my work life, and I’ve been feeling guilty about shifting the focus.

Before Christmas I got very into the Groundhog Day musical soundtrack, particularly If I Had My Time Again, which is my new favourite shower sing-along. I was also thinking a lot about academic workload last year, and how the varying pressures of the academic role can be challenging.

Despite feeling pretty good about my work-life balance last year, I’ve been a little humbled by 2019 so far. My personal life has needed more attention than my work life, and I’ve been feeling guilty about shifting the focus. It’s been difficult to keep on top of things, and I hadn’t quite appreciated how much I’d let things creep into the evenings.

There were two articles recently that my mind kept returning to. One is Dr Anderson’s widow speaking out about academic workload, and the other is this article about email’s influence on workload. They were on my mind particularly on Monday, when I was attending an Echo 360 community meet-up about learning analytics.

I had good reasons for wanting to go to this community meet-up. I’m interested in analytics, and I’m the PI on our university’s evaluation project so a little networking is always valuable. I’m also in the rare academic position of having some spare money floating around so it all seemed worthwhile. Except there was a very west-of-Scotland sounding voice in the back of my head wondering if I’m worth spending that money on. Who am I to go to That London to talk to people? Shouldn’t I be slaving over a hot laptop?

On the other side of this, I’ve also spent a little bit of my evenings this week working on a Shiny app. Now I want to emphasise that ‘a little bit’ in this context literally means five or ten minutes here and there when an idea comes to me, but it’s still very much useful time. And yet I’ve been frustrated that I haven’t been able to spend more time on it.

A couple of months ago I had a devil’s advocate style debate with my good colleague Ian about how much these kinds of extracurricular activities should contribute to our CVs. We kept circling back to how much the open science and open data analysis movements favour those people with the spare time to dedicate to this kind of work. If all your work is on proprietary data, you maybe can only contribute to things like a GitHub repository in your spare time. And what if, when you get home, you have the childcare to do, or can’t get away with not cleaning the house because you’d prefer to spend that time tweaking a package? What if all your hours out of work are spent on other tasks, and when you have that lightning moment of “ah – I should use enquo()!” you can’t immediately go to your laptop to check it out?

There are many people much busier than me who manage to contribute way more than me. Those people should be applauded. And we should definitely still value the amazing resources people put online. But I think it is our responsibility as academics to support ourselves here (and our managers’ responsibility too).

All this is a round-about way of saying that having a little bit less time to make up for my busyness has highlighted to me how very important it is to protect time for the things that are important in your work. During one of our protected analysis times today I started a new package which I hope can be incorporated into a Shiny app I’m planning for our students. Tomorrow’s my first Writing Friday since before Christmas. This is the way to do it. And yes, my emails have been slipping in the meantime. Let ’em.

We should believe we are worth the time.

(And also I managed to go to work today wearing two different earrings and no one pointed it out. That’s not relevant but it amused me greatly.)

Naughty and Nice

A small Christmas blog on the ethics of being overheard . . . he’s making a list, he’s checking it twice . . .

Amazon have put all five seasons of Person of Interest on Prime. Person of Interest is an amazing exploration of what it might cost humanity to create artificial intelligence, and it’s beautifully prescient given Amazon’s recent Alexa data breach where a user was able to access another user’s recordings.

In my book (which if you’re looking for a last minute Christmas gift, do check it out) I talk about how we might end up studying personality through artificial intelligence, and the ethics of how we might consider this data use.

I’m delighted with my Christmas present of Person of Interest. I cry the whole way through this show. It is amazing. But I also have an Alexa sitting in my house, and a Google phone. Occasionally my phone flashes its screen, saying “I didn’t recognise your voice”, much like Athena’s ears prick when she’s snoozing and hears me get to my feet. Do I need to listen for you right now?

On the other hand, in 2018 I’ve also had to balance the issue of not having ethics committee permission to share sensitive data, and the challenges that has caused for making my research open and reproducible. I am particularly proud of this repository, which will be elaborated on in a publication next year – how we can be reproducible when we’re dealing with data that should be confidential. But yes, privacy is a challenge.

And it’s a very strange conversation to have in December. He sees you when you’re sleeping, he knows when you’re awake . . .

Recently, I was asked how old I was when I understood about Santa. It actually ties in to my first experience with religion. I was raised without any religion whatsoever, and when I got to school I was introduced to this whole new concept. That my own mind and actions were not my own private space, that someone or something might be watching. I made a deal with this ‘God’ (who I pictured as Danny Devito in a toga, I do not know why). If I was very good, he would reward me with a hotdog on Friday at lunch time. As one of the Mac kids, I was always at the middle of the lunch queue and the hot dogs were always gone. So I was very, very good for a whole week. I did the praying. I was kind. And on Friday  . . . there was no hot dog.

The only other experience I had with religion was one of my grandfathers who had cryptically said “Any God who doesn’t want me isn’t a God I want to believe in”, and at the age of 5 I sanguinely accepted this logic, and decided the lack of hot dog meant God had no interest in my soul. That Christmas I tried this logic again, and created my perfect toy (a My Little Pony toy of my favourite character – except there would be movable bits). Santa did not come through.

We teach morality to children with the idea of oversight. Perhaps not entirely, but ‘being watched’ is a large component of how we learn our own moral frameworks. The Good Place has made an excellent TV show exploring the concept of being constantly observed (and measured). It’s probably not a coincidence that we’re interested in these stories right now. But it’s also not a coincidence that I got thinking about this after realising I knew more than I wanted to about my new neighbours.

Ultimately I think data collection and analysis is an organic process, and it’s very hard to draw a line between ‘good data collection and analysis’ and ‘bad data collection and analysis’. Amazon absolutely should not be sending clips of audio to a random stranger. But should I hear random snippets from my neighbours’ lives? How often should we accept being ‘overheard’ as a price of being digital neighbours?

I don’t have an answer for this – or even a reason to blog about it on Christmas eve. I just think it’s a very interesting question.

Lessons in Course Design

By some counts (i.e. the number I list on my CV) I’ve led the design of about thirty higher education courses over the last few years. I asked Twitter what would be the most useful format for talking about those lessons . . .

This is that blog.

By some counts (i.e. the number I list on my CV) I’ve led the design of about thirty higher education courses over the last few years. And even I have to have learned something by the end of it. I asked Twitter what would be the most useful format for talking about those lessons, and Twitter was very keen on a personal blog, because they wanted the dirty truths.

This is that blog.

Broadly speaking, I have three takeaways from my work on course design. They overlap, of course, because life is messy, but these are what I’ll be taking forward in future. Respect the need for the course, accept that courses will always be co-creations, and while you must try to innovate, you must also recognise why innovation is so difficult. Respect. Accept co-creation. Acknowledge the hardships of innovation.


Respect the Course

At the risk of turning you off this blog post immediately – this was one of my big lessons that made everything ‘click’ the moment I grasped it. On Edinburgh’s Teaching Matters blog I’ve talked about the course design process that really drove this home for me – but at all stages of course design, from the early planning to the third year review, I have found it very useful to go back to why we want the course in the first place.

There are some corollary lessons to this one. If your reason for the course is ‘we want the money’ or ‘the king on high said make it so’, it becomes much harder to find a single cohesive thread that should tie the course together. One of the earliest courses I designed very much came from an edict on high (so high it was impossible to refuse), so the team and I discussed what was missing from elsewhere in the programme. That course became a place to teach the skills that we didn’t have the time to teach elsewhere, and I was very proud of it.  

Having a reason for what you’re doing helps you make the big decisions. How do you choose between two exercises if you need to compromise on timing? If you have a central motivation that drives you, you’ll find it a lot easier to distinguish between the two.

Accept Co-Creation

This one may be more personal. I hate co-creation. I mean, if anyone is listening I love it and team-work is one of my great strengths, but generally I hate it. I think this is particularly difficult when designing a course.

Have you ever given somebody else’s lecture? It’s difficult. Even giving your own lecture, a year later, can be difficult. Lectures are so much a product of you at that moment.

I am a huge defender, and a huge proponent, of the looser aspects of stagecraft that make a lecture. When you see students aren’t following and you stop and regroup – that’s good. When you get diverted away from the beaten path by a really interesting question – that’s good! Teaching adults should not be about sticking rigidly to a lesson plan that anyone could pick up and run with. It needs to be personal.

But with that comes the difficulty of accepting the other ‘personals’ in the room. As someone who prides herself on her communication, I am sometimes amazed at how explicit I must be when describing a teaching activity to someone else. And vice versa, I try to work hard to understand how someone else plans to teach something. This might be something we all need to work on, or just me, but there needs to be more acceptance of how courses arise out of everyone.

And by this I also mean accept the co-creation of the students. My somewhat looser philosophy of the class-plan has also been informed by the different classes I’ve seen. I am still not sure exactly why the same broad cohort, the same rough course, the same timetable slot, can all sometimes result in a wildly different group of students (there’s a study in this!).

It would be churlish to express this as ‘no course plan survives first contact with the enemy’ – but recognise that the course you design in your head will not be the same course you actually teach, because your students create that with you.


What Price Innovation?

This final point may end up a blog in itself. In a QAA event this week we talked briefly about student led teaching awards. There’s often a category of innovative teaching. When recruiting staff, I have pushed away from assessing their teaching in terms of its ‘innovation’. What counts as innovation? Is it if I haven’t seen it before? Or if the whole panel hasn’t seen it before?

This year we ran a brand new teaching exercise at the vet school which I don’t think is particularly innovative. I’ve been doing stuff like it, in other contexts, for years. But the students hadn’t seen anything like it, and they loved it. We’ll undoubtedly be talking a lot more about it in the next six months as we unpack our evaluation.

However, that evaluation will likely underplay the sick feeling I had that morning, my racing heart, and the sheer amount of work it took to get us there. Innovation takes a lot of work, and a lot of risk.

We ask for innovation when we teach, even though we greatly penalise those whose teaching ‘doesn’t work’. Therefore innovations must always be a sure thing. I feel very safe in my role, comparative to a lot of early career academics, and even I feel frightened when I see that sea of blank faces. Or worse, read that angry comment that the assessment was confusing, or there was no point to the teaching.

I have tried much more that hasn’t worked than has. I’m thinking particularly of an assessment this year where I tried to play about with how some things were weighted (partly due to the discussions we had in the co-creation phase – see ‘accept co-creation’ above, even when the urge arises to assign blame) and I am reverting back to tradition immediately.

I think there is often a push, particularly when you are in that early excitement of design, to do something eye-catching and startling. Think about yourself before you do this. You are the one who needs to run the course.

Sometimes innovation will be hard and painful but still needs to be done, perhaps because it’s the whole reason your course exists. That’s a battle you will need to have. So make those choices strategically.

And if you are in the position to support innovation, anything you can do to reinforce the idea that failure is not going to mean immediate unemployment would be greatly appreciated by those on short-term contracts who probably sacrificed a paper to try something new.

Respect the course. Accept co-creation. Acknowledge the hardships of innovation.

Productive Wastage

I’m often accused of being productive, which is not how I think of myself. Instead, I spend time on things I never think will be finished . . .

I’m often accused of being productive, which I find hysterical because I have had to dedicate a whole cupboard to my unfinished crafting projects and my list of ‘started’ papers is longer than my list of actual finished ones, never mind just the published ones.

Some colleagues and I were discussing productivity on Friday and one of my accusers said she’d read that the key to productivity was focussing on the process and not the end product. When I describe my work process I often say that I hate ‘kidding myself’. If I’m not going to do the thing that I’m supposed to do, I don’t sit staring at it; instead I do something else. For example, my NSS package happened when I was supposed to be addressing some reviewers’ comments for our assessment paper. And on Friday, when I was supposed to be addressing those comments again, I went home and played Assassin’s Creed because it had been a bit of a difficult week and the freedom to say “bugger it” is one of academia’s greatest perks. (Never underestimate the power of ‘bugger it’ when talking about productivity.) I don’t kid myself about the work I’m doing.

I have never considered my ‘don’t kid yourself’ motto in terms of ‘process’, but it might actually be a more useful way to conceptualise it. I like exploring different processes. I usually have a little chunk of something I’ve tried before – you want to know about ‘play’? Well, one Monday afternoon I randomly did a lit review for the beginning of a paper; here it is. You’d like to know how to make an R package? Well, one week I wrote a data package for fun. While there is an end product for these things, I don’t necessarily bother finishing them.

One of the greatest examples of this is NaNoWriMo. For the uninitiated, National Novel Writing Month takes place in November each year and encourages everyone to write a 50,000-word novel. I love NaNoWriMo and have taken part several times, and finished only once. NaNoWriMo does not care about the final product. A common solution to writer’s block is to have ninjas jump through the window, which will take at least ten pages to resolve before you can get back to whatever you were doing. To me, this is the ultimate test of process.

I’ve been idly playing with my own idea for 2018 and I decided to announce the name with this blog post – I’ll be writing “Love in the Time of Elk Cloner” this year, and I probably won’t finish, given that November has a lot of marking for me, but that’s not the point. The point is that I will work on those skills, and exercise my creative muscles, and next time someone needs something a bit left-field written, I’ll be ready.


So, academics and technical folks – this is my recommendation for being productive like me – waste more time on stuff that won’t be finished, especially ridiculous novels with barely thought out premises. If you want to give it a shot, you can start NaNoWriMo with me this year. Follow me over there.