So 2015 comes to an end. And what a busy year it’s been!
Last year the blog hit 100,000 views and recently it passed 200,000 which was quite a surprise. The top 5 posts are still pretty much unchanged.
5. Learning styles, facts and fictions.
At the start of the year I wrote that I intended to write only about 12 posts this year. I actually wrote seven on this blog (eight if you count this one), plus two guest posts, one for the EAP Archivist blog (here), and a couple for the Gender Equality blog Nicola and I run (here). I also wrote a piece for the EL Gazette (here). That’s about 12, right?
Last year I said I hoped to write more ‘try this, it works’ posts, but didn’t manage to. I also didn’t manage to produce anything from Mike’s wish list (sorry Mike). Yet again I had no offers from people to blog about something they have expertise in. So I haven’t been doing much on the blogging front.
That said, I did present at IATEFL, and Leicester hosted BALEAP the week after. I was kindly invited by Tyson to speak at TOSCON in Toronto, which was fantastic. I also gave my first ever keynote, at NATECLA, which was a great experience. I ended the year with a webinar for BESIG, which should be viewable online at some point. At all these events (and in the invitation EnglishUK gave me to their conference) I’ve been touched by people’s kindness: from Tyson’s boss, Bruce, and the other members of the Toronto team taking me out to dinner, and the NATECLA team who helped me prepare by letting me come to another conference first, to the members of BESIG who sat through a practice run of my webinar and gave me suggestions for making it better. There are a lot of really great people out there.
I also met (if only briefly) a number of people I’ve been tweeting at and reading for years, which was great. Far too many to mention, but check out the pictures.
In what turned out to be a prophetic statement, I wrote that IATEFL would be my ‘difficult second album’. I won’t dwell on this too much, as enough has probably already been written, but I will just say that it continues to fascinate me how polarised the reaction was. I still meet people (and talked to a number afterwards) who really liked the talk or found it interesting, and if you watch the video many in the audience seemed to enjoy it. Others did not, and that’s fine. What’s interesting, however, is the narrative that has developed that the talk was a complete disaster. I meet people nowadays who raise their eyebrows and suggest that ‘things didn’t go well this year, huh?’ but who were neither at the talk nor have seen it. Ho-hum.
Next year this blog will be very quiet. I’m going to be working on some other projects and so don’t have plans to blog very much. (One such project is a ‘learn Japanese’ podcast I’ve started with an old friend.) If you know anyone who is well informed on a subject and want to suggest them for a guest post (even if it’s yourself), please do; I’ll be happy to post it!
Thanks for reading and have a great 2016!
“Nobody made a greater mistake than he who did nothing because he could do only a little.” – Edmund Burke
How do you know that smoking causes cancer?
Easy, right? Scientists said so, and they did lots of research to prove it. But what research did they actually do, and how did they do it? If you’re anything like me, you probably have absolutely no idea.
In the 1950s two British doctors carried out a cohort study. This is when you look at a large group of people (40,000+ doctors in this case) over a period of time, record which conditions they suffer from, and then try to match those conditions with other factors. For example, those in the group getting lung cancer seemed overwhelmingly to be the ones who smoked. Bingo: we have a correlation.
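The arithmetic behind a cohort study is simple enough to sketch. Here’s a toy calculation (the numbers are invented for illustration, not Doll and Hill’s actual figures) showing how a ‘relative risk’ falls out of the group counts:

```python
# A cohort study compares disease rates between an exposed group (smokers)
# and an unexposed group (non-smokers). The numbers below are made up
# purely to show the arithmetic; they are not real study data.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Ratio of disease incidence in the exposed group to the unexposed group."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical follow-up results for a cohort of 40,000:
# 250 lung cancer cases among 25,000 smokers,
# 15 cases among 15,000 non-smokers.
rr = relative_risk(cases_exposed=250, n_exposed=25_000,
                   cases_unexposed=15, n_unexposed=15_000)
print(f"Relative risk: {rr:.1f}")  # smokers developed lung cancer ~10x as often
```

A ratio that large is exactly the kind of correlation that demands explanation, even before anyone has proved causation.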
I often wonder what would happen if this were education research posted on Twitter nowadays. My feeling is that as soon as it had been tweeted out, countless blogs would pop up to discredit it.
First, someone would point out that correlation doesn’t always mean causation. Next we would read that doctors shouldn’t be trusted because ‘remember what happened with Thalidomide’. Then someone else would casually note that there must be hundreds of other factors which could influence these people, like diet and lifestyle. They would then pull out the classic educational trump card that ‘every smoker is different’ and that what affects one wouldn’t necessarily affect another. Next someone would ask the authors to define exactly what they meant by ‘smoking’: are we talking pipes or roll-ups? And just how many cigarettes make one a smoker? Finally, the coup de grâce would be delivered with the comment that ‘my grandfather smoked 40 a day and lived till he was 100’.
Once the cloud of doubt was thick enough, everyone could go back to smoking, safe in the knowledge that the imperfections in this research would protect them from cancer.
The reasons used to dismiss research in education also exist in medical and psychological research, and yet somehow those fields seem to manage.
Take human beings, for example. Each has their own unique genetic code. The differences are so extreme that some people can drink a little alcohol and suffer quite high levels of liver damage while others drink lots and are fine. Others can smoke their whole lives without getting lung cancer. Other people can die if given penicillin.
Yet despite these differences, when I buy a packet of painkillers it says “take one per day for adults”, with no warnings about “unless you’re a middle-aged woman weighing between X and Y”. Somehow we can all just take one a day and ‘it works!’ But in education, context is king, and attempts to move the field forward can often be dismissed out of hand by this kind of low-level niggling.
The Nirvana fallacy is where ‘good’ is rejected because it isn’t ‘perfect’. It’s the enemy of ‘good enough’ or just ‘better than before’. And in education these kinds of improvements are exactly what we should be aiming for. There will never be a perfect method, but we should be asking whether there are ways of doing things that are a little better than how we’re doing them now.
The Nirvana fallacy is not only apparent in criticisms of research; it also makes an appearance in two other areas of TEFL: textbooks and testing. Textbooks often don’t represent real language use, have contrived levels and use ‘old-fashioned’ teaching methodology. They are often bland and designed by companies seeking to make a profit.
None of this is controversial, and there is plenty of research to back it up. But new textbooks come out all the time and are often better than the ones that preceded them. Yet here again ‘better than before’ is not seen as good enough, and instead there are many who seem to feel textbooks should be thrown out altogether unless they are perfect. Of course, ‘perfect’ here means applicable to every individual student’s needs regardless of context, first language, learning preferences and cultural beliefs. They would also use the teaching methodology preferred by whichever teacher was using them and contain language appropriate and authentic for every knowable context.
Tests too fall victim to the Nirvana fallacy. In all areas of education, anti-test sentiment seems high. Certainly tests can be powerful and life-changing, and bad tests are disastrous, but again: is that a reason to stop testing students, or is it an argument for better tests?
Testing is one of the most well-researched and evidence-driven fields in education. The test ‘form‘ a person sits is the very tip of a complex and expensive test-writing process that has been refined over decades. Tests also give us information about what a student is capable of, how well they’ve progressed and what they need to work on. Test writers and theorists go to incredible lengths to ensure tests are fair for students, and yet I know of hardly any teachers who have positive views about testing.
Bad research, bad textbooks and bad tests are all arguments for better research, better textbooks and better tests. It’s absolutely right that teachers should be critical of things that don’t work, and I will be there with them, pointing out sloppy research, crappy textbooks and poorly written tests. But should we dismiss the whole endeavour because it’s not perfect? Would we make similar arguments about other fields? Charity, for instance: ‘sure, this well may supply clean drinking water, but the hospitals are still in a terrible state and the government is unstable, so why bother?’
We can still aim for improvements while admitting that things are not perfect. As Michael Long notes:
The responsibility of professionals in any field is not to know the right answer, but to be able to defend recommendations in light of what is thought to be the right answer or the likeliest right answer (best practice), given what is known or thought to be known at the time. What is irresponsible is to throw up one’s hands and declare that no proposals should be made and defended until everything is known for sure (which will never happen).