Tuesday, March 4th (March forth, get it?) is National Grammar Day. This year, it also happens to be Pancake Tuesday, the day when we’re all supposed to confess our sins and stuff ourselves with pancakes before Lent.
I’m not so crazy about Lent, but grammar and pancakes? On the same day? You mean I can make pancakes in the shape of punctuation marks for…um…educational purposes?
So, here we go — five grammatical sins explained with pancakes:
1. Using “comprised of” when you mean “composed of”
This is my all-time biggest grammar peeve. Frequent readers and former lab-mates will no doubt have heard me rant about this before. When you are describing the parts that make up a whole (for instance, the ingredients in a pancake), you might say, “Pancakes are composed of eggs, milk, and (because it’s a Monday night and I’m lazy) pancake mix.” Or, you might say, “Pancakes comprise eggs, milk, and pancake mix.” Either of these would be correct.
You would not say, “Pancakes are comprised of eggs, milk, and pancake mix.”
The word comprise means “to include,” so when you say “comprised of,” it’s like saying “included of.” It’s gibberish. It’s painful. And I’m pretty sure a unicorn accidentally steps on a kitten somewhere on the internet every time you use it. You wouldn’t want any murderous unicorns on your conscience, would you? Good. So please stop the madness.
Just remember, the whole always comprises its parts:
2. Plural-possessive confusion
The other day I saw a sign that said, “WANTED: Auto’s dead or alive.” Of course, we all know they really meant “autos,” as in more than one automobile. So why the wayward apostrophe?
Here’s the rule:
- If you are making a plural (i.e. more than one of a thing), you don’t use the apostrophe. (e.g. three pancakes)
- If you are making a possessive (i.e. signifying that a thing belongs to someone), then you use the apostrophe. You do not use the apostrophe to signify a plural, unless you want to be stabbed with a fork, like this:
- Exception to the possessive rule: “its” — see #3 below for clarification.
3. Confusing “its” and “it’s”
“It’s” is a contraction of two words: “it is.” As in: “Look, it’s a pancake!”
“Its” is a possessive signifying that something belongs to “it.” As in: “You spread butter on its surface.”
4. Fewer vs. Less
We’ve all seen the express checkout sign at the supermarket that reads “15 items or less.” Now, I’m with Stephen Fry on this one — for the sake of keeping the express line moving, I can let this one go. But if you aren’t sure when to use “fewer” or “less,” here’s the rule:
- If you can count the thing, and you can reduce its quantity by countable amounts, then you use “fewer.” For example: “Two pancakes are fewer than three.”
- If you can’t count the thing, you use “less.” For example: “I have less pancake batter than I had before.”
5. Where punctuation goes relative to quotation marks
OK, I’ll admit, this one can be tricky, because the rules are sometimes different depending on where you are and what style you’re using. In the US and Canada, the following rules are most commonly used:
- Periods always go inside quotation marks.
- Commas always go inside quotation marks. (Revision 3/4/14: my original pancake comma was backwards! Eeep! All fixed!)
- Semicolons and colons always go outside quotation marks.
- Question marks & exclamation marks go inside quotation marks if they form part of the direct quote; otherwise, they go outside.
Okay, now here’s my confession: This post was supposed to be about seven deadly grammatical sins. But all these pancakes made me hungry. So I ate them. :-)
Yesterday, I introduced you to Scrivener, my go-to writing platform for Mac. Scrivener does two things extraordinarily well: it supports my messy, nonlinear way of collecting my thoughts, and it offers a distraction-free space where I can focus entirely on writing. Unfortunately, what it doesn’t offer (yet) is an iOS version.
On my iPad, I turn to Writing Kit. While it obviously can’t do everything Scrivener can do, it does have some advantages over other iOS apps I’ve tried. The heart of Writing Kit is a minimalist text editor that supports either Markdown (e.g. for web writing) or Fountain (e.g. for script/screenwriting). If you don’t know the syntax for either of these languages, that’s okay — it works just fine as a plain text editor. Personally, I like Markdown because it’s a pretty simple and intuitive way to format plain text without compromising readability, and it easily converts to HTML. (If you’ve ever tried to use a fully formatted Word file as the basis for a web document and dealt with all the random formatting issues that arise, you’ll understand the virtues of plain text with minimal markup.) There are in-app cheat sheets for both Fountain and Markdown, so it’s easy to get started if you’re new to either one.
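If you’ve never seen Markdown before, here’s a rough taste of what it looks like and how the HTML conversion works, as a minimal Python sketch. (The third-party markdown package is just my stand-in renderer for illustration, and the sample text is made up; Writing Kit does its own conversion.)

```python
# A minimal sketch of Markdown-to-HTML conversion, using the third-party
# Python "markdown" package (pip install markdown) as a stand-in renderer.
import markdown

source = """# Pancake notes

Markdown stays *readable* as plain text: emphasis, **bold text**,
and [links](https://example.com) are written out in the open.

- one list item
- another list item
"""

html = markdown.markdown(source)
print(html)
# Prints HTML along the lines of:
#   <h1>Pancake notes</h1>
#   <p>Markdown stays <em>readable</em> as plain text: ...</p>
#   <ul><li>one list item</li><li>another list item</li></ul>
```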
The major advantage of Writing Kit is its built-in web browser and embedded search engine, which make it easy to quickly look something up without having to leave the app and open Safari. From the browser, you can also insert links directly into your document, without having to copy and paste, which I’ve always found a bit awkward on an iPad. And if you’re an Instapaper, Read It Later, or Pocket user, you can send links directly to your reading lists (though I haven’t used this function myself).
I also like Writing Kit’s gesture-based text selection and undo/redo functions, as well as its support for standard keyboard shortcuts (although I have encountered a few glitches with shortcuts). It also supports a variety of export formats — Markdown, HTML (or HTML source code), rendered PDF, or a direct interface with external apps, such as Pages.
All in all, it’s a pretty decent option for iPad. But truth be told, I don’t rely on my iPad as much for writing as I did even a year ago. It was great when I was taking courses and doing most of my work away from home, but now I rarely carry my iPad with me. After I finished taking courses, I started making a conscious effort to reduce my screen time, which means I actually do most of my “mobile” writing on paper these days. Notebooks are lighter, never need recharging, and they support my stationery addiction. Turns out there’s really no app for that.
Since I’ve been on the subject of technology and writing, I thought it might be a good time to talk about some of the tools that I do use. I should preface this by saying that this is a subject that I tackle with some trepidation. At heart, I’m still very much a purist in the sense that writing is writing; it’s just putting words on a page, and at the end of the day, the only “app” that really makes a lick of difference is the one between my ears. There’s a part of me that couldn’t care less whether I’m writing with a pencil on a napkin or the latest offering from the App Store. But having said that, I do also appreciate that sometimes it’s easier to focus on writing when you’ve got tools that are compatible with your process.
For me, there are two things that are important for the way that I write. First, I’m a nonlinear thinker when it comes to composition. It’s a bit of a blind-man-with-an-elephant process — it might begin with a tusk here, a trunk there, a tail, an ear — I know they all fit together, but I need time to think about how and where everything connects. That’s a visual process for me; I like to spread things out, move them around, and look at the whole shape and structure of a piece before I start writing things down in a specific order. I’ve always hated linear or hierarchical outlining methods, and traditional word processing applications like Microsoft Word are linear by design. For this reason, I tend to do a lot of preliminary work in my head or on paper before I even start making notes on my computer. (If you’ve ever seen my office, you’ll know I am always scribbling things on Post-it notes and it looks random and disorganized — but for me it’s just a kind of “buffer” for new information until I decide where it fits and what to do with it.)
Second, when I do sit down to write, all I want to do is write. I don’t want to be bothered by auto-formatting, auto-correcting, or helpful suggestions from animated paperclips. I don’t want my software guessing at what it thinks I’m trying to write, or highlighting my spelling and grammar mistakes as I go. It’s hard enough for me to tame the editor in my own head without having my software second-guessing me at the same time. This is where the apps I use make the biggest difference for me — I now do the majority of my writing in a minimalist, full-screen, distraction-free mode, and it really helps me focus.
With these two principles in mind, I’ve experimented with a variety of different apps, for both desktop and mobile platforms. They all have their pros and cons, but now I’ve pretty much settled on two that work very well for me — one desktop, one mobile.
Today, I’ll introduce the desktop tool: Scrivener.
Okay, first things first. The biggest con of Scrivener is its price tag. At $45 US, it’s priced well ahead of a lot of the other apps out there, and for that reason it will not be at the top of everyone’s list of favourites. There’s also no iOS version (yet; more on this later), which is problematic for those of us who love our iThings. But Scrivener is an absolutely beautiful piece of software, built by a writer with the needs of writers in mind — endlessly customizable and refreshingly unobtrusive.
Scrivener is designed to operate as a kind of electronic writing studio, a single virtual space that brings together everything you need for a particular project. You can store all of your drafts, notes, and research (including external files such as images, PDFs, videos and webpages) within a single project “binder,” organized in whatever way best suits your needs. There are some preset binder templates that suggest ways to organize yourself based on the type of work you’re doing, but everything is customizable. For example, I manage everything related to this blog within one project binder in Scrivener, with folders for posts, links, images, and other notes, as well as quick-links to resources I use often.
It’s a dream for assembling elephants in the dark — Scrivener is the only software I’ve ever used that really allows me to work in a way that feels natural to me. Anything in your binder can be displayed and manipulated on a virtual corkboard, and files can be easily merged, split, or compiled in any arrangement you want. Even in composition-mode, you can seamlessly shuffle text and manage ideas as they emerge without ever breaking out of full-screen composition. At any point, you can select text from the current composition window and send it to another destination within your binder. Or, if you’re struck with an idea that relates to an entirely different project, you can use the virtual “scratch pad” to save the idea outside of your binder (or send it immediately to another project) — again, without any toggling between windows, and with minimal disruption to your train of thought. Got many different files you want to view or edit together without committing to a permanent merge? Scrivener will display them together, in one window or in a split-screen arrangement — whichever you prefer.
There are all kinds of handy tools for formatting, revision, tracking changes, importing/exporting different file formats, setting and tracking writing goals, speech recognition/dictation, automatic backups, and so on — it’s all there at your fingertips, but it’s never in your face when all you want to do is write. To me, that’s the real beauty of Scrivener: so much power, with so little interference.
I will confess — the transition to Scrivener wasn’t an overnight process. It’s a no-brainer to download and just start writing, but if you’re used to working with traditional word processing software, Scrivener is a very different experience. I had to get out of the habit of expecting Scrivener to behave like a word processor before I really began to appreciate how to use it most effectively. But now? Scrivener lives in my dock where Word used to be, and I don’t feel like anything’s missing.
I should also note that Scrivener would be great for academic writing. It’s compatible with bibliographic management software, and supports both LaTeX and MathType. I never used Scrivener to its full potential for academic work, in part because I had been a longtime user of Reference Manager (Windows only), and at the time, Scrivener was only available for Mac. There is now a Windows version available, but since I left the lab, cross-platform compatibility hasn’t been an issue for me.
There is an iOS version in development, but no firm release date at this stage. In the meantime, Scrivener is designed to sync with Simplenote or Index Card for iOS — I don’t use either of those apps, but it’s easy enough to import files from other apps (or sync via Dropbox) that this isn’t a crippling flaw, particularly in light of Scrivener’s other advantages.
More on my favourite iPad app tomorrow.
Today, a friend posted a link to this article in the New York Times describing new software designed to automate essay grading in large university classes. It’s not the first I’ve heard of this technology, and the article in the Times is a year old — hardly breaking news, but I remain disturbed nonetheless by the notion of machine-based assessments of students’ writing.
Proponents of the technology claim that the software produces results similar to human graders, and even, as one researcher claims in the Times piece, exceeds the capacity of human graders to provide feedback to large classes. I’m skeptical, but not for all the reasons you might expect. When it comes to speed, reliability, and the ability to analyze writing for common structural and grammatical issues, I’ll admit that the software has potential. Not long ago, I spent some time playing around with a piece of software called SWAN (an acronym for Scientific Writing Assistant), and I was rather surprised and impressed with how advanced the analysis was, even with difficult scientific text. While I wouldn’t recommend it as a substitute for human feedback, it does a thorough enough job that I could see it being a useful editorial tool, particularly for those writing in English as a second language.
But as an assessment tool for student writers? I have a big problem with that.
First — and foremost — I think teaching and learning is fundamentally a human interaction, and nowhere is this more true than in writing and communication. Writing is about human connection, whether it’s between a writer and a reader, a speaker and an audience, or a student and a teacher. A machine might be able to provide reliable feedback about the structure and content of a piece of writing, but there’s no connection. A machine can’t judge whether a piece resonates with its audience. Do we really want to be teaching a generation of writers how to communicate effectively with empty boxes?
Second, the demand for automated grading stems from ever-increasing time pressure on instructors. As someone who has spent many hours reading, grading, and providing feedback on student writing, I understand that pressure all too well. But at the same time, I’m troubled by the implication that this isn’t a valuable investment of an instructor’s time. The idea that an instructor’s time is better spent elsewhere raises the question: what’s more important than providing students with thoughtful, meaningful, and constructive feedback?
Deep down, I have the same complaint about automated essay grading as I do about the use of machine-gradable multiple choice exams for assessing higher-order learning outcomes. It places so much of the emphasis on the assessment and the assessment tool, but assessment is not the point of education. Learning is. When we reduce the process to inputs and outputs from a machine, I wonder if we aren’t missing the point.
Can a machine recognize a “teachable moment”?
There’s so much more to becoming a writer than mastering the black and white (and sometimes grey) issues of how words work. The development of one’s own style, judgement, and intuitive grasp of the language — our ability to discern for ourselves the difference between the “right word” and the “almost right word” — requires much more thoughtful feedback than even a sophisticated machine can provide.
When I decided to go back to school to study professional writing, I already had a firm grasp of the language. I’d have passed any machine’s assessment with flying colours. If I had known I was going into a program where my work was going to be assessed by a machine, I wouldn’t have bothered. Even with real instructors grading my work, most of the courses posed little challenge.
What made the entire program worth my time — what made a real difference to my writing — were the one or two instructors who looked beyond my basic mastery of the language and asked, not “Is this an A?” but “Is this the best you’re capable of?” I never worried about getting good grades — what kept me up at night was the spectre of a handwritten margin note: “Not good enough for you.”
There’s a world of difference between the ability to assign a student a grade based on some arbitrary standard, and the ability to judge a student as an individual and dare them to do better, not for a grade, but for themselves. That’s where real teaching and learning happens.
Technology has a place in the classroom, but this isn’t it.
Notwithstanding my previous objections to New Year’s resolutions, I did start the year off with the intention to blog more frequently. Five weeks into 2014, we can see how well that’s going so far, but I do know one thing for sure: it’s time to play with matches.
By playing with matches, of course, I don’t mean in the “Oops, I set the house ablaze” way. I mean in the daringly creative, try-anything-once, hakuna matata, “slimy, yet satisfying” kind of way.
When I started this blog as part of a class assignment back in 2011, it was meant to be fun — a light-hearted look at life in the lab — something I needed as much as anyone at the time. Even with one foot out of the lab, academic matters still flowed naturally into my writing. But now, almost two years removed, it’s not that I can’t connect with life as a scientist anymore; it’s that I neither need nor want to. As a writer, I find myself craving something more than to see and write about the world through a scientist’s eyes. And that’s been my struggle with the blog of late — what is it about, if it’s not about life as a scientist?
It’s a question that’s prompted me to get back to basics, back to the things that made writing fun before it was an academic and professional obligation. I’ve always found a kind of joy in writing, even academically, but as the jargon and acronyms and pressure to publish have made their way out of my life, that ticklish sense of writing as play has slowly crept back in. And that means anything could happen. The last time I tossed aside the rule book, I landed in a workshop on “Writing as Play, Discovery, and Invention,” where, much to my immediate distress, our first assignment was to write surrealist poetry.
I don’t write poetry.
But I (reluctantly) wrote poetry:
Muse — care and feeding of:
If you must, then hold it back, but only to let it grow. Trim it like hedges in the autumn, but only so it can blossom in the spring. Wear it like a trusty pair of shoes as you go out and soak up the world, otherwise you’ll outgrow it while it languishes in the closet, wanting, ignored. If you expect a waterfall, crashing down upon the rocks, you’ll hear the melancholy song of a cello warming a dark room. Expect peace, and you’ll find yourself wrung out like so many soggy towels left unfolded on the bathroom floor. Expect gratitude, and it will take you on a journey filled with deception, half-truths, and long-buried secrets. But expect a friend, and it will hoist you up on its shoulders and carry you up the mountain, to a place where the air is cold and too thin to breathe, but where a whole world lies before you, and the view is spectacular.
This time, I’m hoping to steer clear of poetry, surrealist or otherwise. But I am still veering off the beaten path — tonight, I begin round one of NYC Midnight’s Short Story Competition. It’s a tournament-style competition. Writers are assigned to heats, given a random prompt and a limited amount of time to submit a story, with the prompts changing and the time limits decreasing with each subsequent round of competition.
Short stories aren’t my usual thing either, but…I now have seven days to write an action/adventure story about a newly discovered animal, featuring a pilot as a principal character.
Like I said, anything could happen…
I have issues with New Year’s resolutions. For starters, the new year doesn’t, by definition, stay “new” for very long. By my estimation, the new year only lasts until about the third week of January, when residual holiday cheer and New Year’s resolve are subsumed by the grim reality of at least two more months of bitter cold and darkness. Second, of all days to make a fresh start, why January 1st, the universal day of hangovers, sleep deprivation and Honeymooners marathons?
I’m also annoyed with the commercials. We spend the entire holiday season eating and drinking in excess, while simultaneously being bombarded with ads for exercise machines, fad diets, gym memberships, and the countless other products that seem to crawl out of the woodwork in time to cash in on our newfound but short-lived New Year’s willpower.
For the last few years, I’ve done little more than go through the motions of setting resolutions, which is to say, I’ve sat down with a friend and we’ve dreamt up outlandish things for each other to do that we know neither of us will actually get around to doing. None are out of the realm of possibility — I could go on a moonlight picnic or I could get my handwriting analyzed, but I could also just as easily make myself a peanut butter and tofu sandwich at midday and read my horoscope.
Stranger things have happened, and without any resolving on my part.
The other day I thought — rather than resolving to do a bunch of stuff I probably won’t do, why not celebrate all the random things that did happen that I couldn’t have planned for? Isn’t spontaneity the spice of life?
So here, in no particular order, are some things I did this year that I probably would never have resolved to do — and probably wouldn’t have done if I had:
1. I made mittens. It doesn’t sound like much, but considering the Great Costume-sewing Adventure of 2012, in which I spent 10 days hand-sewing a Snoopy costume that I completely improvised despite having no practical sewing experience, I felt redeemed by my little mitten project. It only took three hours, two attempts, and one incredibly patient textiles student as my guide, but look Ma, two hands!
2. I drank iceberg beer, got “Screeched-in”, and attended my first fashion show — in rural Newfoundland. I also got to spend three days with some pretty awesome people, learning about topics we care about, with no acronyms, jargon, biochemical pathways, or crimes against PowerPoint. Long may yer big jib draw!
3. I convocated (again). Or is it “convoked”? In either case, I guess the third time’s the charm — I won the Dean’s Medal for academic excellence in my program. And while GPA-based accolades make me a little itchy at this stage of my career (I just don’t think that GPA means that much outside of the classroom), I must admit I’m glad I had a reason to attend the ceremony. A lot happened in the three years between my MSc and my professional writing diploma, and academic recognition aside, it was the right time for a bit of symbolic closure on the whole transition. Onward!
4. I stopped for geese. And a wayward cow crossing the road in Idaho. Just one footnote on all of the great times shared with friends this year. I don’t often get the opportunity to say so without sounding mushy and sentimental, but I have amazing friends — my life wouldn’t be what it is without them — they’re like a family unto themselves.
5. I learned to juggle (sort of). It would be more accurate to say that I learned a strategy for learning how to juggle. Whether any actual juggling takes place depends on how coordinated I can manage to be on any given day — but that hardly matters — it’s still fun. This is what happens when you spend a weekend hanging out in a freezing tent and the juggling club happens to show up. (And no, it wasn’t a circus tent!)
6. I took up running, and kind of learned to like it. Which is monumental, considering how much I hated it when I started. I still cannot claim to be a very strong runner, but the grinding monotony that I detested so much in the beginning has slowly given way to a more satisfying, even peaceful routine.
7. I watched a campfire debate that looked a lot like this:
The final score? PhD: 1, Person who “saw a program”: 0. Sitting by the fire watching the sparks fly: priceless.
8. I didn’t publish…and I didn’t perish. This year, there were no impact factors, no citations, no arbitrary and nebulous measures of “productivity” — and it was probably the most creative, productive, and meaningful year of my career to date. Now more than ever, I wonder — what could science be like if everyone focused less on publishing and more on creativity? What could happen if everyone had just a little more time to slow down and think?
9. I gained a new appreciation for mathematics. This might come as a surprise from a former scientist, but I’m not really mathematically minded, at least not at a theoretical level. Calculus was my Achilles heel as an undergrad — so much so that I once summarized a lecture in physical chemistry as “blah, blah, magic, answer!” No amount of professorial wand-waving helped — it wasn’t until I got deeper into biochemistry and into the application of the concepts that they made any sense at all. Consequently, I went through my entire scientific career viewing math merely as a tool, until this year, when a first-year undergrad explained fractals in a way that made the math inherently interesting, accessible, and really quite stunningly beautiful. Cool.
10. I took an extended holiday. Well, three weeks may or may not qualify as “extended” depending on your definition, but it’s the longest voluntary, non-conference related, e-mail and guilt-free holiday I’ve ever taken. Two weeks in, I am definitely more than 66.67% relaxed and refreshed. :-) Maybe Santa does exist!
Happy holidays, everyone! Onward and upward in 2014!
Of all the adjectives that come to mind when you think of academic or scientific writing, there’s one I’d bet sinks to the bottom of the list regardless of the audience. You might think a paper is unintelligible, incomprehensible, jargon-filled, complicated, detailed, sometimes exciting, often boring, but certainly not funny. I mean, science is serious business, and no researcher in his or her right mind would dare compromise citations for laughs.
Or would they?
Every year the scientific community waits on the edge of its collective seat for the announcement of the Ig Nobel Prizes, celebrating real research that “makes people laugh, and then think.” The lucky winners become the laughing stock for a while, but the giggling is often followed by a sober realization: “Someone actually studied that? Seriously?”
Part of what makes the Ig Nobels so unabashedly funny is that often, they start out with a completely serious question. Does a person’s posture affect their estimation of an object’s size? (Apparently so, especially if you lean slightly to the left.) Can a doctor accidentally make your butt explode while performing a colonoscopy? (Turns out it’s rare, but I’m sure…um…relieved that they did the research.)
What’s a little harder to find in the scientific literature are examples of researchers being intentionally cheeky. But such examples do exist, and they call out for a Top 10 List.
Here it goes:
10. “The unsuccessful self-treatment of a case of writer’s block.” (D. Upper, Journal of Applied Behavior Analysis, 1974)
The reviewer’s comment on this paper sums it up best: “Clearly it is the most concise manuscript I have ever seen — yet it contains sufficient detail to allow other investigators to replicate Dr. Upper’s failure.”
Which is precisely what other investigators set out to do. That it took 33 years and a research grant of $2.50 underscores the scope of the problem. Writing, it seems, is hard.
Many papers tackling such a difficult problem are lost to researchers outside of the discipline, but the beauty of Upper’s pivotal work is that it can be readily applied to different fields. Consider its recent application to a study in molecular biology:
Only time will tell whether science will have the last…or first…word on this mystifying phenomenon.
9. “Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?” (Berry et al., Journal of Physics A, 2011)
What at first seems like just another paper full of jargon about faster-than-light particles reduces to a simple and elegant conclusion: “Probably not.”
8. “Synthesis of Anthropomorphic Molecules: The NanoPutians” (Chanteau & Tour, Journal of Organic Chemistry, 2003)
They drew stick figures. With molecules!
Ever vigilant, however, of illustrating their nano-peeps in a way that would not be representative of their equilibrium state, the authors add this caveat: “…the liberties we take with the nonequilibrium conformational drawings are only minor when representing the main structural portions; conformational license is only used, in some cases, with the NanoPutians’ head dressings.”
Science…solving bad hair days, one molecule at a time.
7. “Trajectory of a falling Batman” (Marshall et al., Journal of Physics Special Topics, 2011)
This study is exactly what it sounds like. They analyzed the path of a falling Batman to determine whether our beloved caped crusader could indeed survive on a Batwing and a prayer. Their grim conclusion? Splat.
Or, as the authors put it (albeit far less eloquently in my opinion): “Clearly gliding using a batcape is not a safe way to travel, unless a method to rapidly slow down is used, such as a parachute.” Noted.
6. “The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute” (Lim et al., British Medical Journal, 2005)
Don’t you just hate it when your co-workers steal all the teaspoons? If you’re a real scientist, you don’t get mad. You get a publication!
5. “Chicken Chicken Chicken: Chicken Chicken” (Zongker, Annals of Improbable Research, 2006)
There are few things more satisfying in science than publishing a really important paper, and then being asked to present it at a really important conference. This is one of those truly remarkable papers that must be seen to be appreciated.
Now, you might think that a study such as “chicken chicken chicken” could have little application outside the poultry world. But you’d be wrong. Indeed, I know at least a few people who have adopted “chicken chicken” as the universal code word for a scientific talk that has gone on way too long. So here’s my public service announcement to researchers everywhere: if you’re ever speaking and the audience starts muttering “chicken chicken” to themselves, they’re not hungry — they want you to stop.
4. “Santa and the moon” (Barthel, CAPjournal, 2012)
Do you remember when you were a kid on Christmas morning, and how you carefully examined the wrapping paper for its scientific accuracy before meticulously unwrapping the toy you’d been waiting for all year?
I totally thought so.
This one belongs in the category of “You might be a scientist if…”
Parents: If your child ever exhibits signs of trauma from the inaccurate portrayal of moon phases on wrapping paper, it might be time to have a serious talk about graduate school. Might I recommend this book at bedtime?
3. “Absolute dating of deep-sea cores by the Pa231/Th230 method and accumulation rates: a reply” (Journal of Geology, 1963)
Sometimes, despite their best efforts, scientists make mistakes. Luckily, when this happens, there are usually other scientists happy to point it out. Such was the case with this paper, in which some scientists pointed out an error in the original paper, and the authors simply replied, “Oh well, nobody is perfect.” Gotta admire their honesty.
Who says romance is dead? Nothing quite says “publish or cherish” like a marriage proposal embedded in a scientific paper!
1. 20 more hilarious scientific papers in five minutes.
Thanks to Seriously, Science? (or, the artists formerly known as NCBI ROFL).
Having spent the better part of my research career working in the field of plant biotechnology, I’m no stranger to scientific controversy. As a researcher, I’ve met my fair share of people — including other scientists — who hold strong opinions about genetically modified food: about whether it’s safe for humans and the environment, whether it’s really needed to address global food security, whether it should be labeled, who really benefits (other than biotech companies and patent attorneys), and even why, as a biotechnologist, I should burn in Hell.
I’m not easily rattled by a scientific debate.
I’m well-versed in the arguments both for and against, and my personal opinions on the issue are neither black nor white. I’ve spent much of my career immersed in the scientific evidence and the best I can settle on personally is an evolving shade of grey — biotechnology is not inherently good or bad, neither panacea nor Pandora’s Box. When it comes to a debate, as a scientist, my default setting is to keep calm and consider the evidence.
But last weekend I was sitting around a campfire with a couple of my scientist friends when an acquaintance (not a scientist) started challenging us on various scientific topics — from genetically modified foods to factory farming to Angelina Jolie’s double mastectomy. It was one of the most exasperating evenings I’ve ever spent discussing science. It was also perhaps one of the most educational in what it highlighted about the challenges of modern science communication.
When I started writing about science, I had an editor who encouraged me to focus on the art of translation — turning technical jargon into something accessible to a reader he fondly referred to as “Joe Lunchbucket.” Joe Lunchbucket was the reader who browsed the paper over a ham sandwich and would only read a science article if it was wrapped with a shiny, friendly bow that said “Gee whiz, this is cool, you gotta see this!” In time, however, I learned that translation wasn’t enough. It’s hard to get people excited about science for science’s sake — there has to be a compelling story, something for the audience to connect with above the science itself. After years of being trained as a researcher to strip the humanity out of my writing, I had to learn to put it back in, to stop “writing about science” and start writing about people and the process of discovery.
Joe Lunchbucket made me a better writer, but now I think he’s a relic of a simpler journalistic era. Science communication is far messier today than even when I first dipped a toe in the water a decade ago. It’s no longer fundamentally about telling a good story or making sense of the jargon, although those skills are still vital. Now it’s as much about rising above the noise, fighting fiction with fact, battling stubborn misperceptions, and countering fear-mongering emotional arguments with…logic?
If only it were that easy…
Case in point: our opponent in last weekend’s campfire debates was a middle-aged woman, raised in a small rural town, with no post-secondary education. What she knows about science, she’s learned from decades-old high school lessons and the internet. She’s health-conscious and environmentally conscious, and I’ll give her the benefit of the doubt that her motivation to be informed about issues she cares about is sincere. But this is a person who still believes, despite her “research,” that a woman shouldn’t run because her uterus might fall out, that commercial chickens are no longer raised with legs, and that wheat gluten is an evil product of genetic engineering.
How do you even begin to argue with a person who reacts to information without having even a basic understanding of it? How do you dispel misinformation about science with a person whose sole connection to the subject is grounded in emotion rather than logic? How can you compete calmly against sensationalist drivel?
I can tell all the stories I want, I can counter with facts based on peer-reviewed literature, I can even empathize with the difficulty of distinguishing the good information on the internet from the bad. But who am I but a scientist? I’m one of the people who once made a living advancing the very technology she fears. Why would my knowledge of the subject come as any comfort to her? What have I done to earn her trust?
Granted, it also doesn’t help that stereotypes persist about scientists being cold and unfeeling and distant, or that we have a government that wants its scientists to cater to commercial interests, undermining our ability to represent ourselves as objective, trustworthy sources of information. But what have we done as researchers, to help ourselves?
I think we have done science a disservice in separating the process of doing science from the process of communicating it to the public. As researchers, our professional obligation is to publish our work in peer-reviewed academic journals, to share it with other researchers. We write in a technical language and publish in a medium that largely excludes the public. We can partially address the issue of access by insisting that publicly-funded research be made available free to the public, and I’m a strong advocate for such open access initiatives. But access alone isn’t enough. It doesn’t do anything to address the issue of comprehension.
To support access without comprehension only opens the door to the further spread of misinformation — perpetuated by well-intentioned (and sometimes not so well-intentioned) people who understand just enough of the scientific detail to get it wrong. Traditionally, we’ve relied on professional journalists to get the story right — sometimes with mixed results. Now we have a whole slew of advocacy groups and “citizen journalists” who flood the internet with their own interpretations of the science. The loss of journalistic gatekeepers isn’t necessarily a bad thing as far as public engagement is concerned, but as researchers, it’s clear that we can’t continue to rely on others to get the story right. We can’t shut ourselves out of the public conversation and then expect our voices to be respected. Now more than ever, researchers need to be proactive in engaging directly with the public.
We’re also working in a political climate that is becoming increasingly hostile to basic research. Researchers are under significant pressure from the government and funding bodies to deliver short-term economic outcomes — to focus on applied, industry-friendly research with commercial applications. So far, the pushback from researchers seems to focus on the threat to academic freedom, but does anyone outside of academia really understand what that means? Will Joe Lunchbucket have any sympathy for a bunch of tenured Ivory Tower white coats complaining that they can no longer do whatever they want? The threat to academic freedom is very real and very serious, but it’s not an argument that’s going to resonate with people who don’t really understand what researchers do. And as long as we’re relying on other messengers to explain the value of research to the public, we’re hardly in a position to complain that the public doesn’t comprehend the severity of the situation.
When I first got involved in science writing and outreach as an undergraduate student, one of my professors responded with an air of disgust: “What self-respecting scientist does that?”
At the time, it was enough to temporarily shake my conviction. But a decade later, I wonder what self-respecting scientist can afford not to?
In my double life as a writer and a scientist, I’ve always had an interest in writing about science in a way that is accessible to a general audience. As an undergrad, I remember coming out of some of my classes feeling the way you do when you have a secret you can’t wait to tell somebody, a mental itch just begging to be scratched. Only science wasn’t secret. It was right there, all around us, all the time. So whenever I learned something really cool, I couldn’t help but immediately write about it to my closest non-scientist friend, librarian, and confidant. She’d send me relationship advice, and I’d reply with something like “Have you ever seen a starfish flip over?”
This might explain why I still live alone with my cat and an autographed calendar of half-naked firemen. But I digress.
My point is, this was before Facebook and YouTube and Wikipedia. I couldn’t just send a link to what I wanted to share; I had to actually try to explain it.
Within a year, I got a job writing profiles of researchers for university publications that were targeted to the general public. My job was to go talk to scientists, find out what they did, and figure out a way to share it with people who might only have a passing interest in the topic. I had to wade through the jargon and simplify the science without “dumbing it down” (a phrase, by the way, that I detest for its inherent condescension). It was a challenging task, made all the more difficult by the researchers’ frequent inability to express what they did in simple terms.
It got no easier when I became a researcher myself, and had to learn how to communicate just as effectively with the jargon as I did without it. The jargon was both a necessity and a barrier to effective communication, a double-edged blade that I’ve spent the better part of my career trying to master, with no shortage of red ink spilled in the effort. Along the way, I’ve tried to share what I’ve learned with my colleagues and students, so that they too might become more effective jugglers of jargon and the public understanding of science.
But it’s not just about effective communication. One of the benefits of being forced to explain a difficult concept in simple terms is that you must change your perspective, and in the process of seeing something again with a beginner’s eyes, you can come to a deeper understanding of the subject yourself. You stop taking the jargon at face value and begin asking yourself “what does this really mean?”
Recently, this xkcd comic prompted the creation of the Up-Goer Five Text Editor, which challenges users to explain a difficult concept using only the “ten hundred” (thousand isn’t on the list) most commonly used words. Researchers in several disciplines have already jumped at the challenge and tried to describe what they do using this limited vocabulary.
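The mechanics behind the editor are simple enough to sketch for yourself. Here’s a minimal, hypothetical version in Python: it assumes a plain-text file of allowed words, one per line, and the real editor’s word list and matching rules surely differ.

```python
# A minimal, hypothetical Up-Goer-Five-style checker: flag every word that
# isn't on a list of the most commonly used English words.
# "common_words.txt" (one word per line) is a stand-in file name, not the
# editor's actual list; contractions and plurals are handled naively here.
import re

def load_allowed(path="common_words.txt"):
    """Read the allowed-word list into a set for fast lookup."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_uncommon(text, allowed):
    """Return the words in text that aren't on the allowed list."""
    words = re.findall(r"[A-Za-z']+", text)
    return sorted({w.lower() for w in words} - allowed)

if __name__ == "__main__":
    allowed = load_allowed()
    sample = "I try to understand how living things work inside."
    print(flag_uncommon(sample, allowed))  # anything printed needs rephrasing
```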
Of course, I had to try it.
The scope of the challenge was evident as soon as I tried describing oilseed biotechnology without being able to use the words “plant,” “seed,” “oil,” or “fat.” Couldn’t use “science” or “lab” or “research” either. Even “affect” and “effect” are off the list, which meant I would have to get over my scientific squeamishness about saying one thing “causes” another.
Here is what I came up with:
I try to understand how living things work inside. Lots of things happen in our bodies all the time, even when we’re sleeping, which is pretty amazing! But we don’t understand very much of it. Knowing about what happens inside us (in our cells) is important because then we can learn what is good or bad for our bodies. Sometimes we hear about food that is good or bad for us. What does that mean? Our food changes what happens in the cells of our bodies, and it can make us feel good, or it can make us sick. If we know what kind of food makes us feel good, we can try to make more foods that keep us feeling good, or fix the things that make us sick.
How do we do that?
Our food also comes from living things. Like us, the living things that we use for food also have stuff happening inside their cells. This can cause them to be good or bad for us when we eat them. By changing what happens in the cells of the living things we plan to eat, we can make more and better food. This will help us to not get sick. Sometimes, we can also change the way the living things we eat are grown so that they can grow in places where they would usually die (like places where it is really dry and there is no rain). This helps people grow food in places where they don’t already have enough food.
Now, I’m not sure I would suggest that we all start talking this way — after all, we’ve evolved beyond a vocabulary of “ten hundred” for a reason. But aside from the challenge of conveying a difficult concept simply, this exercise also really makes you aware of the words that aren’t nearly as common as you’d think. Perhaps it will make you think twice about the words you do choose the next time you have to explain something in plain language, when the question is not “can you use the word?”, but “should you?”
There has been a lot of discussion over the past year about the state of academic publishing — from the exploitive business models of the publishing heavyweights, to the crisis this creates for libraries, and the emerging case for open access. Longtime readers of this blog will know that I am a strong proponent of open access initiatives, and that I believe this kind of public debate on the issue is long overdue.
But I also wonder if it misses the point.
You see, we’re still not addressing the issue of what makes researchers beholden to publishers in the first place. The publishing models may be changing, but the culture that drives the explosive growth of academic publishing has not. So while open access promises to bring down one barrier to scientific communication, we continue to ignore what is potentially a much more formidable and serious challenge — the sheer volume of papers being published.
Although it’s difficult to determine accurately, it has been estimated that there are more than 50 million scholarly papers in existence right now, with more than 1.5 million new articles being added each year (and steadily climbing). That’s about 3 papers being published every minute. And the sad reality is that the vast majority of these papers will never be cited.
This raises an obvious question: if the purpose of scholarly publishing is to communicate our results, but the majority of papers go unread or uncited, are we actually communicating? More to the point, why do we continue to “publish-or-perish” if most of our publishing efforts are for naught?
The truth is, it’s not primarily about communication anymore; it’s about satisfying institutional demands that treat publications as a proxy for research excellence. That idea took root over a century ago, when William Rainey Harper, then president of the University of Chicago, first proposed that advancement in academic rank and salary should be tied “more largely” to research productivity. The policy was quickly adopted by other research-intensive universities. By the 1930s and ’40s, the phrase “publish or perish” had been coined to describe the pressure on academics to publish, and today it’s practically synonymous with the academic lifestyle. Scholarly publications are now used not only to measure individual performance, but also to help determine institutional rankings and to measure national performance on innovation.
The pressure to publish now comes not from a demand by the consumers of knowledge (i.e. other scholars), but from a demand by the merchants of knowledge — those whose profits and prestige are linked directly to growth in research publications.
What about the rest of us?
Short-term career advancement aside, the publish-or-perish culture doesn’t really benefit individual researchers, who invest much of their time and energy either in producing papers or in reviewing them — a huge waste if most of those articles fail to reach their audience.
It’s not particularly beneficial to students, since researchers are often forced to prioritize research output over teaching. For grad students, publish-or-perish means they spend much of their time learning and conforming to an academic career structure that will not serve the 70-80% of them who will not have access to stable academic careers no matter how much they publish. That’s another post entirely.
It doesn’t benefit scientific integrity, since the unrestrained growth in publications places more and more stress on the system of peer review. The pressure to publish is so intense that researchers will resubmit their rejected manuscripts multiple times, working their way down from the top-tier journals to the electronic slush piles with lower standards of peer review. Each published paper, then, often represents multiple rounds of peer review, each a burden on the system. And even then, it’s still possible for researchers to simply pay bottom-feeder journals to get a paper out, however bad it may be. Bad research in turn places its own burden on the system, as precious resources are invested in a futile effort to validate it.
In recent years we’ve seen everything from rampant plagiarism and duplication of publications, to data manipulation or fabrication, to elaborate “citation cartels” used to artificially inflate the prestige of certain journals. Just last week, a researcher was caught using his own email address under a fake name so that he could “peer-review” his own papers!
On one hand, we might be comforted by the fact that the peer-review system is catching these cases of obvious misconduct, but on the other hand, shouldn’t we be more alarmed about the institutional pressures that are leading to the behaviour in the first place?
And what of the higher-profile retractions? What does it say about the effectiveness of our current system of peer review when the highest-profile journals (Nature, Science, Cell, etc.) also tend to have the highest rates of retraction? True, these journals tend to be on the cutting edge, and we know it can cut both ways sometimes. These journals also have large audiences: more eyes mean more potential for catching a mistake. But these are also the journals that tend to get the most press, so high-profile retractions also have the potential to damage public trust in science. When researchers chase publication in these journals as a career-making move, rather than on the strength of their data, it can only harm the collective image of the research community.
Public trust isn’t the only thing at stake. Scholarly communication is, by its highly technical nature, exclusive to a particular audience. Even with the expansion of open access publishing, which would make papers accessible to the public, there is still a significant barrier for the public in understanding the original research. And yet, never before has the public understanding and acceptance of science been so important. The big issues of the day — energy, sustainability, climate — require a certain acceptance of scientific information. But outreach activities often don’t count for much in faculty evaluations — so researchers tend to remain focused on writing papers for other scholars. In effect, the publish-or-perish system places more value on scholarly papers that go unread than on the myriad ways that researchers could otherwise be connecting with a non-academic audience to inform the public debate.
This is perhaps the most tragic waste of our scientific knowledge, and the point at which I begin to seriously question the ethical footing of the publish-or-perish culture. In an era of economic instability and fiscal restraint, when public access to scientific knowledge is key to advancing the most pressing social, economic, and political issues of the day, is it ethical for publicly-funded institutions to continue demanding that researchers prioritize scholarly publications (largely for the sake of prestige), at the expense of actually reaching the audience that can most benefit from the research?
What’s the alternative, and how do we make it happen?