Category Archives: education

How my favourite poet introduced me to my favourite editor (or…vice versa?)

April is National Poetry Month. As many of you know, I’m not much of a poet, and not much more of a reader of poetry. It seems I struggle with everything that makes poetry, well, poetic. There’s either too much freedom in form or not enough, counting syllables one minute, dropping capitals and punctuation the next. As in art, there is a fine line in poetry between what counts as a masterpiece and what looks like someone kicked over a paint can. I’ve never really gotten it.

And yet, the most memorable book of my childhood — not my first book, but the book I can still largely recite from memory — was a book of poetry. My first-grade teacher used to read every day from Shel Silverstein’s Where the Sidewalk Ends. I still have my copy, faded and worn and much loved, with my name etched in practised elementary school scrawl on the inside front cover. The book is home to some of my most beloved literary characters: Sarah Cynthia Sylvia Stout (who would NOT take the garbage out!), Ickle Me, Pickle Me, and Tickle Me Too (who went for a ride in a flying shoe), and dear old Reginald Clark, who was afraid of the dark, and begged each day, “Please do not close this book on me.” (which of course we did, every day before lunchtime…and I still do, as a rule, anytime I feel the need to look up a Recipe for a Hippopotamus Sandwich.)

It’s a book that’s lost little of its charm over the years. Indeed, it was only a few years ago that I discovered the book’s dedication, “For Ursula,” referred to the legendary Harper & Row editor, Ursula Nordstrom, the woman who, over the span of a fifty-year career, revolutionized children’s publishing. Ursula Nordstrom not only nurtured some of the greatest creative talents in 20th-century children’s literature (e.g. Maurice Sendak, E.B. White, Margaret Wise Brown – to name just a few!), but she also dared to publish what she called “good books for bad children,” shattering the strict moralistic standard of children’s books in favour of books that catered to the real emotions and imaginations of children.

Nordstrom began her career at Harper & Brothers in 1931, when she accepted a position as a clerk in the College Textbooks Department. Although Nordstrom expressed an early interest in writing, she did not join Harper with any immediate ambition to become an editor. Instead, she climbed the ranks as an assistant to her friend, Ida Louise Raymond, who headed Harper’s Department of Books for Boys and Girls. When Raymond retired from the post, Nordstrom was named her successor, and would remain in a leadership role within Harper for the next forty years, amassing a long list of notable firsts: she published one of the first juvenile books handling the subject of menstruation (The Long Secret, by Louise Fitzhugh) and the first novel for young readers that explored homosexuality (John Donovan’s I’ll Get There. It Better Be Worth the Trip). And when Maurice Sendak’s picture book, In the Night Kitchen, was censored and burned because it depicted a nude boy, Nordstrom publicly decried the “mutilation” of the book and maintained that “it is only adults who ever feel threatened by Sendak’s work.”

Though widely revered by authors, artists, and librarians, she sometimes faced staunch criticism from self-proclaimed authorities on children’s literature. One such critic, Anne Carroll Moore, famously asked Nordstrom what qualified her to be a children’s editor, given that she had no formal education, no children, and was not a teacher or librarian. Nordstrom was undaunted, replying matter-of-factly: “I am a former child, and I haven’t forgotten a thing.”

Her letters — many of which have now been published as a collection — are an absolute delight to read. Most are letters between her and her authors and artists, which capture the special bond she had with them. But there are also letters to her readers — mainly children, but also some critics — which are as heartfelt and honest as many of the books we grew up with, thanks to her.

It’s difficult to imagine that Shel Silverstein — who began his career as a songwriter (best known for writing Johnny Cash’s “A Boy Named Sue”), and whose poems were originally published in Playboy — would have gotten a second look from any other children’s editor of the time. But Nordstrom actually helped convince a reluctant Silverstein that he should try his hand at children’s poetry — and I love her for it.

Because what fun would growing up have been without Me Stew and The Loser and The Crocodile’s Toothache? (I think I’d rather have been eaten by a boa constrictor!)

Happy Poetry Month!

 

Science: It’s about people, people!

Unless you’ve been living under a rock lately, you’ve no doubt heard that scientists have recently uncovered a major piece of evidence supporting the Big Bang Theory. Maybe that matters to you. Maybe it doesn’t. Maybe you understand what it means. Or not. Maybe the headline only reminds you of that geeky sitcom of the same name.

But whatever your reaction to the science, there’s another piece to this story that went viral this week, and you don’t have to be a physicist to appreciate what it means:

[Embedded video]

I think it’s beautiful. This video documents a side of science (and scientists) that the public rarely gets to see. Passion, excitement, validation, discovery, connection — suddenly it’s not just a story about physics, but a story about the pursuit of knowledge — about what physics means to people.

One of the things I’ve always disliked about academic communication is how we intentionally strip our own voices out of it. We’re afraid to get personal, to tell a story, to reveal how invested we are in our work. We think that a flick of a pen will make our words “objective,” but this is, and always has been, an unrealistic expectation. The emotional and intellectual investment, the tension of not knowing, doesn’t go away — indeed, it’s often what drives us to research in the first place. There’s an inherent disconnect between what we value and how we talk about it. We claim to be seeking “truth,” but we routinely leave out the most inescapably honest part — that there is a human voice and a human story behind every advance.

I don’t mean to suggest that academic writing should read as personal narrative, or that we should place ourselves in the spotlight that rightfully belongs to the knowledge we generate. But I do think there’s a part of the story we’re not telling, and the omission isn’t benign. It represents a real gap in our cultural perception of what it means to do science, and of what science means to people.

Too often when we talk about making science accessible to the public,  the focus is on simplifying the technical concepts, what many call “dumbing it down.” I hate that expression — not only for what it implies about the audience, but also for what it implies about science itself.  Science doesn’t operate apart from the rest of the world; it’s not something we have to bring down to the masses.

I think we just have to be more human about the stories we choose to tell. We have to learn to recognize when it’s important for our audience to understand what “five sigma” means (or not), and when it’s important just to let them see what excites us about our work, and about the world we share.

Five common grammatical mistakes explained with pancakes

Tuesday, March 4th (March forth, get it?) is National Grammar Day. This year, it also happens to be Pancake Tuesday, the day when we’re all supposed to confess our sins and stuff ourselves with pancakes before Lent.

I’m not so crazy about Lent, but grammar and pancakes? On the same day? You mean I can make pancakes in the shape of punctuation marks for…um…educational purposes?

So, here we go — five grammatical sins explained with pancakes:

1. Using “comprised of” when you mean “composed of”

This is my all-time biggest grammar peeve. Frequent readers and former lab-mates will no doubt have heard me rant about this before. When you are describing the parts that make up a whole (for instance, the ingredients in a pancake), you might say, “Pancakes are composed of eggs, milk, and (because it’s a Monday night and I’m lazy) pancake mix.” Or, you might say, “Pancakes comprise eggs, milk, and pancake mix.” Either of these would be correct.

You would not say, “Pancakes are comprised of eggs, milk, and pancake mix.”

The word comprise means “to include,” so when you say “comprised of,” it’s like saying “included of.” It’s gibberish. It’s painful. And I’m pretty sure a unicorn accidentally steps on a kitten somewhere on the internet every time you use it. You wouldn’t want any murderous unicorns on your conscience, would you? Good. So please stop the madness.

Just remember, the whole always comprises its parts:

[Image: comprises]
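(For the programmers in the crowd: “comprised of” is one of the few grammar sins a machine can catch reliably, since the offending phrase is a fixed string. Here’s a minimal sketch in Python — a toy of my own invention, not any real grammar checker:)

    import re

    def flag_comprised_of(text):
        """Return (position, snippet) pairs for every "comprised of" in the text."""
        return [(m.start(), text[max(0, m.start() - 15):m.end() + 15])
                for m in re.finditer(r"\bcomprised of\b", text, re.IGNORECASE)]

    for pos, snippet in flag_comprised_of(
            "Pancakes are comprised of eggs, milk, and pancake mix."):
        print(f"Unicorn alert at character {pos}: ...{snippet}...")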

2. Plural-possessive confusion

The other day I saw a sign that said, “WANTED: Auto’s dead or alive.” Of course, we all know they really meant “autos,” as in more than one automobile. So why the wayward apostrophe?

Here’s the rule:

  • If you are making a plural (i.e. more than one of a thing), you don’t use the apostrophe. (e.g. three pancakes)

[Image: plural]

  • If you are making a possessive (i.e. signifying that a thing belongs to someone), then you use the apostrophe. You do not use the apostrophe to signify a plural, unless you want to be stabbed with a fork, like this:

[Image: no apostrophe]

  • Exception to the possessive rule: “its” — see #3 below for clarification.

3. Confusing “its” and “it’s”

“It’s” is a contraction of two words: “it is.” As in: “Look, it’s a pancake!”

[Image: 1 pancake]

“Its” is a possessive signifying that something belongs to “it.” As in: “You spread butter on its surface.”

[Image: its]

4. Fewer vs. Less

We’ve all seen the express checkout line at the supermarket that reads “15 items or less.”  Now, I’m with Stephen Fry on this one — for the sake of keeping the express line moving, I can let this one go. But if you aren’t sure when to use “fewer” or “less,” here’s the rule:

  • If you can count the thing, and you can reduce its quantity by countable amounts, then you use “fewer.” For example: “Two pancakes are fewer than three.”

[Image: more vs. fewer pancakes]

  • If you can’t count the thing, you use “less.” For example: “I have less pancake batter than I had before.”

[Image: more vs. less pancake batter]

5. Where punctuation goes relative to quotation marks

OK, I’ll admit, this one can be tricky, because the rules are sometimes different depending on where you are and what style you’re using. In the US and Canada, the following rules are most commonly used:

  • Periods always go inside quotation marks.
  • Commas always go inside quotation marks. (Revision 3/4/14: my original pancake comma was backwards! Eeep! All fixed!) 

[Image: punctuation inside quotation marks]

  • Semicolons and colons always go outside quotation marks.

[Image: punctuation outside quotation marks]

  • Question marks & exclamation marks go inside quotation marks if they form part of the direct quote; otherwise, they go outside.

Okay, now here’s my confession: This post was supposed to be about seven deadly grammatical sins. But all these pancakes made me hungry. So I ate them. :-)

Rage against the (essay grading) machine

Today, a friend posted a link to this article in the New York Times describing new software designed to automate essay grading in large university classes.  It’s not the first I’ve heard of this technology, and the article  in the Times is a year old — hardly breaking news, but I remain disturbed nonetheless by the notion of machine-based assessments of students’ writing.

Proponents of the technology claim that the software produces results similar to human graders, and even, as one researcher claims in the Times piece, exceeds the capacity of human graders to provide feedback to large classes. I’m skeptical, but not for all the reasons you might expect. When it comes to speed, reliability, and the ability to analyze writing for common structural and grammatical issues, I’ll admit that the software has potential. Not long ago, I spent some time playing around with a piece of software called SWAN (an acronym for Scientific Writing Assistant), and I was rather surprised and impressed with how advanced the analysis was, even with difficult scientific text. While I wouldn’t recommend it as a substitute for human feedback, it does a thorough enough job that I could see it being a useful editorial tool, particularly for those writing in English as a second language.

But as an assessment tool for student writers? I have a big problem with that.

First — and foremost — I think teaching and learning is fundamentally a human interaction, and nowhere is this more true than in writing and communication. Writing is about human connection, whether it’s between a writer and a reader, a speaker and an audience, or a student and a teacher. A machine might be able to provide reliable feedback about the structure and content of a piece of writing, but there’s no connection. A machine can’t judge whether a piece resonates with its audience. Do we really want to be teaching a generation of writers how to communicate effectively with empty boxes?

Second, the demand for automated grading stems from ever-increasing time pressure on instructors. As someone who has spent many hours reading, grading, and providing feedback on student writing, I understand that pressure all too well. But at the same time, I’m troubled by the implication that this isn’t a valuable investment of an instructor’s time. The idea that an instructor’s time is better spent elsewhere raises the question: what’s more important than providing students with thoughtful, meaningful, and constructive feedback?

Deep down, I have the same complaint about automated essay grading as I do about the use of machine-gradable multiple choice exams for assessing higher-order learning outcomes. It places so much of the emphasis on the assessment and the assessment tool, but assessment is not the point of education. Learning is. When we reduce the process to inputs and outputs from a machine, I wonder if we aren’t missing the point.

Can a machine recognize a “teachable moment”?

There’s so much more to becoming a writer than mastering the black and white (and sometimes grey) issues of how words work. The development of one’s own style, judgement, and intuitive grasp of the language — our ability to discern for ourselves the difference between the “right word” and the “almost right word” — requires much more thoughtful feedback than even a sophisticated machine can provide. 

When I decided to go back to school to study professional writing, I already had a firm grasp of the language. I’d have passed any machine’s assessment with flying colours. If I’d known I was going into a program where my work was going to be assessed by a machine, I wouldn’t have bothered. Even with real instructors grading my work, most of the courses posed little challenge.

What made the entire program worth my time — what made a real difference to my writing — were the one or two instructors who looked beyond my basic mastery of the language and asked, not “Is this an A?” but “Is this the best you’re capable of?” I never worried about getting good grades — what kept me up at night was the spectre of a handwritten margin note: “Not good enough for you.”

There’s a world of difference between the ability to assign a student a grade based on some arbitrary standard, and the ability to judge a student as an individual and dare them to do better, not for a grade, but for themselves.  That’s where real teaching and learning happens.

Technology has a place in the classroom, but this isn’t it.

Ten papers demonstrating unequivocally that scientists have a sense of humour

Of all the adjectives that come to mind when you think of academic or scientific writing, there’s one I’d bet sinks to the bottom of the list regardless of the audience. You might think a paper is unintelligible, incomprehensible, jargon-filled, complicated, detailed, sometimes exciting, often boring, but certainly not funny. I mean, science is serious business, and no researcher in his or her right mind would dare compromise citations for laughs.

Or would they?

Every year the scientific community waits on the edge of its collective seat for the announcement of the Ig Nobel Prizes, celebrating real research that “makes people laugh, and then think.” The lucky winners become the laughing stock for a while, but the giggling is often followed by a sober realization: “Someone actually studied that? Seriously?”

Part of what makes the Ig Nobels so unabashedly funny is that often, they start out with a completely serious question. Does a person’s posture affect their estimation of an object’s size? (Apparently so, especially if you lean slightly to the left.) Can a doctor accidentally make your butt explode while performing a colonoscopy? (Turns out it’s rare, but I’m sure…um…relieved that they did the research.)

What’s a little harder to find in the scientific literature are examples of researchers being intentionally cheeky. But such examples do exist, and they call out for a Top 10 List.

Here it goes:

10. “The unsuccessful self-treatment of a case of writer’s block” (D. Upper, Journal of Applied Behavior Analysis, 1974)

The reviewer’s comment on this paper sums it up best: “Clearly it is the most concise manuscript I have ever seen — yet it contains sufficient detail to allow other investigators to replicate Dr. Upper’s failure.” 

Which is precisely what other investigators set out to do. That it took 33 years and a research grant of $2.50 underscores the scope of the problem. Writing, it seems, is hard.

Many papers tackling such a difficult problem are lost to researchers outside of the discipline, but the beauty of Upper’s pivotal work is that it can be readily applied to different fields. Consider its recent application to a study in molecular biology:

[Image: excerpt from Lorenz et al. (2011) ViennaRNA Package 2.0; Algorithms for Molecular Biology, 6:26]

Only time will tell whether science will have the last…or first…word on this mystifying phenomenon.

9. “Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?” 

What at first seems like just another paper full of jargon about faster-than-light particles reduces to a simple and elegant conclusion: “Probably not.”

8. “Synthesis of Anthropomorphic Molecules: The NanoPutians” (Chanteau & Tour, Journal of Organic Chemistry, 2003)

They drew stick figures. With molecules!

Ever wary, however, of illustrating their nano-peeps in a way that might misrepresent their equilibrium state, the authors add this caveat: “…the liberties we take with the nonequilibrium conformational drawings are only minor when representing the main structural portions; conformational license is only used, in some cases, with the NanoPutians’ head dressings.”

Science…solving bad hair days, one molecule at a time.

7. “Trajectory of a falling Batman” (Marshall et al., Journal of Physics Special Topics, 2011)

This study is exactly what it sounds like. They analyzed the path of a falling Batman to determine whether our beloved caped crusader could indeed survive on a Batwing and a prayer. Their grim conclusion? Splat.

Or, as the authors put it (albeit far less eloquently in my opinion): “Clearly gliding using a batcape is not a safe way to travel, unless a method to rapidly slow down is used, such as a parachute.”  Noted.

6. “The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute” (Lim et al., British Medical Journal, 2005)

Don’t you just hate it when your co-workers steal all the teaspoons?  If you’re a real scientist, you don’t get mad. You get a publication!

5. Chicken, chicken, chicken.

There are few things more satisfying in science than publishing a really important paper, and then being asked to present it at a really important conference. This is one of those truly remarkable papers that must be seen to be appreciated.

Now, you might think that a study such as “chicken chicken chicken” could have little application outside the poultry world. But you’d be wrong. Indeed, I know at least a few people who have adopted “chicken chicken” as the universal code word for a scientific talk that has gone on way too long. So here’s my public service announcement to researchers everywhere: if you’re ever speaking and the audience starts muttering “chicken chicken” to themselves, they’re not hungry — they want you to stop.

4. “Santa and the moon” (Barthell, CAPJournal, 2012)

Do you remember when you were a kid on Christmas morning, and how you carefully examined the wrapping paper for its scientific accuracy before meticulously unwrapping the toy you’d been waiting for all year?

I totally thought so.

This one belongs in the category of “You might be a scientist if…”

Parents: If your child ever exhibits signs of trauma from the inaccurate portrayal of moon phases on wrapping paper, it might be time to have a serious talk about graduate school. Might I recommend this book at bedtime?

3. “Absolute dating of deep-sea cores by the Pa231/Th230 method and accumulation rates: a reply” (Journal of Geology, 1963)

Sometimes, despite their best efforts, scientists make mistakes. Luckily, when this happens, there are usually other scientists happy to point it out. Such was the case with this paper, in which some scientists pointed out an error in the original paper, and the authors simply replied, “Oh well, nobody is perfect.” Gotta admire their honesty.

2. “Two body interactions: a longitudinal study”

Who says romance is dead? Nothing quite says “publish or cherish” like a marriage proposal embedded in a scientific paper!

1. 20 more hilarious scientific papers in five minutes.

Thanks to Seriously, Science?  (or, the artists formerly known as NCBI ROFL).

The difficulty of communicating science simply

In my double life as a writer and a scientist, I’ve always had an interest in writing about science in a way that is accessible to a general audience. As an undergrad, I remember coming out of some of my classes feeling the way you do when you have a secret you can’t wait to tell somebody, a mental itch just begging to be scratched. Only science wasn’t secret. It was right there, all around us, all the time. So whenever I learned something really cool, I couldn’t help but write about it immediately to my closest non-scientist friend, librarian, and confidant. She’d send me relationship advice, and I’d reply with something like “Have you ever seen a starfish flip over?”

This might explain why I still live alone with my cat and an autographed calendar of half-naked firemen. But I digress.

My point is, this was before Facebook and YouTube and Wikipedia; I couldn’t just send a link to what I wanted to share  – I had to actually try to explain it.

Within a year, I got a job writing profiles of researchers for university publications that were targeted to the general public. My job was to go talk to scientists, find out what they did, and figure out a way to share it with people who might  only have a passing interest in the topic. I had to wade through the jargon and simplify the science without “dumbing it down”  (a phrase, by the way, that I detest for its inherent condescension). It was a challenging task, made all the more difficult by the researchers’ frequent inability to express what they did in simple terms.

It got no easier when I became a researcher myself, and had to learn how to communicate just as effectively with the jargon as I did without it. The jargon was both a necessity and a barrier to effective communication, a double-edged blade that I’ve spent the better part of my career trying to master, with no shortage of red ink spilled in the effort. Along the way, I’ve tried to share what I’ve learned with my colleagues and students, so that they too might become more effective jugglers of jargon and better champions of the public understanding of science.

But it’s not just about effective communication. One of the benefits of being forced to explain a difficult concept in simple terms is that you must change your perspective, and in the process of seeing something again with a beginner’s eyes, you can come to a deeper understanding of the subject  yourself. You stop taking the jargon at face value and begin asking yourself “what does this really mean?”

Recently, this xkcd comic prompted the creation of the Up-Goer Five Text Editor, which challenges users to explain a difficult concept using only the “ten hundred” (thousand isn’t on the list) most commonly used words. Researchers in several disciplines have already jumped at the challenge and tried to describe what they do using this limited vocabulary.
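(If you’re curious how the editor’s check works under the hood, the core of it is simple enough to sketch in a few lines of Python. This is only my guess at the mechanics — the word-list file name here is a hypothetical stand-in, and the real editor is also cleverer about word forms like plurals:)

    import re

    def load_allowed(path="ten_hundred.txt"):
        # Hypothetical file: the "ten hundred" most common words, one per line.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def check(text, allowed):
        """Return the words in the text that aren't on the allowed list."""
        words = re.findall(r"[a-z']+", text.lower())
        return sorted({w for w in words if w not in allowed})

    allowed = load_allowed()
    print(check("Our food changes what happens in the cells of our bodies.", allowed))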

Of course, I had to try it.

The scope of the challenge was evident as soon as I tried describing oilseed biotechnology without being able to use the words “plant”, “seed”, “oil” or “fat.”  Couldn’t use “science” or “lab” or “research” either. Even “affect” and “effect” are off the list, which meant I would have to get over my scientific squeamishness about saying one thing “causes” another.

Here is what I came up with:

I try to understand how living things work inside. Lots of things happen in our bodies all the time, even when we’re sleeping, which is pretty amazing! But we don’t understand very much of it. Knowing about what happens inside us (in our cells) is important because then we can learn what is good or bad for our bodies. Sometimes we hear about food that is good or bad for us. What does that mean?  Our food changes what happens in the cells of our bodies, and it can make us feel good, or it can make us sick. If we know what kind of food makes us feel good, we can try to make more foods that keep us feeling good, or fix the things that make us sick.

How do we do that?

Our food also comes from living things. Like us, the living things that we use for food also have stuff happening inside their cells. This can cause them to be good or bad for us when we eat them. By changing what happens in the cells of the living things we plan to eat, we can make more and better food. This will help us to not get sick. Sometimes, we can also change the way the living things we eat are grown so that they can grow in places where they would usually die (like places where it is really dry and there is no rain). This helps people grow food in places where they don’t already have enough food.

Now, I’m not sure I would suggest that we all start talking this way — after all, we’ve evolved beyond a vocabulary of “ten hundred” for a reason. But aside from the challenge of conveying a difficult concept simply, this exercise also really makes you aware of the words that aren’t nearly as common as you’d think. Perhaps it will make you think twice about the words you do choose the next time you have to explain something in plain language, when the question is not “can you use the word?”, but “should you?”

Florida’s differential tuition plan — should “high-demand” degrees cost less?

Over the past couple of months, I’ve been following the news about a draft proposal to introduce differential tuition for Florida post-secondary students. The plan (which you can read about here) would (among other things) freeze tuition for degrees that are considered to be “high-skill, high-wage, high-demand” programs (as determined by the market), while allowing tuition in other programs to increase. In effect, students would pay less for a science or engineering degree than they would for an arts degree.

Differential tuition itself isn’t a new idea. In fact, it usually works the other way around, with programs like engineering and medicine having higher tuition than other programs, a reflection of the higher costs associated with training students in those fields. While it’s not an ideal situation for students to shoulder the extra costs, it at least makes sense that, if students must bear the added costs of training in a specific field, the increases be targeted to students in those particular programs. To use Florida’s own “high-skill, high-wage, high-demand” logic, these are the students who stand to benefit the most from their investment, and whose long-term earning potential can best offset the front-end tuition costs.

Florida’s plan to reduce tuition for such high-skill (and high-cost) programs aims to incentivize enrolment by making high-demand programs more affordable for students. This only works as long as the State is able to provide sufficient base funding to offset the extra costs of training students in those fields. But the proposal doesn’t rule out the possibility that State support could be cut, leaving universities to sort out how to maintain the mandated tuition freeze for “high-demand” programs. If universities are forced to cover the costs on their own, it would likely be at the direct expense of students enrolled in “less desirable” programs – the very students who, by the State’s own judgement, are not as “strategically” placed to recoup those costs after they graduate.

I think if the goal is to reduce tuition in fields strategic to a particular sector of the market, there’s got to be a better solution than to cripple students in other disciplines. How about increasing degree-specific or industry-sponsored scholarships?  If market forces are to dictate enrolment targets, why not let the market also bear the added training costs?

The Florida proposal makes another critical assumption – not just that tuition matters in a student’s choice of program, but that there is an actual “shortage” of skilled STEM graduates to begin with. The “scientist shortage” has been parroted left and right for decades now, and yet, the dismal job prospects for many newly minted STEM grads (particularly at the PhD level) have prompted critics to argue that the career structure for early-career scientists is “extraordinarily wasteful” – squandering highly-trained talent in perpetually low-paying, low-stability, entry-level positions. If there is an excess of underemployed STEM graduates but still a perceived shortage of qualified applicants in the market, this suggests a skills gap rather than an enrolment gap. If students aren’t graduating with the skills needed to go out and find jobs in the industry, the way to address the shortage is to fix the curriculum, not to just enrol more students.

And what about the so-called “less-desirable” degrees? Before we simply cast them aside as a second-class option, shouldn’t we at least ask why they might be “less-desirable” from the market’s perspective? If universities are to be truly responsive to the needs of the market, the answer can’t be to simply ignore (or worse, actively penalize) students in lower-demand fields. Education is a lifelong investment, whereas the needs of the labour market change. It’s not good enough just to prepare students for the jobs available immediately after they graduate; their short-term career prospects need to be balanced with a longer-term resilience to change.

A great university, to me, is a place where students become part of an interdisciplinary community that looks beyond majors and seeks to bridge gaps rather than broaden them. The focus should be on producing well-rounded graduates in any field – graduates who have not only mastered the skills specific to their own discipline, but who also know how to communicate and collaborate with other disciplines. Such an environment depends on students from all disciplines feeling like, and being treated as though, they have something to contribute. It doesn’t start with preconceived institutional or legislative judgements about whose degree is inherently more valuable.

The frontier is in your living room: playing games for science

Now that I am out of the lab, people sometimes ask me if I ever miss doing science, and I’m never quite sure how to answer that. It all depends on how you define “doing science.” I think the question they are really asking is whether I miss the lab work. That’s a different question for me. I think “lab work” and “doing science” stopped being synonymous at some point after I fell down the administrative rabbit hole of lab management. In that sense, I missed “doing science” long before I left the lab.

But what does it mean to “do science”, anyway?

This is hardly a semantic question. Modern science is often perceived as something carried out by experts. When we say something is “scientifically proven”, it implies (rightly or wrongly) that it has passed some rigorous and authoritative standard, and the gatekeepers of that standard are, naturally, the highly-trained professionals we call scientists. But in turning science as a whole over to the expert, we introduce a barrier to broader public understanding and appreciation for science as a means of learning about the world.  In other words, if science is something done by experts, and you’re not an expert, then science is off-limits to you.

It’s an ironic twist considering that science itself is a relatively young profession. The earliest pioneers in science were often self-taught amateurs and tinkerers — people who had little formal training but were passionately curious. Their work laid the foundation for modern science, and yet, if they were working today in their basements and garages, we might well dismiss them as quacks operating on the fringes of science — not “real” scientists.

Obviously, the nature of some fields of research is much different today than it was 200 years ago. There are all kinds of sane, practical reasons not to be doing molecular biology in your basement. But I think we have to be cautious about confusing the practice of science with the spirit of it. The spirit of scientific inquiry demands that there be a place for the amateurs and the tinkerers — those who relish what Einstein once called “the sheer joy of finding out.”

Recently, I was reminded of a paper published a couple of years ago in Biology Letters, written by a group of 10-year-olds, about an experiment their class had conducted with bees. I remember being charmed by the elegant simplicity of it all, right down to their hand-drawn crayon figures – these children had embarked on a project from which they came to understand science as a simple process of asking questions and finding the answers. In a recent TED talk, their advisor recounts the challenges of trying to get the students’ work published — the “experts” in this case did not believe that children could make a valuable contribution to science. And yet, they’d answered a question that none of the experts had ever answered before. They discovered something new.

To me, that’s what the spirit of science is all about, and that’s something I believe should be accessible to everyone. People need to know how science works. Science simply can’t thrive as a mysterious black box that only experts can decipher.

One of the wonderful consequences of the growing popularity of “crowdsourcing” is that it can also be applied to scientific problems. Thanks to websites like Scistarter, Zooniverse, and Games for Change, researchers can now directly engage the public in collecting or analyzing data on a large scale. Through so-called “citizen science” projects,  you can contribute to research just by observing what’s in your own neighbourhood, or by playing a computer game. The very definition of what it means to “do science” is changing, and I think it’s only for the better.

It also means, incidentally, that in some ways I get to “do” more science now than I did in my final year in the lab. I mean, if I am going to spend a Saturday playing games instead of writing papers, why not get sucked into a real-life puzzle like Fold.it, EteRNA, or Phylo?

Projects like this also open up new interdisciplinary possibilities. Have graphic design skills? You can make infographics for NASA. Or if you’re into music, you might try your hand at describing digitized collections of piano scores. Simply by moving these projects online and tapping into the public’s interests, there’s a whole new opportunity to break down disciplinary boundaries and stereotypes about what constitutes “real science.”

Perhaps most importantly, citizen science projects have the incredible potential to start a public discussion about science, something that politicians, journalists, and scientists often aim to do, only to fall seriously short of engaging their audience.

It will be interesting to see how these early forays into public engagement in science play out (no pun intended). I hope it’s  the beginning of a more enlightened era for scientific communication, one that engages the public actively rather than expecting them to be passive consumers of knowledge handed down by experts.

Where are all the supermodel scientists? (Why, in binders, of course!)

You know, there’s one thing I’ve always wondered about men in science. Why do we always get the nerdy-looking  ones? How come I can get calendars with hot firemen but can’t seem to find any pinups of ruggedly handsome, half-naked men in lab coats?  Is it because they’re all so pale and fragile-looking? Maybe they think they can compensate for their weak physical attributes with their stunningly attractive intellect?

I mean…brains are sexy too…right?

Not according to neuroscientist Dario Maestripieri, who allegedly posted the following comment to his Facebook page:

My impression of the Conference of the Society for Neuroscience in New Orleans. There are thousands of people at the conference and an unusually high concentration of unattractive women. The super model types are completely absent. What is going on? Are unattractive women particularly attracted to neuroscience? Are beautiful women particularly uninterested in the brain? No offense to anyone..

No offence?  No offence!?

I’ve been to many scientific conferences too, so I can see where his comments are coming from. In fact, I’m kind of pleased that he’s finally noticed the hordes of balding white guys (no offence to you either, Mr. Maestripieri) and has chosen to speak up about the persistent gender inequality that exists in the sciences.

But as a woman in science, how do I not take offence at Maestripieri’s suggestion that a scientific conference is supposed to be a beauty pageant? Or at his implication that, as women, we’re just there to be eye candy, while our scholarly contributions are secondary?

It’s really a shame that we can’t all be supermodels, but we’ve been kinda busy working against a systemic gender bias that creates barriers for female scientists at various stages of our professional development — not the least of which is being judged by our appearance rather than our intellect. I’m sorry to say that it doesn’t leave a lot of time for spa days and shopping trips. It’s a system, by the way, that was created and is reinforced by men. So, Mr. Maestripieri…if you don’t like what you see, change it. We’ve been trying to change it for years, and trust me, we share your disappointment in the results.

Maybe you could ask Mitt Romney for some help finding suitably qualified…er…attractive candidates. I hear he’s got “binders full of women.” About the only thing that could make that sound even more stalker-ish and creepy is if the binders had photos. But how else do you know if you’re getting a supermodel?

Publish or perish — what’s behind door #3?

There has been a lot of discussion over the past year about the state of academic publishing — from the exploitive business models of the publishing heavyweights, to the crisis this creates for libraries, and the emerging case for open access. Longtime readers of this blog will know that I am a strong proponent of open access initiatives, and that I believe this kind of public debate on the issue is long overdue.

But I also wonder if it misses the point.

You see, we’re still not addressing the issue of what makes researchers beholden to publishers in the first place. The publishing models may be changing, but the culture that drives the explosive growth of academic publishing has not. So while open access promises to bring down one barrier to scientific communication, we continue to ignore what is potentially a much more formidable and serious challenge — the sheer volume of papers being published.

Although it’s difficult to determine accurately, it has been estimated that there are more than 50 million scholarly papers in existence right now, with more than 1.5 million new articles being added each year (and steadily climbing). Spread 1.5 million papers over the 525,600 minutes in a year, and that’s about 3 papers being published every minute. And the sad reality is that the vast majority of these papers will never be cited.

This raises an obvious question: if the purpose of scholarly publishing is to communicate our results, but the majority of papers go unread or uncited, are we actually communicating? More to the point, why do we continue to “publish-or-perish” if most of our publishing efforts are for naught?

The truth is, it’s not primarily about communication anymore; it’s about satisfying institutional demands that treat publications as a proxy for research excellence. That idea took root over a century ago, when William Rainey Harper, then president of the University of Chicago, first proposed that advancement in academic rank and salary should be tied “more largely” to research productivity. The policy was quickly adopted by other research-intensive universities. By the 1930s-40s, the phrase “publish-or-perish” had been coined to describe the pressure on academics to publish, and today, it’s practically synonymous with the academic lifestyle. Scholarly publications are now used not only to measure individual performance, but also to help determine institutional rankings and to measure national performance on innovation.

Today, the pressure to publish comes not from a demand by the consumers of knowledge (i.e. other scholars), but from a demand by the merchants of knowledge — those whose profits and prestige are linked directly to growth in research publications.

What about the rest of us?

Short-term career advancement aside, the publish-or-perish culture doesn’t really benefit individual researchers, who invest much of their time and energy either in producing papers or in reviewing them — a huge waste if most of those articles fail to reach their audience.

It’s not particularly beneficial to students, since researchers are often forced to prioritize research output over teaching. For grad students, publish-or-perish means they spend much of their time learning and conforming to an academic career structure that will not serve the 70-80% of them who will not have access to stable academic careers no matter how much they publish. That’s another post entirely.

It doesn’t benefit scientific integrity, since the unrestrained growth in publications places more and more stress on the system of peer-review. The pressure to publish is so intense that researchers will resubmit their rejected manuscripts multiple times, working their way down from the top-tier journals to the electronic slush piles that have lower standards of peer-review. Each published paper, then, often represents multiple rounds of peer-review, each a burden on the system, and even then, it’s still possible for researchers to pay their way into bottom-feeder journals just to get the paper out, however bad. Bad research in turn places its own burden on the system, as precious resources are invested in a futile effort to validate it.

In recent years we’ve seen everything from rampant plagiarism and duplication of publications, to data manipulation or fabrication, to elaborate “citation cartels” used to artificially inflate the prestige of certain journals. Just last week, a researcher was caught using his own email address under a fake name so that he could “peer-review” his own papers!

On one hand, we might be comforted by the fact that the peer-review system is catching these cases of obvious misconduct, but on the other hand, shouldn’t we be more alarmed about the institutional pressures that are leading to the behaviour in the first place?

And what of the higher-profile retractions? What does it say about the effectiveness of our current system of peer-review when the highest-profile journals (Nature, Science, Cell, etc.) also tend to have the highest rates of retraction? True, these journals tend to be on the cutting edge, and we know it can cut both ways sometimes. These journals also have large audiences: more eyes means more potential for catching a mistake. But these are also the journals that tend to get the most press, so such high-profile retractions also have the potential to damage public trust in science. When researchers chase publication in these journals as a career-making move, rather than because the quality of their data warrants it, it can only harm the collective image of the research community.

Public trust isn’t the only thing at stake. Scholarly communication is, by its highly technical nature, exclusive to a particular audience. Even with the expansion of open access publishing, which would make papers accessible to the public, there is still a significant barrier for the public in understanding that original research. And yet, never before has the public understanding and acceptance of science been so important. The big issues of the day — energy, sustainability, climate — require a certain acceptance of scientific information. But outreach activities often don’t count for much in faculty evaluations — so researchers tend to remain focused on writing papers for other scholars. In effect, the publish-or-perish system places more value on scholarly papers that go unread than on the myriad ways that researchers could otherwise be connecting with a non-academic audience to inform the public debate.

This is perhaps the most tragic waste of our scientific knowledge, and the point at which I begin to seriously question the ethical footing of the publish-or-perish culture. In an era of economic instability and fiscal restraint, when public access to scientific knowledge is key to advancing the most pressing social, economic, and political issues of the day, is it ethical for publicly-funded institutions to continue demanding that researchers prioritize scholarly publications (largely for the sake of prestige), at the expense of actually reaching the audience that can most benefit from the research?

What’s the alternative and how do we make it happen?
