Med school students assigned to improve most-used medical Wikipedia entries

Dr. Amin Azzam, who teaches at the UCSF School of Medicine, has created an elective in which his fourth-year students are assigned to improve the most-used medical Wikipedia entries. Students are given a Wikipedia orientation and taught how to be good participants in the project. This is especially relevant given that Wikipedia is the most-used reference among doctors and medical students. The students prioritize the most-cited, most-visited entries, and they are working with Wikipedians to have these entries translated into many other languages, as well as adapting them for the "simple English" version of Wikipedia.

The pilot run, he said, was a great success. Five students (believe it or not, that’s a lot of students for a fourth-year elective in medical school, according to Azzam), after being oriented to the structure and editing process of the site, spent their month targeting articles that required improvement: the most read and those with the greatest potential health impact. They put their medical knowledge—after all, Azzam said, the students were less than six months away from being doctors—to good use. Most of Wikipedia is surprisingly accurate, Azzam said, because it uses the “wisdom of the crowd” to vet information. But medical pages have some catching up to do. “Medical professionals haven’t been editing Wikipedia,” he said. “In fact, we were told not to go near it.” This anti-crowdsourcing bias has kept doctors from contributing to the site’s accuracy until now, Azzam said. But current students are more open to the value of editing the articles.

America’s future doctors are starting their careers by saving Wikipedia [Rachel Feltman/Quartz]

Notable Replies

  1. The scariest thing is if the normal wikipedia rules for sources are being held to. How many definitive medical texts are available online for referencing?

    Given how difficult it is to find quality medical and scientific journalism, the requirement for online sources would seem to bias things away from accuracy.

  2. Regardless of what you think of the editorial quality of wikipedia and the infighting that goes on there, two facts remain.

    1. It's the most frequently used source of medical information for
      doctors and medical students.

    2. Dr. Azzam, with 5 medical students taking an elective, is trying to
      improve that info.

    Now, even if each of those students only reviews and edits 1 wiki page a piece (diabetes, cardiovascular health, cancer, reproductive health, obesity), they will improve the health education of tens of thousands of people, if not hundreds of thousands.

    That's Public Health advocacy that scales. Your concerns about the editorial process at wiki are a footnote, not the story.

  3. This is actually pretty common in a lot of specialized fields. My former anthropology advisor has undergrads write, review, and edit each other's Wikipedia articles (most on topics relating to Central and South American archaeology) as assignments for class. The students learn how to write, cite, and peer-review. Wikipedia gets better entries. My old prof doesn't have to do it himself. It's a win-win-win.

  4. I used to do a similar assignment for my class of undergraduate behavioral science majors. My motivation was to get them to make a real contribution of their knowledge to the world, and to think about writing for a broad community instead of for the professor. But they got so scarred by the process that I've stopped. Several years ago, it was possible to write new articles about clinical tests and other relevant research methods; this is getting harder as more articles are out there, so students are left to improve articles that are more likely to have turf-warriors guarding them. Then, even when you try to explain copyright and Creative Commons to an average student, getting them to understand this when they create new images is a challenge. Most of the time when imagery got created and uploaded, it would get moved and deleted after a few weeks because the student didn't understand the complex attribution process, or some other issue. Other students would get their articles deleted for non-notability and didn't have the stomach to argue, others for perceived copyright violations (they weren't), or accusations that the edits seemed like a class assignment (god forbid). Maybe I'll try again in the future, but probably in some different way that will be more likely to give the students a positive impression of Wikipedia editing.

  5. Elusis says:

    There's a whole body of research and meta-analysis of research on various therapy and counseling modalities.

    Pretty much all available evidence supports the broad conclusion that therapy is better than no therapy. That's the good news. It's better to get therapy than to be put on a waiting list, to be given individual or group psychoeducation, to be assigned to read a book (bibliotherapy) about your problem, or to be given medication.**

    Attempts to distinguish whether any particular model is "best" or "best for a particular problem" are more complicated. Some have called the "therapy is better than no therapy" conclusion the "Dodo bird verdict" after the character in Alice in Wonderland: "Everybody has won, and all must have prizes." Certainly, it would seem like a good thing to know whether any particular approach might be more likely to help you than another when you're trying to pick a therapist.

    The search for Empirically-Supported Treatments has mostly favored cognitive-behavioral approaches, which show very strong evidence of effectiveness with simple phobias and anxiety disorders, as well as general effectiveness with some other diagnoses. CBT also tends to show well for other diagnoses, but the field gets muddy quickly because CBT is also the easiest "fit" with the parameters of EST research. If you "operationalize" something in behavioral terms (e.g. you define "depression" as "scoring more than 12 points on this 20-point scale asking about depressed behaviors"), it's very easy to say that a treatment is effective if it decreases those behaviors. But is "scoring only 10 points on that scale" the same thing as "not being depressed"? A purely behavioral description isn't always an appropriate or complete one, but it sure makes CBT score well in the EST framework.

    What about more nebulous therapy outcomes, like "improving a couple's relationship" or "having better sex" or "living comfortably with bipolar disorder?" Behavioral measures aren't necessarily going to effectively capture the whole of what "improvement" or "success" looks like, particularly in couple or family problems where people might define the problems and their desired outcomes differently. A classic example of this problem is evaluating marital therapies and only considering them "successful" if the couple stays married - sometimes an agreement to divorce with as much dignity and respect as possible is also a "successful" outcome. And is a couple who stays together "successful" if they last another 6 months? 1 year? 2 years? 5 years? How long do you have to follow up to declare "success" for your therapy in that model? But if I'm a client wanting to keep my marriage intact (and functional), I don't want to hear that a therapy is "successful" if most people who got it stayed together for another year after therapy and then became unhappy or broke up.

    What about a complex presentation, where parents are concerned about their 17-year-old who is using drugs, fighting with them all the time, and threatening to drop out of school? If, after family therapy, the kid is no longer using drugs, and the family is getting along better, but the kid still drops out of school, was therapy "effective"? Empirically Supported Treatments are supposed to match specific treatment modes with specific problems they treat effectively, but this basically limits research to straightforward, definable presenting problems without complicating factors (e.g. substance abuse with mental health problems and relationship problems).

    Hopefully you can see the problems here, anyway.

    However, the focus has broadened in the field to also look at Evidence-Based Practice, which allows inclusion of common factors research (regardless of the model, what factors seem to be common to most or all successful therapy relationships?), models with more complex concepts and interventions than CBT, and more complex presenting problems. There's also been much work done in process research - looking at therapy sessions individually and across a course of treatment to determine what moments were most important for helping change to occur, what was most significant about a given session or course of therapy to the client(s), what therapist factors are most relevant to change, etc.

    One good example is Emotionally Focused Therapy. EFT is a couple therapy approach (now being expanded to family therapy) that was developed with continuous process research input. The whole time the model was being developed, researchers were doing in-depth qualitative and quantitative research on tapes of therapy sessions, reports from clients during the course of therapy, reports from therapists, and follow-up with clients after completion. All that data was used to continually focus and refine the model into its couple therapy form, and the same process has been used to adapt it for couples with trauma and now with families. (There's also research going on into whether the model needs adaptation for same-sex couples, couples from outside North America, etc.) Meanwhile, they've been continuously gathering long-term outcome data - I believe the first 20-year followups will be available sometime soon.

    The end result is that EFT helps 73% of distressed couples move out of distress into a "healthy/satisfied" range, and 89% of couples experience some improvement. (They've found that the folks who improve but don't get "well" are often couples with trauma, which is why they've been adapting the model for them.) CBT with couples has comparable effects at the end of a course of therapy. But the difference comes in the follow-ups: 6 months after therapy ends, 50% of the couples who get CBT are back to their pre-therapy state (i.e., they look just as distressed as they did before they got any help). The effects of EFT, on the other hand, appear to be basically stable at 6 months, 1 year, 5 years, 10 years, and 15 years of follow-up. A summary of research is here (pdf).

    So to say that evidence for therapy is "weak at best" is entirely incorrect. Evidence that therapy works is, in fact, quite strong.

    Some specific therapies are exceptionally well-supported with a large body of evidence for their effectiveness with specific problems; some specific therapies have a modest amount of evidence that they work in a general sense; some - particularly some that are trendy, marketing gimmicks, pseudo-science-related, etc. - have no specific evidence for their use at all and yet may still fall under the general "therapy is better than no therapy" umbrella. (Living in the Bay Area, there is much more of that last category than I'd like, and of course it is these that often have motivated practitioners using Wikipedia as a marketing opportunity, which was the point of my original comment.)


    ** Medication plus talk therapy has been found to be more effective for depression than medication alone OR talk therapy alone. However, a new wrinkle in the research just emerged: If you have depression AND relationship problems, and the depression emerged first, medication plus individual talk therapy has a good chance of helping. But if the relationship problems emerged first, individual therapy has a good chance of making things WORSE. Medication plus couple therapy is helpful in both cases. My students had a heck of a time getting their heads 'round that one last week in our first week of the semester, but they're catching on.
