Showing posts with label in the news. Show all posts

Sunday, May 10, 2015

Another reason why early sex ed will lead to less early sex

This post was inspired by, but is not directly related to, this quiz testing how much you know about the new Ontario sex ed curriculum. (I got 9/10.)

Some critics of sex ed criticize teaching students about various sex acts at an age that is generally perceived to be too young to be engaging in those sex acts.

But it occurs to me that if your goal is to prevent young people from having sex, introducing the concepts early would probably help achieve that goal.

I was informed, via age-appropriate educational books, about the existence of various sex acts years before I was ready for them (which was a good thing, since I reached menarche years before I had the slightest even theoretical interest in sex), and every single time my visceral reaction was "Ewww, gross!!!!"  As I evolved in the direction of developing interest in sex, I had to overcome the "Ewww, gross!!!!" before I could develop positive interest.

I also learned of various other sex acts, via the internet, when I was older and ready to have sex.  In these situations, my reaction was either "Hmm, interesting..." or "Meh, not for me."  Even for the sex acts I find more distasteful (which are objectively more distasteful than any of the sex acts I learned about before I was ready for sex) I never reached the same level of visceral revulsion as I did before I was ready to have sex.

So if you want young people to not have sex, telling them about sex when they're young enough to think that it's gross will introduce an additional emotional barrier that will stand between them and their desire to have sex for a certain period of time.

Monday, April 27, 2015

Emotions are weird

When I was a little girl, my grandmother took us to see Sharon, Lois & Bram whenever they were in town. Eventually, we outgrew their concerts, as one does, and we never went again.

Last year, they named a playground in my neighbourhood after Sharon, Lois & Bram, and the trio showed up at the dedication and sang a few songs.

When I heard that Lois died, one of the first feelings to come to me was "OMG, that time I saw them at the park was the last time I'd ever see them perform live in my whole entire life!!!"


Except of course it was.

I'm a grown adult who's childfree by choice.  There's no reason to think I'd ever go to a Sharon, Lois & Bram concert again.

I didn't regret not having gone to more when I was an older kid. I had outgrown them and, in addition to not enjoying them as intended, would have felt awkward and out of place.  I only went to the one in the park last summer because it was in a park - I could just walk by on a public sidewalk, stop and listen if I felt moved to do so, and casually drift away if I got bored or felt out of place.

And, just to make things weirder, if I hadn't had the opportunity to see them in the park last summer, I would never have felt "OMG the last time I saw them was the last time ever!" I wouldn't even have had a specific memory of the last time I saw them, just like how I don't have a specific memory of the last time I watched Sesame Street or Mr. Rogers (both of which I do occasionally watch as an adult).

But for some reason, because I had the opportunity to wander age-appropriately into this little mini-concert last year, I felt this pang of...whatever the hell you'd call the emotion of "OMG that was the last time ever!", which I never would have felt otherwise.

Emotions are weird.

Monday, April 20, 2015

Legally-mandated helicopter parenting vs. children's literature

When I was a kid, I always felt vaguely humiliated that my life didn't work like the lives of the protagonists of my books.  They got to have their own independent adventures.  They got to go to the park or walk in the woods or go to a friend's house or be home alone, all without adult supervision.  Sometimes they even bought things at stores or went to the library or went to the doctor without an adult.  And I wasn't allowed to do anything!  What was wrong with me?  Why wasn't I worthy of this basic human independence that all my protagonists got to enjoy??

Reading a recent article where "free range" children got picked up by the police, I find myself wondering how 21st-century kids feel about this.

I was feeling humiliated because my parents wouldn't allow me the freedom of the protagonists in my books, but today it's even worse - it's not just that your parents say no, it's that the police will come and arrest you!  (Yes, the police didn't technically arrest the kids, but I'm sure it feels to the kids like they did.)

But then it occurred to me that maybe this very serious sense of "You can't go to the park alone or the police will come and arrest you" might actually make it feel less bad for the kids.  It's not that you aren't allowed because you aren't good enough, it's that no one is allowed because it's against the law.  But, on the other hand, that might just cause confusion.  Peter and Jane did it, so why can't I?  If it's against the law, why didn't the policeman arrest Peter and Jane when he was talking to them?

Another possibility that I hadn't considered is that children's books may have caught up with reality.  Perhaps the protagonists of today's children's books are supervised at all times?  That would certainly make it more difficult to come up with a workable story, but so do cellphones and they appear in fiction.  (Or maybe that's why so many of my early children's books were populated by anthropomorphic animals living in the quaint, non-specific past?)


This all made me realize that children's books are in fact the original media influencing impressionable children!  People always talk about TV and movies and video games, but far, far more of my idea of How The World Is or Should Be was formed by the books I read at a very young age.  I think I was far more influenced by the idea that I should be able to ride a zebra because that's what a character in a book was doing than by anything I saw on TV.

Monday, April 13, 2015

Ontario has destreamed Grade 9 at least once before

An article on the front page of today's Toronto Star says that an advocacy group is calling for an end to streaming in Ontario high schools, by which they mean having "academic" and "applied" versions of each class.  This surprised me, because neither the article nor the report (PDF) makes any mention of the fact that Ontario has destreamed Grade 9 (as this advocacy group is recommending) at least once before. 

I know, I was there.

My Grade 9 classes were all destreamed when I started high school in 1994.  It was a fairly new development at the time.  Mine might have even been the very first destreamed year - in any case, it was definitely being talked about like it was new and unprecedented when I was in middle school.

Surely there's data on student outcomes from this time.  There are probably even teachers around who taught in Ontario high schools before, during and after the early-90s destreaming.  It seems like this would all be highly relevant in lobbying and making decisions about whether Ontario schools should be streamed.

I want to make it very clear that I am not arguing or hinting for or against streaming. I have no strong feelings about my own destreamed experience, and I readily acknowledge that, as a student who thrives in any academic environment regardless of whether or not it challenges me, my own experience is irrelevant to any goals they might be trying to achieve with either streaming or destreaming.

I'm simply saying that Ontario-specific data and experience exists.  It would be remiss of them not to use it.

Saturday, March 28, 2015

Why copayments for medical appointments are a bad idea: a breast lump story

When conducting a routine breast exam during my annual physical, my doctor detected something on the armpit side of my left breast that wasn't present on the right side.  He ordered a breast ultrasound, which found that some of my lymph nodes in that area were larger than perhaps they should be.  A mammogram was then ordered, which found that a few individual lymph nodes were enlarged, but there were no malignancies or other problems.  I was therefore instructed to return in six months for another breast ultrasound to see if the lymph nodes in question returned to normal.

While I was in the middle of this process, Steve Paikin published a blog post sharing his doctor's idea of copayments for each medical appointment.  I commented on that post expressing my concern that the majority of medical appointments I get aren't even my idea but rather are required by red tape (I've previously blogged about that here), but this breast lump diagnosis process was an even better example.


During this little adventure, I had five appointments in a nine-day period (and a minimum of three more if I opt to follow up in six months as recommended), none of which I actually wanted or would have thought to request for myself.

I only got the annual physical because it's the price of admission for getting my birth control renewed. I'd be more than happy to buy my birth control over the counter (as some have recommended should be possible for public health purposes), but I have no choice but to go to my doctor and get the recommended screenings if I want a new prescription.

I didn't think the thing my doctor found was a problem - to my touch it felt just like a normal part of my breast anatomy. After reading up on breast cysts, I didn't think getting a potential breast cyst diagnosed was especially important - they're not a problem, most often non-actionable, and quite often go away by themselves. That area of my breast is squishy and mobile - nothing like the hard, immovable lumps that I've always been told indicate possible cancer.  But I went along because it's a quick, easy, non-intrusive test and it was probably faster to get the test than to argue.  And, I figured, once the test showed it was nothing, my doctor would be more likely in the future to take me at my word when I say that's just how my breast is.

After the test, I had to go to the doctor for test results, which I think is a suboptimal way of doing things. I'd rather have the results emailed directly to me, and schedule an appointment with the doctor if I had any questions. But my doctor's policy is that they only contact you with results if action is required, so if I didn't go for that follow-up I'd never learn what action was apparently required.

On an intellectual level, I didn't think the mammogram was necessary as a follow-up to the ultrasound either.  After reading up on breast ultrasounds, I didn't see why a mammogram would be helpful or informative as a follow-up to an ultrasound - all the information I found talked about how ultrasounds saw things that mammograms didn't see.  But, frankly, I was scared into it.  Getting a phone call telling me I needed a mammogram (when this wasn't on my "things that might happen" list) was shocking and disconcerting.  I have it mentally categorized as a "cancer test", so it triggered fears of cancer, and I went along with the test to rule out cancer.

And, again, I had to go to the doctor for the mammogram results even though they were clear to me and I didn't need any help with interpretation.  Because I have no way of getting the results without going to the doctor, I had to take that appointment or I would never have received confirmation that there were no malignancies.

So that's five appointments, all of which were required by my doctor as opposed to by me, none of which I would ever have asked for myself if it were completely up to me.  And if I follow up in six months, I'll need three more (one with my doctor to get the ultrasound requisition, one at the imaging clinic for the ultrasound, and one with my doctor for the ultrasound results).  I'm really disinclined to follow up - it feels like a fishing expedition - but I'm concerned about being considered a non-compliant patient if I don't, and I do need my doctor's goodwill to keep getting my contraception.


At this point, some of you are thinking "Breast lumps are serious business!  It's good and important that you got it checked out - you really shouldn't skimp on that sort of thing!"

If that's the case, that's a very good reason why there shouldn't be a copay for each appointment.  A copay would disincentivize patients like me from following up on lumps in their breasts, or perhaps even having these lumps detected in the first place.


Besides all that, before they can even consider a copay, they'd have to streamline the process so that fewer appointments are required by red tape.  For example, as I mentioned above, they shouldn't make you go in to see the doctor to get your test results.  It would be much more efficient to just email them to the patient when emailing them to the doctor, and the patient can contact the doctor if they have any questions.  When I'm doing medical translations, I find it a fairly simple matter to google up any terminology I don't understand, and the implications of the test results become apparent once I've worked out the meaning of all the words.  If they want to cut down on the number of appointments, they need to at least start by eliminating unnecessary appointments, like test-results appointments that could be replaced by a simple email!


At this point, some of you are thinking "That would be hideously irresponsible!  Many people can't accurately interpret medical results and there's all kinds of ridiculous information on the internet! People who aren't medical professionals need the guidance of medical professionals."

If that's the case, that's another very good reason why there shouldn't be a copay for each appointment. A copay would disincentivize patients like me from discussing our test results with our doctors, and instead leave us making decisions based on our haphazard informal education and Google.

Thursday, March 19, 2015

Idea for a new economic indicator

This post was inspired by, but is not directly related to, this article.

When talking about whether the population as a whole is making economic gains or losses, people often talk about middle class vs. low income vs. high income, or they look at average or median incomes for the population as a whole and for various demographics.  Less often, but sometimes, they talk about the ratio of income to tuition to housing prices. (The Globe and Mail has a useful comparison tool.)

It occurs to me that another useful indicator would be to look at changes over time in the income of people who bring various levels of education, skills and experience to the table.  For example, how has the income of a person with an undergraduate degree and 10 years of work experience evolved over the years?  What about a newly-minted Ph.D.?  What about a student working their way through college?  What about people who have been freelancing for 5 years?

It might be useful to get somewhat specific (Is the person with an undergrad degree and 10 years of work experience a translator or a teacher or a computer programmer?), but the data would cease to be comparable if you got too specific (I don't know how informative it would be to track the income of social media specialists or FORTRAN programmers over decades).

If the data is available, it would also be interesting to track negative factors.  How has the income of people who were laid off one year ago evolved?  (i.e. were they more or less likely to get new jobs within a year in previous decades?)  What's the situation of people who started a business within the past two years?  What about people who are involuntary entrepreneurs (i.e. they didn't want to start a business, but couldn't get hired)?

I think this would fill in some blanks, and it has the potential to draw attention to certain problems that may be hidden by the other, more commonly used indicators.

Thursday, February 19, 2015

There is no incentive to falsely take a citizenship oath

Recently in the news is the story that the government intends to appeal the Federal Court ruling that it is unlawful to require people to remove their clothing (in this case, a niqab) before taking a citizenship oath.

Sitting here steeped in white girl cultural hegemony, I tacitly assumed that they wanted people to uncover their faces during the oath for identification or fraud prevention purposes.  But it occurred to me in the shower this morning that no one would cover their face during a citizenship ceremony for nefarious purposes, because there's no incentive to do so - nothing would be gained or achieved by doing so, and it wouldn't change anything.

Let's unpack this.

Scenario: Cindy the New Citizen has gone through the entire immigration process and permanent resident process and citizenship exam and all the hoops and paperwork and everything, and has just received an invitation to attend a citizenship ceremony and take the citizenship oath.  Congratulations, Cindy! But Cindy doesn't attend the ceremony and take the oath.  Instead, Irene the Imposter attends the ceremony, pretending to be Cindy, and takes the oath in her place.

So what would the outcome of this scenario be?

Would Irene become a citizen by taking the oath?  Of course not - it's not a binding magical contract like in Harry Potter!  The record would show that Cindy, who is fully qualified to be a citizen, is now a citizen.  So Cindy would be a citizen and Irene's status would not change. 

This means that Irene has no incentive to impersonate Cindy, because it would have no impact on Irene's status.

But what if it's not Irene whose intentions are nefarious, but rather Cindy?  What if Cindy is trying to get citizenship without being beholden to the oath?  Let's think about this.

Suppose Cindy breaks her oath and is caught.  When called out on it, she says "Nope, you can't hold me to that!  I didn't take the oath - I sent an imposter on my behalf!"  She's still in trouble, since the content of the oath is, essentially, promising to fulfill your duties as a citizen and obey the law, so she'd be in trouble for being derelict in her duties and/or breaking the law.  And, on top of everything else, she'd also be guilty of fraud!

This means that Cindy has no incentive to send an imposter on her behalf, because that would only make things worse.

But what if Irene isn't there on Cindy's behalf?  What if she's there without Cindy's knowledge?

I can think of two possible motives for that: either Irene is trying to steal Cindy's identity, or she's trying to inflict citizenship upon Cindy without her knowledge.

If Irene is trying to steal Cindy's identity, she would have had to start long before the citizenship ceremony.  She could only find out about Cindy's citizenship ceremony if she has access to Cindy's mail, in which case she's either successfully stolen her identity, or has access to far more useful things like credit card statements and tax documents.  Going to the ceremony and taking the oath as Cindy will have no impact on the extent of her identity theft.

If Cindy doesn't actually want citizenship and Irene is trying to inflict it upon her without her knowledge, Cindy wouldn't even be having a citizenship ceremony.  There's quite a lot of work to do and steps to take to become a citizen, and if Cindy didn't want it, she could just do nothing. 

There is simply no reason why anyone would falsely take the oath with nefarious intentions, because it would do nothing to help them achieve their nefarious intentions and basically wouldn't be worth their time.  Therefore, there's no reason to fret about being able to see everyone's faces at all times.

Monday, February 16, 2015

Journalism wanted: how did the Toronto Star's HPV vaccine story end up being sensationalistic?

I recently wrote a blog post complaining about a Toronto Star article about HPV vaccination that presented the story very sensationalistically and failed to include necessary context.

This week's public editor column agrees with that assessment.  Public editor Kathy English says:
In looking at all of this, I have to wonder why the Star published this at all — especially at this sensitive time in public health. If there is no proof that any of the young women’s illnesses, or the 60 adverse reactions in the database, were caused by the vaccine, then what is the story?
In that same column, she says:
To be fair, in the Gardasil investigation, reporters David Bruser and Jesse McLean absolutely do not conclude or state that the vaccine caused any of the suspected side effects the young women talk about. The article was written carefully to try to impart to readers the message that there was no conclusive evidence.
Also, on the CBC radio program As It Happens, Toronto Star publisher John Cruickshank said:
"We failed in this case. We let down. And it was in the management of the story at the top."
What I want to know: how did the front page layout and presentation and tone of the story turn out sensationalist if the public editor and the publisher both think this is inappropriate and it's not consistent with the reporters' stated intentions?

I know the writers don't write the headlines and aren't necessarily involved in layout, and I know that senior editors might not necessarily vet every single page layout in the whole newspaper every single day.  But you'd think they'd approve the front page!  You'd think they'd edit an article extra-carefully if it's going to be the first thing people see, and you'd think they'd look at the big, front-page, above-the-fold headline and make sure it reflects the writers' intended thesis.

It would be informative to readers to write a story about how this sort of thing comes about.

Thursday, February 05, 2015

Horrid journalism from the Toronto Star

The Toronto Star wrote a very sensationalist front-page story about people who report having various illnesses after receiving a cervical cancer vaccination.

As they mention in the subheadline (with some weird conjunction use), they found 60 people who reported illnesses, out of hundreds of thousands who have received the vaccine.

The problem: they don't mention the statistics of these kinds of illnesses occurring in similar populations who have not recently been vaccinated.  We're talking tens of cases among a sample size of hundreds of thousands, which is hundredths of a percent. It is certainly plausible that the number of illnesses reported is consistent with what would happen ordinarily in the general population.

Back when I did my research before getting Gardasil, my research found just that: the number of reported conditions in the sample group was consistent with the number in the general population.  That could certainly be the case here.  But the Star doesn't provide the numbers!

If the number of illnesses found in this investigation is significantly higher than what would have occurred in the control group, then that is important information that supports the Star's thesis and they should include it.

But if it is not, then this is an irresponsible piece of journalism.

By failing to include these numbers, they've made the article non-credible in the eyes of the most-informed audience who will read it critically, while sensationalistically creating paranoia among the least-informed audience who will only skim the headlines.

The article ends with one of the interviewees saying “I am not against the vaccine, I want people to be responsible about Gardasil. I am trying to inform people.”

In order to inform people so that they can make responsible decisions about Gardasil, you need to include control group numbers!

Sunday, November 23, 2014

What if suicide prevention were removed from the mandate of mental health care?

When Robin Williams committed suicide, many people responded with the "Genie, you're free" scene from Aladdin. This response received a lot of criticism, some of which argued that suicide isn't freedom.

It occurred to me that the problem with this statement is that it's clearly unknowable.  The author has no way of knowing, with the amount of certainty they claim, that you don't find freedom or peace after death.

And, because of this, their anti-suicide message has no credibility in the eyes of those considering suicide.  They're quite clearly just saying stuff to perpetuate the message of Suicide Is Bad.  So a person considering suicide isn't going to listen to them, because they're obviously just going to unquestioningly say Suicide Is Bad regardless of the truth of the matter.  (And if suicide is in fact Bad, you'd think they could come up with something substantiated to support that position.)

Then it occurred to me that this might be the symptom of a broader problem in mental health care and emergency response.

If I were suicidal, I would never even consider seeking medical attention, because I feel like they'd just want to stop me from committing suicide.  They'd restrain me in a mental ward somewhere and declare the job done, or monitor me for the rest of my life and never leave me a moment's peace.  Sounds like hell!

But what if health care as a whole recognized a person's right to end their life? Your body, your choice!  They don't prevent, persuade, coerce or manipulate you into not committing suicide.  It's considered a perfectly valid choice.

However, since it is also a drastic - and irreversible - choice, they strongly urge you to try less drastic approaches first.  Take a pill, talk to a doctor - the mental health equivalent of rebooting your computer and maybe reinstalling the OS rather than going straight to throwing it out the window. If it hurts, the doctor will give you something to try to stop it from hurting.  If you're feeling nothing, the doctor will give you something to try to make you feel again.  If your fish are dead, the doctor will try to resuscitate them.  If it doesn't work, you're no worse off than you were before and you can always kill yourself later!

Some people will argue "But when I was suicidal, I didn't actually want to kill myself.  I wanted to stop wanting to kill myself."  That's fine, a person could still go to the doctor and say "I have suicidal feelings and I don't like them! Can you help me make them stop?" But if the patient feels their suicidal feelings are valid, the doctor won't force them to do anything about it.

Analogy: if you've never gotten pregnant and you want to have children, you can go to the doctor and request assistance with conceiving.  But if you've never been pregnant and you're okay with that, they don't force fertility treatment on you.

And some people will argue "When I wanted to kill myself, it was just the depression talking. Once I received help, I came to realize that I didn't want to kill myself."  If that's the case, this approach will still achieve the same results.  The hurting/sadness/feeling nothing/dead fish will be treated, the patient will come to the realization they didn't actually want to kill themselves, and life would proceed as usual.

But if you want something right this moment and someone tells you "I'm going to take you to a doctor who will make you not want the thing you want," that would feel like they're going to brainwash you.  And if the doctor's mandate is to do everything in their power to prevent you from achieving what you want, you'd probably actively avoid them, perhaps even going as far as to deceive people about your condition and situation so they don't brainwash/restrain/monitor you in a way that would make it impossible to achieve your goal.

Building on the fertility treatment analogy above: suppose you tell a loved one that you want to have children, and they respond by taking you to a doctor who will make you not want children.  Or, based on the information you have absorbed from media/culture/society, you believe that a doctor would respond by taking all measures to prevent you from having children, up to and including forcibly sterilizing you. 

Or the inverse: suppose you don't want to have children, and a loved one responds by taking you to a doctor who will make you want to have children. And the information you have received throughout your life leads you to believe that the doctor would go as far as forcibly impregnating you.

Would this make you feel safe seeking medical treatment?  Or would it make you want to avoid it at all costs?

***

Removing the suicide prevention mandate might also help reduce the criminalization of mental health patients. 

There was recently a series in the Toronto Star about how people are failing police checks they need for employment because they are known to police (even though they were never found guilty and in some cases never arrested or charged).  And some of them are known to police because police attended a mental health call.  The police were called because the person was considered a threat to themselves, and in the messed up system of disclosure for background checks there's no differentiation between being a threat to oneself and a threat to others.

If health care professionals were not mandated to prevent suicide, there'd be no such thing as involving the police because someone is a threat to themselves.  Killing yourself would be considered your own decision to make, even if it's ill-advised, so there'd be no reason to forcibly stop you.

Analogy: if someone wants risky ill-advised elective surgery and they're proactively trying to get this surgery, this isn't considered a reason for police intervention.  Even if getting the surgery would harm them, that's between them and their doctors. 

Since there's no police involvement, people won't have police records dogging them just because they were once suicidal, so they'd have the full range of employment and travel options still available to them. Surely this would make for a better recovery than being shut out of jobs where they can do good just because they were once suicidal!

Yes, this aspect could also be addressed by police only disclosing appropriate and pertinent information in background checks, but I feel like the medical profession could be more easily persuaded to make helpful decisions than the police.

Sunday, August 17, 2014

Weird Al

From a New Yorker profile of Weird Al:
With his parodic versions of hit songs, this somehow ageless fifty-four-year-old has become popular not because he is immensely clever—though he can be—but because he embodies how many people feel when confronted with pop music: slightly too old and slightly too square. That feeling never goes away, and neither has Al, who has sold more than twelve million albums since 1979.
Anxiety starts early for pop audiences. For decades, I have had twenty-somethings tell me that they don’t know what’s on the charts, haven’t listened to any new artists since college, and don’t “know anything about music.” They feel confused by how quickly the value of their knowledge of what’s current fades. Weird Al’s songwriting process, almost without exception, is to confront that anxiety and to celebrate it. Yankovic will take a mysterious and masterful song and turn it into something mundane and universal. He makes the grand aspirational concerns of teen-agers in Lorde’s “Royals” into a story that includes a lesson about the hygienic advantage of taking food home in aluminum foil. (You’ll see the rhyme there.) Charli XCX’s boast of being “classic, expensive, you don’t get to touch,” in Azalea’s “Fancy,” becomes an ad for a handyman who can resurface your patio in Yankovic’s “Handy.”
The opening lyrics of “Smells Like Nirvana,” Yankovic’s 1992 version of Nirvana’s “Smells Like Teen Spirit,” are as close to a mission statement as he has: “What is this song all about? Can’t figure any lyrics out. How do the words to it go? I wish you’d tell me, I don’t know.” Weird Al has been cool for so long because pop makes everybody feel uncool; that he is the only one to admit it has made him a pop star.
If I'd seen this theory written about anyone or anything else I'd assume it's bullshit, but that's actually an accurate description of how my Weird Al fandom began, with Smells Like Nirvana.

I was 10 years old when Smells Like Teen Spirit was released and 11 years old when Smells Like Nirvana was released.  I was attending a middle school at the time (Grades 6-8) so I was surrounded by people who were into teen pop culture, but I wasn't quite ready for it myself.  I had absorbed the message from the adults around me that being into teen pop culture was Bad, it was giving in to Peer Pressure, and I wanted to prove to them that I'm Better Than That.

But, at the same time, it was problematic on a social-survival level to be completely unfamiliar with teen pop culture.  You couldn't just walk around having never heard of stuff.

Weird Al provided the perfect solution.  With Smells Like Nirvana, I could be familiar with Nirvana and enjoy how the music rocks without claiming to be a fan.  In fact, I was mocking it - surely something that could be used to demonstrate I'm Better Than That when necessary! But, at the same time, enjoying parody certainly suggests enough familiarity with the original, so I didn't come across as never having heard of stuff. Weird Al allowed me to save face without having to commit to anything (in the bizarre preteen landscape where such things demanded commitment).

In the years that followed, I would grow into pop culture, and then into the ability to take it or leave it as I pleased, without regard for the opinions of peers and grownups.  But in those few awkward years when I was still muddling through and wasn't quite ready for the pop culture environment inhabited by my peers, Weird Al helped ease the transition for my awkward preteen self.  And, because of that, he will always have a place in my adult self's iPod.

Monday, August 11, 2014

Robin Williams

Normally when someone commits suicide and the people left behind say "He had so much to live for," they mean normal regular everyday people stuff.  He wasn't a total fuck-up and had a moderate amount of success or potential in one or two areas of life and maybe a handful of people who truly loved him and another couple dozen who'd miss him when he's gone.

I'm not going to presume to rule on how much another person does or doesn't have to live for, but what's kind of mindblowing about Robin Williams is he was at a point in his life where he could do whatever he wanted. His status as one of the greats was established and he was, literally, beloved by millions.  He could produce crap for the rest of his career. (Apparently he did produce some crap recently. No one remembers it, and his status as a great is intact.) He could produce nothing ever again. He'd still be one of the greats and beloved by millions.  I'm not even a fan of his (I'm not not a fan, but I've never sought out his work. I've enjoyed my fair share of it, but I've never sought it out.) and, even if he hadn't just died, it wouldn't even occur to me to question his place in the pantheon.

If he'd cheated on his wife or relapsed back into drug use or engaged in various Rob Ford-style antics, the general public would say "Meh, Hollywood. It happens." His place in the pantheon would still be secure.  

If he'd been hard up for money, he could have thrown together a standup tour (it wouldn't even have to be good to make him enough money - then, if necessary, he could have made a good tour and had a comeback in a few years) or had his agent call up Disney or Pixar and ask if they wanted Robin Williams to voice the wacky comic relief character in their next movie. He could have made a guest appearance on a sitcom or Whose Line and earned enough to keep body and soul together.  If he'd written a book (or had one ghost-written), people would have bought it. If he'd made a movie, people would have gone to see it. If he'd appeared in a Broadway musical or run for public office or joined Cirque du Soleil, people would tune in to see what happens, and a good number of them would be cheering for him.

People would, quite literally, pay him good money to simply be himself in their presence or on cue. He had secured the love of more people than he could possibly imagine (some of whom, I'm sure, actually cared about him as a person even if the feeling was unrequited) and the respect of exponentially more.  He had more leeway and flexibility and options than most of us can even dream of.

And still, the poor man couldn't find peace.

I hope he's free.


Monday, July 28, 2014

Journalism Wanted: why don't doctors who don't want to prescribe contraception join another field of medicine?

There was recently a story in the news where a walk-in clinic doctor wouldn't prescribe birth control because he had moral objections to it.

All this coverage would have benefited from an interview with the doctor in question, or others like him, shedding light on their internal reasoning for choosing this medical specialty.

As we've discussed before, approximately one third of all Canadians use prescription contraception.  That means that any given doctor working in family practice or a clinic can expect one third of all their patients to come in at least once a year asking for contraception.

If you're morally opposed to providing contraception, why would you pursue a line of work where one third of your clientele is going to ask for something you're morally opposed to?

There are many fields of medicine where contraception is not going to come up at all. Gerontology, podiatry, oncology, pediatrics, palliative care, otolaryngology, gastroenterology, cardiology, pulmonology, hematology, and I'm sure many other kinds of medicine whose existence I've never thought about.  Contraception is only going to come up in general practice, walk-in clinics, and gynecology/urology, with occasional appearances in emergency medicine, dermatology, and possibly endocrinology.

Why don't this doctor and others like him choose one of the many other fields of medicine, or work in a children's hospital or a long-term care home or somewhere similar where they simply won't be called upon to provide contraception?

I also wonder if medical schools and colleges of physicians and whatever other organizations might be involved take any measures to discourage future doctors from studying to practise in fields in which they're morally opposed to very common and medically-accepted treatments.

Sunday, July 13, 2014

Do police normally photograph the genitals of child pornography victims?

There was recently a story in the news about a 17-year-old boy who sent sexually explicit photos to his 15-year-old girlfriend, and as a result faces child pornography charges.  And, in order to prosecute these charges, police wanted to photograph this young man's erect penis.

The question of the appropriateness of the charges and police action has already received extensive coverage, but there's actually a bigger, more serious issue here:  do they routinely take photographs of the genitals of actual child pornography victims, i.e. minors who were forced or coerced or manipulated or tricked or exploited by adults into appearing in pornography?

In other words, if a grown adult had taken a picture of this teenager's penis for prurient purposes, would the police still be trying to take a picture of his penis for evidence purposes?

According to the article, the only things this kid is charged with are possession and manufacturing of child pornography (i.e. pictures of his own penis).  And it says that the police want to take pictures of his penis "for comparison to the evidence from the teen’s cell phone", which suggests that they intend to prove that the pictures on the teen's cell phone are child pornography by proving that they are in fact pictures of the teen's penis, as determined by official comparison with the official pictures of the teen's penis taken by the police.

It's certainly not implausible that there may have been, or may be in the future, a situation of actual child pornography (i.e. where a minor was forced or coerced or manipulated or tricked or exploited by adults into being photographed or filmed for prurient purposes) where the minor victim's face is not shown.  In cases like this, do the police also take nude photos of the minor victim for the purpose of official comparison with the pornography they've seized as evidence, in order to prove that the materials they've seized as evidence are in fact child pornography?

If so, this is a much larger problem that needs to be solved!

Saturday, July 05, 2014

How to illustrate articles about dying bees

Lately, there have been quite a few articles in the media saying that bees are dying out because of pesticide use, with the general thesis that this is a bad thing.

Problem: some articles are illustrated with giant zoomed-in pictures of bees, far larger than life, where you can see all the yucky details like hairs and antennae.

And, given my phobias, my immediate visceral reaction is "AAAAH!!!! KILL IT KILL IT KILL IT!!!!!!"

Which isn't quite the reaction the article is going for!

I do understand how ecosystems work so I know on an intellectual level why bees dying is a bad thing.  But the visceral phobia-based reaction is faster and louder, so the "KILL IT KILL IT!!!!!!" comes to mind before I even notice what the article is about.  And then, if I can bear to look at the headline, it's telling me about how this thing is being killed.

I know my reaction is not within the range of normal, but the fact remains that, in the culture of these articles' target audience, bugs are culturally considered yucky.  If I see a bug and I say "Eww, gross!" more people would think that's a "normal" reaction than if I see a bug and I say "Aww, isn't it cute!"  Bigger bugs are considered yuckier, and the details like legs and hairs and antennae are seen as grotesque. Fear of bugs is one of the most common specific phobias, many people are afraid of bees because they sting, and it's culturally considered normal and a valid choice to kill bugs because they're yucky (c.f. the existence of flyswatters and Raid).

In short, even among non-phobic readers, these enormous, grotesque pictures of the bees are far more likely to inspire revulsion than sympathy, which is contrary to the intention of the article.

A far better strategy would be to illustrate these articles with pictures of honey looking delicious and flowers looking beautiful - which is, in fact, the end result that you want people thinking about. If it is in fact necessary to portray bees, they should under no circumstances be zoomed in on so they appear larger than life! Features like legs and hair and antennae should be de-emphasized, and the image positions and camera angles should be such that people don't even for a second think there's an actual bee on their paper or screen. In appropriate contexts, perhaps cartoons of anthropomorphic bees could be used - more of a friendly food brand mascot and less of a creature that escaped from the gates of hell.

Zoomed-in pictures of bees are not going to change anyone's opinion from "meh" to "Save the bees!" People who think bees are fascinating up close already want to save the bees, people who are indifferent will react with indifference, and people who are grossed out will, even if only briefly, react with "Kill it!"  But pictures of honey and flowers might turn a "meh" into "Wait, I like honey and flowers, this is important!"  And, in any case, they're far less likely to inspire "Kill it!"

Tuesday, June 10, 2014

Saving for retirement ≠ pension

I recently took the 2014 Ontario Vote Compass test.  I found it was useful for identifying areas where parties' platforms weren't what I expected or their positions relative to each other weren't what I expected.  But one of the questions baffled me.  It asked if I agree or disagree with the statement:

"Ontario should require workers to save more for retirement."

At the end of the Vote Compass test, you can click on a link to see the rationale for the compass allocating each party's position on each issue.  And when I clicked through for this one, it became apparent that the issue they were talking about was the creation of an Ontario pension plan.  By "require workers to save more for retirement", they meant "create a provincial pension plan."

This is gravely misleading!  While saving money for retirement is certainly an important part of a pension plan, the two concepts are certainly not interchangeable.  The big deal about a pension plan is not that you divert money from your income to save for retirement, but that the plan turns this money into a steady source of income for your old age.  

Saving money is simple. Turning your savings into a pension is complex.

Saving money is arithmetic - actually, it's just addition and subtraction (and maybe even just addition depending on how you do the math), with no multiplication or division necessary.  Turning your savings into a pension is...I don't even know what kind of math it is, and I got an A in every math class on my high school's curriculum.
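The contrast can be sketched in a few lines of code. This is purely illustrative - the numbers ($500/month for 30 years, a 5% annual return, a 25-year payout) are assumptions for the example, not figures from any pension plan:

```python
def total_saved(monthly_amount, months):
    # Saving is just addition (or one multiplication, if you batch it).
    return monthly_amount * months

def annuity_payment(principal, annual_rate, years):
    # Turning a lump sum into a steady monthly income requires the
    # annuity formula: payment = P * r / (1 - (1 + r)^-n),
    # where r is the monthly rate and n the number of payments.
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Assumed example: $500/month saved for 30 years...
savings = total_saved(500, 12 * 30)
print(savings)  # 180000

# ...then paid out over 25 years at an assumed 5% annual return.
print(round(annuity_payment(savings, 0.05, 25), 2))
```

And even this sketch glosses over the hard parts a real pension plan handles: investment returns aren't a fixed 5%, you don't know how many years the payout has to last, and inflation erodes the value of a fixed payment.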

You can tell immediately if you're succeeding at saving money - the balance of your savings account goes up and doesn't go down. You can't tell if you're successfully creating a pension for yourself until it's too late.

Saving money is a diligent personal behaviour.  Turning savings into a pension is an entire profession, requiring its own training and expertise.

To reduce a pension plan to "you should save more money" is like reducing having perfect teeth to "you should brush your teeth."  Yes, the diligent personal behaviour is necessary, but you also need the professional expertise to achieve your goal.

The enormous benefit of having a pension plan instead of doing it yourself is that your pension is managed by expert professionals who are hired by expert professionals, and whose primary mandate is to make the pension plan succeed.  If you hire a financial planner as an individual, you're stuck with just your own non-expert knowledge to determine whether they're competent or a charlatan, and it's quite likely that their primary mandate is to sell specific financial products or have a high number of transactions or pull in new customers, depending on their compensation model.  Finding a skilled and competent financial planner who will work in your own best interests is not necessarily a simple matter for those of us who aren't financial experts ourselves, and we can't necessarily tell if our planner is in fact doing their job properly before it's too late.

With a pension plan, you also have economies of scale, and can mitigate risk by diversifying more than an individual can and by distributing risk over a longer period of time than an individual's personal retirement savings.


I think the Vote Compass test may have landed on this phrasing because one of the parties has nothing in their platform about creating a new or expanding an existing defined-benefit pension plan, and instead uses the phrasing "Give Ontarians the opportunity to save more for their retirement..." by promoting PRPPs. But this does not negate the fact that the other parties' platforms talk about actual defined-benefit pensions, where a given input will guarantee a given output.  This is far more than simply requiring people to engage in diligent behaviour, and the CBC and the Vote Compass people do us a disservice by representing it the way they did.

Tuesday, May 13, 2014

What if the real problem is on the other side of the "confidence gap"?

I recently blogged about The Agenda's blog post about their difficulty booking female guests.  Steve Paikin framed the problem as prospective female guests not wanting to go on TV when they didn't feel they were experts in the subject matter, but, as a viewer, I think it's more of a problem that The Agenda is willing to book guests who aren't up on the subject matter but will read up on it before going on TV (something Steve Paikin presents as laudable).

I had a similar thought when I read the article circulating about the "confidence gap", which proposes that men advance more than women because men are more confident, i.e. more likely to loudly declare "Yes, I can do that!" regardless of whether they actually can.

Why are they assuming that the men's behaviour is baseline and correct?  What if the problem is in fact that people who are overconfident are being unduly rewarded?  What if the problem is that the system isn't set up to recognize people who have a fair and accurate assessment of their abilities?  What if we could circumvent the Peter Principle by figuring out a way to accurately and proactively identify and recognize people's actual objective skill levels and set them up with commensurate responsibilities and compensation?

Disregarding my role as an employee, if I look at this solely in my capacity as a client, as a part of the economy, as a part of society, I find it unhelpful that people would get promoted and rewarded simply for being loud. In my capacity as a client, as a part of the economy, as a part of society, I need people in positions of power and expertise and authority not just to be the most competent, but also to have a realistic sense of their own abilities and limitations.  It is very important that they only say "Yes, I can definitely do that" when they can definitely do that.  If they're running around saying "Yes, I can definitely do that" when they don't actually know because they've never done it before but they're willing to give it a whirl, that just makes things worse.  We need to be able to trust the professionals and experts of the world to actually be competent professionals and experts, and we can't trust them if their best credential is that they're loud.  This creates a world where you have to approach everything with caution - Can that shoemaker in fact fix my shoes? Can that doctor in fact do that operation on me? - even though you don't have the expertise to independently evaluate these people in the first place.  That would make things worse for everyone, so we need to make sure the people responsible for putting people in positions of expertise and authority are able to assess them based on actual expertise.

Talking with Ehrlinger, we were reminded of something Hewlett-Packard discovered several years ago, when it was trying to figure out how to get more women into top management positions. A review of personnel records found that women working at HP applied for a promotion only when they believed they met 100 percent of the qualifications listed for the job. Men were happy to apply when they thought they could meet 60 percent of the job requirements. At HP, and in study after study, the data confirm what we instinctively know. Underqualified and underprepared men don’t think twice about leaning in.
Are these men who meet 60% of the qualifications getting the promotions?  If so, there's something wrong.  Why are they listing qualifications if they aren't required?  Why are they considering applicants who don't meet the qualifications if the qualifications are required?

The people who are applying only if they meet 100% of the qualifications are doing the job poster the basic human decency of taking them at their word.  If they are being punished for that, the system is broken.
We were curious to find out whether male managers were aware of a confidence gap between male and female employees. And indeed, when we raised the notion with a number of male executives who supervised women, they expressed enormous frustration. They said they believed that a lack of confidence was fundamentally holding back women at their companies, but they had shied away from saying anything, because they were terrified of sounding sexist. One male senior partner at a law firm told us the story of a young female associate who was excellent in every respect, except that she didn’t speak up in client meetings. His takeaway was that she wasn’t confident enough to handle the client’s account. But he didn’t know how to raise the issue without causing offense. He eventually concluded that confidence should be a formal part of the performance-review process, because it is such an important aspect of doing business.
How to raise the issue is very simple: in the meeting, you say "[Young Female Associate], what do you think? Do you see any points that haven't been addressed?"  Then, after she says something useful, you mention to her after the meeting "I'm very glad you mentioned [useful thing] in that meeting!  It was very important, and no one else seems to have thought of it."  Lather, rinse, repeat until you reach a critical mass of feedback (which shouldn't take super long - half a dozen meetings at most.) 

This lady's manager thinks she is excellent in every respect, but she does not have as accurate a sense of her own skill set as perhaps she should. She truly doesn't realize that, despite the fact that she's a relative newbie, the other people in the room don't see the thing that she sees or don't have the idea she does, rather than having already thought of and dismissed it (I've discussed my own experience with this phenomenon here). So she needs to have this demonstrated to her with specific examples and be set up for success. That's where the manager comes in - as someone who sees her work as well as others' and is more experienced in this field, the manager is the best person to give her a sense of what her own skill set is - strengths and areas for improvement.  But because he doesn't know how to do this part of his job without causing offence, her career progression suffers.

He's in this management job without knowing how to boost a shy, new employee's confidence - and instead coming up with the ridiculously ineffective idea of grading people on confidence.  He should be setting her up for success by giving her openings to see first-hand how her contributions are valuable and necessary, but instead he's setting her up for failure by adding a performance-review item that correlates with her greatest weakness, without doing anything to help her improve other than perhaps telling her to improve.

Which leads me to wonder: did this manager, who can't figure out how to effectively coach a quiet employee without causing offence, get his management job simply because he was the loudest person in the room?

***

I should also add my personal experience with confidence: the more confident I get, the more willing I am to admit when I don't know something or don't have a certain skill set.  When I was just starting out in my tech support job in university, I pretended I knew everything everyone was talking about out of imposter syndrome, terrified that they'd mock me or fire me if I (a teenager who had never been more than a personal home user - and this in the 20th century) admitted that I hadn't heard of reimaging a computer. I just said "Yes, of course I know what that is," and frantically muddled my way through.

But as I've had more and more experience validating the fact that what I know is acceptable and I won't get in trouble for not knowing everything, as I've been influenced by Eddie Izzard and learned how to do Entitlement, I've become more and more confident - confident enough to accurately represent and express how capable I do or don't feel in a given situation.

For me, saying "Yes, I definitely can" when I wasn't certain I could was a symptom of lacking confidence.  Saying "Probably, but I'm not certain," or "Sorry, I have no experience in that," or "I'll give it a try but I can make no guarantees" is a sign of confidence.

Thursday, January 16, 2014

The real problem in the York University religious accommodation case

I first heard about the York University religious accommodation story through Twitter, so I got all the outrage before I got a straightforward reporting of facts.  It wasn't until I read Friday's Toronto Star editorial that I saw the missing piece that pointed to the real problem, which has gotten buried in all the debate and outrage and sensationalism.  But, I'm pleased to report, the real problem is much simpler, less fraught, and more easily resolved.

The real problem is that this is an online course, but it includes a group project that apparently needs to be done in person, and this in-person component is not mentioned in the course calendar.

When this story first made the news, my first thought was "Well, what did the student expect?"  The answer is he expected an online course. So he was actually conducting himself perfectly reasonably, given his limitations and the information available to him at the time, by enrolling in a course listed as online. 

There are plenty of other situations where it might be disproportionately inconvenient to have an in-person requirement sprung on you.  Maybe you have medical issues that preclude going to campus and are trying to keep chipping away at your degree while you convalesce. Maybe you're pregnant and on bedrest.  Maybe you're a caregiver and can't get away for long periods of time but can occasionally find a moment to go online.  Maybe you live somewhere car-dependent but recently lost the ability to drive and haven't yet been able to reorganize your life accordingly.  I'm sure you can think of a few examples that you'd find perfectly reasonable.

So the solution is simply to accurately represent the course location in the course calendar.  I'm not saying they have to pinpoint the specific room number way back when the course calendar is published, I'm thinking more in general terms.  If it's on campus, say so.  If it's on campus but not in a fully accessible location, say so.  If it's on a different campus, say so. If it's an online course with an in-person requirement, say so.  If it mostly takes place on campus but students will occasionally have to travel to other locations, say so.  Are these locations in the city or outside of it? Accessible by transit or not?  Whatever it is, say so.

This will allow students to make informed decisions about the courses they take. Students who would find a particular course unduly inconvenient can opt out ahead of time, without having to lose money by dropping the course or involving the administration in an attempt to get an exception.  And only a very small number of professors and instructors would be inconvenienced by the need to edit the course calendar entries, because the vast majority of courses do in fact take place in the stated location and only the stated location.

Wednesday, January 01, 2014

Journalism wanted: why aren't Hydro workers electricians?

I just blogged that Hydro workers should be allowed to reconnect homeowners' equipment in order to facilitate power outage recovery.

Then I read an article about what the Hydro CEO was doing during the outage, which mentions in passing:
Meanwhile, workers report that, after finally restoring power in many neighbourhoods, they are being forced to disconnect some houses because of damage done to stand pipes, the hollow masts usually mounted on rooftops that serve as a conduit for power cables to enter a dwelling. A bent or broken stand pipe poses a risk of fire, and it’s the homeowner’s responsibility to have it fixed by a qualified electrician.
Hydro workers are not electricians.
(My emphasis.)

So why aren't Hydro workers electricians?  They're working with electricity.  They're connecting bigger wires than electricians usually work with, so it seems like they should be able to be electricians.  Are they actually unable to do the work of electricians?  Or is this merely a certification issue?  Or is it a jurisdiction issue?

What would it take for Hydro workers to be electricians?  Would they have to learn new skills?  Or just get an additional certification?

I hate it when I walk away from a newspaper article with more questions than I went in with.

Sunday, November 17, 2013

Fact-checking

Being the kind of fangirl I am, when I entered Eddie Izzard fandom I read every current and past article I could get my hands on, and continue to read every article where he's mentioned. (I have a google alert set up and everything.)

And one thing I've noticed in reading all these articles on a very specific subject with a rather narrow scope is the frequency with which they reuse quotes or statements or information from old articles, without regard for whether that information is still current.

The example of this that I find most egregious is the oft-repeated statement, most recently seen in Post City, that Eddie raised over £200,000 for Sport Relief when he ran 43 marathons in 51 days in 2010.

This statement is completely true.  And it is completely misleading.  Because Eddie did in fact raise well over £1 million with his marathons.

I blogged about it when it happened.  The now-defunct video I'd linked to in my blog (which I so wish was still alive because it would completely prove my point) was from the Sport Relief 2010 broadcast.  Eddie himself also confirmed the 1.6 million number on Twitter. There's also a BBC article with the million pound number prominently featured, an article in the UK newspaper The Guardian citing 1.8 million, and an archived Sport Relief page from when the total was 1.1 million.

The fact that the number is over a million is important, because that's commensurate with the number of Twitter followers Eddie has.  In my blog post linked above, I mentioned that it was more than the number of followers he had at the time.  There's a huge difference between raising an amount of money commensurate with your number of Twitter followers and raising exponentially less money, especially with a feat so ridiculous as 43 marathons in 51 days.  (Analogy: I have 189 Twitter followers.  If I were to attempt to raise money, raising $189 is a reasonable expectation.  However, raising $40 would not be gloat-worthy at all.  And if I were doing multiple marathons, raising $40 would be pretty much a failure.)

Eddie deserves full credit for raising an amount of money commensurate with his feat and his audience reach, but because of citogenesis (although not necessarily through Wikipedia in this case) he isn't always getting it.

And this leads me to wonder: what other defunct or misleading statements are making it into media reports, perhaps on more important subjects?