Showing posts with label half-formed ideas. Show all posts

Monday, January 23, 2012

"You're welcome" vs. "No problem" revisited

I've blogged before about the nuances of "you're welcome" vs. "no problem" as a response to "thank you".

But reading this story from Not Always Right gave me some sudden insight on why the "you're welcome" people don't like being told "no problem": they want it to be a problem!

They seem to be interpreting "you're welcome" as "you are welcome [in the sense of "entitled"] to impose upon me by making this request of me", and see a "no problem" as implying that they are not entitled to that. "No problem" is equalizing, "you're welcome" is subservient.

I use "no problem" specifically because it is equalizing, in an attempt to neutralize the burden of gratitude in the other party. I'm saying "It's okay, we're cool, you don't owe me any gratitude, I'm not putting this on your tab." This is what I like in customer service and in life in general, so I try to give it to others.

It makes me feel welcome in the literal sense, the same way I'm welcome in, say, my parents' home. I'm totally allowed to walk in and fix myself a drink and rummage through the fridge. In a customer service context, it makes me feel like they're giving me good service because I'm just as cool as they are, not because I'm above them. Because they like me, not because they are obligated to serve me. They're saying "Hey, it's you! How are you doing? Do you want a coffee?" rather than doing their job and rolling their eyes at me when I leave. And, while it is totally their prerogative to just do the job and roll their eyes at me when I leave, I'd much rather have them like me.

But the "you're welcome" people don't seem to care about that, they seem to prefer to be treated with deference, liked for their position rather than for themselves.

Friday, January 20, 2012

Things They Should Invent Words For

We've all heard the expression "privatizing profit and socializing risk". The phenomenon I want to make a word for is similar, but I can't seem to structure an analogous expression. It's a sort of socialization of the requirement for expertise, but not precisely.

One example of the phenomenon is pensions. They seem to be moving away from defined benefit pensions, where experts manage it for you, to defined contribution pensions, where they give everyone a little bit of money and tell them to go manage it themselves.

Another major example comes from job searching. Based on what my parents and grandparents tell me, employers used to be willing to hire unskilled labour or workers with a lot of potential but no particular experience in the area (and they tended to look upon university degrees as potential), and then let them learn on the job or train them up so they could eventually move up the ladder and do better-paying work. But in my own job hunting experience, I find that most employers want workers who already have the very specific skills and experience required for the position - even when it's something easily learnable like proprietary software. And, on top of that, employers have been known to reject applicants who have education that isn't strictly required for the job.

This also reminds me of how every once in a while you hear employers in the news saying that they can't find enough skilled workers, but these complaints about the lack of skilled workers seem to be reaching my ears far more readily than information about what kinds of skills employers need, how to go about acquiring these skills, and how to figure out which of those jobs you'd be a good fit for rather than picking some skilled trade at random.

Anyway, the general concept I want to coin a word for is this sort of increasing expectation over time that individuals who are not involved in organizations or fields of expertise are independently responsible for developing knowledge of the needs of those organizations or the skills of those fields of expertise, whereas historically the larger organizations were more willing to make the effort to integrate and orient people.

I'm not explaining this as well as I should be. Coinages and better explanations welcome.

Thursday, December 29, 2011

The disparity between the size of glasses and the size of standard drinks

Reading about a game on the LCBO website that tests how well you can pour a standard drink, I was reminded of the first set of wineglasses I ever purchased.

I had one or two wineglasses among my worldly possessions already, but I wanted to get some that matched. They were cheap, from the dollar store or something, but they were decently nice-looking and I quite liked them. We christened them with a lovely glass of wine that gave us quite a happy buzz indeed. The next day, I got home from work and poured myself a glass of wine, and...discovered that there wasn't even one glass left in the bottle? How could that be? The two of us had one glass each the previous day, there are five glasses in a bottle, where did the rest of the wine go?

Turned out they were oversized glasses. When you filled them to a reasonable-looking place, they contained two standard drinks of wine (unlike my previous glasses, which, when filled to a reasonable-looking place, contained one standard drink of wine.) No wonder we got such a good buzz on the previous night! There hadn't been any serious consequences to that little adventure, but what if those glasses had been used to serve someone who had been driving?

This gets me thinking that it would be useful if glasses intended for alcoholic beverages were only available in single standard-drink sizes. Of course, oenophiles would probably complain because they like those oversized bowls so you can get the nose of the wine. So what if there was a line on the glass itself indicating how far to fill it for one standard drink? What if the box they come in or the bottom of the glass was marked with a warning label saying how many standard drinks it holds?

This would probably still garner complaints about the government meddling in commerce and whatnot, so here's a faster and easier solution that should offend no one: the LCBO should give away free glasses. They should be simple but attractive, of decent quality, and sized to make it impossible to accidentally overserve. They should be available in any quantity up to whatever constitutes a normal set of glasses like you might find in a wedding registry. You can just walk in and pick them up, no drama, and perhaps they could even include them with purchases as a value-added bonus at the beginning. Drinking glasses are cheap (I've bought them commercially in a set for as little as 50 cents a glass), the LCBO's profits are high, and hindering accidental overserving surely falls within their social responsibility mandate. The fact that they're given away for free at the place where you go to buy alcohol anyway means that people would have to make more effort to get oversized glasses than to get standard-sized glasses, so more responsible drinking is easier than less responsible drinking.

Personally, I'd still prefer if all alcohol glasses commercially available had to be sized to a standard drink, but I think a lot of people would complain. Giving them away at the LCBO would get the job done for people who don't care what kind of glasses they use and people who do want their glasses sized to a standard drink, without giving those who want non-standard glasses any reason to complain.

Friday, December 16, 2011

What if the library gave patrons credit for early returns?

One thing that surprised me in discussions of the library charging for holds that aren't picked up is the number of people who are annoyed not just by people who don't pick up their holds, but by people who pick up their holds on the last day before they expire, or keep library materials checked out right up until the due date.

I don't consider this a problem myself and I don't know if the library considers it a problem, but nevertheless my shower gave me an idea to address it:

What if libraries gave patrons credit for holds picked up early or books returned early? For example, using amounts that make the math easy and might not necessarily be the optimal ratio, suppose they credit one cent to your account for every day before the deadline that you either pick up a hold or return an item. Late fines are currently 10 cents a day, so this would mean that if you're a cumulative total of 10 days early in circulating your material, that will cancel out one day's late fine.

The big question here is whether circulating material faster is more important to the libraries than the revenue generated by fines. I don't know the answer to that question.

The other question is whether this would motivate people to game the system by taking out material they don't want and returning it right away. This incentive could be partially mitigated by allowing the credits to only offset future fines and you still have to pay fines already incurred. People could still game the system, but how many people are organized enough to game the system in anticipation of future late fines but not organized enough to get their books back in time? I don't know the answer to that question.
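To make the arithmetic concrete, here's a toy sketch of the credit scheme using the post's illustrative figures (1-cent credit per day early, 10-cent fine per day late), working in integer cents. The `settle` function and the event format are my own invention, purely to show how "credits only offset future fines" would play out:

```python
CREDIT_PER_DAY = 1    # cents credited per day early (illustrative figure)
FINE_PER_DAY = 10     # cents charged per day late (current fine in the post)

def settle(events):
    """events: chronological list of ("early", days) or ("late", days).
    Credits offset only fines incurred after the credit was earned;
    fines already on the books must still be paid."""
    credit = 0
    owed = 0
    for kind, days in events:
        if kind == "early":
            credit += days * CREDIT_PER_DAY
        else:
            fine = days * FINE_PER_DAY
            applied = min(credit, fine)  # spend credit against this new fine
            credit -= applied
            owed += fine - applied
    return owed  # total cents owed
```

So being a cumulative 10 days early and then 1 day late nets out to zero, but racking up the fine first and earning the credit afterwards still leaves you owing the full 10 cents, which blunts the gaming incentive.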

But if it turns out it actually is important for the library to encourage faster circulation of materials, this could be a starting point for brainstorming.

Saturday, December 03, 2011

A little less conversation: building better consensus-building

One thing I find absolutely tedious about watching youtubes of Occupy is the people's mike. It takes such a long time to say anything! This also echoes something I find tedious about municipal politics: live, in-person consultations where anyone gets to get up and talk. Again, it takes such a long time! Surely it would be faster, easier, and more convenient to have everyone submit their ideas in writing - reading is faster than talking, and the writing process tends to result in a more organized deputation than extemporizing does.

But, at the same time, there's a certain democracy to everyone getting up and having their say in full that we don't necessarily want to lose. So how can we make the general process of public consultation faster and easier and less tedious without making it less democratic?

Here's what I've got so far:

We start with a whiteboard, which can be either literal, virtual, or metaphorical depending on what's needed. For a set and reasonable period of time, everyone writes on the whiteboard every factor they can think of that needs to be taken into consideration for the issue in question. Each factor only needs to appear on the whiteboard once, no matter how many people think it's important (we'll address the number of people who think it's important in a minute.) So even if every single person in the room thinks it's important for the new widgets to be backwards-compatible with existing widgets, only one person needs to stand up and say so or send in an email saying so for it to get written on the whiteboard.

This is also a question and answer time. Anyone can post or ask a question, and anyone can answer or expand on anyone else's answers. All questions asked and all answers given are recorded on another whiteboard for everyone's review.

After the period of time for contributing to the whiteboard is over, there's a voting period. During the voting period, everyone votes on each factor on two axes: Agree/Disagree and Important/Unimportant. You can cast a neutral vote by abstaining. Once all the votes have been tallied, you can see what the collective's priorities are. Then they can take action to implement everything that gets a high number of Agree and Important votes and avoid everything that gets a high number of Disagree and Important votes. Things voted Unimportant but with a clear Agree or Disagree consensus will be addressed if doing so doesn't interfere with the things voted Important. Things voted Important but without a clear consensus could be subject to further discussion/dissection, or looked at in terms of how they relate to other Important factors with clearer consensus.
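As a sketch of how the tally might sort factors into buckets, here's a toy classifier. The vote encoding (+1/-1/0 per axis), the `quorum` threshold, and the four action labels are all invented for illustration:

```python
def classify(votes, quorum=0.6):
    """votes: one (agree, important) pair per voter, each axis coded
    +1, -1, or 0 (abstain).  Returns a hypothetical action label
    per the two-axis scheme described above."""
    agree = sum(a for a, _ in votes)
    important_share = sum(1 for _, i in votes if i > 0) / len(votes)
    if important_share >= quorum:
        if agree > 0:
            return "implement"   # clear Agree + Important
        if agree < 0:
            return "avoid"       # clear Disagree + Important
        return "discuss"         # Important, but no Agree/Disagree consensus
    return "defer"               # Unimportant: act only if convenient
```

So backwards-compatibility, voted important and agreed by nearly everyone, lands in "implement" after being raised only once, while the lone glow-in-the-dark maverick's idea gets deferred unless others vote it up.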

Whiteboard and voting will be made as accessible as possible. The whole thing could be online if everyone involved has internet access, but if that's difficult for anyone then in-person, telephone, write-in, and any other kind of input method people might require should be allowed.

The enormous advantage of this method would be that it eliminates duplication. Instead of having to hear (or even read) dozens of impassioned pleas on the importance of backwards-compatibility, only one person has to bring it up and the importance will be made clear in the voting phase. At the same time, if one lone maverick is insistent that the widgets should glow in the dark, it's right up there with all the other ideas and will stand or fall on its own merits. If other people think it's a good idea, it could go through even though that one guy doesn't have very much reach.

This method of consensus-building is far from perfect, but I'm putting it out there as a starting point. Improvements welcome.

Saturday, November 26, 2011

What if patients were allowed to deprioritize longevity?

They recently changed breast cancer screening guidelines, reducing screening in areas where it hasn't been proven to reduce mortality.

What bugs me about this is they're only looking at mortality. The reason why I'd be particularly concerned about breast cancer as compared with other cancers is I don't want to lose my breasts. I like my breasts and I want to keep them. If I'm going to be moved to take any particular measures to avoid breast cancer, it's going to be because I want to keep my breasts, not to avoid dying. However, we don't have the information to make that decision. They didn't look at whether early detection reduces the need for mastectomies, or, for that matter, chemotherapy. (I'd also very much like to keep my hair and continue my 17-year non-vomiting record.)

This is similar to my attitude towards GERD. I've been thinking about it pretty much non-stop for the past three months, and I've concluded that I'd very much prefer being able to eat exactly what I want for 100% of my life, even if it means my life is much shorter. I'd rather die at 50 having eaten exactly what I want every single day than live to 100 without eating anything that makes me happy. (Unfortunately, this isn't quite an option, because the disease manifests itself as difficulty eating. If I get esophageal erosion or Barrett's esophagus or esophageal cancer, I will be physically incapable of eating pleasurably.) However, the general medical approach assumes that dietary restrictions are a perfectly reasonable first step in preventing what might ultimately develop into esophageal cancer, and I can't find any sign that medical science is even thinking about working to eliminate the need for dietary restrictions.

As a patient, I'd really like to have the option of choosing to have my medical care not focus on keeping me from dying, and instead prioritize getting the most out of whatever time I do have. (And I want to be able to define "getting the most out of" for myself, so that it includes such fripperies as pleasure and vanity.) This would require not only the consent and cooperation of my medical team, but also the consent and cooperation of medical science. My doctor can't change my breast cancer screening protocol to maximize my likelihood of being able to keep my breasts unless medical science does research into whether screening helps avoid mastectomies, not just prevent death.

At this point, some people reading this are probably thinking "But...I want to avoid death!" And I know that with breast cancer awareness specifically, some people are really bothered by campaigns that focus on the fact that breasts are awesome rather than the fact that cancer can be fatal. So I'm not saying that patients shouldn't be able to prioritize survival and longevity. I'm just saying that we should have a choice. If you want to live to 100 no matter what, medicine should help you. If I don't have a problem with dying younger because it will spare me Alzheimer's, medicine should help me get what I want out of life.

From a disgustingly pragmatic point of view, allowing patients to deprioritize longevity might also save the health system money. Why pour resources into extending the lives of people who don't care if their lives are extended? (You might say "To keep them from dying of something complicated and expensive," but who's to say they won't die of something complicated and expensive decades later anyway? (Someone really should do research on that.)) There's the potential to save a few patient-decades of care with the full consent of the patients, and actually make them happier while doing so.

Friday, November 11, 2011

What if quality of housing counted towards section 37 community benefits?

I was looking at City of Toronto documents for a proposed development, and I was surprised to see that the developer had to contribute a certain amount of money as "community benefits" to various projects in the area. Turns out this is set out in section 37 of Ontario's Planning Act. In basic terms, it means that if developers want more height or density than normally permitted, they have to give something back to the community in exchange. In the documents I was looking at, they suggested contributing money to parks or streetscape projects.

But what if developers could contribute their community benefits through quality of housing?

For example, what if they provided more family-sized suites, or lower prices, or more energy-efficient housing, or some combination of the above? What if they provided some of the suites for use as public housing? What if they reserved a certain number (or even all!) the suites for purchase by owners rather than investors or agents who are just going to buy and flip or rent them out for profit?

As an area resident, I find it beneficial to increase the supply of suites that meet my needs, even if I'm not immediately in the market for moving. If the supply increases, that might drive down prices, thus reducing my rent increase as well as making it easier to buy.

There would need to be measures to make sure that they don't introduce crappy housing as a baseline, upgrade it to normal housing, and call it a community benefit. There also need to be measures to make sure that this better-quality or better-value housing benefits actual residents, rather than getting snapped up by investors.

Off the top of my head, perhaps quality of housing could be measured relative to the rest of the neighbourhood. If it's basically the same as the rest of the neighbourhood, you get fewer points than if you're introducing the first building in the neighbourhood to have central air conditioning. This is analogous to how the City might try to encourage grocery stores to move into neighbourhoods that are food deserts, but wouldn't take any particular measures to encourage grocery stores to move into neighbourhoods that already have a couple of grocery stores.

To keep investors and flippers from yoinking better-value housing, perhaps the amount of community benefit credit the developer gets for building lower-priced units could be based on the number that are still occupied by the original owners after a certain amount of time. The flaw here is that the developers don't have much control over what people do with their units after they buy them, but they do have the power to stop these kinds of marketing techniques and instead focus on the actual community they're becoming a part of.

The dialogue surrounding development and intensification all too often seems to disregard the fact that what they're building are people's homes, and the people who live there will be citizens, constituents, and community members. I'd really like to see analysis of a development's impact on "the community" include the people who will be living there.

Tuesday, November 01, 2011

Half-formed idea: how to incentivize clinical testing of alternative medicine

I previously came up with the idea that they should incentivize clinical testing of natural remedies and other alternative medicine.

Here's about half a solution: everything that has been clinically proven gets covered by OHIP.

The advantage for practitioners of alternative medicine and for patients is that treatment is no longer limited by the patient's budget. Patients can receive - and practitioners can be paid for - what treatment is needed.

The advantage for social responsibility is that this makes it easier to get things that have been tested than things that haven't been tested.

The advantage for OHIP is that alternative medicine would probably be cheaper in many areas. Pharmaceuticals and medical technology can be hellaciously expensive. If herbs or acupuncture can be proven to do just as good a job, even if it's in just 10% of situations, that would save significant money.

This would mean that OHIP would have to cover a wider range of things than it currently does, such as medication and dental care. But that's a good thing - everyone needs those things and they represent significant expenses for people who don't have benefits through their jobs. Broader coverage would be more in line with OHIP's actual mandate.

One change that would be necessary is coming up with a mechanism for OHIP to cover over-the-counter medications. Many of them have been clinically tested, and we don't want to clog up the health care system by forcing people to go to the doctor for a prescription for vitamins or decongestant. But that shouldn't be too difficult to work out. Our health cards have magnetic strips, so why not just swipe them at point of sale?

In this plan, things that have not undergone any clinical testing will still remain available and paid for at the patient's expense, like they are now. Things that have gone through testing and have been proven ineffective but harmless will also continue to be available at the patient's expense. Only things proven to be actively harmful will be pulled. So, for proponents of alternative medicine, there's no downside unless they're peddling snake venom.

The missing link in this plan is still funding and facilities for conducting the research in the first place. It's likely a significant start-up expense and I doubt there are labs just sitting around waiting to be used. They'd still have to work out that part.

Sunday, August 21, 2011

Is medical science working to eliminate the need for virtue?

The lifestyle changes that I've been whining about are considered, both by conventional and alternative medicine, to be the first step in treating the condition. The standard way of thinking is maybe they'll be all you need, and that would be a good thing. Medication and procedures are intended for more extreme cases, where lifestyle changes don't work.

This has me wondering: is anyone in medical science even thinking about it the other way around, i.e. can we invent a medication or procedure that would make the lifestyle changes unnecessary? Just make you not reflux at all, so you can have as much acidic food as you want?

(I haven't done extensive research thus far, but what information I have suggests that medications for GERD are unsustainable in the long term because they can deteriorate your bones, and available surgeries might not necessarily last the rest of your life and might need to be redone. If you know of a medication or surgery that actually does stop GERD without lifestyle changes, please post it in the comments, I beg of you!)

This also reminds me of smoking. If you smoke, you're supposed to quit. There are tools to help you quit. But is there, or is medical science working on, a way to counter the harm done by cigarettes? Smoke a cigarette and then take an anti-cigarette pill or something?

I've never heard of anything like this for anything.* Is that because science hasn't yet figured out how?

Or is that because of the Protestant-work-ethicish societal attitude that we should all just Be Good and Virtuous if we want our lives to work well?

I find myself wondering if that's true. So many of the people I've whined to were all "Oh, it's no big deal, you just have to make a few changes." But that's what's making me unhappy!

You'd think capitalism and big pharma would get behind this. Now, instead of people buying cigarettes, they can buy cigarettes AND anti-cigarette pills. Come on, get on it, our economy needs a boost!

*Update: I can think of one example: the morning-after pill. Another possible example is insulin, but I don't think diabetes management is quite up to the point where you eat whatever you want and then take the corresponding amount of insulin. Unless, of course, it is, in which case more power to you!

Tuesday, July 19, 2011

The problem with conventional thinking about machine translation

Reading In the Plex, Steven Levy's fascinating biography of Google, I came across the following quote from machine translation pioneer Warren Weaver:

When I look at an article in Russian, I say, "This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode."


I can tell you with absolute certainty that this is incorrect, and people who don't find themselves able to get past this way of thinking end up being very poor translators.

A more accurate approach would be "This idea really exists in a system of pure concepts unbounded by the limits of language or human imagination, but it has been coded in a way that one subset of puny humans can understand. I will now encode it for another subset of puny humans to understand."

To translate well, you have to grasp the concepts without the influence of the source language, then render them in the target language. You're stripping the code off and applying a new one.

If you translate Russian to English by assuming that the Russian is really in English, what you're going to end up with is an English text that is really Russian. Your Anglophone readers will be able to tell, and might even have trouble understanding the English.

The Russian text is not and has never been English. There's no reason for it to be. The Russian author need never have had a thought in English. He need never have even heard of English. Are your English thoughts really in Russian? Are they in Basque? Xhosa? Aramaic? Of course not! They're in English, and there's no need or reason for them to be in any other language.

This is a tricky concept for people who don't already grasp it to grasp, because when we start learning a new language (and often for years and years of our foray into a new language) everything we say or write in that language is really in English (assuming you're Anglophone - if you're not, then, for simplicity's sake, mentally search and replace "English" with your mother tongue for the purpose of this blog post). We learn on the first day of French class that je m'appelle means "my name is". But je m'appelle isn't the English phrase "my name is" coded into French. (If anything is that, it would be mon nom est.) The literal gloss of je m'appelle is "I call myself", but je m'appelle isn't the English idea "I call myself" coded into French either. If anything, it's the abstract idea of "I am introducing myself and the next thing I say is going to be my name" encoded into French. The French code for that concept is je m'appelle, the English code is "my name is".

I'm trying to work on a better analogy to explain this concept to people who don't already grok it, but here's the best I've got so far:

Think of the childhood game of Telephone, where the first person whispers something to the second person, then the second person whispers what they heard to the third person, and so on and so on until the last person says out loud what they heard and you all have a good laugh over how mangled it got.

What Mr. Weaver is proposing is analogous to trying your very very best to render exactly what you heard the person before you say.

But to grasp concepts without the influence of language and translate well is analogous to listening to what the person before you said and using your knowledge of language patterns and habits to determine what the original person actually said despite the interference.

Which defeats the purpose of Telephone, but is the very essence of good translation.

Monday, July 18, 2011

This should be a tweet, but I can't get it down to 140

I find myself wondering how people who truly, genuinely believe in and fear hell can bring themselves to have children. Because bringing a child into a world where hell exists introduces the possibility that the kid will go to hell someday.

I did once yearn to have children, I did once genuinely fear hell, and I do have your basic adult hormonal child protection instincts, which I'd imagine are massively stronger when it's your own child.

It's perfectly normal protective instincts to be willing to risk one's life to save one's child's life. But, for those who believe in it, the threat of far is vastly worse than the threat of death. Death is a sudden extinguishing of life, while hell is eternal torture without hope of reprieve. Religious traditions with a strong fear of hell do tend to contain the idea that it's your religious duty to have children. But if any parent would risk their life for their child's life, wouldn't they also risk hell to save their child from hell?

It is true that parents tend to think "But MY child will be GOOD," but your basic human decency isn't usually enough in hellfearing religions. Religious traditions with a strong fear of hell also tend to make it difficult to get into heaven. The slightest lapse of virtue can send you to hell, and in some cases even a virtuous life with improper rites can send you to hell. Thinking back to my previous mindset of hellfear and adding protective instincts, the risk of having a child go to hell far outweighs the biological/hormonal yearning to have a baby and any other benefits of procreation that I can think of.

I wonder what other factors there are for hellfearing parents that outweigh even the horrors of hell?

Sunday, July 03, 2011

Building a better long-term care model

Reading this article about a woman who doesn't want to leave the hospital because there are no openings in her preferred nursing homes and she doesn't want to go to the first available nursing home, I've been thinking about how to improve the current system. Here's what I've come up with:

What if being transferred to a long-term care facility didn't ever have to be final?

You can make a list of the facilities you want and rank them in order. You're immediately put into the available facility on your list that you've given the highest ranking. If none of the facilities on your list are available, you're put into the first available bed.

HOWEVER: After you get put into the first available bed, you're still in line for the facilities on your list. When a bed becomes available in one of them, you get moved there. And even after you're placed in a facility on your list, you still get informed of openings in facilities that rank higher on your list with the option of transferring there. In other words, if your #3 facility has an opening first so you're placed there, but then your #1 facility subsequently has an opening, you get the option of transferring to your #1 facility.

You can put however many facilities you want on your list, and rank them however you want. You can have every facility in the province ranked in order of preference, or you can have 12 facilities ranked equally, or you can have 2 facilities in first place and 5 in second place, or whatever you want.
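The placement rule above can be sketched in a few lines. The facility names, the rank encoding (1 = best, ties allowed, as in the post), and both function names are hypothetical:

```python
def best_open_choice(ranked, open_beds):
    """Return the highest-ranked facility on the patient's list with an
    open bed, or None.  `ranked` maps facility -> rank (1 = best);
    equal ranks are allowed and broken arbitrarily."""
    candidates = [f for f in ranked if open_beds.get(f, 0) > 0]
    return min(candidates, key=lambda f: ranked[f]) if candidates else None

def place(patient_ranked, current, open_beds):
    """Move the patient whenever an opening outranks their current spot.
    A first-available bed off the patient's list ranks below everything,
    so any listed opening triggers a transfer."""
    choice = best_open_choice(patient_ranked, open_beds)
    if choice is None:
        return current
    current_rank = patient_ranked.get(current, float("inf"))
    return choice if patient_ranked[choice] < current_rank else current
```

Running this repeatedly as beds open captures the "never final" idea: a patient parked in the first available bed moves to their #3 choice when it opens, and later moves again when #1 opens.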

Possible variations:

- Patients who are currently placed in a facility that's not on their list have precedence over patients who are currently placed in a facility that is on their list. For example, if I'm currently in my #5 choice and you're currently in the first available bed in a facility that isn't on your list, and I'm ahead of you on the waiting list for a facility that's #1 on both our lists, you get admitted to that facility first.
- Exceptions can be made to the "first available" rule under specific circumstances (for example, if the first available facility has been found in violation of regulations, if the first available facility is inaccessible to the patient's support people, etc.)
- The patient (or, if the patient is not competent to make decisions for themselves, their representative) can veto any placement proposed under this system.
- Before the patient loses their faculties, they can include in their power of attorney guidelines dictating circumstances under which the representative may or may not change the patient's priority order. For example, the representative might be permitted to change the patient's priority order if a new facility is built that didn't exist back when the patient was still competent, but might not be permitted to make changes solely for financial reasons.

I have no idea how much of this is or isn't a good idea. It seems like it would get appropriate care to as many people as possible and preferred care to as many people as possible, but there could easily be flaws that I'm not seeing. It would be cool if they could do projections on this model and see if it would work.

Thursday, June 30, 2011

Building a better Senate

As I've blogged about before, there are things I like and dislike about the Senate. The things I like tend to be those that provide a counterpoint to the House of Commons, while what I dislike tends to be when the Senate blindly rubber-stamps the House of Commons, without using the safety net afforded by its unelected nature to provide true sober second thought. So I was disappointed that recent Senate reform proposals would either make the Senate more like the House of Commons, with no clear differentiation between the two, or simply abolish the Senate outright without introducing sober second thought elsewhere.

So I've started brainstorming some ways to make the Senate more of what I like about the current system with less of what I dislike about the current system and about others' ideas for reform. Here's what I've come up with so far:

What if senators had to be non-partisan?

Currently, senators are affiliated with a political party (generally the party whose prime minister appointed them, although I think there might be a few individual exceptions). This is a hindrance to sober second thought when they vote along party lines.

What if we took the complete opposite approach and outright prohibited partisanship in senators? They aren't allowed to be members of political parties, they aren't allowed to donate or work in support of parties or candidates, and people who have engaged in these activities within a certain period of time before appointment are not allowed to be senators. These kinds of standards exist for certain types of high-profile or influential public service positions, so it seems feasible to extend them to make a non-partisan senate.

What if senators could not serve under the prime minister who appoints them?

A problem with the current system is that senators might feel beholden to the PM who appoints them. To solve this, what if prime ministers appointed senators to replace those retiring under the next mandate? Under this model, Stephen Harper would look at which senators will be retiring in 2015-2019, and come up with a short list of possible replacements. The flaw in this plan is that prime ministers can serve multiple terms, so it might not be entirely effective.

What if senators were drawn out of a hat?

Currently, prime ministers appoint one senator to fill each senate vacancy. What if, instead, they selected a number of people for a senate candidacy pool, and then whenever a vacancy comes up, they draw a name at random from this pool of candidates? All candidates appointed by all prime ministers remain in the pool until they reach the age of 75, unless they do something really bad that merits elimination from the pool (this would be carefully defined in the law.)
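The draw itself would be simple. Here's an illustrative sketch, with made-up data structures; the 75 cutoff is the Senate's actual mandatory retirement age, and removed_for_cause stands in for whatever the carefully defined disqualifications turn out to be.

```python
import random


def fill_vacancy(pool, current_year):
    """Draw one name at random from the standing candidate pool.

    Candidates age out at 75 (the Senate's mandatory retirement age);
    removed_for_cause models the "really bad" disqualifications that
    would be carefully defined in the law."""
    eligible = [c for c in pool
                if current_year - c["birth_year"] < 75
                and not c["removed_for_cause"]]
    if not eligible:
        return None  # pool exhausted; more candidates would need to be named
    chosen = random.choice(eligible)
    pool.remove(chosen)  # now a sitting senator, no longer in the pool
    return chosen
```

Because candidates from all prime ministers stay in the pool until they age out, any given vacancy could be filled by someone named by a long-gone government, which is part of what breaks the appointer-appointee bond.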

What if senators were picked at random from the general population?

Instead of the prime minister appointing people, what if we had "senate duty" along the lines of "jury duty"? People are selected at random from the voter list and told to report to Ottawa for a year, or five years, or some other defined term of senate duty. Relocation expenses are covered, you get a senator's salary, and maybe there's a rule that your employer has to hold your job for you, like the job protection given to military reservists. I can make an argument for making senate duty mandatory, and I can make an argument for letting people who aren't interested simply opt out. There would also have to be a way to screen out people who aren't mentally competent, etc., although something like that probably already exists for jury duty.

What if Senate votes were secret?

It would certainly be more difficult for senators to be beholden to their political masters if no one knew how they voted (or perhaps even how many votes each bill received for and against). This might sound like a bad thing because it's less transparent, but it would also provide a counterpoint to the House of Commons, where votes are open and party discipline applies.

Monday, May 23, 2011

What if they taught noblesse oblige in school?

I first learned about the concept of noblesse oblige in sociolinguistics class, when we were studying U and Non-U. To give us some context, the prof talked to us about the British conceptualization of old money (generally titled nobility) vs. the nouveau riche. The most memorable example she gave was that titled nobility would wear an extremely good quality cashmere sweater that they bought 20 years ago, while the nouveau riche would ostentatiously buy the trendiest new clothes every year. I found the noblesse oblige concept appealing, and I try to work it into my own life on the few occasions when I can find an opportunity to do so.

A number of things recently have made me wonder "Haven't they ever heard of noblesse oblige?" Some of this comes from politics, some of it comes from my recent readings on bullying theory. Most recent was from this article:

At one Southern school, some popular kids keep the price tags on their clothing so that classmates can see that they paid full price at a nondiscount store.


WTF? Haven't they ever heard of noblesse oblige?

Actually, they probably haven't. I first met the concept in an upper-year university sociolinguistics course, so why on earth would I think schoolkids should have heard of it?

But wouldn't it be useful if it were a more widely-known concept? What if they taught it in school?

Obviously they can't teach it as a thou shalt - that would come across as lecturey and sanctimonious and would never work. It would have to be closer to how I was introduced to it, simply "This is a thing that exists. Nobility does it."

So how would you do that? First thing that comes to mind is in a novel. For one or more of the books everyone reads in English class, pick something where noblesse oblige is a plot or character point. Appealing protagonist characters demonstrate noblesse oblige, and unappealing antagonist characters fail to do so. It wouldn't be the whole moral of the novel, just an underlying thread, like how entails are an underlying thread of Jane Austen novels but the books are far more than just a lecture on the follies of entails. That would introduce people to the concept of noblesse oblige in a non-lecturey way, and maybe the idea would stick with some people and help make the world a better place in the long run.

Saturday, May 14, 2011

Questioning the illegality of assassination

I was surprised, when reading Noam Chomsky's reaction to Bin Laden's death, to learn that assassination is illegal under international law. It surprised me because all-out war can be perfectly legal under international law, and war is far messier and hurts far more people than assassination. I googled around and it seems to be true, and I also have a vague memory, from 2001 when Canada first joined the occupation of Afghanistan, of asking why we couldn't just assassinate Bin Laden instead and being told that it was illegal.

I think we need to rethink this. It just doesn't seem right that it would be illegal to, say, send in a small team of spooks to neatly assassinate Gaddafi, but World War I was perfectly legal. Why should it be legal to kill thousands, even millions, of soldiers and civilians and destroy infrastructure and livelihoods, but illegal to sneak into some despot's compound and off him in his sleep?

I'm certainly not saying that people or countries should be allowed to kill people and then get a get out of jail free card by calling it assassination, or that assassination is even objectively a good thing, at all, ever. I'm just thinking it might be a less unpalatable shade of grey than full-out military action.

In his article, Mr. Chomsky says:

We might ask ourselves how we would be reacting if Iraqi commandos landed at George W. Bush’s compound, assassinated him, and dumped his body in the Atlantic.


And his point, that the American people would not be best pleased with that development, is, of course, correct and valid. But I suspect the American people would be even less pleased if war were declared on the whole country and millions of innocent civilians found themselves bombed out and under military occupation when the occupying force really just wanted that one guy.

Perhaps it would be useful for international law to create a framework inside which assassination can be legal. Perhaps countries who want to assassinate someone could go before an international court and get an assassination warrant. (Q: But then wouldn't the target know they're about to be assassinated? A: Are there any plausible targets for assassination who aren't already assuming someone wants to assassinate them?) As a starting point, I propose that, in any situations where war or other military occupation would be legal, targeted assassination should also be legal (and military action should not be a prerequisite to targeted assassination.) Perhaps, before military action could be considered legal, the initiator should have to justify why targeted assassination isn't an option.

I'm certainly not under the impression that military actions normally stick to the letter of international law in the first place, but nevertheless, even if just for form's sake, the action with the less harmful outcome should be just as legal as the action with the more harmful outcome.

Monday, May 09, 2011

How to make me conservative

I've been watching Jonathan Haidt's TED talk on the moral roots of liberals and conservatives, and I realized that I actually have quite a lot in common with conservatives. I don't have a high level of openness to new experiences. I like things that are familiar, safe, and dependable. Mr. Haidt says that liberals "want change and justice, even at the risk of chaos" and conservatives "want order, even at cost to those at the bottom." I don't necessarily want change, except when it's necessary for justice or to improve things. I wouldn't say "at the risk of chaos"; the strongest I'd go is "at the risk of reasonable sacrifice." I rather like order as well (although not when it's code for authoritarianism), just not at the cost of anyone - especially not those at the bottom! Overall, I like the rut I'm in and would very much like to stay here. My politics come from my personal desire not to have my comfy rut taken away, and my socialist value that anyone who would like to do so should be able to enjoy the same benefits from the status quo that I do.

The more conservative people around me seem to think that I should be more conservative, and based on Mr. Haidt's theories it seems like the potential is in me. So why am I not there?

I've been thinking about this for a while, and I think it comes down to two things: the status quo is not satisfactory, and enough people who identify as conservative want to change the aspects of the status quo that I find positive to make me nervous. I am naturally inclined to unquestioningly accept the status quo, and to fiercely cling to the aspects of it that I see as positive. Eliminating threats to the positive aspects of the status quo is the most likely way to make me conservative.

So what does that mean in specific terms?

1. Good jobs for all. Employment gives me money, which buys me my comfy rut. If I could be confident that my earning potential (along with that of people I care about, people I identify with, and people I look at and think "there but for the grace of god go I") is not going to vanish due to circumstances beyond my control, I could feel safe and secure enough to be conservative. However, as long as the status quo is moving towards contract hell for all, I will be disinclined to protect the status quo.

2. Maintain our rights. Everything else that I value about the status quo can fall under the broad category of retaining our existing rights, and everything that I want to change can be defined as either expanding existing rights to everyone, or restoring rights that were eliminated in living memory. I feel secure because I have access to all the tools I need to remain childfree, and I want that available to everyone. I feel terrified that the police could just round up everyone who happened to be in a particular area of a public street during the G20, and I want to go back to a world where that couldn't happen. If I could be confident that my rights (along with those of people I care about, people I identify with, and people I look at and think "there but for the grace of god go I") are not going to vanish due to circumstances beyond my control, I could feel safe and secure enough to be conservative. However, as long as the status quo includes people very loudly trying to take them away, I will be disinclined to protect the status quo.

Friday, April 08, 2011

What if schools were evaluated on long-term results?

I was reading this article on the problems with standardized tests, and it got me thinking about more effective ways to evaluate education. And it occurred to me that the true measure of education is long-term results.

For example, my high school was rather proud of the fact that 80% of its graduates went on to university. But what percentage made it past first-year university? We don't know. If, hypothetically, only half of us made it past first-year university, there's probably something wrong with the high school. And a high school where only 60% go to university but they all graduate is probably doing better.
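Running the post's own hypothetical figures makes the point concrete (a quick back-of-the-envelope check, nothing more):

```python
# Fraction of each school's graduates still on track after first-year university.
school_a = 0.80 * 0.50  # 80% go to university, but only half make it past first year
school_b = 0.60 * 1.00  # 60% go to university, and all of them make it through

# By admission rate alone, school A looks better (80% vs 60%);
# by the longer-term measure, school B comes out ahead (60% vs 40%).
```

The ranking flips entirely depending on whether you stop measuring at graduation or keep measuring a year later.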

Obviously, there are many problems with using long-term results. You'll lose track of some people, and you're more likely to lose track of students who have slipped through the cracks. It doesn't signal problems until it's far too late to do anything about them. It introduces the likelihood that outcomes will be affected by variables beyond the school's control.

But still, it seems relevant. It would be so useful if they could figure out a way to incorporate long-term outcomes as part of the evaluation.

Saturday, February 19, 2011

Theory: insecurity in one's own philosophy is the root of all evil

I blogged recently about how various patriarchal cultures are operating suboptimally essentially as a result of the patriarchs' insecurity in their own philosophy.

It occurs to me that many of the evils of the world are the result of powerful regimes being insecure in their own philosophies.

I was recently in a conversation with someone who felt the need to expound at length upon why communism is bad. But none of the examples they gave had anything to do with the actual social/political/economic practices that constitute communism. Instead they were on about the Stasi and gulags and propaganda - things that communist countries did because they were insecure in their philosophy. If they had trusted their philosophy, they wouldn't have needed all this apparatus that they used to hurt people and ruin people's lives. And if they weren't pouring so many resources into assuaging their insecurity, they'd have had more to put into making their actual social and economic model work.

The evils that result from religion are similar. The problems happen when religions try to force themselves on people who aren't interested, start wars with other religions, and try to colonize countries and impose their values upon legislation. If they truly were secure in their dogma, they could just quietly go about life, letting the benefits of their religion speak for themselves. And if religions didn't go around trying to force themselves on others, fewer people would perceive other religions as threats. Even I, as a recovering Catholic, think I could appreciate the beauty and history of my former religion if it would stop trying to infringe upon my life as a private citizen.

Wednesday, February 16, 2011

My theory, which is mine

I always advise fellow translators to use a more specific preposition than "regarding" (or synonyms thereof). I feel that "regarding" forces the reader to make some effort to figure out how the two elements are related to each other, and if you can use a more specific preposition, then the reader doesn't have to make this effort.

However, I have also begun to think that using no prepositions whatsoever, by piling the elements together as a noun phrase or something similar, might make it even more effortless for the reader. This obviously wouldn't work for non-Anglophones (at least not non-Anglophones coming from Romance languages), but I really do suspect noun phrases scan more effortlessly for Anglophones. Perhaps it's because it implies to the reader that they're closely familiar with the subject matter, giving them a sort of false reassurance.

Specific (fake) example:

"The problem regarding the umbrellas"
takes more effort to read than
"The problem with the umbrellas"
takes more effort to read than
"The umbrella problem"

Strictly speaking, they all provide the same amount of information. If someone is completely unfamiliar with whatever the problem with the umbrellas is, calling it "the umbrella problem" isn't going to help them. But if they already have the information they need to understand "the problem regarding the umbrellas", then "the problem with the umbrellas" or "the umbrella problem" will be more effortless to read and understand.

Is this consistent with your experience with the English language?

(Anonymous comments welcome, non-Anglophone comments welcome, but if English is not your first/primary language please tell me what is.)

Tuesday, February 08, 2011

How to buy better school performance with one simple tweak

I've read in a number of places that one approach to improving school performance is to offer money to schools who improve, or offer the most money to the schools who improve the most.

I'm not sure whether or not that approach would work, but here's a simple tweak to maximize its effectiveness: give some of that money to the students.

All students get some money. Students who pass get more money than students who fail. The highest-performing students get more money, but the most improved students also get more money. The highest-performing student in the school and the most-improved student in the school get exactly the same amount of money. Maybe the money baseline could increase with each grade, so that you never get less money than last year for getting exactly the same marks (i.e. if a D student pulls their average up to a B in Grade 10 and gets a shitload of money for improvement, we don't want them to get less money for maintaining a B in Grade 11).
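As a sketch of those rules, with entirely made-up dollar amounts (the post specifies only the relative rules, not any numbers):

```python
def payout(grade, mark, improvement, top_mark, top_improvement,
           last_year_mark=None, last_year_amount=0):
    """Toy payout rules mirroring the scheme described above:
    - everyone gets something, and the baseline rises with each grade;
    - passing pays more than failing;
    - the school's top performer and most-improved student get the
      same (largest) bonus;
    - maintaining last year's marks never pays less than last year."""
    BASE, PER_GRADE, PASS_BONUS, TOP_BONUS = 100, 50, 100, 500  # arbitrary knobs

    amount = BASE + PER_GRADE * grade          # baseline rises with each grade
    if mark >= 50:
        amount += PASS_BONUS                   # passing beats failing
    if mark >= top_mark or improvement >= top_improvement:
        amount += TOP_BONUS                    # same bonus for both top categories
    if last_year_mark is not None and mark >= last_year_mark:
        amount = max(amount, last_year_amount)  # floor: same marks, no pay cut
    return amount
```

So the D student who jumps to a B in Grade 10 collects the most-improved bonus, and the floor clause keeps their Grade 11 cheque from shrinking when they merely hold the B.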

A school can only be successful if it elicits the desired behaviour in its students. School administrators and teachers already want students to show the desired behaviour, if only because it makes their lives easier. If financial incentives are effective and appropriate (and I'm not sure whether or not they are), why not give at least part of them to the group on the front lines actually producing the results being evaluated?