I was invited, along with the rest of the CMU community, to attend, in February, an "Intelligence Seminar" on "Intelligent Preference Assessment: The Next Steps?" This was to be led by Craig Boutilier, Professor and Chair, Department of Computer Science, University of Toronto. I didn't make it. I'll tell you why.
Let me give you the first sentences of the invitation:
"Preference elicitation is generally required when making or recommending decisions on behalf of users whose utility function is not known with certainty. Full elicitation of user utility functions is infeasible in practice, leading to an emphasis on approaches that a) attempt to make good recommendations with incomplete utility information; and, b) heuristically minimize the amount of user interaction needed to assess relevant aspects of a utility function."
Got that? Good. Let me know what it means. I was hung up on the first two words and didn't do too well after. I guess my utility function wasn't known with certainty. Maybe it just wasn't functioning.
I put this before you not because I attempt to make recommendations with an incomplete utility function, or to heuristically minimize the amount of user interaction. I put it before you to show you how not to communicate.
I know, I know, you're going to say that the writing is intended for a technical audience and that the technical audience will understand Dr. Boutilier when he goes on to say, "Current techniques are, however, limited in a number of ways: (i) they rely on specific forms of information for assessment; (ii) they require very stylized forms of interaction; (iii) they are limited in the types of decision problems that can be handled."
Well, I will respectfully disagree. You see, Dr. Boutilier begins his second paragraph in English. He says, "In this talk, I will outline several key research challenges in taking preference assessment to a point where wider user acceptance is possible. I will focus on three three (sic) current techniques that we're developing that will help move in the direction of greater user acceptance. Each tackles one of the weaknesses discussed above." Kinda sounds human, doesn't it?
Academic writing typically takes the form you see above. The author, desiring greatly to impress his/her colleagues, begins by using the longest words (typically jargon) and longest sentences he/she can imagine. After preening his/her feathers and puffing the pecs, the writer experiences a reality check and writes reasonably understandable language, knowing that if he/she is completely unintelligible, the students may stay away (at least those who have not been forced to attend by their teachers).
Then, finding himself (or herself) becoming reasonably intelligible, the writer, again conscious that the peers are watching, moves back to more jargon and highfalutin' language. To wit: Dr. Boutilier continues: "The first two techniques allows (sic) users to define 'personalized' features over which they can express their preferences. Users provide (positive and negative) instances of a concept (or feature) over which they have preferences."
This kind of language exists to impress rather than to express. It is used to exclude rather than to include. And it works beautifully. Fortunately, many scientific writers have the confidence not to write this way. They write to express; they write to communicate; they write to include. They craft their writing in plain language: short words, short sentences, and short paragraphs, with only the necessary technical terms. They strive for a response that says, "I see!" (imagine a light bulb going on over the reader's head), not a response that says, "Say what?"
BTW, I'm open to anyone who can translate the first sentences for us!