Category Archives: philo

One year as an ex-academic

I realized the other day that it’s been just over a year since I quit graduate school, and by extension gave up on a life in academia. It was around the middle of April 2011 when I submitted my resignation letter. (I was spending the month teaching a course at my alma mater, and I felt more than a twinge of poignancy at the fact that I wrote the letter just a few hundred feet away from where I’d written my acceptance letter about nine years earlier, during my last semester of college.) Anyway, I thought I’d take a moment to reflect on how the year – my first spent as a non-student since I was four years old! – has gone.

The most personal of my fears about leaving graduate school had to do with the way I define myself. I was proud to think of myself as the academic type, and doubly proud to think of myself as a philosopher. (Observation: when someone who wears many hats does some work in philosophy, it’s common to see “philosopher” move to the front of their list of self-descriptors. Quel prestige!) I’d spent about a third of my life under a certain avatar (in name if not always in practice), and I was nervous about casting it aside. How would I think of myself? How would I describe myself to others?

It turned out that the transition of self-definition was much less difficult than I’d feared. There are a few reasons why. First, from a practical point of view, I wasn’t really that engaged in my academic work anyway. The second half of my graduate career was characterized by attempt after semi-subconscious attempt to distract myself from the philosophical tasks at hand. These attempts were overwhelmingly successful, with the result that even when I did devote large amounts of time to doing philosophical work – which happened in fairly infrequent but significant bursts – my heart wasn’t really in it. A second, related reason for the easy transition is that, as a result of years of productive distraction, I had a number of alternative, and more meaningful, identities on which to fall back: software/web developer, educational technologist, teacher, etc. I imagine that such a transition would be far more difficult for someone who didn’t have viable alternatives.

So, from the internal point of view, the transition from academic to ex-academic went more smoothly than I’d hoped. The external transition – how the change has affected my relationships with others, or at least how I perceive the relationships – has been a little bit harder.

The thing is that, while I’m not one of the academics anymore, I spend much of my time with them. Most of my professional work is for universities, and many of my friends and co-workers are tied to schools in one way or another. Thus, I haven’t been able to quit the academic world cold turkey: I still have to go to meetings, deal with institutional BS, navigate political obstacle courses, etc. These are the crappy parts about working in universities, and becoming an ex-academic hasn’t made them any better in my case. (Admittedly, this is because of choices I’ve made to continue working where I work. I have friends who have left to go into, eg, banking.)

In addition to the more obvious annoying bureaucratic details of working within the university, there are the negative social aftereffects of dropping out: I don’t have a PhD, and never will, which means that I’m viewed (or at least, I feel like I’m viewed) in a different way. My opinion on academic matters just doesn’t matter in the same way anymore. I suppose that’s as it should be: the more time I spend away from the day-to-day of the university, the less relevant my opinions about that day-to-day become. Thus, when doing development work with universities, I’m a developer with some helpful experience in academia, rather than a technically-inclined academic. I should note that there have never been any specific instances where I’ve been called out on this distinction, or where it’s had an obviously negative effect on a relationship, but it’s always there in the background. (On a related note, I’ll probably never have a job where I, say, lead an academic computing program – but that’s not something I really want anyway.)

Thus, while I hate to sound sourgrapesesque about it, I haven’t lost much of anything by dropping out. I’m still heavily engaged in enabling the functions of the university that I find most important: teaching and scholarship. It’s just that I do it one or two levels of abstraction higher than when I was in the classroom or the library, and maybe it’s just as well.

On a personal level, the gains of dropping out have been enormous. Not only do I no longer devote any time to working on a project that I’m not really invested in (the dissertation), but I also no longer feel the crushing weight of the unfinished dissertation in my spare time. In the past year, I’ve read more broadly than ever in my life, discovering and developing areas of interest that I would never have dared to devote time to. I’m a new dad and I work from home, which means that I’ve been able to be the kind of dad I’d always hoped I’d be, without feeling guilty about the work I “should” be doing. I’m making an amount of money that is directly connected to the quality of my work, a startling and frankly disarming contrast to the way things seem to operate in universities, especially in the work I did as an adjunct. It’s been about a year since I intentionally read anything explicitly philosophical, but recently I’ve started to feel that itch again – and when I pick up a book or article, it’ll be because of an independent interest, rather than because it’s in an Important Person’s Bibliography.

So for me, quitting graduate school has been a nearly unmitigated success. It took years to work up the courage, and to develop the alternative paths that would make quitting feasible, but once those factors were in place, it was really the best decision I could have made.

The GPL is for users

The GNU General Public License (aka the GPL) is for users. This observation seems so obvious that it needn’t be stated. But for those who develop software licensed under the GPL (like WordPress and most related projects), it’s a fact that should be revisited every now and again, because it has all sorts of ramifications for the work we do.

Users versus developers

What do I mean when I say that the GPL is “about users”? Who are “users”? We might draw a parallel between software and books. Books have readers (hopefully!), and they have authors. Authors read too; proofing is a kind of reading, of course, and one might argue moreover that reading is an inextricable part of writing. Yet when we talk about a book’s “readers” we generally mean to discount its author. ‘Readers’ in this sense is a gloss for ‘just readers’, that is, those readers whose relationship to the book is limited to reading. The situation with software is more complex, but roughly the same distinction can be made between users and developers. ‘Developers’ refers broadly to those people involved in the conceptualization and implementation (and also often the use) of a piece of software, while ‘users’ refers to those who just use it.

My reading of the GPL is that it’s heavily focused on users. (References to the GPL throughout are to GPL 3.0. You can find older versions of the license, such as version 2, which ships with WordPress, on GNU’s website.) Take the opening line from the second paragraph of the Preamble:

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program–to make sure it remains free software for all its users.

Here as elsewhere in the text of the GPL, no real distinction is made between “you” as it refers to developers and “you” as it refers to users. Closer analysis makes it pretty clear, though. Take, for example, the freedoms that are purported to be taken away by proprietary licenses: the freedom to “share and change” software. Developers – or, to be more specific, license holders, who are generally either the developers themselves or, in the case of work for hire, the people who paid for the software to be developed – generally do not restrict their own rights to share and change the software that they create. Instead, restrictions are imposed on others, the (“just”) users.

Similar reasoning applies to the core freedoms that are outlined in the Free Software Definition, a sort of unofficial sister document of the GPL, also maintained by the Free Software Foundation. The four freedoms:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

On the face of it, freedoms 1 and possibly 3 are focused on developers, in the sense of “those who are able to write code”. But, with respect to a piece of software that they did not write and whose license they do not control, coders are just regular users (in the same way that Vonnegut may have been a “reader” of Twain). All four freedoms, indeed, are user-centric. The license holder, almost by definition, doesn’t need permission to use the code (0); the developer doesn’t need to study the code to know how it works (1); owners can redistribute at will (2); owners can modify and redistribute at will (3). It’s only in the context of users – those who did not write the software – that these freedoms need protection in the form of free software licenses like the GPL.

The GPL does make a few explicit provisions for the developer/license holder:

For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

The second provision is a sort of legal convenience; the first intends to ease what may otherwise be a prohibitive consequence of the core freedoms guaranteed by the rest of the GPL. Both are important and valuable. But it seems fair to say that they are secondary to the user-focused parts of the document, at the very least because they are motivated by other parts of the document, while user freedom needs independent justification.

There’s no question that the people who bear the brunt of implementing and upholding the GPL are software developers. In that sense, the GPL is very much “for” them. But, in a broader sense, that’s a bit like saying that school is “for” the teachers because the teachers play a key role in education. Schools are for children; they provide the motivation and justification for the whole enterprise. Similarly, the GPL is for users; if everyone wrote their own software, and there were no “just users”, the GPL (or any free software licenses, or any licenses at all) would be unnecessary.


If I buy a pizza, I trade ownership of money for ownership of pizza. Once I have the pie, I can do pretty much whatever I want with it. I can eat the whole thing myself, I can share with a friend or two, I can throw it on the sidewalk. I can save the pizza in hopes that prices rise so that I can make a quick buck in a resale, I can retail off the individual slices, or I can give the whole thing away. I can’t use the pizza to solve world hunger (not because I’m not allowed, but because it’s not possible); I can’t use the pizza as a deadly weapon (not because it’s impossible, but because I’m not allowed). In short, ownership bestows certain rights. Not all rights – I don’t have the right to murder with the pizza, or to do impossible things with it – but many, even most of them.

The situation is more complex with intangible goods; especially those, like software, which can be reproduced without cost or loss. Copyright law in the United States (so far as I understand it; IANAL etc), in accordance with the Berne Convention, grants rights over intellectual and creative works to the authors automatically, at the time of creation. Thus, if I write a piece of software (from scratch – set aside issues of derivative work for a moment), I am granted extensive rights over the use and reuse of that piece of software, automatically, in virtue of being the author. That includes copyright – literally, the rights related to the copying and distribution of the software. In short, the default situation, for better or for worse, is for the developer – and only the developer – to possess the rights and freedoms enumerated by the Free Software Definition. By default, nothing is protected for the users.

Free software licenses exist in order to counteract this default scenario. But keep in mind what that means: When a developer releases a work under a license like the GPL, certain freedoms and rights are granted to users, which necessarily restricts the freedoms of the developer. The GPL admits as much:

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

“Responsibilities” is a nice way of putting what is essentially the stripping of certain rights (in the same way that, once you become a parent and thus responsible for your child’s well-being, you no longer have the right to go on a week-long bender). Once the software is released under the GPL, the original author has lost the right of exclusive distribution of the original software. Subsequent developers, those who modify and redistribute the software, are similarly restricted.

It’s a trade-off. Users get certain rights (viewing source code, copying, modifying, redistributing) because the developers have given up the default right of exclusivity. Examined in itself (without reference to subsidiary benefits for the moment), the trade-off is clearly made for the benefit of the users, and involves sacrifice on the part of the developer, sacrifice which is usually quantified in monetary terms (Bill Gates didn’t get rich by writing open source software), but could also be associated with pride in being the sole author, etc. There are, in addition to this, secondary sacrifices involved in free software development (loss of identification with the software because of modifications or forking, less guaranteed income than in a proprietary development shop, increased support requests that come from wider use of a free-as-in-beer product [though the GPL explicitly says that you can charge what you want, and that no warranty is implied]). To some extent, these secondary sacrifices can be mitigated by the realities of the market, and are anyway subject to the particulars of the scenario in which you find yourself. But the core sacrifice – giving up exclusivity over distribution – cannot be separated from free software licenses.

Software licenses are political documents

Developers have all sorts of reasons for releasing software under free software licenses like the GPL. A few, off the top of my head:

  • You want to modify and redistribute existing software that is GPLed
  • You want to distribute somewhere that requires GPL-compatibility, like the plugin repository
  • You believe that forkability and other GPLy goodness makes for a better product
  • You want to develop for a platform, or contribute to a project, that requires GPL compatibility

I classify these reasons as prudential, in the sense that they are focused on the material benefits (money, fame, better software) that you believe will come from developing under the GPL. All of these reasons are great and important, and many of them have motivated my own work with GPL-licensed software. Taken together or even individually, it’s easy to imagine that these (and other) benefits would outweigh the sacrifice involved in giving up exclusive distribution rights over your work.
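Whatever the motivation, the mechanics of applying the license are straightforward: the GPL’s own “How to Apply These Terms” appendix recommends attaching a short notice to the top of each source file. Here’s a sketch of what that might look like for a WordPress-style plugin file (the plugin name and author are made up for illustration):

```php
<?php
/*
Plugin Name: Example Plugin
Author: Alice Example

Copyright (C) 2012 Alice Example

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
```

(If you’re matching WordPress core’s license, you’d reference version 2 “or any later version” instead of version 3; the structure of the notice is the same.)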

There’s another kind of justification for releasing under the GPL: you endorse, and want to advance, the political and moral ends that motivated the creation of the GPL. The GPL assumes that it’s a good thing for users to have maximal freedom over their software:

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

The assumption here is that “greatest possible use to the public”, and by extension the good of the public, is something to be actively pursued – a moral claim par excellence.

And, among free software licenses, the GPL is perhaps the most explicit about the ways in which user freedoms (and thus the greatest good of the public) should be guaranteed and propagated. The “viral” nature of the GPL constitutes a kind of normative statement about the value of user rights over developer rights, which goes beyond other free software licenses that do not share its viral nature. The difference might be summed up like this. Alice and Bob are coders, and Carol is a potential user of the software. If Alice writes a piece of software and licenses it under a free software license like those in the BSD tradition, Bob can fork the software, make a few changes, and sell it to Carol under any terms he’d like – he can compile a binary executable for distribution, without making the source code available, converting his fork into closed-source, proprietary software. If Alice licenses the software under the GPL, on the other hand, Bob can still modify and sell to Carol, but he may not change the terms of the original license – in particular, the source code must be made available for further modification and distribution.

The normative aspect of the difference is in the value that each license scheme ascribes to the rights and freedoms of various individuals involved. BSD is more permissive with respect to Bob; GPL limits his ability to license the derivative work as he pleases. GPL is more focused on Carol, and protecting her – and other “just users” like her – at the cost of some of Bob’s freedoms. (The GPL is for users.) One might express the difference in political terms thus: the GPL is more liberal, and less libertarian, than the BSD. Users, who are on the weak end of the power spectrum when it comes to software, are protected under the GPL, in the same way that society’s underprivileged and weak are often the focus of political liberalism. On this picture, licenses, like laws more generally, are designed in part to create the restrictions necessary to protect the positive freedoms of a vulnerable population.

For developers who agree independently with the normative principles underlying the GPL, its moral benefits can outweigh the sacrifices it entails. Such a justification is the starting point for Stallman and the Free Software Foundation (see, for example, the FSF’s about page). You may, of course, foreground other aspects of free/open-source software when justifying your licensing. I’ve listed some justifications above, and entire movements have sprouted to focus on prudential, rather than moral, justifications for open source development.

But – and here’s the rub – licensing your work under the GPL constitutes an endorsement of its moral justifications, even if it’s not (from a cognitive point of view) what motivated you personally to apply the license. If you choose a free software license for prudential reasons, you are not justified in complaining when your project is forked. If you choose the GPL for prudential reasons, you can’t altogether disavow the inherently altruistic underpinnings reflected in the license’s preamble. Put another way: Among other things, software licenses are political documents, and it’s incumbent upon developers to understand them before adopting them.

It’s important for developers to think carefully about this before diving into a license. My own take is that the original motivation for free software – that user control over the software they use is fundamental to their autonomy – becomes truer every day, as more and more of our agency is mediated through software. For that reason, licenses like the GPL are ethically important, at least if your worldview depends (as mine does) on respecting the agency of other human beings.

This post was prompted by a recent post by Ipstenu. Much of my thinking on the matter is clarified and inspired by the first few chapters of Decoding Liberation: The Promise of Free and Open Source Software, a book about free software written by philosophers/computer scientists Samir Chopra and Scott Dexter. You can (and should) buy the book here.

Planning an Introduction to Philosophy course

In April, I’ll be teaching an Introduction to Philosophy course at my alma mater, Cornell College. (Cornell’s academic calendar is called One Course at a Time, or the “block plan”, so that an entire course happens in the confines of three-and-a-half weeks. Thus the ‘In April’ bit.) I am really excited to be teaching at my very small and very dear Cornell, but I am nervous about the class itself.

I find the very notion of an “introduction to philosophy” course to be slippery and intractable. Philosophy is not like, say, Russian, where the first introductory course provides the pieces of knowledge (vocabulary, conjugation, stuff like that) that will be required in all subsequent courses. Philosophy has its own techniques and terms of art, of course, but they’re not the sorts of things that lend themselves naturally to sequential teaching, like in a foreign language (or math, or chemistry…).

The term ‘introduction’ seems apt in the case of philosophy, because the act of putting together and teaching an intro to philosophy course seems very much like the act of introducing more generally. If I want to introduce a visitor to Brooklyn, for example, I have to decide which of the things that I know about Brooklyn (which are too many to share in a short visit!) are salient enough, pleasant enough, relevant enough to include. The calculus depends not only on my knowledge of Brooklyn (the introducee) but also the visitor (the introduced). I want the picture of Brooklyn they get from my visit to reflect the way that I represent Brooklyn to myself, or at least a sort of idealized version of my own idea that will leave the visitor with pleasant memories of the city and a desire to learn more. And how I aim to develop that picture will change based on what I know about the visitor and his background knowledge.

So the trick is to do something similar with philosophy. The problem is that – like Brooklyn! – philosophy is vast and deep. Moreover (and here is where the analogy breaks down), in the case of philosophy it actually matters that the introduced comes away with a truly representative sense of what philosophy is like – at least, that is, insofar as the work of philosophy is independently important. How do you select a reading list that is representatively broad without being vapid? sufficiently deep to represent the way philosophy is done without giving an overly narrow perception of the philosophical landscape? simplified enough to be approachable without overly caricaturing? relevant enough to the existing interests and knowledge of underclassmen without being pandering? And, given that there is no “right” way of making these decisions, how do you weigh each of these factors in the inevitable compromise that is represented by the eventual syllabus?

For me, it’s helpful to think about the different indexes one could use to organize and shape one’s course. These indexes are not mutually exclusive, but they must necessarily be ordered – favoring one index means making another index secondary, etc. Also, some of these indexes might necessarily be nested within others.

  • Chronology (eg 18th century)
  • Nationality/language
  • Great texts
  • Philosopher biography
  • Philosophical schools (eg empiricism, or utilitarianism)
  • Philosophical subfields (eg epistemology, or philosophy of language)
  • Topics (eg abortion, personal identity, truth)
  • Questions (eg ‘What is the nature of justice?’)

There are surely more, and different ways of carving them up, but this is a starting point.

In the past, when I have taught intro, my primary index has been philosophical subfields, with secondary index of questions and tertiary index of Great Texts. Thus I might do a unit on the philosophy of religion, which contains a section on proofs for the existence of God and a section on the problem of evil. In each of those sections, we read very well-known texts that represent different kinds of answers to the questions at hand. The next unit might be on ethics, with sections on virtues and on the goodness of acts, where we read Aristotle and Mill and Kant. And so on.

Clearly, this is not a terribly creative way to teach an intro class. From what I can remember, it mirrors closely the way that intro was laid out when I took it many years ago. But even though it is probably a pretty common approach, it is in many respects quite arbitrary. I try to take the Indian Buffet method – give just a taste of a couple different kinds of dish, and even the terrified diner might find something to latch onto. But the variety that constitutes the strength of the buffet approach is also a weakness, as it threatens to give the newcomer an impression that the discipline is really a jumble of neato but otherwise unrelated questions. This is not indicative of the way that real philosophers actually do their work; that this approach is shallow means ipso facto that it does not display the best of what philosophy has to offer.

For this reason I find myself very drawn to an approach that takes a question or a topic as its primary index. A mentor of mine recently told me that he taught his most recent intro class as a sort of seminar on the topic of death, which involved both the reading of philosophical Great Works with a focus on death, as well as the introduction of texts that might not often be seen on a typical intro syllabus. This approach does not do away with the issue of arbitrariness – why choose death, after all? – but it does bump it up to a global arbitrariness, where once the overall topic of the course has been decided, the relevance of each reading to a cohesive whole is manifest.

There are several problems with this kind of approach, though. First is the obvious one: what topic do I choose? Ideally it’d be something that would permit the inclusion of at least some Great Works, and with a broad enough appeal to non-philosophers to convince them that it’s worth studying. A more fundamental problem that nags at me is whether this approach to an intro course is justifiable. Every course I ever took that was pitched this way, around a single topic, was an upper-level seminar type course. Does an intro course have the responsibility to be broad in scope? Or is it perhaps possible to have an appropriately broad list of readings centered around a single topic? Or – and this is my secret suspicion – is the concept of an introduction to philosophy class vague enough to be more or less meaningless, so that just about any kind of legitimate philosophy course might be explained away as an “intro” under the right kinds of circumstances?

I’d like to hear what some of my philosopher friends think about this dilemma (or maybe lack of dilemma, if I’m way off base). I’m also curious to hear what happens in other disciplines where the concept of an “introductory” course is just as inscrutable.

Fake Retweets

Twitter communities are built on trust – sometimes too much trust. Recent XSS and XSRF exploits on Twitter have shown that the Twitter platform has been designed in a way that accidentally allows such trust to be used for evil purposes. My Fake Retweets experiment suggests that not all Twitter exploits are platform-level, architectural problems.

First things first: The real point of fake retweets is that they’re funny. What better way to make fun of your friends (or enemies) than to pretend to retweet stupid things that they allegedly said? I am not a performance artist, online or off.

Yet fake retweets do seem to say something worth saying about the medium. FRTs only work as a vehicle for jokes because there is a general assumption that all retweets are genuine. To some extent, this has nothing to do with Twitter. The only reason why jokes (or lies, or metaphors, or irony) work at all is because there exists a contrary convention that the jokester (liar, ironist) consciously flouts. In a world where people only tell lies, lies do not work in the same way that lies work in our world. I might lie about my dog eating my homework so that the teacher will give me an extension; but if there is no presumption in favor of truth, teachers will have no reason to grant an extension based on such a claim. (Echoes of Kant.) Jokes seem to work in a similar way: if we all spoke in puns all the time, for instance, then the utterance of a pun would have no element of surprise, robbing the joke of much of its value.

Thus the efficacy of fake retweets is at least in part an instance of a broader phenomenon. Anecdotally, though, it seems like Twitter is a particularly fertile yet underutilized environment for this kind of convention-flouting. With limited exceptions, people on Twitter generally seem to believe that everyone else is being genuine. There are some counterexamples, like Mark Sample’s #MarksDH2010 or accounts like FakeAPStylebook. But both are either so absurd that no one could possibly think that they weren’t fake, or actively wear their fakeness on their sleeves with hashtags, or both. (This fact doesn’t necessarily take away from the funniness of the jokes in question. It just means that they don’t have the intent to deceive.) Aside from these sorts of extravagant Twitter charades, it’s hard to think of examples (from personal experience) where real live lying takes place on Twitter.

That’s not to say that my Fake Retweets were meant to deceive. (But they did. In one instance, a follower who commented on the fakeness of one retweet took another one as serious just a few minutes later.) The spirit of Fake Retweets in this case is to poke fun at friends, which means that my FRTs were mainly friendly and totally directed at friends. Yet I have to admit a little trepidation to the FRT, even with such benign content. There’s something about faking another person’s voice (perhaps especially in a community of academics) that seems to cross a sacred line.

The sense of violation in FRTs, it seems, is related to the fact that we all spend so much effort cultivating a specific persona via Twitter. Yet again, such cultivation is not a Twitter-specific phenomenon – surely there’s a sense in which personae are necessarily self-constructed – but it seems to be especially evident on Twitter. Maybe it’s because on Twitter, you control your own stream. In real life, all my eloquence and fashion won’t prevent the occasional piece of food in my teeth; in meatspace, there are infinite vectors for our self-constructed selves to get out of hand. Twitter, in contrast, has very few dimensions for self-presentation to run amok: tweets are finite in length and in number, you get to choose your avatar, you can spend hours crafting your 140-character pearls, you can even edit and delete mistaken tweets. The FRT threatens to cleave this controlled space, to taint our carefully manicured self-images.

The aspects of Twitter that make FRTs so uncomfortable aren’t necessarily bad things. Maybe the world would be a better place if it were as trusting as the Twitter community. (Though I wouldn’t want to be vulnerable to cross-site scripting in real life.) But certainly there is room for a little more skepticism when you see something come across your screen. Think before you click that link, before you believe that RT.

Why punish plagiarists?

A recent post at the great philosophy teaching blog In Socrates’ Wake had a reader asking the audience whether, by not automatically giving a student an F for the course after plagiarizing a one-page assignment, he had “gone soft”. I empathize with the instructor, and at the same time I’m baffled by why I empathize.

In the past I have taken hard stances against plagiarizers, stances which at the time made a lot of sense to me. Like the author and commenters at the ISW post, it seemed to me that plagiarism was the worst kind of crime and deserved the worst kind of punishment. In retrospect, this attitude seems ludicrous. There is a broad spectrum of actions one could reasonably take in reaction to a cheater, ranging from expulsion to doing absolutely nothing. Why is the transition from “hard” to “soft” to be found between failing the course and not failing the course, a consequence that seems to be pretty far toward the severe end of the spectrum?

To shed light on that question, it might help to think about this one: Why should students be punished for plagiarism at all?

Before thinking carefully about this question, it’s really crucial to remember that there are different kinds of plagiarism, and treating them all alike is like claiming that a candy-bar thief should be punished like Bernie Madoff. I want to know whether there is any justification for plagiarism being punished so harshly, so it makes sense to consider the most serious kind of violation. I take it that this would be a student who copies (buys, whatever) an entire paper and passes it off as his own. If any kind of plagiarism is going to warrant harsh treatment, presumably this will be it. Unless otherwise mentioned, then, this is the kind of plagiarism I’m talking about.

That said, let’s consider a few arguments one might give for why plagiarism is a punishable offense.

  1. Plagiarism is cheating, and cheating is unfair to the other players. I take ‘cheating’ to mean ‘breaking the rules’, which is unfair because everyone else has to abide by the rules. But different kinds of cheating are immoral in different ways. Cheating in golf, for instance, is wrong at least partly because my actions have immediate negative ramifications for the other players of the game: I take a stroke off of my game, and you are that much more likely to lose. In golf, what’s good for one person is necessarily bad for the other players (assuming they’re opponents – in fact, this might be a functional definition of what it means to be opponents). The same is not true of plagiarism. Unless you grade on a curve (a practice that a philosopher who is concerned with “fairness” would be hard-pressed to defend, by the way), one student’s cheating his way to an A when he otherwise would have gotten a D does not have a negative effect on other students in the class. You might maintain that students are obligated not to do things that their classmates are forbidden to do out of abstract principle, a position that I can imagine various sorts of arguments for. But if the only thing wrong with plagiarism were that it was a violation of an abstract moral principle, it would take a very warped theory of retributive justice to justify such draconian punishment.
  2. Stealing is unfair to the person stolen from. As in the previous case, “fairness” could be judged along two metrics: the practical and the theoretical. Stealing is often bad in a practical sense. If you steal my Charleston Chew, I no longer get to enjoy it myself. Therefore, :'( . Intellectual “theft” works differently, since the person stolen from hasn’t lost the use of the ideas. Of course, intellectual theft sometimes amounts to material theft, as when a breach of patent costs an inventor lots of money. And a parallel consideration might be at work when we talk about plagiarism in the academic community at large. If Dr X writes a great draft, and Dr Y steals it and publishes it, it could mean that Dr Y beats Dr X out for that Ivy League faculty position. Generally speaking, though, this is not a relevant consideration for student papers. Students – especially undergraduates – are neither publishing their term papers (much less their one-page, low-stakes assignments) nor using their papers to compete with others for jobs. The only situation where I can imagine real harm to the victim of classroom plagiarism is where the victim writes a paper with a great, novel idea or argument, the professor reads two or three plagiarized versions of the same argument before getting to the original, and as a result the professor is less impressed with the argument and gives a lower grade to the originator of the idea.
  3. Plagiarism devalues a degree, which is unfair to classmates. A bit different from the first consideration above, which is concerned more with a single game. This argument has more to do with iteration. If you cheat once and get away with it, other people will realize that cheating is possible; thus more people will cheat; and thus, somehow, everyone’s degree will be worth less; decreasing the value of a non-cheater’s degree through your own cheating is morally wrong; therefore cheating is wrong. (Andy Cullison lays out this argument here.) There are a couple things to notice about this argument. First, the mechanism by which the actual devaluing of the degree comes about is not specified, and presumably it would have to be abstracted away from in order to rule out obvious counterexamples where the student could cheat, get away with it, and never let anyone else know about the cheating. Second, this justification for the punishment of plagiarism is less a moral indictment of cheating than of harming other people’s degrees. In the end, this might amount to the same thing, but it does not justify the kind of snooty self-righteousness that tints some instructors’ lectures on plagiarism, which suggests that plagiarism is akin to a mortal sin. As for punishment, you might argue in this case (as Mill does in his wonky “sensitive feeling on the subject of veracity” argument near the end of Chapter 2 in Utilitarianism) that a harsh punishment fits this crime even though the actual consequences of this particular action are relatively small (or non-existent) because the action has the potential to contribute to the weakening of a larger feeling of trust that is so manifestly important. It strikes me that this is the best reason considered so far for punishing plagiarists.
  4. Plagiarism is bad for scholarship/academia/the university. I’ve heard this sort of argument before: if everyone plagiarizes from everyone else, how will any new things be discovered? In one sense this rhetorical question is clearly overblown. Taken more seriously, you might grant that the posting of falsified or plagiarized material in, say, a journal of medicine could end up distracting scientists for several years, thereby diverting valuable research resources. But this argument does not extend to students, who are generally not doing original research, are not publishing, and are not in a position to affect the discipline either positively or negatively.
  5. Plagiarism is so frowned upon in graduate school and the professional world that students must be trained as undergraduates not to plagiarize. In other words, you might grant many of the points I’ve made above, which suggest that plagiarism at the undergraduate level is really not worth punishing in itself, but still think that punishment is prudent so that students are trained not to plagiarize when it really counts. I think there are a couple of limitations on this justification, though. First, it’s not obvious that plagiarism really is all that frowned upon in most of the careers that our students are going to end up in. If I crib the opening paragraph of an earnings statement I’m preparing, who cares as long as it gets the job done? I suspect that relatively few of our students end up in careers – academics, journalism, writing – where plagiarism really is so disdained. Second, I am highly dubious that scaring students shitless is a good way to train them not to plagiarize. If you want to train a dog not to jump on a couch, you use a rolled-up newspaper instead of reason; the same should not be true of students. Even if punishment – in the form of failed assignments, failed courses, or grade deductions – is part of the instructor’s arsenal, it should be balanced against other, more humane teaching methods.

I take away from these considerations that there are both moral and prudential reasons that justify the punishment of plagiarism. But the assumption that harsher is better that I so often see in instructors appears to me to be far off of the mark. Few would say that you should teach philosophy, or chemistry, or political science, or mathematics, by threatening and slapping students. Why teach intellectual honesty that way?

Tensions between disciplinary and media instruction

I’ve been talking with a colleague about coming up with a mission statement for our educational technology program, so as to better position ourselves to assess our successes and failures. We’ve got a ways to go before we’ll have anything approaching a final version, but the brainstorming conversations we’ve had so far have been fruitful. In particular, a conversation we had yesterday gave me a chance to articulate a tension fundamental to the promotion of meaningful ed tech, a tension that had been bouncing around in my head for a while but that I had never formalized. I thought it’d be worthwhile to post it here.

My view is that there are two broad, interrelated reasons for implementing various kinds of technology in the classroom. One, certain kinds of technology can help to achieve the independently existing goals of the course. (For example, blogging in an Intro to Philosophy class might help students get a better introduction to philosophical methods and topics.) Two, it’s independently valuable for students to engage critically with and create content with new media. There are a couple justifications for this second point, I suppose, the more obvious of which is that there’s a vocational advantage to having Google-fu, web savviness, etc. More important, perhaps, the nature of information, and the relationship between information and its producers and consumers, is in significant flux. The more information the internet provides, the more necessity there is for students to develop effective bullshit filters – filters which can only be developed through critical practice with the medium. Moreover, the increasing ease of production (computers, cameras, etc that are cheaper and easier to use; sites like YouTube that allow people to publish and distribute in free and massive ways) means that today’s students could potentially be much greater participants in the creation and dissemination of knowledge than past generations. Part of the educator’s job is to teach students how to harness their creative power for their own good as well as for the greater good.

So I take it as given that there are plenty of justifications, independent of the specific content of a course, for teaching new media literacy. And such literacy can only be taught through practice and iterative reflection. I propose the caveat, though, that one can only become fluent with new media by the right kind of practice. What counts as “right” can vary, but what is definitely not right is to simply do digital versions of analog assignments. If I have my students write traditional, argumentative papers, and then post them on a website, I am just porting an analog assignment to a digital medium. When they add videos or hyperlinks or a comment section or a “tweet this” button, only then are they engaging with the native features of the medium that set it apart from what they’d do on paper.

From this I conclude that an educational use of a technology isn’t independently beneficial unless the use engages the meaningful or “native” features of the medium enabled by the technology. Instructional technologists, if they are to be advocates for the most effective uses of tech in learning, should therefore be advocating for native uses.

Here’s the tension. “Native” uses of ed tech – uses that are typified by a real engagement with the features of the technology that set it apart from different media – are, at least prima facie, exactly the kinds of uses that instructors will and should resist. Most instructors I’ve talked with see the instructional goals of their class as primarily disciplinary. Broader benefits, like the kind of media literacy I’ve urged here, are nice, but distinctly secondary, considerations. And the problem with the desire to teach your discipline first is that your sense of what counts as good disciplinary instruction is determined by the state of your discipline in general. Take philosophy as an example. With few exceptions, what constitutes quality philosophical work is linear, text-only, relatively long-form prose. The bodies which are de facto responsible for setting the standard for philosophical legitimacy – journal editors; tenure, promotion, and hiring committees; graduate school professors; etc. – reward this kind of work nearly exclusively. The ramifications for the philosophy instructor are that (a) in the absence of alternative motives, the production of traditional philosophical works is the end goal when training budding philosophers, and (b) the means for achieving that goal will mirror the results we desire to achieve – in other words, the only way to produce a student who’s good at writing traditional philosophy is to have them write traditional philosophy.

What it boils down to is that the instructor who focuses on disciplinary goals is, at least at first glance, beholden to the traditional disciplinary methods to get there. And since those traditional methods are necessarily at odds with “native” uses of instructional technology (because in order to be native, a use must engage in a critical way with a feature of the medium that sets it apart from traditional media), disciplinary instruction seems almost incompatible with new media literacy instruction.

I have a few ideas about how the cycle might be broken. One is that the de facto standards of excellence in a discipline are de facto only, and if we examine what we really value in (say) a good philosopher, we’ll see that the traditional medium is not critical. Another is that traditional disciplinary excellence can and should be taught by methods other than simply aping the greats – in other words, it might be that writing a lot of traditional philosophy texts is not the best way to make a better writer of traditional philosophy texts. Whatever the response to the tension I’ve described above, it is crucial to respond to it if instructional technology is to fulfill both of its goals: enabling disciplinary ends and increasing student facility with new media.

The ethics of Turnitin, or How I Learned To Stop Detecting Plagiarism

Yesterday I was feeling sorry for myself with regard to Turnitin and the like. I ended up having an interesting discussion with @LanceStrate, @mattthomas, and @KelliMarshall about the ethics surrounding plagiarism detection services. It got me thinking about why the practice bothers me.

My gut feeling is this: Turnitin, SafeAssign et al make big bucks off of their database. More papers scanned means a bigger database; bigger database means (in theory) better plagiarism detection; better detection means (in theory) more value and more profit. Forcing students to relinquish their papers to this machine feels exploitative.

John Stuart Mill – Awesome Guy | cc licensed flickr photo shared by netNicholls

But I wonder why this bothers me. I have no problem feeding different kinds of information-gathering machines. Take Google. I use Gmail, Google Reader, and Google Calendar extensively. The more I use these services, the more information they gather about my online activities; bigger database means better ad targeting; better targeting means more value and more profit. My “stuff” – information about me, writing I produce, records of my activity, etc. – is not sacrosanct. I’m willing to give it up in some cases.

So what’s the difference? Most obviously, I am choosing to use Google’s products in a way that students are not asking to use Turnitin. I will grant that there are different levels of “forcedness”, as @LanceStrate points out. Students can opt out of a class, or out of school in general. And if instructors make the Turnitin requirement explicit in the syllabus on the first day of class (or earlier), students will be reasonably well-informed about what they will be “forced” to do. But no matter how you conceive of the spectrum of requirement, the fact remains that my use of Google is far freer than students’ use of Turnitin.

That a professor requires students to do certain things that they wouldn’t otherwise do is not, in itself, an indictment of the requirement. I doubt that my own students would write about the Nicomachean Ethics if their grade didn’t depend on it. But, in this case, I as an instructor am obligated to exercise my power in a responsible way. (Heavy is the head that wears the crown.) Requirements should not be arbitrary, but should serve the goals of the class and the best interest of the students. Requiring a paper on Aristotle has negative effects on students – it takes away from the time and energy they could be spending on other things that are valuable to them – and it’s my responsibility to ensure that these negative effects are outweighed by the benefits bestowed by such an assignment. A well thought-out term paper assignment will, in the long run, have positive utility for the student.

Is the same true for plagiarism detection? Are the negative effects of such technologies (being forced to enrich a corporate entity, losing control over one’s intellectual property, feeling a presumption of one’s own guilt in the absence of supporting evidence) outweighed by some benefits? It’s at this point in the thought process that the pedagogical implications of Turnitin should be considered.

  • Is Turnitin good at detecting plagiarism? My experience says: Not really. While Google’s database doesn’t include as many student papers as Turnitin’s, Turnitin is in turn pretty awful at identifying plagiarism from the open web. Thoughtful reading and Googling have been more effective for me. I’d like to see data on the larger trends, though – for example, what percentage of student copying comes from the open web (Google’s domain) versus for-sale paper databases.
  • How much harm does “plagiarism” really do? This is really the more important question. Even if it turns out that Turnitin is very, very good at plagiarism detection, there is very little benefit from the software’s use if it turns out that plagiarism, as defined, isn’t really that harmful. This question is tough to answer, though. For one thing, there are lots of different kinds of plagiarism, certain kinds of which are more harmful than others. A student who copies a paper wholesale from Wikipedia is doing more harm than one who synthesizes a coherent paper from a bunch of different sources, or one who fails to cite a paraphrased argument. Surely the second and third students are getting more out of the assignment than the first. Furthermore, I have an untested gut feeling that the most harmful types of plagiarism – where a student steals wholesale – are easier to detect without using Turnitin, since they’re more likely not to be even approximately in the student’s voice or level of expertise. If this is right, then it might be the case that Turnitin is most necessary for the least harmful varieties of “plagiarism” – varieties whose ethical implications, some might argue, ought to be reassessed in light of how new technologies are affecting knowledge creation. (Too big a topic to address here, but you get the idea.)
  • Are there less troubling alternatives to Turnitin? Let’s grant that Turnitin is very good at detecting plagiarism, and that plagiarism is hugely pernicious. All things being equal, if we could avoid plagiarism by means that have less of a downside, we should choose those other means. In my experience (again, I have no comprehensive data to back this up), the answer is yes, there are far better ways. @KelliMarshall suggests assigning unique paper prompts, making plagiarism more difficult. I’ve found that the scaffolding of assignments – such that students write early, write often, and write in a low-stakes milieu – is extremely effective at lowering the temptation to plagiarize. To be more specific: When students are writing in journals or blogs – spaces where they are not harshly graded – and when their formal assignments allow students to pull from and build upon the ideas that they’ve already put to paper(/bits), cheating simply doesn’t happen very often. That initial moment – when a student sits down at the computer the night before the due date, not having written a single word, not knowing where to start, and copying out of desperation – is averted altogether. In the semesters I’ve used blogs and structured assignments in this way, I’ve had to deal with plagiarism maybe once per semester (out of 70+ students writing hundreds of papers). Another thing that’s worked really well for me is having frank discussions with students about why plagiarism is so demonized in academia in the first place (perhaps this conversation is a little more justified in an Ethics course). When they understand the motivations, and are not simply handed seemingly (and perhaps actually?) arbitrary rules about the Evils Of Plagiarism, they’re more likely to grok.

On balance, then, it seems to me that there is very little, if anything, to be gained from Turnitin et al that cannot be gained through other, less harmful means. Now I have to work up the guts to start sending links to this post whenever a faculty member asks me how to do plagiarism detection! But I suppose my lack of intestinal fortitude is a topic for another blog post.

Hub-and-spoke blogging with lots of students

Inspired by some of the blog posts that have recently come through my reader on the topic of classroom blogging, I thought I’d throw my hat in the ring. In particular, I wanted to respond to some of the concerns raised in the comments to Mark Sample’s post regarding the “hub and spoke” method, where students maintain individual blogs that are linked through the teacher’s hub blog. Can this model work with a large number of students?

Not quite drowning | cc licensed flickr photo shared by Jaako

Over the course of several semesters using such a model in Intro to Philosophy and Intro to Ethics classes, I’ve hit on a couple of techniques that have made it easier to deal with somewhere between 60 and 70 students (from two sections of the same course) blogging roughly twice per week. Here are some thoughts, in no particular order.

  • Groups – On the right-hand side of the course blog, you’ll find a link to the blog of each student in the class. The links are organized into groups of five or six students each. The students’ first assignment at the beginning of the semester is to register for a blog and to email me its URL. As these URLs land in my inbox, I number them 1-7 (in sections of 35 or so students, seven seemed like the right number of groups). The blogroll is then split into groups, using WordPress’s link categories.

    In practice, the groups serve several purposes. First, membership in a group gives individual students a more focused and manageable reading load. That’s because the syllabus requires students to read only the blog entries of their group members. As the semester progresses and students get to know each other, their blog reading (as evidenced by, among other things, the scope of their commenting) increases dramatically, but this is self-motivated rather than required. Second, focused groups mean that each student has a guaranteed audience. If all students were assigned to read all blogs, then only the most popular blogs (or those appearing first in an alphabetical list!) would get regular readers and commenters. Groups make sure things are more spread out. Third, dividing the class into blog groups provides ready-made groups for in-class work as well. I’ve found that the camaraderie that forms in a blog group (see these comments for an example of what I mean) translates very nicely into in-class work, and vice versa.

  • “In the blogs” and classroom integration – When my students first started blogging a few years ago, I would make a habit of finding a few posts that caught my eye before most class sessions to discuss with the class. Bringing the blogs to the center of the classroom experience does a couple of things: it highlights good student work (I try to talk about everyone’s blog at least once per term), it creates the impression that the blogs really are a crucial part of the class, it’s a good way to revisit issues that went either unexplained or underexplained in the previous session, and it makes future blog posts better when blog authors believe that their work might be discussed in class.

    Since I was going through the process of picking out and making notes about interesting posts anyway, I figured I might as well make my notes available to students before class. So I started writing “In the blogs” posts, digests of what caught my eye that day, and a brief description of why. I’d generally try to post this at least twelve hours before the class session where the posts would be discussed. After a few weeks of doing this, I noticed that many students had actually read the posts that I blogged about (though I didn’t require it). Comment counts on those posts also tended to be a bit higher.

    Near the beginning of the term, I deliberately overdid it with In the blogs, in order to give students the sense that the blogs were really significant intellectual spaces and important to the class. See, for example, digests from the beginning, the middle, and the end of the semester.

  • RSS and grading – The purpose of the blogs in these classes is to give the students a space for reflection that they take seriously (publicness does this) but that is low-stakes enough to allow for risk-taking and experimentation. Thus my pass-fail grading: if the blog post is on time, and demonstrates even a modicum of thought, you get full credit. The happy byproduct of this arrangement is that a close reading of every blog entry and comment is not necessary. Early in the semester I try to read every post relatively carefully and comment on most of them – largely so that I can model the kind of thoughtful but not-too-formal commenting that I’d like the students to adopt – but as the term progresses the community generally takes care of itself pretty well. By the end of the semester, I hunt and peck my way through the blogs at my leisure, much like the students do.

    I used Google Reader to keep track of the students’ blogs, so that at the end of each blog grading period (every two or three weeks, I think), I could scan back through the feeds to see that they were on time. Comments work in a similar way: I subscribed to the comment feed of each blog, and at the end of every grading period would scroll through the comment feeds, keeping a tally of comment authors (this makes comment counting a bit more time-consuming than post counting).
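    For the curious, the comment-tallying pass could be scripted rather than eyeballed in a feed reader. Here’s a rough sketch in Python; the feed below is a made-up stand-in for a blog’s comment feed (real WordPress feeds put the commenter in a dc:creator element, but a plain author element keeps the sketch simple):

    ```python
    import xml.etree.ElementTree as ET
    from collections import Counter

    # Hypothetical stand-in for one student blog's RSS comment feed.
    SAMPLE_FEED = """<rss version="2.0"><channel>
      <item><author>alice</author><title>Comment on Post 1</title></item>
      <item><author>bob</author><title>Comment on Post 1</title></item>
      <item><author>alice</author><title>Comment on Post 2</title></item>
    </channel></rss>"""

    def tally_commenters(feed_xml):
        """Count comments per author in a single comment feed."""
        root = ET.fromstring(feed_xml)
        return Counter(item.findtext("author") for item in root.iter("item"))

    counts = tally_commenters(SAMPLE_FEED)
    # counts["alice"] == 2, counts["bob"] == 1
    ```

    Run over each blog’s comment feed at the end of a grading period, summing the resulting counters would give a per-student comment tally without any hand-counting.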

Requiring such prolific blogging with so many students is not for the faint of heart (or, perhaps, for those with a 5-4 load), but I’ve found that some of these techniques – and especially the general rule that doing a lot of work early in the semester means that a self-sustaining community will develop – make the job much more manageable.
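A footnote for the technically inclined: the group-numbering step described above – dealing incoming blog URLs into seven groups as they land in the inbox – amounts to a round-robin assignment, which can be sketched in a few lines. The URLs and group count here are hypothetical:

```python
from itertools import cycle

def assign_groups(urls, n_groups=7):
    """Deal blog URLs round-robin into numbered groups (1..n_groups)."""
    groups = {i: [] for i in range(1, n_groups + 1)}
    for number, url in zip(cycle(range(1, n_groups + 1)), urls):
        groups[number].append(url)
    return groups

# A made-up section of 35 student blogs: seven groups of five apiece.
urls = ["http://student%02d.wordpress.com" % i for i in range(1, 36)]
groups = assign_groups(urls)
```

Each resulting group maps straight onto a WordPress link category for the blogroll.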

Doctorow on ethics and copyright

I’m posting this passage from Cory Doctorow’s generally awesome talk on copyright, delivered to Microsoft, because it’s too long to tweet:

Copyright isn’t an ethical proposition, it’s a utilitarian one. There’s nothing *moral* about paying a composer tuppence for the piano-roll rights, there’s nothing *immoral* about not paying Hollywood for the right to videotape a movie off your TV. They’re just the best way of balancing out so that people’s physical property rights in their VCRs and phonographs are respected and so that creators get enough of a dangling carrot to go on making shows and music and books and paintings.

Now, I think this is perhaps overly simplistic, since utilitarian considerations might ipso facto be ethical ones. More explicitly, if it’s true that violating copyright reduces the efficacy of Doctorow’s “carrot”, and if the ensuing decreased productivity of content producers has negative overall “utilitarian” impact, then that initial act of piracy might rightly have negative ethical import.

But the core of what Doctorow is saying strikes me as absolutely correct: to act like the copying of a CD is a violation of someone’s rights is to make a lot of very questionable assumptions about the concept of intellectual property.

The piece is worth reading in its entirety. Do it!

Mashups, authorship, and audience

At the BLSCI Symposium last week (see the previous post for more info), I had the good fortune to work a bit with Gardner Campbell, including attending his afternoon workshop titled “Speaker, Listener, Network: The Concept of Audience in a Web 2.0 World”. The main thrust of the talk was that Web 2.0 technologies, and in particular the phenomenon of open APIs and the mashups they allow, call into question our notion of what constitutes the (or even an) audience for the content that we produce. It is through the lens of the author that one can really see this at work.


via quinnanya

Here’s how I would reconstruct the argument. Communication – I’m thinking primarily here of linguistic communication, but it could be the case with other kinds of conventionalized communication as well – works because of a set of assumptions that the author (a term I might for the moment apply broadly to anyone who “authors” an utterance with communicative intent) has about his audience. If I say “Gee, it’s cold in here” because I want you to close the door, I am assuming, among other things: that you are a sufficiently competent speaker of English, that your hearing is functioning properly, that you will grasp the “literal” meaning of my sentence (i.e. that the ambient temperature in the room is too low for my comfort), that you will assume that I must have uttered the sentence not just to inform you of my beliefs regarding the temperature of the room but to make you close the door, that you like me well enough to want to make me more comfortable, that you are physically able to close the door. And so on, ad nauseam. More generally, the communicative gesture that an author chooses to make (a gesture like an utterance) will depend on his beliefs about who or what his audience is. (None of this is very new or very original, of course.)

We might think of certain kinds of authorship, such as writing a book or painting a picture, as less direct than the kind of authorship described in the foregoing paragraph, because the author is separated further from his audience and, as a result, has less information about them. When I write a book entitled Gee, It’s Cold In Here, I make some of the assumptions discussed above, but some I do not. Using Twitter is probably something like this, as you might be justified in making some assumptions about your audience (you know the handles of your followers, for instance), but it’s impossible to judge the potential scope of this audience, or to know many details about most of them.

When an author’s work is mashed up after the fact, his connection to his audience is so indirect that you might as well call it disconnection. I might send a tweet, something like “boonebgorges: Gee, it’s cold in here”, with the intent to get a rise out of my Twitter followers. Let’s say it gets pulled into Twistori (perhaps the tweet should have been “I hate how cold it is in here”…). Think about the people who now view this tweet in its new context. Not only do I not know who they are, but I had never really even considered the possibility of their existence when authoring the original tweet. In this sense, whatever assumptions I had originally made about my audience have been entirely subverted by the reuse of my work. There is a sense in which I am no longer the author of what I wrote: I didn’t code Twistori, and I didn’t conceptualize its potential visitors. As with any remix – from DJing to quilting to objet trouvé art – the idea of authorship being vested in a single individual has been overthrown (if it was ever that simple even in the case of more traditional authorship).
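To make the re-contextualization concrete: as I understand it, Twistori works by filtering the public stream for tweets containing feeling words like “love” and “hate” and displaying them stripped of their original context. Here’s a toy sketch in Python of that kind of remixing – the function names, word list, and details are my own invention, not Twistori’s actual code:

```python
# Toy sketch of a Twistori-style mashup (hypothetical, not the real thing):
# tweets are regrouped under a feeling word and stripped of their author
# handles, detaching them from the audience they were written for.

FEELING_WORDS = ("love", "hate", "think", "believe", "feel", "wish")

def remix(tweets):
    """Group tweet texts under the first feeling word they contain.
    The author handle is discarded -- the re-contextualization that
    subverts the original audience assumptions."""
    stream = {word: [] for word in FEELING_WORDS}
    for handle, text in tweets:
        for word in FEELING_WORDS:
            if word in text.lower():
                stream[word].append(text)  # note: handle is dropped here
                break
    return stream

tweets = [
    ("boonebgorges", "I hate how cold it is in here"),
    ("someone_else", "I love a warm room"),
    ("a_stranger", "Gee, it's cold in here"),  # no feeling word: never remixed
]
remixed = remix(tweets)
```

The point of the sketch is the dropped handle: whoever reads `remixed["hate"]` has no path back to the original author, or to the audience he had in mind.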

Once authorship becomes decentralized, so too does audienceship. Let’s say that you are one of my Twitter followers. You saw my initial tweet in its original context, in your Twitter timeline. Let’s imagine further that you are checking out Twistori at some later date and see my tweet repurposed in the Twistori timeline. Who, at that moment, is the audience for my tweet, and why? Are you the audience, because the tweet was originally written with you in mind? Are you the audience, because you’re now reading the tweet on Twistori? Is no one the audience, since no one can be definitively picked out? There is a certain amount of self-selection that has to happen; the reader must construct an audienceship around himself. Reading a disembodied, mashed-up tweet written by a stranger, you could imagine yourself as a friend of the original tweeter, as a viewer of a piece of abstract art, or any number of other things. When you get enough people – enough intentional actions – between you and the original producer of the content, you have to make decisions for yourself about what kind of audience you are a part of, if any.

Anyway, this is all very interesting to me, and I have some thoughts about whether there are – or should be – any “right” answers to the questions of how to circumscribe authorship and audience. I need some more time to think about that, though.