Tag Archives: Twitter

Importing Ning users into WP

Today Ning announced that it would be ending its free social networking service. I tweeted something to the effect that this event is a wake-up call: When you use closed-source, third-party hosted solutions for something as valuable as community connections, you are leaving yourself open to the whims and sways of corporate boards. It’s not that Ning is evil or anything – it goes without saying that they need to make a profit – but their priorities are importantly different from those of their users. In the same way that Ning moves from a freemium model to a paid model, Facebook could start selling your crap, Twitter could crash, Tumblr could go out of business, etc.

All this is a good argument to be using software solutions that are more under your control. Like – drumroll – WordPress and BuddyPress.

Enough moralizing. I whipped together a plugin this afternoon called Import From Ning that will allow you to get a CSV export of your Ning community’s member list (the only content that Ning has a handy export feature for, alas) and use it to import members into a WordPress installation.

As of right now, it does not have any BuddyPress-specific functionality. But the data that it does import – display name, username, email address – are enough to populate at least the beginnings of a BuddyPress profile. The next thing to add is the auto-import of certain profile fields. I might try to do this tomorrow. The plugin is based on DDImportUsers – thanks!
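
For the curious, the heart of such a plugin is just CSV parsing plus username generation. Here’s a rough sketch of the idea in Python (the real plugin is PHP; the column names and the username-derivation rule here are illustrative assumptions, not Ning’s documented export format):

```python
import csv
import io
import re

def parse_member_export(csv_text):
    """Parse a Ning-style member CSV into user records ready for import.

    The "Full Name" and "Email" column names are assumptions -- check the
    header row of your own export before relying on them.
    """
    users = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        display_name = row.get("Full Name", "").strip()
        email = row.get("Email", "").strip().lower()
        if not display_name or "@" not in email:
            continue  # skip rows missing the fields WP needs
        # Derive a login-safe username from the display name
        username = re.sub(r"[^a-z0-9]", "", display_name.lower())
        users.append({
            "user_login": username,
            "display_name": display_name,
            "user_email": email,
        })
    return users

sample = "Full Name,Email\nBoone Gorges,boone@example.com\n,missing@example.com\n"
print(parse_member_export(sample))
```

Rows missing a name or a plausible email address are skipped rather than imported half-formed, which matches what you’d want from a bulk user import.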

Instructions:

  • Download the zip file and unzip into your WP plugins directory
  • Look for the Import from Ning menu under Dashboard > Users (unless you’re running a recent trunk version of BuddyPress, in which case it will be under the BuddyPress menu)
  • Follow the instructions on that page

Download the plugin here.

Social Media and General Education: My Queens College Presidential Roundtable talk

This week I gave a Presidential Roundtable discussion at Queens College. The talk was titled, somewhat anemically, “Teaching on the Coattails of Text Messages”, though arguably what I was saying didn’t really end up having much to do with text messages! (I justify my being misleading by reference to the fact that the Presidential Roundtable was not in fact a roundtable format.)

The thrust of the talk was that there are important structural similarities between social media like blogs and Twitter (their openness, their relative lack of imposed structure, their focus on audience and emergent conventions, their positioning of the individual as the locus of value and meaning) and the kind of general education that we’re seeking during this year of gen ed reform at QC.

I transcribed the video after the break, mainly so I’d have the text for my own purposes. It’s lightly edited to cut out some of the more egregious ums and ers and actuallys. Video of the talk is below for anyone who is interested. I spoke mostly extemporaneously and said some dumb things, so please be generous in your interpretation!!

Special thanks to Zach Whalen, who generously answered some of my questions about his Graphic Novel class. (And to his students, whose tweets served as fodder!)

Teaching on the Coattails of Text Messages from Boone Gorges on Vimeo.

2009 by the numbers

What’d I do in 2009? Some of my numbers are paltry and lame, but here they are anyway.

I posted 51 posts to this blog, teleogistic.net (and a handful of posts in other places). Those posts brought 183 legit comments. 3,299 unique visitors stopped by from 84 countries and 49 US states (WTF South Dakota?). The most popular search terms that led people here were: 1) read it later kindle, which led people to this post, 2) os x migration “less than a minute remaining”, which led people to this post, and 3) boone gorges, which led people to my beautiful face. The most popular posts on this blog were 1) Help me alpha test BuddyPress Forum Attachments (which is listed as the help page for a BuddyPress plugin I released, and so probably gets a lot of confused eyeballs), 2) Displaying the BuddyPress Admin Bar in Other Applications, which got added to StumbleUpon and, appropriately enough, contains hacks that did not originate with my paltry brain, and 3) Hub-and-spoke Blogging with Lots Of Students, which was interlinked with a lot of other great posts on the issue of classroom blogging. Not terrible for the first year of a blog, considering that BLOGS ARE DEAD.

I learned a lot about coding during 2009. When 2009 started, I knew quite a bit about HTML and CSS, as well as a smattering of PHP. I opened my first WordPress code file in about March. Since then I have released seven WordPress/BuddyPress plugins, a MediaWiki extension, and a handful of smaller hacks through the GPL, comprising some 4300 lines of code (about half of which was modified from existing code, and half of which is more or less from scratch).

I tweeted around 3300 times this year.

I racked up somewhere in the neighborhood of 180 hours of time this year commuting to and from work. Less impressively, I ran a pathetic 675 miles.

As some of you know, I do lots of crossword puzzles. According to my back-of-the-envelope calculations, I did around 1,960 crosswords this year, a number that is made up mostly of the first 13 puzzles listed on this page. I made a pledge at the beginning of the year to do my crosswords with pencil and paper (rather than on the computer) to improve my lackluster performance at ACPT. I stuck to that pledge: I can remember doing about three crosswords on the computer this year, as the rest were done on paper. We’ll see how all the practice pans out in February.

Here’s to a better 2010!

Saving tweeted items for later

I get a ton of reading material through recommendations on Twitter. But Twitter has a few problems as a source of reading material (problems that, among other things, keep it from being the “RSS killer” that people like to yammer on about). Perhaps the most pressing problem is that my normal use of Twitter is more or less at odds with the way in which I like to consume reading material online. Typically, for me, Twitter is a sort of attention dump: if I’m doing work that doesn’t require all of my attention (or if I’m doing work that is sufficiently boring that I don’t want to give it all my attention), I’ll often pour my excess attention into Twitter. Web reading, on the other hand (which for me is typified by the kind of reading I do in Google Reader or Read It Later) is usually quite different, in that I generally don’t multitask as I’m reading. As a result, I frequently see links in my Twitter stream that I’d like to look at but can’t at the moment.

There are a couple of different ways that I deal, or have dealt, with the problem of collecting Twitter recommendations for later.

  • Readtwit – There’s a service called Readtwit that collects every link from your Twitter stream and creates an RSS feed out of the linked items, which are also annotated with the name of the original linker and the text of the tweet containing the link. Subscribe to the feed in your RSS reader, which presumably is a place much better suited for concentrating on reading.

    I used Readtwit for about a week before I had to give it up. The problem is volume. I follow enough people, some of whom send out a huge number of links, that I found myself sifting through literally hundreds of items every day – on top of the hundreds of items I receive from all my other RSS feeds on a daily basis. Making things worse, I was only interested in a very small percentage of the items being linked to. I was wearing out my j key in Google Reader.

    Readtwit provides some filtering tools to prevent overload. You can filter out links from certain individuals, or (I think) links in tweets containing certain text. If you’re following a relatively small number of people, and you think you’ll be interested in most of the links that they send out, then Readtwit might be a good option for you.

  • Subscribing to my Favorite Tweets feed – Twitter creates an RSS feed for any individual’s favorite tweets (located at http://twitter.com/favorites/yourtwitterhandle.rss). When I read a tweet containing a link that I want to check out later, I favorite the tweet. I’ve subscribed to my favorites feed in Google Reader, and a few times per day I receive the tweets that I’ve most recently favorited.

    The huge advantage of this method over Readtwit is that it lets me select only those items that I want to read, instead of getting every linked item in my stream. And since the favorites function is part of Twitter itself, I can mark items in any Twitter client. (It’s especially handy when, for instance, someone links to a video or something that can’t be viewed on my phone – I favorite the tweet, and check the link out the next time I’m reading my feeds on a computer.)

    The downside: What ends up in Google Reader is not the content of the linked article, but only the tweet containing the link. That’s fine when I’m reading feeds on the computer and can easily click on the link. But it’s not a great way for me to read things on my phone while I’m in transit, which is what I like to do with longer-form web stuff.

  • Read It Later – That’s where Read It Later comes in handy. It’s a Firefox extension and a web service that allows you to mark individual web pages for later reading. (Instapaper is similar.) I use the iPhone Twitter app Twittelator Pro primarily because it has the built-in ability to save things to Read It Later: I can click a button that will scour the current tweet for links and send the linked items to my Read It Later list (which I can then read in Firefox or in the RIL iPhone app or on the Kindle, as I have geeked out on before).
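
The favorites feed described above is easy to work with programmatically. A minimal Python sketch (the feed URL pattern is the per-user favorites feed mentioned above; the link-extraction regex is my own rough approximation, not anything Twitter provides):

```python
import re

# Per-user favorites feed, as provided by twitter.com at the time of writing
FAVORITES_FEED = "http://twitter.com/favorites/{handle}.rss"

def favorites_feed_url(handle):
    """Build the RSS URL for a given user's favorited tweets."""
    return FAVORITES_FEED.format(handle=handle)

def extract_links(tweet_text):
    """Pull http(s) links out of a tweet so they can be queued for later reading."""
    return re.findall(r"https?://\S+", tweet_text)

print(favorites_feed_url("boonebgorges"))
print(extract_links("Great read, and it’s cold in here http://example.com/post"))
```

A script like this could sit between the favorites feed and a read-later queue, sending text-heavy links one way and leaving video links in the feed.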

My workflow ends up being a combination of (2) and (3). If the link contains mostly text and images (which can be viewed in the Read It Later iPhone app, where I do almost all my RIL reading), I send it to Read It Later. If it’s got video or audio or Flash, I favorite the tweet and check the link out when I’m reading my feeds in Google Reader at a computer.

Empowering through openness – my application for the OpenEd 2009 travel scholarship

This blog post is my application for one of the travel scholarships to OpenEd 2009. Here’s how the prompt goes:

  1. What you would “bring” to the conference? What can you contribute, be it a willingness to volunteer to moderate a session, some special expertise or project, an already accepted proposal…
  2. What you see as the most critical issue facing you in your efforts around Open Education, and how you think the conference can help you address it?

I approach the subject of Open Education from two different angles. The first angle is a humanist one. I’m trained as an academic: I’m doing my doctoral studies in philosophy at the CUNY Graduate Center. This academic training fuels an interest in education, and especially the way that education might (or must) move toward openness as time passes. The second angle is a technical one. I’m (in the process of becoming) a coder and developer of web applications. Working primarily with open source applications like WordPress, MediaWiki and Drupal, I’m developing an increasing sense of the user-empowering potential of open source software. These two angles on openness converge in my career in various ways, both in my day job as an instructional technologist at Queens College and as a developer for the CUNY Academic Commons.

As such, I think I could bring to OpenEd 2009 an interesting perspective on the nature of openness. As a user of – and contributor to – open source products, I can speak confidently to the community benefits that emerge when powerful tools are developed in an open way. And, more specifically, as someone who has used these tools toward both educational purposes (for example, in support of blogging initiatives both in my classes and in the classes of others) as well as in more broadly scholarly contexts (like the community of collaborative research that the CUNY Academic Commons is designed to foster), I have a concrete sense of the way in which openness in one realm – the realm of software – can foster and feed another kind of openness in the educational realm.

In service of these (somewhat abstract!) goals, I’m willing to participate in as many concrete ways as possible at the conference. I’m an active and energetic Twitter backchannel user (see, for example, the Twitter conversations I took part in at this year’s THATCamp, as well as my previous musings on the role of Twitter at conferences). In discussions both on and off Twitter, I can offer up experience both theoretical (I am a philosopher, after all) and practical (I’m also a geek). I’d also be happy to moderate a panel, if I were asked to do so.

As for what the conference will do for me, I envision that my attendance at OpenEd 2009 would help me to further bridge the gap between the practical and the technical that characterizes so many of the things I do in my career. As an instructional technologist, I think it can be easy to think of yourself as a purveyor and teacher of tools, tools that merely replicate the kinds of learning that have always happened in classrooms. This, after all, is often the path of least resistance. The challenge, I believe, is to empower faculty members (and, ultimately, the students themselves) not only to use technology but to understand the extent to which it shapes the world and, by extension, ourselves; only by appreciating this can an individual engage with the technology in such a way that it expands (rather than controls) his or her humanity. Openness is the linchpin: students cannot make the connection between what happens in a class and what happens in the rest of their lives unless the window between the two is open. So I guess my goal is to see what kinds of practical approaches are being taken by people in positions similar to mine, in order to help faculty and students understand how they can empower themselves by embracing openness.

Tweeting the CUNY Gen Ed Conference

On Friday, May 8, I attended the 2009 CUNY General Education Conference at Lehman College. I got a chance to see some really interesting presentations: Marc Prensky’s broad keynote on how today’s students demand a different kind of education; a panel on using games in education; and a panel on ePortfolios and the Online BA. More importantly, I met a few people doing cool stuff in instructional tech around CUNY.

There was a bit of a Twitter backchannel, which I thought I would post here for posterity’s sake. For the time being, it can be viewed via Twitter Search. I’ve also used Cast Iron Coding’s awesome (and free) Tweetripper PHP script to archive the stream. Download that text file here: cunygened-tweets.txt.

Mashups, authorship, and audience

At the BLSCI Symposium last week (see the previous post for more info), I had the good fortune to work a bit with Gardner Campbell, including attending his afternoon workshop titled “Speaker, Listener, Network: The Concept of Audience in a Web 2.0 World”. The main thrust of the talk was that Web 2.0 technologies, and in particular the phenomenon of open APIs and the mashups they allow, call into question our notion of what constitutes the (or even an) audience for the content that we produce. It is through the lens of the author that one can really see this at work.

Here’s how I would reconstruct the argument. Communication – I’m thinking primarily here of linguistic communication, but it could be the case with other kinds of conventionalized communication as well – works because of a set of assumptions that the author (a term I might for the moment apply broadly to anyone who “authors” an utterance with communicative intent) has about his audience. If I say “Gee, it’s cold in here” because I want you to close the door, I am assuming, among other things: that you are a sufficiently competent speaker of English, that your hearing is functioning properly, that you will grasp the “literal” meaning of my sentence (i.e. that the ambient temperature in the room is too low for my comfort), that you will assume that I must have uttered the sentence not just to inform you of my beliefs regarding the temperature of the room but to make you close the door, that you like me well enough to want to make me more comfortable, that you are physically able to close the door. And so on, ad nauseam. More generally, the communicative gesture that an author chooses to make (a gesture like an utterance) will depend on his beliefs about who or what his audience is. (None of this is very new or very original, of course.)

We might think of certain kinds of authorship, such as writing a book or painting a picture, as less direct than the kind of authorship described in the foregoing paragraph, because the author is separated further from his audience and, as a result, has less information about them. When I write a book entitled Gee, It’s Cold In Here, I make some of the assumptions discussed above, but some I do not. Using Twitter is probably something like this, as you might be justified in making some assumptions about your audience (you know the handles of your followers, for instance), but it’s impossible to judge the potential scope of this audience, or to know many details about most of them.

When an author’s work is mashed up after the fact, his connection to his audience is so indirect that you might call it altogether disconnection. I might send a tweet, something like “boonebgorges: Gee, it’s cold in here”, with the intent to get a rise out of my Twitter followers. Let’s say it gets pulled into Twistori (perhaps the tweet should have been “I hate how cold it is in here”…). Think about the people who now view this tweet in its new context. Not only do I not know who they are, but I had never really even considered the possibility of their existence when authoring the original tweet. In this sense, whatever assumptions I had originally made about my audience have been entirely subverted by the reuse of my work. There is a sense in which I am no longer the author of what I wrote: I didn’t code Twistori, I didn’t conceptualize the potential visitors to twistori.com, etc. As with any remix – from DJing to quilting to objet trouvé art – the idea of authorship being vested in a single individual has been overthrown (if it was ever that simple even in the case of more traditional authorship).

Once authorship becomes decentralized, so too does audienceship. Let’s say that you are one of my Twitter followers. You saw my initial tweet in its original context, in your Twitter timeline. Let’s imagine further that you are checking out Twistori at some later date and see my tweet repurposed in the Twistori timeline. Who, at that moment, is the audience for my tweet, and why? Are you the audience, because the tweet was originally written with you in mind? Are you the audience, because you’re now reading the tweet on Twistori? Is no one the audience since no one can be definitely picked out? There is a certain amount of self-selection that has to happen; the reader must construct an audienceship around himself. Reading a disembodied, mashed-up tweet written by a stranger, you could imagine yourself as a friend of the original tweeter, as a viewer of a piece of abstract art, or any number of other things. When you get enough people – enough intentional actions – between you and the original producer of the content, you have to make decisions for yourself about what kind of audience you are a part of, if any.

Anyway, this is all very interesting to me, and I have some thoughts about whether there are – or should be – any “right” answers to the questions of how to circumscribe authorship and audience. I need some more time to think about that, though.

The catalytic effect of a Twitter backchannel

Yesterday I attended the Annual Symposium on Communication and Communication-Intensive Instruction at Baruch College, put on by Mikhail and the fine folks at the Bernard L Schwartz Communication Institute. I’ve got a couple of blog posts in the hopper that are inspired by conversations that happened there, but for now here’s a quickie.

Inspired by @hillmill’s tweet, a discussion took place at our lunch table (I think it was me, Suzanne, Matt, and Luke) about how using Twitter as a conference backchannel can turn someone from a casual twitterer to a Serious Twit. Here’s a theory for why that is. The benefit that Twitter backchannels (TBs) can have for conferences has been pretty widely discussed (though, lazy guy that I am, I don’t have any good links right at hand). TBs allow attendees to keep tabs on what’s happening in sessions other than the ones they’re physically attending. They provide a space where people can share immediate feedback on keynotes without all that distracting whispering. TBs also give users a chance to connect to each other in ways that are in a sense more organic than more traditional conference events. I made some connections, for instance, during our morning roundtable discussions, but these were largely accidents of who happened to be at my table – I connected to users of the TB, on the other hand, because of the things they were tweeting about. Even if this isn’t a better way to connect, it’s at least another way, which is surely a good thing. Moreover, TBs allow the conference to benefit people who aren’t in attendance, an effect that is multiplied by retweeting. (If you want some evidence of these effects, check out the #blsci tweet timeline.)

All this is to say that TBs are good for conferences and conference-goers. What makes TBs a good induction into Twitter is the act of witnessing these benefits. When I attended the 2008 CUNY IT Conference last year, I expected it to be like most conferences I’d attended – good in parts, but largely isolating and kind of boring. Given these expectations, experiencing the benefits of that conference’s TB was exhilarating. I knew before going to this conference that Twitter could be a fun performance space, maybe a good place to share links – but seeing it in action as a TB was what really sold me on the technology.

Here’s hoping that #blsci had a similar effect on @hillmill and the other relative Twitter-newbies who experienced the event’s TB.

I’d be interested to hear whether this has happened to others. Have you attended an event where the TB changed the way you think about Twitter?

On the cloud

Google freaked out this weekend, which, in turn, freaked me out. I’m a pretty ardent user of Google’s cloud services. Gmail is the most important to me, as it’s where all my email from the past four or five years resides. Reader has streamlined my online reading process so much that it’s hard for me to imagine how in the pre-Reader days I managed to read even a tenth of what I get through now. So when Google hiccups – even when the hiccup is apparently unrelated to where I store my data – I get scared.

These Google fears came just a week after I read Jason Scott‘s delightfully titled “Fuck the Cloud”. I don’t really buy into all the too-simple “you’re a sucker if you use cloud services” rhetoric, and I think (as urged in a Twitter conversation I had with @GeorgeReese) that a lot of what Scott is complaining about is more about backups than it is the cloud. Still, this piece, along with my Google woes, was enough to get me thinking about how wise it is to depend on web services like I do.

My first reaction on Saturday morning, when Google was acting up, was to back my stuff up. I saved all of my Reader subscriptions in a local OPML file, updated my POP3 backups of my Gmail messages in Thunderbird, and saved local copies of my important GDocs. I was able to make these backups because Google has allowed it by embracing the right kinds of standards. And this fact – that backups can be made and exports done – is one of the things that makes me relatively comfortable using Google’s services so extensively.
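
That OPML file is exactly the kind of standards-friendly export that makes this possible. A quick Python sketch of pulling feed URLs back out of an OPML export (the sample document is invented, but the `xmlUrl` attribute on `outline` elements is standard OPML):

```python
import xml.etree.ElementTree as ET

def feed_urls_from_opml(opml_text):
    """Extract subscription feed URLs from an OPML export (e.g. from Google Reader)."""
    root = ET.fromstring(opml_text)
    # Folders are <outline> elements without an xmlUrl; feeds have one
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib]

sample_opml = """<opml version="1.0">
  <body>
    <outline title="Tech">
      <outline title="Teleogistic" xmlUrl="http://teleogistic.net/feed" />
    </outline>
  </body>
</opml>"""
print(feed_urls_from_opml(sample_opml))
```

Because the format is open, that same list can be fed straight back into any other reader, which is the whole point of being able to walk away.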

This relatively straightforward exportability stands in contrast to the situation at some of the other sites where I create and store content. I’ve used Tweetake to export my Twitter activity to a CSV file, but the solution is far from elegant. For one, I don’t really like giving my Twitter password out to a bunch of sites. Also, I’m not crazy about the fact that I can’t really do incremental backups. Ideally Twitter itself would offer some streamlined way to export one’s tweets. Facebook is even worse. I feel uncomfortable using Facebook’s message/email system because I know that there will probably come a day when I want access to those messages but cannot get them.
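
The incremental-backup problem, at least for tweets, is mostly a matter of deduplicating by tweet ID. A hypothetical Python sketch (the CSV column names are guesses, not Tweetake’s actual format):

```python
import csv
import io

def merge_tweet_backups(existing_rows, new_csv_text, id_field="tweet_id"):
    """Merge a fresh CSV export into an existing backup, skipping tweets
    already saved -- a poor man's incremental backup.

    The id_field name is an assumption; adjust it to match the real export.
    """
    seen = {row[id_field] for row in existing_rows}
    merged = list(existing_rows)
    for row in csv.DictReader(io.StringIO(new_csv_text)):
        if row[id_field] not in seen:
            merged.append(row)
            seen.add(row[id_field])
    return merged

existing = [{"tweet_id": "1", "text": "first tweet"}]
export = "tweet_id,text\n1,first tweet\n2,second tweet\n"
print(merge_tweet_backups(existing, export))
```

If the service exposed stable IDs in its export, a script like this would let you re-run a full export periodically and keep only what’s new.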

I don’t necessarily blame Twitter or Facebook for their total failure to provide content exporting. There is a sense in which the kind of content being created in these spaces – or, rather, the meaningful units of content to which we attach value and thus would want to save – is quite different from the discrete units provided by email. What’s really valuable in Facebook is not just what I write, but what others write to and about me and my friends. Only a total snapshot of my entire immediate network would provide the kind of value for posterity that I want. With Twitter the situation is perhaps even more extreme: like in Facebook, the content I value is closely related to the content created by others, but in Twitter these people are not necessarily part of my immediate network at all (like when you @reply to someone you don’t follow because of some term you’re tracking). Pushed to the limit, you might even say that only a snapshot of all Twitter activity would really capture its value at any given time, since part of the value of Twitter lies in the potential you have to mine the collective consciousness, to get a sense of the zeitgeist. When the content that you value is so holistic, the details of backing it up become dicey.

On a more local scale, it’s probable that standard export formats will emerge as services like Twitter become more popular, in the way that something like Atom or RSS can be used to back up or restore a blog. In this sense, maybe my worries about certain kinds of cloud data storage are the kinds that will go away with time. Or at least until the next new kind of content is invented.

There are some other aspects of the cloud question that I find interesting, such as whether one should really feel more comfortable with local backups than with remote ones, and whether paying for a service really makes it more reasonable to feel comfortable keeping your stuff there, but I’ll save that for another day.

Hard work and distraction: together at last

I just read this piece by Mike Elgan. Elgan’s argument is that hard work is dead in an age where we have Twitter, Facebook, email, etc. to constantly and effortlessly distract us.

There seems to be a mistake in this reasoning. If all that’s changed between now and the golden age of hard work (whenever that might have been) is that we have more media for distraction at hand, what follows immediately is that people were less distracted in the good old days. But to say that someone is less distracted doesn’t suggest anything about their “work ethic” without some meaty assumptions.

The lack of distractions (or, to put it in more neutral terms, the lack of alternative avenues for your attention!): this sounds like the very definition of boredom. But boredom – a state you find yourself in – isn’t directly related to how hard you work – a choice you make. It’s true that boredom might drive you to devote your energies to something in the way that exemplifies a good work ethic, but on the other hand it might not, and you might end up staring at the wall as I so often do. On the flip side, someone who is never bored (i.e. is constantly distracted) might well be working very hard all the time. Anyone who tries to keep up with their feed reader knows how hard you have to work to maintain a respectably high level of distraction.

More importantly, though, the assumption that there is something holy about the work ethic of our grandparents is off. Work ethics are not inherently valuable; they only derive value from their products. Thus, for example, a writer’s work ethic is valuable because of the things that she writes, or even the kind of person she becomes as a result of this work ethic. But things like good writing and being a good person are, as philosophers are wont to say, multiply realizable, and while it’s true that the supposed tunnel vision of our forebears sometimes resulted in the kind of work that is independently valuable, it doesn’t mean that equally good or better work can’t come out of more distributed, “distracted” processes.

Isn’t it at least conceivable that, for instance, an obsessed Twitter user might write a poem that is not only as good as one by a more “focused” poet, but one that would be impossible without something like Twitter?

This is not to say that total focus isn’t valuable. I do think, however, that distraction can have value too, or at least that the question is an empirical one.