About Kevin Hawkins


semantic XML versus HTML, revisited

There is a long tradition in publishing, which has carried over into methods for representing text in digital form, of marking components of a document not by their appearance but by their function. That is, a given span of text is not simply labeled as being bold 14-point Helvetica, horizontally centered, but is a chapter heading. By identifying the components like this, you can easily change the way you want all chapter headings to be displayed without changing the appearance of another component of the document that might also happen to be bold 14-point Helvetica. This concept of “markup” from typesetting is the foundation of GML and LaTeX. Such markup, variously called “descriptive,” “structural,” or “semantic,” was the envisioned use for a general-purpose markup language like SGML or XML.
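The contrast can be sketched in a few lines of markup (the element and attribute names here are illustrative only, not drawn from any real schema):

```xml
<!-- Presentational markup: records only how the text looks -->
<text font="Helvetica" size="14pt" weight="bold" align="center">The Early Years</text>

<!-- Semantic markup: records what the text is; a stylesheet
     elsewhere decides how all chapter headings are rendered -->
<chapter-title>The Early Years</chapter-title>
```

With the semantic version, restyling every chapter heading means editing one stylesheet rule rather than hunting down every centered bold 14-point span in the document.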

HTML was first created in the vein of semantic markup, though it included some tags, such as <B> for bold text and <I> for italicized text, for describing appearance without reference to the document components (“presentational markup”). These elements soon overtook the semantic markup in everyday use and gave HTML a bad reputation among people working with SGML and XML; the W3C has long been working to move HTML back in a semantic direction. But even without taking into account the very good changes coming in HTML5, is HTML really so unsuitable as a general-purpose markup language for publishing and representing text in digital form, as is sometimes claimed by XML experts?
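In HTML itself the same distinction plays out like this (a minimal sketch; all of the elements shown are standard HTML):

```html
<!-- Presentational: describes appearance, not function -->
<i>Ulysses</i> is a <b>difficult</b> book.

<!-- Semantic: <cite> marks the title of a work and <strong>
     marks importance; the default rendering is similar, but
     meaning and styling are now separable -->
<cite>Ulysses</cite> is a <strong>difficult</strong> book.
```

A browser renders both lines much the same way by default, but only the second tells a stylesheet, a search engine, or a screen reader why the text is set off.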

on entrepreneurship

All entrepreneurs have an aptitude for risk, but more important than that is their capacity for self-delusion. Indeed, psychological investigations have found that entrepreneurs aren’t more risk-tolerant than non-entrepreneurs. They just have an extraordinary ability to believe in their own visions, so much so that they think what they’re embarking on isn’t really that risky. They’re wrong, of course, but without the ability to be so wrong—to willfully ignore all those naysayers and all that evidence to the contrary—no one would possess the necessary audacity to start something radically new.

—Chris Anderson, “Elon Musk’s Mission to Mars”, Wired (October 21, 2012)

vulgar American habits of the 1960s


I had a very good friend in the neighboring college when I was teaching in Oxford—we shared students and dined together frequently during vacations when they would close one kitchen and we would dine in one or the other college. He was a French medievalist, a man with great intellectual interests. He was always doing something like learning Turkish or studying Byzantine architecture, but he had absolutely no interest in ever publishing about medieval French literature. To him that seemed vulgar, the sort of thing Americans do. I remember his saying to me, “Jonathan, I understand that Americans have something called a curriculum vita [sic], in which you keep records of everything you do in lists. Is that true?”

—Jonathan Culler, in Jeffrey J. Williams, “Clarity of Theory: An Interview with Jonathan Culler”, The Minnesota Review, ns70 (Spring/Summer 2007)

in the old days


The modern phone combines all the qualities of a library, a university and a Swiss Army knife. It is nothing short of a miracle and a friend to humankind.

The best measure of the boon these phones have been is to recall the woe of life before them. Just 15 years ago on trams and trains people buried their heads in books, newspapers and periodicals, searching for the precious little information they contained, or stared out the windows at a world going past that without Wikipedia they could not begin to understand. People knew very little then. They were lonely in ways known only to those who knew the world before Facebook and texting; and who, to escape their loneliness, made eye contact with perfect strangers and even spoke to them.

—Don Watson, “Comment: Phoney Education”, The Monthly (December 2011 – January 2012)

what took so long?

It might surprise you to learn that, despite my line of work, I’m never among the first to try out a new piece of technology. But the risk for a stubborn contrarian like myself is that once people start giving you a hard time for not using their favorite online tool or service, you just dig in your heels.

Even when I know something’s not a fad, I usually have principled reasons for not jumping aboard. For Facebook, it’s the reciprocity requirement and binary nature of “being friends”, the difficulty of exporting data, and the complexity of its privacy settings (all of which are addressed in Google+, which I’m considering joining more seriously than I’ve ever considered Facebook). For microblogging, it’s my preference to receive news in digested form rather than as a cacophony of late-breaking chatter. And for both, it’s my feeling that I’ve got enough job responsibilities, ways to waste time, and things I’d like to do but don’t have time for that I don’t need more distractions.

But is it really wasting time? Well, this is where I’ve come around. Twitter’s network effect is becoming too strong to resist, and despite @shanakimball keeping me informed of things I should know about, there’s still a growing number of things I’m missing out on—more of which I’ll miss out on once @gerg_g/@greg leaves us for greener, more sepia-colored pastures and no longer sits around the corner. And my resistance to Twitter means that I long ago became practically invisible to people who would otherwise like to maintain peripheral awareness of what I’m doing. I’m at risk of falling into obscurity: in fact, like a print-only scholarly journal, I might as well not exist. So it’s time for me to take the next step in my digital migration. While I work and think best undistracted, it’s time for me to strengthen my social and professional bonds, even at the risk of introducing more distractions into my life.

Though I’ve blogged for periods of time while abroad—when there was exceptional interest in my life—I’ve always found it awkward to address multiple audiences simultaneously. After all, I don’t tell stories the same way to family, friends, and colleagues, and I don’t know how to blog or tweet with one voice for multiple audiences either. Of course I can have multiple identities online (just as I have multiple email addresses), but while my email addresses have fairly fixed identities (work, professional, and social), my potential audiences for broadcasting my thoughts will never be fully stable. Still, the digerati manage with a hybrid professional and personal identity just fine, so I’ll figure it out too.

So, I hereby relaunch Ultra Slavonic (the blog) and announce that I am micro-blogging on both identi.ca and Twitter. Follow me! If you’re still a holdout from microblogging but read web (news) feeds, look for the feed icons in the lower right of my identi.ca page or check out this hack for Twitter.