alt text


In an imperfect world, we need to rely on the person using an image to provide text equivalent content for their intended use.
Remember the computer in Star Trek? Picard could tell it to zoom in on a feature in a landscape, and it knew exactly where to zoom. Geordi didn't need to type in any pixel coordinates. With that capability, a computer would be able to answer any question a user who can't see the image has about the scene.

When I was working with Multimedia and Emerging Technologies, I found the work of James Wang at Stanford. He was developing a program that imbued computers with the ability to recognize images. It was primitive, but if you had a photo of a mountainous landscape and wanted similar shots, the program could do an image search that returned similar landscapes. Of course, there were a few images that didn't belong (a close-up of a crumpled blue and gray truck bumper, for instance), but many hit the mark. Wang is now continuing his work right here at Penn State. It's remarkable, and it holds the promise of an ideal in which computers can read images and give accurate descriptions of all of an image's content.

Unfortunately, that ideal is a long way off. For now we have metadata descriptions that rely on users to write them, or metadata added automatically by cameras; we have the longdesc attribute; and we have the alt attribute. I like the idea of creator-added metadata. I've tried it, but there are no screen readers that can harvest it, so its utility is in question. I've used longdesc when necessary, and I try to always use alt text. Alt text is the easiest to use but, like the others, it is far from perfect.
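To make the mechanics concrete, here's a minimal sketch of how the two attributes sit together in markup; the filenames and wording are hypothetical, not taken from any real page. The alt attribute carries the short text equivalent, and longdesc points to a separate URL holding a fuller description for readers who want it:

    <!-- hypothetical example: alt holds the short equivalent;
         longdesc links to a separate page with a fuller description -->
    <img src="campus-photo.jpg"
         alt="Students crossing the campus mall on a fall afternoon"
         longdesc="campus-photo-description.html">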

If I look to the W3C for guidance on alt text, the WCAG say that equivalent content must be provided. There are very few content-rich images on the W3C's pages, but I found one to examine for insight into proper handling: the W3C Team has a group photo. The text in the body of the page includes the names of the 56 team members in the image, and the alt text for the image says, "Photo of W3C Team, November 2005." I'd say the W3C calls that equivalent.
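In markup, that division of labor would look something like this; the filename is my guess, the alt text is theirs, and the 56 names sit as ordinary text in the body of the page rather than in the attribute:

    <!-- the names appear in the surrounding page text, not in the alt -->
    <img src="team-photo-2005.jpg" alt="Photo of W3C Team, November 2005">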

So would I, really; but I can see the image. I can also see that Shadi Abou-Zahra has a beautifully colored blue pullover on, and that Ivan Herman's beard is a distinctive black and silver. The alt text doesn't mention the radiance of Eric Miller's smile, or how attractive Coralie Mercier is with her high cheekbones and a white blouse open at the neck. Dermatologists may perceive other qualities in the faces, and fabric dealers may perceive qualities in the clothing. An electrician may be interested in the large wall-mounted globe lamp. Sociologists may be interested in the poses and the distance between the people, or in who is touching whom. What, then, is really an equivalent? Would a typical longdesc include the information a blind socio-anthropologist needs after finding this image on the web? I doubt it. In an imperfect world, I think we need to rely on the person placing an image to provide text equivalent content for their intended use.

And if I give you an image for your web page, who is responsible for the alt text? Me or you?

So what, then, is appropriate alt text for a Wordle image? I used two Wordle images in this blog several days ago. Both have alt text that I think is appropriate. Each features the primary words only, which was the intent in my use of the images: I wanted to subtly point out that our web page didn't seem as "student focused" as those of other CIC schools, regardless of our "student focus." Is it critical, then, or "equivalent," to include that the Penn State Wordle has the word "make" very small? Or "Thursday"? I don't think so. Some word-centric people may see it otherwise, I'm sure. It's hard to see words as data, and hard to see them as anything else. Who should be the arbiter?

Incidentally, both Wordle images from my blog have a slightly more extensive, though certainly not complete, set of words in their metadata descriptions. So does the Wordle for Spanier's State of the University address, which is happily far more student-centric, with the word "Students" right after "Penn State."

5 Comments

dave said:

Additional links. A shared warehouse primarily for my use, but please check it out if you're interested:

Visual Communication Lab
Experiments in visualizing data and collaborative sense-making.

Many Eyes
Many Eyes is a project of the IBM Visual Communication Lab. It explores data visualization from many viewpoints, sparking new insights and discussions.

Martin Wattenberg
A media artist and mathematician, Wattenberg founded the IBM Visual Communication Lab.

Brett Bixler said:

Sounds like we need the interactive fiction writers designing the longdesc text!

"You see a shadowed building with a dozen Tudor windows. A ruddy light shines through the attic window, flickering as if a tortured candle's flame was being wrung from it drop by drop...."

dave said:

You're on to something, Brett. I wonder about the non-literal or purely emotional content in images, and who would be best to recreate those qualities in another medium. I also wonder about those things that are suggested by nuance in an image, put there to tease viewers' emotions to a different level: is a clear description unfair? It's very complex, and having the conversation is difficult.

What sort of text description of Beethoven's Ninth can convey "equivalent" content to someone with a hearing impairment? A snippet used in a music theory class can probably have an adequate analysis of a specific technique described; in that circumstance, for that purpose, the description may be considered equivalent. But what sort of description would be necessary for someone to be able to write a reaction paper?

I went to the Van Gogh show at the National Gallery a few years ago. I was pushed and jostled for a good while, when all at once I was standing directly in front of his Wheatfield with Crows. My words would be useless; photographs of the painting are not equivalent. For someone with complete vision loss, I'd have to hold them under water until they nearly drowned, then pull them up and hold them in an autumn wind.

Coralie Mercier said:

Hello.
First, thanks for the kind words and compliment.
And second, thanks for giving perspective on how different people would find the alt text useful or not.

david stong said:

Thank you for being kind, Coralie. I apologize for any discomfort my words may have caused you.
