ELIZABETH J PYATT: November 2009 Archives
So...it turns out that it's one thing for a desperate individual student to sell/create a term paper for a paper mill, but it's a whole other matter for one person to sell a group paper without approval from the other team members (especially when the team doesn't get a cut).
Although the mills have been immune from most legal challenges, the issue of selling a paper without authorization from all of its authors may have some teeth. According to a recent USA Today article, one judge found a company liable. Of course, the company appealed, and the case is now in U.S. District Court.
The paper mills are preparing for trouble ahead though. ProfEssays.com (which sells custom papers) noted that "All custom essays and term papers completed by (the) company's writers will be double-checked with the newest anti-plagiarism software." And a corporate spokesperson for schoolsucks.com comments, "We avoid all those issues because we're totally free."...except for a pesky monthly membership fee to join and access the archive.
A mashup tool I ran into a while ago is Montage-A-Google by Grant Robinson. This is a Flash-based app in which you enter a Google search term and it generates a montage of different images pulled up in the search.
The Art Option
Each montage uses about 12 pictures, but repeats them from multiple angles. Depending on what you enter you get some very interesting results. I first tried a "pretty" picture by entering "aurora". As expected, the montage pulled in some lovely aurora images, but it also pulled in a B-2 bomber and a pinup model (named Aurora).
FYI - I tried multiple aurora attempts to see if I could remove the bomber and the bombshell, but no luck. They seem to be stuck in the queue (more on that later). In fact, based on what I found on the Spock montage, I think the tool is designed to throw the most diverse set of images together that it can.
The Social Option
What's more interesting (and devilishly entertaining) is to enter a famous name (or your name) and you will see what the Internet thinks of you.
Some, like Farrah Fawcett's, are eerie: her montage features her famous swimsuit picture alongside later pictures from her illness. You see what was lost in terms of looks, but also what was gained in terms of character and dignity. Others, like Kate Jackson's (the "smart" Angel), are interesting because her montage pulls up an early publicity photo which I will only describe as "saucy" and not at all what I would expect from her current persona.
You can expand it further and enter things like "Wonder Woman" (some pictures are fashion fierce and others warrior fierce) or "Israel" (got a rifle, a flag and a bikini)...or whatever. Needless to say, I and others have imagined some interesting applications for a media studies or women's studies class.
I was adventurous and entered my name and got this montage.
The result was no photos (Yes!), but lots of images I uploaded, including the svasti (the new friendlier name for the swastika in Unicode). It's a little scary because it looks very questionable out of context. The Arabic bismillah image also appears along with my Facebook network. What do these images add up to really?
The montage tool does point you to the original image, but just the image. Without the original page or blog entry, many of these images are very perplexing out of context. So the result is that the montage gives you a surface, slightly kicked up view of a topic...kind of like real life perception of casual acquaintances or a 5-minute news segment.
An issue that I wrestle with a lot is how to help students transition from rote exercises with canned data to a real world problem in which data comes with minimal organization and the solution is really open ended. It's also related to a similar problem in teaching linguistics which is training students to extract data from a language they may only have minimal familiarity with.
In one of those weird Internet search coincidences, I ran into this blog entry about the pronunciation of certain Chinese consonants. If you go to the site, something will leap out immediately - it is written in Chinese. I should state right now that I know almost zero Chinese characters, and few of my students do either. In fact, it's genuinely frustrating that I can't read the entire entry because I know I'm missing lots of key context. However....I was still able to extract some useful information and showed the students that they could too.
One helpful piece of information was a set of diagrams showing the pronunciation of certain "letters" in pinyin along with an IPA transcription. Another was a bibliography listing at least one key article in English. Sweet!
Believe it or not, many of my students appeared to enjoy this little exercise. Part of it was because a lot are taking Japanese (one student could read some of the characters), but I think part might be because it was a real-world scenario. However, I think they also appreciated that there was some hand holding (I said that it wasn't a Chinese reading exercise, but a spot-the-citation/transcription exercise). I was also able to explain why you might need to do this (i.e. you are not a Chinese expert, but you need some information).
So my personal teaching lesson is that I have to do a better job of translating what linguists do to a classroom context. Between working on this and talking with other instructors, I am beginning to really appreciate how much content experts "automate" their analytic skills. Unpacking it for learners can be hard (that's why I could probably use an instructional designer for a linguistics course).
However, it's definitely worth the effort. Those times when I see a spark of enlightenment in a student's eyes are amazing.
Probably the easiest way to create a color blindness issue on a Web page is to use red/green color coding. Even the latest WebAIM screen reader survey uses red and green pie charts. The good news is that the WebAIM pie chart sections are labeled, but they still come out as a giant mustard-brown pie in the Photoshop CS4 Color Blindness filter proofing tools.
One way to keep the color coding but enhance usability for color-blind users is to bump your greens towards blue. Blue is a good choice because it typically still appears blue to users with the various types of red/green color blindness (the most common color deficiencies). So instead of a pie chart that collapses into shades of mustard, color-deficient users would see blue and yellow, which maintains a hue distinction.
You can see a demo below.
Red-green vs red-blue pie chart.
Same chart in protanopia color blindness filter view.
The first image shows two pie charts in which red is "Bad" and "Good" is green in the first version and a bluish cyan in the second. The next shows the charts in one of the Photoshop Color Deficient Proofing views. The red/green original becomes mustard yellow and brown while the red/cyan version has both blue and brown. Neither matches my original artistic vision, but at least the one where blue is maintained shows more of a difference.
How do actual users of screen readers behave and what do they want? The WebAIM organization has been conducting surveys in the past few months, and they recently released results from the second screen reader survey of 655 users. Some things I thought were worthy of noting:
Windows, JAWS & IE Still Rule
There wasn't an explicit question about Mac vs. PC, but when the top responses for screen readers (JAWS, 66.4% and Window-Eyes, 10.4%) and browsers (IE 6/7/8, 70.9%) are all Windows-only...you don't really need one. It's ironic that Internet Explorer is the preferred browser, since it is known for its non-standard quirks and is thus a major headache for Web developers. Nevertheless, it is still the standard for the screen reader community.
One visually-impaired user even considered a recommendation to use Firefox an accessibility barrier. If you have to learn a new interface by speech alone, you can see why a switch to a new application is not necessarily a trivial matter.
On the other hand, there are some bright spots for Web developers. One is that 8.9% of users use Apple's VoiceOver as their primary screen reader and 14.6% of users report commonly using VoiceOver. Apple appears to be a viable system for some users. Users are also willing to use Firefox and Safari. About 18.8% use Firefox as their primary browser and 39% report using it at least some of the time.
Other important metrics include how often the screen reader is updated and user proficiency. Most users (83.6%) have upgraded their screen readers in the past year; this is important for Web developers, since many accessibility code recommendations only work in newer screen readers. The more recent the technology/recommendation, the newer the screen reader has to be.
Proficiency is another metric, since many accommodations require that users know to switch into different modes. For instance, JAWS has a table mode and a forms mode where the special tags actually do their magic. If users don't know these modes, then a page would be considered "inaccessible" even with the technology properly implemented. Fortunately....only 4.7% of the users surveyed considered themselves beginners in screen reader usage, and over half (52.6%) considered themselves experts.
On the other hand, it's not clear if the respondents are truly representative of the screen reader population. This was an opt-in online survey, so the sample could be skewed towards more advanced users who are aware of the WebAIM organization or online accessibility resources. For instance, this audience is also 50% on mobile devices, which indicates a certain level of affluence and technological sophistication.
Social media tools (Facebook, Blogs, YouTube, Twitter, etc) did fairly well. For most of the tools, over 50% of the users rated them as "somewhat accessible" or better. The two lowest scoring tools were LinkedIn (only 38.5% considered it "somewhat accessible" or better) and Facebook (58.7% consider it "somewhat accessible" or better). Compare this with Twitter which is rated 91% (with 61.9% of users rating it as "very accessible").
There are still concerns, especially if the tool is relying on Flash, but developers of social media are getting the job done, whether it be through standards compliance or through accessibility testing.
Technologies identified as problematic included CAPTCHA and Flash. Although Flash can be made accessible, the perception is that it probably isn't (62.2% said Flash content was somewhat or very likely to be inaccessible). If nothing else, Adobe has a PR problem with the screen reader community (and probably a PR problem with the Web developer community as well).
Another interesting problem reported is "ambiguous link text" - that is, a link whose destination is unclear out of context. The classic example is multiple "Click here" links. In principle, this is easy to solve, but lack of awareness means it's extremely prevalent...especially in a lot of content management systems which program in canned link text. A good example of what to do can be found in Movable Type (i.e. the Blogs at Penn State platform). The output generally includes distinct links including the blog title (which doubles as the permalink), tags and categories.
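Because the problem is so mechanical, it's also easy to scan for. Here's a quick-and-dirty sketch using only the Python standard library; the list of ambiguous phrases and the sample HTML are my own invention for illustration.

```python
# Scan HTML for ambiguous link text ("click here" and friends) using
# the standard-library parser. Phrase list and sample are illustrative.
from html.parser import HTMLParser

AMBIGUOUS = {"click here", "here", "read more", "more", "link"}

class LinkTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text = []
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = []

    def handle_data(self, data):
        if self.in_link:
            self.text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            # Normalize whitespace and case before comparing
            label = " ".join("".join(self.text).split()).lower()
            if label in AMBIGUOUS:
                self.flagged.append(label)
            self.in_link = False

checker = LinkTextChecker()
checker.feed('<p><a href="/a">Click here</a> for the survey, '
             'or see the <a href="/b">full WebAIM results</a>.</p>')
print(checker.flagged)  # the first link is flagged, the second is not
```

A real checker would also catch links whose text is a bare URL, but even this much would flag most of the offenders the survey respondents complained about.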
The last set of major problems included:
- Improper or missing image ALT tags - A file name (e.g. Photo1356) is not a good ALT tag, but it's the only option in tools like Flickr.
- Complex forms - Web forms (including login screens) can still be a major barrier unless they are properly structured.
- Poor keyboard accessibility - I suspect this is going to be a bigger problem for Penn State as veterans enroll in college; many may have hand mobility issues depending on their injuries.
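The first problem on that list is also easy to check mechanically. The sketch below flags images whose ALT text is missing or looks like a file name (e.g. "Photo1356"); the regular expression and the sample tags are illustrative, not from the survey.

```python
# Flag <img> tags with missing or filename-like ALT text, using only
# the standard library. The filename pattern is a rough heuristic.
import re
from html.parser import HTMLParser

FILENAME_LIKE = re.compile(
    r"^(img|photo|dsc|image)?[_-]?\d+(\.(jpe?g|png|gif))?$", re.I)

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        alt = dict(attrs).get("alt")
        if alt is None:
            self.problems.append("missing alt")
        elif FILENAME_LIKE.match(alt.strip()):
            self.problems.append(f"filename-like alt: {alt}")

checker = AltChecker()
checker.feed('<img src="a.jpg" alt="Photo1356">'
             '<img src="b.jpg">'
             '<img src="c.jpg" alt="Bar chart of browser share">')
print(checker.problems)
```

Only the third image passes: it has ALT text that actually describes the content rather than echoing a file name.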
Another good piece of news is that Skip Link technology has minimal impact. It should still be included, especially to skip the main navigational link block, but doesn't need to be implemented everywhere.
Use H tag Headers
An accessibility recommendation the community stresses is to break up content with headers (e.g. H1, H2...). In fact, the survey indicates that most users (50.8%) prefer to scan through headers if the page content is long. That is, it seems like users still want a sense of the overall information architecture. And don't forget the benefits of descriptive headers for your Google search ranking!
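One reason headers matter so much is that a screen reader can present them as an outline to jump around in. This little sketch pulls that outline out of a page with the standard-library parser; the sample HTML is made up.

```python
# Extract the H1-H6 outline of a page - roughly the structure a screen
# reader user navigates by. Sample HTML is illustrative.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.level = None      # heading level we're currently inside
        self.outline = []      # (level, text) pairs in document order

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.level = int(tag[1])

    def handle_data(self, data):
        if self.level is not None and data.strip():
            self.outline.append((self.level, data.strip()))

    def handle_endtag(self, tag):
        if self.level is not None and tag == f"h{self.level}":
            self.level = None

parser = HeadingOutline()
parser.feed("<h1>Survey Results</h1><p>Intro...</p>"
            "<h2>Browsers</h2><p>...</p><h2>Headings</h2>")
for level, text in parser.outline:
    print("  " * (level - 1) + text)
```

If a page comes back as one long flat list (or nothing at all), that's exactly the page a header-scanning user will struggle with.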
In terms of images, users definitely want ALT tags, even for decorative images (77.3%). So if your page has a cute cartoon lion, go ahead and describe him in the ALT tag.
A preference that may cause more of a headache concerns complex images (e.g. a bar graph or map). Web developers may be using the LONGDESC attribute or a D link to send users to a different page, but most users (55%) actually want the description on the same page. Even here there is divergence - of the 55% mentioned, about half want the description right after the image (no link) and the other half want the description available as an optional link. The remaining 45% either want the link to lead to a separate page (19.8%) or want the description in the ALT text (19.8%).
WebAIM says there is no clear consensus, but I do see a general preference for staying on the same page for the image description (either text or ALT tag) and not linking out (sort of like how visual users don't want to click a link to open an image in a pop-up window).
What you can do may depend on context. There are times when it makes sense to have a visible description for everyone, and others when you need to provide the description for visually impaired users as a separate entity. A long ALT tag would be the ideal solution here, but JAWS may still have a 155-character limit (grrr). So....if you do have to go to a separate page for the image description, make sure you link users back to the main text.
A Positive Note
Any user of a screen reader can tell you about the many problems encountered in everyday browsing, but a plurality (46.3%) feel the Web is becoming more accessible. It's slow progress, but at least it's progress.
Something that caught my eye in the weekly W3C Newsletter was the release of the EmotionML 1.0 XML schema (link corrected). The main purpose is to annotate emotional reactions within a recording (video/audio, but conceivably text as well); another is to define a framework for emotion recognition on video (Hmmmm).
There are some use cases listed on the site as well as the first draft of the markup, but it looks like a psychology degree would be helpful here. Interestingly, a lot of it has to do with concepts like "arousal", "friendliness", "dominance". At first glance, the values seem a little more related to body language (and inferring emotion from body language).
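To get a feel for it, here is a rough sketch of what an annotation might look like, built with the Python standard library. The element and attribute names (<emotion>, <category>, <dimension> with an "arousal" value) follow my reading of the draft and its namespace, so treat this as a guess and check the actual W3C schema before relying on it.

```python
# Sketch of an EmotionML-style annotation. Element/attribute names are
# my best reading of the W3C draft, not verified against the schema.
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)  # serialize with a default namespace

emotionml = ET.Element(f"{{{NS}}}emotionml")
emotion = ET.SubElement(emotionml, f"{{{NS}}}emotion")
ET.SubElement(emotion, f"{{{NS}}}category", name="amusement")
ET.SubElement(emotion, f"{{{NS}}}dimension", name="arousal", value="0.7")

print(ET.tostring(emotionml, encoding="unicode"))
```

Even this toy version shows the psychology-degree problem: someone has to decide that a clip rates an arousal of 0.7 rather than 0.5, and the schema itself can't tell you how.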
I can see some very legitimate uses for a markup schema like this, but I also have to confess being a little spooked. How accurate will an "automatic recognition" system be and will it hold up in court? Stay tuned, I guess.