NMC 2013 Summer Conference Notes

I attended the 2013 NMC Conference and just thought I'd pass along my thoughts... 

Karen Cator's keynote "Participatory Learning - Powered by Technology" was an interesting look at how things are changing thanks to technology. She cited the nuclear disaster in Fukushima, Japan, and how people were not dependent on government statements about radiation levels. It was challenge-based learning in that there was a challenge in measuring radiation levels and determining how best to communicate them. Someone came up with a Geiger counter attachment for the iPhone that could measure radiation levels in the user's area so they could report them back to a central location. They used a Kickstarter campaign to produce the device, which totally bypassed the government efforts and got results.

The trend seems to be pointing to electronic learning progressions. That is, you pinpoint the things you need to learn to do, then figure out where you are on the progression toward the point where you can do them. This will eventually transition us out of traditional courses and grade-level barriers. It completely makes sense, because many courses in the same subject area tend to repeat a lot of topics. My son took two astronomy courses at Penn State and said that most of the content in the second one was the same as what was taught in the first. Having a continuum of concepts could eliminate such repetition and get students to a higher level of thinking in the subject at a faster pace.

She talked of leveraging the best possible ideas:
- League of innovative schools
- League of innovative teachers/faculty
- Consumer information service
- Entrepreneurs and developers
- Challenges and prize competitions
- Research and development

The challenges ahead include:
- Infrastructure access
- Data access and transparency
- Interoperability standards
- Crowdsourcing challenges and engagements
- National campaign to help people understand the importance of education

I also attended a session on digital publishing by a guy from Adobe. It was the typical commercial for Adobe products, but made me want to look again at using InDesign and the other CS6 suite of tools to create ePubs.

Mike Griffith of Tulane had a session on 3D printing that was very interesting. He showed some samples, including a piece that can act as an interface to connect different building systems such as Legos, Tinkertoys, and Erector Set pieces. Things that could not normally be joined can now be joined with these interface pieces. He also talked about creating objects with RFID devices and showed an example of an object that can talk about itself when held using a special glove called the Reading Glove. So a museum object, when picked up, can start talking to the holder about itself in the first person. http://www.youtube.com/watch?v=UE6vllYI5RI
It can also collect information about user habits and how many times each object was picked up. Although there is only one glove, other people around can hear the same narration.

He also talked about Bronies, males who like the My Little Pony show and all the franchising that goes along with it. He talked about how meme images of My Little Pony were used by groups such as Anonymous to hold hidden data that could be extracted by unzipping them. So they would look like a regular GIF image, but they actually contained secret data.

He talked about Thingiverse and The Pirate Bay, which distribute digital files, called physibles, that can be printed on a 3D printer.

He also showed an example of table data printed out in 3D so that blind people could understand the relationships among the measured items just by feel.

Probably the most interesting thing I saw at NMC, though, was the video We Are Makers, about bringing people who like to make things together and providing them with the tools and equipment to create. They get both designers and engineers together in a lab so that they can help each other build new things. I can see a lab like this being built at Penn State, equipped with a 3D printer, wood and metal shop power tools, computers, and anything else that might be needed to help students come together to create new objects. Since the art department already has much of the shop equipment, they might be a logical hub for such a lab. Students from all different fields of study could be invited to work in this environment and help create things. This seems to dovetail with what Frans Johansson said in his talk about his book The Medici Effect: collaboration between people with diverse skills and backgrounds produces new and innovative solutions that might not otherwise have come about. I think we should further explore the idea of having a Maker lab at Penn State.

HTML5 Re-programming


Last week I had the opportunity to attend a four-day workshop on HTML5. Here are a few of my thoughts. Having been a Flash developer since the late '90s, I half felt like I was getting my reprogramming the same way the South Vietnamese were indoctrinated into the communist way of life after the fall of Saigon. Not that I didn't appreciate the chance to learn something new, but I was never convinced that HTML5 had many advantages (especially for the developer) over Flash.

First of all, I have to say the instructor, Andrew Andrews of Learning Tree, did a great job. He made what could have been rather dry programming instruction into something fun and interesting, was responsive to our questions, and seemed to know his stuff. My main goals were to find out exactly what HTML5 could do and to gain some understanding of how to write HTML5 code so I could better support iPads and the like. But now I'll talk about my impressions of HTML5.

It takes a lot more work and code to make a simple animation or graphic in HTML5 than in Flash. Flash has a graphical user interface that lets me draw on the screen and add code to make it interactive, and I can use tweens to animate very simply. To do the same thing in HTML5 takes a ton of code, mostly JavaScript.
For instance, to draw a box on the screen requires creating a canvas, plotting the points of a polygon, and connecting the lines.
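To give a sense of what I mean, here is a rough sketch (my own code, not from the workshop) of what it takes to draw a simple outlined box with the canvas API:

```javascript
// Draw a box by plotting each corner and connecting the lines,
// exactly the kind of busywork Flash's drawing tools hide from you.
function drawBox(ctx, x, y, size) {
  ctx.beginPath();
  ctx.moveTo(x, y);                 // start at the top-left corner
  ctx.lineTo(x + size, y);          // top edge
  ctx.lineTo(x + size, y + size);   // right edge
  ctx.lineTo(x, y + size);          // bottom edge
  ctx.closePath();                  // left edge, back to the start
  ctx.stroke();                     // actually paint the outline
}

// In a page you would wire it up roughly like this
// (the canvas id and coordinates are made up for illustration):
// var ctx = document.getElementById('myCanvas').getContext('2d');
// drawBox(ctx, 50, 50, 100);
```

And that's before you add any interactivity or animation at all.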

Where Flash would usually just work (or ask you to update the plugin, which takes about two minutes), HTML5 has consistency issues among browsers, and supporting older browsers requires some sort of JavaScript library (jQuery and the like) to ensure that older browsers can duplicate the effects you're trying to pull off with HTML5.

While there are a few things I liked about HTML5, such as the easy way it validates Web forms, most of the great stuff attributed to HTML5 is actually done via JavaScript. So, if you want to be a good HTML5 coder, you'll need to focus mainly on learning JavaScript just to survive. There are libraries such as Modernizr that can help, but they apparently need to be updated quite frequently, and you need to call them with the right syntax to use their code effectively. There also seem to be a lot of if statements to write for IE, especially if you want things to work in all browsers.
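The basic trick these libraries use is feature detection. Here is a hand-rolled sketch, similar in spirit to what Modernizr does under the hood (the helper name is my own, and it takes a document-like object so the idea is clear even outside a browser):

```javascript
// Ask the browser itself whether it can make a working 2D canvas,
// rather than sniffing the user-agent string.
function hasCanvasSupport(doc) {
  var el = doc.createElement('canvas');
  return !!(el.getContext && el.getContext('2d'));
}

// In a real page:
// if (!hasCanvasSupport(document)) { /* fall back to Flash or an image */ }
```

Multiply that by every HTML5 feature you want to use, and you can see why people reach for a library.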

Video is another thing I have a bit of a problem with. Okay, so embedding a video in a web page just takes a simple tag, right? Sure, but then you have to contend with the fact that different browsers support different video codecs for different reasons. For viewers on each of the browsers to be able to see your video, you must encode it in three different formats (MPEG-4, Ogg, and WebM) to cover your bases. You then use the video tag to create a list of links to your fail-over videos should the viewer's browser not support the first one listed. Flash used one format for all. Simple.
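The fail-over markup looks roughly like this (the file names are placeholders):

```html
<!-- The browser walks the source list in order and plays
     the first format it knows how to decode. -->
<video controls width="640" height="360">
  <source src="lecture.mp4" type="video/mp4">
  <source src="lecture.webm" type="video/webm">
  <source src="lecture.ogv" type="video/ogg">
  Your browser does not support the video tag.
</video>
```

Three encodes and four lines of markup to do what one .flv used to do.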

After I got back to the office, I decided to look at Adobe Edge, which allows you to create simple animations that are published as HTML5 code. It has a user interface similar to Flash's, with a timeline, and does all the JavaScript work behind the scenes to get you what you want. Here is a simple animation of the ETS logo I created in Edge (click the image below).

PSU and ETS logos
Cool, huh? This simple animation is made possible by no less than 7 JavaScript files that must be local to the HTML page. I also could have done this back in 1996 with an animated GIF. Glad we're moving forward on the Web.

There is hope, however. Apparently someone has been able to get Windows 7 to run on an iPad, and it will play Flash content without issue (Apple's gotta be worried about this). This is the OnLive Desktop Plus app, which allows you to run applications via the cloud for $5 per month. Check it out here: OnLive Desktop Plus

For me, this represents the future of computing. There will be no need to worry about plugins or whether you have the latest version of an app or browser. All your apps will be served via the cloud and your files will live there as well. It will compile the screen for you and zap it down to your device in microseconds, optimized for your connection the same way Netflix movies and multiplayer games are delivered. Here's to the future!

Adobe Max 2011

I attended the 2011 Adobe Max conference, held Oct. 2-6 in Los Angeles. I did a pre-conference workshop on using Flash Builder 4.5 for creating mobile, web, and desktop applications. I was a bit disappointed in this workshop, as it was pretty difficult to keep up with the pace. I didn't feel that enough explanation was given for why certain code had to go in certain places, as there were several files that had to be created to make the "simple" application work. Once you get it coded, however, it's a simple thing to export to different devices. Apps for both iPhone and Android can be created. You would need a developer's license ($99) from Apple to even test your application on an iOS device, as a certificate is required when exporting. Android is much simpler: no Apple-style certificate is needed, and the only one you do need can be generated on the spot within Flash Builder. Once you specify that you want to test on a USB device, the app will show up on your device when you run a test or publish. Perhaps if I had a better feel for the software I might have had better luck, but my success was a bit limited.

The next day was the keynote. It started with an awesome mix of on-stage action and projected animation and video. A spotlight shone on a lone fiddler, who began the fast-paced music with which a couple of professional dancers interacted. When they disappeared from the stage, they reappeared multiple times on the projected screen, sometimes hurling graphic effects back and forth to each other. With 32 projectors pushing 300 million pixels per second, the action was incredible.

Once the intro died down, Kevin Lynch came on stage and began giving highlights of the latest that Adobe has to offer.

They recently acquired (or are in the process of acquiring) Typekit, a subscription-based service for fonts. This could be hugely helpful for designers and will be included in their cloud-based service called Adobe Creative Cloud, which saves documents to the cloud so they can be viewed on all sorts of devices.

Adobe unveiled its gesture-based computing tools for iPads and similar Android devices. These include Photoshop Touch, which can work with the device's camera to pull in content and offers automatic sharing to social sites such as Facebook. So you could use the camera to take a photo, then edit and enhance it, and share it with your friends while you're out in the field. And you could use Carousel as a light table for all your photos.

Another app they showed was Kuler, another acquisition. Kuler makes it easy to create a palette of complementary colors that will enhance your design work. You can take a photo of an object and it will create color palettes that look good with it. So you can be like Martha Stewart and whip up designer colors at the touch of a button.

Adobe Collage allows you to take images from your photo library, or search the Web for images, and combine and share them using multitouch right on your tablet. They really seemed to be pushing the touch philosophy. Personally, I prefer either a mouse or a stylus, as it gives me more control, so I'm not sure just how much I would use touch computing.

Adobe Proto seemed like a really cool idea. Just using your finger, you can sketch out a Web site design and it will translate your finger motions to clickable buttons, columns of text, headers, photos, etc. Very impressive!

Adobe Ideas was a pretty cool finger-painting app that allows you to make vector art quickly and easily. The finished product can easily be imported into Photoshop or Illustrator. It seemed to me that most of these apps would really appeal to a designer who wants to work on projects somewhere other than the studio. All Adobe Max attendees are supposed to receive a one-year subscription to Adobe Creative Cloud. It will be fun to play with it and see if it's something I might use. I also had to wonder whether Adobe is going to embrace a cloud-based software mentality where you don't actually own a copy of Photoshop, for instance, but only rent access to it via the Web. This is a lot like the old days of computing, where you had a mainframe and terminal access to its applications. Probably due to piracy, I'm guessing.

Another big push seemed to be self-publishing. They showed how InDesign can be used to create ebooks and magazines, both for the online bookstores and as subscriptions, where people download a desktop app and get notified that they can download the latest issue.

I also attended some good workshops, mostly on mobile app and game creation. One workshop had us create simple game interfaces for our Android devices; those who had an Apple developer certificate could publish to their iPads. It was pretty cool to see it work directly on the device. One project used the accelerometer to track a ball as it moved across the screen and bounced off the edges, displaying the X and Y values of the ball in real time.

I attended another session where they showed examples of 3D video games built on the Flash platform, many using new libraries such as Stage3D. They were as good as many console games, and it looks like Flash will be a gaming platform of choice for Web-based games. The detail in the scenes was very rich, the motion was fluid (60 frames per second), and you could use game controllers as well. The particle rendering was really nice, too. The online game Tanki was very impressive; the physics and particle effects were amazing. A sneak preview of Angry Birds, which was made with Flash, showed how they will be using the Flash 11 plugin to display much more particle motion in the new version. The 3D car racing games ran incredibly well using the new plugin. Why Apple doesn't support this is beyond me. I think they may get left in the dust if they don't. Even the new Kindle Fire tablet supports Flash, so iOS users may miss out on a lot of good game titles.

The HTML5 gaming workshop I attended was less than impressive. Only fairly simple games seem to have been developed with it. There are libraries available, though, so that may change; these include EaselJS, Context2D, and WebGL. You really can't compare performance unless Apple opens up hardware graphics acceleration the way Android does.

I also attended a couple of good workshops on After Effects. One showed how they used rotoscoping to edit frame by frame to build certain effects, such as complex green-screen shots. Another showed how to let After Effects do the heavy lifting for special effects, then import that file into Flash to become part of a larger animation.

Another interesting topic was game interface design. This looked at the optimum philosophy for placing controls, such as menus, interface panels, HUD displays, chat boxes and the like into a game, with special consideration to mobile devices. What works on one may not be ideal for another, depending on how the user might hold the device. Keeping action in the "sweet spot" will prevent problems like the character moving under your hand while you're holding it and trying to press buttons. He said it was also important to have just the right level of information on the screen. Minimal is best so that it doesn't destroy the user's immersion in the game. He referred to this as the "poetics of space".

I also attended another session where they talked about creating apps for Facebook using the Flash Actionscript SDK. Using Actionscript 3.0, much of the Facebook data is available, such as authentication and user management, via certain calls. These can be separate Web sites made to interact with Facebook or apps internal to Facebook itself. This might be worth looking into.

The Sneak Peeks showed some really cool technologies Adobe has been working on that may or may not be included in future releases. One of the most impressive was a filter for Photoshop that took a badly blurred photo (due to camera movement) and used an algorithm to establish the blur's start and end point. The result turned a photo that was previously nothing but a blur into readable text. This elicited lots of oohs and ahhs from the crowd. Another demo took some video and laid in a 3D grid that figured out where objects in the movie were in space. The presenter was able to copy the video of a man walking, shrink him down, and move him behind a column in a colonnade. It was impressive how it was able to establish whether the copied video should go in front of or behind an object in the video. All done automatically.

I think the biggest message I got from the conference is that the Flash platform is far from dead. Between the 3D gaming and the ability of Flash Professional and Flash Builder to create games and applications for Apple and Android mobile devices, even if there are not many interactive Flash animations per se, there will still be plenty of call for people who know ActionScript to create new and useful software products for the next generation of computing.


Sharing Student Notes

I know I don't blog a whole lot, but I just had a random idea that I thought might be worth mentioning. I think it would be cool to add a link in our new LMS where students could share their class notes online with the other students in the class. A rating system could percolate the best notes to the top, and a search feature could return a page of student notes containing a given word or phrase. This could be helpful for students who aren't good at taking notes, and it would give other students a different perspective on what they heard in class and what was important to their peers. Not sure how difficult this would be to pull off, but I suspect a good programmer could add this type of feature. I think this could add still another level of collaboration that our students could benefit from.

 I don't think the local student note services would like it very much, but that's life in the fast lane I guess. 

Adobe Max 2010 Conference Notes

I had the good fortune to attend the Adobe Max 2010 conference this week. This was a very good conference. Adobe and Google really seem to be partnering to bring better service to their customers. Android's support for Flash and AIR applications, and Adobe's built-in presets for creating Android apps, will really become a force to be reckoned with. The bottom line seems to be that Flash isn't going anywhere; it will be a major player on handheld, desktop, and television devices. Here are a few observations from the conference:

Developers are encouraged to 'build for mobile first'. Adobe is addressing the need for developers to design for multiple screens. InDesign has a free plugin that will help web content reflow automatically as the screen size changes from smartphone to desktop. This may be a new reason to learn Dreamweaver for some of us "code-by-hand" people. Adobe showed two tools to help with animation in HTML5: one called Edge and the other Wallaby. Edge was made to construct animations, and Wallaby converts Flash animations to HTML5-compatible PNG image sequences.

They showed how they used Adobe tools to create Martha Stewart's magazine, which looked beautiful on the iPad. Martha herself came out on stage to show all the different features of her eMag. It included panoramas, and if you touched items like the flower on the cover, it would bloom - very slick. I attended a session on ePublishing, and it is thought that there will be a very big market for digital book and magazine designers. Some thought the interactivity was over the top, but also thought this would be great for other things like children's books.

They also showed ePubs, Flash, and HD Flash video running on new Android tablets. They showcased iPad-like tablets such as the Samsung Galaxy Tab and the BlackBerry PlayBook, with Flash and AIR support, that are true multitasking machines. Very impressive, and they should give the iPad major competition. It was pretty clear that Flash isn't going away any time soon; in fact, it stands to be a major player in video delivered on desktop, handheld, and television devices. I think Steve Jobs will be forced to eventually jump on the bandwagon and support Flash. Everyone else is. BlackBerry says that "if you don't support Flash, you're not delivering the entire Web." Developers are really excited to develop on the Android system, since it's more open than Apple's iOS.

Adobe is adapting its development tools, such as Flash Builder, Flash Pro, and InDesign, with this in mind. It was quite impressive that they could drag a photo or graphic around one of the magazines and the text would flow seamlessly around it no matter where it was dragged. Adobe AIR apps are also supported. They also have a content delivery system in place for publishers that can collect granular data on which pages users click. AIR allows developers to develop applications once for both Mac and Windows.

They talked more about Flash and its impact on gaming. Flash accounts for nearly 75% of video games now, and it can work with game controllers such as PS2 and Wii controllers. Other hardware manufacturers, such as Litl, are making devices that will play Flash games with Wii controllers on a large-screen TV. They said most video on the Web is delivered via Flash, and Flash video grows by 100% per year.

They demo'd Google TV and announced that all Max attendees will be getting a Google TV sent to them (on top of a new Droid 2 phone)! Google TV is based on the Android OS and streams HD video through Flash. Flash uses hardware-accelerated graphics for impressive performance. They showed a 3D race car demo that was using less than 1% CPU in the Activity Monitor, and they also demo'd it with a USB steering wheel. An immersive 3D environment was shown that was very fluid and fast. There are quite a few 3D frameworks available now for Flash game development.

There was a neat demo that used an iPad as a color palette: the presenter used his finger to mix colors on the iPad much as you would in a watercolor palette, and the iPad would send the selected color to the desktop computer running Photoshop, where it became the color to paint with.

HTML5 Coordinate Plotting

I was asked by an economics professor if there was a way to plot points in an HTML page. Having recently researched HTML5 and the canvas element, I knew this might be a possibility. It turns out it wasn't too hard. I used a loop to create all the grid lines and just had it draw the X-Y axes in a darker blue. The origin (0,0) of a canvas is in its upper left corner, so I moved it with code 250 pixels over and 250 down to the center of my 500-pixel grid. Then a little bit of math converts the numbers entered into the form into positions on the grid.
(the right side of the canvas is being cut off by the blog width)
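The conversion math boils down to something like this (a sketch with my own function and variable names): map a Cartesian point, with the origin at the center and y pointing up, onto a canvas whose native origin is the upper-left corner with y pointing down.

```javascript
// Convert a Cartesian (x, y) point into canvas pixel coordinates
// for a square canvas that is `size` pixels on a side.
function toCanvas(x, y, size) {
  var half = size / 2;      // 250 for a 500-pixel canvas
  return {
    cx: half + x,           // shift the origin right to the center
    cy: half - y            // shift it down and flip the y axis
  };
}
```

So the center point (0,0) lands at pixel (250,250), and positive y values move up the screen the way the professor would expect.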

Enter a number for each coordinate between -500 and 500.



2010 NMC Conference Notes

Learning with Social Media: The Positive Potential of Peer Pressure and Messing Around Online
Mimi Ito

Mimi Ito, a cultural anthropologist who studies new media use, described her view that students' use of the web and social media is not being taken full advantage of, either by the students or their instructors. There is a clash between originality and appropriation. Students are using the web to find easy routes: they use cheathouse.com for term papers and ratemyprofessor.com to weed out difficult instructors. Their instructors expect them to produce original work, yet use standard measures to assess them. Students today share knowledge more than they ever have before. She gave examples of the amateur webcam video genre and how young people are creating mashups of their favorite videos and creating fan remix videos, something she termed "disruptive innovation." They are adept at reshaping existing media to create new media expression. The old model of focused attention seems to be breaking down; we need to embrace this fact and train students to become constantly adaptive. She raised the question of how we leverage social media to get students' attention. She gave the example of Snafu Dave, who started an online comic strip that ended up becoming quite famous; it was something he could pour his energies into. She gave other examples of how the web is being used for learning, such as the Peer to Peer University (P2PU), which helps you navigate the wealth of open education materials that are out there, creates small groups of motivated learners, and supports the design and facilitation of courses. She also showed examples of anime remix videos and the community of anime fans. It was all about experts who look to each other for teaching and learning.

Breakout Session
Developing Online and Interactive Content with Adobe Flash Catalyst - No Coding Required
Adobe Systems, Inc.

This was a hands-on session with Adobe's Flash Catalyst program. The program is an interface builder especially helpful for designers who work in Photoshop. They can import their Photoshop images into Catalyst, preserving all layers, then select separate elements to enable as buttons, pulldown menus, text areas, etc. Once a component is created, it's simply a matter of choosing the action each component initiates when selected. Catalyst then writes all the ActionScript for the component immediately in the Actions window. These are called "code snippets" and are touted as an easy way to learn the code for creating such things. For instance, arrow-key movement for a Flash video game character can be added by selecting it; Catalyst just lays in a pre-canned script that you can go into and modify if you choose. The idea behind Catalyst is to let designers design and create a base level of interactivity that can be handed off to a coder, who can add more elaborate interactivity, like connecting to a database, for example.

Entourage Edge

I spoke with the folks from Adobe and they showed me the Entourage Edge eReader they had with them. I was quite taken with both its looks and functionality. They said it would fully support Flash 10 in about a month (much to the chagrin of Steve Jobs, no doubt). I looked it up on YouTube and found a great demo video at http://www.youtube.com/watch?v=28vvRbhOdg8. It is very much like an iPad or Kindle, except that it has two screens: one is an eReader, and the other is a tablet notebook run by the Android operating system, which can run thousands of applications. It can record and play audio and video, is WiFi enabled, and has a touchscreen keyboard (or you can plug a USB keyboard into it). The eReader side has a Wacom tablet built into it that you can use either to highlight text and annotate your book or standalone to draw and write in a notebook application. This would be great for taking notes in class. The Edge has a one-touch backup system, so you never have to worry about losing your books or notes. I have requested a loaner from Entourage to assess and pit against the iPad. They will be sending me one to assess for 2-3 weeks starting next week.

Digital Literacy: The State of Play
Angela Thomas, University of Tasmania

Angela discussed why they did digital storytelling, what students got out of it, training, and assessment. A question was raised as to how a course like physics might use digital storytelling, and an example was given where one instructor personified a concept and used digital storytelling to tell its story. She mentioned Viddler, which allows you to assess and comment right on the video. The classroom seemed to bond as a community, she noted; the students got to know each other better through their stories. They used peer-supported learning groups as well as hands-on training. Students would self-assess to see if they needed to enroll in a one-credit video production course. Students had to do criteria-based assessment of their peers' work: they were then instructed to send the instructor an email answering the criteria for each video shown in class, assessing them and picking the top three. That gave the students the incentive of winning as well as a touch of competition to do well.

When Work Gives You Lemons, Make Strategy
Al Gonzalez, Cornell University

The same week Al was hired as a publications and marketing director, the university laid off four people in his unit. The university was moving to the Web and didn't need as much print design. In response to the lemons of these obstacles and challenges, he reacted by trying to foster trust, identify strengths, and control fear. The results were innovative media systems and strategic plans. After the layoffs, morale was understandably low, and those who were laid off were replaced by Web developers. He couldn't say for sure whether there would be more layoffs, and this created an atmosphere of fear, jealousy, and lack of trust. He was the project manager for 18 staff, and they had one IT person for 70 people, so they had to make strategies for economies of scale. They had no formal testing group for quality assurance and needed a comprehensive time-tracking tool for 40 hours per week. In 14 months they delivered a new message framework application and a 5-year plan to promote Cornell University. They rolled out their new CURLO system, an online repository that learns and organizes news stories. They also rolled out a redesigned HR website with hundreds of pages of content and over 300 publications. Dominant themes for their promotional plans were the student experience, messaging and branding, and a caring community. He said, "What makes a message stick is when it's unified." He gave the example of a rash of suicides at Cornell, six within a three-week period. They needed a website that addressed what they were going to do about it; A Caring Community was promoted, and the website was built in one day. "You don't want to just capture the public's attention," he said. "You want to also capture their imagination." He discussed how he did an audit of his staff's strengths and weaknesses and compiled them into a dashboard to view them easily.
He described four different types of workers: the Visionary, who leads by vision; the Warrior, who is always ready to attack a problem; the Nurturer, who wants to make sure everyone is safe and happy; and the Critical Thinker, who figures out how to carry out the vision. He talked about fostering a healthy working environment, saying feedback is the oil that runs the machine. You need to identify sensitive issues and proactively strengthen relationships. We also need to understand the role we play in developing conflict and how to defuse it. He offered his website http://algonzalez.info for more information about the team personality mix as well as some of his other topics. Looks like a helpful site.

Learning and Teaching the Tools of the Animation Trade
Steven Martinez and Stacey Eberschlag
ToonBoom Animation, Inc.

I was a bit disappointed in this breakout session, as it was billed as a hands-on workshop, but the software wasn't even installed on the lab computers. They did, however, demo their products: a storyboarding application and several animation packages. ToonBoom is used by all the major animators, including The Simpsons, South Park, Pixar, and many more. He showed how characters are built so they can be easily animated. It looks like a great package, and I'll be downloading a demo copy to assess it.

The Mobile Horizon: A Panel Discussion
Kyle Dickson, Kyle Bowen, Shan Evans, and Bryan Alexander

As mobile devices become more mainstream, campuses all over the country are struggling with how best to support them. The smartphone market is thought to be the most competitive of the last five years, and there are now many different hybrid devices available. According to Morgan Stanley, smartphones will overtake desktops and laptops in the market, and as far as popularity goes, it's all about the number of applications available for them. The Horizon Report predicts that mobile phones will be pedagogically important and that video production on smartphones will be the next new thing. Web content will shift to mobile content stored in the cloud and accessed via phone.

The term "cone of distraction" was mentioned: a person sitting in the front row with their laptop open will distract two people in the row behind them, three in the row behind that, and four more behind them.

Purdue created a web application called Hotseat to allow for student backchannel discussion in the classroom; it can be used with Facebook, Twitter, and SMS, and they were able to get 86% student adoption. They were also using their lab computers as render farms when not in use to help encode video for their server.

Abilene Christian University's iPhone initiative gave all students an iPhone or an iPod Touch. They were able to mobilize anonymous feedback systems, eliminating the need for students to buy clickers. Their students also liked the handheld version of Blackboard better, as it was simpler and easier to navigate to the information they were looking for. They were also going to roll out FaceTime this fall, a small-group discussion tool that simplifies setting up groups and discussions. You can either join a discussion on your smartphone or create one, and you can create groups, deploy topics, assign roles, and distribute points of view for discussion.
Their lessons learned were to build on proven practices, seek faculty input, use mobile phones to complement class instruction, and consider different points of view.

Project Management to Foster Creativity
Megan Bell, University of North Carolina at Chapel Hill

She talked about her approach to project management in a creative multimedia shop. Project management starts with a good project definition: document the who, what, when, where, and why of the project in a Statement of Work (SOW). It's best to have the other party sign the document so you know they've read it; they may not read it otherwise. Home in on the target audience for the project and what the goals are. What are the high-level responsibilities? What do higher-ups need to know and sign off on? How long will the project take? Typically, faculty will need a more general schedule, while staff customers need a more detailed one. Make clear short- and long-term goals, figure out what the deliverables are, decide on copyright ownership, and manage changes to the project effectively. Also state exclusions (what is NOT included in the agreement).

She also talked about stimulating creativity through mind mapping, and discussed left- and right-brain thinking and how both must be present to get the most creative solutions while staying on track. Agendas should be created for all meetings, and you should know what your steps are so you know you're making progress. A needs assessment should also be done before starting the project. At the end, write a project close statement that includes all the measurables from the SOW, which both parties can sign off on as complete.

Poster Session

Using Adobe Photoshop as a Visual Analysis Tool in Research
Dave Wilson, University of North Florida

One of the more interesting posters described using Photoshop as a research tool. By using controlled photography, photos can be analyzed and compared using Photoshop's color tables to, for example, track the healing of a wound. Another use was measuring the distance between, say, a man on a bicycle and a car passing him, using Photoshop's built-in measurement tools.

Less Filling, Same Great Taste - Comic Books Instead of Video for Student Projects
Jerod Bendis, Case Western Reserve University

This was a simple but effective idea. Instead of using video, students were asked to take photographs and use a program called Comic Life to construct comics with word balloons. This low-tech digital storytelling technique keeps students from getting bogged down in learning digital video so they can concentrate more on their content. He said it required minimal training, as most students have used a camera and many have used some sort of photo-editing software as well.

Collecting the Digital Story
Kenneth Warren, University of Richmond

The University of Richmond showcased their resources for no-cost digital storytelling production and publication. They produced over 250 digital stories last year and are using Omeka, a free, open-source web-publishing platform, to archive them.

5 Minutes of Fame

Houston Community College let a large group of students use Kindles for their classes. They found that Amazon was difficult to work with, but didn't elaborate as to why. Students overall took to the Kindle quickly, but found it wasn't very easy to navigate to the parts of a book they wanted to review, and that it was hard to find things in general. eBooks were roughly one-third the cost of print textbooks. The students also didn't like the limited capabilities of the Kindle, since it could not do chat or Facebook. This might be where the iPad and Entourage Edge will have a distinct advantage.

MIT had their students create a Russian history timeline for the Kerensky conflict. Students were assigned to research certain groups of people involved in the conflict, such as soldiers, and were also told to assume that group's point of view. This was an exercise to get siloed information into a diachronic format where it would be easy to cross-reference different groups and their points of view at any given point on the timeline.

A demo was given of "virtual bubbles," in which live video of the audience was displayed with a particle generator overlaid on it. When people in the video would "touch" a bubble, it would bounce away from them. Not enough explanation was given as to how this was done, but it was interesting.

Pearson eCollege is using free web-based audio-recording tools such as Jingo, PhotoStory, and Voki so faculty can better connect with distance-education students. They are used for a variety of things, such as speaking practice (where the instructor can listen to and critique foreign-language students), assignments, and verbal feedback.

Closing Keynote

A New Culture of Learning
John Seely Brown, University of Southern California

John talked about how small moves, smartly made, can set big things in motion. One new development is the Graphics Processing Unit (GPU), which lets computer systems share the processing load to do heavy computing. He said our technical skills now have only a five-year half-life, and that if you're not leading the way, you're falling behind. He posed the question, "How do we afford curiosity?" How can we better leverage mobile devices as curiosity amplifiers? We need to rethink how we learn, what we learn, and how new media has changed the fundamentals of what we learn. The saying used to be "I think, therefore I am." Now it's more like "We participate, therefore we are." We live in a much more collaborative environment. Understanding is socially constructed, and a student's ability to join study groups is the greatest indicator of their success in college. Study groups are both physical and virtual.

He talked about how Ryerson University missed the point when a student started a chemistry study group on Facebook. The student was expelled for cheating when it was merely an organized study group; he was later reinstated.

He then told of a 14-year-old boy from Hawaii whose life goal was to become a world-champion surfer. Although his parents wanted him to have a fallback job in mind, they said they would support him. He picked five friends who were just as into surfing as he was, and they formed a study group aimed at perfecting their surfing techniques. They used video of the top surfers in the world to break down their techniques frame by frame so they could learn them quickly. They also pulled the best ideas from adjacent sports: windsurfing, skateboarding, mountain biking, and motocross. They stayed tuned in to what was happening in the surfing scene all over the world. The boy became a world-champion surfer by the age of 20 and makes a lot more money than his father does. He said you need the passion to achieve and the willingness to fail.
World of Warcraft was also presented as an example of how important it is for players to pay attention to the social life on the edge of the game (the knowledge economy). The World of Warcraft mantra is "If I'm not learning, then it ain't fun." Gamers like to be measured so they can see their improvement. There is much in-game and out-of-game learning going on: players use graphic dashboards and analysis tools created by other users, and after-action reviews are conducted to rate how raid team members performed. We need to spend more time on the tacit rather than the cognitive structures of learning. Speed chess was cited as an example of how to learn patterns more quickly.

Captioning Videos Made Easy

| | Comments (9)
In my last post I looked at using YouTube's auto-captioning. After using it a few times, I've found that it takes just as much time or more to fix its mistakes as it does to start from scratch. After captioning Michael Wesch's keynote for the TLT Symposium, I was looking for a better way. Even though MovCaptioner makes it much easier to create captions, the business of typing them in can be tedious. I can type fairly quickly, but the process just wears you out when you're doing a long video, such as a keynote address that runs around 52 minutes, so I went looking for other methods.

I have a copy of MacSpeech Dictate, a speech-to-text program, that I thought I could somehow employ for this. I tried opening the movie in QuickTime and repeating what I heard through the headphones into the mic, so that MacSpeech could type what I said; I could then import the resulting text file into MovCaptioner and synchronize the text with the video. This proved to be unworkable and actually crashed my computer at one point (that may have been due to using the 64-bit QuickTime X, but I'm not sure). So I thought I'd try loading the movie onto my iPod Touch and controlling the movie from there. This was no good either, as I couldn't control the movie precisely: if I missed what was being said, I couldn't easily rewind without going back too far or not far enough. So I gave up on that idea and went back to my laptop.

I thought I'd try setting a few captions using MacSpeech and importing them. While I had MovCaptioner open, I activated MacSpeech and hit the Start button in MovCaptioner. To my surprise, MacSpeech began typing what I said right into the MovCaptioner text area. This made it much easier: no typing! As MovCaptioner looped through the first 4 seconds of video, I merely had to repeat what I heard through the headphones into the mic, and there it appeared. Once I saw the caption was correct, I simply hit the Return key to go on to the next few seconds. This really was easy. The biggest problem is that I tend to slur my words a lot, but MacSpeech did a great job of transcribing regardless. If I remembered to take my time and pronounce each word carefully, it was in most cases 100% correct, and I didn't have to fix it too often. I think with a little more practice I could do it faster and more accurately. It still takes a long time to caption videos this way, but it's much faster than typing, and much less tiresome, at least for me.
In case you're wondering why I didn't just have MacSpeech transcribe directly from the video: you have to train MacSpeech to your voice for better accuracy, and videos tend to have a lot of ambient sounds that can interfere with the transcription.

I have been trying to caption all the videos that get posted to our PSUTLT YouTube site, but it's tough to keep on top of them, especially the longer ones. So, what I'm thinking is that in order to get all our department's videos captioned, we could hire an intern or wage-payroll person to do it. They would need minimal skills; I'd look for someone with a clear speaking voice and good punctuation. They could concentrate on captioning all the new videos that get posted to our site, and once caught up with the new ones, they could start picking away at the older videos.

I think this is going to be important for several good reasons. 

• Number one is that we will eventually be forced to do this anyway, and we need to position ourselves to be in compliance. House Bill 3101, the 21st Century Communications and Video Accessibility Act, is picking up more and more support in Congress. This bill would require that videos posted on the Internet be captioned.

• Another reason is that it would open up our content to a whole new audience: deaf and hearing-impaired viewers. We have a wealth of good video content available on our YouTube site that they would otherwise be unable to learn from.

• Videos that have caption files associated with them are now searchable by Google and other search engines. This would increase our visibility when people are searching for content that we (and a bunch of other institutions) have. Let's position ourselves as highly on those searches as we can.

• Another important reason is that it's just the right thing to do. I would like to see ITS be an example to other institutions. Not many are doing this, and it affords us an opportunity to be leaders — perhaps not necessarily in the technology or process we use, but in that we consider accessible content important enough to put resources toward it.

• Finally, I think this would also be a great thing to teach our students to do for their classroom video assignments. If we get them to see that creating captions is part of the video process, just as shooting and editing are, we will be training a group of young adults who will go into the workforce with accessibility in mind. Would you create a video without an audio track? No, but that is essentially what you offer deaf people when you don't caption your videos. So we need to start teaching that videos are a combination of video, audio, AND text.

That's word.

YouTube's new auto-captioning feature does the heavy lifting

| | Comments (1)
YouTube has finally opened auto-captioning to all users. It is available on any new video uploaded to YouTube. To see captions for a video, click the red CC button and a translucent menu will open. Click the Transcribe Audio option at the top, and after clicking Yes to the disclaimer, the text will show up at the bottom of the movie automatically. Now, don't get your hopes up too high: depending on the quality of the audio, you will get varying results, and in almost no circumstances will you get anything better than 80% accuracy. Some of the results are quite comical, in fact. Take this movie: http://www.youtube.com/watch?v=2iDciW9n8lo&feature=popular
and at about 36 seconds in, the announcer says "A little crossover move by Carl," which gets transcribed as "prostrate removed their back or old." So it's quite amusing to see what it comes up with. In all fairness to YouTube, the audio on this clip is not the best for transcribing, as there are two different announcers and lots of ambient noise.

At any rate, if you're the one who uploaded the movie, you can go to the movie's Captions link, where you'll get the option to download YouTube's "Machine Transcription" of your movie's audio in text format. It downloads as a file called "captions.sbv.txt". You can open this file in any text editor, correct all the inaccuracies, and then re-upload it to YouTube so you'll have an accurate set of captions for people to read. It also comes in handy if you want to output the captions in other formats, such as plain-text transcripts, Flash DFXP captions, embedded QT captions, or SCC closed captions, by importing the caption file into MovCaptioner on the Mac and exporting to whatever format you want. You will need to load your movie into MovCaptioner, however, to enable the various export options. Then go under MovCaptioner's Import menu and select the YouTube (SBV) import option. It's quickest to correct the text in a text editor before importing. One formatting tip: first export it as Sonic Scenarist (SCC File Only) from MovCaptioner, because that will quickly add line breaks every 32 characters so you don't end up with one very long line followed by one small word on the next line.
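If you'd rather convert the downloaded SBV file to another format yourself without extra software, the format is simple enough to script. Here's a minimal sketch in Python (the function name `sbv_to_srt` is my own; it assumes the usual SBV layout of a "start,end" timestamp line followed by the caption text, with blank lines between cues) that converts SBV captions to the widely supported SRT format:

```python
import re

def sbv_to_srt(sbv_text: str) -> str:
    """Convert YouTube SBV caption text to SRT format."""
    def to_srt_time(t: str) -> str:
        # SBV uses "H:MM:SS.mmm"; SRT uses "HH:MM:SS,mmm"
        h, m, s = t.split(":")
        sec, ms = s.split(".")
        return f"{int(h):02d}:{m}:{sec},{ms}"

    srt_blocks = []
    # Cues are separated by blank lines; the first line of each cue
    # is the "start,end" timestamp pair, the rest is the caption text.
    for i, block in enumerate(re.split(r"\n\s*\n", sbv_text.strip()), start=1):
        lines = block.splitlines()
        start, end = lines[0].split(",")
        text = "\n".join(lines[1:])
        srt_blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}")
    return "\n\n".join(srt_blocks) + "\n"
```

Feeding it the contents of "captions.sbv.txt" (after you've corrected the text in an editor) should give you a numbered SRT file that most players and captioning tools can read directly.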

Although the YouTube machine transcripts are often inaccurate, I believe they're a great step in the right direction and may turn out to be a great tool for making more video content accessible, not only to YouTube viewers but also to viewers of other formats (TV broadcasts, embedded QT, DVDs, etc.), with the help of other captioning software that can import these files.