Well, last night I took part in an event that I won’t soon forget.  In case you missed it (and I almost did), U2 broadcast their concert from the Rose Bowl in LA last night online via YouTube live for the world to watch – and FREE.  As I watched, it really felt like a new dawn (although it actually was very late for me), with many technologies that have come of age over the past few years being put to the full test to share an experience globally.

First, a note on how I found out about this.  Maybe my head had been in the sand, but I was not aware of this event until shortly before, when I saw notes coming across my Twitter feed (which is quickly becoming my news and information source).  I am no U2 expert or even one who follows them closely, but I recall many of their songs from my youth and thought this might be fun to watch – I had no idea what I was in for!

I followed a link, logged on, and within minutes the band took the stage around 12am EST.  Now, this being U2, many expected nothing short of a great show.  Certainly for those attending live in LA the show had to have been phenomenal considering the stage, lights, 360-degree rotating screens, energy, etc. – the list goes on.

But what about those of us watching online?  Generally the online community ends up getting the “scraps” when it’s a live broadcast, as the show is produced for the live audience.

Not this show!  The sound quality was incredible – well mixed, clear, crisp.  The video production was also great.  I watched it full-screen and the clarity was superb almost the entire time, with incredible camera angles and effects that brought the show home to me.  The online broadcast worked for me without any issues, which seems like a major accomplishment as I assume quite a few were watching globally.  I definitely felt like a first-class participant receiving the same attention to quality as those in attendance.

While I’m no expert in U2 songs, I did hear messages that night touching on violence, war, sickness, and most importantly God.  They sang one of my favorite songs, “Amazing Grace,” as an intro to “Where the Streets Have No Name.”  It was incredible hearing a message of God’s love and grace for all of us being sung for the world, even though we are all lost and blind.  I know that God’s love can move mountains in our world of pain and brokenness.

And beyond watching and listening was the social experience.  The Twitter feeds were alive with posts in various languages from across the globe.  Folks were posting where they were watching from, and the band referenced several remote simulcasts (I think).  With a global audience, U2 used this experience to communicate on important social issues including hunger and democracy!  They had Nelson Mandela speak via video and gave special focus to the Burmese democracy activist Aung San Suu Kyi, even providing everyone signs to hold up.  At one point they also spoke directly to anyone who might be online from Iran.  And there was what appeared to be a live message from the International Space Station.

My only negative was that at one point Bono took an American flag from an assistant, opened it up on the stage floor, and was nearly lying on and walking on it.  I don’t think Bono was trying to be disparaging, but this is definitely not proper handling of our flag.

I’m not personally equipped to gauge the impact this event may yield – but realizing that folks across the globe were watching left me feeling that, at a minimum, someone watching who is facing repression may have received encouragement.  And for those of us living in freedom, it was an important reminder of the value of our freedom and of the battle that others face for their own.

In case you missed it – here is the YouTube link where it will be rebroadcast –  http://www.youtube.com/user/U2official

If you’re an audiophile you may enjoy this post on the sound setup – http://clairglobal.com/u2/

Digital media applications (sometimes called DAM or MAM) are designed to interact with digital media – video, images, audio – and face a number of challenges where the application architecture plays a significant role.  These challenges come from the nature of the bits themselves (there are a lot of them), the number of supporting technologies, established workflows (many of which have evolved over years of managing content), and a rapidly evolving industry.  While this topic could fill volumes, my goal here is to highlight some of the noticeable requirements I have seen recently.

VLF – Very Large Files

Rich media files, be they high-res print-ready images or high-def video, are very large in size and often not well suited for direct interaction with end users.  Considering the master files can be 100s of GB in size, the architecture of the application should support a number of requirements, including the following (a rough sketch follows the list):

  • Content delivery by a separate application to help optimize movement (streaming servers, CDNs, storage services)
  • Ensuring operations across processes minimize content movement
    • While still supporting the processes that do need to touch/process content
  • The pre-existence of a large “library” of content – see the point above about minimizing movement
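
To make the delivery point concrete, here is a minimal sketch in Python of the pattern I have in mind; the names (CDN_BASE, signed_delivery_url, the rendition paths) are hypothetical, not any particular product.  The application tier works against small proxy renditions and hands out short-lived signed URLs so a CDN or storage service moves the heavy bytes, keeping the multi-GB masters where they are.

```python
# Sketch only: the names below (AssetStore-style paths, CDN_BASE,
# signed_delivery_url) are hypothetical, not any specific DAM product.
import hashlib
import hmac
import time

CDN_BASE = "https://cdn.example.com"   # assumed delivery endpoint
SIGNING_KEY = b"demo-secret"           # would come from configuration

def signed_delivery_url(asset_path: str, ttl_seconds: int = 300) -> str:
    """Build a short-lived URL so the CDN/storage tier moves the bytes,
    not the DAM application itself."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{asset_path}:{expires}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{CDN_BASE}/{asset_path}?expires={expires}&sig={signature}"

# The application works against small proxy renditions; the multi-GB
# master stays put until a process truly needs to touch it.
renditions = {
    "master": "masters/spot_001/master.mov",      # 100s of GB, never streamed by the app
    "proxy":  "proxies/spot_001/proxy_480p.mp4",  # what end users actually interact with
}

if __name__ == "__main__":
    print(signed_delivery_url(renditions["proxy"]))
```

The design choice is simply that the application never sits in the data path for the master file unless a process genuinely has to touch it.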

Content Processing Technologies – legacy & emerging

Rich media content processes depend heavily on many 3rd-party technologies for everything from transformation and manipulation to delivery, editing, compression, and production.  The specific technology decision can be influenced by factors including legacy implementations, support for specific file types (sometimes even variations of file types generated by a specific program), limitations of other integrated technologies, etc.  Realizing the application cannot possibly embed all technologies, the architecture should provide for the following (a rough sketch follows the list):

  • Practical integration of 3rd-party technologies at key points within workflows.  More and more, these points can be anywhere within the flow.
  • Support for legacy or proprietary technologies still in use today for content processing
    • It may not be possible to force the use of a particular content processing technology
  • Ability to integrate with emerging technologies
    • Both as libraries and as SaaS offerings
  • Recognition that 3rd-party technologies may not be platform independent
  • Support for atomic transactions across multiple technologies, hardware, and systems
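
One common way to cover several of these points is a thin adapter layer.  The sketch below is only an illustration with made-up names, assuming each processing technology – a legacy in-house tool, a hosted SaaS transcoder – sits behind the same small interface so a workflow can invoke it at any point without caring where or how it runs.

```python
# Sketch only: the adapter names and "profiles" below are hypothetical
# placeholders, not real products or APIs.
from abc import ABC, abstractmethod

class ContentProcessor(ABC):
    """Single interface a workflow step calls, regardless of whether the
    underlying technology is an in-process library, a legacy tool bound to
    specific hardware, or a SaaS service."""

    @abstractmethod
    def process(self, input_ref: str, profile: str) -> str:
        """Take a reference to content, return a reference to the result."""

class LegacyToolAdapter(ContentProcessor):
    """Wraps a proprietary tool that may only run on one platform."""
    def process(self, input_ref: str, profile: str) -> str:
        # A real adapter would shell out to the legacy tool; here we only
        # record the intent so the sketch stays self-contained.
        return f"{input_ref}.{profile}.from-legacy-tool"

class SaasServiceAdapter(ContentProcessor):
    """Wraps a hosted transcoding/analysis service reached over HTTP."""
    def process(self, input_ref: str, profile: str) -> str:
        # A real adapter would submit a job and poll for completion.
        return f"{input_ref}.{profile}.from-saas"

def run_step(processor: ContentProcessor, input_ref: str, profile: str) -> str:
    # Workflows depend only on ContentProcessor, so a technology can be
    # swapped at any point in the flow without touching workflow logic.
    return processor.process(input_ref, profile)

if __name__ == "__main__":
    print(run_step(LegacyToolAdapter(), "assets/spot_001/master.mov", "h264_720p"))
    print(run_step(SaasServiceAdapter(), "assets/spot_001/master.mov", "h264_720p"))
```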

Workflows – often established and complex

The issues detailed above have led to many creative and often custom solutions.  Given that the resulting content often has high value to an organization’s business (recall that rich media is often the monetized product), these processes become established and relied upon across the organizations within a business (i.e. creative, legal, distribution, archiving).  In order to best support customers’ needs, expectations, and initial roll-outs, the system architecture should provide for the following requirements (a rough sketch follows the list):

  • Support for modeling long-established workflows already in operation.  This requires a high degree of flexibility, as legacy workflows may have originated as custom code with essentially unlimited capability.
  • Ability to incorporate into workflows the new “services” coming to market around rich media features.
  • Support for the multiple “media renditions” that often exist, each with a different workflow/tool required for delivery
    • For example, FPO images in the print world and low-res vs. hi-res video proxies
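
As a rough illustration of the flexibility requirement (all step and field names here are hypothetical), a workflow in this space can be modeled as an ordered list of pluggable steps operating on a shared context, so a long-established custom process can be recreated step by step and new rendition- or service-specific steps dropped in later.

```python
# Sketch only: step names, field names, and paths are made up for illustration.
from typing import Callable, Dict, List

Context = Dict[str, str]
Step = Callable[[Context], Context]

def run_workflow(steps: List[Step], ctx: Context) -> Context:
    """Run pluggable steps in order; legacy custom logic can be wrapped
    as just another step rather than forcing a rewrite."""
    for step in steps:
        ctx = step(ctx)
    return ctx

def make_lowres_proxy(ctx: Context) -> Context:
    # Derive a low-res rendition reference from the master reference.
    ctx["proxy"] = ctx["master"].replace("master", "proxy_lowres")
    return ctx

def legal_review(ctx: Context) -> Context:
    ctx["legal_status"] = "pending"  # hand-off to the legal organization
    return ctx

def deliver_for_print(ctx: Context) -> Context:
    # Print delivery would use an FPO rendition if one exists; video
    # delivery would be a different step against a hi-res proxy.
    ctx["delivered"] = ctx.get("fpo", ctx["proxy"])
    return ctx

if __name__ == "__main__":
    result = run_workflow(
        [make_lowres_proxy, legal_review, deliver_for_print],
        {"master": "campaigns/spring/master.tif"},
    )
    print(result)
```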

Search is interactive

Originally, search meant taking a word/phrase and matching it against an index for results.  While this works especially well for text-based assets, in the media world you are often looking for the emotional connection of “did I find the right asset for the right purpose.”  Finding the right result requires a more interactive process than simply matching a word to an index – it is about the process of searching, understanding, and refining a set of results for which I have permissions.  Architecture requirements to support this may include the following (a rough sketch follows the list):

  • Low latency on search operations.  Users will often need to leverage search while they are working with assets, for example to review changes.  In addition, they will use the search process in a very dynamic manner to identify the right asset.
  • Support for dynamic structured metadata – while structure is important, it will change.
  • Relationships are critical – often the find process involves understanding how an asset was previously used and its relationship to other assets.
  • Ability to interact with your search in real time via concepts such as narrowing, filtering, clustering, etc.
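
To show what I mean by interactive, here is a tiny in-memory sketch with made-up assets and field names: the initial keyword match is permission-filtered, and each refinement (facet counts, narrowing) works against the previous result set rather than starting over, which is exactly why low latency and flexible metadata matter.

```python
# Sketch only: the asset records and field names are invented for illustration.
from typing import Dict, List, Set

ASSETS: List[Dict] = [
    {"id": "a1", "title": "beach sunset", "type": "image", "campaign": "summer", "acl": {"creative"}},
    {"id": "a2", "title": "beach volleyball", "type": "video", "campaign": "summer", "acl": {"creative", "legal"}},
    {"id": "a3", "title": "city night", "type": "video", "campaign": "fall", "acl": {"creative"}},
]

def search(text: str, user_groups: Set[str]) -> List[Dict]:
    """Initial keyword match, filtered to assets the user may see."""
    return [a for a in ASSETS if text in a["title"] and a["acl"] & user_groups]

def narrow(results: List[Dict], **filters: str) -> List[Dict]:
    """Refine the current result set (e.g. by type or campaign) instead of
    re-running the whole search; this is the interactive part of finding assets."""
    return [a for a in results if all(a.get(k) == v for k, v in filters.items())]

def facets(results: List[Dict], field: str) -> Dict[str, int]:
    """Counts that drive the narrowing/clustering UI."""
    counts: Dict[str, int] = {}
    for a in results:
        counts[a[field]] = counts.get(a[field], 0) + 1
    return counts

if __name__ == "__main__":
    hits = search("beach", {"creative"})
    print(facets(hits, "type"))        # {'image': 1, 'video': 1}
    print(narrow(hits, type="video"))  # refined, still permission-filtered
```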

This is a great time, with many new (and some not so new) video-related technologies coming to market.  Many of these are service-based and cover everything from capture and management to editing, tagging, distribution, and monetization of video.

In doing research recently, I wanted to at least capture and list technologies I have come across that sounded interesting.  Considering this is a big space, these will only be a drop in the bucket – so please comment with more!

This list does not imply endorsement or confirmation of these products’ capabilities.  Considering the sophistication of these products, you should see each vendor’s web site for complete product information.

  • Omnisio – provided the ability to create your own video applications by editing and mixing video with non-video elements such as onscreen comments and slide synchronization.  They were acquired by Google last summer and merged into YouTube, but I can only find the text annotation capability online now.
  • Jumpcut – consumer- and community-targeted service for online editing, remixing, and publishing of videos and images
  • JayCut – another consumer- and community-targeted service for online editing, remixing, and publishing of videos and images
  • GorillaSpot – their SpotMixer platform provides a turnkey solution for “user-generated generation” of video to share via email and social networking sites.
  • Multicast – provides a service for delivery and monetization of live and on-demand video content
  • Pathfire – from their site:

…provider of digital media distribution and management solutions to the television, media and entertainment industries. The Pathfire solution—which includes a robust distribution network, flexible hardware solutions and innovative software applications—delivers unprecedented control for both content providers and stations…

 The company’s proprietary computer-vision based video indexing, search and interpretation algorithms empower content owners and publishers to efficiently monetize their digital video content, and advertisers to automatically target ads to thematically relevant video content.

  • Spinvox – voice-to-text service
  • Inlet Technologies – provides encoding, transcoding, and streaming solutions & services which “enables new media for new networks”
  • Ooyala – video platform providing delivery, analytics, syndication, advertising, and interactive video features.
  • Blinkx – early pioneer in video search which “uses a unique combination of patented conceptual search, speech recognition and video analysis software to efficiently, automatically and accurately find and qualify online video.”

Please comment and share new technologies you have seen and I will continue to post more I come across.

With advances in new media and online collaboration, I find it interesting to consider how new media might be used in the corporate world of tomorrow.  No doubt there are uses we cannot yet conceive, but it’s always worth trying to imagine them.  As a kid, one of my favorite activities was drawing pictures of “cities of the future” that looked more like “cities in outer space” – it’s fun to imagine the impossible, considering it may actually become real.

Here is my initial list of ideas.  Some may already be getting started outside corporate walls and will eventually move behind them.

  • Use of robust and integrated social networks for identification of content
  • Video-based presentations – the PowerPoint of the future?
  • Video for how-tos, support, etc.
  • Video email.  Or are words simpler and better for email/chat?
  • Video-based LAN navigation – like a virtual office to help organize information.  Users who speak different languages might be able to navigate storage by pictures instead of words…
  • Blending of doc & video formats – maybe automatic conversion between spoken/written/visual
  • Video reports – could time-based media provide a richer reporting format?  Instead of an Excel spreadsheet, you get a media clip with integrated navigation.

Are these crazy ideas?  Send me more so I can update the above list with credits to each of you.