January 2009


This is a great time, with many new (and some not so new) video-related technologies coming to market. Many of these are service-based and cover everything from capture, management, editing, and tagging to distribution and monetization of video.

While doing research recently, I wanted to at least capture and list technologies I have come across that sounded interesting. Considering this is a big space, these are only a drop in the bucket – so please comment with more!

This list does not imply endorsement or confirmation of their products’ capabilities. Considering the sophistication of these products, you should see their web sites for complete product information.

  • Omnisio – provided the ability to create your own video applications by editing and mixing with non-video elements such as onscreen comments and slide synchronization. They were acquired by Google last summer and merged into YouTube, but I can now only find the text-annotation capability online.
  • Jumpcut – a consumer- and community-targeted service for online editing, remixing, and publishing of videos and images
  • JayCut – another consumer- and community-targeted service for online editing, remixing, and publishing of videos and images
  • GorillaSpot – their SpotMixer platform provides a turnkey solution to allow for “user-generated generation” of video to share via email and social networking sites.
  • Multicast – provides a service for the delivery and monetization of live and on-demand video content
  • Pathfire – from their site:

…provider of digital media distribution and management solutions to the television, media and entertainment industries. The Pathfire solution—which includes a robust distribution network, flexible hardware solutions and innovative software applications—delivers unprecedented control for both content providers and stations…

 The company’s proprietary computer-vision based video indexing, search and interpretation algorithms empower content owners and publishers to efficiently monetize their digital video content, and advertisers to automatically target ads to thematically relevant video content.

  • Spinvox – a voice-to-text service
  • Inlet Technologies – provides encoding, transcoding, and streaming solutions & services which “enables new media for new networks”
  • Ooyala – a video platform providing delivery, analytics, syndication, advertising, and interactive video features.
  • Blinkx – an early pioneer in video search which “uses a unique combination of patented conceptual search, speech recognition and video analysis software to efficiently, automatically and accurately find and qualify online video.”

Please comment and share new technologies you have seen, and I will continue to post more as I come across them.

With advances in new media and online collaboration, I find it interesting to consider how new media might be used in the corporate world of tomorrow. No doubt there are uses we cannot yet conceive, but it’s always worth trying to imagine them. As a kid, one of my favorite activities was drawing pictures of “cities of the future” which looked more like “cities in outer space” – it’s fun to imagine the impossible, considering it may actually become real.

Here is my initial list of ideas. Some may already be underway outside corporate walls and will eventually move behind them.

  • Use of robust and integrated social networks for identification of content
  • Video-based presentations – the PowerPoint of the future?
  • Video for how-tos, support, etc.
  • Video email – or are words simpler and better for email/chat?
  • Video-based LAN navigation – like a virtual office to help organize information. Users who speak different languages might be able to navigate storage by pictures instead of words…
  • Blending of document & video formats – maybe automatic conversion between spoken/written/visual
  • Video reports – could time-based media provide a richer reporting format? Instead of an Excel spreadsheet, you get a media clip with integrated navigation.

Are these crazy ideas? How about suggesting more so I can update the above list with credit to each of you?

Have you ever felt that navigating the web is like living life around town with blinders on? I spend a great deal of time on the net doing research, reading, and shopping – and even with exhaustive searching, I continue to be “surprised” by new sites and information I have never seen before. It can feel like you are working with very narrow tunnel vision.

This got me thinking about how, in the physical world, an important way we learn is simply through awareness of our surroundings. We go to the video store and see that a new restaurant has opened in the plaza, we drive to work and see a new electronics store from the highway, we go to lunch and see/smell something new a friend is eating… We pick up a lot about secondary topics while pursuing a possibly unrelated primary topic.

How can browsers or tools evolve to take on this behavior pattern?

Browsers originally evolved as tools for delivering page-oriented information – think newspapers, magazines, etc. – and have now become capable of supporting dynamic information and controls, thereby enabling rich applications. In the physical world, these activities (reading a newspaper, using an application) do not replace being out in it. But as the internet grows, one can spend more and more time on it and less time out and about in the physical world. Folks do their work, shopping, reading, book buying, food ordering, etc. – all from the internet and often from home.

Should browsers provide more of a “3rd Dimension” of information by offering knowledge of our “surroundings,” based on the browser’s knowledge of the user and what the user is doing?

We are starting to see initial concepts here with tools such as tags, blogs, ratings, and Twitter – but these rely on others. Instead, it would be great if my window on the web could tell me about an information source that is new to me.
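
As a rough sketch of that last wish (and only a thought experiment – not a feature of any existing browser), the Java snippet below imagines a browser-side helper that remembers which sources a user has already visited and flags links pointing somewhere new to them. All class and method names are hypothetical.

    import java.net.URI;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch: track the hosts a user has visited and flag
    // sources that are new to that user. Names are illustrative only.
    public class SurroundingsTracker {

        private final Set<String> knownSources = new HashSet<String>();

        // Record the host of every page the user visits.
        public void recordVisit(String url) {
            knownSources.add(hostOf(url));
        }

        // True if this link points at a source the user has never seen before.
        public boolean isNewToUser(String url) {
            return !knownSources.contains(hostOf(url));
        }

        private String hostOf(String url) {
            return URI.create(url).getHost();
        }
    }

A real version would need persistence, privacy controls, and some notion of relevance, but even this much captures the “tell me what is new to me” behavior.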

Enterprise applications present interesting challenges to those tasked with defining the architecture and design of the application. In some cases the application is solving a well-known problem in a better way; in other cases it is solving a still-evolving problem. In addition, more and more customers are seeking out-of-the-box (OOB) applications for their enterprise solutions – so they expect to meet their needs with limited “customization.” This begins a series of posts discussing the architecture of enterprise applications.

Regardless of the actual problem being solved, there are many non-functional requirements which should be accounted for in the system architecture. When building these applications, the “architecture” will be a significant factor in the success of the application and of the organization building and supporting it, especially as it relates to the non-functional requirements.

Below are some of the non-functional requirements which I have found to be very important to customers and highly dependent upon a good architecture.

  • Flexibility.  Customers generally need a high degree of flexibility. This includes the business rules embedded in the application, configuration, look & feel, etc. (a small configuration sketch follows this list).
  • Integrations.  Generally there is a need to connect the new application to existing tools and processes. It is not unusual for these tools to be proprietary or to be dated 3rd-party applications.
  • Scalability.  Applications must scale to support large numbers of users. These users are often dispersed around the globe and may include users from external companies.
  • Global.  This should not be confused with scalability; it reflects the reality that users are increasingly spread across the world. This requirement impacts how the system must interact with users.
  • Dependability.  The application must be dependable – in both availability and data integrity. Users have come to depend on software and have a low tolerance for outages or data loss. In some cases there are also regulatory rules at play.
  • Long Install Shelf Life.  An “installation” will have a long shelf life – customers cannot afford to re-install every year. It is not unusual for enterprise applications to be in use for 3-5 years.
  • Long Development Shelf Life.  I have listed this separately because, while it sounds similar to the above, it should be considered on its own. This requirement relates to the reality that a product will be under development for many years, and modules of code may exist for 5-10 years before being rewritten. In addition, this code/architecture will often outlive several generations of engineers within an organization.
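
To make the flexibility requirement above a bit more concrete, here is a minimal sketch of one common approach: keeping business rules and thresholds in external configuration rather than hard-coding them, so an installation can be tuned without a new release. The class, file, and property names are hypothetical examples, not a prescription.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Hypothetical example of externalized business rules: thresholds live in a
    // properties file that each customer can tune, not in compiled code.
    public class ApprovalRules {

        private final Properties rules = new Properties();

        public ApprovalRules(String configPath) throws IOException {
            FileInputStream in = new FileInputStream(configPath);
            try {
                rules.load(in);
            } finally {
                in.close();
            }
        }

        // Example rule: orders above a configurable limit require manager approval.
        public boolean requiresManagerApproval(double orderAmount) {
            double limit = Double.parseDouble(rules.getProperty("approval.limit", "10000"));
            return orderAmount > limit;
        }
    }

The same idea extends to look & feel (themes, templates) and to integrations (adapters selected by configuration) – part of why a good architecture pays off over a long install shelf life.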

Can you think of others? The next post will begin to consider how these requirements impact the system, along with ideas for meeting the long-term needs.

What is it that makes the best software engineers? I expect this question yields strong opinions and varying answers. Over the years I have been a part of various processes and tools, all created to help make better engineers, but I often wonder how effective these are. At times it feels like the old saying: “you can lead a horse to water, but you can’t make it drink.” Ultimately I see something more intangible that exists in the best engineers. The processes & tools then help these engineers work better together, and in some cases raise the bar for others.

Measurement of software engineers can include such metrics as the number of defects, code maintainability, schedule, cost, performance, and usability. While many books, processes, and classes have emerged over the years to improve each of these, underlying them is a set of behaviors critical for those processes to yield results. My experience has shown that the following intangibles exist in the best software engineers:

Desire to specifically understand what code is actually doing.  I am amazed how often engineers will work within a code base with limited knowledge of what the code “around them” is actually doing. They rely on what a couple of tests demonstrate or what others have told them. These engineers may be using 3rd-party libraries or extending existing code. The engineer who has the natural interest to understand how it actually works will yield substantially better results. For example, when one is enhancing existing code (i.e., adding new features), do they “wander” around the code to get a sense of the impact/risk and how the structures and algorithms are actually used? This wandering may include stepping through extensive code in a debugger and building specific test harnesses to observe code behavior. This knowledge is what helps ensure new code does what is expected with minimized side effects in the system.
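
As a small, hedged illustration of that kind of throwaway harness, the sketch below simply prints what library rounding code actually does for a handful of inputs; the same pattern applies when studying in-house or 3rd-party code before extending it.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    // A tiny throwaway harness: print what the code actually does for a handful
    // of inputs instead of relying on assumptions or hearsay.
    public class RoundingHarness {
        public static void main(String[] args) {
            double[] samples = {2.5, -2.5, 2.675};
            for (double value : samples) {
                long rounded = Math.round(value);
                BigDecimal scaled = new BigDecimal(value).setScale(2, RoundingMode.HALF_UP);
                System.out.println(value + " -> Math.round = " + rounded
                        + ", BigDecimal HALF_UP(2) = " + scaled);
            }
        }
    }

Even a few inputs surface surprises (for example, the double literal 2.675 is not stored exactly), which is precisely the kind of first-hand knowledge that prevents side effects later.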

Naturally ensure code is maintainable – even fixing others’ code.  This includes consistency in formatting, clarity in naming, thorough functional/algorithm comments, consistency with code organization and architecture standards, and refactoring when possible. This is not something one performs after the fact; it must be part of how the code is originally written. Even with the IDE code formatters in use today, I believe how an engineer handles this reflects their approach to writing code. Software is precise and requires thinkers who are precise and well organized – and who expect this from others. This focus on maintainability is what ensures new and updated code remains maintainable for years to come.

Can decompose a complex problem into simple sub-problems.  Does one naturally notice common code patterns and extract them into a reusable block, or do code reviews show evidence of regular copy/paste? While this can be (and is) taught, it is generally something one “gets” and applies, or they do not. Part of the challenge here is being able to identify “almost similar” bits of code and realize they are performing a similar function with some variable modifier. How the problem is viewed and solved originally will determine how easily it can be enhanced and maintained in the future.
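
As a hedged illustration of that habit, imagine two near-identical reporting loops that differ only in the threshold they test. The sketch below (with deliberately simplified, hypothetical names) collapses the copy/paste pair into a single helper, with the varying piece passed in as a parameter.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical refactoring: two "almost similar" loops (large orders vs.
    // very large orders) reduced to one method parameterized on the threshold.
    public class ReportFilter {

        public static List<Double> amountsAbove(List<Double> amounts, double threshold) {
            List<Double> result = new ArrayList<Double>();
            for (double amount : amounts) {
                if (amount > threshold) {
                    result.add(amount);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<Double> amounts = Arrays.asList(50.0, 250.0, 1200.0);
            System.out.println(amountsAbove(amounts, 100.0));   // the old "large orders" loop
            System.out.println(amountsAbove(amounts, 1000.0));  // the old "very large orders" loop
        }
    }

The point is less the handful of lines saved than the single place where future changes and bug fixes now have to happen.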

Seek to learn and enhance their programming skills.  How many software engineers actually take some time – say even one hour per week – to read and follow any number of very informative web sites? Folks may be aware of these sites and might even visit them once in a rare while – but which engineers on your team actually take time each week to read and digest something new? This can be everything from what’s new with relevant technologies to best practices for various programming methodologies. The key here is someone who desires to advance their abilities on their own.

Understand their own limitations and collaborate around solutions.  Here I have seen examples on both ends of the spectrum – engineers who are like deer in headlights and cannot move without getting input before each step, and others who are unable to sense when they have exhausted their own knowledge/experience and need help. I can see how this attribute can also be heavily influenced by the environment, including culture, processes, and past experiences.

There is no mention of languages, operating systems, colleges attended, years of experience, technologies used, size of products and/or teams… I believe it is more about someone who has that natural curiosity to understand how a system works, the innate attention to detail to build maintainable code the first time, and the aptitude to understand large and often complex problems.

Ultimately the question remains – how much of this is natural and how much can be taught effectively?

I hope you will enjoy and participate in this blog.