One of the central challenges of traditional software development has been understanding and estimating software through problem decomposition.  This process involves taking a complex problem specification and recursively decomposing it until you have a set of problems that are well understood.  Ironically, it is much like how we are taught to solve problems in advanced math disciplines such as differential equations (everyone should try one of these classes).

Unlike a math problem with a concrete specification, software concepts tend to be much less well defined.  Traditional software methods depend on designers, architects and leads performing the decomposition required to produce specific designs and exact estimates.  Unfortunately, this cannot happen without a complete specification.  While this problem should be obvious, teams often fall prey to pressure from those removed from the process who do not understand it.

Agile processes embrace the reality that the specification is a work in progress and instead focus on a compositional process whereby a solution is built up one step at a time.  Instead of wasting time and resources on an arbitrary big-picture definition (which may not be right from the outset), the focus shifts to building up a solution with regular checks and balances.

Would you agree that Agile effectively helps reverse this backwards approach?  How might this change your approach to leveraging Agile?


The value of traditional ECM systems is well documented.  In addition, more and more organizations are seeing the enabling value of social networking tools within the corporate enterprise.  Does the marriage of ECM content and social networking create a new, untapped monetization opportunity for the enterprise?

While recently considering monetization options for Twitter, I started to ask whether similar ideas might apply to ECM.  Twitter has a wealth of information around content, location, relationships and trends – all of it very current.  Similarly, within the enterprise there exists a wealth of information and knowledge across ECM systems, including content sources and owners, social platforms (wikis, blogs, micro-blogging, IM), organizational knowledge, communication tools, queries, workflows and relationships.

This knowledge becomes even more powerful when connected together!  Imagine how the interconnected knowledge from combining ECM content and social networking could create a new set of opportunities.  Could this knowledge help identify corporate trends (like morale, confusion, duplication), improve efficiency by matching up related information (teams, projects, people), or possibly even identify risky or illegal transactions?

Each of these scenarios could be mapped to value – the cost to rehire and retrain, the cost of duplicate efforts, the cost of lawsuits.  There remains an important balance around privacy, but I expect this will be manageable.

What do you think?  Do you agree this marriage creates new opportunities to capture value from knowledge?  This also further strengthens the case for enterprises first breaking down collaboration barriers internally by enabling and encouraging the use of social tools in conjunction with ECM suites.

There has got to be a better way!  Having spent years building large business applications, I believe the time has come for the traditional monolithic application model to become a thing of the past… and be replaced by a new concept I have labeled a “mash-app” – much like a mash-up, but different.  First I’ll clarify what I mean by enterprise business applications, then consider the challenges this model creates and how they can be overcome with new architectures heavily influenced by SOA and Web 2.0.  Of course this requires some change in mindset as to what constitutes an “application.”

Enterprise Applications:  They’re broken

The term Enterprise Application takes on various meanings, but for the sake of this discussion it means an application used to perform mission critical business functions across the organizations of a business.  While the specific functions will vary from business to business, they generally include requirements around high availability, scalability, flexible and strong security, robustness of features, ability to model a customer’s business processes and ability to integrate with other technologies already part of the business process.

As these applications have grown and become more complex, they have effectively started to bulge and crack at the seams.  Years of growth, acquisitions and technology advances have pushed this type of software to the point where it is broken.  This software often falls victim to:

  • Lengthy time to market for new innovations – software that is behind the times.
  • Contamination of existing features when new features are added – over time even the best intentions are hard to sustain as the product becomes unwieldy.
  • Costly upgrade processes which may include re-integrations – more features, more customizations, more integrations as these large installations create upgrade headaches.
  • Unusable interfaces – often evolved over years of adding new features on top of old, using features in ways not originally intended, etc.

Deb Lavoy also shares her thoughts on this in a blog post titled Enterprise Software Has 5 Years to Live.  Many of these challenges are rooted in the traditional desire to produce a single “application”: a single interface, a single base of technology, a single database, a single installation – all ideal, but at what cost ($$ and time)?

Mash-App: An Enterprise Application architecture of the future

A mash-app is a concept where the traditional application, both UI and services, is sufficiently componentized that the final application is effectively a mash-up of the components, while still being delivered as a packaged solution.  These components would be integrated to meet business and functional needs, with integration via services (web services) and user-interface components (URLs, JSON, web services).

One challenge here is a possible change in how we envision an “application.”  We have grown accustomed to the Microsoft Office style of suite integration – one where everything looks almost identical and is so tightly integrated you might not know which technology drives which feature.  While this might be the holy grail of suite integration, it introduces a number of challenges that limit a software business’s ability to innovate.

Leveraging this new mash-app architecture, the larger application will now be decomposed into more distinct service groups, allowing for:

  • True agile delivery of updates to individual components without requiring complete reinstall, configuration, and re-integration of the entire application.
  • Diminished risk of cross-contamination when adding new features, because features are more physically separated.
  • Efficient integration of 3rd-party technologies which at best share an underlying technology.  This becomes particularly important today, when roll-up acquisitions occur frequently to enhance and augment functionality through the innovation of start-ups and other vendors.

Mash-App:  simple example

Let’s suppose we have a document management application which leverages “users” for everything from access control to auditing and workflow.  The application generally provides a means of managing these users (even if they are fed from an external source such as LDAP) to maintain application data.

Leveraging this new model, we define “User Services” as a component – both business services and user-interface services.  These services include the interfaces to define, manage, search for, and display details of user information, and would now effectively be a module.  When a different component of the application needs to interact with user services – for example, to search for a user – it would “call” the search-for-user API (perhaps via a URL), which would present the end user with a search screen and the ability to select 1..N users.  Upon completion, the component would return control to the caller, providing the key identifying information.

With this approach, the vendor could easily decide to enhance the user services and deliver an update of this component with limited/managed risk to other parts of the application.  I believe this approach could be extended to the other functional groups within the application.
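The call-and-return interaction between components described above can be sketched roughly as follows.  This is a minimal illustration only – all names (UserServices, WorkflowComponent, etc.) are hypothetical, not part of any real product API:

```python
# Hypothetical sketch of the "User Services" mash-app component.
# All names here are illustrative, not a real API.

class UserServices:
    """A self-contained component owning all user-related behavior."""

    def __init__(self, directory):
        # 'directory' stands in for the backing store (e.g. an LDAP feed).
        self.directory = directory

    def search_users(self, query):
        """The search-for-user entry point another component would 'call'.

        In a real mash-app this might be invoked via a URL and render a
        search screen; here it simply returns matching user keys."""
        return [uid for uid, name in self.directory.items()
                if query.lower() in name.lower()]


class WorkflowComponent:
    """A separate component that needs a user, but knows nothing about
    how users are stored or searched."""

    def __init__(self, user_services):
        self.user_services = user_services

    def assign_task(self, task, query):
        # Delegate the search to User Services; only key identifying
        # information (user IDs) crosses the component boundary.
        selected = self.user_services.search_users(query)
        return {"task": task, "assignees": selected}


directory = {"u1": "Alice Smith", "u2": "Bob Jones", "u3": "Alice Wong"}
workflow = WorkflowComponent(UserServices(directory))
result = workflow.assign_task("review contract", "alice")
print(result)  # {'task': 'review contract', 'assignees': ['u1', 'u3']}
```

The point of the sketch is the boundary: the workflow component never touches user storage or the search UI, so the vendor can replace the internals of User Services without re-integrating its callers.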

Mash-App:  so what do you think?

Imagine entire applications built using this architecture!

While it might take time to achieve fully (unless building from scratch), a movement in this direction will allow for a more agile approach to software development and delivery.  New acquisitions will deliver value to customers more quickly.  And even delivering “software+services” through mash-ups becomes more achievable when the core application is designed with a similar architecture.

I finally had one of those “ah-ha” moments, appreciating how micro-blogging (Twitter, Yammer, etc.) is basically “hall conversation meets the internet!”

In years past, much collaboration occurred in the halls at work (or around the coffee machine).  During these ad-hoc discussions folks share what they are working on, maybe something new they have seen, possibly even something personal.  The key was short and often disjoint bits of information, frequently valuable to your work.  As you came and went you would pick up parts, some days more than others, and might follow up on something you heard.

And then there was Twitter… and Yammer…

These new technologies and communities provide almost an electronic “coffee machine” around which discussion occurs.  While it took me a short while to appreciate and understand, this is clearly part of the significant change underway in how the internet is used.  And now the scope of people with whom I can have ad-hoc conversations has grown to be global.  This is really exciting – I can see this filling a critical gap in distributed work environments.

So is there a negative?  Does this promote further distancing of folks from developing good interpersonal skills?  Take a look at this article at Mashable: http://mashable.com/2009/02/10/mobile-dating-stats

Digital media applications (sometimes called DAM or MAM) are designed to interact with digital media – video, images, audio – and face a number of challenges where the application architecture plays a significant role.  These challenges come from the nature of the bits themselves (there are a lot of them), the number of supporting technologies, established workflows – many of which have evolved over years of managing content – and the rapidly evolving industry.  While this topic could fill volumes, my goal here is to highlight some of the notable requirements I have seen recently.

VLF – Very Large Files

Rich media files, be they high-res print-ready images or high-def video, are very large and often not well suited for direct interaction with end users.  Considering that master files can be 100s of GB in size, the architecture of the application should support a number of requirements, including:

  • Content delivery by separate application to help optimize movement (streaming servers, CDN, storage services)
  • Ensuring operations across process minimize content movement
    • While still supporting that some processes will need to touch/process content
  • The pre-existence of a large “library” of content – see point about minimizing movement
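One common way to satisfy the first two requirements – delivery by a separate application and minimal content movement – is for the application tier to hand out a short-lived signed URL pointing at a streaming server or CDN, so the large file never passes through the application itself.  The sketch below assumes a hypothetical delivery tier and signing scheme (CDN_BASE, SECRET, and the query-string format are all made up for illustration):

```python
# Sketch: the DAM application returns a signed delivery URL instead of
# streaming the bytes itself. CDN_BASE, SECRET, and the URL format are
# hypothetical; real CDNs each define their own signing scheme.

import hashlib
import hmac
import time

CDN_BASE = "https://cdn.example.com"   # assumed delivery tier
SECRET = b"shared-signing-key"         # assumed secret shared with the CDN

def make_delivery_url(asset_path, ttl_seconds=300):
    """Return a short-lived signed URL for one asset.

    Only this small token passes through the application tier; the
    100s-of-GB master file moves directly from storage/CDN to the user."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{asset_path}:{expires}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{CDN_BASE}{asset_path}?expires={expires}&sig={signature}"

url = make_delivery_url("/masters/spot-30s.mov")
print(url)
```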

Content Processing Technologies – legacy & emerging

Rich media content processes depend heavily on many 3rd-party technologies for everything from transformation, manipulation, delivery, and editing to compressing and producing.  The specific technology decision can be influenced by factors including legacy implementations, support for specific file types (sometimes even variations of file types generated by a specific program), and the limitations of other integrated technologies.  Realizing the application cannot possibly embed all of these technologies, the architecture should provide for:

  • Practical integration of 3rd-party technologies at key points within workflows. More and more, these points can be anywhere within the flow.
  • Support for legacy or proprietary technologies still in use today for content processing
    • It may not be possible to force the use of a particular content processing technology
  • Ability to integrate with emerging technologies
    • Both as libraries and SaaS model
  • Recognition that 3rd party technologies may not be platform independent
  • Support of atomic transactions across multiple technologies, HW, systems
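The integration requirements above are often met with an adapter pattern: each 3rd-party technology – legacy CLI, proprietary library, or emerging SaaS – sits behind a common interface, and a registry lets the workflow engine pick one at any point in the flow.  This is a hedged sketch only; the class and registry names are invented for illustration:

```python
# Sketch of pluggable 3rd-party content-processing adapters behind a
# common interface. All names (Transcoder, REGISTRY, etc.) are illustrative.

from abc import ABC, abstractmethod

class Transcoder(ABC):
    """Common interface every integrated technology must satisfy."""

    @abstractmethod
    def transcode(self, source_path, target_format):
        ...

class LegacyCliTranscoder(Transcoder):
    """Wraps an older command-line tool still in use for some formats."""
    def transcode(self, source_path, target_format):
        # In practice this would shell out to the legacy binary.
        return f"{source_path}.{target_format} (via legacy CLI)"

class SaasTranscoder(Transcoder):
    """Wraps an emerging hosted transcoding service."""
    def transcode(self, source_path, target_format):
        # In practice this would call the vendor's HTTP API.
        return f"{source_path}.{target_format} (via SaaS API)"

# The workflow engine selects an adapter per target format, at any
# point in the flow, without the rest of the application caring which
# underlying technology does the work.
REGISTRY = {
    "flv": LegacyCliTranscoder(),
    "mp4": SaasTranscoder(),
}

def transcode_step(source_path, target_format):
    return REGISTRY[target_format].transcode(source_path, target_format)

print(transcode_step("master.mov", "mp4"))  # routed to the SaaS adapter
```

Because the interface is the only contract, a platform-dependent or proprietary tool can be swapped in per deployment without touching the workflows that call it.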

Workflows – often established and complex

The issues detailed above have led to many creative and often custom solutions.  Given that the resulting content often has high value to an organization’s business (recall that rich media often is the monetized product), these processes become established and relied upon across the organizations within a business (i.e. creative, legal, distribution, archiving).  In order to best support customers’ needs, expectations, and initial roll-outs, the system architecture should provide for the following requirements:

  • Support for modeling long-established workflows already in operation. This requires a high degree of flexibility, as legacy workflows may have originated as custom code with essentially unlimited capability.
  • Ability to incorporate into workflows the new “services” coming to market around rich media features.
  • Often multiple “media renditions” exist with different workflow/tool required for delivery
    • For example – FPO in the print world and low-res vs hi-res video proxies
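The multiple-renditions point can be made concrete with a small sketch: one logical asset carries several renditions, and the delivery step in a workflow selects the right one for the purpose at hand.  The data, field names, and function below are hypothetical examples, not any product's schema:

```python
# Sketch: one logical asset, several renditions, and a purpose-driven
# selection step. All names and data here are illustrative only.

ASSET_RENDITIONS = {
    "asset-42": {
        "fpo": "asset-42_fpo.jpg",        # For Position Only, print layout
        "lowres": "asset-42_proxy.mp4",   # low-res proxy for review/editing
        "hires": "asset-42_master.mov",   # hi-res master for final delivery
    }
}

def rendition_for(asset_id, purpose):
    """Pick the rendition a downstream workflow/tool should receive."""
    purpose_map = {"layout": "fpo", "review": "lowres", "broadcast": "hires"}
    return ASSET_RENDITIONS[asset_id][purpose_map[purpose]]

print(rendition_for("asset-42", "review"))  # asset-42_proxy.mp4
```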

Search is interactive

Originally, search meant taking a word or phrase and matching it against an index for results.  While this works especially well for text-based assets, in the media world you are often looking for the emotional connection of “did I find the right asset for the right purpose?”  Finding the right result requires a more interactive process than simply matching a word to an index – it is about the process of searching, understanding and refining a set of results for which I have permissions.  Architecture requirements to support this may include:

  • Low latency on search operations. Often users will need to leverage search as they are working with assets, for example to review changes. In addition they will be using the search process in a very dynamic manner to identify the right asset.
  • Support for dynamic structured metadata – while structure is important, it will change.
  • Relationships are critical – often the find process involves understanding how an asset was previously used and its relationship to other assets.
  • Ability to interact with your search in real time via concepts such as narrowing, filtering, clustering, etc.
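The narrowing-and-filtering interaction in the last point can be sketched as successive refinements over a result set, with facet counts guiding the next step.  The data, field names, and helper functions below are illustrative assumptions only:

```python
# Sketch of interactive search refinement: each facet selection narrows
# the current result set, and counts guide the next refinement.
# Result data and field names are illustrative only.

RESULTS = [
    {"id": 1, "type": "video", "campaign": "spring", "usage": "web"},
    {"id": 2, "type": "image", "campaign": "spring", "usage": "print"},
    {"id": 3, "type": "video", "campaign": "fall",   "usage": "web"},
]

def narrow(results, **facets):
    """Keep only results matching every selected facet value."""
    return [r for r in results
            if all(r.get(field) == value for field, value in facets.items())]

def facet_counts(results, field):
    """Counts shown beside each facet value to guide the next refinement."""
    counts = {}
    for r in results:
        counts[r[field]] = counts.get(r[field], 0) + 1
    return counts

step1 = narrow(RESULTS, type="video")        # 2 results remain
print(facet_counts(step1, "campaign"))       # {'spring': 1, 'fall': 1}
step2 = narrow(step1, campaign="spring")     # 1 result remains
print([r["id"] for r in step2])              # [1]
```

In a real system the refinement would run against the index with permissions applied, not an in-memory list, but the interaction model – filter, show counts, filter again – is the same.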

Although video gets a great deal of attention, there are equally exciting advances occurring around publishing – everything from Print on Demand (POD) to electronic readers and accessibility.  These technologies look to be rapidly advancing what “publishing” means, its availability to writers and readers, and the opportunities for new advances.  As traditional revenue streams change, folks are forced to innovate, bringing new ideas to how we produce and consume published media!

Recently, while doing some unrelated research, I came across a neat technology called Scribd.  While not new, it was new to me and quite interesting.  I have also been watching another technology from Amazon called Kindle.  With this I wanted to list out some interesting publishing technologies I am aware of and hopefully get comments on others.

This post is not about traditional publishing, nor is it meant to be an exhaustive list of eBook-type options.  Instead it is meant to highlight some technologies I have recently come across and to seek input from others.  If you haven’t already, I encourage you to wander the web and look at what’s new in publishing.

  • Scribd – a service that provides a “place where you publish, discover and discuss original writings and documents.”  This service allows you to upload in multiple formats (like Word, PDF, PPT) and then publish in their iPaper format, which can be embedded into web sites, blogs, etc.  They provide sharing and community-type features.
  • Amazon Kindle – provides both a device and service for delivery of published media from Amazon.com.  A convenient aspect of the Kindle solution is the Kindle connects to the Amazon service using cell phone technology – so no need to sync via your PC.
  • Sony Reader Digital Book – Sony provides an eBook reader and store for eBooks.
  • LuLu – a “digital marketplace” providing a service that “eliminates traditional entry barriers to publishing, and enables content creators and owners – authors and educators, videographers and musicians, businesses and nonprofits, professionals and amateurs – to bring their work directly to their audience.”
  • Publish2 – provides a “free service for journalists and newsrooms to save, share, and publish links to the best content on the web.”  In addition, the inventors of this technology host the Publishing 2.0 blog discussing how “technology is transforming media, news, and journalism.”

This is a great time, with many new (and some not so new) video-related technologies coming to market.  Many of these are service-based and cover everything from capture, management, editing, and tagging to distribution and monetization of video.

In doing research recently, I wanted to at least capture and list the technologies I have come across that sounded interesting.  Considering this is a big space, these are only a drop in the bucket – so please comment with more!

This list does not imply endorsement nor confirmation of these products’ capabilities.  Considering the sophistication of these products, you should see their web sites for complete product information.

  • Omnisio – provided the ability to create your own video applications by editing and mixing with non-video elements such as onscreen comments and slide synchronization.  They were acquired by Google last summer and merged into YouTube, but I can only find the text annotation capability online now.
  • Jumpcut – a consumer- and community-targeted service for online editing, remixing, and publishing of videos and images
  • JayCut – another consumer and community targeted service for online editing, remixing, and publishing of videos and images
  • GorillaSpot – their SpotMixer platform provides a turnkey solution to allow for “user-generated generation” of video to share via email and social networking sites.
  • Multicast –  providing a service for delivery and monetization of live and on-demand video content
  • Pathfire – from their site:

…provider of digital media distribution and management solutions to the television, media and entertainment industries. The Pathfire solution—which includes a robust distribution network, flexible hardware solutions and innovative software applications—delivers unprecedented control for both content providers and stations…

 The company’s proprietary computer-vision based video indexing, search and interpretation algorithms empower content owners and publishers to efficiently monetize their digital video content, and advertisers to automatically target ads to thematically relevant video content.

  • Spinvox – voice to text service
  • Inlet Technologies – provide encoding, transcoding, and streaming solutions & services which “enables new media for new networks”
  • Ooyala – video platform providing delivery, analytics, syndication, advertising, interactive video features.
  • Blinkx – early pioneer in video search who “uses a unique combination of patented conceptual search, speech recognition and video analysis software to efficiently, automatically and accurately find and qualify online video.”

Please comment and share new technologies you have seen and I will continue to post more I come across.