There has got to be a better way!  Having spent years building large business applications, I believe the time has come for the traditional monolithic application model to become a thing of the past and be replaced by a new concept I call a “mash-app” – much like a mash-up, but different.  First I’ll clarify what I mean by enterprise business applications, then consider the challenges this model creates and how they can be overcome with new architectures heavily influenced by SOA and Web 2.0.  Of course, this requires a change in mindset as to what constitutes an “application.”

Enterprise Applications:  They’re broken

The term Enterprise Application takes on various meanings, but for the sake of this discussion it means an application used to perform mission-critical business functions across the organizations of a business.  While the specific functions will vary from business to business, they generally include requirements around high availability, scalability, flexible and strong security, robustness of features, the ability to model a customer’s business processes, and the ability to integrate with other technologies already part of the business process.

As these applications have grown and become more complex, they have effectively started to bulge and crack at the seams.  Years of growth, acquisitions, and technology advances have pushed this type of software to the point where it is broken.  This software often falls victim to:

  • Lengthy time to market for new innovations – software that is behind the times.
  • Contamination of existing features when new features are added – over time, even the best intentions are hard to sustain as the product becomes unwieldy.
  • Costly upgrade processes which may include re-integrations – more features, more customizations, more integrations as these large installations create upgrade headaches.
  • Unusable interfaces – often evolved over years of adding new features on top of old, using features in ways not originally intended, etc.

Deb Lavoy shares her thoughts on this in a blog post titled “Enterprise software has 5 years to live.”  Many of these challenges are rooted in the traditional desire to produce a single “application”: a single interface, a single technology base, a single database, a single installation – all ideal, but at what cost in money and time?

Mash-App: An Enterprise Application architecture of the future

A mash-app is a concept where the traditional application – both UI and services – is sufficiently componentized that the final application is effectively a mash-up of those components, while still delivered as a packaged solution.  The components would be integrated to meet business and functional needs, with integration via services (web services) and user-interface components (URLs, JSON, web services).

One challenge here is a possible change in how we envision an “application.”  Unfortunately, we have grown accustomed to the Microsoft Office style of suite integration – one where everything looks almost identical and is so tightly integrated you might not know which technology is driving which features.  While this might be the holy grail of suite integration, it introduces a number of challenges which limit a software business’s ability to innovate.

Leveraging this new mash-app architecture, the larger application can be decomposed into more distinct service groups, allowing for:

  • True agile delivery of updates to those components without requiring complete reinstall, configuration, and re-integration of the complete application.
  • Diminished risk of cross-contamination when adding new features because features are more physically separated.
  • Efficient integration of 3rd-party technologies, which at best share only some underlying technology.  This is particularly important today, when roll-up acquisitions occur frequently to enhance and augment functionality through the innovation of start-ups and other vendors.

Mash-App:  simple example

Let’s suppose we have a document management application which leverages “users” for everything from access control to auditing and workflow.  The application generally provides a means for managing these users (even if fed from an external source such as LDAP) to maintain application data.

Leveraging this new model, we would define “User Services” as a component – both business services and user-interface services.  These services include the interfaces to define, manage, search for, and display details of user information, and would now effectively be a module.  When a different component of the application needs to interact with user services, e.g. to search for a user, it would “call” the search-for-user API (perhaps via URL), which would present the end-user with a search screen and the ability to select 1..N users.  Upon completion, the component would return control to the caller, providing the key identifying information.
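To make the flow above concrete, here is a minimal sketch of the “User Services” component.  All names here are hypothetical, and a real mash-app would likely expose these as URL/JSON services rather than in-process calls:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class UserRef:
    """The key identifying information returned to the calling component."""
    user_id: str
    display_name: str


class UserServices:
    """Hypothetical 'User Services' component: it owns the user data plus
    the APIs/screens for defining, managing, and searching users."""

    def __init__(self) -> None:
        self._users: dict[str, str] = {}  # user_id -> display_name

    def define_user(self, user_id: str, display_name: str) -> None:
        self._users[user_id] = display_name

    def search(self, query: str) -> list[UserRef]:
        """The 'search-for-user' API another component would call
        (in a real mash-app, perhaps a URL returning JSON)."""
        q = query.lower()
        return [UserRef(uid, name)
                for uid, name in self._users.items()
                if q in name.lower()]


# A workflow component "calls" user services instead of owning user data:
users = UserServices()
users.define_user("u1", "Ada Lovelace")
users.define_user("u2", "Alan Turing")
selected = users.search("ada")  # the end-user would pick 1..N of these
```

The point is the boundary: the caller only ever receives `UserRef` keys, so the vendor can replace the internals of user services without touching the callers.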

With this approach, the vendor could easily decide to enhance the user services and deliver an update of this component with limited/managed risk to other parts of the application.  I believe this approach could be extended to the other functional groups within the application.

Mash-App:  so what do you think?

Imagine entire applications built using this architecture.

While it might take time to achieve fully (unless building from scratch), a movement in this direction will allow for a more agile approach to software development and delivery.  New acquisitions will deliver value to customers more quickly.  And even delivering “software + services” through mash-ups becomes more achievable when the core application is designed with a similar architecture.

Digital media applications (sometimes called DAM or MAM) are designed to interact with digital media – video, images, audio – and face a number of challenges where the application architecture plays a significant role.  These challenges come from the sheer size of the files themselves, the number of supporting technologies, established workflows (many of which have evolved over years of managing content), and a rapidly evolving industry.  While this topic could fill volumes, my goal here is to highlight some of the notable requirements I have seen recently.

VLF – Very Large Files

Rich media files, whether high-res print-ready images or high-def video, are very large and often not well suited for direct interaction with end-users.  Considering that master files can be hundreds of GB in size, the architecture of the application should support a number of requirements, including:

  • Content delivery by separate application to help optimize movement (streaming servers, CDN, storage services)
  • Ensuring operations across processes minimize content movement
    • While still supporting the processes that do need to touch/process content
  • The pre-existence of a large “library” of content – see the point above about minimizing movement
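As a rough illustration of the “minimize movement” idea, components can pass very large files by reference and delegate actual delivery to a separate tier.  The names and the size threshold below are invented for the sketch:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ContentRef:
    """A reference to a very large file; the bytes themselves stay put."""
    asset_id: str
    storage_url: str   # e.g. a SAN path, object-store key, or CDN origin
    size_bytes: int


def plan_delivery(ref: ContentRef, threshold: int = 10**9) -> str:
    """Route large masters to a separate delivery tier (streaming server,
    CDN, storage service) instead of pulling bytes through the app."""
    return "cdn" if ref.size_bytes >= threshold else "app-download"


# A 250 GB master vs. a 40 MB proxy of the same asset:
master = ContentRef("a1", "s3://media/masters/a1.mov", 250 * 10**9)
proxy = ContentRef("a1-proxy", "s3://media/proxies/a1.mp4", 40 * 10**6)
```

Most operations (search, workflow routing, auditing) only ever handle `ContentRef` metadata; only the processes that genuinely must touch content resolve the reference.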

Content Processing Technologies – legacy & emerging

Rich media content processes depend heavily on 3rd-party technologies for everything from transformation, manipulation, and delivery to editing, compressing, and producing content.  The specific technology decision can be influenced by factors including legacy implementations, support for specific file types (sometimes even variations of file types generated by a specific program), limitations of other integrated technologies, and more.  Realizing the application cannot possibly embed all technologies, the architecture should provide for:

  • Practical integration of 3rd-party technologies at key points within workflows. More and more, these points can be anywhere within the flow.
  • Support for legacy or proprietary technologies still in use today for content processing
    • It may not be possible to force a change of content processing technology
  • Ability to integrate with emerging technologies
    • Both as libraries and in a SaaS model
  • Recognition that 3rd-party technologies may not be platform independent
  • Support for atomic transactions across multiple technologies, hardware, and systems

Workflows – often established and complex

The issues detailed above have led to many creative and often custom solutions.  Given that the resulting content often has high value to an organization’s business (recall that rich media is often the monetized product itself), these processes become established and relied upon across the organizations within a business (i.e. creative, legal, distribution, archiving).  In order to best support customers’ needs, expectations, and initial roll-outs, the system architecture should provide for the following requirements:

  • Support for modeling long-established workflows already in operation. This requires a high degree of flexibility, as legacy workflows may have originated as custom code with effectively unlimited capability.
  • Ability to incorporate into workflows the new “services” coming to market around rich media features.
  • Often multiple “media renditions” exist, with a different workflow/tool required for delivery
    • For example – FPO in the print world, and low-res vs. hi-res video proxies
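The multiple-renditions point can be made concrete with a small lookup: each workflow step requests the rendition suited to its purpose instead of always touching the master.  The table values here are illustrative only:

```python
# Map (domain, purpose) to the rendition a workflow step should touch.
RENDITIONS = {
    ("print", "layout"): "fpo",            # for-position-only placeholder
    ("print", "final"): "hires-master",
    ("video", "review"): "lowres-proxy",
    ("video", "delivery"): "hires-master",
}


def rendition_for(domain: str, purpose: str) -> str:
    """Each workflow step asks for the rendition suited to its purpose,
    rather than always pulling the (very large) master."""
    return RENDITIONS[(domain, purpose)]
```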

Search is interactive

Originally, search meant taking a word or phrase and matching it against an index to produce results.  While this works especially well for text-based assets, in the media world you are often looking for the emotional connection of “did I find the right asset for the right purpose?”  Finding that right result requires a more interactive process than simply matching a word to an index – it is about the process of searching, understanding, and refining a set of results for which I have permissions.  Architecture requirements to support this may include:

  • Low latency on search operations. Users often need to leverage search while working with assets, for example to review changes. In addition, they will use search in a very dynamic manner to identify the right asset.
  • Support for dynamic structured metadata – while structure is important, it will change.
  • Relationships are critical – often the find process involves understanding how an asset was previously used and its relationship to other assets.
  • Ability to interact with your search in real time via narrowing, filtering, clustering, etc.
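The narrow/filter/refine loop might look like this in miniature – an in-memory stand-in for a real search index, with invented facet names:

```python
# A toy asset catalog standing in for a real search index.
ASSETS = [
    {"id": "a1", "type": "image", "campaign": "spring"},
    {"id": "a2", "type": "video", "campaign": "spring"},
    {"id": "a3", "type": "image", "campaign": "fall"},
]


def narrow(results: list, **facets: str) -> list:
    """Refine the current result set by facet values; calling this
    repeatedly models the interactive narrow/filter loop."""
    return [a for a in results
            if all(a.get(k) == v for k, v in facets.items())]


step1 = narrow(ASSETS, campaign="spring")  # first refinement
step2 = narrow(step1, type="image")        # further narrowing
```

Each call refines the previous result set rather than re-running a blind keyword query, which is the interactive behavior described above (a production system would also filter by the user’s permissions at every step).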

Enterprise applications present interesting challenges to those tasked with defining their architecture and design.  In some cases the application solves a well-known problem in a better way; in other cases it solves a still-evolving problem.  In addition, more and more customers are seeking out-of-the-box (OOB) applications for their enterprise solutions, expecting to meet their needs with limited “customization.”  This begins a series of posts discussing the architecture of enterprise applications.

Regardless of the actual problem being solved, there are many non-functional requirements which should be accounted for in the system architecture.  When building these applications, the architecture will be a significant factor in the success of both the application and the organization building and supporting it, especially as it relates to the non-functional requirements.

Below are some of the non-functional requirements which I have found to be very important to customers and highly dependent upon a good architecture.

  • Flexibility.  Customers generally have a need for high degree of flexibility.  This will include the business rules embedded in the application, configurations, look & feel, etc.
  • Integrations.  Generally there is a need to connect the new application to existing tools and processes.   It is not unusual for these tools to be proprietary or be dated 3rd party applications.
  • Scalability.  Applications must scale to support large numbers of users.  Often these users can be dispersed around the globe and may include users from external companies.
  • Global.  This should not be confused with scalability as this reflects the reality that users are more and more spread across the world.  This requirement impacts how the system must interact with users.
  • Dependability.  Application must be dependable – both application availability and data integrity.  Users have come to depend on software and have a low tolerance for outages or data loss.  In some cases there are also regulatory rules at play.
  • Long Install Shelf Life.  An “Installation” will have a long shelf-life – customers cannot afford to re-install once a year.  It is not unusual to have enterprise applications in use for 3-5 years.
  • Long Development Shelf Life.  I have listed this separately because, while it sounds similar to the above, it is a distinct concern.  It relates to the reality that a product will be under development for many years, and modules of code may exist for 5-10 years before being re-written.  In addition, this code/architecture will often outlive several generations of engineers within an organization.

Can you think of others?   The next post will begin considering how these impact the system and ideas for meeting the long term needs.