Monolithic vs. Modular Software Reuse Libraries (Part II)

Posted on Monday 9 June 2008

In part I of this series, we discussed the benefits of the monolithic reuse library.  These benefits make it a very attractive solution in the early stages of a reuse library's evolution. Most of these benefits are a result of the fact that there is a single unit, which is easy to distribute, version control, and manage.
[Image: Monoliths]
However, a monolithic reuse library can quickly become a victim of its own success -- the more you use it (on various projects) and improve it (by adding new components and fixing bugs), the more effort maintaining and improving it consumes.  Let me explain:

The slow death of the monolithic reuse library

A monolithic reuse library is one where all components of the library are released at the same time as a single unit. There are two main problems with monolithic reuse libraries:
  • As the monolithic library grows, it becomes hard to distribute -- it consumes large amounts of disk space and network bandwidth
  • Users must choose a single version of the entire monolithic library for use in their projects -- it's all or nothing
These two issues inevitably drive users of the reuse library to do the following:
  • Users fork the reuse library -- Users make a copy of the library locally in their project and modify it.  They may copy only subsets of the reuse library, breaking the monolith.
  • Users stop upgrading to newer versions of the reuse library -- Users stick forever with an out-of-date, stagnant version of the reuse library, even though they might be able to take advantage of improvements made to some parts of it.  The cost of upgrading, given the risk of breaking working code, is not worth the possible benefits of the new features and fixes.
The end result is that the reuse library suffers from one of two fates:
  • Improvements to the library stop, because nobody ever upgrades to newer versions of the library.
  • Improvements to the library happen in project forks.  Either these changes never make their way back into the main reuse library, or they are copied between projects, adding tremendous uncertainty as to which versions of library components are actually in use.
Either way, once these fates have been realized, the effectiveness of the reuse library has plateaued.  The effort to use and contribute to the library begins to outweigh the benefit, and this is when the monolithic reuse library dies from neglect, frozen in time.

Enter the modular reuse library

The modular reuse library is one where several individual libraries are released independently, as separate and distinct units, each on its own schedule. One of the primary benefits of a modular reuse library is that users can choose which versions of each component they want, independently.  This provides the following benefits:
  • downloads are faster and less project disk space is consumed
  • upgrades can be done on a per-component basis, minimizing the risk of each upgrade and the amount of testing required to re-validate code
However, there are some challenges to having a modular reuse library that stem from the fact that there are several individual components:
  • there are more software products for library developers to manage -- each has its own life-cycle
  • there are more components for users to obtain -- these have to be found and downloaded, and if one component depends on other components, those have to be identified and obtained, too
The good news is that these challenges can be automated using software development tools.
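
To make this concrete, here's a minimal sketch -- in Python, since LabVIEW code is graphical and can't be shown as text -- of the core bookkeeping such a tool automates: walking a package index to find every component a project needs, directly or transitively.  The package index, names, and version numbers below are invented for illustration; this is not VIPM's actual data model or algorithm.

    # Hypothetical package index: name -> (version, list of dependencies).
    INDEX = {
        "oglib_error":  ("1.1", []),
        "oglib_array":  ("2.4", ["oglib_error"]),
        "oglib_string": ("1.7", ["oglib_array", "oglib_error"]),
    }

    def resolve(roots):
        """Return every package needed, direct or transitive, with its version."""
        needed, stack = {}, list(roots)
        while stack:
            name = stack.pop()
            if name in needed:
                continue                   # already resolved
            version, deps = INDEX[name]
            needed[name] = version         # each component keeps its own version
            stack.extend(deps)             # queue this package's own dependencies
        return needed

    # A project that asks only for oglib_string automatically gets
    # oglib_array and oglib_error too, each at its own independent version:
    print(resolve(["oglib_string"]))
    # -> {'oglib_string': '1.7', 'oglib_error': '1.1', 'oglib_array': '2.4'}

Note that upgrading one entry in the index (say, oglib_array) doesn't force any other component to change -- that per-component independence is exactly the benefit described above.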

VIPM 2.0, a modular reuse library development tool for LabVIEW

As you might know, VI Package Manager is a tool for installing the OpenG (modular) reuse libraries.  I'm happy to tell you that JKI has just announced the alpha program for VI Package Manager 2.0 and, in doing so, we have taken on the challenge of delivering the ultimate developer tool for software reuse in LabVIEW.  We want to help LabVIEW developers everywhere realize the true potential of reusing their existing software by hiding the complexity of managing multiple independent components.  Granted, these are bold ambitions.  In some upcoming articles, I'll discuss more about the challenges of developing a modular reuse library in LabVIEW and how VIPM 2.0 is going to help solve them.  Oh, and if you're facing challenges with software reuse in LabVIEW, please consider applying to our alpha program -- we could certainly use your feedback.

  1.  
    crelf
    June 10, 2008 | 7:34 pm

    Another great post, Jim (especially the Stonehenge reference :)

    I think you hit the issue on the head with two points in particular:

    Users fork the reuse library – not only do they make a copy of the library locally in their project and modify it, that copy often becomes their own personal reuse library in parallel with the main group’s monolithic library, and the individual begins to make improvements that aren’t rolled back into the shared library – this is damaging for two reasons: the component isn’t widely distributed so it’s of no benefit to the whole team, and the individual’s version doesn’t have the benefit of others’ experience for improvement.

    Users stop upgrading to newer versions of the reuse library – like the previous issue, not only does the individual miss out on the love, he/she also checks out of the reuse world and the team can’t benefit from their experience.

    That’s why monolithic libraries simply don’t work. Packaging individual VIs as their own components can have exactly the same issues – releasing every little component every time it’s updated can also lead to forking and to users ignoring upgrades – that’s why packages of components that share common functionality or apply to a common programming paradigm are the way to go.

    Another benefit of grouping common components is that you can better manage the release process, as well as distribute ownership of different packages across your team.

    As you can tell, reuse is something I’m pretty psyched about :D

    cheers,
    crelf

  2.  
    Jim Kring
    June 10, 2008 | 8:24 pm

    crelf,

    Thanks for the feedback and for adding your own perspective on some of the article’s points. I agree with you about the issues associated with treating individual VIs as atomic components. However, as the cost of revising, distributing, and testing a package goes to zero, I think that the decision factors about how to group VIs into packages are based on VI (inter)dependencies. But, that’s a topic for another article ;)
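
    To give a rough, text-based taste of what I mean (Python standing in for G code here, and the VI names and call graph below are made up), one simple heuristic is to cluster VIs that are transitively interdependent into the same package:

        from collections import defaultdict

        # Hypothetical caller -> callee relationships among reusable VIs
        CALLS = [
            ("Sort Array.vi",   "Swap Elements.vi"),
            ("Search Array.vi", "Swap Elements.vi"),
            ("Trim String.vi",  "Pad String.vi"),
        ]

        def propose_packages(calls):
            """Group VIs that are (transitively) interdependent into one package."""
            graph = defaultdict(set)
            for caller, callee in calls:
                graph[caller].add(callee)   # treat the dependency as undirected:
                graph[callee].add(caller)   # tightly coupled VIs belong together
            seen, packages = set(), []
            for vi in graph:
                if vi in seen:
                    continue
                group, stack = [], [vi]     # flood-fill one connected component
                while stack:
                    v = stack.pop()
                    if v in seen:
                        continue
                    seen.add(v)
                    group.append(v)
                    stack.extend(graph[v])
                packages.append(sorted(group))
            return packages

        print(propose_packages(CALLS))
        # -> [['Search Array.vi', 'Sort Array.vi', 'Swap Elements.vi'],
        #     ['Pad String.vi', 'Trim String.vi']]

    A real tool would weigh more than raw connectivity, of course, but the principle is the same: let the dependency structure draw the package boundaries.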

    PS – I’m excited that you’re psyched. Code reuse is an area of software that I really enjoy. I’ll try to keep the articles and tools coming.

  3.  
    Manoj C
    June 11, 2008 | 4:23 am

    Isn’t OOP designed for code reuse? Why not use LVOOP?

  4.  
    crelf
    June 11, 2008 | 6:27 am

    Jim Kring wrote: “I agree with you about the issues associated with treating individual VIs as atomic components. However, as the cost of revising, distributing, and testing a package goes to zero, I think that the decision factors about how to group VIs into packages are based on VI (inter)dependencies.”

    I couldn’t agree more. I don’t expect the cost of all those things to go to zero anytime soon, but the management of interdependencies is absolutely key in managing a medium-to-large reuse library, or any library that has more than just a couple of users.

  5.  
    crelf
    June 11, 2008 | 6:28 am

    Manoj C wrote: “Isn’t OOP designed for code reuse? Why not use LVOOP?”

    OOP is a programming paradigm that indeed facilitates better reuse in your applications, but it’s not a method of actively managing reuse.

  6.  
    Jim Kring
    June 11, 2008 | 9:27 am

    Manoj: crelf is right. OOP facilitates reusing code — similar to how VIs facilitate reusing code. In fact, OOP opens up reusing high-level components, such as user interfaces, coordinators, etc., which is not possible with VIs alone. Doing configuration management of classes (and instances of classes) is a slightly different and more complex problem; it is typically solved by using a dependency-injection framework that can dynamically resolve and bind run-time dependencies together to create systems of objects. This is very interesting stuff and something I’ll probably blog about in the future.
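
    As a bare-bones sketch of the dependency-injection idea (in Python rather than LVOOP, and the class names are invented for illustration), the key point is that objects never construct their own dependencies – a container resolves and wires them together at run time:

        class Logger:
            def log(self, msg):
                print("[log]", msg)

        class Coordinator:
            # Never constructs its own Logger; one is injected at run time.
            def __init__(self, logger):
                self.logger = logger

            def run(self):
                self.logger.log("coordinator running")

        class Container:
            """Resolves named bindings and wires objects together recursively."""
            def __init__(self):
                self.bindings = {}

            def bind(self, name, cls, deps=()):
                self.bindings[name] = (cls, deps)

            def resolve(self, name):
                cls, deps = self.bindings[name]
                # Build each dependency first, then inject them all
                return cls(*(self.resolve(d) for d in deps))

        container = Container()
        container.bind("logger", Logger)
        container.bind("coordinator", Coordinator, deps=("logger",))
        container.resolve("coordinator").run()   # -> [log] coordinator running

    Swapping in a different “logger” binding changes the behavior of the whole system without touching Coordinator – that’s the kind of run-time flexibility I mean.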

