How it Works in "Real Life"

Since these pages went up several years ago I've gotten many comments (mostly good) and questions from people who are interested in understanding how sponsored project development actually works. Here's how we do things at one of my larger clients. In my opinion it's fairly typical of how sponsored project management should be done. Their System Development Methodology differs from mine in many respects, but you'll see in this narrative that the big picture is the same.

Typically, the project starts with a request from the Business Unit (BU), in which they list all of their requirements for the system. IS then adds whatever technical requirements are needed to make it happen. These are bulleted in an executive summary and elaborated upon in the body of the request and design documents. Note that this encompasses Phase 1, Phase 2, and Phase 3 of my SDM.

Here's an important refinement that this client added to their version of the Business Requirements document: there is a clause stating that any change in requirements requires amendments to the design documentation and the agreement of the BU to delay the implementation date by at least one week, with associated costs. No agreement, no change. This stifles frivolous requirements and curbs "scope creep". With the adoption of a newer SDM for Rapid Application Development (RAD) they've unfortunately dropped this clause. In a RAD environment it's even more important to keep the original requirements in check: with an iterative development model it's far too easy to sneak new requirements into the system, losing sight of the original target. This is one of the biggest weaknesses of RAD.
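
If it helps to see the clause as a mechanism rather than legal text, here's a minimal sketch in Java of the rule it enforces. The class and method names are my own illustration, not anything from the client's documents.

    import java.time.LocalDate;

    // Illustrative model of the change-control clause (names invented): a
    // requirement change is refused without BU agreement, and accepting one
    // always pushes the implementation date out by at least one week.
    public class ChangeControl {

        private LocalDate implementationDate;

        public ChangeControl(LocalDate implementationDate) {
            this.implementationDate = implementationDate;
        }

        /** Applies a change and returns the new implementation date. */
        public LocalDate applyChange(String changedRequirement, boolean buAgreed, int weeksOfDelay) {
            if (!buAgreed) {
                throw new IllegalStateException("No agreement, no change: " + changedRequirement);
            }
            // The delay is at least one week, regardless of what was negotiated.
            implementationDate = implementationDate.plusWeeks(Math.max(1, weeksOfDelay));
            return implementationDate;
        }

        public LocalDate getImplementationDate() { return implementationDate; }
    }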

Back to business: we use the bulleted requirements I mentioned above to build a checklist that will be used both for user acceptance and for post-implementation review. There both is and isn't a pro forma for post-implementation review... meaning that it's custom-built for each project at the start and refined during design. Some criteria are added as a matter of form, but these typically deal with infrastructure issues that apply to any project (got enough bandwidth? too much? etc.).
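
To make the checklist idea concrete, here's a minimal sketch (in Java, with names I've invented purely for illustration): one entry per bulleted requirement, ticked once at user acceptance and again at post-implementation review.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: one entry per bulleted requirement, checked once
    // at user acceptance and again at post-implementation review.
    public class RequirementChecklist {

        public static class Item {
            public final String requirement;        // text of the bulleted requirement
            public boolean acceptedByUser = false;  // ticked at user acceptance
            public boolean metInProduction = false; // ticked at post-implementation review

            Item(String requirement) { this.requirement = requirement; }
        }

        private final List<Item> items = new ArrayList<Item>();

        public Item add(String requirement) {
            Item item = new Item(requirement);
            items.add(item);
            return item;
        }

        // True only when every requirement has passed the named phase.
        public boolean allAccepted() {
            for (Item i : items) if (!i.acceptedByUser) return false;
            return true;
        }

        public boolean allMetInProduction() {
            for (Item i : items) if (!i.metInProduction) return false;
            return true;
        }
    }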

At this point the system is designed. Since I typically act as the system designer, this starts with me. Using my understanding of the business process, the existing systems with which I'll have to interface, and some experience in creating a broad range of systems, I "sketch out" a high-level overview of what I want to accomplish and how I think it should be done, as well as how this will meet the requirements. I may actually draw up several alternative plans from which to choose. Then I typically get my team together and we look at it and ask the following questions:

  1. Is this practical? Can it be done? An error reporting system requiring an extra-sensory API is an extreme example of an impractical system.
  2. Is this cost-effective? In other words, is this worth doing? Is there a process improvement that we could use instead? Actually, the BU is supposed to answer this before handing us the project request, but they're not always aware of the effort involved in granting their requests.
  3. Has it been done before? Is there something else (commercial, previous in-house development, or Open Source) that we can build on?
  4. Is there a better way? I'm not infallible, and often team members know more about the pitfalls of implementation than I do. We may rough out the interfaces (sockets vs. RMI, for example) or the core technologies (DCOM vs. CORBA) at this time; a sketch of what I mean by roughing out an interface follows this list.
  5. What are the risks? Which elements of the rough design are going to require additional work, a learning curve, or the invention of new technologies?
  6. Does this really meet the requirements? Is it possible to do so? If not, which of the prioritized requirements can be delivered, and which cannot? If some cannot, why not?
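
Here's the level of detail I have in mind when I say "rough out the interfaces" in question 4: the module boundary gets pinned down as a plain Java interface before we commit to a transport. The service and method names below are invented for the example.

    // Illustrative only: the boundary is agreed on first; whether it ends up
    // running over raw sockets, RMI, or something else is decided later.
    public interface ErrorReportingService {

        /** Submits one error report and returns a tracking identifier. */
        String submitReport(String source, String severity, String description);

        /** Returns the current status ("OPEN", "CLOSED", ...) for a tracking id. */
        String statusOf(String trackingId);
    }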

Out of this review (and brainstorming) session we decide on a rough design. I come up with a cost/benefit analysis, polish the design up for the BU (and list several alternatives), and work with the Business Analyst to present it for the BU's approval. In presenting our solution, we honestly list the risks and whether all of the requirements are practical. The design at this point is specifically targeted at the BU. It's phrased in what I like to call Business Unit Markup Language (BUML). It's vitally important that the BU is not bogged down in implementation details. In reality this is a sales presentation. It's important for the BU to understand what's going to happen on their behalf and to have "mindshare" invested in the project. They must feel good about its funding... after all, they're paying for this stuff.

There are several possible outcomes from this meeting:

  1. The design can be approved. We can move forward.
  2. I can be asked to re-think the design. This is rare. Typically it happens when one of the alternative designs has some attractive features and I'm asked to flesh it out for consideration. If it happens, I simply re-do the Functional System Design until I get approval or rejection.
  3. The design can be rejected. In this event the game's over: I file the design (because it will come in handy later) and move on to the next project request.

Armed with BU approval, we begin the design in earnest. I start with the Functional System Design and flesh it out to become a Technical System Design. This defines the actual classes to be used and their interfaces. If I'm blessed with experienced programmer/analysts, I can hand the interface specifications to them and they can do their own module designs, which I approve. Otherwise I might have to do the module design as well. In either event we review the design before actual programming begins. I act as moderator and arbiter in negotiating the final design of the interfaces. This is much more effective and less prone to errors of omission than simply dictating an interface.
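
As an illustration of what gets handed over, a fragment of a Technical System Design might pin down an interface and its contract like this, leaving the module design (the implementing class) to the programmer/analyst. The names are made up for the example, not taken from any real design.

    /**
     * Hypothetical TSD fragment: the interface and its contract are fixed
     * here; the programmer/analyst supplies the module design behind it.
     */
    public interface CustomerLookup {

        /**
         * Returns the customer record for the given account number.
         * @throws CustomerNotFoundException if no such account exists
         */
        CustomerRecord findByAccount(String accountNumber) throws CustomerNotFoundException;
    }

    class CustomerRecord {
        String accountNumber;
        String name;
    }

    class CustomerNotFoundException extends Exception {
        CustomerNotFoundException(String message) { super(message); }
    }

    // The programmer/analyst's module design starts from a skeleton like this:
    class DatabaseCustomerLookup implements CustomerLookup {
        public CustomerRecord findByAccount(String accountNumber) throws CustomerNotFoundException {
            // ... lookup against the customer data store goes here ...
            throw new CustomerNotFoundException("Not implemented yet: " + accountNumber);
        }
    }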

The programmer/analysts can now start programming, and they're responsible for testing their modules before submitting them as complete. During construction I'm busy generating test plans for System Integration, Load Testing, and User Acceptance.
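
For instance, the kind of module-level test a programmer/analyst would submit alongside their code might look like this. JUnit 4 is assumed purely for illustration, and the test exercises the hypothetical checklist class sketched earlier.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Illustrative module test: the programmer/analyst shows their module
    // meets its contract before submitting it as complete.
    public class RequirementChecklistTest {

        @Test
        public void checklistIsNotAcceptedUntilEveryItemIsTicked() {
            RequirementChecklist checklist = new RequirementChecklist();
            RequirementChecklist.Item item = checklist.add("Nightly batch completes by 6 a.m.");

            assertFalse(checklist.allAccepted());   // nothing ticked yet
            item.acceptedByUser = true;
            assertTrue(checklist.allAccepted());    // every requirement accepted
        }
    }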

Also during the programming phase I create a plan for implementation. This involves working with Technical Services and other project managers to schedule the implementation. I create UML Deployment Diagrams, and work with the Help Desk to prepare them to support the new system. Included in the Implementation Plan is a plan to have developers on-hand (or on-call) in the event that problems are encountered, as well as a plan to roll back the changes in the event of unresolvable problems.

Now that the system is constructed, tested, and approved, it can be implemented according to the implementation plan. Post implementation review isn't scheduled until at least a month after implementation, but it could be as long as a fiscal quarter.  In the meantime the system is closely monitored, and statistics are gathered regarding usage, performance, errors and bugs, help desk tickets, and the like.
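
The statistics themselves don't need to be anything elaborate; a record along these lines (again, names invented for illustration) is enough to anchor the review.

    // Hypothetical sketch of the figures gathered between implementation and
    // the post-implementation review; the thresholds come from the checklist
    // built during design.
    public class PostImplementationStats {
        int dailyActiveUsers;        // usage
        long averageResponseMillis;  // performance
        int errorsLogged;            // errors and bugs
        int helpDeskTickets;         // support load

        /** A starting point for the review: did production meet expectations? */
        boolean withinExpectations(long expectedResponseMillis, int ticketThreshold) {
            return averageResponseMillis <= expectedResponseMillis
                    && helpDeskTickets <= ticketThreshold;
        }
    }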

At the post implementation review all of this information is trotted out, along with the checklist of requirements.  The question is asked, "Does this meet our requirements in production as well as we thought it would prior to implementation?"  If so, cool.  If not, we have a few options:

  1. We can go back to the original system (one technical requirement is a fail-back plan, but honestly, if that were to happen, it would have happened prior to review).
  2. We can use what we learned post-implementation to enhance the system. This happens most often, and it's one reason why a software package almost always has multiple revisions. (Another reason is "planned inadequacy", which pretty much describes any iterative process and much of open source development. This isn't meant to be pejorative; you're just limiting functionality for other gains, such as time-to-market or reliability.)
  3. We can just live with the limitations we've discovered.  This would typically be a prelude to 4.
  4. We can cut our losses and replace the system with something better.

An example of options 3 and 4 would be a Sales Force Automation package I recently replaced. The original was underpowered and kludged up to work with five times the number of users it was designed to accommodate. We knew it sucked, but it was better than nothing, and we were willing to live with it until something better was developed commercially. Basically, we gambled development time on the wager that "somebody" would come up with a solution. Eventually the wait paid off: we were able to replace it with a commercial product, modified to our specifications, without spending a lot of development money fixing the old system.

Now, if we come up with anything but success, we put together a list of "Lessons Learned" so that we don't make the same mistakes again. These are put in the project document folder in a shared directory and in an issues database shared by the IS department, and they're talked about in one of our monthly departmental meetings (with the entire department in attendance).

As to who conducts the reviews... here it's done by the project requestors, the IS developers, and by a Business Analyst who is tasked to be the intermediary (he translates "Geekspeak" into "Business Unit Markup Language" and vice versa). There is no outside agency involved (except perhaps in testing), because from experience we've learned that it's better to encourage honest assessment from those intimately involved than it is to bring in someone who is impartial but will simply miss key factors due to unfamiliarity with the project.

Phase 9. Post-Implementation Review Swim Lane Diagram


The informational content of this website is copyright 1997-2002 by David F. Leigh unless otherwise stated. Permission to distribute is granted under the terms of the GNU Free Documentation License.