Created 6/27/99 - Last Updated 8/13/00 - Jack Harich - Go Back
| This Mini Process version emphasizes "perceived value" and quality, an improvement over the previous version. It represents an attempt to move beyond OOAD-centric software development and on to customer-centric development. |

A New Perspective
The above is a condensed version of the full grid. First print out the full Mini Process Grid 2 Image, study it, and keep it in front of you right now. For those wanting better print quality or a starting point for a customized version, here's the original Visio file in WinZip format. It contains the full and condensed grids. The very large full image is omitted from this page for better page breaks.
In my continual search for better ways to do things, I've discovered that the first Mini Process is flawed. It does NOT emphasize quality and customer satisfaction enough. It is simplistically OOAD-centric. It requires tacking on quality and satisfaction, rather than taking an integrated approach. In practice, large projects using it suffer from far more quality problems than they should. (Small ones do fine.) It still has the great advantages of good OOAD, simplicity, and flexibility, but needs an additional version for folks getting serious about the total impact of process results.
Stepping back 10 miles, it looks like we need something like:
Software Development Drivers

| Dimension | Driver |
| --- | --- |
| Project | Risk driven |
| Content | Perceived Value driven |
| Structure | Architecture driven |
| Total Process | Customer interaction driven |
Content is what you're trying to put into whatever you're delivering to the customer. It is that certain deep something of actual value, not the packaging, the glitter of the GUI, or the extraneous stuff. For example, I try to make the documents on my website "content rich", staying away from razzle dazzle and impressive wording, striving for extreme conciseness, and just trying to deliver a simple, understandable, useful message.
Perceived Value is composed of the presence of things that allow users to accomplish what they want easily and well, the absence of defects, and the effect of price. It has nothing to do with how developers perceive things. For example, if a user doesn't use 80% of the features, they are perceiving a subset, and have their own valid view that may be very different from what you intended. Or if a product costs $2,000.00, the sales rep was sneaky, and the user isn't getting much done with it, the perceived value is low or negative. The whole idea of using a term like "Perceived Value" is to leave our engineering-bound mindset behind and enter the user's mindspace. This does wonders for what we are trying to accomplish.
We often hear from experts that "Software should be architecture driven." Well, after years of trying this out, it's a good start, but there is a higher level of abstraction. Software should be customer driven, though bad architecture remains software's biggest risk. Since software can be considered as content plus structure, each should have its own driver. This way we do not ignore the customer as much as the first Mini Process did. (Jack repents, does 50 hail James Goslings, promises never to be so engineer centric again, and shakes his finger at Grady Booch, who promoted "Software should be architecture driven". :-)
Total Process - This is a valid dimension that should not be mixed with the others. Imagine that in addition to your Mini Process for creating systems, you have a greater process for your organization from the customer interface viewpoint, since the customer is the main driver of most businesses. The Total Process needs to be very clear to customers and your staff, so everybody knows what to do. We do not present that Total Process here, but refer you to Systems Thinking literature. For starters see the classic "Four Days with Dr. Deming" by Latzko and Saunders, and the deep best seller "The Fifth Discipline" by Senge. Bear in mind that the Deming book is based on lectures given decades ago when manufacturing was dominant.
Also see Quality Software Management, Volume 1: "Systems Thinking", Gerald M. (Jerry) Weinberg, Dorset House Books, NY, 1-800-DH-BOOKS. This is perhaps the most readable entry-point into quality-minded process, from the viewpoint of a software development culture. Contributed by Ken Ritchie.
Process Steps
We discuss only what is not clear from the full grid. Please do not confuse the full grid with the condensed grid above. See the worksheets for training on the Concept, Analysis and Design steps.
Requirements Phase (Analysis)
Perceived Value Goal Setting - Here we differ from OOAD by not doing Use Cases first. We are moving to a higher level. What exactly is important to the customer? It's not clicking on this to do that. It's more like "I want rock solid reliability and the ability to install the whole thing in 15 minutes. Everything else is just gravy." The project that knows that, and is driven by that, will deliver that.
High internal value goals can be added to this, such as "Only pursue projects that can be delivered in less than 6 months". Note these are usually internal standards, and not listed with every project.
Risk Management - This includes risks related to Value Goals, customer satisfaction and project success.
Design Phase
Acceptance Design - Ahhhhhh, here we go with Integrated Quality Control. Notice how in Design and Implementation we do quality before functionality. Many have objected with, "How can we design a test when we don't know what we're testing yet?" Others say, "This is great. Looking at our detailed requirements, I can think from a quality viewpoint and design that viewpoint first, instead of the functionality viewpoint I've been stuck in for years".
Some projects can do some or all Exit Criteria sooner, in Requirements. The other Acceptance Design artifacts are the same as Functional Design, maybe more for manual tests.
Functional Design - This involves modeling the solution. Here we design the functionality the customer wants, which is what we will deliver. Acceptance artifacts are not usually delivered. Not only should the two never be mixed, but function should follow acceptance, just as "form follows function". The test is the function definition, and the functionality is the form it takes. And then there's the older saying, "Don't put the cart before the horse." The test is the horse that pulls the cart successfully to the customer. (Hmmmmm, getting preachy and subtle here. Hope we've made our point memorable.... :-)
This process is very iterative. In Acceptance Design you would first do high level design. For example, "We must test the following Messages into and out of parts A, B and C. The other parts are sub parts, and so the test can ignore them." Then Functional Design would design the parts. As this happens you would go back to Acceptance Design as you discovered each specific Message, and add it to the test design.
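The message-first iteration above can be sketched in code. This is a hypothetical illustration, not part of the original process: the `MessageBus`, `PartA`, and `PartB` names are invented, and the parts only echo the A/B/C example's structure — the acceptance test is written first, naming the messages that must flow, while ignoring subparts.

```python
class MessageBus:
    """Records every message so acceptance tests can inspect the traffic."""
    def __init__(self):
        self.log = []  # (sender, receiver, message) tuples

    def send(self, sender, receiver, message):
        self.log.append((sender, receiver, message))
        return receiver.receive(sender, message)

class PartA:
    name = "A"
    def __init__(self, bus):
        self.bus = bus
    def request_total(self, part_b):
        return self.bus.send(self, part_b, "get_total")

class PartB:
    name = "B"
    def receive(self, sender, message):
        if message == "get_total":
            return 42  # Functional Design fills in the real logic later
        raise ValueError(message)

# Acceptance test, designed FIRST: it specifies the messages that must
# flow between parts A and B, ignoring B's internal subparts entirely.
def test_a_asks_b_for_total():
    bus = MessageBus()
    a, b = PartA(bus), PartB()
    assert a.request_total(b) == 42
    assert [(s.name, r.name, m) for s, r, m in bus.log] == [("A", "B", "get_total")]

test_a_asks_b_for_total()
```

As Functional Design discovers each new message, a matching assertion is added to the test, so the acceptance design grows in lockstep with the parts.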
Regarding "Maximize chance of meeting or exceeding requirements." As Miguel Serrano pointed out, this could lead to scope creep. The idea is to always strive to exceed customer expectations, not just meet them. This way if you fall a little short things are still fine. It also keeps you from doing just an average job or falling into a "Let's just get by" attitude. Naturally one should not go overboard with exceeding requirements, also called "gold plating". This is a standard risk.
Implementation Phase
Notice how in this phase Functional Implementation is sandwiched between Acceptance Implementation and Acceptance Testing. This is beautiful. It puts the programmer closer to writing for the test, a.k.a. writing for exactly what we have defined the user wants. Much more conservative code tends to flow when developers see things from this perspective. The defect rate plummets because of all this, and because feedback is nearly instant: the test is ready when the code is. Continual, instant testing is possible. Each iteration is fully tested, leaving the project in a continual state of readiness. If you think people love to develop to a clear spec, wait till you see how they just adore pouring out their creative soul to a clear test that's still warm....
Predictive Acceptance - The net result of Perceived Value Driven and Integrated Quality Control is that we can predict acceptance. We know how the customer will react, whether they will feel things are complete, etc., because they have approved the Value Goal, the rest of Requirements, and the Acceptance Design. Every iteration has been accepted (at least internally) as we go. The crunch that usually happens at the end of projects now happens up front throughout the project, and so at the end of the project we are in full proactive mode, not reactive.
Examples of Acceptance Tests are an automated/manual full regression test, a stress test, a usability test with the quality bar defined, and a simulated run of customer data. The full regression test is the most important. It is mostly automated, grows with the project, is run after each significant work effort or daily, and stays with the product for life.
Learning Phase
Let me tell you a secret. The most important thing is what you learn, not what you do. Those in the Information Age spend perhaps 90% of their time thinking and learning, and the rest doing. For example, the average developer produces 10 to 30 lines of code a day. This takes only 5 minutes to type in. The rest is thinking and learning. The cycle is learn, think, do. Therefore the better our learning capability, the better our thinking and doing are.
Moving our abstraction up to projects, the most important thing is to learn from each project cycle. Thus the last step is Lessons Learned, also known as Project Post Mortem. This is one way continuous improvement happens in the large. (The term "Lessons Learned" was suggested by Lou Deluca as more positive than Project Post Mortem.)
List of Common Important Perceived Values
Here are some generic ones:
- Rock solid reliability
- Easy and fast installation
- Ease of learning
- High likeability
- Speed of acquisition
- Ease of acquisition
- Low but not lowest price
- Sufficiently rich feature set
- Effectively gets the job done
- High performance
- Bulletproof security
- Great support
- Clear vision for future versions
- Painless upgrades
And here are some domain specific examples to give you some ideas:
- Get to any window in 3 clicks
- Logon and other utilities instantly available from any window
- Full driver set for all major databases
- Ability to enter a new patient in 3 minutes flat
- Fully and easily configurable for all our departments
- Totally compatible with Photoshop 5.0 files
List of Common Project Risks
These are mostly gleaned from my actual past project plans. Please send in your favorites! :-)
Note that your risks can be divided into external and internal. External risks are undesirable events whose source lies outside the project team. They are usually more difficult to resolve and have higher impact than internal risks, and so should be given higher priority.
External Risks
- Product poorly received by potential customers
- Product does poorly when customers try to use it
- Wrong requirements
- Unstable requirements
- Pressure from (pick a favorite) is distorting this project
- Dependence on subcontractors
- The (pick a favorite) project gets shortchanged because of this project
- A key technology may not work
Internal Risks
- Bad or distracted Project Management
- Schedule slippage
- Bad architecture
- Scope creep
- Infeasibility
- Insufficient team member skills
- A key resource is overextended
- Program Manager and Project Manager are not separated
- Performance
- Deadlocks
- High defect rate
- Rate of change exceeds ability to cope with it
- Project continues along rocky path to average or worse conclusion
- Too ambitious a design
- Second System Effect (see "The Mythical Man Month" by Fred Brooks)
- Many team members remote
- Incompatible team members
- Lack of good collaboration and teamwork (team has not gelled)
- Gold plating (surpassing requirements way too much, often in unneeded areas)
And we have these solutions from the book "Software Runaways" by Robert Glass. Read this if you are not yet very risk driven. It's about the 16 biggest and worst software failures of all time. This is the book that scared me into risk management. On page 245, a study of companies that had suffered from runaways proposed the following solutions. The percentage is the share of respondents who suggested that remedy. The list of solutions for preventing runaways is short but astounding.
- Improved project management - 86%
- Feasibility study - 84%
- More user involvement - 68%
- More external advice - 56%
List of Common Traits
Good news. These are identical to the generic Important Perceived Values above. The difference is they're not crucial to the customer. (I may be missing something here. We shall see what experience brings. A little tired now....)
List of Common Acceptance Exit Criteria
"Exit Criteria" is used to determine if a step is done. "Acceptance Exit Criteria" determines if the project's final product is done and ready to give to the customer. Examples are:
- Number of bugs discovered per hour of testing.
- Usability Test receives a 98% rating.
- Full Regression Test passes when run by 3 different test managers.
- Beta Test Program is discovering fewer than 1 bug per 10 days.
- Live Data Special Test shows that customer's data behaves the same in the old and new systems.
- Graph of bug discovery rate reaches an obvious asymptote.
- Requirements to Implementation Mapping shows 100% done well.
- Performance Test shows 35% improvement over previous product version.
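Several of these criteria can be checked mechanically at the end of each iteration. A toy sketch using the thresholds from the examples above; the function name and the choice of a one-week bug window are assumptions for illustration, not part of the process.

```python
def criteria_met(usability_rating, new_bugs_last_week, perf_gain_pct):
    """Return True only when every acceptance exit criterion passes."""
    checks = [
        usability_rating >= 98,    # usability test quality bar
        new_bugs_last_week <= 1,   # discovery rate near its asymptote
        perf_gain_pct >= 35,       # 35% faster than the previous version
    ]
    return all(checks)

criteria_met(98.5, 0, 40)   # True: ready to give to the customer
criteria_met(98.5, 7, 40)   # False: bug discovery rate still climbing
```

The point is that exit criteria are binary and checkable, which is what makes Predictive Acceptance possible: the project knows, rather than hopes, that it is done.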
In ISO 9000, Quality is often defined as "that which satisfies customers".
"Quality is an emergent property of having a suitable and appropriately detailed vision at the start."
Contributed by Steve Alexander