A Plan Might Help
by Matthew Cramer
“Cheshire Puss, …Would you tell me, please, which way I ought to go from here?” “That depends a good deal on where you want to get to,” said the Cat. “I don’t much care where—” said Alice. “Then it doesn’t matter which way you go,” said the Cat. “—so long as I get somewhere,” Alice added as an explanation. “Oh, you’re sure to do that,” said the Cat, “if you only walk long enough.” (Carroll, Lewis. Alice’s Adventures in Wonderland. Chapter VI)
All of us have hopes and dreams that come and go throughout our lives: career, places we want to visit, who we want to marry, and many more. Some are just dreams and pleasant what-if’s we entertain. Others, we pursue as specific objectives we will commit resources — time, talent and money — to achieve. It’s when we define an objective that plans and planning issues arise.
Plans come in many shapes and sizes in our personal lives, business, church and government. There are battle plans, wedding plans, long-range plans, annual plans, marketing plans, development plans, vacation plans, evangelization plans, and so on. Each of these has a unique flavor and special considerations. But all of them share some key features that can make or break a successful plan; these form the content of this posting.
When a seemingly uncomplicated, straightforward project begins, managers are frequently tempted to skip planning and just start work, shooting from the hip.
Several years ago, I accepted a consulting assignment to assess the operations of a sand and gravel company in the West. The site manager, whose background was principally in marketing and sales, had just won a large contract for railroad track bedrock, and the owners wanted to make sure everything was ready for successful performance.
Railroad track bedrock is defined by a specification that requires a certain size, strength, and sharp edges so the rock will interlock to withstand vibration and pounding as the trains roll over the rails. The owner’s site had a large sand and gravel glacial deposit about twenty feet deep lying on top of the surrounding terrain. The rocks and boulders in the glacial deposit were strong enough (granite), but smooth due to grinding from glacial movement in eons past.
Raw material had to be processed through a rock crusher to break the larger rocks into smaller pieces with jagged edges, then sorted for the correct size. The site manager’s concept seemed simple enough: establish a mine face, remove material from the face of the mine with front-end loaders, process it through a rock crusher, then sort the railroad bedrock and load it into waiting rail cars.
As I reviewed the situation with the site manager, it became obvious he had no real plan to accomplish the work. All of the activities were new at the facility; hence he had no previous performance to guide his time estimates for each of the processing steps. He had not calculated the consumption of raw materials, and there were no contingencies for breakdowns and maintenance. It was not long before he ran into serious trouble.
The specially hardened steel plates, against which the rocks are thrown as the crusher spins, were very expensive and subject to frequent failure. The crusher broke down regularly and was inactive for long periods awaiting new plates.
Each time a load of material from the mine-face was delivered to the crusher, the face retreated slightly, so the next front-end loader had to travel farther to reach it. Over time, costs rose alarmingly for fuel, and more front-end loaders and drivers to keep up the pace traveling from the mine-face to the crusher and back.
The throughput was abysmal. Less than 10% of the mine-face material qualified as railroad bedrock after processing. Conveyors dropped the rest of it on the ground into piles of sand and gravel sorted by size and end-use (plaster, concrete, etc.). A few calculations revealed that, to complete the contract, the entire sand and gravel deposit would have to be picked up (over 5 acres’ worth), processed through the crusher, and then laid back down again in huge piles — enough to fill three football stadiums to the top. The contract losses would be staggering. Worse, there was little market demand to absorb the high tonnage of cast-off sand and gravel and recapture the loss in a reasonable amount of time.
After consultations with the owners, I arranged for the competitor in the original contract bidding to take over the contract. He was kind enough to only require that we compensate him for the difference between his higher bid and what the railroad would pay for the lower bid awarded to the site manager. The contract suffered a substantial loss, but the financial hemorrhaging was stopped far short of that projected to complete the contract.
The site manager didn’t know his competitor owned a limestone cliff outcropping. All the competitor had to do was drill holes in its face, dynamite several feet of thickness off of the face, then bring in portable sort screens and conveyors to select and ship the roadbed material. I asked the site manager how he had arrived at his bid price. He answered: “I knew what the competition was going to bid, based on earlier contracts. So I just figured we ought to be able to do it for 10% less.”
PLANNING’S BAD REP
At first blush, the desirability of quality planning and plans is a no-brainer. Even the Bible endorses good planning.
“For which of you, desiring to build a tower, does not first sit down and count the cost, whether he has enough to complete it? Otherwise, when he has laid a foundation, and is not able to finish, all who see it begin to mock him, saying, ‘This man began to build, and was not able to finish.’ Or what king, going to encounter another king in war, will not sit down first and take counsel whether he is able with ten thousand to meet him who comes against him with twenty thousand?” (Lk 14:28-31 RSV)
Yet, because of past difficulties or lack of proper training, many people approach planning with trepidation — some of it considerable. Here are four common assumptions that get in the way of a healthy attitude toward planning.
- Planning is a waste of time. You can usually complete the job in the time it takes to finish the plan.
- The time and mental effort required to plan will lead supporters to lose interest and enthusiasm, abandon the objective, or significantly reduce their support.
- Once established, the plan mandates specific actions in a rigid construct that eliminates flexibility in pursuit of the objective.
- A good plan must correctly predict the exact logic, cost, and schedule performance that will actually occur in pursuit of the objective.
All of these assumptions are false. Indeed, they are significant roadblocks to the many benefits of good planning.
The first and most significant benefit of planning is that it increases the certainty you will successfully accomplish your objective.
The playing fields of life are littered with the wreckage of failed attempts to achieve noble or otherwise enticing goals that were begun with great enthusiasm and little thought as to whether there was a clear path or sufficient resources to get there.
Impatience, idealism, the frontier spirit, and the lure of success are the principal enemies of planning. They conjure up seemingly supportive assumptions that are frequently false and lead us down the primrose path to failure. I am not averring here that we should insist on guaranteed success, or be significantly risk-averse. Indeed, whether simple or sophisticated, a well thought out plan often includes efforts to avoid or overcome obstacles; but more importantly, its existence demonstrates there is at least one sequence of steps by which we can get from Point A to Point B.
The second major benefit is that plans provide the yardstick and baseline with which we measure performance variances along the way.
Plans should never be taken as a rigid construct of activities that must be pursued, without change, to a bitter end. To the contrary, plans are usually out of date an hour before they are completed. Thus, once a plan has been established, its primary usefulness is to provide fixed reference points to check progress along the way.
Just because actual progress differs from the plan, doesn’t necessarily portend bad news. It could mean performance is better than anticipated and you will complete your objective early, or under budget. It could mean a new, better way to get there has been revealed that promises improved performance. It could mean an unanticipated obstacle has been encountered that mandates a major revision to the plan. And it could mean the task has become more difficult than originally estimated and more resources are necessary to complete on time.
Assessing, analyzing, and responding to performance variances is the purview of management. It constitutes the largest portion of management effort as work progresses toward the goal. But if there is no baseline, and no yardstick with which to measure in-process performance, management is blind and cannot implement necessary corrective action to ensure successful completion of the objective, let alone arrive on schedule and within budget.
Faced with the inability to assess the meaning of progress to date, management is left with an unproductive, knee-jerk, Chicken Little approach — i.e., run around excitedly yelling, “HURRY” to the team’s performers, as often as possible.
A third important benefit is that the planning process often inspires creative ways to accomplish the objective.
Each time we plan, we open our minds to consider alternatives and possibilities that we would otherwise not do in the press to accomplish our objectives. Once project execution has begun, the sheer weight of day-to-day activities leaves little time to pause and reflect on the bigger picture.
The planning process requires us to take time at the beginning, before we send the gladiators into the arena, and think about risk, obstacles, resources and other key issues. It is precisely in these moments that creativity and inspiration often influence our approach so it becomes easier, more erudite and effective.
DEMONSTRATED PERFORMANCE
Demonstrated performance is one of my favorite business concepts; it’s easy to understand, an invaluable tool in planning and other management tasks, and it works well in our personal lives. It means that a certain task (large or small) has been successfully accomplished before: you have the equipment and technical expertise; you know its definition, how to do it, what it cost, how long it took, how good the quality was, and what the risks were.
At first blush, the concept seems simplistic, a no-brainer. But you would be surprised at how little most organizations know about the performance parameters of the many tasks that make up the products and services they provide.
Years ago, as a new Program Manager in a large company, I considered it a treat to observe the management culture at work when I was invited to a high-level, proposal-pricing meeting for a very large, highly competitive program.
The meeting droned on with the usual discussions about the product specification, contract terms, competitive analysis, risks, cost and schedule estimates, and so on. Finally, we reached the climax — setting the price. The Vice President in charge asked the assembled group a crucial question: “Have we ever done this kind of thing before?” The proposal manager responded: “Yes, four years ago we conducted the XYZ Program. It was very similar in many ways, with a few differences.” The VP then asked: “Well, what did that program cost?” “We don’t know,” answered the proposal manager.
Much to my surprise, the VP simply said, “Oh, OK.” After some additional discussion, he set the bid price. I later learned firsthand that the management culture at this company did not endorse or nurture the concept of tracking demonstrated performance and making it available for future reference.
Some people ignore past performance in the mistaken notion that the past is the past; today is always new and different. In the creative sphere, at the broadest level, it is often alleged that no two tasks are alike or happen in the same way. Creative work requires inspiration, which cannot be corralled and induced on demand. What’s more, creativity and inspiration cannot be separated from the repetitive tasks, and often strike during them.
A composer gets a new idea for chord progressions or harmony amongst the instruments, while copying the score for his composition. A software designer is completing final trials for a new user interface, when he suddenly realizes there’s a crack in the back door of the core program that allows hackers access to the user’s data. A writer is polishing his prose one last time before mailing it to the publisher, when it dawns on him there is a major plot inconsistency halfway through.
Well, all of that is true about creative efforts, at least to a large extent. But if you analyze the tasks involved in any segment of creative work, it would surprise you how many of them are repetitive in scope and have consistently similar performance parameters from one job to another.
Conversely, in high volume production runs, where you would expect repetitive results, the performance parameters vary from lot to lot. They are never identical. There are cost and schedule allowances required to restart the run. Changes in the machinery and personnel will affect the learning curve performance.
What’s more, all estimates are really guesses, hopefully educated by experience in many cases, but guesses nonetheless. I call this the glandular component because it is based on memories of past performance (often unreliable), feelings and intuition — not documented fact.
So if task performances are never identically the same, and estimates are glandular anyway, why is demonstrated performance so important?
The answer is easy. When demonstrated performance is used, the glandular component is significantly reduced, thereby increasing credibility and reducing risk.
Let’s use a cost estimate to illustrate the point. Suppose a job is estimated to cost $10,000. Further assume that the error rate for glandular estimates is plus or minus 20%. If the entire estimate were glandular, the estimating error would be plus or minus $2,000, or a $4,000 range of uncertainty.
Now assume that demonstrated performance for a similar job exists, and estimates for the new job are constructed as follows: $9,000 demonstrated performance for the similar job, plus $1,000 glandular adjustment for differences with the new job. In this case, the estimating error would be plus or minus $200, or a $400 range of uncertainty: a tenfold reduction.
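The arithmetic above can be sketched in a few lines of Python. The 20% error rate is the illustrative assumption from the text, and the helper treats the demonstrated portion of an estimate as carrying negligible estimating error, as the example does:

```python
def uncertainty_range(glandular_portion, error_rate=0.20):
    """Return (plus/minus error, total range) for the glandular portion
    of an estimate; the demonstrated portion is assumed to carry
    negligible estimating error, per the example in the text."""
    error = glandular_portion * error_rate
    return error, 2 * error

# Fully glandular $10,000 estimate:
err, rng = uncertainty_range(10_000)   # -> (2000.0, 4000.0)

# $9,000 demonstrated plus a $1,000 glandular adjustment:
err2, rng2 = uncertainty_range(1_000)  # -> (200.0, 400.0)
```

Only the guessed portion contributes meaningfully to the uncertainty, which is why shrinking it shrinks the risk by the same factor.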
Most managers would trade their golf clubs for a significant reduction in risks associated with technical, cost, schedule and quality performance in their organizations. Nevertheless, it continues to amaze me how few of us track past performance in our personal and professional lives so as to reduce risk in new efforts.
START WITH CLEAR LOGIC
A plan’s logic is the sequence of interrelated tasks or steps that establish a clear, continuous path from the project’s beginning to completion. It may not be the only path, but it is the one chosen to be the plan’s foundation, and the path down which efforts will continue until circumstances dictate otherwise.
For simple plans, the flow of tasks or work will be serial (each task followed by another in sequence). When the job is more complex, parallel paths of workflow are usually developed for scheduling or risk reasons.
Meal preparation has many parallel paths of work flow so that everything arrives at the proper time, and at the right temperature for the guests. Teenage snacking plans (sometimes called predatory grazing) are usually serial and closed loop: Open refrigerator, remove item, nuke it, eat it, return to refrigerator.
Whether simple or complex, a plan’s logic uses the concept of demonstrated performance to root out high-risk areas where substantial confidence is lacking in technical or specification performance. All of the major tasks (program phases) at the highest or summary level are first defined in the required sequence for successful completion. Then sub-tier plans are developed for each major task.
The key questions for each task are: “Do we know how to do it, do it well, and do we have the required talent and equipment to do it?” This process is repeated in successively lower levels until any tasks without demonstrated performance are exposed, regardless of the level at which they occur.
Keep in mind that demonstrated performance for a task does not mean that the identical task has been accomplished before. It does mean that a similar task, or a group of lower-level tasks, has been successfully completed before, with a scope reasonably close to the task at hand.
Tasks that do not have substantial demonstrated performance are, by definition, truly uncharted areas — new to the organization, and very high risk. After all, if you’ve never done it before, or anything like it, the probability of complete success is rather low on the first try. A separate plan is developed for each of these high-risk tasks to define “experiments” necessary to establish demonstrated performance.
The experiments can be simple or very involved. To hold down cost and speed up the schedule, experiments can ignore many of the rules and demands of a formal project. But they must be sufficiently rigorous to establish demonstrated performance with confidence when they are completed.
After taking a new job downtown, my granddaughter laid out a plan to calculate what time to set her alarm in the morning — piece of cake, right? Well, maybe.
She already knew how long it took to rise, dress, get her daughter up and dressed, eat, and take her daughter to day care. The final step, “travel to work”, had no demonstrated performance because she had never worked downtown. She didn’t know how long it would take to drive downtown during rush hour, and whether she should take public transportation or drive.
So a few days before her job started, she drove downtown during rush hour, and took public transportation on another day. Conducting these “experiments” also gave her a chance to observe the dress code at her new job site and compare driving and parking costs with public transportation costs. There were significant differences and tradeoffs between the two alternatives. But she could now base her calculations and decisions on facts, rather than glandular urgings. She arrived on time her first day at work.
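Her calculation amounts to subtracting the summed task durations from the required arrival time. The durations below are hypothetical stand-ins; the commute figure is the one her “experiment” supplied with demonstrated performance:

```python
from datetime import datetime, timedelta

# Hypothetical durations in minutes; "commute" came from her trial runs.
tasks = {"rise and dress": 30, "child up and dressed": 25,
         "breakfast": 20, "day care drop-off": 15, "commute": 45}

start_time = datetime(2024, 1, 8, 9, 0)  # must be at work by 9:00
alarm = start_time - timedelta(minutes=sum(tasks.values()))
print(alarm.strftime("%H:%M"))  # 06:45
```

Every entry but the commute was already demonstrated performance; the trial runs converted the last glandular guess into a fact before the plan had to work.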
Because of constantly changing technology, the aerospace industry has to manage programs with a considerable lack of demonstrated performance. Experiments are often begun in laboratory settings, followed by small-scale prototypes, then full-scale development, qualification and field trials, before production and deployment. Each of these program phases is designed to develop increasing demonstrated performance in a controlled way that will keep the cost and performance risk low when production and deployment occurs.
Huge cost overruns and major schedule delays on some development programs have given them a bad reputation. Nevertheless, in aerospace, a good rule of thumb is that development is about 20% of a major program’s lifetime cost. The other 80% is for production, deployment, operations, maintenance and logistics.
The B-52 bomber program began in 1946. First flight was in 1952 and the aircraft became operational in 1955. Still operational, these aircraft have been upgraded and modified many times since their original production. They are now more than 50 years old — much older than the crews that fly them. Seen in that light, early overruns in developmental “experiments” to acquire demonstrated performance can be a small expense to protect the huge investments that follow.
NEXT ARE THE ESTIMATES
Whether you’re making a vacation plan, a household budget or a multi-million dollar project plan, estimates for cost (resources) and schedule (elapsed time) are usually required. Here again, demonstrated performance plays a significant role. The credibility of the estimates is vastly improved when actual, past performance can be used to substantiate a large portion of the estimates.
The process is similar to that of logic development. If demonstrated performance is available, one has only to estimate the differences between past performance and the new task at hand. Keep in mind that the glandular component, or adjustment, can be positive or negative. There is nothing wrong with an adjustment to make something smaller than before; just as there is nothing wrong with an adjustment to make something larger. What matters are the applicability of past performance to the new task, and the size of the glandular adjustment.
When a task has demonstrated performance, additional sub-tier plans are not required for logic purposes, but may still be required for estimating, performance tracking, and display purposes.
Past performance might say, for example, that a very critical task will require three months to complete. A prudent manager could require the schedule broken down into monthly, or even weekly, well defined increments so he can keep a close eye on progress.
Let’s say the gearbox proposed for a new waterjet pump requires redesign to accommodate a significant increase in throughput horsepower. All of the internal shafts, carry-through bearings, gears, and the housing will only require minor tweaking. Design costs for these components would be the same as for the earlier unit, with minor adjustments.
But the main, output thrust bearing requires a complete redesign and development to handle the additional loads. The cost estimate for all of the design might consist of $2,000,000 demonstrated performance for the earlier unit, plus $1,000,000 glandular adjustment for the new thrust bearing’s development. That glandular component may only be one-third of the total estimate, but it’s still very large in its own right.
Given this situation, a prudent manager might want the new thrust bearing’s entire estimate, or the glandular adjustment itself, broken down into sub-tasks for design, hardware and test where actual experience from the details of a larger thrust bearing’s development could be adjusted (scaled down) to the new requirements.
THE RISK EFFECT
The use of demonstrated performance will reduce uncertainty and risk in the estimates and the logic. The smaller the glandular component is, the lower the risk. It follows then, as the size of the glandular component grows, so does the risk.
On a practical basis, the size of the glandular component is best kept at less than 25% of the total estimate (schedule or cost) for a given task. When the glandular component approaches 40-50% of an estimate, a “tail wagging the dog” situation emerges.
High percentage adjustments indicate a considerable lack of similarity between the demonstrated performance and the task at hand. At the 40-50% level, the estimator almost doubles (or cuts in half) the past performance to represent the new task.
This action tends to move his glandular assessment outside the arena of educated estimates, and approach the arena of wild guesses. The glandular component thus morphs into the equivalent of a high-risk task where the adjustment itself injects unacceptable risk of its own.
If possible, very large glandular components should be treated as high risk and broken down into smaller pieces where demonstrated performance at lower levels can be used. If lower level performance is not available, very large adjustments should be treated as “experiments” where demonstrated performance is acquired early, before commitment to higher-level tasks.
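The thresholds above reduce to a simple classifier. The 25% and 40% cutoffs come from the text; the dollar figures are the gearbox example from the previous section:

```python
def glandular_risk(demonstrated, glandular):
    """Classify an estimate by the share of glandular (guessed) content."""
    fraction = glandular / (demonstrated + glandular)
    if fraction < 0.25:
        return fraction, "acceptable"
    if fraction < 0.40:
        return fraction, "elevated: break into smaller pieces if possible"
    return fraction, "tail wagging the dog: treat as a high-risk experiment"

# Gearbox design: $2,000,000 demonstrated + $1,000,000 glandular adjustment
frac, verdict = glandular_risk(2_000_000, 1_000_000)
print(f"{frac:.0%} glandular: {verdict}")
```

At one-third glandular, the gearbox estimate lands in the middle band, which matches the text’s advice to break the thrust bearing’s adjustment into sub-tasks with demonstrated performance of their own.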
In sum, it’s always best to use demonstrated performance; but keep the size of the glandular adjustments as low as practical.
AUTOMATION AND PRESENTATION
We are blessed that modern computers and software programs are available to automate management of the database details and the processing required for plans and planning. The latest desktop computers can handle all but the very largest programs, and smaller projects can take advantage of these tools without incurring significant administrative costs to use them.
Gantt charting has been the tool of choice for small and medium-size projects since the early 1900s. Tasks and sub-tasks are listed down the left side of the chart. The relationships of the various tasks to one another are indicated by indentation and alphanumeric notation, similar to that used in a legal document or the outline for a talk or publication. The organized task list is called a Work Breakdown Structure (WBS).
There is a timeline across the top of the chart, and a vertical line that changes with each publication indicating the “as of” or “time now” date for the notations. Open bars opposite each item in the WBS indicate the start and duration of the various tasks; they are filled in to show progress, ahead or behind. An open triangle at the end of the bar shows the scheduled completion.
Task completions are indicated by a filled (closed) triangle located under the actual completion date in the timeline. Open and closed diamonds are used to indicate potential and actual slippages and reschedules. Other, unique notations have been adopted by different genres such as the Department of Defense, NASA, and construction.
Gantt charts work very well for small to medium-size programs where the interactions amongst the various parallel paths can be handled easily by the scheduler’s memory. But when programs are complicated, with a large number of tasks spread over many pages, it is difficult for the managers and schedulers to figure out how the lack of progress for a task on, say, page 6 affects tasks on the other 30 pages.
Keeping Gantt charts up to date and carefully analyzed for a large, complicated program requires significant administrative cost for schedulers and analysts.
In the late 1950s, the task-oriented Critical Path Method (CPM) of automating the relationships amongst various tasks was adopted by DOD and NASA. The method goes by other names as well: the Precedence Diagramming Method (PDM) and the more generic-sounding “schedule networking”. Nevertheless, they all use the same basic concepts and are used almost exclusively throughout industry today for large or complex projects.
Tasks are shown as boxes that contain an identifier (usually a WBS code for report organization), and the elapsed time to accomplish the task. Lines connecting the boxes, called dependencies, represent the relationships of each task to the others. For example: A line from the end of Task A to the beginning of Task B means that Task B cannot start until Task A has been completed. This “finish to start” (FS) dependency is the most common one, but there are others for more sophisticated situations.
Because the CPM automates all of the dependencies amongst the various tasks, it is able to calculate the elapsed time for each path from project beginning to completion. The longest path is called the critical path. Tasks on this path require the most management attention because any delays on this path will delay the program completion.
Tasks on shorter paths are identified by “slack” or “float”. Float can be used to identify from where resources can be diverted to protect the critical path. These decisions are not automatic, but the data gives management considerably more information about where problems lie, what their possible consequences might be, and what resources are available for corrective action.
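A minimal sketch of the forward/backward pass behind these calculations, using hypothetical tasks and only the common finish-to-start dependency; real scheduling tools handle the other dependency types and much larger networks:

```python
def critical_path(tasks):
    """tasks: {name: (duration, [predecessor names])}, listed in
    dependency order. Returns (project length, float per task);
    zero-float tasks form the critical path."""
    early = {}                              # earliest finish (forward pass)
    for name, (dur, preds) in tasks.items():
        early[name] = dur + max((early[p] for p in preds), default=0)
    project_end = max(early.values())
    late = {n: project_end for n in tasks}  # latest finish (backward pass)
    for name in reversed(list(tasks)):
        dur, preds = tasks[name]
        for p in preds:
            late[p] = min(late[p], late[name] - dur)
    floats = {n: late[n] - early[n] for n in tasks}
    return project_end, floats

tasks = {"design": (3, []), "order parts": (2, ["design"]),
         "build": (4, ["design"]), "test": (2, ["order parts", "build"])}
length, floats = critical_path(tasks)
print(length, floats)
# 9 {'design': 0, 'order parts': 2, 'build': 0, 'test': 0}
```

Here design, build, and test carry zero float and so form the critical path, while “order parts” has two units of slack: the kind of information that tells management where resources can safely be diverted from.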
A schedule network’s graphic display for a complicated program is a busy presentation of boxes and lines — an invaluable tool for the scheduler in charge of the network. But it is not suitable for management review. Fortunately, software that automates the data includes options for modified Gantt and specially sorted tabular displays that give management a quick and accurate picture of program status.
With the schedule established, it’s time to lay in the cost estimates or budget for the various tasks. There are many software programs available for this effort; some integrate the cost or budget data with the schedule detail in a single database. Besides a “straight line” or “level loaded” time phasing of each task’s total estimate, some software programs provide the capability to “front load” or “back load” the cost distribution over time.
The default reporting of cost performance is usually a simple graph of cumulative budget vs. actual expenditures, plotted across a project’s time line. The charts are published for the program total and selected sub-levels in the WBS. Unfortunately, while these charts may have some “quick look” value at the top level, they are of little use in the day-to-day management of a program.
Budgeted costs are time-phased in accordance with the project’s planned schedule. Actual costs are resources expended as the work actually occurs — whether on-plan, ahead or behind. To the extent actual performance differs significantly from the planned schedule, comparing budget vs. actual can quickly turn into an “apples and oranges” situation.
An underrun to date (apparent good news) could mean that a large task is running behind schedule (actually bad news). An overrun to date (apparent bad news) could mean that a large task is running ahead of schedule (actually good news). There are as well, offset possibilities where large overruns to date for tasks in progress (actually bad news), are offset by underruns for delays in the start of major tasks (more bad news), so that the net of both hides two problems.
In times past, the solution to these threats was to review lots of data, at a very low level of detail. However, for large or complex projects, substantial administrative costs are required to run and analyze the data, then summarize and escalate those findings through higher levels of management.
In the early 1960s, DOD and NASA adopted an automated version of a concept called Earned Value that had been used in industry since the early 1900s. Earned Value simply accumulates the originally budgeted value for completed tasks, along with a “percent complete” value for tasks in process.
Comparing the Earned Value of work performed to the Budgeted Costs to date, reveals whether the work actually performed is consistent with the original plan — hence, a schedule status. Comparing the Earned Value of work performed to the Actual Costs, reveals whether the work actually performed is consistent with their original estimates — hence, a cost status.
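The two comparisons reduce to simple subtractions. The abbreviations below (planned value PV, earned value EV, actual cost AC) are the standard Earned Value terms, applied here to hypothetical numbers:

```python
def earned_value_status(pv, ev, ac):
    """Schedule variance (EV - PV) and cost variance (EV - AC).
    Negative values mean behind schedule / over cost."""
    return {"schedule variance": ev - pv, "cost variance": ev - ac}

# Hypothetical mid-project snapshot, in dollars:
status = earned_value_status(pv=50_000, ev=42_000, ac=47_000)
print(status)  # {'schedule variance': -8000, 'cost variance': -5000}
```

In this snapshot, $42,000 of work has been performed against a $50,000 plan (behind schedule) at a cost of $47,000 (over cost): exactly the two signals the budget-vs-actual chart alone would have muddled together.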
The automated Earned Value concept provides faster and less expensive insight into the true status of a project. But it is not science, just a set of more accurate indicators. As they say, “the devil is in the details.” Four key issues come into play on a regular basis:
- Has the task really started so I can claim at least some percentage of completion?
- Has the task really been completed so I can claim the full value originally budgeted?
- On what basis do I establish a legitimate percent complete for in-work tasks?
- How are scope changes to be handled?
A comprehensive treatment of these issues is beyond the scope of this presentation but I think you get the picture.
Some software programs provide other “bells and whistles” such as Resource Leveling, and support for the highly structured Cost and Schedule Control System Criteria (C/SCSC) used by the DOD and NASA on large or complex programs. Their sophistication also goes far beyond the intent of this presentation so I only mention them in passing.
HINTS AND LOOKOUTS
- From the simple to the complex, there are always three elements of a plan that must be synchronized — the definition of the objective, the costs, and the schedule. Great care should be exercised to keep them coordinated. In large or complex programs, with many different actors involved, it’s very easy for the assumptions, definitions and ground rules for one element to drift apart from the other two.
- Planning efforts usually begin at program start and lay out the succeeding tasks (serial and parallel) leading to completion. Sometimes it is not clear which task should go next. When that happens, try precedence thinking. Move to the end of the program and work backwards asking: “What tasks must be completed before this one starts?” Thus, the logic can be worked from both ends towards the middle.
- Handoffs from one entity to another require special attention. They are classic breeding grounds for schedule problems.
Engineering approves the design and sends it to Document Release. But the Shop cannot start work on the part until the print is available to them. This processing time to release the design into circulation can take a few days to a week, and is often overlooked. Modern Computer-Aided Design (CAD) systems have done much to eliminate this specific bottleneck, but the point is still valid.
Parts have been received on schedule at the receiving dock, but their assembly cannot start until they have been logged in to Stores inventory, kitted and released to the Shop for processing. The supplier for an assembly on the Critical Path calls the buyer two days before it’s due, to announce it will be three weeks late. These are all typical schedule delays that surprise and frustrate the program team in tight situations.
Nevertheless, handoff delays are predictable yet frequently overlooked in the initial plan. They must be identified and dealt with up front: through careful definition of task completions, insertion of additional tasks that capture the real work often overlooked, and regular status reports from key outside sources.
- Beware of Creeping Comfort. The insertion of allowances for risk and uncertainty should occur at the higher levels, where proper consideration can be given to offsetting elements. Every estimator adds a little pad — in case he is slightly wrong, and because it’s better to underrun than overrun. When estimates are made at a very low level, or in considerable detail, estimator pad accumulates and Creeping Comfort will guarantee a high estimate.
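Compounding makes the point. If each estimating level quietly adds a modest pad on top of the level below it, the total inflates fast; the 10% pad and four levels here are illustrative assumptions, not figures from the text:

```python
def padded_estimate(base, pad_per_level=0.10, levels=4):
    """Each estimating level adds its own pad on top of the one below."""
    estimate = base
    for _ in range(levels):
        estimate *= 1 + pad_per_level
    return estimate

total = padded_estimate(100_000)
print(f"${total:,.0f}")  # four 10% pads turn $100,000 into about $146,410
```

No single estimator did anything unreasonable, yet the stack-up adds nearly half again to the true estimate, which is why the allowances belong at the higher levels where offsets can be considered.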
- Work the red lights first. When experiments and trials are necessary, schedule them early before heavy investments that repeat already demonstrated performance. Nothing is more frustrating than to accomplish a lot of easy work first (which always feels good), only to discover that the experimental efforts fail, and a lot of time and resources have been wasted.
THE BOTTOM LINE
A good plan is one that provides clear and impeccable answers to the following questions:
- How do I know I’m done?
- How do I know it’s done right?
- Along the way, can I tell if I’m going to finish ahead or behind schedule?
- Along the way, can I tell if I’m going to finish over or under budget?
Planning does require some discipline, organization and thought. But it need not become a six hundred pound gorilla that dominates your life. Make simple plans for simple projects, and more sophisticated plans for complex projects. Once completed, the plan will inspire confidence in project execution, and become an invaluable baseline with which to track performance and identify difficulties in time for corrective action.