Brooks’ Law: Adding Manpower to a Late Project Makes It Later

 

Brooks’ law is the observation that “adding manpower to a late software project makes it later”. Applied broadly, this principle means that, for many types of projects, adding more resources—especially more people—is often unhelpful and even counterproductive.

Brooks’ law has important implications for personal and organizational productivity, so it’s important to understand it. As such, in the following article you will learn more about Brooks’ law, and see how you can account for it in practice.

 

Examples of Brooks’ law

A classic example of Brooks’ law is a software project that falls behind schedule, leading management to allocate more developers to it. This causes further delays, both because of the time it takes to train the new developers, and because the tasks that were delaying the project can’t be divided among many people, so the new developers can’t help with them.

Another example of Brooks’ law is a project that could be handled quickly by a small team, but ends up being delayed because so many people are added to it unnecessarily that communication and decision-making become much slower.

In addition, a key aspect of the logic behind Brooks’ law is exemplified in the humorous adage that “nine women can’t make a baby in one month”, which highlights the limited divisibility of certain tasks, an issue that often limits the benefits of adding manpower to projects. This issue is also demonstrated in other sayings, such as “you get water by digging one well a hundred feet deep, not a hundred wells one foot deep”.

Finally, a notable example of Brooks’ law appears in the original book that proposed it:

“What does one do when an essential software project is behind schedule? Add manpower, naturally. As [the previous figures] suggest, this may or may not help.

Let us consider an example. Suppose a task is estimated at 12 man-months and assigned to three men for four months, and that there are measurable mileposts A, B, C, D, which are scheduled to fall at the end of each month…

Now suppose the first milepost is not reached until two months have elapsed… What are the alternatives facing the manager?

  1. Assume that the task must be done on time. Assume that only the first part of the task was misestimated… Then 9 man-months of effort remain, and two months, so 4½ men will be needed. Add 2 men to the 3 assigned.
  2. Assume that the task must be done on time. Assume that the whole estimate was uniformly low… Then 18 man-months of effort remain, and two months, so 9 men will be needed. Add 6 men to the 3 assigned.
  3. Reschedule. I like the advice given by P. Fagg, an experienced hardware engineer, ‘Take no small slips.’ That is, allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again.
  4. Trim the task. In practice this tends to happen anyway, once the team observes schedule slippage. Where the secondary costs of delay are very high, this is the only feasible action. The manager’s only alternatives are to trim it formally and carefully, to reschedule, or to watch the task get silently trimmed by hasty design and incomplete testing.

In the first two cases, insisting that the unaltered task be completed in four months is disastrous. Consider the regenerative effects, for example, for the first alternative…

The two new men, however competent and however quickly recruited, will require training in the task by one of the experienced men. If this takes a month, 3 man-months will have been devoted to work not in the original estimate. Furthermore, the task, originally partitioned three ways, must be repartitioned into five parts; hence some work already done will be lost, and system testing must be lengthened. So at the end of the third month, substantially more than 7 man-months of effort remain, and 5 trained people and one month are available… the product is just as late as if no one had been added…

To hope to get done in four months, considering only training time and not repartitioning and extra systems test, would require adding 4 men, not 2, at the end of the second month. To cover repartitioning and system test effects, one would have to add still other men. Now, however, one has at least a 7-man team, not a 3-man one; thus such aspects as team organization and task division are different in kind, not merely in degree.

Notice that by the end of the third month things look very black. The March 1 milestone has not been reached in spite of all the managerial effort. The temptation is very strong to repeat the cycle, adding yet more manpower. Therein lies madness.

The foregoing assumed that only the first milestone was misestimated. If on March 1 one makes the conservative assumption that the whole schedule was optimistic… one wants to add 6 men just to the original task. Calculation of the training, repartitioning, system testing effects is left as an exercise for the reader. Without a doubt, the regenerative disaster will yield a poorer product, later, than would rescheduling with the original three men, unaugmented.”

— From “The Mythical Man-Month: Essays on Software Engineering” by Fred Brooks

Note: a humorous statement that’s attributed to Fred Brooks and that’s associated with Brooks’ law is “what one programmer can do in one month, two programmers can do in two months”.
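The scheduling arithmetic in Brooks’ example above can be reproduced in a short sketch. This is a toy calculation for illustration, not code from the book:

```python
# Toy reproduction of the scheduling arithmetic in Brooks' example.
total_estimate = 12   # man-months, original estimate
team = 3              # men originally assigned
schedule = 4          # months originally scheduled

# After 2 months, only the first milestone (1 month of the plan) is done.
elapsed = 2
remaining_months = schedule - elapsed  # 2 months left

# Alternative 1: only the first part of the task was misestimated.
done = 3  # man-months of planned work actually completed (the first milestone)
remaining_work = total_estimate - done          # 9 man-months
men_needed = remaining_work / remaining_months  # 4.5 men
print(men_needed)  # 4.5 -> round up and add 2 men to the 3 assigned

# Alternative 2: the whole estimate was uniformly low (by a factor of 2,
# since the first milestone took 2 months instead of 1).
true_total = total_estimate * 2                 # 24 man-months
consumed = team * elapsed                       # 6 man-months already spent
remaining_work = true_total - consumed          # 18 man-months
men_needed = remaining_work / remaining_months  # 9 men
print(men_needed)  # 9.0 -> add 6 men to the 3 assigned
```

Note that, as the quoted passage explains, these raw numbers ignore training time, repartitioning, and extra system testing, which is exactly why the added men fail to rescue the schedule.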

 

Generalizing Brooks’ law

The original formulation of Brooks’ law states that adding manpower to late software projects delays them further. However, it’s possible to expand this concept in three main ways, to make it more generalizable, and therefore useful in a broader range of contexts. Specifically:

  • Brooks’ law can be applied to endeavors other than software projects. For example, it can apply to other types of business, academic, and hobby projects.
  • Brooks’ law can be applied to negative outcomes other than delays. For example, the addition of manpower to a software project might lead to a worse product or to increased conflicts between team members.
  • Brooks’ law can be applied to resources other than manpower. For example, spending more money on a certain project might not lead to it finishing faster, because spending more money doesn’t address the problems that are causing the delay.

Based on this, a generalized corollary of Brooks’ law can be formulated as follows:

“Adding more resources to a project is sometimes unhelpful or even counterproductive.”

Furthermore, note that even in cases where adding resources to a project is beneficial, there may be an issue of diminishing returns, whereby the more resources are added to the project, the less the additional resources help.

Finally, another way Brooks’ law can be modified is by changing the term “manpower” to “people”. This does not change the meaning of the law, since its original formulation uses the common meaning of the term “manpower”, which can be described as “the number of people working on something”, so it refers to all people—regardless of their gender—rather than just men. Nevertheless, saying “people” instead of “manpower” makes the adage more inclusive, and also slightly shorter and simpler, without detracting from it.

 

Rationale behind Brooks’ law

Brooks’ law is based on the idea that adding more people to a project doesn’t mean that it will be completed faster, due to various issues, such as:

  • Teaching time, which is the time that people who are already working on the project must spend in order to train newcomers. In the context of a software project, for example, this can involve things such as taking the time to explain the existing code, or training people on the specific technical tools that are used in the project.
  • Learning time, which is the time that it takes new people to become productive after joining the project.
  • Communication overhead, which is the time spent on communication rather than work, and which generally rises rapidly as people are added to a project. As noted by Fred Brooks in his book on the topic: “If there are n workers on a project, there are (n²-n)/2 interfaces across which there may be communication…”, which means that 2 people have 1 interface of communication between them, 3 people have 3 interfaces, 4 people have 6 interfaces, 5 people have 10 interfaces, 6 people have 15 interfaces, and 10 people have 45 interfaces.
  • Limited divisibility, which represents the limited ability to divide certain tasks into partitions that can be handled by separate people.
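The quadratic growth of communication interfaces described above can be verified with a few lines of code (a simple illustration of Brooks’ formula):

```python
def interfaces(n: int) -> int:
    """Pairwise communication channels among n workers: (n^2 - n) / 2."""
    return (n * n - n) // 2

for n in [2, 3, 4, 5, 6, 10]:
    print(n, interfaces(n))
# 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15, 10 -> 45
```

The key point is that the number of channels grows roughly with the square of the team size, while the number of workers grows only linearly.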

Such issues are discussed in detail in the original book on the topic:

“… when schedule slippage is recognized, the natural (and traditional) response is to add manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster…

[One] fallacious thought mode is expressed in the very unit of effort used in estimating and scheduling: the man-month. Cost does indeed vary as the product of the number of men and the number of months. Progress does not. Hence the man-month as a unit for measuring the size of a job is a dangerous and deceptive myth. It implies that men and months are interchangeable.

Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them… This is true of reaping wheat or picking cotton; it is not even approximately true of systems programming.

When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule… The bearing of a child takes nine months, no matter how many women are assigned. Many software tasks have this characteristic because of the sequential nature of debugging.

In tasks that can be partitioned but which require communication among the subtasks, the effort of communication must be added to the amount of work to be done. Therefore the best that can be done is somewhat poorer than an even trade of men for months…

The added burden of communication is made up of two parts, training and intercommunication. Each worker must be trained in the technology, the goals of the effort, the overall strategy, and the plan of work. This training cannot be partitioned, so this part of the added effort varies linearly with the number of workers.

Intercommunication is worse. If each part of the task must be separately coordinated with each other part, the effort increases as n(n-1)/2. Three workers require three times as much pairwise intercommunication as two; four require six times as much as two. If, moreover, there need to be conferences among three, four, etc., workers to resolve things jointly, matters get worse yet. The added effort of communicating may fully counteract the division of the original task…

Since software construction is inherently a systems effort—an exercise in complex interrelationships—communication effort is great, and it quickly dominates the decrease in individual task time brought about by partitioning. Adding more men then lengthens, not shortens, the schedule.”

— From “The Mythical Man-Month: Essays on Software Engineering” by Fred Brooks
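The dynamic Brooks describes—work divides among people, while training grows linearly and intercommunication grows quadratically—can be captured in a toy model. The coefficients below are arbitrary assumptions chosen for illustration, not values from the book:

```python
# Toy model of completion time under Brooks' reasoning:
# divisible work shrinks as people are added, but training overhead
# grows linearly and pairwise communication overhead grows quadratically.
# WORK, TRAIN, and COMM are made-up illustrative values.
WORK = 12.0   # man-months of divisible work
TRAIN = 0.4   # months of training overhead per worker
COMM = 0.08   # months of overhead per communication pair

def months_to_finish(n: int) -> float:
    overhead = TRAIN * n + COMM * n * (n - 1) / 2
    return WORK / n + overhead

for n in range(1, 11):
    print(n, round(months_to_finish(n), 2))
```

With these made-up coefficients, the schedule is shortest at around 4 people and lengthens again beyond that, which is the qualitative behavior Brooks’ law predicts: past some team size, adding people makes the project later.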

In addition, the issues of communication and organization have been explored in many other works on the topic. For example, as one paper notes:

“…the larger a group is, the more agreement and organization it will need. The larger the group, the greater the number that will usually have to be included in the group agreement or organization. It may not be necessary that the entire group be organized, since some subset of the whole group may be able to provide the collective good. But to establish a group agreement or organization will nonetheless always tend to be more difficult the larger the size of the group, for the larger the group the more difficult it will be to locate and organize even a subset of the group, and those in the subset will have an incentive to continue bargaining with the others in the group until the burden is widely shared, thereby adding to the expense of bargaining. In short, costs of organization are an increasing function of the number of individuals in the group…

When there is no pre-existing organization of a group, and when the direct resource costs of a collective good it wants are more than any single individual could profitably bear, additional costs must be incurred to obtain an agreement about how the burden will be shared and to coordinate or organize the effort to obtain the collective good. These are the costs of communication among group members, the costs of any bargaining among them, and the costs of creating, staffing, and maintaining any formal group organization.”

— From “The Logic of Collective Action” (Ch. I: “A Theory of Groups and Organizations”), by Mancur Olson (1965)

Furthermore, increasing manpower can paradoxically reduce the overall team productivity due to other issues, such as reduced personal accountability and reduced motivation.

Finally, similar issues apply when it comes to other resources that might be added to a project, such as time or money. For example, in the context of research, if a certain biological sample takes a month to grow, and requires exactly one hour of work per day, telling a researcher to spend more time working on it won’t get it to grow faster, and will therefore be a waste of time. Alternatively, in the context of business development, if the best tools that an employee needs to complete a task cost $1,000 in total, telling them to spend $5,000 on extra tools won’t help them complete the task faster, and will therefore be a waste of money. Furthermore, in both these cases, spending the extra resources can even be counterproductive, for example if it causes the researcher to be more likely to make a mistake and ruin the sample, or if it causes the employee to waste time picking the unnecessary extra tools.

Note: a closely related concept is Parkinson’s law, which is the adage that “work expands so as to fill the time which is available for its completion”. This signifies that the more time people dedicate in advance to a certain task, the longer it will generally take to complete it, even if it could have been completed in less time.

 

Caveat about Brooks’ law

The main caveat about Brooks’ law is that Brooks himself referred to this concept as an outrageous oversimplification, since whether or not this concept applies in a certain situation depends on various factors, such as:

  • The attributes of the people who are currently working on the project, for example in terms of their ability and willingness to teach newcomers.
  • The attributes of the people who are being added to the project, for example in terms of their ability and willingness to learn new material.
  • The number of new people who are being added to the project.
  • The overall team size.
  • The group social dynamic, before and after the addition of new people.
  • The ways in which group members are expected to communicate.
  • The hierarchy and decision-making process within the team.
  • The type of tasks involved in the project, and especially the degree to which they can be divided.
  • The reason why additional manpower is being added.
  • The other steps (if any) that are being undertaken in order to support the project, beyond the addition of manpower.

For example, as one paper states:

“’Brooks’ Law’ has been questioned in the Free Software context: large teams of developers, contrary to the law, will not need an increasingly growing number of communication channels. As advocates claim, this is due to the internal characteristics of the Free Software process: the high modularity of the code helps developers to work on comparted sections, without the need to coordinate with all other contributors…

This paper argues that the claim of advocates holds true, but with limitations: in the KDE project, the few initial developers needed a significant amount of communication. The growth of KDE brought the need to break the number of overall communication channels to a significant extent. Finally, an established amount of 300 developers currently needs the same amount of communication as when the developers were only 10. We interpret this result by arguing that Brooks’ Law holds true among the core developers of any large Free Software project.”

— From “Reassessing Brooks’ law for the free software community” (Capiluppi & Adams, 2009)

This caveat also applies when it comes to the generalized version of Brooks’ law. For example, whether or not adding money to a project can make it go faster depends on a wide range of factors, such as what the money will be used for.

 

How to account for Brooks’ law

To account for Brooks’ law, you should avoid assuming that adding more resources to a project will necessarily help. Instead, assess the situation first, to determine whether you should add resources, and if so, then how many and in what way.

Specifically, before adding resources, you should consider the following:

  • How can adding more resources help the project?
  • How can adding more resources harm the project?

Then, you should consider the answers to these questions in light of your specific goals, and decide if and how to add more resources to the project. When doing this, you can consider how effective adding the resources will be, in terms of leading to the desired outcomes (e.g., finishing by the deadline), and how efficient adding the resources will be, in terms of avoiding wasted resources (e.g., developers focusing on development, rather than on unnecessary bureaucracy).

For example, if your main goal is to get the project done as fast as possible, it might be worth adding more people in order to increase the productivity of the team as a whole, even if this decreases the productivity of individual team members. This can happen, for instance, if a team of 2 people operates at 100% individual efficiency, so their cumulative productivity is equivalent to 200%, and adding another person reduces everyone’s efficiency to 80%, but increases the team’s overall productivity to 240%.
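The arithmetic of this tradeoff can be written out explicitly. The efficiency figures below extend the example with assumed numbers, purely for illustration:

```python
# Tradeoff from the example above: individual efficiency drops as the
# team grows, but total output can still rise (up to a point).
def team_output(size: int, efficiency_pct: int) -> int:
    """Total team productivity, as a percentage of one fully efficient person."""
    return size * efficiency_pct

# Assumed per-person efficiency at each team size (illustrative only).
for size, eff in [(2, 100), (3, 80), (4, 60), (5, 40)]:
    print(size, team_output(size, eff))
# 2 -> 200, 3 -> 240, 4 -> 240, 5 -> 200
```

With these assumed numbers, going from 2 to 3 people increases total output despite the lower individual efficiency, but adding a fourth person gains nothing and a fifth makes things worse—the decision to add people is a matter of degree, not all-or-nothing.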

Furthermore, when doing this, keep in mind that the decision to add resources is not necessarily binary, meaning that even though adding all the resources at your disposal might be counterproductive, adding some resources could help. For example, it might be the case that adding three developers to a software project would hinder the project, but adding one would help.

Similarly, keep in mind that it’s sometimes preferable to help a project by doing something other than adding resources, such as changing the scope of the project or moving its deadline, either in addition to or instead of adding resources.

Conducting this kind of analysis before deciding what to do can also help you see that Brooks’ law doesn’t apply in every situation, and can potentially help you identify ways to minimize the issues associated with adding resources to a project. For example, in the context of software development, solutions such as pair programming have been found to be effective in reducing communication issues, which could help mitigate some of the issues associated with increased manpower.

In addition, conducting this kind of analysis can be beneficial when assessing the way a project is currently being conducted, to determine if it’s using existing resources in an ineffective or inefficient manner.

Finally, if someone else in your organization insists on adding more resources when you know that doing so is a mistake, there are several things you might be able to do, depending on the circumstances:

  • Ask them to explain their reasoning, and explain the flaws in it.
  • Explain to them why it’s a bad idea, potentially while mentioning Brooks’ law and using relevant examples to illustrate it.
  • Help them find an alternative solution, such as changing the scope of the project or changing its deadline.
  • Convince them to add fewer resources than they originally intended, to reduce the issues associated with this addition.
  • Accept what they’re doing, and try to minimize the potential issues of adding more resources.
  • Ignore them.
  • Go along with their plan entirely, even if you know that it won’t work.

Note: a humorous solution for the problems associated with Brooks’ law is the Bermuda plan, which involves putting 90% of a company’s programmers on a boat and sending them to Bermuda so that the remaining 10% can finish the work.

 

Additional information

The origin and history of Brooks’ law

Brooks’ law was proposed by Fred Brooks, an American computer scientist, in his 1975 book “The Mythical Man-Month: Essays on Software Engineering”, where he states the following:

“Oversimplifying outrageously, we state Brooks’s Law:

Adding manpower to a late software project makes it later.

This then is the demythologizing of the man-month. The number of months of a project depends upon its sequential constraints. The maximum number of men depends upon the number of independent subtasks. From these two quantities one can derive schedules using fewer men and more months. (The only risk is product obsolescence.) One cannot, however, get workable schedules using more men and fewer months. More software projects have gone awry for lack of calendar time than for all other causes combined.”

— From “The Mythical Man-Month: Essays on Software Engineering”

Since then, Brooks’ law has been discussed in many works, and some empirical research has been conducted to test it. For example, one study on the topic states that:

“The common problem most software companies face is ‘how to get relatively large groups of people with varying abilities and skills to work together like nimble and talented small teams’…

First, we found strong evidence that the maximum size for a software project team significantly diminishes productivity. Second, we also identified two types of complexity that lead to larger team sizes – the logical complexity of the software product and the number of interfaces with other software.”

— From “Brooks’ law revisited: Improving software productivity by managing complexity” (Blackburn, Lapre, & Van Wassenhove, 2006)

 

Brooks’ law vs. Linus’ law in software development

Linus’ law is the adage that “given enough eyeballs, all bugs are shallow”. It was formulated by Eric S. Raymond in his 1999 book “The Cathedral and the Bazaar” (originally presented as an essay in 1997). In the book, Raymond states the following about Linus’ law and how it relates to Brooks’ law:

“Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

  1. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Or, less formally, ‘Given enough eyeballs, all bugs are shallow.’ I dub this: ‘Linus’s Law’.

My original formulation was that every problem ‘will be transparent to somebody’. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ‘Somebody finds the problem,’ he says, ‘and somebody else understands it. And I’ll go on record as saying that finding it is the bigger challenge’…

In Linus’s Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you’ve winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

And that’s it. That’s enough. If ‘Linus’s Law’ is false, then any system as complex as the Linux kernel, being hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered ‘deep’ bugs. If it’s true, on the other hand, it is sufficient to explain Linux’s relative lack of bugginess and its continuous uptimes spanning months or even years.”

— From “The Cathedral and the Bazaar”

Linus’s law is sometimes said to contradict Brooks’ law. However, Raymond addresses this apparent contradiction in his book:

“Thus, source-code awareness by both parties greatly enhances both good communication and the synergy between what a beta-tester reports and what the core developer(s) knows. In turn, this means that the core developers’ time tends to be well conserved, even with many collaborators.

Another characteristic of the open-source method that conserves developer time is the communication structure of typical open-source projects. Earlier I used the term ‘core developer’; this reflects a distinction between the project core (typically quite small; a single core developer is common, and one to three is typical) and the project halo of beta-testers and available contributors (which often numbers in the hundreds).

The fundamental problem that traditional software-development organization addresses is Brooks’s Law: ‘Adding more programmers to a late project makes it later.’ More generally, Brooks’s Law predicts that the complexity and communication costs of a project rise with the square of the number of developers, while work done only rises linearly.

Brooks’s Law is founded on experience that bugs tend strongly to cluster at the interfaces between code written by different people, and that communications/coordination overhead on a project tends to rise with the number of interfaces between human beings. Thus, problems scale with the number of communications paths between developers, which scales as the square of the number of developers (more precisely, according to the formula N*(N–1)/2 where N is the number of developers).

The Brooks’s Law analysis (and the resulting fear of large numbers in development groups) rests on a hidden assumption: that the communications structure of the project is necessarily a complete graph, that everybody talks to everybody else. But on open-source projects, the halo developers work on what are in effect separable parallel subtasks and interact with each other very little; code changes and bug reports stream through the core group, and only within that small core group do we pay the full Brooksian overhead…”

— From “The Cathedral and the Bazaar”

In addition, Raymond also says the following:

“In The Mythical Man-Month, Fred Brooks observed that programmer time is not fungible; adding developers to a late software project makes it later. As we’ve seen previously, he argued that the complexity and communication costs of a project rise with the square of the number of developers, while work done only rises linearly. Brooks’s Law has been widely regarded as a truism. But we’ve examined in this essay a number of ways in which the process of open-source development falsifies the assumptions behind it—and, empirically, if Brooks’s Law were the whole picture, Linux would be impossible.

Gerald Weinberg’s classic The Psychology of Computer Programming supplied what, in hindsight, we can see as a vital correction to Brooks. In his discussion of egoless programming, Weinberg observed that in shops where developers are not territorial about their code, and encourage other people to look for bugs and potential improvements in it, improvement happens dramatically faster than elsewhere. (Recently, Kent Beck’s ’extreme programming’ technique of deploying coders in pairs who look over one another’s shoulders might be seen as an attempt to force this effect.)…

The bazaar method, by harnessing the full power of the egoless programming effect, strongly mitigates the effect of Brooks’s Law. The principle behind Brooks’s Law is not repealed, but given a large developer population and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible. This resembles the relationship between Newtonian and Einsteinian physics—the older system is still valid at low energies, but if you push mass and velocity high enough you get surprises like nuclear explosions or Linux…

So to Brooks’s Law, I counter-propose the following:

19. Provided the development coordinator has a communications medium at least as good as the Internet, and knows how to lead without coercion, many heads are inevitably better than one.”

— From “The Cathedral and the Bazaar”

Finally, Raymond also provides the following note:

“John Hasler has suggested an interesting explanation for the fact that duplication of effort doesn’t seem to be a net drag on open-source development. He proposes what I’ll dub ‘Hasler’s Law’: the costs of duplicated work tend to scale sub-quadratically with team size—that is, more slowly than the planning and management overhead that would be needed to eliminate them.

This claim actually does not contradict Brooks’s Law. It may be the case that total complexity overhead and vulnerability to bugs scales with the square of team size, but that the costs from duplicated work are nevertheless a special case that scales more slowly. It’s not hard to develop plausible reasons for this, starting with the undoubted fact that it is much easier to agree on functional boundaries between different developers’ code that will prevent duplication of effort than it is to prevent the kinds of unplanned bad interactions across the whole system that underly most bugs.

The combination of Linus’s Law and Hasler’s Law suggests that there are actually three critical size regimes in software projects. On small projects (I would say one to at most three developers) no management structure more elaborate than picking a lead programmer is needed. And there is some intermediate range above that in which the cost of traditional management is relatively low, so its benefits from avoiding duplication of effort, bug-tracking, and pushing to see that details are not overlooked actually net out positive.

Above that, however, the combination of Linus’s Law and Hasler’s Law suggests there is a large-project range in which the costs and problems of traditional management rise much faster than the expected cost from duplication of effort. Not the least of these costs is a structural inability to harness the many-eyeballs effect, which (as we’ve seen) seems to do a much better job than traditional management at making sure bugs and details are not overlooked. Thus, in the large-project case, the combination of these laws effectively drives the net payoff of traditional management to zero.”

— From “The Cathedral and the Bazaar”

Note that there is also some empirical research on the association between these two concepts. For example, one study examined a large collection of free open-source software projects, and tested three potential hypotheses:

  • The Linus’ Law Hypothesis: FOSS projects with larger development teams will be more successful (Linus’ Law).
  • The Brooks’ Law Hypothesis: FOSS projects with larger teams will face added coordination costs which will hinder FOSS development. Consequently, they will be less successful (Olson and Brooks).
  • The “Core Team” Hypothesis: The size of the development team won’t have any effect on FOSS project success, because the core developer groups are almost always small teams (Zawinski).

— From “Brooks’ Versus Linus’ Law: An Empirical Test of Open Source Projects” (NCDG Working Paper No. 07-009, by Schweik & English, 2007)

The results showed that as the number of developers on a project increased, the likelihood that it would be successful also increased. However, two important caveats should be noted about this.

First, it is difficult to establish causality based on this study. As the researchers themselves note:

“Our finding creates a kind of chicken-egg problem. Do more developers lead to project success? Or is it that successful projects attract more developers, in part because of the economic motivations that drive some programmers to participate (e.g., signalling programming skill to a broad community, self-learning through reading other’s programs and peer-review)…? We can’t answer this question with the dataset we have, because it represents generally one point in time…”

— From “Brooks’ Versus Linus’ Law: An Empirical Test of Open Source Projects” (NCDG Working Paper No. 07-009, by Schweik & English, 2007)

Second, although Brooks’ law focuses on the potential disadvantages of increased manpower, while Linus’ law focuses on its potential advantages, these laws are not contradictory or mutually exclusive. For example, there may be a situation where adding more people to a project delays it (thus satisfying Brooks’ law), but also helps solve some long-standing bugs (thus satisfying Linus’ law). Furthermore, these two laws were meant to generally apply to different—although related—contexts, and involve a different type of manpower.

 

Summary and conclusions

  • Brooks’ law is the observation that “adding manpower to a late software project makes it later”.
  • Applied broadly, this principle means that, for many types of projects, adding more resources—especially more people—is often unhelpful and even counterproductive.
  • Brooks’ law can be attributed to various causes, such as the time it takes to teach new people, the increased communication overhead associated with more personnel, and the limited divisibility of certain tasks.
  • Brooks’ law is only true in some situations, and this depends on various factors, such as the attributes of team members, the team size, group dynamics, the types of tasks involved, and the current status of the project.
  • To account for Brooks’ law, avoid assuming that adding more resources to a project will necessarily help; instead, assess the situation first, to determine whether you should add resources, and if so, then how many and in what way.