Summary
Malcolm Gladwell's latest book is a selection from his New Yorker columns. The underlying theme is what I call "wrong correctness," which is fascinating because there are enormous possibilities to be mined, but only if we can learn how to create a new approach to business.
We love our models. Create a model and you can cheaply predict outcomes, without actually doing the experiments. Models are magical windows into the future.
We often think of models in terms of mathematical equations, but any kind of representation is a model. "The map is not the territory" is one of the more succinct descriptions of the disconnect between the model and the thing that it represents.
Every model is an abstraction. This means that some information is removed and/or lost in creating the representation. We attempt to abstract away only the pieces that don't affect the results we seek, and in doing so we assume that the system is made up of discrete, isolated components that can be added and subtracted with little or no impact on the rest of the system. Indeed, the concept of reductionism is itself a model that says, in effect, that the pieces are more important than the interactions that form a whole (although I suspect it started as "let's see how far we can take this idea" rather than an expectation that this is how the world works. Only subsequently did people adopt reductionism as an accurate world view).
Ironically, a model becomes a problem when it starts to work enough of the time that you begin to believe in it. You start to see the model as the world, and it becomes annoying and time consuming to constantly remind yourself that "all models are wrong, some are useful." In fact, humans are too limited not to see the world through our abstractions. If, every time we had to make a decision, we went back to first principles and took everything into account, we'd never do anything.
Newton's laws of motion provide an excellent example. Within our realm of perception these are absolutely "true" and accurate, all the time. And yet, they are only an approximation. When you start looking at the very big, very small, very fast, etc., they don't apply anymore. But it makes no sense to take the more complex factors into account when we live well within the limits of the approximation. So we abstract away the extra bits because they have virtually no impact in our realm. Unfortunately, we tend to forget the approximation and assume that our model is the real thing. Therein lies the danger: not approximation, but ignorance.
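To put a number on "virtually no impact," here's a back-of-the-envelope sketch (my own illustration, not from the article): the relativistic correction to Newtonian motion at everyday speeds is so small that no instrument we encounter in daily life could detect it.

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2); Newton implicitly assumes gamma == 1.
C = 299_792_458.0   # speed of light, m/s

def gamma_minus_one(v):
    """Relativistic correction at speed v (m/s); ~0.5*(v/c)^2 for v << c."""
    beta2 = (v / C) ** 2
    return 0.5 * beta2   # first-order term; exact value differs only at higher order

for label, v in [("walking, 1.5 m/s", 1.5),
                 ("highway driving, 30 m/s", 30.0),
                 ("jet airliner, 250 m/s", 250.0)]:
    print(f"{label:>25}: gamma - 1 ~ {gamma_minus_one(v):.1e}")
# Even for a jet the correction is on the order of 1e-13, far below anything
# we could notice; that's why the Newtonian abstraction holds in our realm.
```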
Models are so enticing. When you get one that works, it gives you a tremendous advantage: cheap predictability of results. It is very tempting to create a model even when it's not possible, because you want one, and you want the outcome it promises.
My father spent many years in the "charting" movement, trying to understand what a stock was going to do by looking at the curve of what it had done before. There's an entire mythos around the idea that there are patterns in stock charts that will tell you what they are going to do. Numerous mathematical studies have shown conclusively that this doesn't work. The experience of the chartists themselves has shown it; the only people who make any money from this movement publish newsletters. But the idea that such a model would work is so compelling -- it would make you rich -- that people build a religion despite consistent disproof.
An even more insidious problem arises when a model is possible but produces only limited results unless it is fed very complex information. Gathering that information becomes too expensive, or you just can't figure out how to do it, so you decide to simply ignore that term. The classic example of this is the cost-benefit analysis. You want numbers to produce a conclusion, so if something doesn't produce numbers, or those numbers are too difficult to collect and quantify, the easiest thing to do is not include that factor. Customer satisfaction, arguably the most important value that any company needs to consider, is often left out of a cost-benefit analysis because it's too hard to include. Ditto employee satisfaction and employee effectiveness.
One big flaw with models is that they assume the future is predictable. In particular, they posit that the way things are going now is pretty much how they will go in the future. This is one of the more comforting whispers you can speak to important portions of your brain. Those portions stand up and say "Yay! Sounds good to us!"
The other parts, the ones that say "something bad might happen," are shushed. After all, most of the time, things are comfortably predictable. And you never know when something bad is going to come flying out of the blue, so what's the point of dwelling on it? On the personal level, at least, a cheerful outlook sounds like pretty good advice.
To top it off, figuring out what to do about the random catastrophes is far from simple. A model that predicts a steadily-increasing stock market tells you what to do: invest and wait. One that says that every once in a while you'll encounter an unpredictable shake-up doesn't give any clear direction. It doesn't tell you when these things will happen, so there's no buy/sell prescription. Mostly it counsels keeping some of your money (enough to survive on?) somewhere safer, and only risking what you can live without.
The essays in Gladwell's book "What the Dog Saw" suggest that just because something is unpredictable doesn't mean we should ignore it -- there is still value, sometimes exceptional value, in factoring in the chaotic.
That doesn't mean it will be easy or obvious. Sometimes "factoring in the chaotic" simply means observing, and adjusting our predictions. In the software field, consider Waltzing with Bears, by Tom DeMarco and Timothy Lister. Their previous book, Peopleware, was a relatively easy read because it gave sound evidence that those management practices that you already knew were dumb were, in fact, dumb -- and often more expensive than anyone credited. But in Waltzing with Bears, they move away from the obvious and into the arena of risk management. Here, it's not about what we know will happen, but about what usually won't happen. Many people get away with saying "it won't happen, let's ignore it" most of the time. Sometimes they're even rewarded for being positive thinkers.
DeMarco and Lister first point out something very important. When someone asks you how long a particular subproject will take, it's usually implicit, and sometimes explicit, that they want to know the shortest, most optimistic time for this task. DeMarco and Lister note that the actual time for finishing a task is a probability curve, and if you only ever give the shortest time, you are giving the leading edge of the curve, where it touches the axis. Thus, each subtask prediction has a 0% probability of being correct. This means your project completion time estimation starts out, from day one, with a 0% probability of being correct. They suggest a relatively simple change in behavior: give, instead, the middle of your probability curve for each subtask, so you begin with a palpable completion time. It doesn't make the completion time predictable, but it does make it significantly less wrong.
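Here's a minimal sketch of that arithmetic (my own illustration with invented, right-skewed task distributions, not an example from the book): summing the optimistic edges of each task's curve gives a total the project essentially never hits, while summing the middles at least lands inside the real distribution of outcomes.

```python
import random

random.seed(1)

# Hypothetical project: 10 subtasks, each with a right-skewed duration
# ("usually about a week, occasionally much longer").
N_TASKS = 10
OPTIMISTIC_DAYS = 5   # the "leading edge" answer people usually give
MEDIAN_DAYS = 8       # the middle of the assumed per-task probability curve

def task_duration():
    # Lognormal with median ~8 days (exp(2.08) ~ 8.0) and a long right tail.
    return random.lognormvariate(2.08, 0.5)

def project_duration():
    return sum(task_duration() for _ in range(N_TASKS))

runs = sorted(project_duration() for _ in range(20_000))

optimistic_total = N_TASKS * OPTIMISTIC_DAYS   # 50 days
median_total = N_TASKS * MEDIAN_DAYS           # 80 days

def chance_of_finishing_within(days):
    return sum(r <= days for r in runs) / len(runs)

print(f"chance of hitting the optimistic total ({optimistic_total} days): "
      f"{chance_of_finishing_within(optimistic_total):.1%}")
print(f"chance of hitting the median-based total ({median_total} days): "
      f"{chance_of_finishing_within(median_total):.1%}")
print(f"simulated project median: {runs[len(runs) // 2]:.0f} days")
# The optimistic total is essentially never met; the median-based total is
# merely somewhat optimistic (summing medians still undershoots the project
# median when the task distributions are skewed).
```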
This is a big shift in perspective. All this time we've been doing project estimation quite badly. We are pressed into doing this. This pressure comes from our basic business model, which says that money is the only reason for doing anything. We optimize around money, and so naturally when we ask for a project estimate, we want the most optimistic one, the one that appears to cost the least. But if we look at it realistically, we see that (1) you can't know how long something will take, you can only guess, and (2) a collection of best-possible guesses produces a useless estimate.
In Peopleware, DeMarco and Lister also look at estimation, but for its effect on productivity. They run a test where managers and programmers estimate the completion of a project, in various combinations: the manager alone, the programmer and manager together, the programmer alone, and no estimate at all. The rather striking result was that the programmer was most productive when there was no estimate at all. So not only are we estimating very badly, the cost of estimation itself appears to be quite significant. Of course, current business thinking will look at these issues and say "very interesting, but we must have estimates and naturally we want the most optimistic ones."
This is the same business thinking that ignores hard facts in favor of myths and reflex reactions. An excellent example is pay-for-performance. Watch this TED talk by Dan Pink: 40 years ago, a seminal study showed that pay-for-performance only produces improvements in rote assembly-line-type work. For any work that involves creativity, pay-for-performance actually decreases productivity; apparently it demeans people to think that their creative work is only evaluated in terms of money (in the programming profession it's relatively well-understood that, as long as they can get by OK, programmers don't care that much about the money -- it's the quality of experience that matters). The negative impact of pay-for-performance has been emphasized in all the important business books of the past decades, the books that all the business leaders profess to read and agree with. And yet the only reward these same business leaders can think of is money, so they do exactly what has been shown again and again to produce negative results.
This is what I mean by "wrong correctness." Somehow the behavior makes sense and, like many of the stories Gladwell tells in his books, practicing that behavior doesn't produce a big, catastrophic failure. If you fall off a cliff, you learn fast that walking on air doesn't work. But if you only occasionally stumble and slide partway down and can climb back up with only a few scrapes and bruises, you can convince yourself that this path is a reasonable one, that we can just man up and push through and we don't have to look for a better, easier path.
In Outliers, Gladwell looks at how disasters happen. It's never one big thing, but a combination of small, seemingly mundane and manageable mistakes that, taken together, produce a crash. One of Gerald Weinberg's maxims is "Things are the way they are because they got that way, one logical step at a time." Each decision is a small one and appears to be logical in isolation (there's that reductionism problem again), especially if you base your decisions on what you want to believe, or what is convenient to believe, rather than looking at experiments (this is not to say that I believe in all experimental data, just that taking an experimental attitude is more likely to produce better results).
Here's another example of "wrong correctness," also from Peopleware: the Furniture Police. This is the team in a company that decides what furniture you can have, and how much should fit on a floor, etc. From the standpoint of the Furniture Police, the more people you can squeeze onto a floor of a building, the better. And the only metric they have for measuring their success is how much money they save. So they do the thing that is correct for them under their constraints, and cram people closer together, and show positive results through lowered costs.
The actual effect is very, very wrong. It greatly reduces job satisfaction and thus productivity. It appears to save the company money but the amount it actually loses vastly outweighs the tiny savings. Of course, you'd have to look at the big picture to see the loss, rather than the quarterly report where the furniture police seem to produce a win.
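A back-of-the-envelope version of that claim (every number below is invented, purely for illustration): even a small productivity drag on a knowledge worker's fully loaded cost swamps the rent saved by denser seating.

```python
# Illustrative only: the numbers are made up to show the shape of the trade-off.
loaded_cost_per_person = 150_000   # salary + benefits + overhead, $/year
rent_saved_per_person = 3_000      # from a denser floor plan, $/year
productivity_loss = 0.05           # a modest 5% drag from noise and interruptions

net = rent_saved_per_person - productivity_loss * loaded_cost_per_person
print(f"net effect per person per year: ${net:,.0f}")
# => about -$4,500: the savings land in the facilities budget that the
#    Furniture Police are measured on; the larger loss lands nowhere visible.
```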
Notice the trend. We do these things because the small decisions seem simple and relatively obvious, and in the short term they appear to make the numbers jump in the right direction. Kudos all around, now let's see what we can do for the next quarter. And when the pressure of the long-term trend eventually bursts the dam, everyone is confused and runs around desperately trying to figure out how to fix things -- but of course, the only solutions that make sense are short-term quick fixes.
So is it any wonder that, as companies get bigger, their productivity per worker goes down and down, until we get Microsoft, with billions flowing through it and amazing profitability and lots of smart people, unable to create anything new? For years and years? This is what happens when you accrete lots and lots of wrongly-correct practices. At some point you start going backwards.
The marshmallow experiment demonstrates that children who understand deferred gratification tend to be much more successful later in life. A study of successful people shows again and again the need for patience and perseverance. Even those who appear to become successful overnight turn out to be preparing, watching and waiting, typically for years, so they are ready when the right opportunity appears.
But it's as if we are a nation of five-year-olds, who only understand instant gratification. We don't want to hear about the years of preparation. We don't want to know the backstory, we only want to hear about the sudden fireworks and imagine ourselves magically walking into the same situation and suddenly being wealthy (which will apparently suddenly make us happy -- another thing we want instantly without any long-term investment).
Even if we do manage to create a human-centered company, it's only a matter of time before the incessant demands of the quarterly-profit beast erode these values. The only (rare) exceptions occur when the creators make up-front decisions to prevent such erosion from taking place: staying private, maintaining controlling interest, or creating employee/customer-owned cooperatives. Of course, such organizations don't have the potential of growing cancerously fast. To create and maintain a business like this requires strong and experienced leadership in the face of questions about optimizing growth and profits.
Steve Blank tells a story that's been repeated in many forms: the seemingly small, one-logical-step-at-a-time event that makes the key players look up and notice that the company has just gone from sweet to sour. In this case it is the slightly-comical decision by a new CFO to stop providing the human resources with free soda, which was costing the company some 10K/year. An easy and rational call, which made the CFO look like a go-getter. The key engineers, once sought avidly by the company, quietly announced their availability and began disappearing. The company didn't panic because it had already gone through its change of life and become more important than its pieces; it was no longer an idealistic youth who valued things like people and quality of life. It had grown up and matured and was now in the adult business of making money. Workers had become fungible resources, easily replaceable.
I remember the first time I saw this happen, in the second company where I had a "real job" after college. I'm not sure what the inciting incident was, perhaps the 3rd or 4th business reorganization within a couple of years, perhaps a sudden withdrawal of bonuses and raises. Whatever the case, a number of the engineers that I considered to be extra-smart began quietly disappearing, with the company making no-big-deal noises as this happened. My own direct manager left, which should have been cold water in my face (but I typically have to learn things in the hardest possible way, and this lesson was -- eventually -- not lost on me).
When did we decide that we were no longer "personnel" (which at least sounds personal) but instead the resources that are human? To the MBAs that probably came up with it, it was certainly the next logical step in fitting everything into a spreadsheet: we've got machine resources, building resources, manufacturing resources, etc., etc., and human resources.
It's the term everyone uses these days, without thought. But recent experiments with Émile Coué's theory of autosuggestion show that repeating something to yourself has an effect whether you believe what you're saying or not. Coué came up with "every day in every way, I am getting better and better." What do you suggest to yourself every time you say "I am a human resource?"
Gladwell tells the story of outstanding college football quarterbacks, the majority of whom are abject failures in professional football -- because the game is played entirely differently in the two domains. Thus, you cannot predict the success of a quarterback based on their success in college. Later in the book, he looks at the way we interview prospects for jobs. It turns out the most critical point of the interview is the initial handshake (or other initial impression). If you like the way someone shakes hands, you take whatever answers they give you and adapt them to that first impression. It's basically a romantic process, except with a real romance you decide the outcome after many months, whereas with a job interview you decide after only hours -- or actually in a moment, with the initial handshake. Even our lame attempts to simulate "real" work (by asking programming puzzles, for example), tell us nothing about the truly critical things, like how someone responds to project pressure. We suffer from Fundamental Attribution Error -- we "fixate on supposedly stable character traits and overlook the influence of context," and we combine this with mostly-unconscious, mostly-inappropriate snap judgments to produce astoundingly bad results. Basically, we think that someone who interviews well (one context) will work well on a task or in a team (a completely orthogonal context).
The answer is something called structured interviewing, which changes the questions from what HR is used to -- questions where the answer is obvious, where the interviewee can generate the desired result (not unlike what we've been trained to do in school) -- to those that extract the true nature of the person. For example, when asked "What is your greatest weakness?" you are supposed to tell a story where something that is ostensibly a weakness is actually a strength. Structured interviewing, in contrast, posits a situation and asks how you would respond. There's no obvious right or wrong answer, but your answer tells something important about you, because it tells how you behave in context. Here's an example: "What if your manager begins criticizing you during a meeting? How do you respond?" If you go talk to the manager, you're more confrontational, but if you put up with it, you're more stoic. Neither answer is right, but the question reveals far more than the typical interview questions that have "correct" answers.
Studies show again and again that repeat customers are your best source of business. And again and again, companies start looking at the cost of making customers happy in the same way they look at free sodas for employees: "hey, here's some fat that can be trimmed." It's a perfect example of wrong correctness and short-term thinking to say that we can cut back on customer support because it doesn't contribute to the bottom line, which is defined as sales for this quarter. Somehow everyone gets on board with these cost-cutting measures, because it seems so obvious. And often, at the same time, these same folks are saying that yes, repeat customers are very important. Except that it's so easy to make this quarter's numbers look better by doing some quick cutting. You end up cutting something that has taken years to develop, just for a quarterly bump. It's a bit like saying "I could lose 30 pounds overnight just by cutting off my leg!" Oh, sure, when I put it like that it sounds deluded. But how different is it, really?
A horrible customer support experience isn't an accident, it's a brilliant money-saving strategy for the company. And once you've reduced everything to quarterly profits, it's the only logical strategy. To do anything else requires a fundamental shift in perspective and company structure (the very shift I'm interested in). Sure, Apple could do a better job, but they have obviously decided that customer experience is what the company is about. Things should just work, and if they don't you should have a clear path to a solution. Who even thinks about calling Microsoft? Microsoft wins by saving money. Except that they are so out of touch with their customers they don't know what to make next. And more and more of my friends, long-time Windows users, are happily defecting to Apple (and try going to a conference filled with "developers, developers, developers!" They're almost exclusively Macs these days).
You know when you've found a customer-centered company (not the ones who put it in their mission statement because it sounds good, but the ones who actually do it). Trader Joe's and Costco come to mind. The experience is instantly good, and there's no sense of hidden traps waiting to spring when something goes wrong (health insurance and cars come to mind). Very quickly, you're thinking "I'm coming here from now on!" It's what most companies want, but don't have the patience for.
The list of examples of wrong correctness goes on and on. I'm sure I could write a book exclusively on the ways that we screw up. I have a reading list of books describing why we make these bad decisions. But I think that people who have spent any time in business have been personally frustrated by enough of these mistakes to know it's an overwhelming problem.
I do think "why?" matters, but I also see it as an endless recursive hole; I could easily spend the rest of my life becoming an expert on why people persist in turning their businesses into hellholes.
In the end I'm not so interested in understanding why we go wrong as much as discovering ways to inspire us toward naturally better decisions. In the same way that an open-spaces conference guides us to spontaneously create the best possible conference experience, I believe that there is some structure that will guide us to spontaneously create the best possible business experience (and for the rest, those not quite ready to jump in completely, make them question the knee-jerk addition of structures "because that's the way you run a business").
That's what I'm working on now. It's very ambitious, but it's the only thing that I find compelling: completely change our experience of work and business to make it happiness instead of drudgery, in the same way that open-spaces make conferences wonderful instead of an effort (for both organizers and attendees). I know I can't do it by myself -- I need to find the right community-building tools so that lots of ideas can appear and flow (I imagine some kind of web-based conversation, along with in-person events like open-spaces conferences and workshops). I don't want to "own" the result, in the same way that Harrison Owen didn't try to "own" open spaces. I just want it to happen, so we can stop cramming ourselves into this small, dank, oppressive space that we've been calling business and instead venture into a big world of ebullient possibility, measured by creativity, self-expression, productivity, and joy.
I find the theme of "completely change our experience of work and business to make it happiness" running through your last few posts to be very inspiring and encouraging. I wish you the best in your pursuit of answers. Unfortunately, I don't feel like I have any to offer, but I will provide this observation: there do seem to be more small software companies that 'get it', at least in terms of many of the points you raise (like customer support, furniture police, etc.). But big companies (including government, which has no short-term profit driver) seem far less likely to get it. Does the transformation from small to big generally cause a loss of 'happiness'? My suspicion is that having an organization filled with people who are passionate about what they do is a necessary (if not sufficient) condition.
There's a confusion here, certainly not original with you (and I don't blame you for it) between risk and uncertainty. What probability, for example, would you as a manager assign to the event of a critical member of a critical team becoming ill for three months and unable to work? Well, you could ask an actuary, but nobody does -- and in fact this game-changing event is generally ascribed a probability of 0%. That's because risk is what you can foresee, whereas uncertainty is what you can't and don't foresee. Gambling is risky: you figure the odds against you, you figure how much you have to lose, and you lose it, presumably getting your return in the form of entertainment. If you can make side bets against suckers, make them. The stock market, on the other hand, is not risky but uncertain: not only does nobody know what it's going to do, but nobody has a risk model that works for it, because its movements are unpredictable -- even in 20-20 hindsight. (An interesting fact: if we take the 50-year growth of the S and P 500 in the period 1950-2000 and delete the top ten performing days, we wind up throwing out about half the growth. Half of it.)
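The mechanism behind that "top ten days" figure is easy to reproduce with a toy return series (a sketch with synthetic numbers, not actual S&P 500 data): when returns compound, a handful of extreme days carries a disproportionate share of the total growth.

```python
import math
import random

random.seed(42)

# Synthetic daily returns: NOT S&P 500 data, just a mix of ordinary days
# and occasional wild ones, enough to show the mechanism of compounding.
def daily_return():
    if random.random() < 0.02:                # a rare turbulent day
        return random.gauss(0.0, 0.04)
    return random.gauss(0.0004, 0.008)        # an ordinary day with slight drift

days = [daily_return() for _ in range(12_500)]   # roughly 50 trading years

def total_growth(returns):
    return math.prod(1 + r for r in returns)

all_days = total_growth(days)
minus_best_10 = total_growth(sorted(days)[:-10])   # delete the 10 best days

print(f"growth with every day included:   x{all_days:.1f}")
print(f"growth without the 10 best days:  x{minus_best_10:.1f}")
# A few extreme days account for a large fraction of the compounded growth,
# which is why deleting them is so costly.
```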
But as for businesses, you have a hold of the right stick but at the wrong end. Businesses exist in order to make short-term profits for their owners, and for no other reason. If Microsoft is dazzlingly profitable, they are doing everything right. For that matter, IBM's mainframe business is spectacularly profitable, and they aren't innovating at all. That's as it should be. Being "customer-centered" is about exploiting your customers as efficiently as you can while making them believe that your purpose in life is to serve them rather than exploit them. Apple is superb at this.
As to the probability curves in software estimation: have you heard of FogBugz and its evidence-based scheduling? Not that I am a user or anything, but the idea of creating the very same probability curve that you (and DeMarco and Lister) are mentioning here, while at the same time measuring the actual accuracy of an individual programmer's estimates, seems great.
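For readers who haven't seen it, here is a minimal sketch of the underlying idea (my reading of evidence-based scheduling with invented numbers, not FogBugz's actual code or data): record each programmer's historical actual-to-estimate ratios, then Monte Carlo the remaining work by resampling those ratios, which yields a probability curve for the total instead of a single number.

```python
import random

random.seed(7)

# Hypothetical history for one programmer: actual_hours / estimated_hours
# for past tasks. Mostly near 1.0, with the occasional blowout.
past_ratios = [0.9, 1.1, 1.0, 1.3, 0.8, 2.5, 1.2, 1.0, 1.6, 0.95]

remaining_estimates = [8, 16, 4, 24, 12]   # hours of work still to do

def one_possible_future():
    # Scale each remaining estimate by a randomly chosen historical ratio.
    return sum(est * random.choice(past_ratios) for est in remaining_estimates)

totals = sorted(one_possible_future() for _ in range(10_000))

for pct in (50, 75, 95):
    hours = totals[int(len(totals) * pct / 100) - 1]
    print(f"{pct}% chance of finishing within {hours:.0f} hours")
```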
> There's a confusion here, certainly not original with you (and I don't blame you for it) between risk and uncertainty. What probability, for example, would you as a manager assign to the event of a critical member of a critical team becoming ill for three months and unable to work? Well, you could ask an actuary, but nobody does -- and in fact this game-changing event is generally ascribed a probability of 0%. That's because risk is what you can foresee, whereas uncertainty is what you can't and don't foresee.
Do you have a reference for these definitions? Because this doesn't make much sense to me. If you were certain that you'd get a return on your money, there would be no risk. I agree that they aren't the same thing but I don't think the above clarifies anything.
> Gambling is risky: you figure the odds against you, you figure how much you have to lose, and you lose it, presumably getting your return in the form of entertainment. If you can make side bets against suckers, make them. The stock market, on the other hand, is not risky but uncertain: not only does nobody know what it's going to do, but nobody has a risk model that works for it, because its movements are unpredictable -- even in 20-20 hindsight. (An interesting fact: if we take the 50-year growth of the S and P 500 in the period 1950-2000 and delete the top ten performing days, we wind up throwing out about half the growth. Half of it.)
I really don't think it makes sense to say that if you can't create a risk model for something, you don't have risk. There are many risky plays you can make in the stock market. 'Risk model' is defined in terms of 'risk' and not the other way around.
> But as for businesses, you have a hold of the right stick but at the wrong end. Businesses exist in order to make short-term profits for their owners, and for no other reason.
That's a very narrow way of looking at business. A business exists to make money for its owners. Whether it's over the short term or the long term depends on the business and what the owners want. I might get a 30% annualized rate of return but have to wait 10 years to see a dime. There's no rule that says businesses can't work that way. Many businesses lose money for quite a while before turning a profit, e.g., Amazon.
> For that matter, IBM's mainframe business is spectacularly profitable, and they aren't innovating at all. That's as it should be. Being "customer-centered" is about exploiting your customers as efficiently as you can while making them believe that your purpose in life is to serve them rather than exploit them. Apple is superb at this.
IBM mainframe hardware is vastly different from what it was a decade ago. Storage, in particular. What hasn't changed is the OS (mostly), and the paradigm (COBOL/VSAM, despite the existence of DB2). They are moving more toward Linux on the mainframe; we'll see how that works out.
IBM pretty much invented "client management" in the 1950s.
There are many ideas in this article and I don't quite see the relationship among them, but at the end there is the motivation: to make better decisions in life. So I suppose you mean there is something wrong that needs to be corrected, and that this wrong thing is related to the idea of models.
Speaking of risk, I would say there is a risk when we are talking about bad things that can happen to us.
For example: there is a risk that you get wet if it rains, but there is uncertainty about whether today is going to be a rainy day.
I hope you find a way to build a community that improves people's lives, but I don't yet see a coherent picture here.
> Do you have a reference for these definitions? Because this doesn't make much sense to me. If you were certain that you'd get a return on your money, there would be no risk. I agree that they aren't the same thing but I don't think the above clarifies anything.
IIRC, "risk management professionals" usually define "risk" in terms of events that may occur and impact a project or operations. So it's something like "there's a 30% chance that our subcontractor will deliver 2 months late, thereby delaying our delivery by at least six weeks." They define "uncertainty" as something like the margin of error in an estimate. So instead of having a point estimate for the cost/duration/effort/whatever for a task, you estimate a probability distribution for it. What you can't foresee are "unknown unknowns," and technically you can't plan for something you know nothing about, although usually I think most of these get wrapped up in a vaguely worded "stuff happens" risk and padding of estimates.
I've always hated this terminology, but there is a real dichotomy between the inherent variability in the resources a task will require and in various things that can go wrong. I think if you poke around the PMI website you probably can find the "correct" definitions.
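A tiny sketch of that dichotomy, roughly following the definitions described above (all numbers invented): treat the estimate's inherent variability as a distribution, treat each named risk as a probability of an added hit, and combine them in a simulation.

```python
import random

random.seed(3)

def one_scenario():
    # "Uncertainty": the task's duration is a distribution, not a point estimate.
    duration = random.triangular(20, 60, 30)   # days: low, high, most likely
    # "Risk": a discrete, foreseeable event with a probability and an impact.
    if random.random() < 0.30:                 # 30% chance the subcontractor slips
        duration += 42                         # ...costing about six weeks
    return duration

runs = sorted(one_scenario() for _ in range(10_000))
print(f"median: {runs[5_000]:.0f} days, 90th percentile: {runs[9_000]:.0f} days")
```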
>> Watch this TED talk by Dan Pink: 40 years ago, a seminal study showed that pay-for-performance only produces improvements in rote assembly-line-type work.
Patently false. PfP works 100% in sales. Whether it *should* be used is an altogether different question. It leads to all the levels of corruption oft discussed.
> That's a very narrow way of looking at business. A business exists to make money for its owners. Whether it's over the short term or the long term depends on the business and what the owners want. I might get a 30% annualized rate of return but have to wait 10 years to see a dime. There's no rule that says businesses can't work that way. Many businesses lose money for quite a while before turning a profit, e.g., Amazon.
That a business exists to make money for its owners is only strictly true of sole proprietorships and partnerships. Corporations suffer from dissociation. The issue was/is brought into stark contrast by Goldman Sachs. The venerated Adam Smith based his analysis on enlightened self-interest, which some have taken to mean a defense of present-day capitalism. Unfortunately, Smith wrote a fantasy in which *no* actor could create externalities.
What GS has made manifest is that "managers" of corporations can, in collusion with boards of directors with which there is significant cross-membership, maximize their self-interest while harming both the corporate and social interests, with no penalty. Economists call these harms "externalities," and generally assume them out of analysis, especially those of the right wing.
> That a business exists to make money for its owners is only strictly true of sole proprietorships and partnerships. Corporations suffer from dissociation.
Corporations exist to make money for their stockholders. The only distinction between a sole proprietorship and a corporation owned by a single person is the filing of a form.
How well the officers of a given corporation do at protecting the interests of the owners is a different conversation.
> I've always hated this terminology, but there is a real dichotomy between the inherent variability in the resources a task will require and in various things that can go wrong. I think if you poke around the PMI website you probably can find the "correct" definitions.
I can accept that these terms have a technical meaning in that context but that doesn't mean that the general meaning must change to be consistent with the technical terminology.
Bruce, thank you very much for your insightful post and good luck with your ambitious challenge!
> completely change our experience of work and business to make it happiness instead of drudgery, in the same way that open-spaces make conferences wonderful instead of an effort (for both organizers and attendees).
Unfortunately, I am not too optimistic that this will work. I fear that there are so many aspects that contribute to such a thing as "happiness" that there will be no silver bullet nor a general process/procedure to make this happen.
Several posters seem to confuse the purpose of a business with the aspirations of its owners. The purpose of a business is to provide goods and services to those who want or need them and will pay for them. Having revenue - cost >= 0 is a necessary condition for a business to remain alive. (To keep it simple, I lumped tax in with cost. Also, I'm glossing over businesses that stay in business while taking losses for some period of time in the hope that they will establish a market and turn around financially.)
The goals of a business' owner can be more than ever-growing profits. For example, a business can provide greater flexibility in work hours for its proprietor; or, perhaps the proprietor hopes to hand down the business to his/her child; etc.
Admittedly, corporations on the public market need to provide good returns for their investors, so their goals become narrower. However, owners of privately held SMBs can have myriad goals.