Management Research

I used to regard the pursuit of knowledge as a means to an end;

I now regard the pursuit as an end in itself.

Preface

 

This research grows directly out of the author’s quest to understand, explain and ultimately assist businesses in replicating the results of so-called “successful” and “great” companies.  The genre of management research that we refer to as the “success study” has come far since the days of In Search of Excellence.  It has a long and eminent record, and it has gradually shifted from academic into mainstream publications whilst enjoying an ever tighter grip on our collective imagination (Raynor, Ahmed, & Henderson, 2009).

This field of management research has undoubtedly produced some of the biggest names and finest work in the business. It will always remain debatable whether it was the quality of the work that raised the profile of its authors, or the quality of the authors that raised the profile of their work.

The roots of ‘success’ or ‘excellence’ studies can be traced back to 1982, when Peters and Waterman introduced the idea with In Search of Excellence, which remains one of the most cited management studies of recent times.  Jim Collins’ Built to Last, which examined the successful habits of visionary companies, set the trend and opened the floodgates to more than a dozen similar efforts since its publication in 1994.  Good to Great, the successor to In Search of Excellence and prequel to Built to Last, was published in 2001 and still reigns supreme at the top of the ‘success study’ tree.

This was followed by numerous other attempts, each new study building on the previous one as authors sought to address the criticisms levelled at their predecessors.  They include works by Zook and Allen (2001), Foster and Kaplan (2001), Nohria et al. (2003), Marcus (2005) and, more recently, McFarland (2008) on medium-sized firms.  At the same time, the literature was being saturated by a steady downpour of reports, studies and white papers from think-tanks and consulting firms taking a similar approach to specific management practices (Raynor et al., 2009).

The scripts for these studies seem to share an uncanny similarity, as one paper noted: They begin with a population of companies and identify the most successful among them; examine their behaviours and identify traits associated with that success; distil those patterns into a general framework; and then claim that if managers use that framework to guide their own businesses, similar results can be achieved (Raynor et al., 2009).

As more researchers joined this popular field, the study of ‘success/excellence’ gathered momentum.  From researchers to managers, professors to CEOs, and across classrooms and boardrooms, such studies were hailed by academics and practitioners alike.  The study of success/excellence has been elevated into a science.  The findings are derived from exhaustive research and painstaking analysis, often requiring tens of thousands of man-hours. The resulting publications are translated into different languages, shape the habits of business managers, and affect decision-making in the corporate world.  According to one article, the authors of such studies launch research firms and think-tanks, consult for Fortune 500 companies and command $50,000 fees to speak at conferences and corporate retreats (Bennett, 2009). If ‘organisational behaviour’ studies were in fashion during the 1960s and 1970s, ‘success/excellence’ studies defined the 1980s and 1990s, reaching their peak in the early 2000s.

As the literature drew much attention, the spotlight inevitably fell on the research methodology employed.  At around the same time, companies identified as being successful became ‘not so successful’[1] [2].

Three fairly recent works argue that these ‘success studies’ contain serious weaknesses, and they target some of the most prominent names in the business, particularly those with the highest reputations (Pfeffer & Sutton, 2006; Raynor et al., 2009; Rosenzweig, 2009). If the ‘success study’ is still in vogue, the trend appears to be shifting.

Criticisms began mounting, ranging in tone from ‘dangerous half-truths’ to ‘pseudoscience’ and from ‘fundamental irremediable flaw’ to ‘useless and invalid’ (Caulkin, 2007; Pfeffer & Sutton, 2006; Raynor et al., 2009; Rosenzweig, 2009). Bennett (2009) argues that much of this literature is irrelevant. Rosenzweig (2009), for example, observes that the data are usually seen through the lens of the company’s success: they do not explain the company’s success, they are explained by it. He cautions against being carried away by the massive amount of data with which these ‘success studies’ try to impress the reader.

This view is supported by Raynor et al. (2009), who analysed 13 of the most influential ‘success studies’ and concluded: “every one of the studies that we investigated in detail is subject to a fundamental irremediable flaw that leaves us with no good scientific reason to have any confidence in their findings”. The authors contend that these success studies are studying firms with performance profiles that are statistically indistinguishable from fortunate random walks.

Pfeffer and Sutton (2006) take it one step further by explaining that it is not only the researchers and academics who ‘get it wrong’. They claim that business leaders and managers often follow deeply held yet unexamined ideologies, substituting conventional wisdom for facts and then accepting it as ‘truth’, much to the detriment of their own organisations and industries. Pfeffer and Sutton (2006) call for a more evidence-based approach to management, similar to that currently employed in medicine.

Rosenzweig (2009) attacked the research design of studying only ‘excellent’ companies, describing it as an elementary error and likening it to trying to identify the cause of high blood pressure without a control group of healthy individuals. The mere fact of choosing successful companies and having their managers account for their success invites post-rationalisation: in such circumstances, managers can hardly fail to mention strong values, people management, strategic focus and listening to customers (Caulkin, 2007). Describing success, however, does not explain what caused it, since strategic focus could just as easily be a by-product of success as its cause.

This and other criticism is forthright and perhaps justified; then again, articles critical of ‘success studies’ have never been in short supply.  Confidence in the ‘success study’ method may be dented, but not enough to make a significant impact on the ‘success study’ juggernaut. As one author puts it, there has been no run on the bank yet (Raynor et al., 2009), no smoking gun, and certainly no compelling reason to dismiss the value of ‘success studies’. After all, this is exploratory social science research; even if the data may be inaccurate or the analysis flawed, confidence in the value of studying ‘successful/great’ companies remains high (Pfeffer & Sutton, 2006; Raynor et al., 2009; Rosenzweig, 2009).

That may be about to change. In a scathing attack, one that would shake the foundations of success studies to their very core, Raynor et al. (2009) claim to have debunked the success study method. The burden of proof usually resides with the accuser, and in this case Raynor et al. have provided what appears to be ‘damning evidence’ against the success study school.

In a debate, debaters usually acknowledge their weaker arguments, anticipating certain attacks, and have counter-arguments prepared in defence.  A bolder, though not uncommon, strategy is to attack the opponent’s strongest point, where such an attack is least expected. Carried out successfully, this undermines the opponent’s ability to retaliate, because the bedrock of all their arguments is compromised.

Raynor et al. (2009) employ a similar strategy in their provocative attack on ‘success studies’. Instead of quibbling over details and accusing success studies of cherry-picking their data or relying on anecdotal evidence such as unpublished company records and interviews with the very managers whose performance is being evaluated in the first place (think Hawthorne Effect), Raynor et al. cast doubt on the validity of success studies by questioning the very object of the studies, namely the companies themselves and their claim to greatness.

In order to understand and fully appreciate the nature, extent and ramifications of this dispute, it is necessary to examine and contextualise the claims of both parties (Raynor et al. v the success studies).  Since the burden of proof lies with Raynor et al., the party that does not shoulder that burden, in this case the success studies, enjoys the benefit of the presumption.

At the top of the success study research tree, Collins (2001) examined ‘great’ companies that produced cumulative stock returns averaging roughly seven times those of the general market over a 15-year period (J. Collins, 2001, p. 2).

How can that be anything but remarkable?  However, herein lies the conundrum, as Raynor et al. argue.  Consider the following words of Prof. Henderson that the authors use to illustrate their point:

“I begin my course in strategic management by asking all the students in the room to stand up. I then ask each of them to toss a coin: if the toss comes up ‘tails’ they are to sit down, but if it comes up ‘heads’ they are to remain standing. Since there are around 70 students in the class, after six or seven rounds there is only one student left standing. With the appropriate theatrics, I approach the student and say ‘HOW DID YOU DO THAT?? SEVEN HEADS IN A ROW!! Can I interview you in Fortune? Is it the T shirt? Is it the flick of the wrist? Can I write a case study about you?’” (Raynor et al., 2009, p. 3)

Raynor et al. (2009) describe this phenomenon as systemic variability, and argue that the success studies they reviewed confuse the long-run consequences of systemic variability with individual attributes such as skill.  Any system subject to variation in outcomes will ultimately produce streaks of high and low performance that fool our intuition.  So the questions are: if one student out of 70 can produce seven ‘heads’ in a row[3], then, with a large enough sample, is it inconceivable that some companies will produce ‘winning streaks’ due to the inherent variability of the system rather than to their unique attributes?  Is it improbable that ‘success studies’ are therefore (for the most part) studying the corporate equivalent of the ‘lucky’ student who flipped seven heads?  And is it unreasonable to suppose that (in the words of Raynor et al.) researchers who think they are studying successful companies are usually studying the winners of random walks?
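The arithmetic behind the coin-toss illustration is easy to check. The short simulation below is a purely illustrative sketch, not code from Raynor et al. (2009); the run_class helper and its parameters are hypothetical, chosen only to mirror the classroom example. It estimates how often a class of 70 students, each tossing a fair coin seven times, produces at least one ‘seven heads in a row’ performer by chance alone; the expected count per class is 70 × (1/2)^7 ≈ 0.55.

```python
# Illustrative sketch of the classroom coin-toss example:
# how often does blind chance alone produce a 'seven heads in a row' winner?
import random

def run_class(n_students=70, n_rounds=7, trials=10_000, seed=1):
    """Simulate many classes; return the average number of students who
    toss heads in every round, and the share of classes with at least one."""
    rng = random.Random(seed)
    total_winners = 0
    classes_with_winner = 0
    for _ in range(trials):
        # A student 'wins' only by tossing heads in all n_rounds rounds.
        winners = sum(
            all(rng.random() < 0.5 for _ in range(n_rounds))
            for _ in range(n_students)
        )
        total_winners += winners
        classes_with_winner += winners > 0
    return total_winners / trials, classes_with_winner / trials

if __name__ == "__main__":
    avg_winners, share = run_class()
    print(f"Average 'seven-heads' students per class: {avg_winners:.2f}")  # about 70/128 = 0.55
    print(f"Share of classes producing at least one:  {share:.0%}")        # roughly 40%
```

Nothing in the simulation involves skill: the ‘winner’ is produced entirely by the size of the population and the variability of the process, which is precisely the confusion Raynor et al. (2009) attribute to the success studies they reviewed.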

With the recent proliferation of critiques levelled at some of the most influential management volumes, the ‘success study’ trend appears to be approaching a precipice.  As the number of articles attacking these management blockbusters increases, so does the boldness of their criticism.  Some authors have not disguised their disdain for the findings and research methodologies of ‘success studies’, their comments being as profane as the implications of their articles are profound.

What does this mean for the validity of some of the most influential and cited management research?  More importantly, what does this mean for the success study method?

Across the spectrum of management research issues, academics disagree. Researchers disagree over qualitative versus quantitative methods, qualitative AND quantitative, qualitative OR quantitative, grounded theory, single- versus multiple-case approaches, research design and analysis.  Not only do we disagree, we disagree vehemently, with academics from different schools of thought proclaiming the appropriateness of their own approach and thereby defending the soundness of their arguments.  We argue over the extent of our arguments, the nature of our arguments and the reason for our arguments.  Everything is debatable: the cause of skewed results or the fact of skewed results, the findings of a study or the biases to blame for those findings, the lack of academic rigour or the very definition of academic rigour.

The author has few illusions about the inherent difficulties of such a project and admires the courage, tenacity and intellectual stamina of those who undertake academic research for the purpose of a modest contribution to knowledge, and whose printed articles open them to scrutiny and criticism.  The author also appreciates the tedium of the academic work needed to maintain objectivity, yet is, at the same time, sympathetic to the argument that embracing rigorous scholarship in business schools comes at the cost of a realistic view of business.

This review offers no unifying theory of SME management, nor do these pages provide a road map for SME success complete with an action plan, to-do lists and timeless management principles.  What is offered instead is something more modest: a chronological critique of the ‘success study’ literature; a descriptive approach that seeks to explain the difficulty of understanding high performance (as opposed to the success studies’ prescriptive approach of providing a magic formula or the key ingredients for success); and some thoughts on the inadequacy of management research in bridging knowledge, theory and practice.

Some readers will ultimately regard the presentation of these issues as insufficiently balanced (not least because of the occasional reference to management studies published in the popular domain rather than in peer-reviewed academic journals). This accusation is justified and expected.  However, a critical reader may instead ask: “In order to understand successful companies, is it sensible to exclude publications that have a profound effect on shaping the habits of practising managers merely because they do not conform to academic standards or scholarly norms?”  After all, these studies are carried out by the very people who occupy the highest echelon of management research, with teams of dedicated professional researchers, the financial backing to undertake studies of such magnitude, and an army of Ivy League graduates to help them sift through and codify an even greater amount of research data.

The author acknowledges that he is a prisoner of his own paradigm.  He cannot help but view management practice and research through the lens of a postgraduate research student with modest management experience, mindful of how ‘conventional wisdom’ plagues management decisions, how unproven practices become accepted ‘truths’, and how academics and practitioners trapped by their beliefs and ideologies continue to shape our understanding of successful companies.

Undoubtedly, some of the following pages will raise eyebrows and may attract accusations of unwarranted inference.  The author is new enough to the academic research scene to serve as a blank sheet onto which readers of vastly differing experience and background can project their own views.  As such, these pages are bound to disappoint some, if not all.

Recently, the author was asked a question on a theme he was well aware of, yet had always avoided answering.  The founder of a successful company asked him, “I wonder if your work can help my company?”, by which the founder meant: I wonder whether academic research can teach you how to manage my company for success.

The author ponders that question too, and hopes that writing these pages will help him answer the question.

 

Part 2: Epistemological Grounding of Thesis

 

 


[1] Of the 43 companies identified as ‘excellent’ in In Search of Excellence, more than a third were in financial difficulty within five years of the study.

[2] Of the 11 companies identified as ‘great’ by Collins, Circuit City filed for Chapter 11 bankruptcy, Fannie Mae was bailed out by US taxpayers, and all but one (Nucor) have underperformed the stock market.

[3] ‘One out of 70’ is only an expected value: with 70 students each tossing seven coins, the expected number of seven-heads streaks is 70 × (1/2)^7 ≈ 0.55, with considerable variation around it. Roughly 95% of the time, a class of 70 will produce between zero and two students with seven heads out of seven tosses.
