Tag Archives: Research

Leading Digital (2014) by Westerman, Bonnet and McAfee

I read Leading Digital with a mixed sense of anticipation and suspicion. Heightened anticipation was there for a reason: it is not common to find a rigorous, organic, extended, articulated analysis focused on how traditional corporations face the changes brought about by digital technologies. The slight suspicion came instead from the frequent déjà-vu I often experience when I get hold of something on the subject. This is an old debate now. Two decades have gone by since the New Economy highs and lows; some of the very same questions were raised then, and left unanswered, I'm afraid. Then, a few years into the new Millennium, with the advent of social media, the much-awaited mobile explosion and the new wave of enthusiasm and investments that ensued, we got into the same discussion once again, especially in the professional services realm (where I have been working for a long time, as an agency guy – perhaps I should specify "digital agency" – or as a freelancer). Sometimes this debate has turned into rhetoric, or worse, a trade-event kind of cliché; paradoxically, it is often addressed to people already convinced of the importance of the issue – very much preaching to the converted, as they say. So, besides the debate and all of the digital "evangelism" (how dated it sounds!), I would now really like to read a systematic overview, see research results, and examine well-founded reasoning. This is the promise of Leading Digital, and I think that to a large extent it delivers on that promise.

The book is the outcome of a collaboration between MIT and Capgemini. It was written by three authors: two of them have an academic profile, George Westerman and Andrew McAfee (the latter is also co-author of The Second Machine Age, with Erik Brynjolfsson), while the third, Didier Bonnet, is one of the global leaders of the French-based consultancy.

The book is based on a three-year research effort, from about 2010 to 2013 I would say (it is not specified, but the book was published in 2014). First, Westerman, McAfee and Bonnet, with the help of a team, interviewed about 150 executives and managers at 50 large corporations that don't have technology as a core business. This is an important distinction, as it specifies the generic term "traditional corporation" I have used above. Second, they ran a survey involving almost 400 large corporations in 30 countries ("large" meaning revenues in excess of 500 million dollars). The authors are very clear about their global perspective, not centred on the United States. In fact, even if most of the major technology leaders are indeed from the US, a vast number of large and very large corporations are based outside the US, in Asia or in Europe (where I'm based, en passant).

The focus on "traditional corporations", defined as the ones that don't have technology as their core business, is a cornerstone of the whole work: these firms make up "the 90% of the economy", so it is of the utmost importance to understand how they react to the technologies brought to market by the global platform leaders or by the whole range of startups – many of them coming from Silicon Valley or the US. The strength and momentum of this wave of digital technologies, platforms and services are such that nobody can escape it. Westerman, Bonnet and McAfee have no doubts: the firms that choose not to react are going to face obsolescence and decline. Here comes an analogy that has been made many times in this debate: digital technologies are the Second Industrial Revolution. Nothing can resist their momentum. It's a warning for the executives out there: we have come to a point at which it is possible to distinguish between the corporations that have undergone a successful transformation, taking advantage of these technologies, and those that haven't. The analysis of these outcomes has allowed the authors to devise an approach, or a transformation roadmap, that others can follow too.

En passant, Leading Digital is also the book of choice to get a synthesis of the many scholarly articles and white papers coming out of the cooperation between MIT and Capgemini on the "digital transformation" idea. The expression has quickly entered the industry jargon, but it could be that many are not aware of the original formulation, or, better yet, of the formulation that has gained the widest adoption. A 2011 MIT and Capgemini document reported the following definition:

Digital transformation (DT) – the use of technology to radically improve performance or reach of enterprises – is becoming a hot topic for companies across the globe.

I think it's important to start again from here – it's not about defining something once and for all, but about bringing some clarity to the context in which it has been shaped. The first Altimeter report on the topic (published in 2014) says that in their case digital transformation is analysed through the customer experience lens. A second Altimeter report on the same subject credits an earlier formulation by scholars Erik Stolterman and Anna Croon Fors. From what I can read through Google Books scans, they were pretty distant from an interest in corporate performance. In that discussion, "digital transformation" is an emergent phenomenon that calls for critical scrutiny, a novel focus for information technology research – they might even be quoting Marcuse, if I'm not wrong.

… the most crucial challenge for IS [Information Systems] research today is the study of the overall effects of the ongoing digital transformation of society. The digital transformation can be understood as the changes that the digital technology causes or influences in all aspects of human life. This research challenge has to be accepted on behalf of humans, not in their role as users, customers, leaders, or any other role, but as humans having a life.

This was around 2003. Fast forward to 2014, and typing "digital transformation" in the Google bar will get you a couple of ads from big consulting businesses (Accenture, to mention one), followed by a deluge of organic results. Anyhow, my point is that for all of these mentions there is little research, so it's worthwhile to have a close look at the book by the people who triggered the most informed debate.

I think the book offers three main original results and contributions. The first is a set of models and categories that frame and define the whole question; they are the tools that make it possible to investigate its dynamics and produce practical recommendations. The second set of results includes the case and example reviews: the corporations that have been analyzed, with all of the excerpts from the research interviews. The third is a proper "discovery", so to say: the fact that the best corporations from the digital transformation angle also show better business results.

Let's have a look at the first and second points. One key categorization or model makes a distinction between three dimensions relevant to the digital transformation concept: customer experience, operations and business models. These are different but interdependent aspects, so that changes in one influence the others to some degree. All of the corporate cases mentioned in the book can be mapped to these domains. So Burberry and Starbucks, for example, are explored mostly from the customer experience perspective. Very distant businesses like Asian Paints (India), Codelco (Chilean mining) or Zara are in the spotlight when it comes to the operations dimension. Hailo, Uber, Airbnb, Fujifilm, Zipcar, Car2go and many others illustrate the business model discussion. So this is about how corporations react to digital technologies in one or another of these three key dimensions, or in all of them at once. The authors then introduce a typology based on another distinction: there are digital capabilities or competencies, and leadership capabilities. Here you get a typical two-axis matrix with four quadrants, in which the upper-right quadrant is for "Digital Masters". I think these are the most analytical parts of the book. Combining these models with real cases offers very rich material, interesting per se and useful as the basis on which to build advice for other corporations. In fact the book offers plenty of checklists, summaries and an entire final "playbook" addressed to executives who want to face the digital transformation challenge. Those who are not at ease with the business-book flavor might be slightly annoyed at this point, but the authors have been impeccable in pointing to the many scholarly or public sources in the endnotes (testifying again to the research rigour).
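Just to make the matrix idea concrete, here is a minimal sketch of how a company could be mapped onto the two axes. It is my own illustration, not from the book: the 0–1 scores and the threshold are hypothetical, and only the "Digital Masters" label for the upper-right quadrant comes from the authors.

```python
# Illustrative sketch only: the two-axis "digital capabilities vs leadership
# capabilities" matrix described above. Scores and threshold are hypothetical;
# only the "Digital Masters" label for the upper-right quadrant is from the book.

def quadrant(digital_capability: float, leadership_capability: float,
             threshold: float = 0.5) -> str:
    """Map two 0..1 scores onto the four quadrants of the matrix."""
    high_digital = digital_capability >= threshold
    high_leadership = leadership_capability >= threshold
    if high_digital and high_leadership:
        return "Digital Masters"  # upper-right quadrant (the book's term)
    if high_digital:
        return "strong digital, weaker leadership capabilities"
    if high_leadership:
        return "strong leadership, weaker digital capabilities"
    return "weak on both dimensions"

print(quadrant(0.8, 0.7))  # -> Digital Masters
print(quadrant(0.8, 0.2))  # -> strong digital, weaker leadership capabilities
```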

By the way: the book has 9 mentions of the term "advertising", 8 of the term "campaign" (just 7 in the proper advertising context) and just one of the expression "digital advertising". I am aware that this is nothing scientific, but this rough count made me think that the research has not led to the discovery of some distinctive way of doing advertising by the digital leaders. It is as if a smart, sensible use of digital advertising is taken for granted, just as a necessary element of a broader framework. In other words, where the digital transformation is in place digital advertising will be a part of it, but simple budget shifts from one channel to another don't make a big difference.

Let's move to the third result. Here we have a very sharp and interesting conclusion, based on the empirical research work combined with the typology created by the authors: "Digital Masters" make more revenue and profit than their competitors.

[…] Digital Masters outperform their peers. Our work indicates that the masters are 26 percent more profitable than their average competitors. They generate 9 percent more revenue with their existing physical capacity and drive more efficiency in their existing products and processes.

Even though Westerman, Bonnet and McAfee are keen to stress that this conclusion indicates a correlation and not a causal factor, it is evident that these are big figures (think about 10% of a billion in revenue). So here the authors are really zooming in on an opportunity, a huge one. Grabbing it is open to everyone – every company that is willing to. There is no need to be based in Silicon Valley, no need to have hundreds of software engineers, no need to have on board some one-of-a-kind maverick pioneer. For sure it will be an endeavor, more or less challenging depending on the starting point and the honesty of your initial self-assessment, but it is something attainable by every company with adequate willingness and the practical means to go forward.
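A quick back-of-the-envelope illustration of the order of magnitude, applying the percentages quoted above to a hypothetical company; the revenue base and the profit margin below are made up for illustration, not figures from the book.

```python
# Illustration only: hypothetical revenue base and margin, not data from the book.
revenue = 1_000_000_000                          # a hypothetical company with 1 billion in revenue
extra_revenue = revenue * 0.09                   # "9 percent more revenue" -> 90 million more
profit_margin = 0.10                             # hypothetical 10% baseline margin
extra_profit = revenue * profit_margin * 0.26    # "26 percent more profitable" -> 26 million more

print(f"extra revenue: {extra_revenue:,.0f}")
print(f"extra profit:  {extra_profit:,.0f}")
```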

Once again, it is not just about a big opportunity. Besides the call to action to grab it, there is another take running through the book. Westerman, Bonnet and McAfee warn readers that they need to get moving anyhow, since the transformation has just begun, and its effects are barely starting to emerge.

We ain’t seen nothin’ yet.

This is not secondary, as said above. Moreover, it highlights a sort of paradox: a relative lack of solid knowledge about the possible negative outcomes of the transformation as depicted by this research. If we accept that at this stage the impact of digital technologies and platforms is only beginning to take shape, and that much stronger changes are to come, then the reason to react is not only the opportunity to have more revenues and more profits (as Digital Masters have), but first and foremost the companies' survival and basic prosperity. So what are the "traditional corporations" that prove the point? Yes, the authors mention Kodak, or cabs ("Uberized", as it has been said), and then? Talking about the standard verticals, or categories that have lost their descriptive power (say "telecommunications", "advertising" or "newspapers"), is of little help, I think. Here again what is badly needed is solid research, well-organized reviews, structured cases, empirical evidence and models. After so many years of debate about the effects of digital technologies, how can one not see the paradox of not having a great pars destruens in the library? If you know it, please tell me where it is, and I'll get it straight away (likely via Amazon Prime).

The Apple Watch comment

It's a very quick one – and one has to write something anyway after such a long hiatus. Even though we're talking about pretty different things, both the Watch and ResearchKit have Apple taking on different verticals (luxury, healthcare), the "traditional" industries that are very much any agency's or consulting firm's client base. In the automotive industry, for example, we're seeing self-driving cars coming both from technology and service platform leaders (say Google, Uber) and from the conventional automotive players. I guess this is a different kind of competition that is worth exploring with clients.

On a general note, I enjoyed the news discussion from the Leo Laporte TWiT tech podcast élite (as usual). Here is the link to the 9th of March edition of Tech News Today.

“Smartphones”: market share & usage data

After every quarterly release, industry analysts, experts and everybody else comment on the latest market share data, based on sales in that timeframe — something a bit misleading if you think about the expression "market share": in fact, these numbers do not tell much about the actual distribution (i.e. platform share in a given period). Look, for example, at the market share of Symbian, RIM and iOS published alongside this FT piece on the Nokia CEO's troubles, in which you have Symbian declining from over 60% in 2006/2007 to slightly above 40% in 2010, RIM moving from less than 10% in 2006 to 20% in 2010, and Apple iOS rising to something like 15% after a slightly higher 2009 peak (sorry for not being precise, but the chart is very small… precious exact figures are missing, of course).

Update: via @tomiahonen I just found a Reuters infographic, based on Strategy Analytics data, that shows the general dynamic very well.

This is not to say that this information is not important: of course it is, 100%, for a number of obvious reasons. But there is a big "but" here, in my opinion: if we want to look at the "user" side of the coin (end-user or business), then discussing smartphone market share makes sense as long as the figures are accompanied by some data on the actual usage of the specific capabilities that make these devices different (supposedly "smart") compared to "dumb" phones: i.e. online application usage, whether through Web apps/mobile sites or native apps. Even in this case, we would still be at a very high level, unless we discuss some sort of activity or product/service category: e.g. search, games, social networks, etc.

To make the point clearer, look e.g. at the chart below, taken from a MocoNews.net post on a recent Pew survey:

In other words: we might well have a relatively small number of iPhones around, but if iPhone users (or Android users, or whoever) are the ones most actively browsing the mobile Web, using and spending on mobile apps, searching and possibly clicking on those paid search ads, etc., then this is what matters most from a business and marketing perspective.
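A rough sketch of what I mean: combining install share with some measure of per-user usage intensity gives a very different picture than install share alone. All numbers below are made-up placeholders, not real market data.

```python
# Illustration only: hypothetical install shares and per-user usage intensities,
# NOT real market data. The point: usage-weighted share can differ a lot from
# install-base share.

install_share = {"Platform A": 0.40, "Platform B": 0.15}              # hypothetical
daily_web_sessions_per_user = {"Platform A": 1.0, "Platform B": 6.0}  # hypothetical

total_usage = sum(install_share[p] * daily_web_sessions_per_user[p] for p in install_share)
usage_share = {p: install_share[p] * daily_web_sessions_per_user[p] / total_usage
               for p in install_share}

for p in install_share:
    print(f"{p}: install share {install_share[p]:.0%}, usage share {usage_share[p]:.0%}")
# Platform B ends up with the larger share of actual mobile Web usage,
# despite its smaller installed base.
```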

Now, data on mobile products/services usage vis-à-vis actual smartphone penetration divided by platform do not seem easily available, at least in the public domain — or am I wrong?

Update (27-7-2010): cf. e.g. these conclusions from a Yankee Group report (Why iPhone matter; premium access only, the following quotation is from the public executive summary): Two-thirds of iPhone owners use the mobile Web daily … Plus, iPhone owners download more apps, are more interested in mobile transactions and conduct more mobile e-commerce than users of other [smartphone platforms I guess — it’s truncated right there!]

PS: I put the quotation marks around "smartphones" in the post title for the same reason: Wikipedia says that a smartphone "allows the user to install and run more advanced applications based on a specific platform" and that smartphones "run complete operating system software providing a platform for application developers". Still, you can use a smartphone in pretty much the same way as a dumb phone, since perhaps one chose it for reasons other than the possibility of using apps, the mobile Web and the like. In short, couldn't this be the case for so many of the Nokia smartphones around (especially in Europe)? Smartphones are not created equal…

In memoriam: William Mitchell

William Mitchell, MIT dean and professor, architect, urbanist and theorist, widely regarded as one of the most prominent thinkers on "smart cities", has passed away; see the official MIT obituary here.

William Mitchell
Photo: Webb Chappel, from the MIT obituary page

Right now a Twitter search shows a flow of related messages. My personal impression is that Mitchell is being remembered by a really diverse group of people, ranging from fellow specialists to a varied crowd of professionals, scholars and students from different disciplines, all sharing an appreciation for his work and intuitions. It's not something that I can prove with numbers, but I feel it's quite right. And I think it's a mark of outstanding intellectual achievement.

Update: Adam Greenfield, author of Everyware, now at Nokia, has a short but intense post in memory of Mitchell: “Bill’s optimism about technology and cities was infectious, even if (like me) you thought of yourself as the kind of person who’d been inoculated by experience against anything as uncritical as everything implied by that word.” There is an upcoming book from Adam on technology, the city and “networked urbanism” titled “The City Is Here For You To Use” (see more on Speedbird, his blog).

I first heard about Mitchell quite late; it was the end of 2004 or the beginning of 2005. I was attending the first public meetings of what then became the network of Living Labs, a mixed formal and informal coalition of various organizations engaged with open innovation (see the site of ENOLL, the European Network of Living Labs). In that context, Mitchell was credited as the one who originally forged the concept at the MIT Media Lab. I especially remember references made by Veli-Pekka Niitamo (Nokia, CKIR Helsinki) and architect/professor Jarmo Suominen. See, for example, this definition reported in a presentation given in Budapest by Niitamo (I can't publish it right away as it carries a copyright notice; likely the document was just shared among meeting participants — I can't remember exactly):

[The Living Lab idea] [O]riginates from the MIT, Boston, Prof William Mitchell, MediaLab and School of Architecture and city planning. 'Living Labs as a research methodology for sensing, prototyping, validating and refining complex solutions in multiple and evolving real life contexts'.

I found the idea quite fascinating. The "living lab" image was very powerful, if anything. Perhaps it might appear as nothing big when one considers the amount of books and scholarly work produced by Mitchell, but I think this kind of concrete imagery is badly needed in the research and innovation discourse. It helps a lot in communicating the vision; it creates the opportunity for more articulate conversations.

At that time I also started following the Living Labs community a bit, and I tried to kick-start an interest group in Milan, but without much success (see the archived page); anyway, I haven't been much involved in the community as such since then, even though I have managed to keep some contacts alive.

Photo source: http://newsoffice.mit.edu/2010/obit-mitchell

Latest “Internet trends” from Mary Meeker

Mobile business and online advertising enthusiasts have welcomed this latest deck from Mary Meeker, perhaps the most famous Wall Street Internet analyst to date (see the Wikipedia bio). I noticed it on the blog of London-based mobile agency Addictive (their weekly Mobile Fix is also worth reading).

The presentation was given at a major industry event in New York just a couple of days ago. I read somewhere that Meeker has often been credited with an outstanding ability to capture big trends early on. So, her takes on the "unprecedented early stage growth" of the mobile Internet are of particular interest for all those concerned with mobile things.

Meeker co-authored a seminal report on the then-emergent Internet industry more than 10 years ago — "The Internet Report". There is a digital version available from the Morgan Stanley website, but it also comes in book form from Amazon. The picture below is from the KPCB site.

Mary Meeker portrait
Mary Meeker (pic from KPCB site: http://www.kpcb.com/partner/mary-meeker)

“Which [mobile] operating system does your future device run?” (RTM survey results)

RTM (Remember the Milk) has published the results of their Mobile Survey, addressed to the RTM user base. With 3,300 respondents recruited only through RTM and with no incentives, I think this is an original and very interesting piece of research, even beyond the scope of mobile RTM evolution (I took the survey too, as an RTM Pro (!) user).

See here the table concerning the question cited in the post title, one of general interest.

Table of RTM users' responses about their expected future device OS

A previous question on the mobile OS currently in use has Apple first and Google second. So, the very short, brutal synthesis of mobile OS evolution could be:

  • iPhone first and Android second, the rest is just fragmentation;
  • then, Android first and iPhone second, same as above, that’s it.

Or is this an oversimplification?

You don’t ask your customers what they want

“Being customer-driven doesn’t mean asking customers what they want and then giving it to them,” says Ranjay Gulati, a professor at the Harvard Business School. “It’s about building a deep awareness of how the customer uses your product.”

via Prototype – Seeing Customers as Partners in Innovation – NYTimes.com.

This is from an article by Mary Tripsas, associate professor in the entrepreneurial management unit at the Harvard Business School; it describes "Customer Innovation Centers", special facilities set up by big companies like 3M. Bruce Nussbaum has a post on it in which he also refers to the discussion raised by a provocative short essay by Donald Norman on the role of technology in radical innovation ("Technology first, needs last"). I won't try to synthesize Norman's argument and the related debate (see e.g. one of the always nice ChittahChatta Quickies by Steve Portigal pointing to an interesting and critical post). But I would like to add my 2 cents here. The quotation above points to a common negative prejudice about design research, far less articulated than Norman's takes. Quite a few design research methods and techniques — or even the entire design research approach (see e.g. the MIT Press reference) — are often misconceived as ways to just extract innovation directly from users' and customers' minds, e.g. by inviting them to dull focus groups in which they are asked "what they want". This is *not* design research but a caricature at best <grin>

Update: if you are interested in the discussion raised by the original essay from Donald Norman, see this other post from Nussbaum and the related comments, including one from Norman himself. En passant, and with all due respect to everyone (the big and famous and all the others), I am a bit puzzled by the almost total absence of explicit philosophical argumentation. E.g., am I wrong, or could the whole discussion also be seen as a renewal of the debate on technological determinism? The comment from Michele Visciola on the relative importance of human needs and their relation to culture points in the same direction from this point of view. Then one could argue that the whole idea of contrasting technology and culture is weird, since technology is a cultural phenomenon — the cultural phenomenon for some, but this leads to wider questions.

Vertigo on paper

One of the most interesting projects I have been working on over the last few months is finally on paper — at least part of it (the download is available from the publications list). For once, there is even a better name than the usual acronym: it is "Vertigo", from Hitchcock's 1958 movie. But the proper meaning of "the sensation of spinning or having one's surroundings spin about them" (Wikipedia) is not irrelevant: the only difference is that the surroundings investigated by the project are media surroundings, or a mix of media and "real world" surroundings. The main goal here is to make possible a more enjoyable and interactive exploration of movies, videos and music (linear media in general) by shaping, following and sharing "media trails" or traces. As reported in the paper, this is an idea well rooted in the early history of hypertext. The work has been done in very close cooperation with Jukka Huhtamaki, researcher at the Hypermedia Lab of Tampere University of Technology, and Renata Guarneri, a former project colleague in MobiLife (with Siemens, one of the main industrial partners in the consortium led by Nokia), now Principal Technologist at CREATE-NET (I am consulting for them on different initiatives), plus several people at various research organizations in Europe. Renata has just presented the paper at Digibiz 2009.

I am very grateful to Jukka, Renata, CREATE-NET and all the others for the opportunity to delve once again into the intriguing subject of bringing interactivity to screen-based media and music, and to the living room context in general.

Vertigo movie poster
Vertigo movie poster (from Wikipedia)

It is now about ten years since the first time I made a serious effort on the topic, by contributing to an essay on TV and interactivity (in a book edited by Laura Tettamanzi and published with the sponsorship of the Italian public broadcaster RAI). Ten years is a long span of time: we have seen the dotcom boom and bust, the social media explosion, the coming of age of 3G, etc. Yet TV and movie watching haven't changed that much — compared to music, say. It is no coincidence that this work started with very inspiring discussions about Last.fm.

Italian translation of "40 years of Design Research" by Nigel Cross

"40 years of Design Research" is a short but very informative piece by Nigel Cross (currently president of the DRS – Design Research Society, professor at the Open University, and author of many books and articles). Originally written as a 2006 conference address, it was then published in Design Research Quarterly (the official DRS publication), where I found it some months ago. I thought it was very well suited for my Design Methodology module on "Philosophy of Design" at NABA Media Design, and I completed the Italian translation before Christmas. Nigel kindly gave his permission to use it in teaching; he also made me smile when he replied to my final thanks, commenting "how much more elegant it seems in Italian!" 😉

CREATE-NET workshop on forthcoming EU research calls

The conference room in Bergamo (picture is mine)

Last week in Bergamo I had the opportunity to attend the two-day 4th technical and funding workshop promoted by CREATE-NET, a dynamic international research institute in Trento; "the focus of CREATE-NET's research is on the Internet of the Future, both in terms of infrastructure and service". I was invited there because of my previous work in FP6 projects (MobiLife and SPICE, both with Neos), my links with industry (I actually extended the invitation to a major Italian publisher) and established contacts with people working there (this time I was also introduced to the CREATE-NET president, professor Imrich Chlamtac).

The workshop was very well organized, and for me it was quite satisfying to join an event like this in Italy for once, instead of Brussels or some capital up in the Nordic region — I love Belgian beer and the Nordic light, but I cannot rush there on my motorbike in 45 minutes 😉 (joking… but the relative rarity of these settings in Italy is an issue; I will not discuss it here anyhow).

As for content, I very much enjoyed the informal exchanges with a few other attendees interested in the "networked media and 3D Internet" research area of the forthcoming 2009 calls (including friends from some of my favourite examples of excellence in European ICT research, like HIIT and Fraunhofer FOKUS). We started talking after a very nice visualization of Last.fm listening data made with Vizster (created by the brilliant Jeffrey Heer and Danah Boyd), shown by a researcher from the Tampere University of Technology Hypermedia Lab; having just seen an overview of the research agenda put forward by NEM, a prominent European and global forum on future media and network technologies, we had an initial but intense chat about possible research proposals at the intersection of media management and consumption, social network visualization and other related topics.