Let’s take a look at ‘best practices’

“According to best practices…”

How many times have we heard this hoary phrase? It can be especially interesting when used as a one-note explanation for doing things a certain way. So, let’s take a step back.

When someone states their position or request because it is “best practice,” the first question that should come to mind is, “According to whom?”

If one must use “best practices” in a sentence, the best practice might be to take ownership of this non-concrete term: “According to what I [understand/have read/have been told/just made up/etc.], this is my take on best practices.”

Don’t attribute it to the ubiquitous “they” or an “industry standard.” Also, be prepared to share the source(s). Owning one’s take on best practices sidesteps a condescending and dismissive tone.

Best practices in any situation are subjective and should remain flexible. Better yet, don’t get lazy and throw around a term that has no meaning or relevance, as there really is no such thing.


For more thought leadership on this, read the Forbes piece “Best Practices – Aren’t,” where Mike Myatt (@mikemyatt) explains, “too much common management wisdom is not wise at all, but instead flawed knowledge based on a misunderstanding or misapplication of ‘best practices’ that often constitutes poor, incomplete or outright obsolete thinking.”

Why People Quit Their Jobs

hbr.org

Imagine that you’re looking at your company-issued smartphone and you notice an e-mail from LinkedIn: “These companies are looking for candidates like you!” You aren’t necessarily searching for a job, but you’re always open to opportunities, so out of curiosity, you click on the link. A few minutes later your boss appears at your desk. “We’ve noticed that you’re spending more time on LinkedIn lately, so I wanted to talk with you about your career and whether you’re happy here,” she says. Uh-oh.

It’s an awkward and Big Brother–ish scenario—and it’s not so far-fetched. Attrition has always been expensive for companies, but in many industries the cost of losing good workers is rising, owing to tight labor markets and the increasingly collaborative nature of jobs. (As work becomes more team-focused, seamlessly plugging in new players is more challenging.) Thus companies are intensifying their efforts to predict which workers are at high risk of leaving so that managers can try to stop them. Tactics range from garden-variety electronic surveillance to sophisticated analyses of employees’ social media lives.

Some of this analytical work is generating fresh insights about what impels employees to quit. In general, people leave their jobs because they don’t like their boss, don’t see opportunities for promotion or growth, or are offered a better gig (and often higher pay); these reasons have held steady for years. New research conducted by CEB, a Washington-based best-practice insight and technology company, looks not just at why workers quit but also at when. “We’ve learned that what really affects people is their sense of how they’re doing compared with other people in their peer group, or with where they thought they would be at a certain point in life,” says Brian Kropp, who heads CEB’s HR practice. “We’ve learned to focus on moments that allow people to make these comparisons.”

Read the rest of the article HERE.

Use big data to create value, not just targeting

Another great article from the folks at HBR (specifically Niraj Dawar). The gist? Targeting provides a short-term advantage; creating value is a long-term one. Read more.


Big data holds out big promises for marketing. Notably, it pledges to answer two of the most vexing questions that have stymied marketers since they started selling: 1) who buys what when and at what price? and 2) can we link what consumers hear, read, and view to what they buy and consume?

Answering these makes marketing more efficient by improving targeting and by identifying and eliminating the famed half of the marketing budget that is wasted. To address these questions, marketers have trained their big-data telescopes at a single point: predicting each customer’s next transaction. In pursuit of this prize marketers strive to paint an ever more detailed portrait of each consumer, memorizing her media preferences, scrutinizing her shopping habits, and cataloging her interests, aspirations and desires. The result is a detailed, high-resolution close-up of each customer that reveals her next move.

But in the rush to uncover and target the next transaction, many industries are quickly coming up against a disquieting reality: Winning the next transaction eventually yields only short term tactical advantage, and it overlooks one big and inevitable outcome. When every competitor becomes equally good at predicting each customer’s next purchase, marketers will inevitably compete away their profits from that marginal transaction. This unwinnable short-term arms race ultimately leads to an equalization of competitors in the medium to long term. There is no sustainable competitive advantage in chasing the next buy.

This is not to say firms should never try to predict and capture the next purchase – but that they can only expect above-average returns from this activity in industries where competitors are lagging and where there are still some rewards to being ahead of the game. In many industries, including travel, insurance, telecoms, music, and even automobiles, we are rapidly closing in on equalization of predictive capabilities across competitors, so there is little lasting competitive advantage to be gained from predicting the next purchase.

To build lasting advantage, marketing programs that leverage big data need to turn to more strategic questions about longer-term customer stickiness, loyalty, and relationships. The questions that need to be asked of big data are not just what will trigger the next purchase, but what will get this customer to remain loyal; not just what price the customer is willing to pay for the next transaction, but what will be the customer’s lifetime value; and not just what will get customers to switch in from a competitor, but what will prevent them from switching out when a competitor offers a better price.

The answers to these more strategic questions reside in using big data differently. Rather than only asking how we can use data to better target customers, we need to ask how big data creates value for customers. That is, we need to shift from asking what big data can do for us, to what it can do for customers.

Big data can help design information to augment products and services, and create entirely new ones. Simple examples include recommendation engines that create value for customers by reducing their search and evaluation costs, as Amazon and Netflix do; or augmenting commodity utilities with customized usage information, as Opower does. More intriguing examples include crowd-sourced data that can give customers answers to important questions such as “what can I learn from other consumers?” or “how do I compare with other consumers?”

A look at startups that create new forms of value using big data is instructive. Opower allows customers to share their utility bills with Facebook friends to determine how they rank in relation to other customers like them. INRIX aggregates traffic data from customers’ mobile phones and other sources to provide real-time traffic reports. Zillow combines information from an array of sources to provide consolidated insight about home attributes and values, competitive properties, and other market characteristics to buyers, sellers, and brokers. These companies are big-data natives. Their success should be a wake-up call to all businesses: Today, there is no business that is not an information business.

Every company should ask three questions to examine how its big data can create customer value:

What types of information will help my customers reduce their costs or risks? Multi-billion-dollar businesses such as Yelp, Zagat, TripAdvisor, Uber, eBay, Netflix, and Amazon crunch quantities of data, including ratings of service providers and sellers, in order to reduce customers’ risk. Currently, these good-bad-ugly ratings provide generic evaluations of sellers on standard scales. But increasingly customers are looking for more specific answers to questions such as “What do customers like me think of this product or service?” Answering such granular questions requires a much deeper understanding of what customers are looking for, and how they see themselves. That is an opportunity for the next generation of big data value creation.

What type of information is currently widely dispersed, but would yield new insight if aggregated? Is there any incidentally produced data (such as keystrokes, or location data) that could be valuable when assembled? InVenture, a fascinating new startup operating in Africa, is turning incidental data on smartphones into credit ratings that allow base-of-the-pyramid customers access to loans and other financial products. In an environment where most of the population has no credit history, and therefore no credit rating, even rudimentary phone usage data serves as a handy proxy (people who organize their contacts with both first and last names are more likely to repay loans).

Is there diversity and variance among my customers such that they will benefit from aggregating others’ data with theirs? For example, a company selling farm inputs (seeds, fertilizer and pesticides) can collect data from farmers with dispersed plots of land to determine which combinations of inputs are optimal under different conditions. Aggregating data from many farms operating under diverse soil, climatic, and environmental conditions can yield much better information about the optimal inputs for each individual farm than any single farmer could obtain from his own farm alone, regardless of how long he had been farming that parcel.

Big data has helped marketers address fundamental questions whose answers have long been out of reach. But the true contribution of big data will reside in creating new forms of value for customers. Only this will allow marketers to turn data into sustainable competitive advantage.


Niraj Dawar is a professor of marketing at the Ivey Business School, Canada. He is the author of TILT: Shifting your Strategy from Products to Customers (Harvard Business Review Press, 2013).

Original POST

Core metrics for measuring marketing’s financial performance

This is an essential marketing white paper from the fine folks over at the Society for Healthcare Strategy & Market Development that applies to any industry (the paper’s focus is on healthcare) – here’s the intro and link.

HAVE YOU BEEN IN THIS MEETING? IT’S BUDGET TIME. Marketing says it is contributing financially to the organization. Finance asks, “How?” After 30 minutes of back and forth, the stalemate ends in less than a draw. No one wins, especially the healthcare system. But even after thirty-some years of contributions to healthcare systems, the marketing profession has yet to develop standardized guidelines for measuring its financial performance. In this time of accelerated accountability, it is a fact that the absence of measurable standards is no longer acceptable—for any discipline. Fortunately, efforts are underway to establish both basic standards and advanced metrics for healthcare marketers. This white paper focuses on efforts to date to achieve both.

For delegation to work, coaching is necessary

Senior leaders want to believe that delegating a task is as easy as flipping a switch. Simply provide clear instructions and you are instantly relieved of responsibility, giving you more time in your schedule.

The allure of delegation is tempting, especially considering how much time it can free up.

That’s the dream. In reality, we all know it almost never works that way. You’re often forced to step in at the last minute to save a botched deliverable. And because you jumped in to save the day, employees don’t have the opportunity to learn. They aren’t left to grapple with the consequences of their actions, and therefore are deprived of the chance to discover creative solutions. What’s more, morale takes a hit — employees begin to believe that no matter what they do, their work isn’t good enough.

Read the rest of this HBR article HERE.

Your late-night emails are hurting your team

This just can’t be shared enough. Credit to HBR.


Around 11 p.m. one night, you realize there’s a key step your team needs to take on a current project. So, you dash off an email to the team members while you’re thinking about it.

No time like the present, right?

Wrong. As a productivity trainer specializing in attention management, I’ve seen over the past decade how after-hours emails speed up corporate cultures — and that, in turn, chips away at creativity, innovation, and true productivity.

If this is a common behavior for you, you’re missing the opportunity to get some distance from work — distance that’s critical to the fresh perspective you need as the leader. And, when the boss is working, the team feels like they should be working.

Think about the message you’d like to send. Do you intend for your staff to reply to you immediately? Or are you just sending the email because you’re thinking about it at the moment, and want to get it done before you forget? If it’s the former, you’re intentionally chaining your employees to the office 24/7. If it’s the latter, you’re unintentionally chaining your employees to the office 24/7. And this isn’t good for you, your employees, or your company culture. Being connected in off-hours during busy times is the sign of a high-performer. Never disconnecting is a sign of a workaholic. And there is a difference.

Regardless of your intent, I’ve found through my experience with hundreds of companies that there are two reasons late-night email habits spread from the boss to her team:

Ambition. If the boss is emailing late at night or on weekends, most employees think a late night response is required — or that they’ll impress you if they respond immediately. Even if just a couple of your employees share this belief, it could spread through your whole team. A casual mention in a meeting, “When we were emailing last night…” is all it takes. After all, everyone is looking for an edge in their career.
Attention. There are lots of people who have no intention of “working” when they aren’t at work. But they have poor attention management skills. They’re so accustomed to multitasking, and so used to constant distractions, that regardless of what else they’re doing, they find their fingers mindlessly tapping the icons on their smartphones that connect them to their emails, texts, and social media. Your late-night communication feeds that bad habit.
Being “always on” hurts results. When employees are constantly monitoring their email after work hours — whether this is due to a fear of missing something from you, or because they are addicted to their devices — they are missing out on essential down time that brains need. Experiments have shown that to deliver our best at work, we require downtime. Time away produces new ideas and fresh insights. But your employees can never disconnect when they’re always reaching for their devices to see if you’ve emailed. Creativity, inspiration, and motivation are your competitive advantage, but they are also depletable resources that need to be recharged. Incidentally, this is also true for you, so it’s worthwhile to examine your own communication habits.

Company leaders can help prevent unhealthy assumptions about email and other communication from taking root.

Be clear about expectations for email and other communications, and set up policies to support a healthy culture that recognizes and values single-tasking, focus, and downtime. Vynamic, a successful healthcare consultancy in Philadelphia, created a policy it calls “zmail,” where email is discouraged between 10pm and 7am during the week, and all day on weekends. The policy doesn’t prevent work during these times, nor does it prohibit communication. If an after-hours message seems necessary, the staff is compelled to assess whether it’s important enough to require a phone call. If employees choose to work during off-hours, zmail discourages them from putting their habits onto others by sending emails during this time; they simply save the messages as drafts to be manually sent later, or they program their email client to automatically send the messages during work hours. This policy creates alignment between the stated belief that downtime is important, and the behaviors of the staff that contribute to the culture.

Also, take a hard look at the attitudes of leaders regarding an always-on work environment. The (often unconscious) belief that more work equals more success is difficult to overcome, but the truth is that this is neither beneficial nor sustainable. Long work hours actually decrease both productivity and engagement. I’ve seen that often, leaders believe theoretically in downtime, but they also want to keep company objectives moving forward — which seems like it requires constant communication.

A frantic environment that includes answering emails at all hours doesn’t make your staff more productive. It just makes them busy and distracted. You base your staff hiring decisions on their knowledge, experience, and unique talents, not how many tasks they can seemingly do at once, or how many emails they can answer in a day.
So, demonstrate and encourage an environment where employees can actually apply that brain power in a meaningful way:

Ditch the phrase “time management” for the more relevant “attention management,” and make training on this crucial skill part of your staff development plan.
Refrain from after-hours communication.
Model and discuss the benefits of presence, by putting away your devices when speaking with your staff, and implementing a “no device” policy in meetings to promote single-tasking and full engagement.
Discourage an always-on environment of distraction that inhibits creative flow by emphasizing the importance of focus, balancing an open floor plan with plenty of quiet spaces, and creating part-time remote work options for high concentration roles, tasks, and projects.
These behaviors will contribute to a higher quality output from yourself and your staff, and a more productive corporate culture.

~Maura Thomas

Original POST

 

Stay skeptical of numbers

I love this post. Originally titled Misleading Types of Graphs For The Media, it not only tears down the repeated claims of video dominance, traditional media reach, and more, it also questions the info we are presented with and often take for granted. Much like Nate Silver discusses in The Signal and the Noise, be sure you are discerning the correct application of metrics and context. Long, but so worth your time. -pw



My friend Avinash Kaushik posted a wonderful article the other day about the importance for analysts of having a skeptical nature, and I absolutely agree with him. Skepticism, along with fact-checking and a strong urge to take a step back to look at things from the larger perspective, is the key trait of anyone working with media strategies.

But I want to expand on his article because there are several types of graphs that I see all the time, each painting a completely misleading picture. And each one of these is dominating the media landscape, and is constantly used in presentations at pretty much all of the big media conferences.

So let’s talk about this.

The curse of the market share graphs

The first truly misleading graph is the one most people use to indicate market share of either their own business, their audiences, or the things that we use to get our publications to the market.

For instance, take a look at this graph:

Here you see two different things/products/whatever and how they changed from 2011 to 2016. So, what conclusions do you get from this?

The conclusion most people come to is that the yellow market has experienced catastrophic decline, while the red market is dominating more and more.

Right? Eh… no. What if I told you that the yellow market hadn’t declined at all? And to prove this, here is the exact same graph but using the raw numbers instead of a percentage.

What’s actually happening here is that the market overall has expanded. The yellow market has experienced only minor growth, while the red market is an entirely new one created on top of the old.

You see how bad this is? The first graph forced you to come to entirely wrong conclusions.
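
To make this concrete, here is a minimal sketch with made-up numbers (these are not the figures behind the charts above): the yellow series grows slightly in absolute terms every single year, yet its share collapses simply because a new red market grows on top of it.

```python
# Hypothetical market volumes (arbitrary units). Illustrative only,
# not the data behind the charts above.
years = [2011, 2012, 2013, 2014, 2015, 2016]
yellow = [100, 102, 104, 105, 106, 108]  # roughly flat, slightly growing
red = [10, 60, 150, 260, 380, 500]       # new market growing on top of the old one

for year, y, r in zip(years, yellow, red):
    total = y + r
    print(f"{year}: yellow {y} ({y / total:.0%} share), red {r} ({r / total:.0%} share)")
```

Plotted as shares, yellow falls from roughly 91% to 18%; plotted as raw volumes, it never declines at all.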

So, one of many places where this is happening is when people talk about the rise of mobile. For instance, people keep talking about how laptop computers are dead, and the graph they use is the one below:

What you see here is the same as before. As a percentage, mobile has been growing rapidly over the past five years, at what looks like the expense of laptops. And if you see this graph (or the similar ones widely circulated by media executives), you may indeed think that laptops are dead.

What makes this graph even worse is when it is backed up by graphs showing volume of sales, where, again, mobile is dominating. This should not come as a surprise to anyone, since we buy a new mobile at a much higher frequency than a laptop… especially today, when we have so many devices to play with.

So, are laptops dying? Is it game over for desktop computing? Nope… not even close.

Because here are the exact same numbers, but this time drawn using their real data instead of as a percentage. And what you see here is that laptop use per person is the same today as it was five years ago.

The growth of personal laptop usage may have peaked, at slightly less than 3 hours per day, but it shows no sign of decline. What has happened instead is that we now have a new mobile market on top of the old one. It’s not killing the laptop; it’s extending the market.

Think about how many times people have told you that mobile is killing laptops over the past few years, both at media conferences and on Twitter. It’s just insane how many have been fooled by looking at percentages instead of the actual numbers.

Don’t be one of those people. Always insist on seeing the real numbers.

There is, however, an even worse example than this. And that’s when we see studies from newspaper associations. Almost all of them are using completely misleading graphs by default. Either because they don’t know any better or, worse, because they are trying to hide the decline that we all know is happening.

The main reason these graphs are so bad is that they are not only based on total percentages, they also leave out critical data.

Think about it like this. Imagine your market was defined by three types of audiences: print, digital, and not reached. Not reached are the people who have stopped reading newspapers as we know them.

Now imagine that we map this change over the past five years; we might see something like this:

What we see here is that the market overall is going up (due to the growth in population), but the share of people who don’t subscribe to newspapers is increasing. For newspapers, their market is in heavy decline, and even though digital is growing, it in no way makes up for the decline.

Sound familiar?

But then the newspaper associations do a study, and instead of looking at the market as a whole, they decide to only look at their remaining market. In other words, they decide to simply ignore all the people who have been lost, and then map the rest as a percentage.

The result is a graph that looks like this:

Now we have the same problem as before. This graph is not only completely useless, but also completely misleading. Now, you no longer see any sign that the market is in trouble, and while digital is growing, it looks like print is still going strong.
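
Here is a minimal sketch of that trick, using hypothetical audience counts (say, millions of people, not Vividata’s actual data): the same numbers are shown once against the whole market and once against “readers only,” after the lost audience has been silently dropped.

```python
# Hypothetical audience counts (e.g. millions of people); not real survey data.
audience = {
    2011: {"print": 60, "digital": 10, "not_reached": 30},
    2016: {"print": 40, "digital": 25, "not_reached": 55},
}

for year, a in audience.items():
    total = sum(a.values())              # the whole market, including the lost audience
    readers = a["print"] + a["digital"]  # what the association's study looks at
    print(f"{year}: whole market -> reached {readers / total:.0%} "
          f"(print {a['print'] / total:.0%}, digital {a['digital'] / total:.0%})")
    print(f"      readers only  -> print {a['print'] / readers:.0%}, "
          f"digital {a['digital'] / readers:.0%}")
```

The whole-market view shows total reach falling from 70% to 54%, while the readers-only view still shows print comfortably “leading” at 62%.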

This is terrible.

One example of this was when the Canadian newspaper measurement agency Vividata published this study:

Just look at this. This graph actually makes it look like newspapers are winning. They are up from 77% reach to 81% reach. And even the magazines are doing fantastically. Sure they are down by a bit, but it’s nowhere close to anything that could threaten their future.

This is great. The Canadian newspaper industry has apparently found a way to keep winning with print. No problem here. Right?

Eh… no. The reality is, of course, that the media industry in Canada is in just as much trouble as newspapers anywhere else in the western world, and that their circulation and advertising revenue are in a terrible state… like what we see here:

I’m not saying that Vividata’s data is wrong. It’s probably an accurate reflection of what they measured. But what I am saying is that they designed the study to look only at things that have no real bearing on the challenges newspapers are faced with.

As they say:

Seventy percent of newspaper readers still read a printed edition daily. That’s down from 90% five years ago. While print remains the leading source for most newspaper readers in Canada today, digital and cross-platform continues to grow.

No. Just no. This is an idiotic way to look at the data.

The future of the newspaper industry is challenged by external factors, so it makes absolutely no sense to do a study that only looks at the internal factors. This is stupidity at its worst, and it has a serious consequence.

When Der Spiegel’s innovation report was leaked last week, one of the key problems it identified was that the newsroom ‘lacked a sense of urgency’. And of course it did. If you are constantly being told print is still leading and digital is growing, you don’t feel any need to change. Things sound like they are going fine.

But this is not the reality.

So, be skeptical about newspaper studies. Almost every one of them is disastrously misleading… and especially so if they are based on percentages. Whenever I see a newspaper study that has percentages in it, my red warning lights begin flashing.

Not understanding what is being studied

Another huge problem with studies about the media industry is that studies measuring one thing are often used to prove something completely different.

One example is this slide from BBC’s Esra Doğramacı at the News:Rewired conference.

I have lost count of how many times I have seen media people use this study in relation to video, and people were absolutely lapping it up at the conference. Several people tweeted that video was the only thing to focus on in the future.

But this is not what that study is saying. The Cisco study has nothing to do with video consumption. It doesn’t tell us anything about whether people actually watch video more than before in relation to the media.

There are a number of reasons for this.

Firstly, the Cisco study is looking at the volume of network traffic going through its routers, for all internet traffic as a whole. It’s not looking at video consumption specifically.

This alone is highly misleading. You might think that an increase in video traffic means people also watch more video, but that’s not necessarily true.

Consider this.

Imagine that it’s 2010, back when wifi was poor and mobile devices were relatively slow, and you look at the consumption pattern of a single person. So, you have a person reading 10 articles, at 2 MB each, spending 2 minutes reading each. Then this person also watches one video, at 360p (again because of slow connections), at 16 MB, spending 6 minutes.

What you get is this:

As you can see, 44% of the data usage and 23% of the time spent go to video; the rest goes to the articles.

Now fast forward to 2016. We now have 4K and amazingly fast wifi connections, so now we have got this:

You see what’s happened here?

In this example, the consumption pattern stayed the same. This person is still only spending 23% of his time watching video, but look at the data usage. It’s now 96% going to video.

And this is what the Cisco study is predicting. It is looking at the size of the video files and it’s projecting how much data that will require in 2019. Cisco is not talking about time spent.
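
The arithmetic behind this example is easy to check. Here is a minimal sketch; the only number added is the assumed size of the 4K video (roughly 480 MB for six minutes), chosen to reproduce the 96% figure above.

```python
def video_shares(n_articles, article_mb, article_min, video_mb, video_min):
    """Return video's share of data transferred and of time spent."""
    data_total = n_articles * article_mb + video_mb
    time_total = n_articles * article_min + video_min
    return video_mb / data_total, video_min / time_total

# 2010: ten 2 MB articles (2 minutes each) plus one 16 MB, 6-minute 360p video
data_share, time_share = video_shares(10, 2, 2, 16, 6)
print(f"2010: video = {data_share:.0%} of data, {time_share:.0%} of time")  # 44% / 23%

# 2016: identical behaviour, but the same 6-minute video in 4K is assumed to be ~480 MB
data_share, time_share = video_shares(10, 2, 2, 480, 6)
print(f"2016: video = {data_share:.0%} of data, {time_share:.0%} of time")  # 96% / 23%
```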

Mind you, I’m not saying that video isn’t growing. We all know that it is. But it’s not 90% of the internet. The idea that many executives have that video in the future will be 90% of people’s time spent is completely misleading.

The problem we have today is that we have no good studies about how much time people actually spend watching video on video sites. All the studies we have seen are either based on video views (which have nothing to do with real consumption time) or on video volume, which includes all the time we spend watching on-demand TV on Netflix, etc.

Neither really tells us anything about how people consume the content from newspapers or magazines.

We also see ‘attention’ studies, but I have yet to see one that really measures this correctly. What you want to measure is when people are really reading an article or really viewing a video. And you want to measure the completion rates for both. The studies I have seen either don’t do this correctly, or only measure one but not the other… leaving us with nothing to compare with.

So, again, be very skeptical about video statistics. What we have today is very likely to be misleading in a big way.

And of course, this isn’t just about video. We see so many examples where studies are looking at one thing, but people think it applies to something else. And, again, the media industry is very bad at this.

Let me give you another example.

Imagine that we wanted to do a study and I asked you: “Do you read newspapers?” What would you answer?

You see, if you were from the older generation, this question could only mean one thing. It meant you were reading a printed collection of stories, within a certain type of focus, in a certain format, delivered on a daily basis.

Or in other words, it meant you were reading something that looks like this:

This was the reality of the old world of media. Everything was sharply defined and separate from each other. A newspaper was very different from a magazine, which was very different from radio, which again was very different from TV.

Asking people if they were reading a newspaper always resulted in a clear and unequivocal answer.

But now look at the media as we know it today. The first problem we have is that our many formats of media are now part of the same mix. When you go to the New York Times, you are just as likely to watch a video as you are to read an article. So, are you reading a newspaper?… or are you watching TV?

Then look at the consumption patterns. In the past, reading a newspaper actually meant sitting down with this package of news and reading or looking over a substantial part of the articles. We don’t do this anymore. Today we read articles via links, which means that nobody is really ‘reading’ a newspaper anymore.

We also see this with how many news sources people are exposed to. In the past, because we were actually reading a single newspaper, the definition of a newspaper was just that one publication. Today, as we are reading the news via a link, we are exposed to hundreds of different newspapers every week, in which we may only read one or two articles.

When Canada’s Vividata says that 81% of the population is ‘reached’ by newspapers, they are not actually wrong. But that reach has nothing in common with how we defined that reach 20 years ago. It isn’t real reach at all.

Reach today is defined in the same way as retail stores measure foot traffic in large malls. Yes, you get a ton of people walking by, but their intent has very little to do with any actual sale, nor is it directed to any specific store. It’s a useless metric.

So what does this have to do with misleading media graphs?

Well, let me show you. Here is a graph from QZ, which is just one of hundreds that make this mistake.

What you see is the old definition of media, in which each format is sharply defined with no overlap. This study would have made sense in the 1970s, but it’s meaningless in 2016.

For one thing, what the heck does ‘internet consumption’ mean? Is that only the consumption that goes to traditional publishers? Or is that general internet use? Does that include blogs, Facebook, Instagram, Snapchat?… does it include the time you spend in Outlook reading emails?

It’s a completely useless definition.

Secondly, when they asked people if they were reading newspapers, did people actually distinguish that as print newspapers, or were they thinking about any exposure they might have had to newspapers in general?

We also have the problem that today’s world is multi-device and multi-platform. People might be watching something on Netflix while using their smartphones. Should that count as two separate activities (adding minutes to the total), or are both really part of the same time (and thus not adding to the total)?

In other words, when QZ claims this study proves that “We now spend more than eight hours a day consuming media,” is that actually true? Or is it more like we are spending five hours, with three of them using a mix of devices?
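
As a quick illustration of that double counting, here is a sketch with hypothetical per-device minutes chosen to match the eight-versus-five-hours scenario above: summing minutes per device counts every overlapping minute twice.

```python
# Hypothetical minutes per device in one day; 'overlap' is time spent on two devices at once.
minutes = {"tv_netflix": 240, "smartphone": 150, "laptop": 90}
overlap = 180  # e.g. phone and laptop use that happens while something plays on the TV

summed = sum(minutes.values())  # how per-device surveys add it up
elapsed = summed - overlap      # unique clock time actually spent

print(f"Summed across devices: {summed / 60:.0f} hours")   # 8 hours
print(f"Actual elapsed time:   {elapsed / 60:.0f} hours")  # 5 hours
```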

Think about what that means in terms of attention, retention, and loyalty. There is a big difference between a focused period of media consumption (which is what we had in the 1970s) and an unfocused period of media consumption, which is what we see with all the micro-moments people have today.

In the past, people were fully aware of what newspaper they were reading. Today, when you click on a link, you often cannot even remember what site it was on two minutes later.

So, this graph tells you absolutely nothing because it’s focusing on the wrong definition of media consumption. It’s assuming that people behave the same way today, and that we still live in a divided world of media scarcity as in the 1970s.

It’s crap.

What we should be doing instead is to measure how people consume media based on what type of moment they have, and what intent that moment carries with it. The format of media is irrelevant. That’s the old world.

In very simplistic terms, we can define this like so:

What you see here are the two major types of moments that we have, micro and macro moments. And within each are many different types of intent. We have those who are merely snacking, those who have a specific need, and those who consume media because of their passions.

Imagine how much better a media study would be if this was how it was measured. Instead of asking ‘how much time do you spend reading newspapers?’, we would ask ‘how much time do you spend snacking on media while also doing something else?’, and ‘how much time do you spend reading articles about something you need to get an answer to?’

And think about the study from Canada, the one showing that 81% of the population is reached by newspapers. What if we instead measured this in terms of moments? How many people do you reach in a low-intent micro-moment, and how many do you reach in a high-intent macro-moment?

You see the difference?

But no study is doing this today, because they are all based on the old definition of media consumption. Even when the data is right, the result is wrong because they are looking at it the wrong way.

This is why it’s so important to stay vigilant and to be a skeptic about the studies that we see. Always question the data. Always question the methodology. Always take that step back and ask if this data really measures what you think it measures.


Original POST

7 great reads for entrepreneurs, business & marketing

With six hours of drive time a week, I began using that time to catch up on reading (or listening, in this case). Here are the best so far.

The Big Short: Inside the Doomsday Machine

See the movie and read the book, or vice versa. Either way, you’re in for a terrific ride. In riveting fashion, Michael Lewis describes and makes sense of the 2008 financial collapse that destroyed almost everything in its path. The real key to this story’s success is Lewis’s attention to the eccentric cast of characters who saw it coming from as far as a decade away.

Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future

It really doesn’t matter what you think of Elon Musk; his story and work are amazing. Musk is one of the few extremely non-risk-averse entrepreneurs out there who also happens to be talented, smart, and not afraid to question, defy, and oftentimes battle the status quo (a LOT like Richard Branson, up next). The biography by Ashlee Vance is honest, compelling, and pulls no punches with Musk’s frequently challenging personality. Take the time to read it and get inspired by one of the biggest thinkers alive.

The Virgin Way: Everything I Know About Leadership

Speaking of non-risk-averse types, let’s talk about Richard Branson. His business and lifestyle motto, “Screw it, let’s do it,” is tempered by his thoughts on the successes and failures in his career. The history alone of how Branson developed into one of the most successful entrepreneurs of all time –after an inauspicious start as a sixteen-year-old dropout with dyslexia– makes one sit up and listen, but it’s his wise and experienced approach to leadership, with an emphasis on people and relationships and less on formality, that makes the book so valuable. Another book by Branson, Losing My Virginity: How I Survived, Had Fun, and Made a Fortune Doing Business My Way, gets a favorable mention here for the additional and very entertaining details on how Branson went from magazine publisher to mail-order record business to recording studio and label to an airline, along with his (ad)ventures in just about every product imaginable. Virgin Cola anyone?

Outliers: The Story of Success

Malcolm Gladwell has his critics, but I appreciate his focus and interest in areas I find fascinating. I have been a fan of his since The Tipping Point and Blink, both of which made me reexamine a few ideas I had about the world and reshaped a lot of the thinking I bring into my daily work. Outliers explores how people become extraordinary (or at least successful in their chosen fields), and it isn’t always how you might think.

The Signal and the Noise: Why So Many Predictions Fail-But Some Don’t

The founder and editor of FiveThirtyEight.com, Nate Silver gained national celebrity status after his near-exact predictions of the 2008 and 2012 elections. Here he examines the difficulty of discerning what is relevant in the statistics collected and what may be simply unnecessary and sometimes misleading information. Anyone interested in analytics, statistics, prediction, numbers, science, economics, and “seeking truth from data” will be fascinated.

Thinking Fast and Slow

An intriguing book by Daniel Kahneman, a psychologist and winner of the Nobel Prize in Economics, that examines the way our brains process information. As I listened to this, I couldn’t help but wish I were reading it instead, so I could do some of the exercises the book asks you to do to work through its examples. Besides that, a very thoughtful and revealing view of how our perspective on the world, events, environment, etc. is easily skewed.

The 48 Laws of Power

I’m almost embarrassed to admit I read this one, as its Machiavellian approach makes you feel as though you’re preparing for a role on Game of Thrones. In its most basic form, it serves as a primer on how to manipulate your way into getting what you want. If you work in an environment of snakes –and this book assures you everyone (including yourself) is a snake– then this is the book to read. However, there is some good advice to be found in its pages, and most everyone in business has read it, along with Sun Tzu’s The Art of War. You might as well arm yourself so you can see ’em coming.

That’s all for now. I’ll update as I come across new books worthy of note. Please leave any suggestions for books that you have particularly enjoyed in the comments below!

~Paul