Saturday, November 3, 2012

Bulk SMS Supplier Launches Drip Campaigns

I classify my email messaging programs into 3 categories:

promotional blasts (1 message out to a whole list or portion of a list)
transactional messages (administrative messages whose primary purpose is not promotional)
drip campaigns (automated messages that are sent repeatedly to specific cohorts as they meet the qualifying criteria)

A super-easy-to-use bulk SMS supplier is currently beta testing drip campaigns for users of its text messaging services.

Currently, the configuration options are limited, with triggers based on time elapsed since join date. I've got my fingers crossed that they'll introduce finer control and more trigger options.
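To make the trigger idea concrete, here is a minimal sketch of time-since-join drip logic in Python. The offsets, names, and dates below are my own illustration, not the vendor's API.

```python
from datetime import date, timedelta

# Hypothetical schedule: send a drip message 1, 7, and 30 days after join.
DRIP_OFFSETS = [1, 7, 30]

def messages_due(join_date, today):
    """Return the offsets whose scheduled send date falls on `today`."""
    return [d for d in DRIP_OFFSETS if join_date + timedelta(days=d) == today]

# A subscriber who joined on Oct 27 is due the day-7 message on Nov 3.
print(messages_due(date(2012, 10, 27), date(2012, 11, 3)))  # [7]
```

A real system would run a check like this once a day per subscriber cohort; finer control would mean triggering off behavior (purchases, visits) rather than the calendar alone.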

In addition to drip campaigns, the service offers both shared short-code and web-form list-building options. You can quickly set up an account with them and test the service for free at

Tuesday, October 2, 2012

How Do You Know When You've Lost a Customer?

Your customers don't typically bid you goodbye before defecting to a competitor.

 How do you know when they're gone? Is there a way to predict that they might be headed for the door in time to win them back? 

We worked on developing an approach based on a Poisson distribution.

Basically, the model looks at your historical customer data and maps out: given the number of days since a customer's last visit or purchase, what is the likelihood that they will ever return?

We looked at several years of data. We mapped out: of all the people who visited on any single day, how many returned the next day? Of the people whose last visit was 1 day ago, how many returned the next day? And so on.

 As you might expect, the longer it had been since the last time our prospects had been to the site, the less likely that they would return the next day.

 Based on this model we were able to determine a target date when our customers passed a critical threshold -- they were more likely NEVER to return than they were to return. Not a perfect model, of course, but a line in the sand that we could use to begin optimizing our way forward.
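As a rough illustration of the threshold idea (with made-up numbers, not our actual data), you can scan a table of empirical return probabilities for the first point where the chance of ever returning drops below 50%:

```python
# Hypothetical empirical table (NOT our real data): for customers whose
# last visit was N days ago, the observed share who ever returned.
p_return = {0: 0.95, 7: 0.80, 14: 0.65, 21: 0.55, 30: 0.45, 60: 0.20}

def winback_threshold(probs):
    """First days-since-last-visit value at which a customer is more
    likely never to return than to return (P(return) < 0.5)."""
    for days in sorted(probs):
        if probs[days] < 0.5:
            return days
    raise ValueError("no point in the data crosses the 50% line")

print(winback_threshold(p_return))  # 30 -> trigger the winback message here
```

In practice you would fit a smooth curve (the Poisson-based model described above) rather than step through raw buckets, but the output is the same: a single day count that arms the automated winback message.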

 We used it to develop a customer winback program targeted only to those customers who had reached the critical threshold -- more than 50% chance that they were gone for good. Preliminary results showed a 30% lift in customer return rates.

We worked to lock in these gains by setting up an automated communication program that messages customers as soon as they hit the threshold, and set it on autopilot.

Next step -- we'll split-test the timing to see if we can improve on those 30% gains.

 How does your company approach the challenge of figuring out when a customer has moved on to greener pastures?

Sunday, March 27, 2011

The Purpose of the Control in Split-Testing Email Campaigns

Recently, I’ve been training a team member to take over responsibility for managing email marketing. She has a degree in communications and is a really strong writer. She took one look at the automated messages that had been set up in the existing campaigns, and cringed.

She then proceeded to write new, much better copy to replace the old.
She wrote better subject lines, better headlines, better ‘calls to action’.

When we met and she showed me her plans, I complimented her on the great work she’d done. But then I recommended against replacing the old messages with the new ones.
When she asked why, I answered with a question of my own.

I asked her: “if you launch this change, and get a lift in response and sales, what will you do next?”

She looked confused. “Next?”

I asked her “What happens if we double our response from these campaigns? What will we do next?”

From her frustrated expression, I could see that she wanted to say something like “celebrate?”, but felt instinctively that this wasn’t the answer I was looking for.

Her plan to roll out this new copy implicitly assumed that:
1. Her new copy was better (which it almost undoubtedly was, but even when I’m sure, I always look for proof)
2. Once we rolled out the new copy, we would have ‘fixed the problem’ represented by the old copy
3. Her work on this project could be crossed off her ‘to do’ list.

“How will we get to a 400% increase?” was my next question.

This is the best way I've found to continually move from good to better: secure the ground you’ve gained, and use it as a jumping-off point to move forward again and again and again. To do this, you need to establish, and rigorously maintain, a ‘control’.

And when does it end? Never. As they say in the business “Always Be Testing”.

The purpose of the "Control"

The “control” is your currently best-performing piece of creative for any single purpose. If you currently send out 3 different email messages to your customers like:
1. Welcome to my online store
2. Thanks for buying
3. Please buy from me again.

These three messages are your three ‘controls’.

If you have only one message that you’re using, and it’s performing terribly, and you know exactly what you need to do to fix it – it’s still your “control”. Everything else you do should be measured against it. You should not abandon it without making your new message challenge and defeat it. If your new message outperforms, then it becomes the new control.

Generally, you conduct a split test by sending the ‘control’ to a randomly selected portion of your recipients, and the ‘challenger’ to the remainder. Typically, you would split your list into 2 equal groups, then compare the response of one against the other, and whichever performs better becomes the new ‘control’, ready to take on new challengers.
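A minimal sketch of that 50/50 split in Python (the helper names and numbers are my own illustration; any real email platform has its own mechanics for this):

```python
import random

def assign_groups(recipients, seed=42):
    """Shuffle the recipient list and split it into two equal halves:
    (control, challenger). Seeded here only for reproducibility."""
    rng = random.Random(seed)
    pool = list(recipients)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

def response_rate(responses, group_size):
    """Fraction of a group that responded."""
    return responses / group_size if group_size else 0.0

# Split 1,000 subscriber ids 50/50; send the control message to one half
# and the challenger to the other, then compare observed response rates.
control, challenger = assign_groups(range(1000))
print(len(control), len(challenger))  # 500 500
```

Whichever message produces the higher response rate becomes the new control. With small lists, also check that the difference is large enough to be more than noise before crowning a winner.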

Choosing What to Test

There’s a reason why we were still running admittedly ‘awful’ email copy when my colleague was handed the assignment of taking over managing our email marketing. The reason: I was testing something I considered far more critical than the content of the message, and didn’t want to cloud the results, even though the existing copy ALSO made me cringe.

I was testing the timing of the message.

When I took on this project, I found that we were automatically sending out a reminder message to new customers encouraging them to make a purchase 30 days after they opened an account with us. I suspected that after 30 days, many of our customers might have forgotten they’d opened the account, and that we could boost response simply by mailing them sooner.

I started by cutting the time in half: a 30/15-day test. When that proved successful, I cut the time in half again to 15/7. And finally, when the 7-day test won, I ran a 7/3-day test. Currently the 3-day message is my control. At that point, I felt intuitively that it was time to start testing other elements.
Here are a few suggestions on elements to test:
Subject line
Position of links
Time-limited versus open-ended offers
Personalized versus not

Monday, November 16, 2009

Predictive Analytics...Maybe the Models are More Accurate than I Imagined

I attended an Emerging Markets conference last week alongside a group of entrepreneurs, libertarians, and disheartened taxpayers from the first world, looking for ways to escape the long grasping arm of the tax man, the encroachment of inflation and the total destruction of their wealth.

Bill Bonner, in his opening remarks, complained to us that economics is a dismal science. He shared a joke about 4 economists who get lost in the woods, and sit down to calculate their whereabouts.

The punchline goes...."you see the second mountaintop north of us across the valley? Well, we're there."

Reminded me of a lot of marketing budgets I've been called in to assess over the years. The inputs look sort of right...the outputs are believable as long as you don't look at them too hard, or compare them to reality.

It also reminded me of another joke about an economist and an engineer stuck on a desert island with a crate of canned beans.

After the engineer loses an eye and the contents of a precious can of beans in a failed thermodynamics experiment over a roaring campfire, the economist leaps to his feet. He proclaims he has the problem licked...

"First off, assume a can opener..."

I've spent the last few weeks reading up on asset allocation for the purposes of risk management, partly for work and partly because predictive models interest me.

The prevailing theory speaks of an optimal risk curve, a theoretical line along which the maximum return is achieved for each level of risk. This formula supposedly automates the process of selecting the best assortment of investments for each individual according to their risk tolerance.

The problem with this, of course, is that someone -- a person -- inputs the assumed level of risk for each investment, or investment class into the model. The key input.

Which of course assumes that the person inputting actually knows the level of risk involved in an investment.

"assume a can opener....."

So what you end up with is the clear illustration of a concept with little utility. With worse than little utility. With awesome destructive power.

This model formalizes someone's 'educated guesses' and gives them the weight of prognostication via clever formatting. The individual investor is seduced into turning over their hard-earned funds under the guise of hard science.
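A toy two-asset example makes the point concrete. The formula below is the standard textbook minimum-variance weight, not any particular vendor's model, and the volatility numbers are invented; the only thing to notice is that the "optimal" answer is entirely driven by the assumed inputs.

```python
def min_variance_weight(sigma1, sigma2, rho):
    """Weight on asset 1 in the two-asset minimum-variance portfolio.

    sigma1, sigma2 are the ASSUMED volatilities and rho the ASSUMED
    correlation -- the human-supplied guesses discussed above.
    """
    cov = rho * sigma1 * sigma2
    return (sigma2**2 - cov) / (sigma1**2 + sigma2**2 - 2 * cov)

# Assume stocks (asset 1) are twice as volatile as bonds (asset 2):
print(min_variance_weight(0.20, 0.10, 0.0))  # ~0.2, i.e. 20% stocks
# Swap the assumed risks and the "optimal" allocation flips:
print(min_variance_weight(0.10, 0.20, 0.0))  # ~0.8, i.e. 80% stocks
```

Same formula, same rigor, opposite recommendation. The math is airtight; the can opener is still assumed.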

Hang on...I guess it really is an extremely effective and accurate model....

Effective at separating the investor from his money.

Which brought me back to thinking about how accurate predictive analytics models are in predicting -- and altering -- buyer behaviour. I guess it depends on which buyer's behaviour you seek to alter. The online shopper, or the marketing exec thinking about buying the product?

Thursday, September 18, 2008

Brand Awareness is a Metric, NOT a Goal

OK, just a quick one. Here it is, 1:00 in the morning, and I have to write this because it's on my mind. I'll regret it when the alarm goes off at 6:something.

I read a whitepaper recently, and it started off by making the critical point that in order to design metrics, you must be clear about your objectives. OK we all agree.

But then this same document goes on to outline brand awareness as an objective.

Now at first, this didn't jump out at me.

But has brand awareness ever been a business objective? Do people go into business to maximize brand awareness? Or is brand awareness a metric: a quantified measure of your effectiveness at reaching your target audience and disposing them to consume your product?

Monday, August 18, 2008

The Acid Test for Custom Reports

OK, I'm speaking to the people who work in small entrepreneurial companies here. Those of us who choose to work in a dynamic entrepreneurial setting do it for the rush of immediacy: the ability to move quickly, to stay light on our feet, and to make real change.

Lots of times, in performance meetings, someone at the table (ok, it's usually me) will pipe up and say "How hard would it be to just find out ..." It's a theoretical discussion only.

Someone has a new report request.

Maybe it's something they've thought through, and it's going to revolutionize the way we do our business.

Maybe it's just idle curiosity.

The tech people at the table immediately tend to jump on the "How do we do this?" bandwagon, totally bypassing the "Is this something we should be allocating valuable resources to?" train.

And usually, at first blush, the answer looks and feels like ...pretty quickly.

But how often is that true?

Sometimes, what happens at that point, is someone pipes up and "authorizes" the report, based on the assumption that someone else can whip it up over sandwiches today at lunch.

Then that someone else spends the whole afternoon on it, because they ran into a few unforeseen stumbling blocks. Oh, and by the way, they have a couple of questions about how the report should behave. Now follow-up meetings are being scheduled, and the person who first requested the report is piling on additional feature requests, unchecked. And the dominoes start to topple....

The project has outgrown the petri dish, and is limping hideously through the corridors of your place of work, wreaking havoc. People have forgotten its original, innocuous status as a "theoretical discussion". The hours invested in it have infused the project with the value of human sweat.

Let's take a peek a few weeks down the road and ask how much time the new report is saving. A couple of possibilities (terribly overgeneralized, of course):

1. The person who first requested the report has a new spring in her step, and has lost the haunted look that comes from too many hours manually calculating stuff that is better done by a computer.

This is the result you're going for. Congratulations.

That young genius is probably going to start poring over the reports and come up with a great recommendation that will materially alter how you do business, make you a gazillion dollars, and pay for the implementation 40 times over before next week's meeting.

2. The person who requested the report has relegated it to the pile of "stuff I don't need to babysit anymore." Ask her how it's progressing, and she'll pull a report for you while you wait ...and oftentimes discover that the data is being pulled incorrectly, or the report is garbled, or not in a format that is useful to anyone. By virtue of having been authorized in the first place, the project has been elevated to the status of "stuff worth doing properly". At this point, countless additional hours may be sunk into the pursuit before any kind of cost:benefit analysis happens.

3. The person who first requested the report is completely buried. The time required to analyze the implications of the new report is consuming them, and they no longer seem to have time to stop and think about how much benefit the new information affords them (factoring in all of the data exceptions, annotating the performance anomalies that affect the data output, and closing the gap between the data and the information that data stands proxy for).

The Three Things That Should ALWAYS Happen Before Anyone Builds a Custom Report

1. Write a SPEC. Even for a little thing. This process is great for shining a light on the holes that are so easy to gloss over in discussion. We sometimes want to rush past this step. We know exactly what we want. We think we have expressed it clearly. We think there is no room for error.

I'm married to a system architect. I've tried asking him to just whip me up a report (I am a self-confessed data junkie). Without a written spec, he refuses, even when I assure him it's quick and easy. Even when I say pretty please or bat my eyelashes.

Try this: if it's a simple report, it will only take you a few minutes to write out the functional spec. Describe all the inputs, the data sources, and all the possible outputs, depending on what inputs the report receives. This will bring you a lot closer to a shared understanding with the people who are building the report.

If having the report is not worth the time it takes to properly specify how it works, you probably don't need it.
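For illustration only, a functional spec for a simple report might look something like this. Every name and table below is hypothetical; the point is the shape, not the content:

```text
Report: Daily New-Account Purchase Summary  (hypothetical example)

Inputs:
  - date_range: start and end dates (default: last 7 days)

Data sources:
  - accounts table (account_id, created_at)
  - orders table  (order_id, account_id, order_total, created_at)

Outputs:
  - one row per day: new accounts opened, first purchases made,
    conversion rate (first purchases / new accounts)

Edge cases:
  - a day with zero new accounts reports a conversion rate of 0,
    not an error

Follow-up:
  - requested by: ___ ; review whether the report is still used
    and still correct after 30 days
```

Even a half-page like this forces the questions (defaults, edge cases, ownership) that otherwise surface mid-build as "a couple of questions" and a chain of follow-up meetings.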

2. Remember that data is only a proxy for information. It is imperfect, and prone to misinterpretation. It can act as a red herring, or mask important trends. Be sure that everyone involved understands how the data is being calculated. Call the data what it is, not what it is supposed to represent.

Create a data glossary of terms.

3. Things automated are easily forgotten. Ensure you build in a mechanism for following up on the results of the report.

Final Notes

There is a lot of room for assumptions when people toss an idea about over the boardroom table. But programming a custom report is an exact science. Many a programmer has misinterpreted the requirement, and built a report that does not meet the need. And often the report that DOES meet the need is a LOT harder to build.

Thursday, June 19, 2008

Multivariate Test Conversion Page

Thanks so much for participating in my multivariate test....

Or, if you came straight here from somewhere else, and are willing to take 2 seconds to participate, I'd appreciate it. Just click the link.