Friday, April 27, 2012

A scary thought about marketing attribution models

I was reading through Google's white paper on marketing attribution and a scary thought occurred to me, which I tweeted. I'll self-referentially quote my tweet here.

So what do I mean by this? Well, the paper talks about the different attribution models various companies use for click tracking, including first click, last click, time decay, linear, etc.

That's a lot of choices for a model without a clear way of deciding what's right. How to decide?

Well, the first step would seem to be to mock up some quick models comparing the results of the different types. If you're lucky, the results will be pretty close to each other. That is, your model will be insensitive to your choice. In that case it doesn't matter which type you choose, so pick whichever one is the least amount of work and move on to the next project. If someone wants to argue over the choice, let them win, because the decision is literally not worth the time of the argument.
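Here's a quick sketch of what such a mock-up comparison might look like, in Python. The channel names, click paths, and the time-decay half-life are all invented for illustration; real paths would come from your click-tracking data.

```python
# Each attribution model takes a converting click path (a list of channels,
# ordered from first touch to last) and returns each channel's share of
# credit for that one conversion. Shares always sum to 1.

def first_click(path):
    # All credit to the first touch.
    return {path[0]: 1.0}

def last_click(path):
    # All credit to the last touch.
    return {path[-1]: 1.0}

def linear(path):
    # Equal credit to every touch.
    share = 1.0 / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(path, half_life=2.0):
    # Touches closer to the conversion get exponentially more credit.
    # half_life is an assumed parameter: credit halves every 2 positions
    # away from the conversion.
    weights = [0.5 ** ((len(path) - 1 - i) / half_life)
               for i in range(len(path))]
    total = sum(weights)
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# Hypothetical converting paths, one per signup.
paths = [
    ["organic unbranded search", "email", "paid search"],
    ["organic unbranded search", "direct"],
    ["display", "organic unbranded search", "direct"],
]

models = [("first click", first_click), ("last click", last_click),
          ("linear", linear), ("time decay", time_decay)]

for name, model in models:
    totals = {}
    for path in paths:
        for channel, c in model(path).items():
            totals[channel] = totals.get(channel, 0.0) + c
    print(name, {ch: round(c, 2) for ch, c in sorted(totals.items())})
```

If the per-channel totals land close together across the four models, you're in the lucky case; if a channel's total swings widely between models, you've hit the problem below.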

But what if you are unlucky? What if under a linear model organic unbranded search gets credit for 20,000 signups, but under first click it gets credit for only 12,000? Then you're in serious trouble, because while it seems like you are deciding between linear and first click, you are actually deciding how many signups to attribute to organic unbranded search. You might as well drop the guise of a model and just write down how many attributions you feel each channel should get. After all, that is what is happening now. You are not modeling; you are deciding.

Fortunately, there actually is a bit of an end run around this problem: test which model actually lines up with your data. That is, make the decision non-subjective again.

I have a few ideas on how to do this which I will talk about in a future post. But the point is this: if you can't test the results of your model against your data (in a simple way), then you are probably better off not making a model.

Or to put it more bluntly: if you don't have a test then what you've built isn't actually a model. Not one that matters at least.

1 comment:

Matthew G P Coe said...

A lot of what you're saying is very similar to test-driven development as practiced in Agile development. If you don't have a test for your code, then how can you be sure it does what you think it does? Especially if you want to improve it--how do you know your "improvement" didn't actually break it?

Very cool thoughts. You should come back to Toronto and work for us. Or at least come back to Toronto. :)