There is significant debate in the data science community around the most important ingredients for attaining accurate results from predictive models. Some claim that it’s all about the quality and/or quantity of data – that you need a data set of a certain size (typically large) and a particular quality (typically very good) in order to get meaningful outputs. Others focus more on the models themselves, debating the merits of different single models – deep learning, gradient boosting machines, Gaussian processes, etc. – versus a combined approach such as an ensemble method.
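The single-model-versus-ensemble contrast can be sketched in a few lines. This is purely illustrative: the three stand-in "models" below are hypothetical functions, not trained learners, and simple averaging is only one of many ways to combine models.

```python
# Illustrative sketch of an ensemble: average the predictions of
# several single models. The models here are hypothetical stand-ins.

def model_linear(x):
    # stand-in for a trained linear model
    return 2.0 * x + 1.0

def model_quadratic(x):
    # stand-in for a trained nonlinear model
    return 0.5 * x ** 2

def model_baseline(x):
    # stand-in for a naive baseline model
    return 3.0

def ensemble_predict(models, x):
    """Combine the models by averaging their predictions."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

models = [model_linear, model_quadratic, model_baseline]
print(ensemble_predict(models, 4.0))  # average of 9.0, 8.0, and 3.0
```

The intuition is that each individual model makes different errors, so a combination can be more robust than any one model on its own.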
Alison Lohse, Co-founder and COO at Conversion Logic, outlines how attribution is a data-modeling exercise and how validation takes marketers a step closer to better measurement.
There is some great discussion coming out of Australia in the wake of ADMA Data Day. It’s always fun to see what other industry experts and professionals are saying about the marketing attribution space, but even better when it’s something I so wholeheartedly agree with.
Marketing attribution is a relatively simple concept supported by relatively complex data science. The idea of attributing an action or conversion to a particular activity or source intuitively makes sense; it is, in fact, what pretty much every marketer worth his or her salt has been trying to do since the beginning of time. It’s the how that gets tricky.
I’m calling it: Black box attribution is officially on its way out. It’s no longer necessary to blindly trust that your attribution platform will magically turn data into meaningful insights. It’s time for the data, and the insights, to escape the black box and join their technology friends in the light.