20/12/2016
It's been more than sixty years since renowned scientists and mathematicians converged on Dartmouth College for the Summer Research Project on Artificial Intelligence, the massive brainstorming session that gave the field its name.
The non-apocalyptic promise of machine learning is that technology will know what we want, need or intend to say before we say it. Examples you may already be familiar with are Facebook tagging your friends before you do, your phone suggesting words that match your writing style, and Tagxit assigning tags to your pictures before you do. The last is the newest arrival in the machine learning landscape: an app that lets users categorize and tag their pictures any way they want, much as we use hashtags. The difference is that these tags serve as a point of inference for the system, correlating the qualitative and quantitative sentiments a user attaches to the experience captured on camera. Unlike Snapchat, the app doesn't let users make or tag videos, focusing on pictures for now.
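Tagxit hasn't published how its system works, but the general technique described above is well established: treat each user-supplied tag as a weak label and train a classifier to predict tags from image features. Here is a minimal sketch, assuming the photos have already been turned into embedding vectors; the feature dimensions, tag names and data are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in image embeddings: in practice these would come from a
# convolutional network; here they are random vectors for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

# Hypothetical user-assigned tags acting as weak labels.
tags = ["beach", "food", "sunset", "friends"]
y = rng.integers(0, len(tags), size=200)

# Train a simple classifier to predict a tag from image features.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Suggest a tag for a new, untagged photo.
new_photo = rng.normal(size=(1, 64))
print("suggested tag:", tags[clf.predict(new_photo)[0]])
```

With enough tagged photos, the same model can suggest a tag for a new picture before the user types one, which is exactly the behaviour described above.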
Given its popularity and daily engagement, Tagxit has the potential to refine an algorithm that can read media and decipher its meaning without eventual human intervention. Removing the task of typing a command may kill that job on the consumer side, but what about the corporate side? Hong Kong-based venture firm Deep Knowledge Ventures once employed a floor full of analysts to assess the viability of investment opportunities and market data. Today it uses VITAL, a machine learning algorithm, to do the same task with deeper and faster document analysis. The main barriers to this becoming a global trend are the investment such technology requires and the algorithm's inability to weigh qualitative factors such as sentiment.
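Deep Knowledge Ventures hasn't disclosed how VITAL works, so the sketch below only illustrates one common approach to faster document analysis: scoring pitch documents against an investment thesis by TF-IDF similarity. The thesis, pitch names and texts are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented investment thesis and pitch summaries, for illustration only.
thesis = "longevity biotech drug discovery machine learning clinical data"
pitches = {
    "pitch_a": "mobile gaming studio seeking seed funding for casual titles",
    "pitch_b": "drug discovery platform applying machine learning to clinical data",
}

# Vectorize the thesis and the pitches, then rank pitches by similarity.
matrix = TfidfVectorizer().fit_transform([thesis] + list(pitches.values()))
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for name, score in zip(pitches, scores):
    print(f"{name}: {score:.2f}")
```

Note what such a score cannot capture: the founder's credibility, market sentiment, timing. That gap is precisely the qualitative judgment mentioned above.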
Facebook's "reactions" feature, though not explicitly marketed as a means to an AI end, can play a huge role in guiding brand and agency marketers toward understanding which emotions drive decision making. For instance, fear of an impending event could drive a consumer towards life insurance, but so could the promise of an investment windfall. Identifying the emotional drivers behind decisions, and the way visual media make digital consumers feel, has always been a challenge for new-age marketers. And unlike VITAL, Tagxit and Facebook reactions exist to augment human decision making, not replace it.
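One simple way a marketer could put reactions to work is to turn raw counts into a per-post emotion profile and compare creatives. A minimal sketch, using reaction names that mirror Facebook's set but with invented numbers:

```python
# Hypothetical reaction counts for two ad creatives; the reaction names
# mirror Facebook's set, but the numbers are invented for illustration.
posts = {
    "life_insurance_ad": {"like": 120, "love": 15, "wow": 30, "sad": 80, "angry": 10},
    "investment_ad":     {"like": 200, "love": 90, "wow": 60, "sad": 5,  "angry": 2},
}

def emotion_profile(reactions):
    """Normalize raw reaction counts into each emotion's share of the total."""
    total = sum(reactions.values())
    return {emotion: round(count / total, 2) for emotion, count in reactions.items()}

for post, reactions in posts.items():
    print(post, emotion_profile(reactions))
```

Comparing profiles like these across creatives is one way to see whether fear or aspiration is doing the persuading, per the insurance example above.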
One of the best examples of this comes from Salesforce's Einstein, a godsend for marketers fixated on raising customer LTV, particularly in eCommerce. According to Salesforce, the update will automatically customize "for every single customer, and it will learn, self-tune, and get smarter with every interaction and additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks."
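Salesforce doesn't spell out how Einstein picks a "best next action," so the following is only a minimal sketch of the underlying idea: score each candidate action by the customer's predicted propensity to respond times the action's business value, then recommend the winner. All names and numbers are invented:

```python
def next_best_action(propensities, action_values):
    """Recommend the action with the highest expected value:
    predicted propensity to respond x value of the action if taken."""
    return max(action_values, key=lambda a: propensities[a] * action_values[a])

# Hypothetical per-customer response propensities and per-action values.
customer = {"email_discount": 0.40, "retargeting_ad": 0.25, "loyalty_invite": 0.60}
values   = {"email_discount": 12.0, "retargeting_ad": 30.0, "loyalty_invite": 9.0}

print(next_best_action(customer, values))  # retargeting_ad (0.25 * 30 = 7.5)
```

Note that the highest-propensity action isn't necessarily the recommendation; weighing propensity against value is what makes the recommendation "intelligent" rather than merely reactive.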
So in essence this will do for social marketers what C2 did for email marketers, and neither requires the eradication of jobs; rather, both insist on an upgrade not only of skills but of speed.