The Role of Big Data & Data Science in Today’s Marketing World

Back in the day, marketing was a top-down, creative-led process. We call that "Mass Marketing":
a few brilliant minds would get together, come up with slogans and phrases, and push them down
to the audience hoping they would resonate. It's creative, but it's essentially guesswork.

Marketers used focus groups to test an idea first, but that was not enough. People who look similar
from the outside often want, and respond to, completely different things.

Let’s look at a few great failures from back in the day.

In 1984, Pepsi’s market share was gradually increasing, and if the trend continued it would overtake
Coca-Cola within a few years. So Coca-Cola conducted market research and concluded that it was the “taste”
that was reducing Coke’s popularity. They developed a new formula, sweeter than both the
old Coke and Pepsi, and launched it in 1985 while discontinuing the old Coke.

It is remembered as one of the costliest marketing mistakes in history, drawing backlash from consumers and the media alike.

So what went wrong?

Brands are more than lists of physical characteristics. People’s brains respond to the reassurance and sense of belonging associated with a brand. In the USA, Coca-Cola carries symbolic meaning and is seen by many consumers as a cultural icon, and most customers prefer tradition and stability over novelty. Coca-Cola focused on just one attribute, taste, and customers felt a sense of loss when the old Coke was discontinued.

Let’s look at another failed attempt, this time by Coca-Cola’s rival. In 2017, the soda giant Pepsi released an ad featuring TV star and model Kendall Jenner.

The ad showed a street protest that Kendall Jenner joins, defusing the tension between protesters and police by handing a Pepsi to a police officer. It triggered a firestorm of anger and outrage.

What went wrong?

People felt it trivialized serious topics like racism, police violence, and Black Lives Matter. People protest because they disagree with something, or because they are outraged, worried, or even scared. The ad made protesting look like a hip thing young people do for fun, so it came across as incredibly insensitive.

This kind of failure can be catastrophic. By the time a brand realizes things have gone wrong, its name is already damaged, and competitors will likely mock it and use the opportunity to grow their own market share. Not to mention the huge financial loss.

So what’s the solution?

Marketing and consumer research has shown that it is neither effective nor feasible to influence everyone with the same message. This is where targeting smaller subgroups comes into practice, bringing in the term Market Segmentation (or Customer Segmentation). Market segmentation is the technique of dividing a broad target population into smaller groups or subsets with similar needs, interests, preferences, and characteristics. In addition, individuals within a segment should respond in a similar way to the marketing pitched at them.

We are talking about four types of market segmentation: geographic, demographic, psychographic, and behavioral.
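In practice, segments are often discovered from data by clustering customers with similar behavior. Here is a minimal, purely illustrative sketch using a tiny k-means implementation; the customer features (annual spend, visits per month) and all numbers are invented for the example:

```python
import random

def kmeans(points, k=2, iters=20, seed=1):
    """Toy k-means: group customers by (annual spend, visits per month)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k initial centers from the data
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centers[i][0]) ** 2
                                    + (p[1] - centers[i][1]) ** 2)
            clusters[idx].append(p)
        # Move each center to the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers, clusters

# Two obvious segments: low-spend occasional vs high-spend frequent shoppers
customers = [(120, 1), (150, 2), (130, 1), (900, 8), (950, 9), (880, 7)]
centers, clusters = kmeans(customers, k=2)
```

Each resulting cluster is a behavioral segment that can then be targeted with its own message.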

Now let’s look at how Big Data and Data Science come into play.

Before that, we have to talk about data. Data is now often called the most valuable asset in the world, having surpassed oil. Whenever we accept a website cookie that monitors our online behavior, or allow a mobile app to access our personal information, we leave a digital trail. These platforms and applications collect not only data about how we use them but also a great deal of personal information: the time we spend on sites, the locations we visit, the payments we make, and the reviews we leave. You may have already figured out that Google reads your emails and chats, with your permission.

Have you ever wondered how this data is being used?

By combining geographic and behavioral data with psychographic profiling, these platforms obtain highly detailed insights about their users and become able to accurately predict and influence them. They now know “who” you are and “what” you do, and with psychographics they can predict “why” you do it. Psychographics is the new weapon of digital influencers.

Now you can see how this psychographic information can be used to target consumers effectively. The U.S. retailer Target, for example, uses predictive analytics to infer customers’ current life situations. Using demographic and psychographic data together with search queries and historical purchase patterns, it can predict when a customer is pregnant or about to get married. This allows a company to anticipate customers’ life events and target specific products at the right people.

Not only that: psychographic analysis can be used to influence attitudes and beliefs, and even to sway votes. Using demographic, geographic, and psychographic data, voters can be divided into target groups, who then receive tailored messages designed to sway their decisions. Different variations of the same message can be used to bring every member of a single family to the belief and motivation a political campaign needs. It is believed that Cambridge Analytica used this technique in the Brexit campaign in the UK and the 2016 presidential election in the U.S.

All of this is done by running machine learning algorithms on the data extracted from you, which can predict your personality and behavior with razor-sharp accuracy and adapt content and advertising to fit your persona.

Instead of the mass communication of the past, today’s communication is becoming ever more targeted. It is personalized for every single person, which is highly effective and saves a lot of money by delivering the correct message only to the right audience. But this growing technology leaves us with a question:

“Are we being controlled?”

Sankha Jayasooriya

Lead QA Engineer

Choosing the Right Algorithm at the Right Time – The Science of Impactful Product Recommendations

With the evolution of technology, online retail shopping has come to play a major role in the modern world. A personalized recommendation system aims to identify the products most relevant to a user, based on their past interactions.

This encourages users to browse more products and makes them more likely to buy, increasing business revenue and improving user experience. It is therefore vital that recommendations are evaluated against criteria chosen to maximize revenue and user experience. The ‘most optimal’ criteria may vary with user preferences, seasons, and many other factors, so selecting them must be done thoroughly, and that calls for an effective and efficient evaluation technique.

Where Do You Stand?

In this fast-moving modern world, people tend to buy online because of busy schedules and convenience, and any organization that doesn’t support this will be left behind. In a post-COVID-19 world, online retailing and e-commerce will without a doubt grow immensely, forcing almost every organization to adopt online retailing to survive. Recommendation systems play a very important role here, driving both revenue and user experience, and all the leading retailers worldwide use modern recommendation systems. Online retailers that stick with primitive recommendation systems simply will not be competitive enough to survive among those that already use standard ones.

Multi-Armed Bandit

The evaluation of recommendations falls into two categories: offline evaluation and online evaluation. An example of offline evaluation is multivariate testing, which explores for the most optimal criteria within a specific period of time and afterwards serves recommendations using the winning criteria. It provides only a single cycle of exploration to exploitation and does not allow further automated exploration cycles, so manual intervention is required once the criteria pass their optimal performance. These limitations bring out the necessity of online evaluation that supports multiple automated exploration cycles, which leads us to the Multi-Armed Bandit. The Multi-Armed Bandit problem is one in which a fixed, limited set of resources must be allocated among competing choices in a way that maximizes the expected gain.
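To make the exploration/exploitation trade-off concrete, here is a minimal sketch of one classic bandit strategy, Epsilon-Greedy, simulated over invented click-through rates (purely illustrative, not production code):

```python
import random

def epsilon_greedy(true_rates, rounds=10000, epsilon=0.1, seed=42):
    """Simulate an epsilon-greedy bandit over Bernoulli arms.

    true_rates: hypothetical click-through rate per recommendation
    criterion (made-up numbers for illustration).
    """
    rng = random.Random(seed)
    n = len(true_rates)
    pulls = [0] * n       # how often each arm was tried
    rewards = [0.0] * n   # total reward collected per arm
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: pick a random arm
        else:
            # exploit: pick the arm with the best observed average so far
            arm = max(range(n),
                      key=lambda i: rewards[i] / pulls[i] if pulls[i] else 0.0)
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        pulls[arm] += 1
        rewards[arm] += reward
    return pulls, rewards

pulls, rewards = epsilon_greedy([0.02, 0.05, 0.11])
```

The epsilon fraction of traffic keeps exploring all arms, so the system can re-discover a new winner without manual intervention.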

Multi-Armed Bandit In A Retail Context

The endless expansion of e-commerce has led retailers to advertise their products by displaying them via recommendations, chosen after considering various factors. Recommendation systems are growing rapidly in online retail because of their capability to offer personalized experiences to individual users. They make it easier for users to reach the content they are interested in, which gives the retailer a competitive advantage; hence the need for smart recommendation systems. Recommendation systems built on the Multi-Armed Bandit are capable of continuous learning: continuously exploring for winning criteria and exploiting them without manual intervention.

What We At Zone24x7 Do

We excel at building smart recommendation systems, and are well experienced in systems that present different results to the user each day by processing massive loads of data in an intelligent back-end. Having studied the possible approaches, we selected three effective algorithms for the MAB problem, which are in summary:

  • Epsilon-Greedy algorithms
  • Upper Confidence Bound (UCB) algorithms
  • Thompson Sampling

We chose Thompson Sampling for the retail recommendation system; it has been one of the highest-performing solutions thanks to its low cumulative regret, and it is also the most cost-effective to implement.
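As a sketch of the idea (with invented conversion rates, not our production implementation), Beta-Bernoulli Thompson Sampling keeps a Beta belief per arm, samples a plausible conversion rate from each belief, and plays the arm with the highest sample:

```python
import random

def thompson_sampling(true_rates, rounds=10000, seed=7):
    """Beta-Bernoulli Thompson Sampling over simulated arms."""
    rng = random.Random(seed)
    n = len(true_rates)
    wins = [1] * n    # Beta prior: alpha = 1 per arm
    losses = [1] * n  # Beta prior: beta = 1 per arm
    for _ in range(rounds):
        # Sample one conversion-rate guess per arm from its posterior
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(n)]
        arm = samples.index(max(samples))
        # Simulate a user interaction and update that arm's belief
        if rng.random() < true_rates[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

wins, losses = thompson_sampling([0.03, 0.06, 0.12])
```

Uncertain arms produce wide posteriors and occasionally win the sampling step, so exploration happens automatically and fades as evidence accumulates, which is what keeps cumulative regret low.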

The Multi-Armed Bandit is the core idea behind the online evaluation system, and only a brief explanation of it is given here.

To read more on this:

Read More >

Mariam Zaheer

Software Engineer

Stepping into AIOps: IT Operations meet Artificial Intelligence

Digitization has transformed the enterprise IT landscape of large organizations. The speed, scale, and complexity of multi-cloud infrastructure demand an innovative approach to meet ever-increasing operational demands. In this context, a single IT support engineer should be enabled to handle a much broader scope than before. This is where AIOps comes into the picture: an innovative approach that applies artificial intelligence algorithmically to automate IT operational tasks and provide remediation or solutions.

AIOps was a major buzzword that most CIOs discussed during the Gartner Data Center, Infrastructure & Operations Management Conference in December 2018. But unlike many buzzwords that have come and gone, we can be fairly confident that AIOps is here to stay. As per Gartner, AIOps is an enabler of digital transformation and will take IT operations by storm.

As businesses become more complex and smarter, they look to be proactive and prevent critical errors rather than react to them. Enabling proactivity in IT operations is one of the major advantages of AIOps. In addition, AIOps offers the following advantages:

  • Identification of patterns and clusters of events
  • Better root cause analysis of critical events
  • Implementation of knowledge bases for faster remediation

Now let’s take a look at how IT operations have been transformed to be analytics-driven, and the role Data Science and Machine Learning play in it.

Data Science and Machine Learning in Log Monitoring

The term Artificial Intelligence for IT Operations (aka AIOps) was coined by Gartner in 2014. Operational analytics could automate reporting, trigger alerts, and perform analysis. Traditional log management must now transform into a state where a program itself can act without being explicitly programmed or monitored.

With multi-cloud infrastructures that have thousands of endpoints, IT Operations and associated teams face major challenges in monitoring, configuration, and maintenance. IT Operations Analytics (ITOA) and Application Performance Management (APM) are two technologies that address these problems. Even though they sound similar, the underlying processes are different: APM is proactive whereas ITOA is reactive. Enabling you to be proactive by analyzing past patterns, understanding how real-time changes mimic events that lead to major issues, and ultimately automating actions to respond to them in advance are some of the major areas covered by AIOps.

The algorithmic approach looks for answers to questions such as: when will the next event happen? What are the most effective and efficient actions to minimize or prevent it? Ultimately, it automates anomalous-event identification and healing. This requires expertise both in extracting information from big data that arrives at high speed and in high volume, and in predicting events. A rule-based method cannot cater to the growing complexity of these systems, whereas machine learning models play a vital role in detecting patterns in large amounts of data, with or without supervision.

In this scenario, unsupervised techniques play a vital role, as the patterns to be identified are typically complex. Once patterns have been recognized, supervised models can be built.

In our experience, a hybrid of supervised and unsupervised models, combined with reinforcement learning, gives higher accuracy in detecting and predicting events. Supervised learning alone cannot meet the requirement: the dynamics and complexity of log events demand a model capable of learning new patterns and adapting by itself.
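As a minimal stand-in for the unsupervised stage (our production models are far richer), one can flag log metrics whose z-score deviates sharply from the rest. The metric name and all numbers below are invented for illustration:

```python
import statistics

def detect_anomalies(series, threshold=2.5):
    """Flag indices whose z-score exceeds a threshold.

    A toy unsupervised detector: no labels are needed, only the
    assumption that normal values cluster around the mean.
    """
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# Requests-per-minute from a service, with one obvious spike
rpm = [120, 118, 121, 119, 122, 117, 400, 120, 119, 121]
anomalies = detect_anomalies(rpm)
```

Real log events are high-dimensional and non-stationary, which is exactly why simple thresholds give way to learned models in an AIOps pipeline.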

Benefits of AIOps

Automated anomaly detection helps IT teams in many ways. Teams can work on important alerts (aka smart alerts) rather than attending to millions of unimportant ones (aka fatigue alerts). Fatigue alerts are a big headache for monitoring teams, but with the adoption of automated anomaly detection, smart alerts are triggered instead. If the actions to take for given abnormal events are recorded, support teams get not only the reason for an anomaly but also the best possible recommendations for remediation. This increases the productivity of the IT team by minimizing repetitive tasks and speeding up root cause analysis, which in turn helps manage IT operations efficiently and lays the stepping stones of IT service automation.

By correlating smart alerts with critical events that have happened before, machine learning models can be built to proactively identify critical events and estimate when they will happen. This helps prevent undesirable events from occurring, which in turn prevents outages and leads to uninterrupted operations, the dream of any ITOps team.

This discussion consists of four articles. The second article will discuss “Unsupervised Anomaly Detection”. 

Stay tuned to learn more about the automated anomaly detection framework!

Photo by Shahadat Rahman on Unsplash

Hansa Perera

Associate Architect of Data Science

How Reliable is Data Science?

Will it be the death of data science? Are we relying on data too much? These were some of the hard-hitting questions pointed at the tech community at the end of the 2016 US presidential election. As we all know, the election produced a very different result from what most people were expecting. The predictions drawn by various media outlets and numerous polls were proven wrong, and the reliability of data was questioned.


It brought home some of the limitations of data science. We cannot always rely on data to give us a complete and accurate picture of what is happening. Nothing is ever perfect, and due to a combination of many factors, data can fail to yield accurate predictions. Naturally, curious minds started looking into where and how data science and poll predictions failed.

At Zone24x7, we too decided to give it a go. In the short video below, you can see a few members of our own Data Science team tackling questions about the reliability of data, such as:

  • How reliable are the predictions?
  • How do you deal with uncertainty?
  • What is the future of data science?

Video Timeline

The context under which data science is applied in Zone24x7 – 00:37

On the reliability of predictions – 01:36

Dealing with uncertainty arising from data – 03:51

The Law of Statistical Regularity – 04:12

Test Data and Training Data – 04:32

The Ensemble Method – 05:12

Few thoughts on the future of data science and where it may be heading – 06:02

Image Courtesy: Header image from