The Impact of Generative AI on Marketing Strategies in 2024

Introduction

Generative AI (GenAI) represents the latest evolution in the field of Artificial Intelligence: a group of AI models designed to generate new content spanning text, images, and videos. According to a recent McKinsey report, marketing is projected to be the business function most affected by GenAI, which is forecast to enhance marketing productivity by an amount equal to up to 15% of total marketing expenditure, or approximately $463 billion annually.

GenAI: A technical overview

In technical terms, we can define GenAI as deep neural networks that are pre-trained on large amounts of data to create a foundation model, which is then fine-tuned to produce new content by following human instructions. In this section, we provide a technical overview of how GenAI models are trained and how they produce content. Given these technical specificities, we then explain why the output of GenAI can be helpful for firms: it is both novel and appropriate, and hence creative.


Producing new content: Inherently random and conditional on the prompt

The adoption of a self-supervised learning approach, coupled with advancements in computing power and a novel model architecture known as the Transformer that allows faster training, led to the emergence of foundation models. A foundation model is a large, pre-trained model used as a base for developing more specialized, task-specific models. Foundation models underpin generative capabilities: they create new content by using patterns learned during training to predict the next item in a sequence. For instance, OpenAI and Microsoft have deployed GPT-3 in a variety of downstream applications, such as Bing, Duolingo, GitHub Copilot, and ChatGPT. To understand how foundation models produce new content, consider Large Language Models (LLMs), a subset of foundation models that have gained significant prominence because they are trained to facilitate user interaction through natural language.
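To make the "predict the next item in a sequence" idea concrete, here is a minimal sketch of next-token sampling. The vocabulary and scores are invented assumptions (a real LLM computes them with a Transformer over tens of thousands of tokens); the point is that the sampling step makes the output inherently random, while the prompt-dependent probabilities keep it conditional on the prompt.

```python
import numpy as np

# Toy next-token sampler (illustrative only; not a real language model).
VOCAB = ["review", "scientific", "news", "umbrella"]

def next_token_probs(prompt: str) -> np.ndarray:
    """Return a probability for each candidate next token, given the prompt."""
    # Hypothetical scores (logits) for a prompt like "This is a ...":
    # plausible continuations get higher scores than implausible ones.
    logits = np.array([3.0, 2.5, 1.8, -2.0])
    return np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

def sample_next_token(prompt: str, temperature: float = 1.0) -> str:
    """Sample one next token; a higher temperature means more randomness."""
    probs = next_token_probs(prompt) ** (1.0 / temperature)
    probs = probs / probs.sum()                   # re-normalize
    return str(np.random.default_rng().choice(VOCAB, p=probs))

# The same prompt can yield different continuations across calls,
# but "umbrella" stays unlikely because its probability is low.
print([sample_next_token("This is a") for _ in range(5)])
```

Real models repeat this loop autoregressively, appending each sampled token to the prompt before predicting the next one, which is why the same prompt can produce different, yet contextually coherent, outputs.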

Training on extensive, unannotated datasets: Self-supervised learning

GenAI is the outcome of a renewed focus on self-supervised machine learning rather than the supervised learning approach that characterized much previous AI development. In a supervised learning approach, the machine learns during training by comparing the model's output against a given correct answer. These correct answers are provided in the form of “labels” or “annotations,” which require human involvement in labor-intensive tasks. The significant cost of annotation severely restricts the volume of data available for model training, limiting the model's ability to generalize to novel settings.
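For contrast with the self-supervised approach described next, the sketch below shows what supervised training data looks like: every example needs a human-provided label, which is where the annotation cost arises. The texts and labels are invented purely for illustration.

```python
# Supervised learning needs (input, correct answer) pairs produced by humans.
# The labels below are invented; annotating millions of such examples is the
# labor-intensive, costly step described above.
labeled_data = [
    ("The new campaign doubled our engagement", "positive"),
    ("Customers complained about the checkout flow", "negative"),
]

def zero_one_loss(prediction: str, label: str) -> float:
    """Training compares the model's output against the human-provided label."""
    return 0.0 if prediction == label else 1.0

# A (hypothetical) model that always predicts "positive" is penalized
# whenever its output disagrees with the human annotation.
print(sum(zero_one_loss("positive", label) for _, label in labeled_data))  # 1.0
```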

In contrast, self-supervised learning models are trained without annotated datasets. Instead, training occurs by removing parts of the data and asking the model to “predict” the missing parts. For instance, with textual data, we can input a sentence like “This is a ___ article” and train a model to predict the omitted word, given its surrounding text. By the end of training, the model should have learned that words such as “review” or “scientific” are more likely to fill the blank than, say, “umbrella.” Similarly, with images, we can mask some patches and train a model to predict the content of the masked patches based on the remaining image information.
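The sketch below illustrates how such training pairs can be built directly from raw text: one word is hidden and becomes the prediction target, so no human annotation is required. The sentence and the mask token are assumptions chosen to match the example above.

```python
import random

def make_masked_example(sentence: str, mask_token: str = "[MASK]", seed=None):
    """Hide one word of a raw sentence; the hidden word becomes the training target."""
    rng = random.Random(seed)
    tokens = sentence.split()
    position = rng.randrange(len(tokens))
    target = tokens[position]        # the "label" comes from the data itself
    tokens[position] = mask_token    # the corrupted input shown to the model
    return " ".join(tokens), target

corrupted, target = make_masked_example("This is a review article", seed=0)
print(corrupted)  # e.g. "This is a [MASK] article"
print(target)     # the removed word, used as the training signal
```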

Current limitations of GenAI

Although GenAI is able to create new content, it sometimes produces content that, while semantically or syntactically plausible, is factually incorrect or nonsensical. For instance, on February 6, 2023, Google announced its ChatGPT competitor, Bard, with an image of Bard answering the question “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” As several astronomers pointed out, one of the three replies that Bard provided was factually wrong. As a consequence, Google's stock lost about $100 billion in market value. Similarly, just two weeks before OpenAI launched ChatGPT, Meta released Galactica, which the company positioned as a “large language model for science.” The open-source LLM survived for only three days before Meta withdrew it in response to criticism for releasing a model that produced scientific-sounding text that was nonetheless factually wrong. Capitalizing on Galactica's failure, OpenAI explicitly acknowledged at ChatGPT's launch that the model could make mistakes. The Bard and Galactica cases clearly illustrate the limitations of early GenAI: it works better when tasked with generating novel content for which there are no right or wrong answers.

To help marketers apply GenAI effectively, Table 1 summarizes studies that investigate the GenAI emergent capabilities most closely related to innovation. These capabilities include idea generation, divergent thinking, analogical thinking, and inductive reasoning, all of which are traditionally considered prerequisites for creativity. Additionally, our interviews reveal that an increasing number of managers rely on GenAI, or wish to, for decision-making support. We therefore also focus on reasoning-related capabilities, such as causal reasoning, logical reasoning on new cases, and making causal inferences.


