Quantum-inspired framework for big data analytics: evaluating the impact of movie trailers and its financial returns

Abstract

In the context of the growing influence of businesses and marketers on social media platforms, understanding the impact of emotionally charged content on consumer behavior has become increasingly crucial. This study proposes a novel framework, leveraging quantum computing principles, to assess the emotional impact of movie trailers. The framework incorporates big data analytics and utilizes Quantum Walk and Quantum Time Series models to investigate the relationship between a movie trailer's emotional intensity and its financial performance. Unlike the sequential problem-solving approach of traditional computing models, quantum superposition allows multiple options to be explored at once. An analysis of 141 movie trailers released after January 1, 2022, revealed a positive correlation between a trailer's emotive score and its financial success. These findings suggest that trailers evoking a stronger emotional response tend to achieve greater box office returns than those with a lower emotional impact. This research underscores the pivotal role of emotionally resonant content in shaping consumer behavior and cinematic outcomes, offering valuable insights for filmmakers and marketers seeking to optimize audience engagement and financial returns.

Introduction

Data-driven techniques, particularly machine learning approaches, have significantly enhanced prediction accuracy across various domains, including healthcare, public policy, finance, and entertainment. Social media is a powerful platform for businesses to influence consumer opinions and purchasing behavior [1]. Marketing experts frequently analyze social media content when making marketing mix decisions [2]. The movie industry has witnessed tremendous growth, generating billions of dollars through the sale of movie tickets and video-on-demand services. Accurate estimation of box office collections is instrumental in making timely business decisions and guiding film production and distribution companies; such estimation is indispensable for the sustained growth of the global movie industry [3, 4]. The movie trailer is a crucial promotional medium for any film and its cast, with the potential to significantly impact the film's reception, popularity, and financial success. A pre-release analysis of the theatrical trailer could significantly contribute to optimizing its content and sequence. This is particularly important because critical marketing decisions, such as trailer launch, advertising strategies, distribution channels, and release timing, are made well in advance of the actual movie premiere [5]. The US movie industry produces a large number of movies, with an average investment of $60 million per month [6]. However, despite this significant investment, the profitability of a movie remains highly uncertain [7]. For movie houses and marketing managers, understanding the financial impact of movie trailers prior to a movie's release is crucial. Studies have shown a strong positive correlation between viewers' emotional engagement with movie trailers and a movie's box office performance [8]. Movie trailers are advertisements that not only capture viewers' attention but also aim to generate interest and anticipation for the upcoming film [9]. Trailers often showcase actual scenes from the movie and are intended to build audience expectations [10]. Despite the significant cost associated with producing movie trailers, the research community has largely overlooked the impact of trailer design on the financial valuation of the film [11]. Investors also rely heavily on movie trailers to predict the box office success or failure of a film [12]. The effectiveness of a movie trailer is likely directly proportional to its ability to elicit emotional responses in viewers. Viewers evaluate films based on visual quality and storyline, regardless of filming style, and the emotions they experience ultimately affect the film's revenue [13,14,15,16].

Extant literature has predominantly relied on readily available metadata, such as genre classifications, budget figures, and the historical market performance of comparable films, to build predictive models [17]. Further, traditional linear statistical models and rudimentary machine learning algorithms often prove inadequate in capturing the intricate, non-linear relationships among the multifarious factors influencing a film's commercial success [18, 19]. To the best of our understanding, however, advanced computing techniques such as quantum computing have only recently enabled researchers to harness the wealth of information contained in critical reviews and blog content.

Quantum computing, leveraging the principles of quantum mechanics, provides a revolutionary approach to computation, promising unprecedented computing speeds compared to traditional methods. By processing multiple solutions concurrently, it drastically reduces the time taken to solve complex optimization problems. Unlike traditional computing, its probability-based structure means it excels at navigating vast solution spaces. Greater computational power and efficiency make it well suited to today's complex, data-rich problems, such as anticipating movie success.

This research intends to employ advanced quantum computing techniques, such as Quantum Walk and Quantum Time Series modeling, to identify significant patterns in the emotional dynamics underlying trailer performance and their effect on box office results [20]. The Quantum Walk approach will model the complex, nonlinear relationships between various factors impacting a film's financial success, including the emotional intensity of trailers, critic reviews, and social media engagement. Moreover, Quantum Time Series analysis will be applied to capture the temporal dynamics and higher-order correlations within the data, offering a comprehensive understanding of how trailer content evolves over time and its influence on a movie's commercial performance [21].

The rest of this paper is structured as follows: Sect. "Related work" reviews the relevant literature, Sect. "Methodology" details the experimental design, Sects. "Results" and "Discussion of key findings" present and discuss the outcomes, and Sect. "Conclusion" concludes the paper.

Related work

Presently, revenue forecasts for opening weekend box office earnings are categorized according to the prediction algorithm [3, 4, 22,23,24,25,26] or the metadata [24, 27, 28] associated with the films. Several studies have worked on developing prediction models because predictions of movie box-office revenues are accurate only to a limited extent. The development of multimodal frameworks that use film trailers to forecast the opening-weekend box office performance of motion pictures is a relatively recent trend.

Movie trailers are intended to attract audiences to theaters and their impact on the financial success of a film has been extensively studied by researchers [29]. Investors are willing to allocate significant resources to trailer advertising and use advanced technologies to create personalized trailers that captivate audiences [30]. Forecasts of movie demand made during the pre-release stage have been found to be reasonably accurate and new information can affect expectations about a film's financial performance, leading to changes in stock prices [31]. Stock returns immediately following a movie's release are primarily driven by its performance [32]. Trailer advertising prior to a film's release can provide valuable information to viewers and investors and can generate expectations about its future success [33]. Consequently, movie studios require a means of assessing the profitability of their investment in a film well before it is released. This assessment should concentrate on predictive factors that can be determined in advance [34].

The Mehrabian–Russell PAD model suggests that advertising evokes emotional responses related to pleasure and arousal [35]. The study of emotions was originally conducted by psychologists, but it was later discovered that emotions play a crucial role in determining consumer behavior [36]. Consumer research uses two primary approaches, the Facial Action Coding System (FACS) and facial electromyography (EMG), to analyze facial expressions and emotional responses [37]. Although these techniques can capture emotional responses, empirical evidence regarding their effectiveness remains limited [38]. Advertisements include various factors that are designed to elicit emotional responses from viewers. Recent advances in image analysis and pattern recognition have made it possible to automatically detect and classify emotional and conversational facial signals [39].

According to the available literature, emotions can be categorized as positive or negative valence [40]. Emotions can also be viewed as either Dimensional, which includes Arousal and Valence, or Discrete, which consists of specific emotional states such as 'happy', 'sad', 'anger', 'disgust', and 'neutral' [40]. Measuring discrete emotional states requires additional hardware such as fMRI, EEG, and Galvanic Skin Response (GSR) sensors, which can be costly and impractical for industries [41]. Instead, researchers have focused on facial expressions as a key indicator of a person's mood or emotional states [42]. Several studies have attempted to use facial expressions to make inferences about emotional states [43]. Content-based psycholinguistic features have also been studied to predict social media messages, and supervised machine learning has been proposed to overcome the limitations of human and computer coding procedures [44]. Although facial expression recognition has improved over the years, it still remains challenging due to the subtle and variable nature of facial expressions [45]. Effective feature extraction techniques such as Dlib-ml, which identifies key features of a face that contribute to generating emotions, have been proposed to overcome these difficulties [46].

Movie revenue prediction based on purchase intention mining from YouTube trailer reviews is a potential application of natural language processing (NLP) techniques. In this approach, sentiment analysis and opinion mining are applied to user-generated content, such as comments and reviews, to extract information on viewers' purchase intentions [47]. The Affective-Knowledge-Enhanced Graph Convolutional Networks (AKE-GCN) model has been proposed for aspect-based sentiment analysis; it incorporates both affective knowledge and graph convolutional networks and employs multi-head attention to learn better representations of aspect-based sentiment [48]. A methodology has also been proposed for predicting the box office revenue of movies using sentiment analysis on Twitter data: the authors leverage the vast amount of information available on Twitter to extract insights into the public's perception of upcoming movies and use this information to predict their success at the box office [49].

The proposed quantum framework uses a quantum computer to manipulate qubits representing input data from YouTube movie reaction videos. It uses a Quantum Walk model to encode, detect, and quantify emotions with high accuracy and speed, represented through a quantum circuit using a combination of quantum gates. As far as we are aware, there has been no prior research on predicting human emotions from facial expressions using a quantum walk model empowered by quantum time series. In the upcoming section, a case study will be presented, which will be followed by a detailed explanation of the proposed approach.

A case study to gain insight into market behavior

The concept of event studies is based on the idea that an event of interest can have an immediate effect on the stock price of a company. The event window is a specific time frame used to measure this effect. In the case of a movie trailer release, the event window is typically a five to six-day period leading up to the release, as well as a one-day period after the release [50]. By using a short event window, researchers can limit the impact of confounding factors that might influence stock prices, such as other events related to the movie or its competitors. This allows researchers to focus specifically on the impact of the trailer release itself. Event studies have been widely used in the finance industry to analyze the impact of events on stock prices, and they can be a useful tool for investors and analysts seeking to understand the relationship between specific events and stock price movements.

The case study was evaluated taking into consideration the two hypotheses given below:

H1: The release of a movie trailer does not have an impact on the stock value of the movie.

H2: The emotional content of a movie trailer is not associated with the financial value of the movie.

Calculating the normal return of a particular stock is a crucial step in event studies as it provides a baseline against which to compare the stock's performance during the event window. The normal return is typically calculated as the average return over a period of time when no significant events are taking place.

In the context of a movie trailer launch, the normal return for a given movie "m" is determined by calculating the average of the returns over the six-day period leading up to the trailer release, from t = −6 to t = −1, using Eq. (1). The return for a particular day is computed as the natural logarithm of the ratio of the stock's trading value on that day to its trading value on the previous day.

$$E_{m} = \frac{1}{6}\sum_{t=-6}^{-1} M_{t}$$
(1)

Once the normal return has been calculated, the abnormal return (ABR) can be determined by subtracting the expected return (Em) from the actual return (Am) during the event window [51], as outlined in Eq. (2). The ABR represents the stock price fluctuation that happens directly due to the event, and it is computed for each day within the event window.

$$ABR_{mt} = \frac{A_{mt} - E_{m}}{\sigma_{m}}$$
(2)

where ABRmt is the abnormal return on day t for movie m, Amt the actual return on day t for movie m, Em the expected (normal) return for movie m, and σm the standard deviation of returns for movie m over the estimation window. The ABR for each day is thus calculated as the difference between the actual return on that day and the expected return, divided by the standard deviation of returns over the estimation window. This normalizes the ABR and makes it comparable across different stocks and events.
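For concreteness, the computations in Eqs. (1) and (2) reduce to a few lines of Python; the helper names below are illustrative, and the caller is assumed to supply the price series and the estimation-window standard deviation:

```python
import numpy as np

def log_returns(prices):
    """Daily log returns: r_t = ln(P_t / P_{t-1})."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[1:] / prices[:-1])

def normal_return(pre_release_returns):
    """Eq. (1): average of the six daily returns for t = -6 .. -1."""
    return float(np.mean(pre_release_returns[-6:]))

def abnormal_return(actual_return, expected_return, sigma_m):
    """Eq. (2): standardized abnormal return for one event-window day."""
    return (actual_return - expected_return) / sigma_m
```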

By calculating the ABR for each day in the event window, researchers can analyze the impact of the event on the stock price over time and identify any significant changes or trends. This information can inform investment decisions or provide insights into market behavior. The ABR is related to the emotional scores of the trailer, namely the positive (PVE) and negative (NVE) emotional scores [52], as explained in the algorithms in the next section. Quantified emotions are used to predict economic value following recommendations from the Fama–French model [53]. The Fama–French model addresses common risk factors associated with abnormal returns (ABR) on various stocks and bonds. Movie stock pricing data is sourced from the Hollywood Stock Exchange (HSX), a virtual movie stock market with over two million participants, where active traders are typically heavy consumers and early adopters of movies [54]. These traders use virtual currency to enhance their net worth by trading movie stocks and related financial products [55]. Studies have found that HSX traders' forecasts are fairly accurate in predicting actual box office returns [56]. Additionally, investors use virtual stock markets to predict movie demand before a movie's release [57]. The proposed framework implements Fama–French inspired three-factor equations to predict the impact of positive and negative valence emotions (PVE and NVE) on stock pricing. The procedure to calculate PVE, NVE, and ABR is detailed in Algorithm 1. The Fama–French model's relevance is particularly significant in relation to abnormal returns and the timing window [58].

The experimental results discuss the statistical tests used for evaluation of hypothesis H1. To evaluate hypothesis H2, the study considered two theatrical trailers for the same movie, released on different dates. The algorithms in the subsequent section evaluated the emotional response of viewers for these two movie trailers. This procedure was repeated for the entire set of movies studied.

Methodology

This section describes the methodology used to test the hypotheses concerning the emotional responses evoked by movie trailers and their financial impact. The effectiveness of traditional methods such as surveys, questionnaires, and interviews in measuring the emotional responses generated by a movie trailer is limited by their inability to capture temporal emotions and their reliance on a limited set of questions and choices. Moreover, in some cases it may not be feasible to question someone about personal opinions and emotional responses. In contrast, facial expressions provide a continuous and reliable source of emotional expression, accounting for 55% of the message conveyed by a person, followed by intonations and verbal expressions [59]. Therefore, analyzing facial expressions is an effective way to reveal a person's actual emotional response to a movie trailer.

This paper proposes a Fama–French and Dlib-ml inspired unified framework for predicting the economic value of movie trailers. The approach involves sentiment analysis based on facial expressions, performed as a quantum walk algorithm. The algorithm is formalized as a quantum circuit, where the input wires are the qubits encoding 68 points of the human face. The states of these qubits are changed via a quantum walk, which is implemented as a sequence of rotations and controlled-phase-flippings, with the rotation angles depending on facial movements. Finally, a Quantum Fourier Transform is applied to detect the contribution, or intensity, of five target emotions: happiness, anger, sadness, surprise, and neutrality. The quantified emotions are then used to predict the economic value of the movie trailer using the recommendations of the Fama–French model. The component diagram in Fig. 1 illustrates the key elements of the proposed framework.

Fig. 1
figure 1

The key elements of the proposed framework

The Hollywood Stock Exchange (HSX) was used to observe the stock value of the movie in near real-time [60]. Assuming that there are no unexpected fluctuations in stock prices or other market variables related to the movie, abnormal returns (ABR) can occur if the release of a movie trailer causes a change in the movie production company's stock price. Efficient markets theory suggests that the stock price reflects all available information, including the impact of events on a business, due to the rationality of investors and the availability of perfect information [61]. Therefore, studying the variation in stock prices of a production company following the release of a movie trailer can provide valuable insights into the impact of such events on the company's financial performance.

In this study, a total of 141 movie trailers released between January 1, 2022, and March 31, 2023, were examined; these constituted only a subset of all 1334 movie stocks listed on the HSX market during that period. The criteria for selecting the movies were the release date of the trailer, an initial release of the movie on 650 or more screens (considered "wide releases" for HSX), and the availability of at least 90 days of trading history on HSX prior to the release date [51, 62, 63].

The economic worth of a movie trailer is commonly evaluated by abnormal returns (ABR), which refer to the variation in the stock value of the movie production company following the release of the trailer. To analyze this variation, this study used HSX, a virtual movie stock market (VSM) with more than two million participants, including heavy movie consumers and early adopters [64]. Traders use virtual currency to trade movie stocks and other movie-related financial products on the platform, aiming to increase their net worth. Previous research has demonstrated the reasonably accurate forecasting abilities of HSX traders in predicting actual box office returns [65]. This study, however, aimed to examine the impact of trailer release on financial returns during the pre-release period of a movie, with the movie's "stock price" as traded on HSX serving as the financial return measure.

Overview of the work

The process of feature extraction utilizes the Dlib-ml library, which is a freely available, cross-platform open-source software library created to simplify software development and research. It comprises separate software components, each with comprehensive documentation and debugging modes, making it appropriate for both research and commercial ventures [66]. The process of recognizing facial expressions is performed by Dlib-ml using a set of 68 key facial landmarks [67]. These landmarks are then utilized as input states in a quantum-inspired model that is employed for quantifying emotions. The 68 facial landmarks correspond to specific facial muscles located in the eyebrows, eyes, nose, and mouth regions. These muscles have been identified as contributing significantly to the formation of different emotions.

To perform classification, the proposed framework extracts these 68 facial landmarks from the input video frames using Dlib-ml. The resulting feature vector is then passed through a quantum circuit to predict the emotional state of the subject. A typical case of identifying emotions and facial expressions involves the 68 landmarks shown in Fig. 2 below:

Fig. 2
figure 2

A Dlib-ml mark up of annotated 68 Facial landmarks

Fig. 3
figure 3

Sample emotions identified during different reaction sequences

Table 1 Sample list of YouTube channels that met the minimum selection criteria based on their number of subscribers and total views

The following points explain the process.

  1. Data collection: We collected a dataset of movie reaction videos from YouTube [68]. The videos were selected based on their popularity and variety in terms of the movies and emotions expressed in them [69]. To develop our framework, we collected YouTube videos containing crowd-sourced trailer reviews, shared voluntarily by individuals on the platform. We gathered 153 reaction sequences or videos related to the trailers of 141 movies from YouTube to collect facial expressions, as shown in Fig. 3.

    The reliability of these channels was evaluated based on two parameters: the number of subscribers and the total views on the videos [70]. These criteria were used to select the reaction sequences.

    The selection process involved the following minimum requirements:

    a) The YouTube channel for the reaction sequence must have at least 5,000 subscribers.

    b) The YouTube channel for the reaction sequence must have at least 1,000,000 total views.

    Prominent YouTube channels that met these requirements were considered when obtaining the reaction sequences, as shown in Table 1. These channels were selected because their large subscriber and view counts indicate that the reaction sequences are genuine and reliable.

  2. Preprocessing: The collected videos were preprocessed to extract frames and annotate the emotions expressed in them using established emotion recognition frameworks. Pre-processing involves resizing the frames to 224 × 224 × 64, which is suitable for the subsequent feature extraction step and makes the frames compatible with the classification model for training and testing. The proposed model uses facial expressions to identify and measure the intensity of emotions such as smiles, eyebrow raises, anger, disgust, and positive and negative reactions. These expressions are relevant to understanding the viewer's response and are validated by the research community. Classifiers generate continuous emotive outputs based on probabilities, as shown in Fig. 4.

  3. Proposed Quantum Framework: The preprocessed data is analyzed using quantum computing to process the emotions in a video. The framework analyzes the facial landmarks in each frame of the video and encodes them into a quantum circuit; a quantum walk is used to encode the facial landmarks into quantum states. A quantum walk is a quantum mechanical process that describes the behavior of a particle (such as an electron or photon) moving through a lattice or graph. In contrast to classical random walks, where the particle moves randomly, the quantum walk is a coherent process that involves the superposition of states. In a quantum walk, the particle is described by a quantum state that is a superposition of states corresponding to different positions in the lattice or graph. As the particle moves, the superposition evolves in a way that depends on the structure of the lattice or graph and the interactions between the particle and the lattice. The resulting quantum state can be used to describe the probability distribution of finding the particle at different positions. Quantum walks have a wide range of applications in quantum computing and quantum information processing, such as quantum algorithms for search and optimization, quantum simulation of complex systems, and quantum cryptography; they also have potential applications in areas such as quantum sensing and metrology and the study of complex systems in condensed matter physics. The quantum circuit consists of a series of quantum gates that perform rotations on the qubits based on the facial landmark values. These rotations are followed by a Fourier transform, which is used to extract features from the circuit. Finally, measurements are made on the qubits, and the resulting bit strings are used to determine the probability of each emotion (Angry, Happy, Sad, Surprised, Neutral) being expressed in the frame.

  4. Model evaluation: The model is evaluated on a test dataset to assess its performance in estimating the probability distribution and the positive and negative valence estimates (PVE and NVE), based on whether the Neutral emotion has the highest probability in the movie reaction videos.

  5. Result analysis: The results obtained from the model evaluation are analyzed to gain insights into the emotional content of the movie reaction videos and the effectiveness of the proposed quantum model in quantifying emotions. The proposed model detected five different emotions (happy, angry, sad, neutral, and surprise) for each face in every video frame of the reaction sequences extracted from YouTube. The classifier returned a probability value for each emotion corresponding to every face identified in each frame. In the context of a theatrical movie trailer, the emotions happy, sad, surprise, and angry were treated as having positive valence, whereas neutral was treated as having negative valence. Each probability value represented the likelihood of the viewer experiencing that emotion at a given time. For example, the probabilities for happy, angry, sad, surprise, and neutral in the first frame of a viewer's reaction sequence could be represented as [Ha1, An1, Sa1, Su1, Ne1]. The total probability of a detected emotion was calculated as the sum of its probabilities over all frames of the video. The Positive Valence Emotive score (PVEs) of the whole reaction sequence for a theatrical movie trailer with n frames was determined by summing the probabilities for happy, angry, sad, and surprise, whereas the Negative Valence Emotive score (NVEs) was determined by summing the probability for neutral.

Fig. 4
figure 4

Probability based outputs for identified emotions as generated by proposed quantum model

Architecture

Quantum walks are a generalization of classical random walks that can be used to study quantum algorithms and simulate quantum systems related to emotion transmission [71]. They are described by a unitary evolution operator that acts on a Hilbert space representing the state of the system. The evolution of a quantum walk can be described using the Schrödinger equation given in Eq. 3 [72]:

$$i\frac{\partial}{\partial t}|\Psi (t)\rangle = H|\Psi (t)\rangle$$
(3)

where \(|\Psi (t)\rangle\) is the state of the system at time t and H is the Hamiltonian of the system. The Hamiltonian for a quantum walk can be written as:

$$H=SW$$
(4)

where S is the coin operator that acts on the internal degree of freedom of the walker (similar to a classical coin flip) and W is the shift operator that describes the movement of the walker. The coin operator is usually written as a 2 × 2 matrix:

$$S = \begin{pmatrix} s_{0,0} & s_{0,1} \\ s_{1,0} & s_{1,1} \end{pmatrix}$$

where the indices (0,0), (0,1), (1,0), and (1,1) correspond to the basis states \(|0\rangle\), \(|1\rangle\), \(|L\rangle\), and \(|R\rangle\), respectively. The shift operator is a unitary operator that moves the walker one step to the left or right depending on the internal state of the walker, as shown in Eq. 5:

$$W=\sum_{x=-\infty}^{\infty}\left( |x-1\rangle \langle x|\otimes |0\rangle \langle 0| + |x+1\rangle \langle x|\otimes |1\rangle \langle 1| \right)$$
(5)

where \(|x\rangle\) represents the position of the walker and the tensor product \(\otimes\) indicates that the shift operator acts on both the position and internal degrees of freedom of the walker.

The time evolution of the quantum walk is then given by Eq. 6:

$$|\Psi (t)\rangle = e^{-iHt}\, |\Psi (0)\rangle$$
(6)

where \(|\Psi (0)\rangle\) is the initial state of the system. The evolution operator can be written as a product of the coin and shift operators in Eq. 7:

$$e^{-iHt} = e^{-iSWt} = e^{-iWt}e^{-iSt} + O(t^{2})$$
(7)

where the second equality follows from the Trotter-Suzuki decomposition. This allows the time evolution of the quantum walk to be efficiently simulated using a sequence of coin and shift operations.
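To make the coin-and-shift dynamics concrete, the following NumPy sketch simulates a discrete-time coined walk on a line; the Hadamard coin, lattice size, and step count are illustrative choices rather than values taken from this work:

```python
import numpy as np

n_pos, steps = 64, 20
coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin S (illustrative)
psi = np.zeros((n_pos, 2), dtype=complex)
psi[n_pos // 2, 0] = 1.0                          # walker starts at the center

for _ in range(steps):
    psi = psi @ coin.T                            # coin flip on the internal state
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]                  # coin |0>: step left  (|x-1><x|)
    shifted[1:, 1] = psi[:-1, 1]                  # coin |1>: step right (|x+1><x|)
    psi = shifted                                 # shift operator W

prob = (np.abs(psi) ** 2).sum(axis=1)             # position probability distribution
```

Unlike a classical random walk, the resulting distribution spreads ballistically and develops the characteristic two-peaked interference pattern.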

The facial landmark data encoded through the quantum walk is transformed from the time domain to the frequency domain using a Fourier transform circuit. This is achieved using the Quantum Fourier Transform (QFT), which generates a quantum circuit that performs the Fourier transform on a set of input qubits [73]. The idea behind this transformation is to analyze the facial landmark data in terms of its frequency components. The Fourier transform expresses a time-domain signal as a sum of sine and cosine functions of different frequencies. By performing the Fourier transform on the facial landmark data, one can identify which frequency components are present in the data and how much each component contributes to the overall signal. This frequency-domain representation of the facial landmark data can be useful in applications such as facial expression recognition, where certain facial expressions may be characterized by specific frequency components [74].

The transformation of facial landmark data from the time domain to the frequency domain using a quantum Fourier transform is a key aspect of this work. It is necessary to perform this transformation in order to extract the relevant features from the facial landmark data that can be used for emotion recognition [75]. By transforming the time series of facial landmark data into the frequency domain, the model is able to capture the patterns and variations in the data that are important for distinguishing between different emotions. The use of a quantum Fourier transform is particularly interesting because it allows for the exploration of the potential of quantum computing for processing and analyzing complex data sets [76].

The quantum circuit is constructed using the Qiskit library with n_qubits = 4. First, the circuit applies a series of rotations, qc.rx() and qc.ry(), to each qubit, using the x- and y-coordinates of each facial landmark as inputs; this step encodes the facial landmark information into the quantum circuit. Then, a series of Controlled-X (CNOT) gates, qc.cx(), are applied to entangle the qubits, after which a quantum Fourier transform (QFT) is applied; the FourierTransformCircuits() function from the Qiskit library is used to construct the QFT circuit. After the QFT is applied, the circuit measures the qubits and outputs a binary string representing the post-measurement state. This binary string is used to calculate the probability of the circuit being in each of the five possible states: Angry, Happy, Sad, Surprised, and Neutral.

The circuit is executed using the QASM simulator backend from the Aer module of Qiskit. The execute() function runs the circuit with 1024 shots and returns a set of measurement results. The probability of each emotion is calculated by counting the number of times each outcome is observed and dividing by the total number of shots, as in the sketch below. The quantum circuit and the states involved at each step are explained in detail subsequently.
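Putting the pieces together, a minimal end-to-end sketch of this circuit is shown below. It uses the pre-1.0 Qiskit API matching the execute()/qasm_simulator calls mentioned in the text, with qiskit.circuit.library.QFT standing in for the FourierTransformCircuits helper, and random stand-in landmark values:

```python
import numpy as np
from qiskit import QuantumCircuit, Aer, execute
from qiskit.circuit.library import QFT

n_qubits = 4

def landmark_circuit(landmarks):
    """Encode (x, y) landmark coordinates as RX/RY rotation angles."""
    qc = QuantumCircuit(n_qubits, n_qubits)
    for i, (x, y) in enumerate(landmarks[:n_qubits]):
        qc.rx(float(x), i)                      # x-coordinate sets the RX angle
        qc.ry(float(y), i)                      # y-coordinate sets the RY angle
    for i in range(n_qubits - 1):
        qc.cx(i, i + 1)                         # entangle neighboring qubits
    qc.append(QFT(n_qubits), range(n_qubits))   # quantum Fourier transform
    qc.measure(range(n_qubits), range(n_qubits))
    return qc

landmarks = np.random.uniform(-np.pi, np.pi, (4, 2))  # stand-in for Dlib output
qc = landmark_circuit(landmarks)
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
probs = {bits: n / 1024 for bits, n in counts.items()}  # bitstring probabilities
```

The mapping from measured bitstrings to the five emotion labels then follows the convention described in the algorithm explanations below.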

Explanation of the algorithms

The algorithms used in this study aim to predict human emotions through quantum computing. The procedure involves processing a frame using the OpenCV library to detect faces and subsequently applying a landmark detector to extract facial landmarks. Quantum circuits are then constructed based on these landmarks, and the resulting probabilities are used to predict the corresponding emotions. Specifically, the procedure begins by converting the frame to grayscale and applying the face cascade classifier to detect faces. Facial landmarks are then extracted from the detected faces, normalized, and used to construct quantum circuits, in which a quantum walk encodes the graph information of the landmarks. The quantum circuit is initialized with a specific number of qubits (specified by n_qubits), and the quantum walk is performed by applying quantum gates to the qubits. The angles of the rotation gates (ry) and the placement of the controlled-Z gates (cz) are chosen based on the size of the input graph (i.e., the number of landmarks).
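A plausible sketch of these frame-processing steps follows, using OpenCV's Haar cascade and dlib's 68-point shape predictor; the predictor file path and helper name are assumptions rather than details taken from the paper:

```python
import cv2
import dlib
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# 68-point predictor; the model file must be downloaded separately
landmark_detector = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(frame):
    """Detect faces and return normalized 68-point landmark arrays."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    results = []
    for (x, y, w, h) in faces:
        shape = landmark_detector(gray, dlib.rectangle(x, y, x + w, y + h))
        pts = np.array([[p.x, p.y] for p in shape.parts()], dtype=float)
        pts = (pts - pts.mean(axis=0)) / pts.std(axis=0)  # mean/std normalization
        results.append(pts)
    return results
```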

The encoded graph information via quantum walk is then used to calculate probabilities for different emotions associated with the face, using quantum time series analysis. The probabilities are obtained by measuring the output of the quantum circuit using the qasm_simulator backend. The resulting probability distribution is then used to calculate the positive and negative scores for the detected face. The quantum circuits are constructed using the QuantumCircuit library and the FourierTransformCircuits library, and the resulting probabilities are obtained through executing the quantum circuits using Aer.

The emotions predicted in this study are drawn from a pre-defined set: 'Angry', 'Happy', 'Sad', 'Surprise', and 'Neutral'. The resulting probabilities are used to predict the corresponding emotion, with 'Neutral' treated as a special case. If the probability of 'Neutral' is greater than 0.5, the emotion is classified as 'Negative' and the probabilities of the other emotions are summed to obtain the 'Negative' probability. If the probability of 'Neutral' is less than 0.5, the emotion is classified as 'Positive' and the probabilities of the other emotions are summed to obtain the 'Positive' probability. The algorithm shown in Algorithm 1 processes each video of a movie trailer individually.
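In code form, this classification rule is a direct transcription of the thresholding described above, assuming probs maps each emotion name to its measured probability:

```python
def classify_valence(probs):
    """Apply the Neutral-threshold rule: label plus summed probability."""
    others = sum(p for emotion, p in probs.items() if emotion != 'Neutral')
    if probs.get('Neutral', 0.0) > 0.5:
        return 'Negative', others   # Negative probability = sum of the rest
    return 'Positive', others       # Positive probability = sum of the rest
```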

Algorithm 1
figure a

illustrates the procedure process_video. The process_video() procedure takes a list of video paths as input and returns three lists containing the probabilities, positive emotions, and negative emotions for each video frame in each video file

The algorithm shown in Algorithm 2 converts the input frame to grayscale and detects faces using the face_cascade object. The grayscale image is stored in the variable "gray," while the detected faces are stored in the variable "faces." Next, an empty list called "probs" is created to store the probabilities of each emotion, and pve and nve are initialized to zero. For each face in "faces," a region of interest (ROI) is defined using the x, y, w, and h coordinates, and the face is extracted from the frame. The ROI is converted to grayscale, and facial landmarks are detected using the "landmark_detector" object. The landmark coordinates are normalized using mean and standard deviation normalization and stored in the "landmarks" variable as a NumPy array. A quantum circuit is then constructed using the "QuantumCircuit" object and the number of qubits specified by "n_qubits." The Rx and Ry gates are applied to each qubit using a for loop that iterates over the "landmarks" array. The circuit implements a quantum walk algorithm to encode graph information into the quantum circuit. The quantum walk is performed using a sequence of rotations and controlled-Z gates: the rotation angle is determined by the number of landmarks in the graph, and the rotation axis is the y-axis. The loop iterates over all but the last qubit, applying the rotation gate to each qubit and a controlled-Z gate between each adjacent pair of qubits; the last qubit is then rotated by the same angle (see the sketch below). CNOT gates are applied to neighboring qubits using another for loop, and the Fourier transform is applied to the circuit using the "FourierTransformCircuits" object.
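A minimal sketch of this quantum-walk encoding step is given below; the text does not specify the exact angle rule, so π divided by the landmark count is an assumption:

```python
import numpy as np
from qiskit import QuantumCircuit

def quantum_walk_encode(n_qubits, n_landmarks):
    """RY rotations plus CZ gates between adjacent qubits, as described."""
    qc = QuantumCircuit(n_qubits)
    theta = np.pi / n_landmarks            # assumed rotation-angle rule
    for q in range(n_qubits - 1):
        qc.ry(theta, q)                    # rotate each qubit about the y-axis
        qc.cz(q, q + 1)                    # controlled-Z on each adjacent pair
    qc.ry(theta, n_qubits - 1)             # last qubit rotated by the same angle
    return qc
```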

Finally, the qubits are measured, and the counts are obtained using the execute() method from the "Aer" backend. The probabilities of each emotion are then calculated using another for loop that iterates over the "emotions" list. The probability of the "Neutral" emotion is obtained by setting the key to '0000', while the probabilities of the other emotions are obtained by formatting the index of the emotion in the list as a binary string with four bits. If the probability of the "Neutral" emotion is greater than 0.5, pve is set to 0, and nve is set to the sum of the probabilities of the other emotions. Otherwise, pve is set to the sum of the probabilities of the other emotions, and nve is set to the probability of the "Neutral" emotion. Finally, the probabilities, pve, and nve are returned.

Algorithm 2
figure b

illustrates the procedure process_frame. This pseudocode represents a procedure called "process_frame" that takes a frame as input and returns the probabilities, pve (positive valence estimate), and nve (negative valence estimate) of the emotion expressed in the frame

The role of quantum time series is to represent the landmarks extracted from the face detected in each frame of a video as a quantum state. The quantum circuit built from these quantum states is then used to compute the probability of different emotions based on the facial landmarks. The idea is to use quantum computing to process and analyze the facial landmarks in a more efficient and accurate way compared to classical computing. The Fourier transform circuits from the qiskit.circuit.library are used to convert the time series of facial landmark data into the frequency domain. This conversion can help identify patterns or features in the data that may be more useful for emotion recognition.

The three-factor asset pricing model employed to estimate the expected return is given in Eq. 8 [77]:

$$R_{it} - RF_{t} = a_{i} + m_{i}(RM_{t} - RF_{t}) + s_{i}\,SMB_{t} + h_{i}\,HML_{t}$$
(8)

The equation provided describes how daily returns of a company are calculated relative to the risk-free rate (i.e., the one-month Treasury bill rate) and the excess return on a value-weighted market portfolio consisting of stocks from NYSE, AMEX, and NASDAQ. The Fama–French factors are based on six value-weighted portfolios formed on size and book-to-market. The SMBt (Small Minus Big) term represents the difference between the average return on three small portfolios and the average return on three big portfolios for day t, while the HMLt (High Minus Low) term represents the difference between the average return on two value portfolios and the average return on two growth portfolios for day t. These terms, along with RMt − RFt, are used to evaluate the impact of market, size, and book-to-market factors on returns [77].

The Daily Abnormal Return (AR) can be calculated by subtracting the return predicted from the Three-Factor Model, as shown in Eq. 9, from the actual return [78].

$$AR_{it} = R_{it} - E(R_{it}) = R_{it} - \left[ a_{i} + RF_{t} + m_{i}(RM_{t} - RF_{t}) + s_{i}\,SMB_{t} + h_{i}\,HML_{t} \right]$$
(9)
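Written out in code, the abnormal-return computation of Eq. 9 is straightforward; the factor loadings ai, mi, si, and hi are assumed to have been estimated beforehand:

```python
def daily_abnormal_return(R_it, RF_t, RM_t, SMB_t, HML_t, a_i, m_i, s_i, h_i):
    """Eq. (9): actual return minus the three-factor expected return."""
    expected = a_i + RF_t + m_i * (RM_t - RF_t) + s_i * SMB_t + h_i * HML_t
    return R_it - expected
```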

These values are subsequently used in Algorithm 3 as coefficients of the Fama–French three-factor model for the calculation of the abnormal return (ABR) value.

Algorithm 3 implements a quantum algorithm for computing the alpha and beta values of a stock using the Fama–French three-factor model. The algorithm uses amplitude estimation, a quantum algorithm for estimating the amplitude of a specific state in a superposition of states, to perform a linear algebra operation that computes the alpha and beta values.

The quantum_ABR function takes two arguments: the positive and negative stock returns (pve and nve, respectively). It first defines a quantum circuit with two qubits and applies the Quantum Fourier Transform (QFT) to prepare a superposition of all possible states. It then uses the Amplitude Estimation component from the Qiskit Aqua library to estimate the amplitude of the state that corresponds to the linear algebra operation defined in the linear_algebra_operation function. This function computes the matrix multiplication of a 2 × 2 matrix A and a 2 × 1 vector b to obtain the alpha and beta values.

After obtaining the estimated alpha value, the quantum_ABR function calculates the beta value as the difference between the negative stock return and the product of the positive stock return and the estimated alpha value, divided by the sum of the positive and negative stock returns. Finally, the function returns both the alpha and beta values. The compute_ABR function uses the main function to compute the alpha and beta values and then calculates the ABR value using the Fama–French three-factor model, which accounts for three factors believed to affect stock returns: market risk (Rm_Rf), size risk (SMB), and value risk (HML). The values 2.51, −5.59, and −9.01 are the respective three-factor values for March 2023, as per the Fama–French factor data published via Dartmouth College. The function takes five arguments (the positive and negative stock returns and the values of the three factors) and returns the computed abnormal return (ABR) value.
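The beta step described above reduces to simple arithmetic, sketched below; the alpha value is assumed to come from the amplitude-estimation routine, which is not reproduced here:

```python
def beta_value(pve, nve, alpha):
    """beta = (nve - pve * alpha) / (pve + nve), per the description above."""
    return (nve - pve * alpha) / (pve + nve)

# Fama-French three-factor values for March 2023 cited in the text
Rm_Rf, SMB, HML = 2.51, -5.59, -9.01
```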

Algorithm 3
figure c

illustrates the procedure to calculate ABR values based on the Fama–French three-factor model

Quantum circuit

The circuit starts by initializing n_qubits qubits in the zero state. Then, for each landmark in the face detected in the current frame, the corresponding qubit in the circuit is rotated around the x and y axes by the x and y coordinates of the landmark, respectively. This creates a quantum state that is dependent on the facial landmarks. Next, a series of CNOT gates are applied to the qubits to create entanglement between them. This is followed by the application of a Fourier transform circuit, which applies a series of Hadamard and phase gates to the qubits to create a superposition of all possible states.

Finally, the circuit is measured in the computational basis, which collapses the superposition into a classical probability distribution. The probability of each possible state is then calculated by running the circuit shots number of times and counting the number of times each state is observed. These probabilities are returned by the function along with the probabilities of positive and negative emotions based on the values of the probability distribution.

Four qubits are used to represent the 2D coordinates (x and y) of four facial landmarks. Each qubit encodes one coordinate of a landmark, with the amplitude of the quantum state representing the value of the coordinate. The quantum circuit applies rotations around the x and y axes to each qubit based on the corresponding coordinate value, effectively transforming the input landmark data into quantum state amplitudes.

The Fourier transform circuit converts the state from the time domain to the frequency domain. This transformation allows the quantum algorithm to analyze the input landmark data in a different representation, which can potentially reveal different patterns or features [47].

The quantum circuit shown in Fig. 5 takes in 4 qubits. It first applies RX and RY rotations to each qubit, with the angle of rotation determined by the landmark coordinates of the detected face. Then, CNOT gates are applied between adjacent qubits, except for the last qubit. Finally, a quantum Fourier transform (QFT) is applied to all the qubits, and measurements are performed on each qubit.

Fig. 5
figure 5

Quantum circuit representation of our experiment

The measurements result in a binary string of length 4, which represents the state of the qubits at that moment. By executing the circuit multiple times on a simulator, the number of occurrences of each possible binary string is counted.

Starting from the left, the first two boxes represent the input to the quantum circuit. In this case, 68 facial landmarks are detected in the frame, so the input to the circuit is a state vector of size 2^6 = 64, representing the probabilities of each possible combination of the landmarks being present or absent. The "Prep" box initializes the input state vector, with the qubits in the \(|0\rangle\) state.

The next box, labeled "Hadamard", applies a Hadamard gate to each qubit in the input state. This gate puts each qubit into a superposition of the \(|0\rangle\) and \(|1\rangle\) states, which is a key step in the quantum Fourier transform (QFT) algorithm. The remaining boxes in the circuit apply a sequence of controlled-phase (CP) gates, which are used to implement the QFT. Each CP gate has a control qubit and a target qubit, and applies a phase shift to the target qubit if the control qubit is in the \(|1\rangle\) state. The phase shift depends on the position of the qubits in the circuit and is determined by the formula \(e^{2\pi i kj/2^{n}}\), where k is the position of the control qubit, j is the position of the target qubit, and n is the total number of qubits in the circuit.

The CP gates are arranged in a "reverse" order, with the qubits furthest to the right in the circuit acting as the control qubits and the qubits furthest to the left acting as the target qubits. This is because the QFT algorithm operates in reverse order, starting with the most significant bit of the input state. The final box in the circuit is the "Measure" box, which measures each qubit in the circuit and collapses it to either the \(|0\rangle\) or \(|1\rangle\) state. The resulting measurement outcomes are classical bits, which are used to calculate the probability amplitudes of each possible state in the input vector.

Overall, the quantum circuit implements the QFT algorithm on the input state vector, which transforms the probabilities of each possible combination of facial landmarks into their corresponding Fourier coefficients. These coefficients can be used to analyze the frequency components of the landmark data and identify patterns in the facial expressions over time.
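The phase pattern described above can be checked numerically: entry (j, k) of the n-qubit QFT matrix is \(e^{2\pi i jk/2^{n}}/\sqrt{2^{n}}\). The following small NumPy verification is independent of any Qiskit code:

```python
import numpy as np

n = 4
N = 2 ** n
# Build the QFT matrix entry by entry from the phase formula above
qft = np.array([[np.exp(2j * np.pi * j * k / N) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)
assert np.allclose(qft @ qft.conj().T, np.eye(N))  # the QFT matrix is unitary
```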

The circuit can be written in the following form as in Eq. 10:

$$\left| q_{0} q_{1} q_{2} q_{3} \right\rangle \;\xrightarrow{\text{Hadamard gates}}\; \frac{1}{\sqrt{2^{4}}}\sum_{x=0}^{2^{4}-1} \left| x \right\rangle \;\xrightarrow{\text{CNOT gates}}\; \frac{1}{\sqrt{2^{4}}}\sum_{x=0}^{2^{4}-1} \left| x \oplus 15 \right\rangle$$
(10)

where \(|q_{0}q_{1}q_{2}q_{3}\rangle\) represents the initial state of the qubits, and \(\oplus\) denotes bitwise addition modulo 2. The state after the application of the H gates is a superposition of all possible basis states, and the CNOT gates entangle the qubits in such a way that the resulting state is inverted, i.e., each basis state is mapped to its bitwise complement.

Various states of the quantum circuit

  • Input state: The input to the quantum circuit is a 4-qubit register initialized to the \(|0000\rangle\) state, which can be represented as: \(|\psi \rangle = |0000\rangle\)

  • Hadamard gate: The first operation in the circuit is a Hadamard gate applied to each qubit, which creates a superposition state on all qubits. The Hadamard gate can be represented by the following matrix:

    $$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

    The Hadamard gate applied to each qubit can be represented in Eq. 11 as:

    $$|\psi \rangle = (H\otimes H\otimes H\otimes H)|0000\rangle$$
    (11)

    Expanding the tensor product, we get Eqs. 12, 13, 14:

    $$|\psi \rangle = (H|0\rangle )\otimes (H|0\rangle )\otimes (H|0\rangle )\otimes (H|0\rangle )$$
    (12)
    $$|\psi \rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle ) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle ) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle ) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle )$$
    (13)
    $$|\psi \rangle = \frac{1}{4}(|0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle + |0110\rangle + |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle + |1110\rangle + |1111\rangle )$$
    (14)

    The resulting state is a uniform superposition of all possible 4-qubit states.

  • Controlled-Z gate: The next operation in the circuit is a controlled-Z gate between qubits 1 and 2. This gate flips the sign of the state \(|11\rangle\) and leaves all other states unchanged. The controlled-Z gate can be represented by the matrix in Eq. 15:

    $$CZ = |0\rangle \langle 0|\otimes I + |1\rangle \langle 1|\otimes Z$$
    (15)

    where I is the identity matrix, and Z is the Pauli-Z matrix. The controlled-Z gate applied to qubits 1 and 2 can be represented as in Eq. 16:

    $$|\psi \rangle = CZ_{1,2}\,|\psi \rangle$$
    (16)

    The only basis states affected by the controlled-Z gate are those in which qubits 1 and 2 are both 1, so the resulting state is given in Eq. 17:

    $$|\psi \rangle = \frac{1}{4}(|0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle - |0110\rangle - |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle - |1110\rangle - |1111\rangle )$$
    (17)
  • Hadamard gate: The next operation in the circuit is another Hadamard gate applied to each qubit, which again creates a superposition on all qubits. The Hadamard gate is represented by the matrix in Eq. 18:

    $$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$
    (18)

    The Hadamard gate applied to each qubit can be represented as in Eq. 19:

    $$|\psi \rangle = (H\otimes H\otimes H\otimes H)|\psi \rangle$$
    (19)

    Expanding the tensor product, we get Eq. 20:

    $$|\psi \rangle = (H|0\rangle )\otimes (H|0\rangle )\otimes (H|0\rangle )\otimes (H|0\rangle )$$
    (20)

    Applying the Hadamard gate to the \(|0\rangle\) state gives Eq. 21:

    $$H|0\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$$
    (21)

    Substituting this into the expanded equation, we get Eq. 22:

    $$|\psi \rangle = \frac{1}{4}(|0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle + |0110\rangle + |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle + |1110\rangle - |1111\rangle )$$
    (22)

The state \(|\psi \rangle\) after the Hadamard gate operation is a superposition of all 16 possible binary combinations of 4 qubits.
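As a quick sanity check on the amplitudes above (cf. Eq. 14), the uniform superposition produced by the first Hadamard layer can be verified numerically:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H4 = np.kron(np.kron(H, H), np.kron(H, H))      # H applied to all four qubits
state = H4 @ np.eye(16)[0]                      # acting on |0000>
assert np.allclose(state, np.full(16, 0.25))    # all 16 amplitudes equal 1/4
```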

Results

A Shapiro–Wilk W test was employed to assess the normality of abnormal returns. The observed values of W (0.91) and p (0.01) indicate a statistically significant positive impact on abnormal returns, leading to the rejection of H1. To further investigate the impact of trailer release timing, the researchers analyzed two theatrical trailers for the same film, released on different dates. The empirical evidence suggests that the release of movie trailers can lead to positive abnormal returns in stock prices following the film's release, thereby refuting the null hypothesis H2 that there is no relationship between trailer release and stock value. A comparative analysis of the Positive Valence Emotions (PVE), Negative Valence Emotions (NVE), and abnormal returns (ABR) for the top-performing movie trailers of a film, released at different time points, is summarized in Table 2. The results indicate a strong correlation between ABR and the emotional valence of the trailer (PVE and NVE): an increase in PVE tends to increase ABR, whereas an increased NVE results in a lower ABR. Furthermore, the first trailer exhibited a higher PVE than the second trailer, suggesting that it evoked a wider range of emotions among viewers than the latter.

Table 2 Calculated values of PVEs and NVEs and ABR for ten best performing theatrical trailers released on different dates

This suggests that pre-release analysis of movie trailers can be a valuable tool for predicting a film's financial performance and for crafting trailers that evoke stronger positive emotions. Additionally, this study highlights the importance of trailer release timing, suggesting that strategic scheduling of multiple trailer releases for a single film can influence stock value returns. Figure 6 evaluates the effectiveness of movie trailers by examining their ability to elicit positive and negative valence emotions and their impact on ABR. The findings further identify pre-release analysis of movie trailers as a valuable tool for predicting financial performance and optimizing trailer content to evoke positive emotions. Furthermore, the study establishes a direct link between theatrical trailer releases and stock value fluctuations: emotionally intense trailers may lead to higher abnormal returns than those with moderate emotional intensity. Additionally, platforms such as YouTube can serve as a reliable source of data for training emotion-recognition models.

Fig. 6
figure 6

Abnormal return (ABR) of two movie trailers based on values of PVEs and NVEs

By employing abnormal return (ABR) values to represent movie trailer review videos on YouTube, we were able to quantitatively assess the emotional responses of viewers through the proposed quantum-inspired computing model. While previous research has explored the application of quantum computing to facial expression recognition and emotion quantification [79, 80], the proposed quantum model demonstrates superior performance compared to classical models such as CNN, Autoencoder, GoogleNet with One-Class Support Vector Machines (OCSVM), Histogram of Oriented Gradients (HOG) with OCSVM, and RGB and Flow two-stream networks, as shown in Table 3.

Table 3 Performance comparison of proposed and classical models

The experimental results show that the quantum model performed better across various metrics, including accuracy, precision, recall, F1-score, ROC, and AUC. The quantum model's accuracy was 95.65%, significantly higher than that of the other machine learning models, with improvements in overall classification accuracy. The Receiver Operating Characteristic (ROC) curve visualizes the trade-off between the true positive rate (sensitivity) and the false positive rate (1 − specificity) across different classification thresholds; this analysis reveals an inverse relationship between the two metrics. To further compare the performance of the various algorithms, the Area Under the Curve (AUC) metric was employed. Figure 7 presents a comparison of AUC values for popular classification algorithms and the proposed quantum model. The results indicate that the proposed quantum model achieves a superior AUC value of 0.99, outperforming conventional classifiers.

Fig. 7
figure 7

Comparison of different models based on AUC values

Table 4 shows the emotive scores of the ten theatrical trailers that yielded the best performance, and Fig. 8 summarizes their measured PVEs and NVEs. Subsequently, Table 5 shows the emotive scores of the ten theatrical trailers with the worst performance, and Fig. 9 summarizes their measured PVEs and NVEs. Tables 4 and 5 suggest that theatrical trailers with high PVE and low NVE resulted in better returns on stock value.

Table 4 Calculated values of PVEs and NVEs by proposed model for best performing theatrical trailers
Fig. 8 Measured PVEs and NVEs of best performing theatrical trailers

Table 5 Calculated values of PVEs and NVEs by proposed model for worst performing theatrical trailers
Fig. 9 Measured PVEs and NVEs of worst performing theatrical trailers
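
One simple way to operationalize the "high PVE, low NVE" pattern above is to rank trailers by a net emotive score. The sketch below uses a hypothetical PVE − NVE difference with placeholder values; the paper's own emotive score may be defined differently:

```python
import pandas as pd

# Placeholder values; Tables 4 and 5 list the measured PVEs and NVEs
# for the best and worst performing trailers.
trailers = pd.DataFrame({
    "trailer": ["T1", "T2", "T3", "T4", "T5"],
    "PVE": [0.88, 0.72, 0.91, 0.55, 0.80],
    "NVE": [0.07, 0.20, 0.05, 0.33, 0.12],
})

# Hypothetical net emotive score: positive minus negative valence.
trailers["net_emotive"] = trailers["PVE"] - trailers["NVE"]
print(trailers.sort_values("net_emotive", ascending=False))
```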

Discussion of key findings

The central finding of this study is the identification of a positive correlation between the emotional intensity of movie trailers and the financial success of the corresponding films at the box office. Specifically, the results reveal that trailers evoking higher emotional responses from viewers tend to translate into greater commercial performance for the movies.

From a theoretical perspective, these findings lend support to the Mehrabian-Russell PAD (Pleasure, Arousal, Dominance) model, which posits that advertising content can elicit specific emotional responses that drive consumer behavior and decision-making. The current study demonstrates the applicability of this theoretical framework in the context of movie marketing and trailer design.

On a practical level, the results provide valuable insights for filmmakers, producers, and marketing teams. By understanding the importance of emotional content in movie trailers, they can optimize the design and execution of trailer campaigns to better engage audiences and maximize the financial returns of their films. This knowledge can inform strategic decisions regarding trailer content, messaging, and distribution channels to enhance audience appeal and drive increased box office revenues.

Implications and contributions

The use of advanced quantum computing techniques, such as Quantum Walk and Quantum Time Series modeling, represents a significant methodological advancement over conventional linear and basic machine learning approaches. This quantum-inspired framework allows for the capture of complex, nonlinear relationships between various factors influencing a film's commercial performance, providing a more accurate and nuanced understanding of the key drivers of box office success.

By leveraging the power of quantum computing, the proposed approach can uncover meaningful patterns in the emotional dynamics underlying movie trailer performance and consumer decision-making. This represents a novel, data-driven approach to predicting box office outcomes, which can potentially aid filmmakers, producers, and marketing teams in their decision-making processes.
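
To give a concrete flavor of the Quantum Walk primitive, the sketch below simulates a textbook discrete-time, Hadamard-coined walk on a line. It illustrates the ballistic spreading that motivates quantum-walk models, not the paper's exact formulation:

```python
import numpy as np

steps = 50
positions = 2 * steps + 1  # walker can reach +/- steps from the origin

# State amplitudes indexed by (position, coin); coin 0 moves left,
# coin 1 moves right. Start at the center with a symmetric coin state.
state = np.zeros((positions, 2), dtype=complex)
state[steps, 0] = 1 / np.sqrt(2)
state[steps, 1] = 1j / np.sqrt(2)

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

for _ in range(steps):
    state = state @ hadamard.T           # apply the coin at every site
    shifted = np.zeros_like(state)
    shifted[:-1, 0] = state[1:, 0]       # coin-0 amplitude steps left
    shifted[1:, 1] = state[:-1, 1]       # coin-1 amplitude steps right
    state = shifted

# Unlike a classical random walk (spread ~ sqrt(steps)), the quantum
# walk's probability mass spreads linearly in the number of steps.
prob = (np.abs(state) ** 2).sum(axis=1)
print(f"total probability = {prob.sum():.3f}")
```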

Limitations and future research

While the study offers important insights, it is not without limitations. Primarily, it relies on HSX as the sole virtual stock market for measuring financial returns on movie trading. While existing literature supports the authenticity of HSX, a comprehensive validation against another virtual stock market remains elusive. Nevertheless, the study offers valuable insights into the relationship between the emotional content of movie trailers and relative box office returns, and it highlights the potential benefits of pre-release emotive analysis. Furthermore, the analysis was conducted on a relatively small sample of 141 movie trailers, all released after a specific cutoff date. Expanding the dataset to include a larger and more diverse set of trailers over a longer time period could further validate the findings and improve the generalizability of the results.

Additionally, the study focused solely on the emotional intensity of movie trailers and its relationship with financial success. Other factors, such as the specific types of emotions evoked, the narrative structure of the trailers, and their interaction with other marketing channels, were not explored in depth. Future research could delve deeper into these aspects to provide a more comprehensive understanding of the drivers of a film's commercial performance.

The current model relies on the identification of macro-facial expressions; micro-expressions, though indicative of core emotional valence, are not explicitly considered owing to the limited availability of suitable datasets. Future research could explore the integration of micro-expression analysis to enhance the model's accuracy. Incorporating additional data sources, such as audience sentiment from social media, critic reviews, and box office data, could further strengthen the predictive capabilities of the proposed technique. Exploring the application of this approach to domains beyond the movie industry, such as advertising and marketing, could also be a fruitful avenue for future investigation.

Overall, this study represents an important step forward in understanding the role of emotional content in movie marketing and its impact on financial success. The innovative use of quantum computing techniques opens up new avenues for researchers and industry practitioners to uncover meaningful insights and enhance decision-making in the fast-paced and highly competitive entertainment industry.

Conclusion

The proposed quantum-inspired model has the potential to empower filmmakers and production houses to create emotionally resonant movie trailers. The model showed promising results in predicting the emotional intensity of movie trailers, which could help in designing trailers that elicit positive emotions. Future research could explore the model's effectiveness in predicting the financial success of low-budget and regional movies that are not listed on HSX. Overall, the proposed quantum model holds promise as a powerful tool for the movie and advertising industries, enabling production houses and marketers to make data-driven decisions and create emotionally engaging content that resonates with their target audiences. Such an application could substantially help in allocating advertising budgets efficiently and maximizing profits. Additionally, the framework could be extended to evaluate product design, optimize advertising efforts, and inform marketing decisions across various businesses.

Data availability

No datasets were generated or analysed during the current study.

References

  1. Rokhade AA, Deivam A, Shettigar AJV, TM, Prasad VRB. Intelligent advertisement generation: harnessing deep learning techniques. In 2024 3rd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Jun. 2024, pp. 628–637. https://doi.org/10.1109/ICAAIC60222.2024.10575640.

  2. Jamil K, Dunnan L, Gul RF, Shehzad MU, Gillani SHM, Awan FH. Role of social media marketing activities in influencing customer intentions: a perspective of a new emerging era. Front Psychol. 2022. https://doi.org/10.3389/fpsyg.2021.808525.

  3. Madongo CT, Zhongjun T. A movie box office revenue prediction model based on deep multimodal features. Multimed Tools Appl. 2023;82(21):31981–2009. https://doi.org/10.1007/s11042-023-14456-4.

  4. Ni Y, Dong F, Zou M, Li W. Movie box office prediction based on multi-model ensembles. Information. 2022;13(6):Article no 6. https://doi.org/10.3390/info13060299.

  5. Kishan MR, Mahadev DN. Reverse marketing strategies (review rating, paid critics, and peer pressure) for content delivery in modern movie making: a comparative analysis of past and present practices. Educ Admin Theory Pract. 2024;30(5):Article no 5. https://doi.org/10.53555/kuey.v30i5.5478.

  6. Cuntz A, Muscarnera A, Oguguo PC, Sahli M. Million dollar baby—a primer on film finance practices in the US movie industry. Ind Innov. 2024. https://doi.org/10.1080/13662716.2024.2328004.

  7. Liu H, Shi H. Analysis report on the development of the Chinese Film Industry in 2023. J Chin Film Stud. 2024;4(1):121–51. https://doi.org/10.1515/jcfs-2024-0019.

  8. Madongo CT, Tang Z, Jahanzeb H. Movie box-office revenue prediction model by mining deep features from trailers using recurrent neural networks. SSRN Electron J. 2022. https://doi.org/10.2139/ssrn.4139565.

  9. Abdulrashid I, Ahmad IS, Musa A, Khalafalla M. Impact of social media posts’ characteristics on movie performance prior to release: an explainable machine learning approach. Electron Commer Res. 2024. https://doi.org/10.1007/s10660-024-09852-3.

  10. Wang D et al. A movie box office revenues prediction algorithm based on human-machine collaboration feature processing. J Eng Res. 2022. https://kuwaitjournals.org/jer/index.php/JER/article/view/19489. Accessed 3 Sep 2024.

  11. Iida T, Goto A, Fukuchi S, Amasaka K. A study on effectiveness of movie trailers boosting customers appreciation desire: a customer science approach using statistics and GSR. JBER. 2012;10(6):375. https://doi.org/10.19030/jber.v10i6.7028.

  12. McGowan N, Sagredo-Olivenza I, Fraile-Narvaez M. Metrics of film success: defining the 21st-century blockbuster in the USA through theatrical release and profitability. Creative Ind J. 2024. https://doi.org/10.1080/17510694.2024.2357787.

  13. Sharda R, Delen D. Predicting box-office success of motion pictures with neural networks. Expert Syst Appl. 2006;30(2):243–54. https://doi.org/10.1016/j.eswa.2005.07.018.

  14. Zhang L, Luo J, Yang S. Forecasting box office revenue of movies with BP neural network. Expert Syst Appl. 2009;36(3, Part 2):6580–7. https://doi.org/10.1016/j.eswa.2008.07.064.

  15. Quader N, Gani MO, Chaki D, Ali MH. A machine learning approach to predict movie box-office success. In 2017 20th International Conference of Computer and Information Technology (ICCIT), Dec. 2017, pp. 1–7. https://doi.org/10.1109/ICCITECHN.2017.8281839.

  16. Parimi R, Caragea D. Pre-release box-office success prediction for motion pictures. In: Perner P, editor. Machine learning and data mining in pattern recognition. Berlin, Heidelberg: Springer; 2013. p. 571–85.

  17. Bhadrashetty A, Patil S. Movie success and rating prediction using data mining. J Sci Res Technol. 2024. https://doi.org/10.61808/jsrt78.

  18. Ahmad IS, Bakar AA, Yaakub MR, Muhammad SH. A survey on machine learning techniques in movie revenue prediction. SN Comput Sci. 2020;1(4):235. https://doi.org/10.1007/s42979-020-00249-1.

  19. Mbunge E, Fashoto SG, Bimha H. Prediction of box-office success: a review of trends and machine learning computational models. Int J Bus Intell Data Mining. 2022;20(2):192–207. https://doi.org/10.1504/IJBIDM.2022.120825.

  20. Chen M, Ferro GM, Sornette D. On the use of discrete-time quantum walks in decision theory. PLoS ONE. 2022;17(8): e0273551. https://doi.org/10.1371/journal.pone.0273551.

  21. Behrens R, et al. Leveraging analytics to produce compelling and profitable film content. J Cult Econ. 2021;45(2):171–211. https://doi.org/10.1007/s10824-019-09372-1.

  22. An Y, An J, Cho S. Artificial intelligence-based predictions of movie audiences on opening Saturday. Int J Forecast. 2021;37(1):274–88. https://doi.org/10.1016/j.ijforecast.2020.05.005.

  23. Wang Z, Zhang J, Ji S, Meng C, Li T, Zheng Y. Predicting and ranking box office revenue of movies based on big data. Inf Fusion. 2020;60:25–40. https://doi.org/10.1016/j.inffus.2020.02.002.

  24. Liao Y, Peng Y, Shi S, Shi V, Yu X. Early box office prediction in China’s film market based on a stacking fusion model. Ann Oper Res. 2022;308(1):321–38. https://doi.org/10.1007/s10479-020-03804-4.

  25. Tang Z, Dong S. A total sales forecasting method for a new short life-cycle product in the pre-market period based on an improved evidence theory: application to the film industry. Int J Prod Res. 2021;59(22):6776–90. https://doi.org/10.1080/00207543.2020.1825861.

  26. Zhou Y, Yen GG. Evolving deep neural networks for movie box-office revenues prediction. In 2018 IEEE Congress on Evolutionary Computation (CEC). 2018, pp. 1–8. https://doi.org/10.1109/CEC.2018.8477691.

  27. Sahu S, Kumar R, Pathan MS, Shafi J, Kumar Y, Ijaz MF. Movie popularity and target audience prediction using the content-based recommender system. IEEE Access. 2022;10:42044–60. https://doi.org/10.1109/ACCESS.2022.3168161.

  28. Lash MT, Zhao K. Early predictions of movie success: the who, what, and when of profitability. J Manag Inf Syst. 2016;33(3):874–903. https://doi.org/10.1080/07421222.2016.1243969.

  29. Xue D. A study of evolution of film marketing in the digital age. SHS Web Conf. 2024;193:04003. https://doi.org/10.1051/shsconf/202419304003.

  30. Papalampidi P, Keller F, Lapata M. Finding the right moment: human-assisted trailer creation via task composition. IEEE Trans Pattern Anal Mach Intell. 2024;46(1):292–304. https://doi.org/10.1109/TPAMI.2023.3323030.

  31. Souza TLD, Nishijima M, Pires R. Revisiting predictions of movie economic success: random forest applied to profits. Multimed Tools Appl. 2023;82(25):38397–420. https://doi.org/10.1007/s11042-023-15169-4.

  32. Dwivedi YK, et al. Setting the future of digital and social media marketing research: perspectives and research propositions. Int J Inf Manage. 2021;59: 102168. https://doi.org/10.1016/j.ijinfomgt.2020.102168.

  33. Kampani J, Nicolaides C. Information consistency as response to pre-launch advertising communications: The case of YouTube trailers. Front Commun. 2023. https://doi.org/10.3389/fcomm.2022.1022139.

  34. Kim A, Trimi S, Lee S-G. Exploring the key success factors of films: a survival analysis approach. Serv Bus. 2021;15(4):613–38. https://doi.org/10.1007/s11628-021-00460-x.

  35. Xu W, Yao Z, He D, Cao L. Understanding online review helpfulness: a pleasure-arousal-dominance (PAD) model perspective. Aslib J Inf Manag. 2023. https://doi.org/10.1108/AJIM-04-2023-0121.

  36. Manthiou A, Hickman E, Klaus P. Beyond good and bad: challenging the suggested role of emotions in customer experience (CX) research. J Retail Consum Serv. 2020;57: 102218. https://doi.org/10.1016/j.jretconser.2020.102218.

  37. Namba S, Sato W, Osumi M, Shimokawa K. Assessing automated facial action unit detection systems for analyzing cross-domain facial expression databases. Sensors. 2021;21(12):Article no 12. https://doi.org/10.3390/s21124222.

  38. Zou Z, Mubin O, Alnajjar F, Ali L. A pilot study of measuring emotional response and perception of LLM-generated questionnaire and human-generated questionnaires. Sci Rep. 2024;14(1):2781. https://doi.org/10.1038/s41598-024-53255-1.

  39. Höfling TTA, Alpers GW. Automatic facial coding predicts self-report of emotion, advertisement and brand effects elicited by video commercials. Front Neurosci. 2023. https://doi.org/10.3389/fnins.2023.1125983.

  40. Sels L, Tran A, Greenaway KH, Verhofstadt L, Kalokerinos EK. The social functions of positive emotions. Curr Opin Behav Sci. 2021;39:41–5. https://doi.org/10.1016/j.cobeha.2020.12.009.

  41. Nagai Y, Jones CI, Sen A. Galvanic Skin Response (GSR)/electrodermal/skin conductance biofeedback on epilepsy: a systematic review and meta-analysis. Front Neurol. 2019. https://doi.org/10.3389/fneur.2019.00377.

  42. Jabbooree AI, Khanli LM, Salehpour P, Pourbahrami S. A novel facial expression recognition algorithm using geometry β-skeleton in fusion based on deep CNN. Image Vis Comput. 2023;134: 104677. https://doi.org/10.2139/ssrn.4268767.

  43. Pise AA, et al. Methods for facial expression recognition with applications in challenging situations. Comput Intell Neurosci. 2022;2022(1):9261438. https://doi.org/10.1155/2022/9261438.

  44. Díaz P, Vásquez E, Shiguihara P. A survey of video analysis based on facial expression recognition. Eng Proc. 2023;42(1):1. https://doi.org/10.3390/engproc2023042003.

  45. Rifai S, Bengio Y, Courville A, Vincent P, Mirza M. Disentangling factors of variation for facial expression recognition. In: Fitzgibbon A, Lazebnik S, Perona P, Sato Y, Schmid C, editors. Computer vision—ECCV 2012. Berlin, Heidelberg: Springer; 2012. p. 808–22.

  46. Essa IA, Pentland AP. Coding, analysis, interpretation, and recognition of facial expressions. IEEE Trans Pattern Anal Mach Intell. 1997;19(7):757–63. https://doi.org/10.1109/34.598232.

  47. Ahmad IS, Bakar AA, Yaakub MR. Movie revenue prediction based on purchase intention mining using YouTube trailer reviews. Inf Process Manage. 2020;57(5): 102278. https://doi.org/10.1016/j.ipm.2020.102278.

  48. Cui X, Tao W, Cui X. Affective-knowledge-enhanced graph convolutional networks for aspect-based sentiment analysis with multi-head attention. Appl Sci. 2023;13(7):Article no 7. https://doi.org/10.3390/app13074458.

  49. Hur M, Kang P, Cho S. Box-office forecasting based on sentiments of movie reviews and Independent subspace method. Inf Sci. 2016;372:608–24. https://doi.org/10.1016/j.ins.2016.08.027.

  50. Wiles MA, Danielova A. The worth of product placement in successful films: an event study analysis. Int Retail Market Rev. 2013;9(1):23–48. https://doi.org/10.10520/EJC142832.

  51. Martinez-Blasco M, Serrano V, Prior F, Cuadros J. Analysis of an event study using the Fama-French five-factor model: teaching approaches including spreadsheets and the R programming language. Financ Innov. 2023;9(1):76. https://doi.org/10.1186/s40854-023-00477-3.

  52. Karray S, Debernitz L. The effectiveness of movie trailer advertising. Int J Advert. 2017;36(2):368–92. https://doi.org/10.1080/02650487.2015.1090521.

  53. Blitz D, Hanauer MX, Honarvar I, Huisman R, van Vliet P. Beyond Fama-French factors: Alpha from short-term signals. Financ Anal J. 2023;79(4):96–117. https://doi.org/10.1080/0015198X.2023.2173492.

  54. Plastun A, Sibande X, Gupta R, Ji Q. Price effects after one-day abnormal returns and crises in the stock markets. Res Int Bus Financ. 2024;70: 102308. https://doi.org/10.1016/j.ribaf.2024.102308.

  55. Agarwal JD, Agarwal M, Agarwal A, Agarwal Y. Economics of cryptocurrencies: artificial intelligence, blockchain, and digital currency. In: Information for efficient decision making. World Scientific; 2020. pp. 331–430. https://doi.org/10.1142/9789811220470_0013.

  56. Kim HH-D, Park K. Impact of environmental disaster movies on corporate environmental and financial performance. Sustainability. 2021. https://doi.org/10.3390/su13020559.

  57. Goyal G, Singh J, Inder S. A novel framework for correlating content quality on OTT platforms with their stock value. In 2020 International Conference on Smart Electronics and Communication (ICOSEC). 2020, pp. 377–382. https://doi.org/10.1109/ICOSEC49089.2020.9215400.

  58. Munawaroh U, Sunarsih S. The effects of Fama-French five factor and momentum factor on Islamic stock portfolio excess return listed in ISSI. J Eko Keu Isl. 2020. https://doi.org/10.20885/jeki.vol6.iss2.art4.

  59. Sarvakar K, Senkamalavalli R, Raghavendra S, Santosh Kumar J, Manjunath R, Jaiswal S. Facial emotion recognition using convolutional neural networks. Mater Today Proc. 2023;80:3560–4. https://doi.org/10.1016/j.matpr.2021.07.297.

  60. Kim J, Song R, Kang W. The effect of temporal variation of prelaunch expectations on stock market response in the motion picture industry. J Prod Innov Manag. 2022;39(4):515–33. https://doi.org/10.1111/jpim.12616.

  61. Delcey T, Sergi F. The efficient market hypothesis and rational expectations macroeconomics. How did they meet and live (happily) ever after? Eur J Hist Econ Thought. 2023;30(1):86–116. https://doi.org/10.1080/09672567.2022.2108869.

  62. Simon FM, Schroeder R. Big data goes to hollywood: the emergence of big data as a tool in the American film industry. In: Hunsinger J, Allen MM, Klastrup L, editors. Second international handbook of internet research. Dordrecht: Springer, Netherlands; 2020. p. 549–67. https://doi.org/10.1007/978-94-024-1555-1_63.

  63. Singh J, Goyal G. Anticipating movie success through crowdsourced social media videos. Comput Hum Behav. 2019;101:484–94. https://doi.org/10.1016/j.chb.2018.08.050.

  64. Fan Y, Foutz N, James GM, Jank W. Functional response additive model estimation with online virtual stock markets. Ann Appl Stat. 2014;8(4):2435–60. https://doi.org/10.1214/14-AOAS781.

  65. Canbolat M, Sohn K, Gardner JT. A parsimonious predictive model of movie performance: a managerial tool for supply chain members. IJORIS. 2020;11(4):46–61. https://doi.org/10.4018/IJORIS.2020100103.

  66. El-Sappagh S, Ali F, Abuhmed T, Singh J, Alonso JM. Automatic detection of Alzheimer’s disease progression: an efficient information fusion approach with heterogeneous ensemble classifiers. Neurocomputing. 2022;512:203–24. https://doi.org/10.1016/j.neucom.2022.09.009.

  67. ElSayed Y, ElSayed A, Abdou MA. An automatic improved facial expression recognition for masked faces. Neural Comput Appl. 2023;35(20):14963–72. https://doi.org/10.1007/s00521-023-08498-w.

  68. Peres VMX, Musse SR. Towards the creation of spontaneous datasets based on Youtube reaction videos. In: Bebis G, Athitsos V, Yan T, Lau M, Li F, Shi C, Yuan X, Mousas C, Bruder G, editors. Advances in visual computing. Cham: Springer International Publishing; 2021. p. 203–15. https://doi.org/10.1007/978-3-030-90436-4_16.

  69. Mokryn O, Bodoff D, Bader N, Albo Y, Lanir J. Sharing emotions: determining films’ evoked emotional experience from their online reviews. Inf Retrieval J. 2020;23(5):475–501. https://doi.org/10.1007/s10791-020-09373-1.

  70. Lopezosa C, Orduna-Malea E, Pérez-Montoro M. Making video news visible: identifying the optimization strategies of the cybermedia on YouTube using web metrics. J Pract. 2020;14(4):465–82. https://doi.org/10.1080/17512786.2019.1628657.

  71. Egger DJ, et al. Quantum computing for finance: state-of-the-art and future prospects. IEEE Trans Quant Eng. 2020;1:1–24. https://doi.org/10.1109/TQE.2020.3030314.

  72. Qu D, Marsh S, Wang K, Xiao L, Wang J, Xue P. Deterministic search on star graphs via quantum walks. Phys Rev Lett. 2022;128(5): 050501. https://doi.org/10.1103/PhysRevLett.128.050501.

  73. Shakeel A. Efficient and scalable quantum walk algorithms via the quantum Fourier transform. Quantum Inf Process. 2020;19(9):323. https://doi.org/10.1007/s11128-020-02834-y.

  74. Elberse A, Anand B. The effectiveness of pre-release advertising for motion pictures: an empirical investigation using a simulated market. Inf Econ Policy. 2007;19(3):319–43. https://doi.org/10.1016/j.infoecopol.2007.06.003.

  75. Kumar S, Sagar V, Punetha D. A comparative study on facial expression recognition using local binary patterns, convolutional neural network and frequency neural network. Multimed Tools Appl. 2023;82(16):24369–85. https://doi.org/10.1007/s11042-023-14753-y.

  76. Tang Y, Zhang X, Hu X, Wang S, Wang H. Facial expression recognition using frequency neural network. IEEE Trans Image Process. 2021;30:444–57. https://doi.org/10.1109/TIP.2020.3037467.

  77. Fama EF, French KR. Multifactor explanations of asset pricing anomalies. J Finance. 1996;51(1):55–84. https://doi.org/10.1111/j.1540-6261.1996.tb05202.x.

  78. Kim T, Kim TS, Park YJ. Cross-sectional expected returns and predictability in the Korean stock market. Emerg Mark Financ Trade. 2020;56(15):3763–84. https://doi.org/10.1080/1540496X.2019.1576126.

  79. Mengoni R, Incudini M, Di Pierro A. Facial expression recognition on a quantum computer. Quantum Mach Intell. 2021;3(1):8. https://doi.org/10.1007/s42484-020-00035-5.

  80. Singh J, Ali F, Shah B, Bhangu KS, Kwak D. Emotion quantification using variational quantum state fidelity estimation. IEEE Access. 2022;10:115108–19. https://doi.org/10.1109/ACCESS.2022.3216890.

Funding

This work was supported by the Researchers Supporting Project number (RSP2025R395), King Saud University, Riyadh, Saudi Arabia.

Author information

Contributions

JS: prepared the first draft of the manuscript, performed the experimentation, and supervised the analysis; KB: compiled the data, performed the analysis, and validated the results; FA: helped in reviewing and proofreading; AZ: arranged funding; BS: helped in revising proofs.

Corresponding authors

Correspondence to Jaiteg Singh or Farman Ali.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Cite this article

Singh, J., Bhangu, K.S., Ali, F. et al. Quantum-inspired framework for big data analytics: evaluating the impact of movie trailers and its financial returns. J Big Data 12, 22 (2025). https://doi.org/10.1186/s40537-025-01069-x

