Abstract
Purpose
This paper aims to conduct an interdisciplinary systematic literature review (SLR) of fake news research and to advance the socio-technical understanding of digital information practices and platforms in business and management studies.
Design/methodology/approach
The paper applies a focused SLR method to analyze articles on fake news published in business and management journals from 2010 to 2020.
Findings
The paper analyzes the definition, theoretical frameworks, methods and research gaps of fake news in the business and management domains. It also identifies some promising research opportunities for future scholars.
Practical implications
The paper offers practical implications for various stakeholders who are affected by or involved in fake news dissemination, such as brands, consumers and policymakers. It provides recommendations to cope with the challenges and risks of fake news.
Social implications
The paper discusses the social consequences and future threats of fake news, especially in relation to social networking and social media. It calls for more awareness and responsibility from online communities to prevent and combat fake news.
Originality/value
The paper contributes to the literature on information management by showing the importance and consequences of fake news sharing for societies. It is among the first systematic reviews in the field to cover studies from different disciplines while focusing on business and management research.
Citation
Farhoudinia, B., Ozturkcan, S. and Kasap, N. (2023), "Fake news in business and management literature: a systematic review of definitions, theories, methods and implications", Aslib Journal of Information Management, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/AJIM-09-2022-0418
Publisher
Emerald Publishing Limited
Copyright © 2023, Bahareh Farhoudinia, Selcen Ozturkcan and Nihat Kasap
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Social media is widely regarded as a primary news source for many people. It is accessible, often free and easily promoted, making it easy to spread information, including fake news. The COVID-19 pandemic has only heightened concerns about fake news, as the spread of false information about the pandemic, such as claims linking 5G cell towers to damage to the human immune system, has led to serious consequences (Mourad et al., 2020). In recent years, there has been a growing body of research on fake news, with a particular focus during the pandemic (Elías and Catalan-Matamoros, 2020; Hartley and Vu, 2020; Islam et al., 2020a; Laato et al., 2020; Marin, 2021; Naeem and Bhatti, 2020; Pennycook et al., 2020b). Thelwall and Thelwall (2020) claimed that Twitter positively impacted information sharing during the COVID-19 pandemic, yet the platform was also widely used to spread fake news. Among many similar electoral events, the 2016 USA presidential election marked a noteworthy milestone in which Facebook-based fake news sources were believed to have significantly impacted the election result (Meel and Vishwakarma, 2020). Barfar (2019) studied the spread of fake political news and observed strong reactions to fake liberal news, including anger. Fake stories spread faster than true ones on Twitter (Vosoughi et al., 2018), making companies and organizations vulnerable to the consequences of fake news. Fake news about a company can easily influence its stock price, leading to severe financial losses.
The proliferation of health-related fake news on social media is a significant concern with far-reaching consequences. Several studies have reviewed the literature on this issue and identified various types of misinformation, such as false claims about treatments, disease origins and conspiracy theories. The impact of this misinformation on public health has also been discussed, including its promotion of vaccine hesitancy, encouragement of risky health behaviors and erosion of trust in public health authorities.
Several approaches have been suggested to combat the spread of fake news, including fact-checking, social media platform policies, public health campaigns and the use of artificial intelligence. However, these approaches have limitations and require further research and development to improve their accuracy and efficiency.
In this context, we contribute to this emerging field by providing a comprehensive review of the literature related to fake news from a business and management perspective. Our review includes different types of fake news, definitions, theoretical and psychological background and fake news detection approaches through a systematic review. We also highlight the possible consequences of fake news on business and emphasize the importance of this topic from a managerial point of view.
Our review addresses research gaps and provides future research directions for multiple disciplines such as computer science, social science, psychology and policymaking, which cross paths with business and management researchers. We have sourced our collection of research papers from academic journals published within the business and management field by including only manuscripts published in Academic Journal Guide (AJG)-listed journals [1]. We believe that our review provides interested researchers from various disciplines with valuable insights to advance their academic research on fake news.
The paper is structured as follows: Sections 2 and 3 provide a literature review and methodology, respectively. Section 4 presents the findings. Section 5 discusses limitations, implications and future research opportunities. Finally, Section 6 summarizes the conclusions.
2. Literature review
Many review papers have focused on the topic of fake news detection methods. For instance, Nirav Shah and Ganatra (2022) reviewed 68 articles published between 2015 and 2021 to introduce various challenges in developing effective fake news detection models, such as the absence of standard datasets, various types of fake news and the need for detection models in diverse languages and cultural contexts. Similarly, Thompson et al. (2022) reviewed 62 articles published between 2016 and 2020 to provide an overview of online fake news detection approaches, including content-based, social network-based and hybrid approaches. The authors also discussed the most widely used datasets, evaluation metrics and machine learning algorithms.
In another study, Mridha et al. (2021) conducted a comprehensive review of deep learning techniques for detecting fake news. The paper covers 77 articles published between 2016 and 2021 and highlights the need for more realistic datasets and detection models in diverse languages and cultural contexts. Similarly, Shahzad et al. (2022) provided an overview of the relationship between big data analytics and context-based fake news detection. They reviewed 41 articles published between 2016 and 2021, addressing the methods used for fake news detection, including traditional methods such as fact-checking and new methods based on big data analytics. The authors revealed several research gaps, such as the lack of research on the effectiveness of detection methods, the ethical and legal implications of using big data analytics for fake news detection and the impact of fake news on public opinion and decision-making.
Furthermore, Saquete et al. (2020) reviewed natural language processing methods for fake news detection, including text classification, sentiment analysis and fact-checking. The authors addressed challenges in the field of natural language processing (NLP), such as the problem of biased training data and the need to develop more effective approaches to combat the spread of fake news on social media. Islam et al. (2020b) focused on using deep learning to detect misinformation on social media. They reviewed 70 articles published between 2015 and 2021, covering deep learning methods such as neural networks, convolutional neural networks, recurrent neural networks and attention mechanisms. The authors suggested that future research should focus on developing more advanced deep learning techniques to effectively detect and combat the spread of misinformation on online social networks.
Furthermore, Kaddoura et al. (2022) provided an overview of spam detection and classification. They reviewed 122 papers published between 2011 and 2019, examining the techniques used to detect spam, including rule-based approaches, content-based approaches and machine learning algorithms. The authors highlighted the importance of feature selection in spam detection and the need for large and diverse datasets. They emphasized the need for more research in this area, as spam and fake news pose a significant threat to individuals and organizations.
In a more interdisciplinary approach, Zhou and Zafarani (2020) provided a review of fake news detection methods and challenges and an overview of fundamental theories across various disciplines to encourage interdisciplinary research on fake news. The authors discussed various types of fake news, such as fabricated news, clickbait, satire and propaganda. They addressed the impact of fake news on society and noted that there is no one-size-fits-all solution for detecting fake news. They also highlighted the need for more research in this field.
2.1 Fake news about health
Fake news related to health is a pernicious phenomenon that can lead to serious crises, and as such, it has attracted significant attention from scholars across various disciplines. Wang et al. (2019) conducted a thorough review of the literature on health-related misinformation on social media, examining 40 research papers published between 2010 and 2020. They identified different types of health-related misinformation, including false claims about treatment efficacy, disease origins and conspiracy theories. Moreover, they discussed the deleterious impact of such misinformation on public health, including vaccine hesitancy, risky health behaviors and erosion of trust in public health authorities.
Building upon Wang et al.'s seminal work, Melchior and Oliveira (2022) conducted a comprehensive review of the literature on health-related fake news on social media. They identified key factors contributing to the spread of such fake news on social media platforms and critically reviewed the approaches aimed at combating its proliferation, including fact-checking, social media platform policies and public health campaigns.
Similarly, Balakrishnan et al. (2022) examined the infodemic and fake news related to COVID-19, drawing on 74 research papers published between 2020 and 2021. Their review revealed false claims regarding the virus's origins, conspiracy theories and inaccurate information about the effectiveness of vaccines and treatments. The authors highlighted research gaps, such as a lack of empirical studies on the impact of the infodemic on public health and healthcare systems and a limited understanding of the effectiveness of interventions designed to combat the spread of fake news.
Ahmad et al. (2022) provided an insightful overview of the role of fake news in the COVID-19 pandemic and the use of artificial intelligence to combat its spread. They reviewed 56 research papers published between January and December 2020 and noted that existing approaches have limitations, such as the need for more accurate and efficient natural language processing techniques, better training data and the potential for biases and errors in artificial intelligence (AI)-based systems. They recommended future research that employs interdisciplinary approaches from computer science, communication studies and the social sciences.
Aïmeur et al. (2023) provided a general overview of the literature on fake news, disinformation and misinformation issues. They covered research papers published between 2016 and 2021, exploring the definitions of these concepts and the impact of fake news on society, politics and the economy. The authors highlighted the potential consequences of fake news, including the erosion of trust in institutions and the media, political polarization and the spread of conspiracy theories. They also discussed various approaches and strategies to address fake news, such as fact-checking, media literacy and regulatory interventions, emphasizing the need for more effective methods to detect and combat fake news.
Kim et al. (2021) reviewed fake news from the perspective of news creation and consumption, drawing on 91 research studies published between 2014 and 2020. They provided various approaches to fake news, such as content analysis, social network analysis and machine learning. The authors revealed research gaps, including a limited understanding of psychological and social factors that influence the consumption of fake news. They also highlighted the need for more research on the ethical and legal implications of fake news.
In their recent study, Vasist and Krishnan (2023) conducted a comprehensive literature review, analyzing the relationship between fake news and sustainability-focused innovations from various perspectives. Their review incorporated 31 research papers published between 2016 and 2021, suggesting that fake news can act as a barrier to adopting sustainability-focused innovations by spreading misleading information about their benefits. To combat this issue, the authors recommended that future research focus on developing strategies to counteract the spread of fake news associated with sustainability-focused innovations.
Similarly, Damstra et al. (2021) also provided a literature review, exploring different forms of intentional deception, including fake news, disinformation, propaganda and conspiracy theories. Their analysis highlighted the significant impact that intentional deception can have on both society and individuals, such as the erosion of trust in institutions and the potential for political and social polarization.
Furthermore, Alkhamees et al. (2021) conducted a thorough review of the literature on user trustworthiness in social media, examining 69 papers published between 2004 and 2018. Their paper described various dimensions of user trustworthiness, including reliability, integrity and competence and emphasized the importance of developing effective strategies to enhance user trustworthiness. These strategies included providing users with feedback and incentives and promoting transparency and accountability.
To advance the emerging field of fake news research, we present a comprehensive literature review that explores the topic from the perspective of articles published within the business and management disciplines. Our systematic review offers an overview of different types of fake news, their definitions, theoretical and psychological background and fake news detection approaches. Importantly, we also provide insights into the potential consequences of fake news on businesses and address the significance of this topic from a managerial standpoint, distinguishing our work from previous systematic review papers.
To ensure the rigor of our review, we retrieved a collection of research papers exclusively from journals listed in the AJG, which is a trusted guide of academic journals within the business and management field as endorsed by the Chartered Association of Business Schools (ABS). By addressing research gaps and providing future research directions, our review serves as a valuable resource for multi-disciplinary and interdisciplinary researchers in computer science, social science, psychology, policymaking and management. Our review aims to assist researchers from diverse disciplines in identifying fruitful avenues for further research in the realm of fake news.
The extraordinary growth of fake news and its dangers to societies by threatening democracy, justice, freedom of expression and public trust further intensify the need for emerging research on this topic. This manuscript aims to contribute to the growing information management literature by providing a systematic review of the published business and management literature in the AJG-listed journals between the years 2010 and 2020 to shed light on several research questions: First, we unpack the definition of “fake news” in the business and management literature during the period of our chosen focus. Next, we seek out the most common theoretical frameworks that scholars in business and management use to study fake news. Then, we identify the most commonly employed research methods used in these studies. We explore the major research gaps in past business and management studies included in our dataset. Finally, we uncover some promising research opportunities to point future researchers in the right direction.
3. Methodology
This paper aims to be an unbiased and reproducible study (Nolan and Garavan, 2016; Nguyen et al., 2018). We adhere to the five-step systematic review process outlined by Denyer and Tranfield (2009): (1) formulate the research questions; (2) find studies; (3) select and evaluate studies; (4) analyze the findings; and (5) report the results. The research questions were established first. Then, a search strategy was developed to cover all relevant past research in the field using keywords. The databases were searched accordingly and the results were collected. Filtering criteria were then established to refine the results.
The initial search for the term “fake news” and its synonyms, such as “hoax news” and “disinformation,” was conducted in the Scopus, Web of Science, Ebsco, ProQuest, Jstor and Emerald databases. This research focuses on fake news on social media. This review is restricted to English-language articles published in the AJG-listed journals, which are endorsed by the ABS. The time frame of 2010–2020 was selected as this was when fake news on social media became a significant challenge for societies and businesses, while not many studies were published before 2010. Figure 1 illustrates the search steps and the number of papers excluded in every phase, where the funneling led to eighty-one articles being included in this systematic review.
4. Analysis of findings
The analysis indicated that the frequency of articles on the topic showed a dramatic increase in 2019. Figure 2 illustrates the frequency of all articles published with respect to their publication years, ranging from 2010 to 2020. The emerging high frequency of the articles only in recent years could be due to the novelty of the research topic and the 2016 USA presidential election (Wang et al., 2019; Carlson, 2020), marking the relevancy and timeliness of the field.
The articles included in the dataset came from various disciplines that have been published in the business and management literature. Figure 3 indicates the number of articles in each field, including journalism, health, psychology, political science, information science, computer science, management and marketing. Determining a single specific discipline for some articles proved elusive; therefore, they were classified into multiple domains where relevant. Five review articles were found in the literature: Lozano et al. (2020) studied and reviewed computerized veracity assessment methods; Meel and Vishwakarma (2020) focused on theories from different disciplines to solve the problem of truthfulness and credibility analyses of web content; Wang et al. (2019) examined health-related misinformation explicitly by reviewing relevant literature to understand the procedures and mechanisms of misinformation spread; Baccarella et al. (2018) investigated the dark side of social media such as “cyberbullying, addictive use, trolling, online witch hunts, fake news, or privacy abuse.”
4.1 The definition of “fake news”
Fake news takes various forms, including misinformation and disinformation. Misinformation refers to false, inaccurate, or incomplete information (e.g. Berthon and Pitt, 2018; Acker and Donovan, 2019; Carrieri et al., 2019; Brashier and Schacter, 2020; Islam et al., 2020a). Disinformation refers to incorrect information shared intentionally. Fake news can encompass both definitions, but it is commonly defined as intentionally false, fabricated news articles that can deceive the reader (e.g. Allcott and Gentzkow, 2017; Chen and Cheng, 2019; Colliander, 2019; Flostrand et al., 2019; Kim and Dennis, 2019; Kim et al., 2019; Lee et al., 2019; Borges‐Tiago et al., 2020; Di Domenico and Visentin, 2020; Kwanda and Lin, 2020). According to Ozbay and Alatas (2020), any low-quality or incomplete news can be fake. Fake news is also referred to as “post-truth,” meaning that emotions and personal beliefs play a greater role in shaping public opinion than objective facts (Koro-Ljungberg et al., 2019).
Fake news can take the form of satire, rumors, pictures, or videos. Rumors are unverified information that may be true or false, whereas fake news is false information spread through news outlets (Wu et al., 2020). User-generated content is any digital content created and posted as videos, pictures, blogs and tweets on social media (Rajamma et al., 2019). Advances in computer graphics, computer vision and machine learning have made it possible to synthesize fake images or videos (Agarwal et al., 2020). The term “deepfake” refers to such convincing synthetic digital content, especially videos. Deepfakes can pose a threat to politicians, celebrities, companies and brands. Career-related fake information is another form of fake content on these platforms (Sampson et al., 2018). Clickbait is a further form of fake news shared on social media: a sensational headline that provides just enough information to encourage clicks through to a separate webpage (Chua et al., 2021).
4.2 Theoretical review
The collective theoretical framework of the papers aims to answer questions such as why people believe and spread fake news (e.g. Al-Rawi et al., 2019; Apuke and Omar 2020; Talwar et al., 2020) and identify the characteristics of those who share or contribute to the spread of fake news (e.g. Ben-Gal et al., 2019; Chen and Cheng, 2019; Sela et al., 2020; Brashier and Schacter, 2020; Duffy et al., 2020). Researchers have stated that fake news sharing can be influenced by confirmation bias, which occurs when individuals prefer to share news that aligns with their existing beliefs (Kim et al., 2019). Kahneman (2011) proposed that humans are not always rational in decision-making and have two types of cognition: System 1 and System 2. System 1 is fast, intuitive and influenced by confirmation bias, while System 2 is deliberate, slow and requires more effort. Moreover, Kim and Dennis (2019) argue that social media users rely on System 1 cognition, leading to the rapid spread of fake news on these platforms. Social media can form echo chambers where beliefs are reinforced without exposure to opposing views (Meel and Vishwakarma, 2020), as the platforms tend to display content that aligns with a user's interests more frequently. As a result, users are less likely to encounter opposing ideas or news, making fake news more credible. Many papers have suggested that the echo chamber effect contributes to the belief and spread of false information on social media (e.g. Allcott and Gentzkow, 2017; Berthon and Pitt, 2018; Chua and Banerjee, 2018; Long et al., 2019; Peterson, 2019; Di Domenico and Visentin, 2020).
4.3 Methodology review
The papers reviewed in this study used different methodologies depending on their objectives. Papers focusing on the psychology of fake news behavior used surveys or experiments to test theoretical concepts. Papers aimed at detecting and analyzing fake news applied machine learning or deep learning to identify its features. A third group of papers studied the spread of false information using network analysis. Accordingly, the papers were classified into three broad methodological categories.
4.3.1 Experiments and surveys
Psychological research in the reviewed studies aimed to investigate user behavior and motivation toward fake news. These studies employed surveys and experiments to test their hypotheses, focusing on the sharing behaviors of social media users. The papers examined the impact of personality traits on an individual's ability to detect, believe and spread fake news (Lutzke et al., 2019; Thompson et al., 2020; Wolverton and Stevens, 2019). For instance, Talwar et al. (2019) proposed that online trust, self-disclosure, fear of missing out (FoMO) and social media fatigue are positively related to sharing behavior. The researchers designed a survey and used Indian WhatsApp users as their sample population. Meanwhile, Laato et al. (2020) suggested that trust in online information is a strong predictor of sharing unverified information, a finding established through an online survey of 1,000 students. The articles that included experiments and surveys were typically published in psychology or marketing journals. Online questionnaire-based survey research from the USA and India indicates that conservatives and collectivists are more inclined to believe false news; the study identifies traits that make people more likely to believe fake news, examines control factors including age, sex and Internet use, and offers both conceptual implications and actionable insights (Gupta et al., 2023). Kim and Dennis (2019) ran two online experiments to determine the impact of a source-assessment icon (a positive or negative summary of the evaluation). They found that positive and negative symbols and details have different effects: negative symbols lower article credibility, while positive symbols do not affect credibility. They also discovered that people are more inclined to check the assessment details when the article content matches their pre-existing attitudes, regardless of icon valence. Mirhoseini et al. (2023), through a laboratory experiment and an online survey, examined why individuals believe fake news and offered a solution. Their behavioral and neurophysiological data suggest that closed-mindedness promotes belief in fake news, while performance feedback decreases overconfidence, improves analytical thinking and improves fake news detection by 14%. This study supports the classical reasoning account and suggests a strategy to improve fake news identification.
4.3.2 Machine learning approach
Machine learning is a field of study that allows machines to perform tasks without explicit programming. It has been widely used in prediction tasks due to its ability to analyze large amounts of data. Machine learning methods have proven to be useful in fake news studies, as they enable researchers to effectively process large datasets (Ongsulee, 2017). In the reviewed literature, several articles utilized machine learning or deep learning methods to detect fake news or identify relevant features. The detection of fake news has garnered attention from researchers across various disciplines, particularly computer scientists. Social media platforms provide ample amounts of big data for research purposes, making machine learning techniques an attractive solution for detecting fake news.
Fake news detection can be accomplished based on the linguistic features of the texts. Faustini and Covões (2020) proposed detecting fake news using only text features, applying K-Nearest Neighbors (KNN), random forest, Gaussian Naïve Bayes and Support Vector Machine (SVM) algorithms to detect fake news on social media. Ozbay and Alatas (2020) suggested a two-step method to identify fake news on social media: the first step preprocesses the data with term frequency-inverse document frequency (TF-IDF) weighting, and the second applies 23 supervised machine learning algorithms combined with text mining features to a dataset of news stories. Bot detection is another field that has attracted researchers: since fake news creators prefer not to have their identities recognized, they often use bots to spread misleading information (Groshek et al., 2019; Jones, 2019; Ross et al., 2019).
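To illustrate the general shape of this two-step approach, the following minimal sketch (in Python with scikit-learn) vectorizes texts with TF-IDF and trains a linear SVM. The texts and labels are toy placeholders introduced purely for illustration; the reviewed studies use far larger corpora and compare many more algorithms.

```python
# Minimal sketch: TF-IDF weighting followed by a supervised classifier.
# The texts and labels below are illustrative placeholders, not data from the reviewed studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Shocking miracle cure hidden from the public",        # toy fake-style item
    "Celebrity secretly replaced by a body double",        # toy fake-style item
    "Parliament passes revised budget after long debate",  # toy genuine-style item
    "Central bank holds interest rates steady",            # toy genuine-style item
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = genuine

# Step 1: turn raw text into TF-IDF weighted term vectors.
# Step 2: fit a supervised learner (here a linear SVM) on those vectors.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
model.fit(texts, labels)

print(model.predict(["Miracle cure suppressed by officials"]))  # -> [1] on this toy data
```

In practice, the same pipeline object can be swapped to any of the supervised algorithms mentioned above (Naïve Bayes, random forest, logistic regression) without changing the TF-IDF step.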
Different types of features have been proposed for fake news detection models, including text/content-specific, image-specific, user/account, propagation, temporal, structural and linguistic features. Machine learning methods such as Naive Bayes, SVM, decision tree, random forest and logistic regression have been utilized in this field. However, one disadvantage of these machine learning methods is that they typically rely on manually crafted features, which require human effort and can introduce bias. Recent research has employed deep learning-based models to overcome this limitation.
4.3.3 Deep learning approach
Deep learning models can automatically extract hidden information from text, images, sentiment, or structure and are useful in analyzing tweets to detect fake information online (Meel and Vishwakarma, 2020). They excel in text classification because their many layers learn hierarchical representations of data directly from raw input, revealing hidden patterns and eliminating the need for manual feature creation (Goodfellow et al., 2016). Deep learning models can also handle very large datasets and represent nonlinear relationships between features and predicted classes, and they have proven effective in computer vision, speech recognition and NLP. Zhang et al. (2023) use behavioral and textual data to identify fake reviewers. Their study consists of two primary parts: a behavior-sensitive feature extractor that learns reviewing patterns and a context-aware attention mechanism that extracts important traits from online reviews. Two Yelp.com datasets are used to assess the modules and architecture against industry standards, and several machine learning and deep learning models, such as SVM, logistic regression, random forest and convolutional neural networks, are employed. Xia et al. (2023) use a CNN–BiLSTM–AM model to improve the identification of fake news and emphasize how crucial it is to scientifically disprove erroneous information in order to facilitate public-government dialog and the creation of a trustworthy news release system. The study proposes future approaches for more comprehensive media analysis and enhanced model generalization, while acknowledging limitations regarding granularity and possible overfitting.
Deep learning algorithms are widely used in fake news detection research. Rodrigues et al. (2022) evaluate tweets for spam and incorporate sentiment analysis: convolutional neural network (CNN) and long short-term memory (LSTM) networks identify tweet sentiment, and real-time spam identification is combined with sentiment analysis for Twitter fake news detection. Amer et al. (2022) conducted three experiments, one using machine learning classifiers, one using deep learning models and one using transformers, with word embeddings used to extract contextual characteristics from articles in each experiment. The trials reveal that deep learning models outperform machine learning classifiers and transformers in accuracy. However, while machine learning and deep learning-based models have proven to be useful, research has shown that algorithmic detection methods are not as effective as human intervention (Marsden et al., 2020).
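As a rough illustration of the kind of architecture discussed above, the sketch below builds a small CNN-BiLSTM text classifier in Keras. It omits the attention mechanism of the cited CNN–BiLSTM–AM model, and the vocabulary size, sequence length and layer widths are illustrative assumptions rather than settings taken from the reviewed papers.

```python
# Minimal CNN-BiLSTM sketch for binary fake-news classification (Keras).
# Hyperparameters are illustrative; real studies tune them on large labeled corpora.
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 200, 128  # assumed toy settings

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),                  # integer-encoded token ids
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),         # learned word embeddings
    layers.Conv1D(64, 5, activation="relu"),         # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),           # long-range context in both directions
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),           # probability that the item is fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The convolutional layer captures short phrase patterns while the bidirectional LSTM models longer-range dependencies, which is the intuition behind combining the two in the models cited above.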
4.3.4 Transformer-based approaches
Transformer-based approaches are a class of deep learning models, and transformer-based fake news detection methods are becoming more popular due to their natural language comprehension capability. Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Approach (RoBERTa), Generative Pre-trained Transformer (GPT) and other transformer models have been pretrained on enormous Internet text corpora. These models have learned rich contextual word representations and can be fine-tuned for specific tasks (Devlin et al., 2018). To identify COVID-19 fake news, Alghamdi et al. (2023) focus on machine learning techniques and improved transformer-based models such as BERT and COVID-Twitter-BERT (CT-BERT). On top of these models, they test multiple neural network topologies (CNN and bidirectional gated recurrent unit (BiGRU)) with varying parameter settings. Tests conducted on real COVID-19 fake news data demonstrate that adding a BiGRU layer on top of CT-BERT produces remarkable outcomes, reaching a state-of-the-art F1 score of 98%. In a benchmark study, Khan et al. (2021) compared machine learning techniques for fake news detection across three different datasets. They found that pretrained models such as BERT outperformed other approaches even with little data, making them well suited for languages with scarce training resources. The study also examined the relationships between word count, article topic and model performance.
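To make the fine-tuning idea concrete, the following sketch loads a pretrained BERT checkpoint with the Hugging Face transformers library and computes a classification loss on two toy examples. The model name, label scheme and data are illustrative assumptions, not the configurations used by Alghamdi et al. (2023) or Khan et al. (2021).

```python
# Sketch of transformer fine-tuning for binary fake-news classification.
# Requires torch and transformers. Texts and labels are toy placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["Secret cure confirmed by anonymous insiders",
         "Health ministry publishes weekly case statistics"]
labels = torch.tensor([1, 0])  # 1 = fake, 0 = genuine (illustrative)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # forward pass returns loss and logits

outputs.loss.backward()                   # a full fine-tuning loop would repeat this with an optimizer
print(outputs.logits.softmax(dim=-1))     # per-class probabilities from the (still untrained) head
```

The pretrained encoder supplies the contextual word representations; only the small classification head and, optionally, the encoder weights are updated during fine-tuning.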
4.3.5 Propagation-based approaches
The fake news dissemination process resembles the spread of infectious disease and can be understood with network epidemic models. Vosoughi et al. (2018) analyzed a dataset of rumor cascades consisting of tweets and retweets and found that false information spreads faster on social media than accurate information. Lord Ferguson et al. (2019) developed a framework to explain the spread of fake news in the health industry. Giglietto et al. (2019) emphasized that while many researchers have focused on the creation of misleading information and the intention of the creator, it is crucial to focus on the propagation of information to understand how both true and false information spreads. Other researchers have analyzed the role of spreading groups in the propagation of fake news using network analysis on Twitter, studying the differences between users involved in widely retweeted and rarely retweeted cascades and finding that the distributions of retweets differed significantly (e.g. Sela et al., 2020). They discovered that the messages of a few anonymous Twitter accounts spread more widely than those of well-known accounts. Pantumsinchai (2018) explained how a claim could be perceived as fact or fiction through networks of interactions during major events. Papanastasiou (2020) suggested that individuals are more likely to share news if their peers have already disseminated it, indicating that social influence plays a crucial role in the spread of both true and false information on social media.
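The cascade-level quantities compared in such studies, such as how many users a story reaches and how long the longest retweet chain grows, can be computed directly from a retweet graph, as in the minimal sketch below; the edges are toy placeholders rather than real Twitter data.

```python
# Minimal sketch: measuring the size and depth of a (toy) retweet cascade.
import networkx as nx

# Edge (u, v) means user v retweeted the story from user u.
cascade = nx.DiGraph([
    ("origin", "a"), ("origin", "b"),
    ("a", "c"), ("c", "d"), ("b", "e"),
])

size = cascade.number_of_nodes()                                   # users reached
depth = max(nx.shortest_path_length(cascade, "origin").values())   # longest retweet chain
max_breadth = max(d for _, d in cascade.out_degree())              # widest single hop

print(f"size={size}, depth={depth}, max breadth={max_breadth}")
# Propagation studies compare such statistics between cascades of true and false stories.
```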
Table 1 summarizes the articles whose primary discipline is computer science, listing the objective, dataset and method of each manuscript. It classifies these articles into three classes with similar goals and details the corresponding methods.
4.4 Managerial review
The spread of fake news on social media can have serious consequences for businesses and their brands. Inaccurate information can quickly spread, leading to a negative perception of the company and financial losses. Therefore, it is important for brand managers to have a strategy in place to mitigate the risks posed by fake news. This may include monitoring social media for false information, developing crisis management plans and proactively communicating with customers to dispel rumors and restore trust in the brand. In August 2017, a widely disseminated tweet claimed that Starbucks provided discounts to undocumented immigrants (Tschiatschek et al., 2018); Starbucks denied the claim by responding directly to users who shared the rumor. PepsiCo faced a boycott and a 4% decrease in stock price after false news circulated on social media claiming that PepsiCo's chief executive officer (CEO) had told Trump supporters to “take their business elsewhere” (Berthon and Pitt, 2018). Kentucky Fried Chicken (KFC) was accused of selling rats instead of chicken (Pal et al., 2017). These examples demonstrate the impact fake news can have on a company's reputation and financial stability. In the age of social media, false information can spread rapidly, and it can be challenging for companies to combat it effectively. Brand managers must be proactive and have a strategy in place to address fake news and protect their brand's image.
Studies have illustrated the consequences of fake news on brands (Ryan et al., 2020) and suggest response strategies for managers (Mills and Robson, 2019; Vafeiadis et al., 2020). Marketing and psychology researchers have studied consumer characteristics and the potential factors that influence customers' sharing behaviors (Beuk et al., 2019; Chen and Cheng, 2019; Talwar et al., 2019). These studies help managers understand how fake news affects the public's perception of their brands and the ways in which they can respond to and mitigate the impact of fake news. Table 2 summarizes papers addressing fake news' managerial and marketing impacts with useful findings for brands and managers.
5. Discussion
This paper provides a comprehensive overview of the research on fake news. It highlights that fake news is characterized as false information that spreads through various platforms, including social media. The intention behind creating fake news, whether it is to harm an individual, group or entity, is also an important aspect. Fake news has become a popular research topic since the 2016 USA presidential election and its consequences, with interdisciplinary researchers from fields such as information technology (IT) and psychology contributing to the field. The growth in popularity of social media has also led to an increase in research on fake news spread through these platforms, with the COVID-19 pandemic being a recent event that has contributed to the spread of fake news.
The rise of fake news has led to the emergence of fact-checking services, but their effectiveness in reducing misperceptions and increasing trust among users is still unclear. According to Marres (2018), fact-checking may not be successful in reducing misperceptions, especially among people who are prone to believing them. Brandtzaeg and Følstad (2017) found that users with opposing opinions are more concerned about the trustworthiness of fact-checking websites than those with positive perceptions. These findings suggest that the effectiveness of fact-checking services in reducing fake news and increasing trust among users may be limited.
Pennycook et al. (2020a) conducted a study showing that warnings attached to fake news headlines can actually increase misperception among users: when fact-checking sites label only part of the content, false news that remains unlabeled appears more accurate. This highlights the need for a more human-focused approach to information dissemination, as the sheer amount of information flowing on social media and the fast pace of trending topics can make it difficult for organizations to keep up. As a result, social media platforms, traditional media and institutions must encourage users to spend more time and cognitive effort evaluating news on social media before accepting it as accurate (Di Domenico and Visentin, 2020).
5.1 Managerial implications
Fake news can severely affect brands, organizations and consumer behavior, and real examples prove the relevance of the topic for managers. Two businesses specifically targeted by fake news were PepsiCo and Starbucks (Berthon and Pitt, 2018; Tschiatschek et al., 2018). Consumers' sharing behaviors differ based on their loyalty to the company: when consumers identify more closely with the brand, the threat to the brand is less serious, and a trustworthy denial strategy is the best choice (Mills and Robson, 2019). Defining the best response strategy is crucial for companies in the era of fake news. Mustak et al. (2023) reveal that the greatest hazards to organizations are harm to their image, reputation and trustworthiness and the quick obsolescence of technology. To safeguard against market deceptions caused by deepfakes, companies should invest in creating resources and capabilities, including technologies that improve their deepfake detection and avoidance capabilities and human resources that mitigate the possible negative consequences of deepfake technologies. Managers must also address possible customer harm and take preventive measures to protect customers. Cheng et al. (2023) predict abnormal stock trading behavior using social media data (posts, likes and responses) and decision tree induction, finding that rumor propagation predicts abnormal trading behavior better than management shocks and other factors. Sharif et al. (2022) suggest that businesses must understand how fake news affects behavioral intentions and plan accordingly: positive brand experience, trust and credibility create favorable brand behavioral intentions, and businesses that prioritize a pleasant brand experience enjoy higher brand trust and credibility and are less influenced by fake news. Rahadian and Nurfitriani (2022) explore the impact of COVID-19-related news on stock market returns and, using quantile regression analysis, find that fake news and panic can affect stock market returns. Companies must consider fake news a potential threat and create suitable contingency plans to prevent the spread of untrue information about them. It is therefore crucial for companies to recognize the characteristics of fake news and how it propagates in order to detect it. While scholars have conducted valuable research on how fake news can affect businesses and companies, there is still room for further research. Fake news is an essential threat to brands, and future researchers must pay more attention to this field of study.
5.2 Limitations and research gaps
Although the detection of fake news has become a hot topic only in the last few years, fake news itself is not a new challenge: before 2010, it was a well-known phenomenon associated mainly with traditional media such as TV and newspapers. Limitations of this study's findings could be inherited from its design, which purposefully narrowed its sampling to literature published after 2010 in AJG-listed journals. Future research could benefit from including a wider range of sources, such as conference proceedings (e.g. Kaliyar, 2018; Rana et al., 2018; Zhang et al., 2018) and gray literature, to gain a more comprehensive understanding of the current state of fake news detection. Moreover, the field is rapidly evolving, and new techniques and approaches are emerging all the time, so it is essential to continuously update the knowledge in this area. In particular, the dataset does not include conference papers, although several manuscripts in engineering and computer science were published as conference proceedings and might contribute to the existing methods and algorithms.
The analysis points to several gaps in the existing literature. First, gathering human-labeled datasets has the potential to significantly contribute to the existing literature, since only a limited number of datasets have been used. Second, with respect to methodology, unsupervised machine learning, deep learning and transfer learning can be utilized for better fake news detection; these methods have the potential to enable scholars to utilize social media-based big data. To the best of our knowledge, previous studies rarely applied qualitative research methods simultaneously with social media analysis (Ozturkcan et al., 2017), although the results of the two methods can complement each other to further inform the literature. The emotions associated with fake news and their impacts on engagement are also under-researched areas that require further attention. These gaps suggest a need for more research on fake news detection and its impact on businesses and consumers.
5.3 Future research
Despite the significant attention given to fake news research in recent years, there is still a need for more research in this area. Social media platforms such as Twitter, Facebook and Instagram offer a vast amount of data for researchers from various disciplines to study fake news. The millions of unlabeled data points available on the web can be used to detect fake news. Future research directions could include the application of unsupervised machine learning or deep learning methods for fake news detection.
Early fake news detection is another important area that requires further research. Early detection can help prevent the spread of fake news, and researchers in computer science should consider developing real-time fake news detection systems that can make a valuable contribution to the field. Using multiple datasets for analysis can improve the quality of research, yet only a small number of studies have used multiple datasets.
Incorporating sentiment analysis into fake news detection could also be a valuable direction for future research. Sentiment analysis methods can be used to analyze the emotions involved in fake news and provide deeper insights into its linguistic and emotional characteristics (Farhoudinia et al., 2022). Collaboration between data analysts and psychologists could provide a comprehensive understanding of fake news, including how it is formed and spread and the effects it has.
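As a simple illustration of how sentiment signals could feed into a detection pipeline, the sketch below scores toy headlines with NLTK's VADER analyzer; the headlines are hypothetical, and in practice the resulting scores would typically be appended to textual features such as TF-IDF before training a classifier.

```python
# Illustrative sketch: extracting sentiment scores as candidate features
# for fake news detection. Headlines are hypothetical placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

headlines = [
    "Outrage as secret cure is hidden from the public!",  # toy fake-style headline
    "Committee publishes annual budget review",           # toy genuine-style headline
]
for headline in headlines:
    scores = sia.polarity_scores(headline)  # negative, neutral, positive and compound scores
    print(headline, "->", scores)
# The scores could be concatenated with TF-IDF vectors as additional classifier features.
```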
Psychologists have studied the theoretical background of why people believe and share fake news and the impact of personality characteristics on sharing behavior. However, there is still a need for more research in this area, particularly through the use of big data and experiments. The impact of crises such as the COVID-19 pandemic on fake news-sharing behavior also requires further investigation.
Managers need to be equipped with the knowledge of how to respond to fake news spread about their companies, brands, or organizations. Further research is needed to provide guidelines on the most effective response strategies to recover the brand's reputation. There are still many questions that need to be answered, such as the role of social media platforms in dealing with fake news, the limits of freedom of speech and the adequacy of existing policy frameworks.
Researchers in policymaking and law, in collaboration with researchers in library and information science, should ensure that the rules and legislation against fake news do not result in censorship. The balance between protecting against fake news and preserving freedom of speech is an important issue that requires further attention (Agarwal and Alsaeedi, 2021).
6. Conclusion
This systematic review analyzed past research on fake news dissemination on social networks and social media over the last decade. The paper provided an interdisciplinary review of articles from various fields, such as computer and information science, psychology, marketing and journalism, by reviewing the theoretical frameworks, methods and objectives of articles published in AJG-listed journals. The analysis included a definition of fake news, a review of the development of fake news research over the past ten years, an identification of the most used theories in fake news research and an examination of the research methods used. The paper concluded with a discussion of the limitations and gaps in the field and provided suggestions for future research directions.
List of abbreviations
• 5G: the fifth-generation technology standard for broadband cellular networks in telecommunication
• ABS: Chartered Association of Business Schools
• AJG-listed journals: peer-reviewed journals that are included in the AJG listings released by the Chartered Association of Business Schools
• CEO: Chief Executive Officer
• COVID-19: the coronavirus disease that led to a global pandemic beginning in 2019
• KNN: K-Nearest Neighbors Algorithm
• SARS-CoV-2: the infectious coronavirus that led to the beginning of a global pandemic in the year 2019
• SVM: Support Vector Machine Algorithm
• tf–idf: term frequency–inverse document frequency
Table 1: Articles in computer sciences

Objective | Author | Dataset | Method |
---|---|---|---|
Fake news detection | Apuke and Omar (2020) | Online survey data | Structural equation modeling (SEM) |
Fake news detection | Faustini and Covões (2020) | FakeBrCorpus (Monteiro et al., 2018), TwitterBR (Faustini and Covões, 2020), Fake_or_real_news (Bhattacharjee et al., 2017), Fakenewsdata1 (Horne et al., 2017), Btvlifestyle (Hardalov et al., 2016) | Machine learning (KNN, random forest, Gaussian Naïve Bayes, SVM) |
Fake news detection | Ozbay and Alatas (2020) | BuzzFeed political news dataset (Silverman, 2016), Random political news dataset, ISOT fake news dataset | Text mining methods and supervised artificial intelligence algorithms |
Fake news detection | Papanastasiou (2020) | A corpus of debunked and verified user-generated videos | Machine learning |
Fake news detection | Wu et al. (2020) | Twitter dataset | Deep learning (neural network) |
Fake news detection | Zhang et al. (2018) | A corpus of news data | A novel analytics-driven framework for detecting fake news |
Fake news and characteristics of users involved in fake news sharing | Al-Rawi et al. (2019) | Boston University Twitter Collection | Network analysis |
Fake news and characteristics of users involved in fake news sharing | Ben-Gal et al. (2019) | Twitter data | Network analysis |
Fake news and characteristics of users involved in fake news sharing | Islam et al. (2020a, b) | Online survey data | Online survey, PLS-SEM, machine learning methods |
Fake news and characteristics of users involved in fake news sharing | Jang et al. (2018) | Twitter data | Network analysis |
Fake news and characteristics of users involved in fake news sharing | Shin et al. (2018) | Twitter data | Time-series analysis |
Fake news prevention system | Chen et al. (2020) | News outlets | Blockchain, along with a customized Proof-of-Authority (PoA) algorithm |
Source(s): Created by authors
Table 2: Summary of marketing and business papers
Author | Paper title | Method | Summary of findings |
---|---|---|---|
Berthon and Pitt (2018) | Brands, truthiness, and post-fact: Managing brands in a post-rational world | Conceptual | They explore the relationship between brands and fake news. Brands can fuel fake news or be threatened by it. The paper offers managers ways to survive in the fake news era |
Beuk et al. (2019) | Fake news and the willingness to share: a schemer schema and confirmatory bias perspective | Conceptual | Confirmatory bias influences fake news consumption greatly. Believability can extend the spread of fake news. Suggests how firms can grapple with the diffusion of fake news |
Borges‐Tiago et al. (2020) | Online users' attitudes toward fake news: Implications for brand management | Cluster analysis and partial least squares structural equation modeling | Consumer attitudes toward fake news can be different based on national culture |
Brigida and Pratt (2017) | Fake news | Time-series Analysis | Reactions to fake news occur immediately in equity markets, but option markets react after a delay |
Chen and Cheng (2019) | Consumer response to fake news about brands on social media: the effects of self-efficacy, media trust, and persuasion knowledge on brand trust | Structural Equation Modeling | Self-efficacy and media trust are predictors of consumers' ability to recognize fake news |
Di Domenico and Visentin (2020) | Fake news or true lies? Reflections about problematic contents in marketing | Review | Provided some future research opportunities |
Diddi et al. (2019) | Refuting fake news on social media: nonprofits, crisis response strategies, and issue involvement | Qualitative study | Compared two strategies: denial and attack. Attacking the source of fake news reduces the message's credibility more than denying fake news. The denial strategy effectively reduces the credibility of fake news for low involvement stakeholders, but high issue involvement individuals prefer the attack response strategy |
Flostrand et al. (2019) | Fake news and brand management: a Delphi study of impact, vulnerability, and mitigation | Delphi study | Findings indicate that service brands are at risk of fake news, and managers must implement fake news mitigation strategies |
Lee et al. (2019) | Do your employees think your slogan is “fake news?” A framework for understanding the impact of fake company slogans on employees | Conceptual | It is essential that employees of a company believe in the credibility of their slogans. Otherwise, this will have negative consequences for the organization |
Long et al. (2019) | Media, fake news, and debunking | Hotelling-type model | A wide range of customers increases the prevalence of fake news and debunking costs |
Lord Ferguson et al. (2019) | A false image of health: how fake news and pseudo-facts spread in the health and beauty industry | Conceptual- Case study | Suggests marketing denial tactics that can be effective in the case of fake news diffusion |
Mills and Robson (2019) | Brand management in the era of fake news: narrative response as a strategy to insulate brand value | Conceptual | Storytelling is a more effective strategy for companies instead of facts and statistics. Companies can use this strategy to clarify fake news about their company |
Nyilasy (2019) | Fake news: When the dark side of persuasion takes over | Conceptual | Fake news is created for the benefit of a sponsor. Fake news spreads on advertising-supported social media |
Paschen (2019) | Investigating the emotional appeal of fake news using artificial intelligence and human contributions | Database Analysis by AI, Machine learning | The text body of fake news displays much more negative feelings than positive ones. Fake news titles include more negative concepts than accurate news titles |
Peterson (2019) | A high-speed world with fake news: brand managers take warning | Conceptual | Suggested that businesses and governments use scientific methods to improve their resistance to fake news |
Rajamma et al. (2019) | User-generated content (UGC) misclassification and its effects | Survey | User-generated content can enhance purchase intention because of its specific characteristics like vicarious experience and transparency. Misclassification can suppress this effect; it highlights some steps to improve the effectiveness of UGC |
Robertson et al. (2019) | The truth (as I see it): philosophical considerations influencing a typology of fake news | Conceptual | Power structures influence the ability to respond to fake news for brands. Externally constructed news is challenging for companies to address. Internally created disinformation will cause distrust in public |
Ryan et al. (2020) | Monetizing disinformation in the attention economy: the case of genetically modified organisms (GMOs) | Case study | This case study illustrates the power of inaccurate information on businesses and societies |
Song et al. (2019) | Does deceptive marketing pay? The evolution of consumer sentiment surrounding a pseudo-product-harm crisis | Sentiment analysis | Misleading and deceptive business practices have no benefits for the offending firm. Advertising during a pseudo-product-harm crisis seems to have negative results |
Vafeiadis et al. (2020) | Refuting fake news on social media: nonprofits, crisis response strategies, and issue involvement | Experiment | The authors indicate that attacking the source of fake news reduces the credibility of news more than denying it. Attack strategy increases the credibility of rumors for low-involvement individuals |
Wiesenberg (2020) | Deep strategic mediatization: Organizational leaders' knowledge and usage of social bots in an era of disinformation | Survey | This paper covers “organizational leaders' knowledge and usage of social bots.” Only a few organizations use social bots or plan to use them. The paper proposes a deep strategic mediatization concept and explains different scenarios in which social bots can be used |
Source(s): Created by authors
Note
Funding: This research has not received any funding.
Competing interests: The authors declare that there are no competing interests. The authors do not have any competing interests to disclose, whether financial or non-financial, that are directly or indirectly related to the work submitted for publication.
Authors' contributions: All authors have equally contributed to the manuscript.
References
Acker, A. and Donovan, J. (2019), “Data craft: a theory/methods package for critical internet studies”, Information, Communication and Society, Vol. 22 No. 11, pp. 1590-1609.
Agarwal, N.K. and Alsaeedi, F. (2021), “Creation, dissemination and mitigation: toward a disinformation behavior framework and model”, Aslib Journal of Information Management, Vol. 73 No. 5, pp. 639-658, doi: 10.1108/AJIM-01-2021-0034.
Agarwal, S., Farid, H., El-Gaaly, T. and Lim, S.N. (2020), “Detecting deep-fake videos from appearance and behavior”, 2020 IEEE International Workshop on Information Forensics and Security (WIFS), IEEE, pp. 1-6.
Ahmad, T., Aliaga Lazarte, E.A. and Mirjalili, S. (2022), “A systematic literature review on fake news in the COVID-19 pandemic: can AI propose a solution?”, Applied Sciences, Vol. 12 No. 24, p. 12727.
Aïmeur, E., Amri, S. and Brassard, G. (2023), “Fake news, disinformation and misinformation in social media: a review”, Social Network Analysis and Mining, Vol. 13 No. 1, p. 30.
Alghamdi, J., Lin, Y. and Luo, S. (2023), “Towards COVID-19 fake news detection using transformer-based models”, Knowledge-Based Systems, Vol. 274, 110642.
Alkhamees, M., Alsaleem, S., Al-Qurishi, M., Al-Rubaian, M. and Hussain, A. (2021), “User trustworthiness in online social networks: a systematic review”, Applied Soft Computing, Vol. 103, 107159.
Allcott, H. and Gentzkow, M. (2017), “Social media and fake news in the 2016 election”, Journal of Economic Perspectives, Vol. 31 No. 2, pp. 211-236.
Al-Rawi, A., Groshek, J. and Zhang, L. (2019), “What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter”, Online Information Review, Vol. 43 No. 1, pp. 53-71, doi: 10.1108/OIR-02-2018-0065.
Amer, E., Kwak, K.-S. and El-Sappagh, S. (2022), “Context-based fake news detection model relying on deep learning models”, Electronics, Vol. 11 No. 8, p. 1255.
Apuke, O.D. and Omar, B. (2020), “User motivation in fake news sharing during the COVID-19 pandemic: an application of the uses and gratification theory”, Online Information Review, Vol. 45 No. 1, pp. 220-239.
Baccarella, C.V., Wagner, T.F., Kietzmann, J.H. and McCarthy, I.P. (2018), “Social media? It’s serious! Understanding the dark side of social media”, European Management Journal, Vol. 36 No. 4, pp. 431-438.
Balakrishnan, V., Zhen, N.W., Chong, S.M., Han, G.J. and Lee, T.J. (2022), “Infodemic and fake news – a comprehensive overview of its global magnitude during the COVID-19 pandemic in 2021: a scoping review”, International Journal of Disaster Risk Reduction, Vol. 78, 103144.
Barfar, A. (2019), “Cognitive and affective responses to political disinformation in Facebook”, Computers in Human Behavior, Vol. 101, pp. 173-179.
Ben-Gal, I., Sela, A., Milo, O. and Kagan, E. (2019), “Improving information spread by spreading groups”, Online Information Review, Vol. 44 No. 1, pp. 24-42, doi: 10.1108/OIR-08-2018-0245.
Berthon, P.R. and Pitt, L.F. (2018), “Brands, truthiness and post-fact: managing brands in a post-rational world”, Journal of Macromarketing, Vol. 38 No. 2, pp. 218-227.
Beuk, F., Weidner, K. and Bal, A. (2019), “Fake news and the willingness to share: a schemer schema and confirmatory bias perspective”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 180-187, doi: 10.1108/JPBM-12-2018-2155.
Borges‐Tiago, T., Tiago, F., Silva, O., Guaita Martínez, J.M. and Botella‐Carrubi, D. (2020), “Online users’ attitudes toward fake news: implications for brand management”, Psychology and Marketing, Vol. 37 No. 9, pp. 1171-1184.
Brandtzaeg, P.B. and Følstad, A. (2017), “Trust and distrust in online fact-checking services”, Communications of the ACM, Vol. 60 No. 9, pp. 65-71.
Brashier, N.M. and Schacter, D.L. (2020), “Aging in an era of fake news”, Current Directions in Psychological Science, Vol. 29 No. 3, pp. 316-323.
Brigida, M. and Pratt, W.R. (2017), “Fake news”, The North American Journal of Economics and Finance, Vol. 42, pp. 564-573.
Carlson, M. (2020), “Fake news as an informational moral panic: the symbolic deviancy of social media during the 2016 US presidential election”, Information, Communication and Society, Vol. 23 No. 3, pp. 374-388.
Carrieri, V., Madio, L. and Principe, F. (2019), “Vaccine hesitancy and (fake) news: quasi-experimental evidence from Italy”, Health Economics, Vol. 28 No. 11, pp. 1377-1382, available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6851894/pdf/HEC-28-1377.pdf
Chen, Z.F. and Cheng, Y. (2019), “Consumer response to fake news about brands on social media: the effects of self-efficacy, media trust, and persuasion knowledge on brand trust”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 188-198.
Chen, Q., Srivastava, G., Parizi, R. M., Aloqaily, M. and Al Ridhawi, I. (2020), “An incentive-aware blockchain-based solution for internet of fake media things”, Information Processing and Management, Vol. 57 No. 6, 102370.
Cheng, L.-C., Lu, W.-T. and Yeo, B. (2023), “Predicting abnormal trading behavior from internet rumor propagation: a machine learning approach”, Financial Innovation, Vol. 9 No. 1, p. 3.
Chua, A.Y. and Banerjee, S. (2018), “Intentions to trust and share online health rumors: an experiment with medical professionals”, Computers in Human Behavior, Vol. 87, pp. 1-9.
Chua, A.Y.K., Pal, A. and Banerjee, S. (2021), “‘This will blow your mind’: examining the urge to click clickbaits”, Aslib Journal of Information Management, Vol. 73 No. 2, pp. 288-303, doi: 10.1108/AJIM-07-2020-0214.
Colliander, J. (2019), “‘This is fake news’: investigating the role of conformity to other users' views when commenting on and spreading disinformation in social media”, Computers in Human Behavior, Vol. 97, pp. 202-215.
Damstra, A., Boomgaarden, H.G., Broda, E., Lindgren, E., Strömbäck, J., Tsfati, Y. and Vliegenthart, R. (2021), “What does fake look like? A review of the literature on intentional deception in the news and on social media”, Journalism Studies, Vol. 22 No. 14, pp. 1947-1963.
Denyer, D. and Tranfield, D. (2009), “Producing a systematic review”, in Buchanan, D.A. and Bryman, A. (Eds), The Sage Handbook of Organizational Research Methods, Sage Publications, pp. 671-689, available at: https://www.cebma.org/wpcontent/uploads/Denyer-Tranfield-Producing-a-Systematic-Review.pdf
Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K. (2018), “BERT: pre-training of deep bidirectional transformers for language understanding”, arXiv preprint arXiv:1810.04805.
Di Domenico, G. and Visentin, M. (2020), “Fake news or true lies? Reflections about problematic contents in marketing”, International Journal of Market Research, Vol. 62 No. 4, pp. 409-417.
Diddi, P., Xiao, A., Vafeiadis, M., Bortree, D.S. and Buckley, C. (2019), “Refuting fake news on social media: nonprofits, crisis response strategies and issue involvement”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 209-222.
Duffy, A., Tandoc, E. and Ling, R. (2020), “Too good to be true, too good not to share: the social utility of fake news”, Information, Communication and Society, Vol. 23 No. 13, pp. 1965-1979.
Elías, C. and Catalan-Matamoros, D. (2020), “Coronavirus in Spain: fear of ‘official’ fake news boosts WhatsApp and alternative sources”, Media and Communication, Vol. 8 No. 2, pp. 462-466.
Farhoudinia, B., Ozturkcan, S. and Kasap, N. (2022), “Lexicon-based sentiment analysis of fake news on social media”, AIRSI2022 Conference: Technologies 4.0 in Tourism, Services and Marketing.
Faustini, P.H.A. and Covões, T.F. (2020), “Fake news detection in multiple platforms and languages”, Expert Systems with Applications, Vol. 158, 113503.
Flostrand, A., Pitt, L. and Kietzmann, J. (2019), “Fake news and brand management: a Delphi study of impact, vulnerability and mitigation”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 246-254, doi: 10.1108/JPBM-12-2018-2156.
Giglietto, F., et al. (2019), “‘Fake news’ is the invention of a liar: how false information circulates within the hybrid news system”, Current Sociology, Vol. 67 No. 4, pp. 625-642.
Goodfellow, I., Bengio, Y. and Courville, A. (2016), Deep Learning, MIT Press, Cambridge, MA, Vol. 1.
Groshek, J., et al. (2019), “What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter”, Online Information Review, Vol. 43 No. 1, pp. 53-71.
Gupta, M., Dennehy, D., Parra, C.M., Mäntymäki, M. and Dwivedi, Y.K. (2023), “Fake news believability: the effects of political beliefs and espoused cultural values”, Information and Management, Vol. 60 No. 2, 103745.
Hardalov, M., Koychev, I. and Nakov, P. (2016), “In search of credible news”, International Conference on Artificial Intelligence: Methodology, Systems, and Applications, Springer.
Hartley, K. and Vu, M.K. (2020), “Fighting fake news in the COVID-19 era: policy insights from an equilibrium model”, Policy Sciences, Vol. 53 No. 4, pp. 735-758.
Horne, B. D., Adali, S. and Sikdar, S. (2017), “Identifying the social signals that drive online discussions: a case study of reddit communities”, 2017 26th International Conference on Computer Communication and Networks (ICCCN), IEEE.
Islam, A.N., Laato, S., Talukder, S. and Sutinen, E. (2020a), “Misinformation sharing and social media fatigue during COVID-19: an affordance and cognitive load perspective”, Technological Forecasting and Social Change, Vol. 159, 120201.
Islam, M.R., Liu, S., Wang, X. and Xu, G. (2020b), “Deep learning for misinformation detection on online social networks: a survey and new perspectives”, Social Network Analysis and Mining, Vol. 10, pp. 1-20.
Jang, S.M., Geng, T., Li, J.-Y. Q., Xia, R., Huang, C.-T., Kim, H. and Tang, J. (2018), “A computational approach for examining the roots and spreading patterns of fake news: evolution tree analysis”, Computers in Human Behavior, Vol. 84, pp. 103-113.
Jones, M.O. (2019), “The Gulf information war| propaganda, fake news, and fake trends: the weaponization of Twitter bots in the Gulf crisis”, International Journal of Communication, Vol. 13, p. 27.
Kaddoura, S., Chandrasekaran, G., Popescu, D.E. and Duraisamy, J.H. (2022), “A systematic literature review on spam content detection and classification”, PeerJ Computer Science, Vol. 8, e830.
Kahneman, D. (2011), Thinking, Fast and Slow, Macmillan, New York.
Kaliyar, R.K. (2018), “Fake news detection using a deep neural network”, 2018 4th International Conference on Computing Communication and Automation (ICCCA), IEEE.
Khan, J.Y., Khondaker, M.T.I., Afroz, S., Uddin, G. and Iqbal, A. (2021), “A benchmark study of machine learning models for online fake news detection”, Machine Learning with Applications, Vol. 4, 100032.
Kim, A. and Dennis, A.R. (2019), “Says who? The effects of presentation format and source rating on fake news in social media”, MIS Quarterly, Vol. 43 No. 3, pp. 1025-1039.
Kim, A., Moravec, P.L. and Dennis, A.R. (2019), “Combating fake news on social media with source ratings: the effects of user and expert reputation ratings”, Journal of Management Information Systems, Vol. 36 No. 3, pp. 931-968.
Kim, B., Xiong, A., Lee, D. and Han, K. (2021), “A systematic review on fake news research through the lens of news creation and consumption: research efforts, challenges, and future directions”, PloS One, Vol. 16 No. 12, e0260080.
Koro-Ljungberg, M., Carlson, D.L. and Montana, A. (2019), “Productive forces of post-truth (s)?”, Qualitative Inquiry, Vol. 25 No. 6, pp. 583-590.
Kwanda, F.A. and Lin, T.T. (2020), “Fake news practices in Indonesian newsrooms during and after the Palu earthquake: a hierarchy-of-influences approach”, Information, Communication and Society, Vol. 23 No. 6, pp. 849-866.
Laato, S., Islam, A.N., Islam, M.N. and Whelan, E. (2020), “What drives unverified information sharing and cyberchondria during the COVID-19 pandemic?”, European Journal of Information Systems, Vol. 29 No. 3, pp. 288-305.
Lee, L.W., Hannah, D. and McCarthy, I.P. (2019), “Do your employees think your slogan is ‘fake news?’ A framework for understanding the impact of fake company slogans on employees”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 199-208.
Long, N.V., Richardson, M. and Stähler, F. (2019), “Media, fake news, and debunking”, Economic Record, Vol. 95 No. 310, pp. 312-324.
Lord Ferguson, S., Montecchi, M. and de Regt, A. (2019), “A false image of health: how fake news and pseudo-facts spread in the health and beauty industry”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 168-179.
Lozano, M.G., Brynielsson, J., Franke, U., Rosell, M., Tjörnhammar, E., Varga, S. and Vlassov, V. (2020), “Veracity assessment of online data”, Decision Support Systems, Vol. 129, 113132.
Lutzke, L., Drummond, C., Slovic, P. and Árvai, J. (2019), “Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook”, Global Environmental Change, Vol. 58, 101964.
Marin, L. (2021), “Three contextual dimensions of information on social media: lessons learned from the COVID-19 infodemic”, Ethics and Information Technology, Vol. 23 Suppl 1, pp. 79-86.
Marres, N. (2018), “Why we can't have our facts back”, Engaging Science, Technology, and Society, Vol. 4, pp. 423-443.
Marsden, C., Meyer, T. and Brown, I. (2020), “Platform values and democratic elections: how can the law regulate digital disinformation?”, Computer Law and Security Review, Vol. 36, 105373.
Meel, P. and Vishwakarma, D.K. (2020), “Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities”, Expert Systems with Applications, Vol. 153, 112986.
Melchior, C. and Oliveira, M. (2022), “Health-related fake news on social media platforms: a systematic literature review”, New Media and Society, Vol. 24 No. 6, pp. 1500-1522.
Mills, A.J. and Robson, K. (2019), “Brand management in the era of fake news: narrative response as a strategy to insulate brand value”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 159-167.
Mirhoseini, M., Early, S., El Shamy, N. and Hassanein, K. (2023), “Actively open-minded thinking is key to combating fake news: a multimethod study”, Information and Management, Vol. 60 No. 3, 103761.
Mourad, A., Srour, A., Harmanai, H., Jenainati, C. and Arafeh, M. (2020), “Critical impact of social networks infodemic on defeating coronavirus COVID-19 pandemic: twitter-based study and research directions”, IEEE Transactions on Network and Service Management, Vol. 17 No. 4, pp. 2145-2155.
Mridha, M.F., Keya, A.J., Hamid, M.A., Monowar, M.M. and Rahman, M.S. (2021), “A comprehensive review on fake news detection with deep learning”, IEEE Access, Vol. 9, pp. 156151-156170.
Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A. and Dwivedi, Y.K. (2023), “Deepfakes: deceptions, mitigations, and opportunities”, Journal of Business Research, Vol. 154, 113368.
Naeem, S.B. and Bhatti, R. (2020), “The COVID‐19 ‘infodemic’: a new front for information professionals”, Health Information and Libraries Journal, Vol. 37 No. 3, pp. 233-239.
Nguyen, D.H., de Leeuw, S. and Dullaert, W.E. (2018), “Consumer behaviour and order fulfilment in online retailing: a systematic review”, International Journal of Management Reviews, Vol. 20 No. 2, pp. 255-276.
Nirav Shah, M. and Ganatra, A. (2022), “A systematic literature review and existing challenges toward fake news detection models”, Social Network Analysis and Mining, Vol. 12 No. 1, p. 168.
Nolan, C.T. and Garavan, T.N. (2016), “Human resource development in SMEs: a systematic review of the literature”, International Journal of Management Reviews, Vol. 18 No. 1, pp. 85-107.
Nyilasy, G. (2019), “Fake news: when the dark side of persuasion takes over”, International Journal of Advertising, Vol. 38 No. 2, pp. 336-342.
Ongsulee, P. (2017), “Artificial intelligence, machine learning and deep learning”, 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE), IEEE.
Ozbay, F.A. and Alatas, B. (2020), “Fake news detection within online social media using supervised artificial intelligence algorithms”, Physica A: Statistical Mechanics and Its Applications, Vol. 540, 123174.
Ozturkcan, S., Kasap, N., Cevik, M. and Zaman, T. (2017), “An analysis of the Gezi Park social movement tweets”, Aslib Journal of Information Management, Vol. 69 No. 4, pp. 426-440.
Pal, A., Chua, A.Y.K. and Goh, D.H.-L. (2017), “Does KFC sell rat? Analysis of tweets in the wake of a rumor outbreak”, Aslib Journal of Information Management, Vol. 69 No. 6, pp. 660-673, doi: 10.1108/AJIM-01-2017-0026.
Pantumsinchai, P. (2018), “Armchair detectives and the social construction of falsehoods: an actor–network approach”, Information, Communication and Society, Vol. 21 No. 5, pp. 761-778.
Papanastasiou, Y. (2020), “Fake news propagation and detection: a sequential model”, Management Science, Vol. 66 No. 5, pp. 1826-1846.
Paschen, J. (2019), “Investigating the emotional appeal of fake news using artificial intelligence and human contributions”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 223-233.
Pennycook, G., Bear, A., Collins, E.T. and Rand, D.G. (2020a), “The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings”, Management Science, Vol. 66 No. 11, pp. 4944-4957.
Pennycook, G., McPhetres, J., Zhang, Y., Lu, J.G. and Rand, D.G. (2020b), “Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention”, Psychological Science, Vol. 31 No. 7, pp. 770-780.
Peterson, M. (2019), “A high-speed world with fake news: brand managers take warning”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 234-245.
Rahadian, D. and Nurfitriani, W. (2022), “Impact of news related to Covid-19 on stock market returns in five major ASEAN countries”, Economics and Business Quarterly Reviews, Vol. 5 No. 2, pp. 39-50.
Rajamma, R.K., Paswan, A. and Spears, N. (2019), “User-generated content (UGC) misclassification and its effects”, Journal of Consumer Marketing, Vol. 37 No. 2, pp. 125-138.
Rana, D.P., Agarwal, I. and More, A. (2018), “A review of techniques to combat the peril of fake news”, 2018 4th International Conference on Computing Communication and Automation (ICCCA), IEEE.
Robertson, J., Kirsten, M. and Ferreira, C.C. (2019), “The truth (as I see it): philosophical considerations influencing a typology of fake news”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 150-158.
Rodrigues, A.P., Fernandes, R., Shetty, A., Lakshmanna, K. and Shafi, R.M. (2022), “Real-time twitter spam detection and sentiment analysis using machine learning and deep learning techniques”, Computational Intelligence and Neuroscience, pp. 1-14, doi: 10.1155/2022/5211949.
Ross, B., Pilz, L., Cabrera, B., Brachten, F., Neubaum, G. and Stieglitz, S. (2019), “Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks”, European Journal of Information Systems, Vol. 28 No. 4, pp. 394-412.
Ryan, C.D., Schaul, A.J., Butner, R. and Swarthout, J.T. (2020), “Monetizing disinformation in the attention economy: the case of genetically modified organisms (GMOs)”, European Management Journal, Vol. 38 No. 1, pp. 7-18.
Sampson, J.P., Osborn, D.S., Kettunen, J., Hou, P.C., Miller, A.K. and Makela, J.P. (2018), “The validity of social media–based career information”, The Career Development Quarterly, Vol. 66 No. 2, pp. 121-134.
Saquete, E., Tomás, D., Moreda, P., Martínez-Barco, P. and Palomar, M. (2020), “Fighting post-truth using natural language processing: a review and open challenges”, Expert Systems with Applications, Vol. 141, 112943.
Sela, A., Milo, O., Kagan, E. and Ben-Gal, I. (2020), “Improving information spread by spreading groups”, Online Information Review, Vol. 44 No. 1, pp. 24-42, doi: 10.1108/OIR-08-2018-0245.
Shahzad, K., Khan, S.A., Ahmad, S. and Iqbal, A. (2022), “A scoping review of the relationship of big data analytics with context-based fake news detection on digital media in data age”, Sustainability, Vol. 14 No. 21, 14365.
Sharif, A., Awan, T.M. and Paracha, O.S. (2022), “The fake news effect: what does it mean for consumer behavioral intentions towards brands?”, Journal of Information, Communication and Ethics in Society, Vol. 20 No. 2, pp. 291-307.
Shin, J., Jian, L., Driscoll, K. and Bar, F. (2018), “The diffusion of misinformation on social media: temporal pattern, message, and source”, Computers in Human Behavior, Vol. 83, pp. 278-287.
Silverman, C. (2016), “Viral fake election news outperformed real news on Facebook in final months of the US election”, BuzzFeed News, 16 November.
Song, R., Kim, H., Lee, G.M. and Jang, S. (2019), “Does deceptive marketing pay? The evolution of consumer sentiment surrounding a pseudo-product-harm crisis”, Journal of Business Ethics, Vol. 158 No. 3, pp. 743-761.
Talwar, S., et al. (2020), “Sharing of fake news on social media: application of the honeycomb framework and the third-person effect hypothesis”, Journal of Retailing and Consumer Services, Vol. 57, 102197.
Talwar, S., Dhir, A., Kaur, P., Zafar, N. and Alrasheedy, M. (2019), “Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior”, Journal of Retailing and Consumer Services, Vol. 51, pp. 72-82.
Thelwall, M. and Thelwall, S. (2020), “A thematic analysis of highly retweeted early COVID-19 tweets: consensus, information, dissent and lockdown life”, Aslib Journal of Information Management, Vol. 72 No. 6, pp. 945-962, doi: 10.1108/AJIM-05-2020-0134.
Thompson, R.C., Joseph, S. and Adeliyi, T.T. (2022), “A systematic literature review and meta-analysis of studies on online fake news detection”, Information, Vol. 13 No. 11, p. 527.
Thompson, N., Wang, X. and Daya, P. (2020), “Determinants of news sharing behavior on social media”, Journal of Computer Information Systems, Vol. 60 No. 6, pp. 593-601, doi: 10.1080/08874417.2019.1566803.
Tschiatschek, S., Singla, A., Gomez Rodriguez, M., Merchant, A. and Krause, A. (2018), “Fake news detection in social networks via crowd signals”, Companion Proceedings of The Web Conference 2018.
Vafeiadis, M., Bortree, D.S., Buckley, C., Diddi, P. and Xiao, A. (2020), “Refuting fake news on social media: nonprofits, crisis response strategies and issue involvement”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 209-222, doi: 10.1108/JPBM-12-2018-2146.
Vasist, P.N. and Krishnan, S. (2023), “Fake news and sustainability-focused innovations: a review of the literature and an agenda for future research”, Journal of Cleaner Production, 135933.
Vosoughi, S., Roy, D. and Aral, S. (2018), “The spread of true and false news online”, Science, Vol. 359 No. 6380, pp. 1146-1151.
Wang, Y., McKee, M., Torbica, A. and Stuckler, D. (2019), “Systematic literature review on the spread of health-related misinformation on social media”, Social Science and Medicine, Vol. 240, 112552.
Wiesenberg, M. and Tench, R. (2020), “Deep strategic mediatization: organizational leaders’ knowledge and usage of social bots in an era of disinformation”, International Journal of Information Management, Vol. 51, 102042.
Wolverton, C. and Stevens, D. (2019), “The impact of personality in recognizing disinformation”, Online Information Review, Vol. 44 No. 1, pp. 181-191, doi: 10.1108/OIR-04-2019-0115.
Wu, Z., Pi, D., Chen, J., Xie, M. and Cao, J. (2020), “Rumor detection based on propagation graph neural network with attention mechanism”, Expert Systems with Applications, Vol. 158, 113595.
Xia, H., Wang, Y., Zhang, J.Z., Zheng, L.J., Kamal, M.M. and Arya, V. (2023), “COVID-19 fake news detection: a hybrid CNN-BiLSTM-AM model”, Technological Forecasting and Social Change, Vol. 195, 122746.
Zhang, D., Li, W., Niu, B. and Wu, C. (2018), “Research on text classification for identifying fake news”, 2018 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), IEEE.
Zhang, D., Wang, Y. and Tan, C. (2023), “A deep learning approach for detecting fake reviewers: exploiting reviewing behavior and textual information”, Decision Support Systems, Vol. 166, 113911.
Zhou, X. and Zafarani, R. (2020), “A survey of fake news: fundamental theories, detection methods, and opportunities”, ACM Computing Surveys (CSUR), Vol. 53 No. 5, pp. 1-40.
About the authors
Bahareh Farhoudinia completed her PhD at the Sabanci Business School.
Selcen Ozturkcan holds a permanent faculty position at the School of Business and Economics of Linnaeus University (Sweden) and is affiliated with Sabanci Business School (Turkey). Her research on digital experiences, published as journal articles, books, book chapters and case studies, is accessible at http://www.selcenozturkcan.com.
Nihat Kasap works as Professor of Management Information Systems at the Sabanci Business School, where he also serves as the Dean. His research focuses on social media analytics and data mining, mobile technologies and M-government applications, pricing and quality of service in telecommunication networks, generation expansion planning and investments in the energy sector, mathematical programming and heuristic design and optimization. His publications are accessible at https://sbs.sabanciuniv.edu/en/faculty_members/detail/820