Abstract

The high volume of user-generated content in social networks and online platforms facilitates instant access to a substantial amount of data. However, users’ inability to validate and verify the content of online information exacerbates the spread of false and misleading information. Engagement with disinformation can result in inaccurate judgment and maladaptive decision-making, which becomes especially problematic when disinformation targets physical infrastructures. In this research, we explore the effects of a hypothetical scenario where disinformation is spread claiming that a particular subway station in New York City will be closed for maintenance (similar to rumors circulated in New York City in recent years). Assuming that subway users plan their routes based on efficiency (i.e. the shortest travel time), believing such false information would lead to extended travel times and create an unexpected demand for alternative subway lines. Situations like these raise several questions: (i) How can we devise an efficient mechanism to limit the spread of disinformation in a social network? (ii) How can we interdict the spread of disinformation to combat weaponized disinformation campaigns initiated by adversaries? and (iii) What effect does information protection have on the utilization of infrastructure network components? To answer these questions, we linearized a nonlinear integer programming model for competitive information dissemination and proposed a mixed-integer linear programming model to interdict the spread of disinformation in a social network, taking into account the structure of social interactions to help mitigate the adverse effects of uncontrolled disinformation spread. We illustrate the proposed model with a case study of the New York City subway system.

1. Introduction

Due to interdependencies among infrastructure networks, an interruption in the operation and functionality of one network can result in cascading effects within the network itself and other physically or logically connected networks. For example, an interruption in the power grid can plunge the subway into darkness, resulting in reduced performance [1].

Such interdependencies are not limited to physical infrastructure networks. A network of connected individuals or infrastructure users can also be interdependent with a physical infrastructure network. Indeed, physical networks are being increasingly integrated with information technology to provide more automated and efficient customer service. Information flows among the network of users can impact how they engage with physical infrastructure. For example, a social media post from a water utility company asking to conserve water during a particular part of the day may increase conservation behaviors. Such messages are important to new “demand response” programs being implemented by various utilities to regulate and reduce variability in demand throughout the day [2, 3].

However, the spread of disinformation, or a “purposeful strategy to induce false belief, channel behavior, or damage trust” [4] on social networks, online forums, and media outlets, has led to increasingly problematic outcomes [5]. The mitigation strategies for the COVID-19 pandemic [6–10], the outcome of 2020 US presidential elections [11], and the impacts of climate change [12, 13] are all recent examples wherein disinformation has swayed certain users to behave in ways leading to adverse consequences. In the aforementioned examples, the adoption of disinformation not only affected people on an individual level (i.e. loss of life due to COVID, reduced political engagement, or adverse health outcomes due to poor air quality) but also resulted in profound social consequences by eroding trust in the important pillars of modern society (e.g. medicine, science, and democracy, in general). Thus, with easily shareable disinformation, adversaries have found a new means of attack by targeting institutions and their trustworthiness.

Given how easily disinformation can be disseminated, another threat is potentially on the horizon: An adversary who seeks to attack infrastructure systems indirectly by altering the consumption behavior of human intermediaries via weaponized disinformation. These human intermediaries—the adopters of disinformation and users of such infrastructures—would be manipulated to interact with infrastructures in different-than-nominal ways that could lead to disruptive effects. There is a plethora of literature examining situations where such adversaries directly attack infrastructures (e.g. through viruses and ransomware [14, 15]). We focus instead on indirect attacks via weaponized disinformation. Since disinformation attacks are cost-effective and easy, they have been on the rise in recent years, necessitating research that examines potential avenues for combating weaponized disinformation [16]. False pricing disinformation could lead to the overloading of the power grid in specific locations, causing a wider spread of power disruptions [17–19]. Transportation networks could also be at risk of disinformation-driven disruptions, as a fake traffic alert could convince drivers to avoid certain locations, resulting in heavy traffic and delays in other parts of a city [20]. Similarly, Jamalzadeh et al. [21] explored how disinformation campaigns could disrupt supply chains via closures of multi-commodity rail networks. These scenarios motivate our interest in modeling mitigation efforts to interdict disinformation weaponized against infrastructure systems, transportation networks in particular, spread by adversaries in cyberspace.

We borrow concepts from recent models of disinformation diffusion and infrastructure impact [19, 21], but we specifically focus here on how to represent an interdiction of the spread of disinformation. We propose a mechanism to interdict the spread of disinformation in cyberspace to prevent unexpected human responses (i.e. changes in user consumption behavior) from causing adverse impacts on infrastructure performance. This mechanism involves three main steps: (i) The interdependent structure of information and physical layers is modeled and integrated based on a one-to-many relationship so that each intermediary in cyberspace is associated with several components of the physical infrastructure affected by the intermediary, as depicted in Fig. 1. (ii) The spread of disinformation is interdicted by counteracting disinformation broadcasting; to do so, we simplify a complex model of information diffusion [22] into a form that can be solved with greater computational efficiency while still guaranteeing optimal solutions. (iii) The effect of disinformation interdiction efforts in cyberspace is projected onto the physical infrastructure layer.

Figure 1. Interdependent information and physical layers, where disinformed users (red nodes in the information layer) can misuse and disrupt nodes in the physical layer (adapted from [21]). If some proportion of users surrounding a physical node receive disinformation, then the node is disrupted.

This paper is organized as follows. Section 2 provides a background on disinformation diffusion in social networks and its impact on physical infrastructure networks. Section 3 describes the proposed mathematical programming model of interdiction designed to minimize the harmful impacts of disinformation on physical infrastructure networks. Section 4 illustrates the applicability of the proposed model with a case study involving the New York City (NYC) subway network. Finally, Section 5 summarizes the results, implications, and future research directions.

2. Background and literature review

In the present research, we explore the mechanism of how disinformation spreads in cyberspace, how to curb its reach, and how to minimize its adverse impacts on physical infrastructures. We discuss the interactions between users in the information layer and offer some formulations to quantify the outcome of these interactions and techniques to combat disinformation to protect physical infrastructures. Our proposed method focuses on interdicting the flow of disinformation based on the information layer’s structure and users’ availability to prevent the unexpected usage of infrastructure networks. This section reviews relevant literature and identifies research gaps that guide our approach.

Physical infrastructure systems are typically modeled as networks (or graphs) that involve nodes (or vertices) and links (or edges) that connect the nodes. The nodes represent the assets, and the links represent the connections that enable the flow of commodities (enabling services) between each pair of nodes. Connections between components (i.e. nodes and links) from one layer to another have been studied in the literature as interdependent networks [23–25]. The connections determine how the functionality of a component in one network can affect the functionality of another network.

Recent literature has shown the vulnerability of physical infrastructure systems to disinformation propagated by human intermediaries in different contexts [15, 26–28]. Harmful disinformation spread in social networks can lead to economic damage and productivity losses [29]. Such weaponized disinformation attacks can result in blackouts [18], heavy traffic in certain areas [20], and interruption in political processes [30], among others. In these situations, the ability of physical infrastructure networks to mitigate the magnitude and duration of the failure and to return to normal operation is important [31–33]. The damages to physical infrastructure can result in cascading failures in other infrastructures with interdependent connections [34–37]. For example, a disruption in electric power can cause disruptions in related infrastructure networks such as water pumps, traffic lights, and communication towers [24, 38, 39]. Other examples include the injection of false data to interrupt operators’ control efforts in electric power networks [14, 40], false pricing attacks that lead to shifts in customer power usage with adverse effects [2, 17, 18], and the spread of false traffic information that leads to urban traffic congestion at a city scale [20]. Although the activity of human intermediaries in cyberspace is not easily observed, the propagation of disinformation in cyberspace can result in dramatic changes in consumption behavior and subsequent damage to physical infrastructures [33, 41, 42]. Thus, interdicting such diffusion of false information in the information layer can mitigate the adverse impact of abnormal consumption behavior in the physical layer.

Online user-generated content and its consumption are growing on the internet [43–48]. Such content broadcast over the web is available to all users and can influence how they learn, think, socialize, and make decisions. Unfortunately, not only is a significant portion of users unable to distinguish factual from false online content, but also a majority of them (i.e. three out of four users) mistakenly overestimate their ability to detect fake news shared on social network platforms [49–51]. Given how quickly and broadly disinformation can spread, especially when content is not verified (or even unverifiable) [52–54], user engagement with physical infrastructure networks as a result of disinformation is a major concern. In such cases, the dramatic spread of disinformation can result in sizable consequences such as those described above (e.g. a city-wide blackout [18] or heavy urban traffic load [20]).

Tech companies employ artificial intelligence to detect disinformation and combat its spread, albeit with limited success [55]. Other strategies might be more effective at interdicting the spread of disinformation to protect social media users’ access to verified information and alleviate adverse consequences of weaponized disinformation attacks. We discuss these alternate strategies in more detail below.

There are a number of different approaches to model the diffusion of information in social networks based on different assumptions and characteristics. Three common categories of approaches include: (i) cascade models [56], (ii) linear threshold models [57], and (iii) epidemiological models [58]. The first two models are based on the likelihood that users may be exposed to and adopt disinformation, and the third is based on the influence of users relatively close to each other in a network. These approaches were used in research examining how to maximize the influence of information on social networks by targeting specific users to gain a competitive advantage over rivals [59, 60]. Several mathematical models have been developed to this end. For example, Bharathi et al. [61] developed a game theory approach based on a cascade model to influence target users by word-of-mouth propagation. Bozorgi et al. [62] proposed a linear threshold model to maximize influence on target users with the minimum number of influencers (i.e. social network users with a relatively high level of trust by other users). Qiang et al. [63] developed a mixed-integer programming model based on the linear threshold information diffusion model to estimate diffusion influence weights between users in large-scale social networks. Qiang et al.’s model predicts how social network users get exposed to information online. Epidemiological models, originally developed to represent the spread of diseases, have also been used to model the spread of disinformation. Specifically, the susceptible-infected-recovered (SIR) model and its derivatives are examples of such models wherein individuals are classified into different categories: users who adopt disinformation and react to it (infected), users who are not affected by disinformation (recovered), and users who have not yet been exposed to disinformation (susceptible) [64, 65]. In SIR models, coefficients govern the speed at which disinformation is propagated and detected. The literature on information diffusion processes allows us to develop mathematical formulations and deploy them in an optimization model designed to combat the spread of disinformation in social networks. As a result, we propose solutions to prevent unexpected consequences of disinformation spread related to physical infrastructure networks.
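Epidemiological diffusion can be illustrated with a minimal discrete-time simulation. The sketch below is a generic SIR-style rumor process on a random contact graph, not the diffusion model adopted later in this paper; the transmission rate beta, detection rate gamma, seed users, and graph are all hypothetical.

    import random

    import networkx as nx

    def simulate_sir_rumor(graph, seeds, beta=0.1, gamma=0.05, steps=50, seed=0):
        """Toy SIR-style rumor spread: S = not yet exposed, I = adopted and spreading,
        R = detected the rumor and stopped spreading."""
        rng = random.Random(seed)
        state = {v: "S" for v in graph}
        for s in seeds:
            state[s] = "I"
        for _ in range(steps):
            new_state = dict(state)
            for v in graph:
                if state[v] == "I":
                    for u in graph.neighbors(v):
                        if state[u] == "S" and rng.random() < beta:
                            new_state[u] = "I"   # rumor transmitted along an edge
                    if rng.random() < gamma:
                        new_state[v] = "R"       # rumor detected; user stops spreading
            state = new_state
        return state

    G = nx.erdos_renyi_graph(200, 0.05, seed=1)
    final = simulate_sir_rumor(G, seeds=[0, 1, 2])
    print({s: sum(1 for v in final if final[v] == s) for s in "SIR"})

Here beta and gamma play the role of the coefficients that govern how quickly disinformation is propagated and detected.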

What distinguishes our article from existing work on infrastructure protection is that we model the interdependent relationship between the information and physical layers (see Fig. 1). We aim to interdict the spread of disinformation in the information layer to mitigate its impact on the physical layer. In other words, we propose a mechanism to prevent disinformation in social networks from causing unexpected consumption behaviors affecting physical infrastructure. We provide a general formulation to address this problem and apply the model to an urban subway network to curb maladaptive consumption behavior driven by disinformation.

3. Materials and methods

We propose a mixed-integer linear programming model to stabilize the components of the physical infrastructure network and its performance by targeting cyberspace actors that spread disinformation in the information layer.

To describe widespread disinformation diffusion in social networks, we assume that the users are more likely to adopt information (including disinformation, or deliberately misleading information, and misinformation, or unintentionally false or inaccurate information) received from relatively closer connections (e.g. friends) as compared to those who are farther away (e.g. distant acquaintances or multiple connections removed). This assumption complies with Tobler’s first law of geography [66], which states that “everything is related to everything else, but things near are more related than distant things.” We can apply Tobler’s first law of geography to the social media context, predicting that users are more likely to be influenced by their immediate contacts than their indirect contacts [67]. Support for the law is inferred when a closer connection in an individual’s network exerts greater persuasive influence relative to more remote connections [22]. This law applies to our model because it helps us target specific users who are most likely to influence others, allowing us to minimize the spread of disinformation by focusing on those who can most effectively limit its reach.

We model the information layer (here, a social network) as a directed graph denoted as |$ G(V^{t},E^{t}) $|, where |$ V^{t} $| and |$ E^{t} $| represent the sets of users (i.e. nodes) and interactions (i.e. links), respectively. We denote the shortest interaction distance (i.e. the length of the shortest path) between a pair of social network users |$ i\in V^{t} $| and |$ j\in V^{t} $| as |$ \bar{d}_{ij}\geq 0 $|, where |$ \bar{d}_{ij}\rightarrow\infty $| if user |$ i $| has no interaction with user |$ j $| at all. While the influence between two users can be modeled by means other than the shortest path and the timing of disinformation spread, we assume that short-term influence is most likely between users connected by the shortest paths. Given an interaction distance threshold |$ k\in K $|, where |$ K $| denotes the set of distance thresholds considered, we extend the variable |$ \bar{d}_{ij} $| into a binary parameter |$ d_{ijk} $|, where |$ d_{ijk}=1 $| if user |$ i $| follows user |$ j $| within interaction distance |$ k $|, and |$ d_{ijk}=0 $| otherwise. We employ this parameter to calculate the number of users adopting either accurate information or disinformation in the neighborhood of a social network user, and it can be precomputed from the interaction graph as sketched below.
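As an illustration, the parameters |$ d_{ijk} $| can be precomputed from pairwise shortest-path lengths in the interaction graph. The sketch below uses networkx with a hypothetical edge list, treats unreachable pairs as infinitely distant, and assumes an edge (j, i) means that user i follows user j.

    import networkx as nx

    # Hypothetical directed interaction graph: an edge (j, i) means user i follows user j,
    # so content posted by j can reach i.
    G = nx.DiGraph([("j1", "i1"), ("j1", "j2"), ("j2", "i2"), ("i1", "i2")])

    K = [1, 2]  # interaction distance thresholds considered

    # Shortest-path length from every source user to every reachable user.
    dist = dict(nx.all_pairs_shortest_path_length(G))

    # d[(i, j, k)] = 1 if user i can receive content from user j within distance k.
    d = {}
    for j in G.nodes:
        for i in G.nodes:
            d_bar = dist.get(j, {}).get(i, float("inf"))  # infinite if no interaction at all
            for k in K:
                d[(i, j, k)] = 1 if 0 < d_bar <= k else 0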

Let |$ A $|, |$ B $|, and |$ C $| denote users of different types as described in Table 1, depending on whether each social network user adopts neither accurate information nor disinformation, adopts disinformation, or adopts accurate information, respectively. We partition the set |$ A $| into sets |$ A_{m} $| and |$ A_{n} $| to denote whether a social network user who adopts neither accurate information nor disinformation is a consumer of the infrastructure under consideration or not, respectively. Moreover, users who adopt disinformation or accurate information are classified as members of the sets |$ B $| and |$ C $|, respectively. These sets and distance parameters are used in our proposed mixed-integer linear programming model to interdict the influence of different social network users on the spread of disinformation by targeting their user accounts (i.e. by preventing them from receiving disinformation). The target users in the optimization model do not include the infrastructure commodity users who acted on disinformation. An added advantage of targeting a specific set of users to interdict disinformation is that it reduces the computational complexity of the optimization algorithm.

Table 1. Optimization model notation

Sets
|$ A $|: Users who are unaware of disinformation but would have acted on it if exposed
|$ A_{m} $|: Subset of users who use the infrastructure, where |$ A_{m}\subseteq A $|
|$ A_{n} $|: Subset of users who do not use the infrastructure, where |$ A_{n}\subseteq A $|
|$ B $|: Users who consumed and shared disinformation
|$ C $|: Users who detected disinformation and shared accurate information messages

Parameters
|$ n $|: Total number of users targeted to share accurate information
|$ \pi_{i} $|: Personality trait score of user |$ i $|
|$ d_{ijk} $|: |$ =1 $| if user |$ i $| receives information from user |$ j $| (i.e. |$ i $| follows |$ j $|) within interaction distance threshold |$ k $|; |$ =0 $| otherwise

Decision variables
|$ x_{j} $|: |$ =1 $| if user |$ j $| is targeted to share accurate information from their immediate neighbors; |$ =0 $| otherwise

Each interaction between a pair of individual users is considered to be a link in the information layer, determining whether communication between that pair of users is established to transfer accurate information or disinformation messages. Thus, nodes in the information layer are associated with the probabilities that users are online and receptive to a message, and links are associated with the chance that an interaction attempts to influence immediate connections. From this point forward, the links used for influencing immediate connections of a particular user are referred to as “activated interactions.” The set of activated interactions shapes the structure of the information layer.

In addition to the structure of social networks, factors such as user personality traits, the beliefs of information recipients, the perceived quality and reliability of information, the attractiveness of the message, the quality of interaction between the sender and the recipient, and the consistency between the message and the recipient’s beliefs affect the extent to which users are influenced by disinformation [17, 68–73].

In social networks, personality traits can be inferred from digital footprints collected from user profiles, such as demographic information and visited locations [74, 75], and have been shown to affect a person’s receptivity to disinformation. In the model, we account for personality traits, specifically the “big five” (i.e. openness, conscientiousness, extraversion, agreeableness, and neuroticism). Personality traits have been shown to affect behavior, including information behavior; for example, neuroticism and extraversion influence the propensity to believe online rumors [73]. As Lai et al.’s survey of over 11 500 social media users indicates, individuals with high neuroticism and extraversion were the most susceptible to rumors [73].

The attitudinal consistency between senders and recipients of disinformation is likewise important. For example, some online users may trust disinformation shared by their contacts simply due to the similarity in shared beliefs, even if the message is not verified or verifiable [68, 76]. Conversely, accurate information from users with opposing attitudinal positions can be discounted merely because it comes from an attitudinally discrepant source. Other factors may also play a role. For example, the motivation of the sharer (such as interacting with people to feel influential) and the amount of information exchanged [68, 77, 78] can also guide information sharing behaviors. Based on the influence optimization problem developed by Kempe et al. [56], we define the likelihood of user |$ i $| sharing disinformation with their immediate neighbor |$ j $| in Equation (1):

|$ p_{ij}=\zeta_{i}\,\zeta_{j}\,\pi_{i}\,\psi_{ij}\,\theta_{j} $|  (1)

where (i) |$ \pi_{i} $| describes the likelihood that user |$ i $| believes disinformation, (ii) |$ \psi_{ij} $| indicates the penetration level (i.e. the likelihood of successful transfer) of the message between the pair of users |$ i $| and |$ j $|⁠, and (iii) |$ \theta_{j} $| represents the likelihood that user |$ j $| shares disinformation with their immediate neighbors. Also, |$ \zeta_{i} $| and |$ \zeta_{j} $| are the probabilities that users |$ i $| and |$ j $| are actively online and are using the social network platform or are involved in the community to interact with other members. The personality traits of the user can influence specifications (i) and (iii) [69, 73], and specification (ii) can be affected by the characteristics of the disinformation message (e.g. the credibility of the source) [68]. Equation (1) can be generalized based on the characteristics of senders, recipients, and their interactions. For example, with a large number of online users willing to share disinformation (high |$ \theta_{j} $|⁠) and a greater likelihood of disinformation adoption among them (high |$ \psi_{ij} $|⁠), a relatively high percentage of users will be exposed to disinformation [79]. These parameters determine the likelihood of disinformation spreading and help identify which users should be protected (i.e. blocked from receiving disinformation) to prevent its dissemination.
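For illustration, the sketch below evaluates this likelihood for a single pair of users, assuming the multiplicative combination of factors in Equation (1); all numeric values are hypothetical.

    # Hypothetical factor values for one sender-recipient pair (Equation 1).
    zeta_i, zeta_j = 0.9, 0.8   # probabilities that users i and j are actively online
    pi_i = 0.6                  # likelihood that user i believes disinformation
    psi_ij = 0.7                # penetration level of the message between i and j
    theta_j = 0.5               # likelihood that user j shares disinformation onward

    # All factors must "succeed" for disinformation to propagate along the link,
    # hence the product form assumed here.
    p_ij = zeta_i * zeta_j * pi_i * psi_ij * theta_j
    print(f"likelihood of disinformation transfer: {p_ij:.3f}")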

Using the notation in Table 1, we extend the fractional programming model proposed by Carnes et al. [22] in Equations (2–4) to identify the optimal set of users who are potential disinformation targets and must be protected from receiving disinformation. The objective function, which minimizes the overall likelihood that users of type |$ A_{m} $| adopt disinformation, is given as:

|$ \underset{x}{\mathrm{min}}\sum_{i\in A_{m},k\in K}\pi_{i}\,\frac{\sum_{j\in B}d_{ijk}}{\sum_{j\in A}d_{ijk}x_{j}+\sum_{j\in B}d_{ijk}+\sum_{j\in C}d_{ijk}} $|  (2)

subject to

|$ \sum_{j\in A}x_{j}\leq n $|  (3)

|$ x_{j}\in\{0,1\},\;\forall j\in A $|  (4)

where the constraint |$ \sum_{j\in A}x_{j}\leq n $| limits the number of users blocked from receiving disinformation, and the constraint |$ x_{j}\in\{0,1\} $| describes the nature of the decision variables used in the model. We reformulate the nonlinearities imposed in the optimization model as a mixed-integer linear programming problem below to enable optimal and computationally efficient solutions. Proposition 1 provides the reformulation, with the proof provided thereafter.

We begin by defining the mixed-integer optimization model and its key equations. The objective function aims to minimize the cumulative personality trait scores of users selected to share accurate information. This is mathematically expressed as: |$ \underset{x, y, z}{\mathrm{min}}\sum_{i\in A_{m},k\in K}\pi_{i}y_{ik} $|⁠, where |$ y_{ik} $| is a decision variable indicating whether user |$ i $| in the subset |$ A_{m} $| interacts with accurate information at distance threshold |$ k $|⁠, and |$ \pi_{i} $| represents the personality trait score of user |$ i $|⁠. The minimization ensures that users with higher influence (as indicated by |$ \pi_{i} $|⁠) are carefully selected to optimize the dissemination of accurate information.

The first constraint ensures that users in |$ A_{m} $| receive accurate information from at least one source. This is formulated as: |$ \sum_{j\in A}d_{ijk}z_{ijk}+(y_{ik}-1)\sum_{j\in B}d_{ijk}+y_{ik}\sum_{j\in C} d_{ijk}\geq 0 , \;\forall i\in A_{m},\;\forall k\in K $|⁠, where |$ d_{ijk} $| is a binary parameter indicating whether user |$ i $| follows user |$ j $| within threshold |$ k $|⁠, and |$ z_{ijk} $| captures the spread of accurate information from user |$ j $| to |$ i $|⁠. The terms involving |$ B $| and |$ C $| adjust for the presence of disinformation and corrective messages.

A budget constraint imposed on the total number of users targeted to share accurate information is given as: |$ \sum_{j\in A}x_{j}\leq n $|⁠. This ensures that the number of selected users does not exceed the predefined budget |$ n $|⁠.

The next constraints regulate the interactions between decision variables and are defined as: |$ y_{ik}-z_{ijk}\geq 0 , \;\forall i\in A_{m},\; j\in A , \; k\in K $|, which ensures that if information reaches user |$ i $|, then |$ y_{ik} $| must be at least as large as |$ z_{ijk} $|, maintaining consistency in information flow. Similarly, the constraint to ensure that accurate information is shared only by targeted users is given as: |$ x_{j}-z_{ijk}\geq 0 , \;\forall i\in A_{m},\; j\in A , \; k\in K $|. To guarantee that information reaches user |$ i $| only if it is shared by some targeted user |$ j $|, we have the logical constraint defined as: |$ z_{ijk}-y_{ik}-x_{j}+1\geq 0 , \;\forall i\in A_{m},\; j\in A , \; k\in K $|.

Lastly, we impose constraints on the decision variables, defined as: |$ x_{j}\in\{0,1\} $|⁠, |$ y_{ik}\in[0,1] $|⁠, and |$ z_{ijk}\in[0,1] $|⁠, where |$ x_{j} $| is a binary variable indicating whether user |$ j $| is selected to share accurate information, while |$ y_{ik} $| and |$ z_{ijk} $| are bounded continuous variables modeling the diffusion process.

With these definitions, we can now state the proposition:

 

Proposition 1.

The fractional programming problem (Equations 2–4) is equivalent to the mixed-integer linear programming problem with the minimization and constraints provided above.

Proof. We begin by transforming the fractional objective function (2) into a linear function. With the change of variable in Equation (5), the objective function (2) is converted to Equation (6), with a set of constraints added as Equation (7). We can equivalently re-write Equations (6) and (7) as Equations (8) and (9).

The set of constraints in Equation (9) is nonlinear and also requires linearization, which can be achieved by introducing new variables into the model. Using McCormick’s relaxation [80], we define the new variables in Equation (10) and convert the nonlinear integer programming model (2–4) into a mixed-integer linear programming model.

|$ z_{ijk}=y_{ik}\,x_{j},\qquad\forall i\in A_{m},\;j\in A,\;k\in K $|  (10)
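To make the proof concrete, the following sketch reproduces its main steps, assuming the change of variable introduces |$ y_{ik} $| as the fractional term of the objective; the exact statements of Equations (5–9) may differ in form from this reconstruction.

    % Change of variable (cf. Equation 5): one y_{ik} per fractional term
    y_{ik} = \frac{\sum_{j\in B} d_{ijk}}
                  {\sum_{j\in A} d_{ijk} x_{j} + \sum_{j\in B} d_{ijk} + \sum_{j\in C} d_{ijk}}

    % Linear objective after the substitution (cf. Equation 6)
    \min_{x,\,y} \sum_{i\in A_{m}} \sum_{k\in K} \pi_{i}\, y_{ik}

    % Constraint carrying the definition of y_{ik} (cf. Equation 7), after
    % multiplying through by the nonnegative denominator
    y_{ik}\Big(\sum_{j\in A} d_{ijk} x_{j} + \sum_{j\in B} d_{ijk} + \sum_{j\in C} d_{ijk}\Big)
        \geq \sum_{j\in B} d_{ijk}

    % Rearranged into the bilinear form used in the model (cf. Equations 8 and 9)
    \sum_{j\in A} d_{ijk}\,(y_{ik} x_{j}) + (y_{ik}-1)\sum_{j\in B} d_{ijk}
        + y_{ik}\sum_{j\in C} d_{ijk} \geq 0

    % Each bilinear product y_{ik} x_{j} is replaced by z_{ijk} (Equation 10) and
    % bounded by McCormick's envelope
    z_{ijk} \leq y_{ik}, \quad z_{ijk} \leq x_{j}, \quad z_{ijk} \geq y_{ik} + x_{j} - 1, \quad z_{ijk} \geq 0

Because |$ x_{j} $| is binary and |$ y_{ik}\in[0,1] $|, the McCormick envelope is tight, so the resulting mixed-integer linear program is equivalent to the original fractional formulation rather than a relaxation of it.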

The ability of urban transportation infrastructure to maintain service levels is vital to community productivity [81]. The spread of disinformation can disrupt service levels by manipulating user behavior. Suppose disinformation is spread on Twitter (recently rebranded as X), claiming that a highly used NYC subway station has been closed. Subway users whose daily travel route passes through that station and who believe the disinformation may choose an alternative route, increasing their travel distance and time. When a large number of passengers adopt this disinformation and change their typical travel routes, the subway network will no longer be able to provide optimized transit times as planned because the demand shifts unnecessarily to other areas of the network. Thus, making passengers aware of the disinformation can keep subway network performance relatively stable.

To show how this interdiction helps maintain the utilization of infrastructure components, we define the physical layer and integrate it with the information layer. We denote the graph associated with the infrastructure network as |$ G(V^{u},E^{u}) $|, where |$ V^{u} $| represents the set of infrastructure junctions (i.e. metro stations) and |$ E^{u} $| represents the set of connections between junctions (i.e. metro lines). Then, we assign each user to a set of connected infrastructure links, |$ E^{u} $| (e.g. a subway passenger traveling through rail lines that connect one station to another). Once users are exposed to disinformation claiming that certain infrastructure components are unusable (e.g. subway routes closed for maintenance), they start utilizing alternative infrastructure components (i.e. alternative subway routes) to satisfy their needs. By solving the proposed disinformation interdiction model, we demonstrate how interdiction helps maintain the utilization of infrastructure components (i.e. peak usage of nodes and links).

A subway system consists of a set of stations (nodes) for passengers to enter and exit the system and lines (links) that connect the stations to transfer passengers from one station to another. The subway system is used to meet passenger needs as they travel from an origin station to a destination. Typically, a passenger enters the origin station, passes through at least one line, and exits at the destination station.

4. Results

The impetus for this paper is a scenario wherein a weaponized disinformation attack attempts to manipulate subway routes. Such an attack can consist of a list of stations claimed to be closed due to an unusual event, such as maintenance or a potential terror attack, to convince passengers to avoid those stations during their commute. The adversary’s disinformation can be deployed either to redirect traffic toward certain locations or to move passengers away from them. We refer to the former scenario as the “link-up” and the latter as the “split-up.” In the “link-up” scenario, the aim is to overload some stations with passengers to reduce the quality of service of the subway system. In this scenario, a station is targeted so that, if it is closed, the passengers’ alternative routes pass through the most crowded station (i.e. a station with relatively higher betweenness centrality [82], suggesting that such a station is more likely to fall on the shortest travel paths). Then, the next most critical station is added to the list of closed stations, one at a time, until a predetermined number of stations is reached. The “split-up” scenario is deployed to move passengers away from the most-traveled stations, pushing them toward other stations that may become overcrowded. Under the “split-up” scenario, a station that lies on the largest number of shortest travel paths is targeted first, and the next station is added to the set of target stations incrementally, one at a time, based on its betweenness centrality. In these two scenarios, keeping some passengers from engaging with disinformation, so that they travel their original routes, is essential, and this is what we address in the model.
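For illustration, candidate stations for the two scenarios can be ranked by betweenness centrality as in the sketch below; the toy subway graph and travel times are hypothetical, and the actual rankings were computed on the NYC network described in Section 4.1.

    import networkx as nx

    # Hypothetical subway graph: nodes are stations, edge weights are travel times (minutes).
    subway = nx.Graph()
    subway.add_weighted_edges_from([
        ("A", "B", 3), ("B", "C", 4), ("C", "D", 2),
        ("B", "E", 5), ("E", "D", 6), ("A", "E", 7),
    ])

    # Betweenness centrality scores stations by how many shortest travel paths cross them.
    centrality = nx.betweenness_centrality(subway, weight="weight")
    ranked = sorted(centrality, key=centrality.get, reverse=True)

    # Split-up: rumor the most-traveled (most central) stations themselves as closed.
    split_up_targets = ranked[:3]

    # Link-up: rumor stations whose closure pushes reroutes through the most central
    # station; picking the hub's neighbors is a simple illustrative stand-in for that rule.
    hub = ranked[0]
    link_up_targets = list(subway.neighbors(hub))[:3]

    print(split_up_targets, link_up_targets)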

Social networks are instant information-sharing tools for public outreach. They help transportation providers reach many passengers instantly to inform them about service interruptions. A disinformation campaign about service interruptions can quickly spread, manipulating passenger travel behavior, resulting in rerouting or even passengers leaving the subway, which in turn causes delays and decreases subway utilization. Consider NYC, the most populous US city with ∼8 million residents [83], known as a “tweeting town,” where residents regularly share local information on Twitter [84]. In fact, the NYC Police Department has had to combat disinformation about subway shutdowns in the past (e.g. false rumors spread on social media about the closure of the subway network due to COVID-19 precautions [85]).

The NYC subway network, one of the largest subway systems in the world, comprises over 450 stations and more than 500 links (including subway lines and pedestrian connection links) with around 5.5 million daily riders in pre-pandemic conditions [86]. With such a large daily ridership and an active population on social networks, a weaponized disinformation campaign could lead to extensive adverse consequences, further highlighting the need for protection against such campaigns. To this end, we collected relevant NYC ridership data and applied our model to illustrate how disinformation targeting the subway infrastructure can be interdicted to protect its functionality.

By solving the mathematical model, we can target the optimal set of users to protect from adopting disinformation. The likelihood of people believing disinformation can be quantitatively represented as a rumor belief score. We simulate these scores for a large sample of 11 600 participants, using the average and covariance matrix of rumor belief and personality trait scores obtained from Lai et al. [73]. The rumor belief scores are used to estimate which users are relatively less or more likely to adopt disinformation. These scores are assumed to be normally distributed, according to the Central Limit Theorem, because they satisfy the following four conditions: (i) the sample of participants in the survey was random, (ii) the participant responses in the survey were independent of each other (i.e. an answer of one participant in the survey does not affect the answer of another), (iii) the sample size of participants was sufficiently large, and (iv) the sample size of participants was no larger than 10% of the total population (i.e. 11 500 out of 270 million users) and is drawn without replacement. We set the population with the highest rumor belief scores (i.e. three standard deviations above the average score) as disinformation spreaders, and the population with the lowest rumor belief scores (i.e. three standard deviations below the average score) as accurate information spreaders in the social network. For the rest of the population (i.e. those who are unaware of either disinformation or accurate information), the weights |$ \pi_{i} $| in the objective function (Equation 2) are set according to the estimated likelihood of disinformation adoption (i.e. rumor belief scores).
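A minimal sketch of this sampling step is shown below; the mean vector and covariance matrix are placeholders, not the statistics reported by Lai et al. [73].

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder mean and covariance for [rumor belief, neuroticism, extraversion].
    mean = np.array([3.0, 3.2, 3.1])
    cov = np.array([
        [0.80, 0.25, 0.20],
        [0.25, 1.00, 0.10],
        [0.20, 0.10, 1.00],
    ])

    scores = rng.multivariate_normal(mean, cov, size=11_600)
    belief = scores[:, 0]

    mu, sigma = belief.mean(), belief.std()
    disinfo_spreaders = belief > mu + 3 * sigma    # highest rumor belief scores
    accurate_spreaders = belief < mu - 3 * sigma   # lowest rumor belief scores
    unaware = ~(disinfo_spreaders | accurate_spreaders)

    # Remaining users keep their belief score as the weight pi_i in the objective.
    pi = belief[unaware]
    print(disinfo_spreaders.sum(), accurate_spreaders.sum(), pi.size)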

As mentioned, personality traits may determine the likelihood that social network users will like, comment on, and/or share posts on social networks [68–73]. For example, a recent survey of participants with different personality traits showed that almost 40% of online social network users never share online content, and 37% of users never comment on any posts [69, 87]. However, it is unclear whether there is any overlap between users who never share and those who never comment. If all users who never comment also never share, then 37% of users do not engage in transferring information (neither sharing nor commenting), suggesting that 63% of users may transfer information. On the other hand, if there is no overlap between the two groups, then 100% of users have the potential to either share or comment (i.e. transfer information). Based on the two estimates and the extremes of their overlap, we assume that a user’s likelihood of commenting on or sharing content (with any frequency) ranges from 63% to 100%, under complete overlap and no overlap, respectively, between the population who never comment and the population who never share content. The midpoint of these extremes suggests that ∼80% of the population has the potential to transfer information, whether accurate or false. We set this parameter in the model to estimate the likelihood of user interaction willingness in advance and modify the social network accordingly.
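The bounds follow from simple overlap arithmetic, as the short sketch below illustrates; the 40% and 37% figures are the survey estimates cited above.

    # Survey estimates: fraction who never share and fraction who never comment.
    never_share, never_comment = 0.40, 0.37

    # Users who transfer information are those who share or comment at least sometimes,
    # i.e. everyone outside the intersection of the two "never" groups.
    min_transfer = 1 - min(never_share, never_comment)            # complete overlap -> 0.63
    max_transfer = 1 - max(0.0, never_share + never_comment - 1)  # no overlap -> 1.00

    midpoint = (min_transfer + max_transfer) / 2                  # ~0.815, rounded to ~80%
    print(min_transfer, max_transfer, midpoint)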

The distance threshold, |$ k $|, represents the maximum interaction distance within which information shared by one user remains viewable by another user. We solve the model using a single snapshot of the social network and only consider tweets and retweets; therefore, we set the distance threshold to 2, although disinformation can spread beyond it. In addition, isolated users were removed prior to the analyses, given that they do not affect other users and are not affected by others.

Assuming the circulation of a disinformation message naming several stations that have closed unexpectedly for maintenance, the list of stations could comprise either the stations through which passengers pass the most to complete their travel (i.e. the split-up scenario) or the stations whose closure would force passengers to reroute through the most traveled stations (i.e. the link-up scenario). In each scenario, the message could state the closure of stations during the morning or afternoon peak travel. Therefore, we consider four scenarios to illustrate how disinformation interdiction can mitigate travel delays, unexpected rerouting, and incomplete trips.

4.1. Data and parameters

We obtained travel times and origin-destination passenger flow data from Blume et al. [88], who estimated the proportions of passengers traveling between different stations using a Bayesian inference method. The travel duration delay (%) caused by avoiding a station is defined as follows:

|$ \mathrm{Delay}\ (\%)=\frac{T-T_{0}}{T_{0}}\times 100 $|  (11)

where |$ T $| represents the travel duration with a reroute, and |$ T_{0} $| represents the fastest travel duration. Equation (11) quantifies the percentage increase in travel time when a passenger avoids passing through a station due to its rumored closure. For example, if the delay is 10%, the trip will take 10% longer than the fastest possible journey.

Given these proportions (probabilities) and entry–exit count data, the expected value of passenger flows between each pair of stations was calculated. We then sampled passenger flows based on the calculated probabilities and tracked passenger behavior (rerouting) when exposed to disinformation during rush hours, 6:30–9:30 AM and 3:30–8:00 PM (Eastern Standard Time), Monday–Friday [89].

We consider the Twitter graph of the social network [90, 91], which includes over 80 000 nodes and around 1.8 million interaction links collected from publicly available sources. We randomly selected an initial seed of Twitter users who are assumed to be online, then divided them into subway passengers (|$ A_{m} $|) and non-subway passengers (|$ A_{n} $|). In this case study, we only take into account tweets and retweets, which appear on users’ main Twitter feeds, so we set the threshold to |$ k=2 $|.

Although our method is motivated by the interactions among social network users, any user can become a source of dis/information sharing, because information can also reach users through other means of social interaction, such as television, radio, face-to-face conversation, and other social networks.

4.2. Numerical results

Using the aforementioned data and the proposed model, we solved the interdiction optimization problem to illustrate the impact of disinformation interdiction on subway network utilization. Solutions to the proposed model were obtained using the branch-and-cut algorithm, a method commonly used in commercial solvers (e.g. Gurobi) [92, 93]. Solving this model helped minimize the impact of disinformation and the resulting unexpected travel changes through the NYC subway (see Fig. 2).
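A condensed sketch of how the linearized model can be assembled in Gurobi's Python interface is shown below; the sets, |$ d_{ijk} $| values, and |$ \pi_{i} $| weights are small placeholders standing in for the case-study data.

    import gurobipy as gp
    from gurobipy import GRB

    # Placeholder sets and data standing in for the case-study inputs.
    A  = ["a1", "a2", "a3"]       # unaware users (potential targets)
    Am = ["a1", "a2"]             # unaware users who ride the subway
    B  = ["b1"]                   # disinformation spreaders
    C  = ["c1"]                   # accurate-information spreaders
    K  = [1, 2]                   # interaction distance thresholds
    n  = 1                        # budget of users targeted with accurate information
    pi = {"a1": 0.7, "a2": 0.5}   # rumor belief weights for users in Am
    d  = {(i, j, k): 1 for i in Am for j in A + B + C for k in K}  # toy reachability

    m = gp.Model("disinfo_interdiction")
    x = m.addVars(A, vtype=GRB.BINARY, name="x")
    y = m.addVars(Am, K, lb=0.0, ub=1.0, name="y")
    z = m.addVars(Am, A, K, lb=0.0, ub=1.0, name="z")

    # Objective: weighted likelihood that subway users in Am adopt disinformation.
    m.setObjective(gp.quicksum(pi[i] * y[i, k] for i in Am for k in K), GRB.MINIMIZE)

    # Linearized adoption constraint for each user in Am and each threshold k.
    m.addConstrs(
        (gp.quicksum(d[i, j, k] * z[i, j, k] for j in A)
         + (y[i, k] - 1) * sum(d[i, j, k] for j in B)
         + y[i, k] * sum(d[i, j, k] for j in C) >= 0
         for i in Am for k in K), name="adoption")

    # Budget on the number of targeted users.
    m.addConstr(gp.quicksum(x[j] for j in A) <= n, name="budget")

    # McCormick envelope linking z = x * y.
    m.addConstrs((z[i, j, k] <= y[i, k] for i in Am for j in A for k in K), name="z_le_y")
    m.addConstrs((z[i, j, k] <= x[j] for i in Am for j in A for k in K), name="z_le_x")
    m.addConstrs((z[i, j, k] >= y[i, k] + x[j] - 1 for i in Am for j in A for k in K), name="z_lb")

    m.optimize()
    print("users to receive accurate information:",
          [j for j in A if x[j].X > 0.5])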

Figure 2. New York City subway network.

A sample of a social interaction network, borrowed from the Twitter social network, is illustrated in Fig. 3 before and after the interdiction (i.e. solving the proposed optimization problem) under the morning split-up scenario. Each user is represented by a color based on their status. The red nodes represent users who consumed and acted upon disinformation (user type “B”). The blue nodes are users who detected disinformation and may share accurate information to counter disinformation (user type “C”). The gray nodes represent non-subway users whose engagement with disinformation does not impact the physical infrastructure. The yellow nodes represent subway users in the social network. Moreover, the mixed gray-red and gray-blue nodes represent the non-subway passengers who share disinformation and accurate information, respectively. And finally, the mixed yellow-red and yellow-blue nodes represent the subway users who share disinformation and accurate information, respectively.

Figure 3. Social interaction graph before (left) and after (right) interdiction under the morning split-up scenario. The green dashed circles indicate the optimal group of users to receive accurate information.

The optimization problem finds the optimal set of combined gray and yellow nodes to share accurate information, thereby minimizing the impact of disinformation on the subway infrastructure. The optimal set of users to target with accurate information is represented in green or mixed with green. The green nodes have the greatest impact on the yellow nodes, making them aware of disinformation and preventing engagement with it. The results show that users with a relatively higher number of connections to subway users are attractive targets to share accurate information with. Additionally, users with more connections to disinformation spreaders are stronger candidates to be connected with accurate information spreaders, helping mitigate the impact of disinformation spreaders on their neighbors.

Figure 4 represents the impact of rumored subway station closures on passenger rerouting decisions in the NYC subway network. The red dots in this figure represent the subway stations that are rumored to be closed under the split-up scenario, and the blue lines connect the origin and destination of a trip via the shortest path (calculated by Dijkstra’s algorithm). Each user is assigned a route based on the travel probability data.

Figure 4. (a) A route not impacted by disinformation. (b) A route impacted by disinformation (reroute), both under the split-up scenario.

Passengers whose routes do not pass through the “closed” stations are not impacted by disinformation, as shown in Fig. 4a, even if they engaged with it. On the other hand, some passengers who engaged with disinformation might be affected by the rumored closures and thus reroute, as depicted in Fig. 4b. Those who choose to reroute cause passenger traffic to be redistributed to alternate stations. Passengers are assumed to take the shortest alternative route around closed stations. To illustrate the impact of disinformation interdiction, we close a certain number of stations (by removing nodes and their incoming/outgoing links from the subway network), ranging from 1 to 20 stations, under the split-up and link-up scenarios in the morning and afternoon. Then, we solve the optimization model and calculate the projected impact of disinformation interdiction from the information layer onto the physical layer by determining the delay improvements, defined as the percentage reduction in average travel time to destination after engaging with disinformation. Figure 5 represents the percentage of delay improvement during disinformation spread in 80 scenarios (i.e. split-up and link-up scenarios in the morning and afternoon, with the number of closed stations ranging from 1 to 20).
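The rerouting step can be sketched as follows: stations rumored to be closed are removed from a (hypothetical) subway graph, each trip is re-solved with Dijkstra's algorithm, and trips whose origin and destination end up in different islands are flagged as incomplete. Station names and travel times are illustrative.

    import networkx as nx

    # Hypothetical subway graph with travel times (minutes) as edge weights.
    subway = nx.Graph()
    subway.add_weighted_edges_from([
        ("A", "B", 3), ("B", "C", 4), ("C", "D", 2),
        ("B", "E", 5), ("E", "D", 6),
    ])

    def reroute(graph, origin, destination, rumored_closed):
        """Travel time and path after avoiding stations rumored to be closed,
        or (None, None) if no viable route remains."""
        g = graph.copy()
        g.remove_nodes_from(s for s in rumored_closed if s not in (origin, destination))
        try:
            time = nx.dijkstra_path_length(g, origin, destination, weight="weight")
            path = nx.dijkstra_path(g, origin, destination, weight="weight")
            return time, path
        except nx.NetworkXNoPath:
            return None, None  # trip cannot be completed while avoiding closed stations

    t0 = nx.dijkstra_path_length(subway, "A", "D", weight="weight")  # fastest trip
    t1, _ = reroute(subway, "A", "D", rumored_closed={"C"})
    if t1 is not None:
        delay_pct = 100 * (t1 - t0) / t0   # Equation (11)
        print(f"delay caused by avoiding rumored-closed stations: {delay_pct:.1f}%")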

Figure 5. Percentage of delay improvement (reduction) under scenarios: (a) split-up in the morning, (b) split-up in the afternoon, (c) link-up in the morning, and (d) link-up in the afternoon. The horizontal axis represents the total number of stations assumed to be closed due to disinformation, and the vertical axis represents the percentage of delay improvement found by solving the proposed optimization problem.

As shown in Fig. 5, if, for example, five stations are rumored to be closed under the split-up scenario, interdicting the disinformation that the stations marked by red circles in Fig. 6a are closed improves the travel time delay by 5% on average. However, the delay improvement is substantially larger when more stations are rumored to be closed: if we interdict disinformation regarding the stations marked by red circles in Fig. 6b, we improve the delay by 10% on average. Moreover, interdicting disinformation considerably improves the delay of some exemplary trips, shown with the outlying points in Fig. 5c, during the morning rush hour under the link-up scenario starting with the fourth station. This station, followed by the next two stations (the fifth and sixth), is marked by red circles in Fig. 7 as a sample of stations rumored to be closed under the aforementioned scenario. These stations are Beverly Road, Grand Army Plaza, and Atlantic Av–Barclays Center, respectively, and interdicting disinformation on the closure of these stations reduces delays considerably for some passengers. Since the link-up scenario encourages passengers to reroute through the most trafficked stations, interdicting disinformation about the closure of these stations significantly reduces travel delays (over 200% for some passengers) and prevents passengers from passing through crowded stations.

Figure 6. (a) The first five stations rumored to be closed under the split-up scenario. (b) The first 12 stations rumored to be closed under the split-up scenario.

Figure 7. Three stations most impacted by interdicting disinformation under the link-up scenario in the morning.

The results in Fig. 5 further show that, under the morning split-up scenario, the delay improvement remains almost equal from the closure of the second most-traveled station up to the seventh. Also, under the afternoon split-up scenario, the delay is improved the most when the 11th and 12th most-traveled stations are rumored to be closed; the delay improvement remains constant in the other instances. The results of the link-up scenario suggest that interdictions reduce delays as more stations are rumored to be closed by disinformation, specifically in the morning. In addition, disinformation makes some passengers unable to find a viable route to complete their trip. As the number of rumored closed stations increases to 20 and the subway network is split into disconnected islands, users traveling between islands may not be able to complete their travel because they avoided stations rumored to be closed. We observed a few instances of this issue (4.5%, 5%, 1.6%, and 12% under the morning split-up, afternoon split-up, morning link-up, and afternoon link-up scenarios, respectively). The interdiction model helps users complete their travel with minimum travel time and reduces the likelihood that they will wrongly assume some stations are closed and thus be prevented from completing their travel via the subway system.

5. Conclusion

Accounting for the interdependent relationship between information and physical layers, we proposed and tested an interdiction model aimed at preventing weaponized disinformation attacks. To this end, we linearized a nonlinear competitive information cascade model [22] so that it can be solved efficiently using commercial solvers. To illustrate the outcomes of the proposed model, we solved a large-scale interdiction problem applied to a social network and investigated the effect of the interdiction using the public subway system in NYC. Our results suggest that under the split-up scenario (where passengers are moved away from particular stations because of disinformation), the model significantly reduces delays when a few stations are rumored to be closed, compared to the link-up scenario (where disinformation is used to steer passengers toward particular stations in the subway network). As one might expect, the larger the number of rumored station closures, the more important it is to limit the spread of disinformation and interdict sources propagating it. Another finding indicates that, on average, the model improves morning travel delays more than afternoon delays; however, this conclusion may vary depending on the underlying network topology. Overall, the model provides valuable insight into how effective communication can be leveraged during disinformation attacks to protect infrastructures from being disrupted.

There are a few limitations of this research that merit discussion. One limitation of our model is computational cost: we applied our model to a relatively small sample of a social network, and more complex modeling and optimization computations will require higher computational costs. Employing heuristic methods could aid in approximating the optimal solution at a reasonable computational cost, and such methods should be explored in the future. Another limitation is that the subway data used in this paper are estimated. Future work can enrich subway topology data, offering more accurate insights into the structure of the subway network. In addition, the proposed model should be applied to different types of social networks, such as those with bidirectional links (e.g. Facebook). Although the proposed model focuses on disinformation in social networks, the impacts of disinformation can also be mitigated in physical layers. Future research should explore the development of a disinformation mitigation model in the physical layer and integrate it with the information layer to identify potential collaborative strategies between information and physical network providers attempting to mitigate the impact of disinformation attacks.

Funding

This research was partially funded by the National Science Foundation (NSF) [Award 2310470], a seed grant from the Data Institute for Societal Challenges (DISC) and the Oklahoma Aerospace and Defense Innovation Institute (OADII) at the University of Oklahoma, and a fellowship from the Fulbright Finland Foundation. The contents expressed in this paper are the views of the authors and do not necessarily represent the opinions or views of the NSF, DISC, OADII, or Fulbright Finland Foundation.

References

1. CBS New York. NYC blackout: subway passengers plunged into darkness during Manhattan power outage. https://www.cbsnews.com/newyork/news/new-york-city-blackout-subway-loses-power/.
2. Raman G, Peng JCH, Rahwan T. Manipulating residents' behavior to attack the urban power distribution system. IEEE Trans Industr Inform 2019;15:5575–87.
3. Jain M, Chandan V, Minou M et al. Methodologies for effective demand response messaging. In: 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, USA, 2015, 453–8.
4. Calo R, Coward C, Spiro ES et al. How do you solve a problem like misinformation? Sci Adv 2021;7:eabn0481.
5. Hunt K, Agarwal P, Zhuang J. Monitoring misinformation on Twitter during crisis events: a machine learning approach. Risk Anal 2022;42:1728–48.
6. Larson HJ, Lin L, Goble R. Vaccines and the social amplification of risk. Risk Anal 2022;42:1409–22.
7. Stanford Report. Disinformation about the COVID-19 vaccine is a problem. Stanford researchers are trying to solve it. https://news.stanford.edu/2022/02/24/curbing-spread-covid-19-vaccine-related-mis-disinformation/.
8. Wirz CD, Xenos MA, Brossard D et al. Rethinking social amplification of risk: social media and Zika in three languages. Risk Anal 2018;38:2599–624.
9. Cohen AS, Lutzke L, Otten CD et al. I think, therefore I act: the influence of critical reasoning ability on trust and behavior during the COVID-19 pandemic. Risk Anal 2022;42:1073–85.
10. Wong JCS, Yang JZ, Liu Z et al. Fast and frugal: information processing related to the coronavirus pandemic. Risk Anal 2021;41:771–86.
11. Sharma K, Ferrara E, Liu Y. Characterizing online engagement with disinformation and conspiracies in the 2020 US presidential election. In: Proceedings of the International AAAI Conference on Web and Social Media, Atlanta, GA, USA, Vol. 16, 2022, 908–19.
12. Treen KMd, Williams HT, O'Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change 2020;11:e665.
13. Fleming W, Hayes AL, Crosman KM et al. Indiscriminate, irrelevant, and sometimes wrong: causal misconceptions about climate change. Risk Anal 2021;41:157–78.
14. Liang G, Zhao J, Luo F et al. A review of false data injection attacks against modern power systems. IEEE Trans Smart Grid 2016;8:1630–8.
15. Huang K, Zhou C, Qin Y et al. A game-theoretic approach to cross-layer security decision-making in industrial cyber-physical systems. IEEE Trans Ind Electron 2019;67:2371–9.
16. Cyber Infrastructure Security Agency. Threats to the nation's critical infrastructure. https://www.cisa.gov/insights.
17. Tang D, Fang YP, Zio E et al. Resilience of smart power grids to false pricing attacks in the social network. IEEE Access 2019;7:80491–505.
18. Raman G, AlShebli B, Waniek M et al. How weaponizing disinformation can bring down a city's power grid. PLoS One 2020;15:e0236517.
19. Jamalzadeh S, Barker K, González AD et al. Protecting infrastructure performance from disinformation attacks. Sci Rep 2022;12:1–14.
20. Waniek M, Raman G, AlShebli B et al. Traffic networks are vulnerable to disinformation attacks. Sci Rep 2021;11:1–11.
21. Jamalzadeh S, Mettenbrink L, Barker K et al. Weaponized disinformation spread and its impact on multi-commodity critical infrastructure networks. Reliab Eng Syst Saf 2024;243:109819.
22. Carnes T, Nagarajan C, Wild SM et al. Maximizing influence in a competitive social network: a follower's perspective. In: Proceedings of the Ninth International Conference on Electronic Commerce, Minneapolis, MN, USA, 2007, 351–60.
23. Rocco CM, Barker K, Moronta J et al. Multiobjective formulation for protection allocation in interdependent infrastructure networks using an attack-diffusion model. J Infrastruct Syst 2018;24:04018002.
24. Ghorbani-Renani N, González AD, Barker K et al. Protection-interdiction-restoration: tri-level optimization for enhancing interdependent network resilience. Reliab Eng Syst Saf 2020;199:106907.
25. Almoghathawi Y, Barker K, Albert LA. Resilience-driven restoration model for interdependent infrastructure networks. Reliab Eng Syst Saf 2019;185:12–23.
26. Akella R, Tang H, McMillin BM. Analysis of information flow security in cyber-physical systems. Int J Crit Infrastruct Prot 2010;3:157–73.
27. Peng R, Xiao H, Guo J, Lin C. Defending a parallel system against a strategic attacker with redundancy, protection and disinformation. Reliab Eng Syst Saf 2020;193:106651.
28. Goh J, Adepu S, Tan M et al. Anomaly detection in cyber physical systems using recurrent neural networks. In: 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE), Singapore. IEEE, 2017, 140–5.
29. Garcia Tapia A, Suarez M, Ramirez-Marquez JE et al. Evaluating and visualizing the economic impact of commercial districts due to an electric power network disruption. Risk Anal 2019;39:2032–53.
30. Schreiber D, Picus C, Fischinger D et al. The defalsif-AI project: protecting critical infrastructures against disinformation and fake news. Elektrotech Informationstechnik 2021;138:480–4.
31. Liu W, Song Z. Review of studies on the resilience of urban critical infrastructure networks. Reliab Eng Syst Saf 2020;193:106617.
32. Hosseini S, Barker K, Ramirez-Marquez JE. A review of definitions and measures of system resilience. Reliab Eng Syst Saf 2016;145:47–61.
33. Mishra S, Li X, Pan T et al. Price modification attack and protection scheme in smart grid. IEEE Trans Smart Grid 2016;8:1864–75.
34. González AD, Dueñas-Osorio L, Sánchez-Silva M et al. The interdependent network design problem for optimal infrastructure system restoration. Comput Aided Civ Infrastruct Eng 2016;31:334–50.
35. Barker K, Lambert JH, Zobel CW et al. Defining resilience analytics for interdependent cyber-physical-social networks. Sustain Resilient Infrastruct 2017;2:59–67.
36. Li B, Barker K, Sansavini G. Measuring community and multi-industry impacts of cascading failures in power systems. IEEE Syst J 2017;12:3585–96.
37. Zhang Z, Radhakrishnan S, Subramanian C et al. Causal node failures and computation of giant and small components in networks. IEEE Trans Netw Sci Eng 2021;8:3048–60.
38. Almoghathawi Y, González AD, Barker K. Exploring recovery strategies for optimal interdependent infrastructure network resilience. Netw Spat Econ 2021;21:229–60.
39. Lobban H, Almoghathawi Y, Morshedlou N et al. Community vulnerability perspective on robust protection planning in interdependent infrastructure networks. Proc Inst Mech Eng O J Risk Reliab 2021;235:798–813.
40. Ashrafuzzaman M, Chakhchoukh Y, Jillepalli AA et al. Detecting stealthy false data injection attacks in power grids using deep learning. In: 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), Limassol, Cyprus. IEEE, 2018, 219–25.
41. Baidya PM, Sun W, Perkins A. A survey on social media to enhance the cyber-physical-social resilience of smart grid. In: 8th Renewable Power Generation Conference (RPG 2019), Shanghai, China. IET, 2019, 1–6.
42. Fawzi H, Tabuada P, Diggavi S. Secure estimation and control for cyber-physical systems under adversarial attacks. IEEE Trans Automat Contr 2014;59:1454–67.
44. Dixon SJ. Number of monetizable daily active Twitter users (mDAU) worldwide from 1st quarter 2017 to 1st quarter 2022. https://www.statista.com/statistics/970920/monetizable-daily-active-twitter-users-worldwide/.
45. Dixon SJ. Number of daily active Snapchat users from 1st quarter 2014 to 1st quarter 2022. https://www.statista.com/statistics/545967/snapchat-app-dau/.
46. Dixon SJ. Number of monthly active Instagram users from January 2013 to December 2021. https://www.statista.com/statistics/253577/number-of-monthly-active-instagram-users/.
47. Che H, Pan B, Leung MF et al. Tensor factorization with sparse and graph regularization for fake news detection on social networks. IEEE Trans Comput Soc Syst 2024;11:4888–98.
48. Jing J, Li F, Song B et al. Disinformation propagation trend analysis and identification based on social situation analytics and multilevel attention network. IEEE Trans Comput Soc Syst 2022;10:507–22.
49. Lyons BA, Montgomery JM, Guess AM et al. Overconfidence in news judgments is associated with false news susceptibility. Proc Natl Acad Sci U S A 2021;118:e2019527118.
51. Bodaghi A, Schmitt KA, Watine P et al. A literature review on detecting, verifying, and mitigating online misinformation. IEEE Trans Comput Soc Syst 2024;11:5119–45.
52. Dizikes P. Study: on Twitter, false news travels faster than true stories. https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308.
53. Langin K. Fake news spreads faster than true news on Twitter—thanks to people, not bots. https://www.science.org/content/article/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots.
54. Guo Z, Cho JH, Lu CT. Mitigating influence of disinformation propagation using uncertainty-based opinion interactions. IEEE Trans Comput Soc Syst 2022;10:435–47.
55. DeVerna MR, Yan HY, Yang KC et al. Fact-checking information from large language models can decrease headline discernment. Proc Natl Acad Sci U S A 2024;121:e2322823121.
56. Kempe D, Kleinberg J, Tardos É. Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 2003, 137–46.
57. Granovetter M. Threshold models of collective behavior. Am J Sociol 1978;83:1420–43.
58. Rodrigues HS. Application of SIR epidemiological model: new trends. arXiv:1611.02565, 2016, preprint: not peer reviewed.
59. Chen W, Wang Y, Yang S. Efficient influence maximization in social networks. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Toronto, ON, Canada, 2009, 199–208.
60. Li Y, Fan J, Wang Y et al. Influence maximization on social graphs: a survey. IEEE Trans Knowl Data Eng 2018;30:1852–72.
61. Bharathi S, Kempe D, Salek M. Competitive influence maximization in social networks. In: International Workshop on Web and Internet Economics, San Diego, CA, USA. Springer, 2007, 306–11.
62. Bozorgi A, Samet S, Kwisthout J et al. Community-based influence maximization in social networks under a competitive linear threshold model. Knowl-Based Syst 2017;134:149–58.
63. Qiang Z, Pasiliao EL, Zheng QP. Model-based learning of information diffusion in social media networks. Appl Netw Sci 2019;4:1–16.
64. Tang J, Zhu H, Guo J. Information diffusion between users in open data ecosystem: modelling and simulation analysis. Math Probl Eng 2022.
65. Wang Y, Wang J, Wang H et al. Users' mobility enhances information diffusion in online social networks. Inf Sci 2021;546:329–48.
66. Tobler WR. Geographical filters and their inverses. Geogr Anal 1969;1:234–53.
67. Borgatti SP, Foster PC. The network paradigm in organizational research: a review and typology. J Manag 2003;29:991–1013.
68. Buchanan T. Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS One 2020;15:e0239666.
69. Burbach L, Halbach P, Ziefle M et al. Who shares fake news in online social networks? In: Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization, Larnaca, Cyprus, 2019, 234–42.
70. Leng J, Guo Q, Ma B et al. Bridging personality and online prosocial behavior: the roles of empathy, moral identity, and social self-efficacy. Front Psychol 2020;11:575053.
71. Indu V, Thampi SM. A systematic review on the influence of user personality in rumor and misinformation propagation through social networks. In: International Symposium on Signal Processing and Intelligent Recognition Systems, Chennai, India. Springer, 2020, 216–42.
72. Wolverton C, Stevens D. The impact of personality in recognizing disinformation. Online Inf Rev 2020;44:181–91.
73. Lai K, Xiong X, Jiang X et al. Who falls for rumor? Influence of personality traits on false rumor belief. Pers Individ Dif 2020;152:109520.
74. Lambiotte R, Kosinski M. Tracking the digital footprints of personality. Proc IEEE 2014;102:1934–9.
75. Azucar D, Marengo D, Settanni M. Predicting the Big 5 personality traits from digital footprints on social media: a meta-analysis. Pers Individ Dif 2018;124:150–9.
76. Staiano J, Lepri B, Aharony N et al. Friends don't lie: inferring personality traits from social network structure. In: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Espoo, Finland, 2012, 321–30.
77. Chen X. The influences of personality and motivation on the sharing of misinformation on social media. In: iConference 2016 Proceedings, Philadelphia, PA, USA, 2016.
78. Super JF, Li P, Ishqaidef G et al. Group rewards, group composition and information sharing: a motivated information processing perspective. Organ Behav Hum Decis Processes 2016;134:31–44.
79. Qi J, Liang X, Wang Y et al. Discrete time information diffusion in online social networks: micro and macro perspectives. Sci Rep 2018;8:1–15.
80. McCormick GP. Computability of global solutions to factorable nonconvex programs: Part I—Convex underestimating problems. Math Program 1976;10:147–75.
81. Li M, Wang H, Wang H. Resilience assessment and optimization for urban rail transit networks: a case study of Beijing subway network. IEEE Access 2019;7:71221–34.
82. Freeman LC. A set of measures of centrality based on betweenness. Sociometry 1977;40:35–41.
84. Kaufman SM. How Social Media Moves New York: Twitter Use by Transportation Providers in the New York Region. https://wagner.nyu.edu/files/faculty/publications/how_social_media_moves_new_york.pdf.
85. Tenbarge K. New York police addressed rampant social-media rumors that the NYC subway system will shut down amid the novel-coronavirus pandemic. https://www.businessinsider.com/nyc-subway-will-not-shut-down-coronavirus-nypd-confirms-2020-3.
86. Metropolitan Transportation Authority. MTA performance metrics. https://new.mta.info/coronavirus/ridership.
87. Buchanan T, Benson V. Spreading disinformation on Facebook: do trust in message source, risk propensity, or personality affect the organic reach of "fake news"? Soc Media Soc 2019;5:2056305119888654.
88. Blume SO, Corman F, Sansavini G. Bayesian origin-destination estimation in networked transit systems using nodal in- and outflow counts. Transp Res Part B Methodol 2022;161:60–94.
89. Metropolitan Transportation Authority. Subway service guide. http://web.mta.info/maps/service_guide.pdf.
90. Leskovec J, Mcauley J. Learning to discover social circles in ego networks. Adv Neural Inf Processing Syst 2012;25:539–47.
91. Leskovec J. Social circles: Twitter. https://snap.stanford.edu/data/ego-Twitter.html.
92. Caccetta L. Branch and cut methods for mixed integer linear programming problems. In: Progress in Optimization: Contributions from Australasia. Boston, MA: Springer, 2000, 21–44.
93. Gurobi Optimization. https://www.gurobi.com/.
