A Negative Survey based Privacy Preservation Method
for Topology of Social Networks
Hao Jiang^a, Yuerong Liao^a, Dongdong Zhao^b, Yiheng Li^a, Kehang Mu^a, Qianwei Yu^c
^a Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Artificial Intelligence, Anhui University, Hefei 230601, China
^b Chongqing Research Institute, School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, Hubei, China
^c Department of Gastroenterology, the Second Affiliated Hospital of Anhui Medical University, Hefei 230601, China
Abstract
With the development of social platforms, social networks have attracted wide attention. Since social networks contain a large amount of personal sensitive information, many privacy preservation methods have been designed for social networks to allay concerns about privacy disclosure. However, most existing methods disturb social networks too much to ensure their utility. To this end, we propose a negative survey based privacy preservation method, called NetNS, to preserve the topology privacy of social networks, in which a dedicated negative survey model is developed to disturb the edges in social networks. The theoretical analysis indicates that the developed NetNS is efficient and can resist two common graph structure attacks, namely the friendship attack and the subgraph attack, while empirical studies conducted on three real-world social networks show that, compared to six existing privacy preservation algorithms tailored for the topology of social networks, the developed NetNS provides disturbed social networks with better utility while achieving the same privacy preservation level.
Keywords: Social network, topology privacy, negative survey, graph structure attack
Corresponding author
Email addresses: haojiang@ahu.edu.cn (Hao Jiang), 468647989@qq.com (Yuerong Liao),
zdd@whut.edu.cn (Dongdong Zhao), 3167459376@qq.com (Yiheng Li), 752671485@qq.com
(Kehang Mu), 792744504@qq.com (Qianwei Yu)
1. Introduction
Along with the growth of online social platforms, social networks have re-
ceived more and more attention. As a kind of complex network, a social network is composed of a node set and an edge set, where the nodes usually represent users of the same social platform, such as Facebook, WeChat, or Twitter, while the edges connect different nodes and denote a social interaction between the corresponding users, such as friendship or shared interests [1, 2]. Since a social network is a collection of social relationships,
it can be used to reveal some useful information concealed in social relations,
which is helpful in understanding the dependencies between social entities [3],
characterizing the behaviors of users [4], and other areas. However, a user's social relationships are sensitive information [5]. Due to privacy concerns, the majority of users hesitate or even refuse to provide their social relationships for building social networks, which has a detrimental impact on the research and applications of social networks. To this end, social platform companies generally make a commitment to users that they will hide personal information, such as name, gender, and age, when building social networks [6], namely anonymizing the social network, so as to preserve the privacy of personal social relationships and alleviate user concerns.
However, hiding personal information is necessary but insufficient for preserving the privacy of personal social relationships: if an adversary owns some background knowledge about the social relationships of a user, the user might still be specifically recognized from the anonymous social network according to that background knowledge [7]. Here, we call this a graph structure attack. To alleviate the above issue, many researchers add/remove edges in social networks to disturb the relationships between different users, so as to improve the privacy preservation level of anonymous social networks [8]. In spite of their effectiveness in preserving personal social relationships in social
networks [9], most existing disturbance based methods require a high level of noise to achieve a satisfactory privacy preservation level, which considerably deteriorates the utility of social networks. Moreover, existing work usually evaluates the utility of disturbed networks according to the attributes that change between the disturbed and original networks, such as the number of triangles, the clustering coefficient, and the shortest path [6]. Nevertheless, the purpose of building social networks is to discover useful information hidden in social relationships, and in some cases a small change in network attributes might lead to different analysis results. Take the two networks plotted in Fig. 1 as an illustrative example, where Fig. 1 (a) gives the original social network, the disturbed network is shown in Fig. 1 (b), and the attributes of the networks, including the number of triangles, the clustering coefficient, and the shortest path of each pair of nodes, are listed to the right of the figures. From Fig. 1 (a) and (b), it can be found that the attributes of the original and disturbed networks are basically the same. However, as displayed in Fig. 1 (c) and (d), the community detection results of the original and disturbed networks are quite different. To be specific, in Fig. 1 (c), the original network is divided into two communities, where the green nodes form the first community, while the second community contains all orange nodes. Differently, the disturbed network in Fig. 1 (d) owns only one community, which contains all orange nodes. Here, the novel label propagation algorithm [10] is employed to detect communities. Therefore, it is not enough to measure the utility of disturbed networks only in terms of attribute changes.
As a branch of the artificial immune system, the negative survey is a model
of sensitive information collection inspired by the negative selection mechanism
in the biological immune system [11, 12]. For example, in a common question-
naire (called positive survey), the following question might be provided to the
respondent when collecting the stomach health information.
Q1: In the last year, how many times did you get nausea?
A. 0    B. 1–2    C. 3–6    D. 7–9    E. More than 10
The respondents are asked to answer the above question, and directly
provide their frequency of nausea (sensitive information).

Figure 1: An example where a small change in network attributes leads to different analysis results. (a) is the original social network and its attributes (number of triangles = 7, clustering coefficient = 0.650). (b) is the disturbed social network and its attributes (number of triangles = 7, clustering coefficient = 0.583). (c) is the community division of the original network. (d) is the community division of the disturbed network.

Different from the posi-
tive survey, in the negative survey, the respondents are required to answer the
following question.
Q2: In the last year, what is NOT the number of times that you get nausea?
A. 0    B. 1–2    C. 3–6    D. 7–9    E. More than 10
For a respondent who had nausea five times in the last year, namely be-
longing to C in the positive survey (called positive category), he/she should
randomly select a category from A, B, D and E (called negative categories) in
the negative survey. After every respondent has selected the negative category,
the investigator can use dedicated statistical methods (called reconstruction
methods), such as NStoPS-I [13] and NStoPS-LP [14] to reconstruct the posi-
tive survey results from the negative survey results. From the above steps, it
can be found that the negative survey only requires respondents to provide a
category to which they do not belong, so when the number of categories that
respondents can select is greater than three, even if the adversary obtains the
selection of a respondent, he/she still cannot uniquely determine the positive
category of respondent.
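As an illustration of this process, the following is a minimal Python sketch of a uniform negative survey together with an NStoPS-I style reconstruction (estimating t_i ≈ n − (c − 1)·r_i, where n is the number of respondents, c the number of categories, and r_i the count of respondents selecting category i). The category weights and the sample size are illustrative assumptions, not data from the paper.

import random
from collections import Counter

def negative_survey(positive_answers, categories):
    # each respondent reports one category they do NOT belong to, chosen uniformly
    return [random.choice([c for c in categories if c != true_cat])
            for true_cat in positive_answers]

def nstops_i(negative_answers, categories):
    # NStoPS-I style estimate: t_i = n - (c - 1) * r_i
    n, c = len(negative_answers), len(categories)
    r = Counter(negative_answers)
    return {cat: n - (c - 1) * r.get(cat, 0) for cat in categories}

categories = ["A", "B", "C", "D", "E"]
truth = random.choices(categories, weights=[10, 25, 40, 15, 10], k=2000)  # hypothetical respondents
negative = negative_survey(truth, categories)
print("true counts:", dict(sorted(Counter(truth).items())))
print("estimated  :", nstops_i(negative, categories))

Running the sketch shows that the reconstructed counts are close to the true counts in expectation, even though no respondent ever reveals their positive category.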
Since it can collect sensitive information while preserving personal privacy,
the negative survey has been used to collect various sensitive information in
different scenarios. To name a few, in [15], the negative survey was used to col-
lect health information in the questionnaire, while in [16], the negative survey
was employed to aggregate location information from intelligent transportation
system. In addition to collecting sensitive information, the negative survey has
also been applied in the field of data publication [12]. For example, in [17], the
negative survey was integrated into k-anonymity [18] and l-diversity [19] sepa-
rately, to improve the privacy preservation level of published data. Although
the negative survey has demonstrated its ability of privacy preservation, most
of existing negative survey based methods are tailored for category sensitive
information.
In this paper, we extend the negative survey model and develop a negative survey based privacy preservation method called NetNS for the topology structure of social networks, where the social network is first divided into different subnetworks, and then each subnetwork is replaced by a subnetwork with a different structure (called a negative subnetwork). Moreover, the work of this paper is an improved version of the work in [20]. The contributions of this paper are summarized as follows.
1. The negative survey model is extended to a complex type of sensitive
information, namely complex networks, based on which a negative survey
based method, called NetNS, is designed to disturb the topology of social
networks, so that the personal social relationship can be preserved. The
theoretical analysis indicates that the NetNS is able to defend against
two well-known graph structure attacks, including friendship attack and
subgraph attack.
2. Based on the results of network community detection, a new metric is designed to measure the utility of disturbed networks. Different from existing utility metrics, the proposed metric evaluates the utility of disturbed networks from the perspective of analysis results rather than network attributes. Therefore, it can measure the utility of disturbed networks more comprehensively.
3. To assess the performance of the NetNS, empirical evaluations have been conducted on three real-world social networks in comparison with six state-of-the-art algorithms tailored for the topology of social networks. The experimental results indicate that, compared to the six existing algorithms, the social networks disturbed by NetNS own better utility while achieving the same privacy preservation level.
The rest of this paper is organized as follows. In Section 2, the related studies
of negative survey and existing methods for preserving network topology privacy
are analyzed. Section 3 gives the details of proposed NetNS and the theoretical
analysis of NetNS. The experimental studies are reported in Section 4. Finally,
the conclusion and the future work of this paper are given in Section 5.
2. Related work
2.1. Negative Survey
Because the process of collecting information through the negative survey is
simple and efficient, the negative survey has been employed to collect various
sensitive information under different conditions. For example, Luo et al. [21]
adopted the negative survey to collect the rating of goods in the context of
online shopping and proposed the negative evaluation model, which can better
protect the privacy of customers’ ratings compared with the traditional evalu-
ation model. In [22], Horey et al. combined the quadtree and negative survey,
and brought up a location information collection method called NQT. How-
ever, as demonstrated in [23], when the location information providers move,
the NQT might disclose their location information. To this end, Jiang et al.
[16] designed a new strategy for selecting negative locations which can preserve
the location privacy of providers even if they move. Moreover, Liu et al. [24]
employed the negative survey to preserve the privacy of the category in cloud
computing. In addition to category information, Jiang et al. [25] developed a
real-value negative survey model for power consumption data collection while
protecting user’s power privacy, which can allow for partial user failures and
defend against differential attacks.
Besides sensitive information collection, the negative survey has also been applied to data publication to improve the privacy of published data. In [17],
Du et al. combined negative survey and k-anonymity and l-diversity separately,
and designed two methods for publishing structural data. To increase the utility
of published data, Wu et al. [26] designed a new data publication method
for structural data, which utilized the distribution of sensitive information to
generalize data.
From the above introduction, it can be found that the negative survey model
preserves the privacy of sensitive information by converting it to a negative
category, which is simple, efficient, and can be easily parallelized [11, 12, 15].
All of these advantages enable the negative survey model to efficiently tackle the
topology privacy of social networks, especially when the scale of social networks
is large. Although the negative survey has been used to preserve privacy of
many different kinds of sensitive information, existing methods mainly focus
on category sensitive information, and there is no work that uses the negative
survey to preserve the topology privacy of social networks.
2.2. Typical Privacy Preserving Methods for Social Networks
Many methods have been proposed for preserving the topology privacy of social networks, and they can be mainly divided into the following three groups.
The first group anonymizes the social network to satisfy some data publi-
cation models. For example, Zhang et al. [27] designed a privacy-preserving
scheme based on network partition, where the networks were modified accord-
ing to degree-based graph entropy to achieve k-anonymity. In [28], Langari et
al. modified a k-member version of the fuzzy c-means algorithm to create balanced clusters of nodes, where each cluster contains at least k members, and then executed the firefly algorithm to further optimize the main clusters and anonymize the network. A new method for generating k-degree anonymous graphs based
on noisy node addition and classification algorithm neural support vector ma-
chines was provided in [29], which used a mixture of neural networks and SVMs,
called NeuroSVM, where the average path length of the graph was preserved and
the addition of noisy nodes and noisy edges was greatly reduced. Based on the
fuzzy sets and rewiring algorithm, a privacy-preserving randomization algorithm
named PPRA was proposed in [30], which was centered on the degree-preserving
randomization technique focusing on rewiring the network structure so that the
degree of each node remained the same after rewiring, while the topology of
the entire network changed. Although anonymizing the social network can hide
some topology privacy by clustering, the macroscopic characteristics can still be
retained, which leads to a risk of being compromised by malicious attackers [6].
The second group preserves the topology privacy of social networks by re-
stricting the querying on the social networks. To name a few, in order to obtain
the intersection of different networks while preserving the privacy of each net-
work, Zuo et al. [31] employed cryptographic accumulators to provide secure
verifiable graph intersection operation. Wang et al. [32] employed the non-zero
prior knowledge to measure the probability of privacy disclosed by query re-
sults, and then re-formulated the -difference model to preserve the privacy of
social networks. To achieve the trade-off between the privacy and the utility,
Shen et al. [33] proposed a privacy-preserving algorithm combining friendship
links of central nodes in dynamic social networks, namely PPCN, which divided
the friendship links of central nodes into three levels and designed a substitu-
tion factor to measure the probability of two strangers becoming friends so as
to prevent implicit relationships from becoming explicit when replacing links,
which further enhanced the confidentiality of the data. In spite of promising
privacy preservation, this group of methods restricts the use of social networks, and therefore their application scenarios are limited.
The third group resorts to disturbance, where the edges in the social net-
works are disturbed to preserve the topology privacy. In [34], Ahmed et al.
projected the adjacency matrix of social networks into a low-dimensional space,
and then perturbed the matrix with random noise. Based on compressed sens-
ing (CS), a privacy protection method was presented in [35] to protect privacy
in the labeled dynamic social networks, which firstly used CS to compress the
node information updated at each point in time, and then combined the tagging
information with node degrees during the reconstruction process to anonymize
the node degrees in the tag group by randomly deleting/changing the node
links to protect the relationship privacy. Huang et al. [6] hybridized the dif-
ferential privacy model, clustering, and randomization algorithms to preserve
the topology privacy of social networks, where the networks were disturbed
after node clustering. Jian et al. [36] proposed two methods for publishing graphs under node-differential privacy, namely a node-level ingestion algorithm that modified the input graph by randomly inserting and removing nodes, and an edge-level ingestion algorithm that randomly removed edges and inserted nodes, both of which provided reasonable privacy guarantees while preserving multiple statistical properties of the input graph. Although the perturbation based methods can defend against probability-based anonymity attacks [37], they require abundant noise to achieve a desirable privacy preservation level, which results in lower utility of social networks. Therefore, a new perturbation
method is required to preserve the topology privacy of social networks, while
keeping commendable data utility.
3. Proposed NetNS
In this section, the details of the proposed NetNS are given, and its privacy and time complexity are analyzed.
3.1. Steps of Proposed NetNS
Before introducing the steps of the proposed NetNS, we first define some concepts.
Definition 1 (Negative Subnetwork). For a subnetwork in the social network, its negative subnetworks are the subnetworks with the same nodes but different edges.
Figure 2: An illustrative example of negative subnetworks and the distance between subnetworks.
Definition 2 (Distance between Subnetworks). For two subnetworks with the same nodes, their distance is the minimum number of edge flips required to change one subnetwork into the other.
Take the subnetwork with four nodes plotted in Fig. 2 as an illustrative example. For the subnetwork displayed in Fig. 2 (a) (here, we call it the positive subnetwork), the subnetworks shown in Figs. 2 (b)-(l) are its negative subnetworks, and their distances from it are 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, and 6 respectively. Here, it should be noticed that we only show the negative subnetworks with different shapes in Fig. 2. Take the negative subnetwork plotted in Fig. 2 (g) as an illustration: the negative subnetworks that own the same shape as it are shown in Fig. 3. It can be found that the subnetworks in Figs. 3 (g1)-(g2) can be given the same shape as the subnetwork shown in Fig. 3 (g) by changing the positions of the nodes. However, since in social networks different nodes represent different users, and different edges indicate the relationships between different users, two subnetworks are considered distinct even if they have the same shape.

Figure 3: An example of negative subnetworks with the same shape but different structures.
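As a small illustration of Definitions 1 and 2, the following Python sketch (assuming subnetworks are represented as undirected networkx graphs over the same node set) computes the distance as the number of node pairs whose edge status differs, which equals the minimum number of edge flips; the example graphs are our own, not taken from Fig. 2.

from itertools import combinations
import networkx as nx

def subnetwork_distance(g1, g2):
    # count node pairs whose edge/non-edge status differs between the two subnetworks
    assert set(g1.nodes) == set(g2.nodes), "subnetworks must share the same node set"
    return sum(1 for u, v in combinations(sorted(g1.nodes), 2)
               if g1.has_edge(u, v) != g2.has_edge(u, v))

positive = nx.Graph([(1, 2), (2, 3), (3, 4)])   # a path over four nodes
negative = nx.Graph([(1, 2), (1, 3), (3, 4)])   # edge 2-3 removed, edge 1-3 added
negative.add_nodes_from(positive.nodes)
print(subnetwork_distance(positive, negative))   # prints 2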
By converting each subnetwork in a social network to one of its negative subnetworks, the connections between different nodes will be randomly disturbed, and the topology privacy of the original social network can be preserved. Here, we refer to the original social network as the positive social network, whereas the
converted social network as the negative social network. However, if we directly use the negative survey model proposed in [11], it is difficult to reconstruct the original social network from the negative social network. To this end, in this paper, we resort to the Gaussian negative survey, whose negative survey results can be directly used to estimate the positive survey results without a reconstruction process [38]. Different from the negative survey model proposed in [11], in the Gaussian negative survey, the probability of selecting different negative categories obeys a Gaussian distribution. Here, we take the questions Q1 and Q2 as an example and consider a respondent whose positive category is C. In the original negative survey model, he/she selects the negative categories A, B, D, and E with the same probability; in other words, the probability of each negative category being selected is 1/4 = 0.25. In the Gaussian negative survey model, the probability of the respondent selecting different negative categories follows a Gaussian distribution, and the farther a negative category is from C, the lower the probability of it being selected. To be specific, for a respondent with positive category C, the probability of selecting negative category B is the same as that of selecting D, and greater than that of selecting A or E. Based on the concept of the negative subnetwork and the Gaussian negative survey model, a privacy-preserving method, called NetNS, is developed for the topology of social networks. The main steps of the proposed NetNS are described in Algorithm 1.
As shown in Algorithm 1, the steps of NetNS can be mainly divided into three parts. The first is initialization (Lines 1-3), where all nodes in G are set as unvisited (Line 1) and the negative social network G′ is initialized as empty (Line 2). Moreover, the probability of selecting different negative subnetworks P is calculated according to the size of each subnetwork M and the variance of the Gaussian
Algorithm 1: The steps of NetNS
Input: G: the positive social network
       M: the number of nodes in a subnetwork
       σ: the variance of the Gaussian distribution
Output: G′: the negative social network
1  Set all nodes in G as unvisited;
2  G′ ← ∅;
3  P ← Generate_P(M, σ);                              // Algorithm 2
4  while the number of unvisited nodes in G is no less than M do
5      Subnetwork ← Randomly select M unvisited nodes from G;
6      NegSubnetwork ← Generate_Neg(Subnetwork, P);   // Algorithm 3
7      G′ ← G′ ∪ NegSubnetwork;
8      Set all nodes in Subnetwork as visited;
9  end
10 Add the rest of the unvisited nodes and corresponding edges to G′;
11 Return G′;
distribution σ (Line 3). The second part converts the subnetworks in G to negative subnetworks (Lines 4-9). Specifically, the NetNS first randomly selects a positive subnetwork Subnetwork with M nodes from the unvisited nodes of the social network G (Line 5), and then generates a negative subnetwork NegSubnetwork according to the probability P (Line 6). After that, the NetNS adds the NegSubnetwork to the negative social network G′ and sets all nodes in Subnetwork as visited. The conversion is repeated until fewer than M unvisited nodes remain. The last part adds the rest of the unvisited nodes and corresponding edges to G′ and returns it as the negative social network. From Algorithm 1, it can be found that the generation of the probability P and of the negative subnetwork NegSubnetwork are the two important components of NetNS, and their details are described below.
According to the definition of the Gaussian negative survey model, the probability that a negative subnetwork is selected depends on its distance from its
positive subnetwork: the farther away the negative subnetwork is from the positive subnetwork, the less likely it is to be selected. For a positive subnetwork with M nodes, the farthest distance between it and its negative subnetworks is M(M-1)/2. Therefore, the probability vector P = (p_1, p_2, ..., p_{M(M-1)/2}) should satisfy the following conditions:
\[ p_1 > p_2 > \dots > p_{M(M-1)/2}, \tag{1} \]
\[ \sum_{i=1}^{M(M-1)/2} p_i = 1, \tag{2} \]
where p_i is the probability of selecting a negative subnetwork with distance i from the positive subnetwork. Algorithm 2 gives the steps of generating the probabilities of selecting negative subnetworks. In Algorithm 2, the M(M-1)/2 values are first sampled from a Gaussian density with mean 1 and variance σ (Lines 1-3). Here, the reason for setting the mean of the Gaussian distribution to 1 is that the probability of selecting a negative subnetwork at distance 1, namely p_1, should be the largest. Then, the M(M-1)/2 samples are normalized (Lines 4-6) and returned as the probability vector P (Line 7). Since the Gaussian probability density decreases as the sample value moves above the mean, and all samples are normalized, the probabilities output by Algorithm 2 satisfy the conditions shown in (1) and (2).
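A minimal Python sketch of this probability-generation step is given below; it follows the Gaussian density and normalization described above, and the function name generate_p is our own. For M = 4 and σ = 1 it reproduces, up to rounding, the probabilities quoted in the Fig. 4 example later in this section.

import math

def generate_p(M, sigma):
    d_max = M * (M - 1) // 2                       # farthest possible distance
    # Gaussian density with mean 1 evaluated at distances 1, 2, ..., d_max
    raw = [math.exp(-((i - 1) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
           for i in range(1, d_max + 1)]
    total = sum(raw)
    return [p / total for p in raw]                # decreasing and summing to 1

print([round(p, 5) for p in generate_p(4, 1)])
# -> approximately [0.57035, 0.34593, 0.07719, 0.00634, 0.00019, 0.0]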
After obtaining the probability P, the NetNS can directly select a negative subnetwork according to its distance from the positive subnetwork. However, for a positive subnetwork with M nodes, the number of its negative subnetworks is 2^{M(M-1)/2} - 1. Therefore, if the NetNS selected a negative subnetwork only after generating all negative subnetworks, it would greatly increase the memory used by the algorithm. To this end, in this paper, the NetNS generates a negative subnetwork for a given positive subnetwork directly according to the probability P. The steps of generating negative subnetworks are described in Algorithm 3.
Algorithm 2: Generate_P(M, σ)
Input: M: the number of nodes in a subnetwork
       σ: the variance of the Gaussian distribution
Output: P = (p_1, p_2, ..., p_{M(M-1)/2}): the probabilities of selecting negative subnetworks
1 for each i = 1 : M(M-1)/2 do
2     p_i = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(i-1)^2}{2\sigma^2}\right);
3 end
4 for i = 1 : M(M-1)/2 do
5     p_i = p_i \,/\, \sum_{k=1}^{M(M-1)/2} p_k;
6 end
7 Return P;
The NetNS first determines the distance between the generated negative subnetwork and the positive subnetwork according to the probability vector P (Lines 2-8). Then, the NetNS randomly selects Dis pairs of nodes from the subnetwork (Line 9), where Dis is the sampled distance between the negative and positive subnetworks. Next, the NetNS flips the relationship between the selected Dis pairs of nodes (Lines 10-16). To be specific, if there is an edge between a selected pair of nodes, the NetNS removes the edge from the subnetwork; otherwise, the NetNS adds an edge to the subnetwork. After the relationship between all selected pairs of nodes has been flipped, the NetNS returns the subnetwork as the generated negative subnetwork (Line 17).
Fig. 4 gives an illustrative example of the steps of NetNS, where the original social network with 12 nodes is plotted in Fig. 4 (a). Here, the parameters M and σ are set to 4 and 1 respectively, and the probability of selecting different negative subnetworks is P = (0.57034, 0.34593, 0.07718, 0.00633, 0.00019, 0.00000). As shown in Fig. 4 (b), the NetNS first randomly selects four unvisited nodes, namely the nodes {1,5,9,10}, as the subnetwork and sets them as visited. Then, the NetNS generates the negative subnetwork for the selected M nodes and adds it to the network G′. Next, the NetNS selects the nodes {2,3,4,6}, sets them as
Algorithm 3: Generate_Neg(Subnetwork, P)
Input: Subnetwork: the positive subnetwork
       P = (p_1, p_2, ..., p_{M(M-1)/2}): the probabilities of the negative survey
Output: NegSubnetwork: the negative subnetwork
1  NegSubnetwork ← Subnetwork;
2  rand ← Generate a random number in (0, 1];
3  Temp ← p_1;
4  Dis ← 1;
5  while Temp < rand do
6      Dis ← Dis + 1;
7      Temp ← Temp + p_Dis;
8  end
9  Pair ← Randomly select Dis pairs of nodes from NegSubnetwork;
10 for each node pair (i, j) in Pair do
11     if there is an edge between nodes i and j then
12         Remove the edge between nodes i and j;
13     else
14         Add an edge between nodes i and j;
15     end
16 end
17 Return NegSubnetwork;
visited, generates the corresponding negative subnetwork, and adds the negative subnetwork to G′, which is plotted in Fig. 4 (c). Here, it is worth mentioning that the relationship between the nodes in the negative subnetwork and the nodes already in G′ is the same as that in the original social network. After that, the NetNS selects the four remaining unvisited nodes, namely the nodes {7,8,11,12}, generates the corresponding negative subnetwork, and adds it to G′. At this point, all the nodes in the network G are visited, and the NetNS outputs the network G′ as the disturbed social network.
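Putting Algorithms 1-3 together, the following is a runnable Python sketch of the whole NetNS procedure under our own assumptions (undirected networkx graphs, random node grouping, and the helper names generate_p, generate_neg, and netns); it is a simplified illustration of the method, not the authors' implementation. Edges between different groups, and edges touching the leftover nodes, are kept unchanged, as in Algorithm 1.

import math
import random
from itertools import combinations
import networkx as nx

def generate_p(M, sigma):
    # Algorithm 2: Gaussian weights over distances 1..M(M-1)/2, normalized
    d_max = M * (M - 1) // 2
    raw = [math.exp(-((i - 1) ** 2) / (2 * sigma ** 2)) for i in range(1, d_max + 1)]
    total = sum(raw)
    return [x / total for x in raw]

def generate_neg(sub, P):
    # Algorithm 3: sample a distance from P, then flip that many node pairs
    neg = sub.copy()
    rand, temp, dis = random.random(), P[0], 1   # random() gives [0,1); the difference is immaterial
    while temp < rand and dis < len(P):
        temp += P[dis]
        dis += 1
    for i, j in random.sample(list(combinations(sorted(neg.nodes), 2)), dis):
        if neg.has_edge(i, j):
            neg.remove_edge(i, j)
        else:
            neg.add_edge(i, j)
    return neg

def netns(G, M, sigma):
    # Algorithm 1: group nodes into subnetworks of size M and perturb each group
    P = generate_p(M, sigma)
    nodes = list(G.nodes)
    random.shuffle(nodes)
    n_grouped = len(nodes) - len(nodes) % M
    groups = [nodes[k:k + M] for k in range(0, n_grouped, M)]
    group_of = {n: gid for gid, grp in enumerate(groups) for n in grp}
    G_neg = nx.Graph()
    G_neg.add_nodes_from(G.nodes)
    # keep edges between different groups (or touching leftover nodes) unchanged
    for u, v in G.edges:
        if group_of.get(u, -1) != group_of.get(v, -2):
            G_neg.add_edge(u, v)
    # replace every subnetwork with a generated negative subnetwork
    for grp in groups:
        G_neg = nx.compose(G_neg, generate_neg(G.subgraph(grp).copy(), P))
    return G_neg

if __name__ == "__main__":
    G = nx.karate_club_graph()
    G_prime = netns(G, M=6, sigma=1)
    print(G.number_of_edges(), "->", G_prime.number_of_edges())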
Figure 4: An example of the steps of NetNS. (a) The original social network. (b) The steps of generating and adding the first negative subnetwork. (c) The steps of generating and adding the second negative subnetwork. (d) The steps of generating and adding the last negative subnetwork.

The method proposed in this manuscript is an improved version of our previous work [20]. The method proposed in this manuscript and our previous work
are different in the following three aspects. Firstly, in the method proposed in this manuscript, the number of nodes in each subnetwork can be any integer between 3 and N/2 rather than only 3, which enhances the ability of the method to balance privacy and utility. Secondly, the probability of replacing a subnetwork with different negative subnetworks is determined by only one parameter, whereas it was determined by three parameters in the previous method, which simplifies the parameter tuning of the method. Thirdly, for a given subnetwork, the method proposed in this manuscript generates the negative subnetwork with a dedicated algorithm instead of generating all negative subnetworks beforehand and then selecting one, which reduces the memory used by the method.
3.2. Time Complexity of the NetNS
From Algorithm 1, it can be found that the developed NetNS simply divides the network into different subnetworks and replaces them with their negative subnetworks, which is simple and efficient. Assume that there is a social network with N nodes, and the number of nodes in each subnetwork is M. The initialization of NetNS has a time complexity of O(M(M-1)/2), given that M(M-1)/2 probabilities should be sampled from the Gaussian distribution. The time complexity of generating a negative subnetwork for a given positive subnetwork is O(M(M-1)/2), because in the extreme case all the edges in the positive subnetwork need to be flipped. Moreover, the time complexity of adding the rest of the unvisited nodes and corresponding edges is less than O(M(M-1)/2). Therefore, the time complexity of NetNS is O(M(M-1)/2 + (N/M) × M(M-1)/2 + M(M-1)/2) = O((N/2 + M)(M-1)) = O(N × M).
3.3. Privacy Analysis Against Graph Structure Attack
The graph structure attack means that an adversary tries to identify a target node in the disturbed social network by using some knowledge of the graph structure. In this paper, two representative structure attacks are considered, which utilize, respectively, the degree of the target node and the structure of a subgraph that contains the target node to identify the target node in the released network.
3.3.1. Friendship Attack
The friendship attack assumes that the adversary knows the degrees of the target node and one of its neighbor nodes. Take Fig. 5 as an example, where a social network containing the target node Alice is shown in Fig. 5 (a), and the corresponding anonymous network is plotted in Fig. 5 (c). If an adversary knows that the degrees of Alice and Bob are three and two respectively, and that Alice and Bob are friends, then the adversary knows that Alice is friends with three users, and he/she can uniquely identify Alice in Fig. 5 (c).
Figure 5: An example of friendship attack.
The NetNS can resist the friendship attack. In the NetNS, the relationship
between the target node and the other nodes in the social network is randomly
disturbed. Assume that the target node is a, while its neighbor node whose degree is known by the adversary is b. The probability that NetNS divides them into the same subnetwork can be calculated as follows:
\[ p_{same} = \frac{M-1}{N-1}, \tag{3} \]
where N is the number of nodes in the positive social network, while M is the number of nodes in a subnetwork. Furthermore, the probability that the relationship between one node and the rest of the nodes in the same subnetwork is not changed by NetNS can be calculated as follows:
\[ p_{unchanged1} = \sum_{i=1}^{(M-1)(M-2)/2} p_i \times \frac{\binom{(M-1)(M-2)/2}{i}}{\binom{M(M-1)/2}{i}}, \tag{4} \]
where p_i is the probability of selecting a negative subnetwork with distance i from the positive subnetwork. Moreover, for two nodes in the same subnetwork, the probability that their relationship with the rest of the nodes in the subnetwork is unchanged by NetNS can be calculated as follows:
\[ p_{unchanged2} = \sum_{i=1}^{(M-2)(M-3)/2} p_i \times \frac{\binom{(M-2)(M-3)/2}{i}}{\binom{M(M-1)/2}{i}}. \tag{5} \]
Therefore, the probability that the edges of nodes a and b are not changed is:
\[ p_{friend} = p_{same} \times p_{unchanged2} + (1 - p_{same}) \times (p_{unchanged1})^2. \tag{6} \]
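For readers who want to evaluate Eqs. (3)-(6) numerically, the short Python sketch below transcribes the formulas directly, with the probabilities p_i generated as in Algorithm 2; the helper names and the parameter values in the final loop are our own choices, and the sketch only evaluates the closed-form expressions, it is not a reproduction of the experiments behind Fig. 6.

import math
from math import comb

def generate_p(M, sigma):
    d_max = M * (M - 1) // 2
    raw = [math.exp(-((i - 1) ** 2) / (2 * sigma ** 2)) for i in range(1, d_max + 1)]
    total = sum(raw)
    return [x / total for x in raw]

def p_unchanged(M, P, kept_pairs):
    # probability that none of the flipped pairs touches the protected node pairs
    total_pairs = M * (M - 1) // 2
    return sum(P[i - 1] * comb(kept_pairs, i) / comb(total_pairs, i)
               for i in range(1, kept_pairs + 1))

def p_friend(N, M, sigma):
    P = generate_p(M, sigma)
    p_same = (M - 1) / (N - 1)                              # Eq. (3)
    p_u1 = p_unchanged(M, P, (M - 1) * (M - 2) // 2)        # Eq. (4)
    p_u2 = p_unchanged(M, P, (M - 2) * (M - 3) // 2)        # Eq. (5)
    return p_same * p_u2 + (1 - p_same) * p_u1 ** 2         # Eq. (6)

for M in (4, 6, 8, 10):
    print(M, round(p_friend(N=1000, M=M, sigma=1), 4))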
Fig. 6 plots the values of p_friend under different M with N = 1000 and σ = 1. It can be seen from Fig. 6 that the probability of the edges of nodes a
and b not being changed is less than 0.05 when M > 10, which indicates that the adversary can uniquely determine the target node according to its degree and the degree of its neighbor node with only a relatively low probability. Moreover, it is possible that a node with a different degree from the target node in the positive social network has the same degree as the target node in the negative social network, which can also prevent the adversary from determining the target node by the friendship attack. Consequently, the probability of an adversary successfully executing the friendship attack against a negative social network is less than p_friend.

Figure 6: The probability of an adversary successfully executing the friendship attack against a negative social network under different M (N = 1000, σ = 1).
Here, it can be found that the probability in Fig. 6 shows a trend of first increasing and then decreasing. The reasons for this trend are as follows.
lows. In the friendship attack, the adversary successfully executes the attack
only when the relationship between the target node, the neighbor node known
by the adversary, and the other nodes in the social network remains unchanged.
With the increase of M, the number of edges contained in a subnetwork will
increase, which leads to a decrease in the probability of changing the relation-
ship between the target node/neighbor node and the rest nodes in the same
subnetwork. Consequently, at first, the probability of successfully executing the
friendship attack increases with M. However, with the further increase of M,
the probability of the target node and neighbor node being divided into the same
subnetwork increases. The relationship between the target node, the neighbor
node, and the other nodes is more likely to change when the target node and
the neighbor node are divided into the same subnetwork, given that there are
more edges that are related to the target node and neighbor node in one subnet-
work. Therefore, the probability of successfully executing the friendship attack
decreases as M further increases.

Figure 7: An example of subgraph attack.
3.3.2. Subgraph Attack
The subgraph attack assumes that the adversary knows the structure of a
subgraph containing the target node from the graph he/she wants to attack,
such as the subgraph plotted in Fig. 7 (b). Then, the adversary can uniquely determine that node 7 in the anonymous network shown in Fig. 7 (c) is the target node Alice, given that only node 7 owns the same subgraph structure as the target node.
The NetNS can also deal with the subgraph attack, owing to the fact that the subgraph known by the adversary is disturbed with a high probability. Assume that the subgraph known by the adversary owns n nodes. Since the NetNS randomly divides the nodes in the positive social network into different subnetworks, there are various divisions of the subgraph. The number of divisions is
\[ \sum_{i=1}^{C} \frac{1}{i!} \sum_{k=0}^{i} (-1)^k \binom{i}{k} (i-k)^n, \]
where C = \lceil N/M \rceil is the number of subnetworks contained in the positive social network. To simplify the analysis, here we only consider the case where the nodes of the subgraph are divided evenly into C subnetworks, i.e., each subnetwork contains v = \lfloor n/C \rfloor neighbor nodes. The
probability that the structure of the subgraph nodes contained in a subnetwork is not changed can be calculated as follows:
\[ p_{single} = \sum_{i=1}^{(M-v)(M-v-1)/2} p_i \times \frac{\binom{(M-v)(M-v-1)/2}{i}}{\binom{M(M-1)/2}{i}}. \tag{7} \]
Since for each subnetwork the NetNS generates the negative subnetwork independently, the probability of the whole subgraph structure known by the adversary being unchanged is:
\[ p_{neighborhood} = \prod_{i=1}^{C} p_{single}. \tag{8} \]
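Analogously to the friendship-attack sketch, Eqs. (7) and (8) can be transcribed as the short Python sketch below under the even-division assumption; the names and example parameters are ours, and the code simply evaluates the closed-form expressions as written.

import math
from math import ceil, comb, floor

def generate_p(M, sigma):
    d_max = M * (M - 1) // 2
    raw = [math.exp(-((i - 1) ** 2) / (2 * sigma ** 2)) for i in range(1, d_max + 1)]
    total = sum(raw)
    return [x / total for x in raw]

def p_neighborhood(N, M, sigma, n):
    P = generate_p(M, sigma)
    C = ceil(N / M)                          # number of subnetworks
    v = floor(n / C)                         # subgraph nodes assumed per subnetwork
    kept = (M - v) * (M - v - 1) // 2        # pairs not touching the known subgraph
    total = M * (M - 1) // 2
    p_single = sum(P[i - 1] * comb(kept, i) / comb(total, i)
                   for i in range(1, kept + 1))             # Eq. (7)
    return p_single ** C                                    # Eq. (8)

print(round(p_neighborhood(N=1000, M=60, sigma=1, n=30), 4))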
Fig. 8 shows the probability of an adversary successfully executing the subgraph attack against a negative social network under different M, with N = 1000, σ = 1, and n = 30. In Fig. 8, the value of p_{neighborhood} is less than 0.005 when M < 83, which means that the adversary successfully executing the subgraph attack is a small-probability event. Therefore, the NetNS can resist the subgraph attack.

Figure 8: The probability of an adversary successfully executing the subgraph attack against a negative social network under different M (N = 1000, σ = 1, n = 30).
4. Experimental Studies
4.1. Compared Algorithms and Data Sets
In this paper, the developed NetNS is compared with k-degree anonymity (k-
degree for short) [39], EAGA [40], UMGA [41], PBCN [6], GPPS [27], and PPRA [30], all of which disturb the edges of social networks to preserve topology privacy.
To be specific, the k-degree adds and removes the edges to modify the structure
of social networks, so that the disturbed networks satisfy k-degree anonymity,
i.e., for each node, there are at least k nodes in the network with the same
degree. The EAGA uses an evolutionary algorithm to optimize the sequence of the disturbed nodes, which not only satisfies the requirement of the k-anonymity model, but also has the minimum distance from the original degree sequence. The
UMGA adds and removes the edges according to the neighborhood centrality
score of them, so that the disturbed networks meet k-degree anonymity with
high utility. The PBCN randomly disturbs the edges in the social network
to satisfy the differential privacy model. The GPPS implements k-anonymity
through node clustering and graph modification to protect the user’s identity
privacy. The PPRA uses fuzzy sets to disturb the degree centrality of each node,
and then uses a degree-preserving randomization technique to make the degree
of each node in the reconstructed graph remain constant.
The NetNS and six compared algorithms are tested on three real-world social
networks, including Zachary’s karate club network (Karate for short) [42], Books
about US politics network (Polbooks for short) [43], and blogs network (Blogs
for short) [44]. The Karate network has 34 nodes and 78 edges, while the Polbooks network owns 105 nodes and 441 edges. As for Blogs, it contains 3984 nodes and 6803 edges. The characteristics of the three real-world social networks are given in Table 1.

Table 1: The information of the three real-world social networks.

Network     Nodes    Edges
Karate      34       78
Polbooks    105      441
Blogs       3984     6803
4.2. Privacy and Utility Metrics
The structure entropy of social networks is employed as the privacy metric; it calculates the entropy of the node degree distribution and has been used to measure the privacy preservation of different methods in [45]. The structure entropy of a social network
can be calculated as follows:
\[ Privacy = -\sum_{i=1}^{d_{max}} p_i \times \log_2 p_i, \tag{9} \]
where d_max is the maximum node degree in the social network, and p_i is the proportion of nodes with degree i. The higher the entropy, the better the topology privacy of the social network is preserved.
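For clarity, the privacy metric in Eq. (9) can be computed as in the following small Python sketch, which uses networkx's degree view; the Karate example graph is used only for illustration.

import math
from collections import Counter
import networkx as nx

def structure_entropy(G):
    # entropy of the degree distribution, as in Eq. (9)
    counts = Counter(d for _, d in G.degree())
    n = G.number_of_nodes()
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(round(structure_entropy(nx.karate_club_graph()), 3))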
In the experiments, three common metrics are adopted to evaluate the utility
of the disturbed social networks, including the number of triangles, the cluster-
ing coefficient, and the similarity between the shortest path of each pair of
nodes in original and disturbed networks. All of them measure the utility of
social networks according to the changes of attributes of networks.
In addition to the attributes of social networks, we also evaluated the utility
of social networks from the perspective of community detection, which is an im-
portant network analysis method [46]. Since the community detection can find
some useful information hidden in networks, the similarity between the commu-
nity detection results of original and disturbed social networks can be used to
evaluate the utility of disturbed social networks. The higher the similarity of
detection results, the greater the availability of the network. In this paper, we
also employ the normalized mutual information (NMI) as the utility metric,
which can be calculated as follows.
\[ NMI(A, B) = \frac{-2 \sum_{i=1}^{C_A} \sum_{j=1}^{C_B} C_{ij} \log\!\left(\frac{C_{ij}\, n}{C_{i\cdot} C_{\cdot j}}\right)}{\sum_{i=1}^{C_A} C_{i\cdot} \log\!\left(\frac{C_{i\cdot}}{n}\right) + \sum_{j=1}^{C_B} C_{\cdot j} \log\!\left(\frac{C_{\cdot j}}{n}\right)}, \tag{10} \]
where A and B are the community detection results on the original and disturbed social networks respectively, n is the number of nodes in the social network, and C is the confusion matrix whose element C_{ij} indicates the number of nodes that belong to community i of partition A and also to community j of partition B. C_A (C_B) is the number of communities in partition A (B), and C_{i·} (C_{·j}) is the sum of the elements in row i (column j) of matrix C. The higher the NMI, the better the utility the disturbed social network achieves.
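As a sketch of how this utility metric can be evaluated in practice, the snippet below detects communities on an original graph and on a toy disturbed copy, and computes the NMI. We use networkx's semi-synchronous label propagation as a stand-in for CLPE [10], and scikit-learn's normalized_mutual_info_score with arithmetic averaging, which corresponds to the normalization in Eq. (10); the single-edge perturbation is purely illustrative.

import networkx as nx
from networkx.algorithms.community import label_propagation_communities
from sklearn.metrics import normalized_mutual_info_score

def community_labels(G):
    # map each node to the index of its detected community
    labels = {}
    for cid, nodes in enumerate(label_propagation_communities(G)):
        for node in nodes:
            labels[node] = cid
    return [labels[n] for n in sorted(G.nodes)]

G = nx.karate_club_graph()
G_disturbed = G.copy()
G_disturbed.remove_edge(0, 2)        # a toy perturbation for illustration
nmi = normalized_mutual_info_score(community_labels(G),
                                   community_labels(G_disturbed),
                                   average_method="arithmetic")
print(round(nmi, 3))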
4.3. Performance Analysis
In order to verify the privacy and utility of different methods, we tune the pa-
rameters of NetNS and six compared algorithms to achieve the different privacy
preservation levels, and then compare the utility of disturbed social networks.
Here, the community detection method CLPE [10] is employed to divide the nodes into different communities, given that the disturbance of edges might destroy the community structure, and the CLPE achieves good performance on networks without a clear community structure. All of the experiments are indepen-
dently conducted 10 times and the median values of the experimental results
are plotted in Figs. 9-12.
Figure 9: The comparison of the clustering coefficient of social networks disturbed by seven algorithms with different privacy preservation levels. (a) The clustering coefficient of the Karate network disturbed by seven algorithms with different privacy preservation levels. (b) The clustering coefficient of the Polbooks network disturbed by seven algorithms with different privacy preservation levels. (c) The clustering coefficient of the Blogs network disturbed by seven algorithms with different privacy preservation levels.
4.3.1. Clustering Coefficient
The absolute value of the difference between the clustering coefficient of
the original social networks and the networks disturbed by different methods is
shown in Fig. 9. It can be found that the developed NetNS achieves the overall
most accurate clustering coefficient on the Polbooks and Blogs networks. On the Karate network, the clustering coefficient of the networks disturbed by NetNS is not as good as that of PPRA, but is closer to that of the original network than those of the other five algorithms. On the Polbooks network, although the EAGA obtains a better clustering coefficient than NetNS when the privacy preservation level is 4.48, its performance on the clustering coefficient considerably deteriorates as the privacy preservation level increases. By contrast, the
networks disturbed by NetNS always have a competitive clustering coefficient
under different privacy preservation levels. On the Blogs network, when the
privacy preservation level is less than 7.75, the NetNS and the PPRA perform
almost the same in terms of clustering coefficient. However, when the privacy
preservation level exceeds 7.75, the clustering coefficient obtained by NetNS is better than that obtained by PPRA. Therefore, the NetNS owns the overall best performance on the clustering coefficient.
4.3.2. Number of Triangles
The absolute value of the difference between the number of triangles in orig-
inal and disturbed social networks is plotted in Fig. 10. As can be seen in
Fig. 10 (a), the developed NetNS owns the best performance on the number of triangles when the privacy preservation level is less than 3.32. Although the performance of NetNS is worse than that of the UMGA at higher privacy preservation levels, it still achieves competitive performance on the number of triangles. On the Polbooks network, the devel-
oped NetNS achieves the best performance except when the privacy preservation
level is 4.48. Moreover, when the privacy preservation level is 4.48, the perfor-
mance of NetNS is only slightly worse than the EAGA, which achieves the best
performance under corresponding privacy preservation level. As for the Blogs
network shown in Fig. 10 (c), the number of triangles in the network disturbed
by the NetNS remains almost the same as in the original network. On the whole, compared to the other algorithms, the networks disturbed by NetNS still own the overall most accurate number of triangles.

Figure 10: The comparison of the number of triangles in social networks disturbed by seven algorithms with different privacy preservation levels. (a) The number of triangles in the Karate networks disturbed by seven algorithms with different privacy preservation levels. (b) The number of triangles in the Polbooks networks disturbed by seven algorithms with different privacy preservation levels. (c) The number of triangles in the Blogs networks disturbed by seven algorithms with different privacy preservation levels.
4.3.3. The Shortest Path Between Different Nodes
Fig. 11 displays the similarity between the shortest path of each pair of
nodes in original and disturbed networks. Here, the cosine similarity is used to
measure the similarity of the shortest path between two networks. From Fig. 11
(a), it can be found that the networks disturbed by the NetNS have the overall
highest similarity with the original Karate network from the perspective of the
shortest path. Although the UMGA, k-degree, and GPPS own better similarity than NetNS when the privacy preservation level is 3.33, the NetNS still owns competitive performance, and at the other levels the NetNS always achieves the best performance. In Fig. 11 (b), the networks disturbed by the NetNS also have the overall best utility. Although the EAGA owns competitive performance when the privacy preservation level is 4.48, it is far worse than NetNS when the privacy preservation level is 4.47 or 4.49. For the Blogs network, it can be seen in Fig. 11 (c) that the NetNS still achieves the best performance on
the utility of the disturbed networks.

Figure 11: The comparison of the similarity between the shortest paths of each pair of nodes in the original networks and the networks disturbed by seven algorithms with different privacy preservation levels. (a) The similarity between the shortest paths of each pair of nodes in the Karate networks disturbed by seven algorithms with different privacy preservation levels. (b) The similarity between the shortest paths of each pair of nodes in the Polbooks networks disturbed by seven algorithms with different privacy preservation levels. (c) The similarity between the shortest paths of each pair of nodes in the Blogs networks disturbed by seven algorithms with different privacy preservation levels.
4.3.4. Normalized Mutual Information
To verify the influence of different algorithms on network analysis results, the normalized mutual information (NMI) between the community detection results on the original and disturbed social networks is given in Fig. 12. In Fig. 12 (a), the NetNS owns the highest NMI on the Karate network under all privacy preservation levels, which means that the community detection results on the networks disturbed by the NetNS are the most similar to those on the original network. In Fig. 12 (b), although the NetNS is slightly worse than k-degree on the Polbooks network when the privacy preservation level is 4.48, it achieves the best NMI under the rest of the privacy preservation levels. As for Fig. 12 (c), it can be seen that the NetNS has the best NMI on almost all privacy preservation levels. Consequently, the networks disturbed by NetNS own the best utility in terms of NMI.
Figure 12: The comparison of the community detection results on social networks disturbed by seven algorithms with different privacy preservation levels. (a) The community detection results on the Karate networks disturbed by seven algorithms with different privacy preservation levels. (b) The community detection results on the Polbooks networks disturbed by seven algorithms with different privacy preservation levels. (c) The community detection results on the Blogs networks disturbed by seven algorithms with different privacy preservation levels.

To sum up, from the analysis of Figs. 9-12, the following two conclusions can
be obtained. First, compared to the six considered privacy preserving algorithms tailored for social networks, the social networks obtained by the proposed NetNS achieve the overall best utility at the same privacy preservation level. Second, the NetNS can provide a higher level of privacy while ensuring the utility of social networks. The reasons for the high utility achieved by NetNS are as follows. The NetNS adds and removes edges only within subnetworks, and the relationships between nodes in different subnetworks are not changed, which limits the impact of the NetNS on the structure of social networks. In contrast, all of the compared algorithms add and remove edges throughout the whole social network, which might destroy the structure of the networks and lead to low utility. Moreover, for each subnetwork, the NetNS selects the negative subnetwork according to the Gaussian distribution, and thus negative subnetworks with more disturbed edges have a lower probability of being selected, which further ensures that the structure of the subnetworks does not change significantly.
4.3.5. The Stability of the Performance of Different Algorithms
To verify the stability of the performance of different algorithms, the vari-
ance of the NMI values of Polbooks networks disturbed by different algorithms
is calculated and plotted in Fig. 13, where the NetNS achieves the best stability among all seven algorithms.

Figure 13: The variance of the NMI values of the Polbooks networks disturbed by different algorithms.

In Fig. 13, the variance of the NMI values of the networks disturbed by the EAGA, UMGA, k-degree, PBCN, and PPRA is
far greater than that of NetNS, which means that the utility of the networks disturbed by them fluctuates more than that of the NetNS. Although the GPPS achieves competitive stability when the privacy preservation level is less than 4.5, its variance is still greater than that of the NetNS. Furthermore, when the privacy preservation level is greater than 4.5, there is a sharp rise in the variance of the NMI values of the networks disturbed by GPPS. On the contrary, the variance of the NMI values of the networks disturbed by the NetNS always stays at low values. Therefore, the developed NetNS not only achieves better utility, but also owns better stability than the compared algorithms.
4.4. Sensitivity Analysis of Parameters in NetNS
The developed NetNS contains two parameters, namely the number of nodes in a subnetwork M and the variance of the Gaussian distribution σ. In order to verify the impact of the parameters M and σ on the performance of NetNS, we fix one parameter and vary the other to analyze its impact on the utility and privacy of the social networks disturbed by NetNS.
4.4.1. Impact of the Parameter M
Fig. 14 displays the variation of the utility and privacy of different networks disturbed by NetNS under different M. Here, σ is set to 9. In Fig. 14 (a), the utility of the networks in terms of the clustering coefficient shows a significant increasing trend with M. From Fig. 14 (b), it can be found that on the Blogs network, the utility in terms of the number of triangles increases sharply when M < 12 and then increases slowly, while on the Karate and Polbooks networks, the utility maintains relative stability. From the perspective of the shortest path shown in Fig. 14 (c), although there is a modest decline when M is 6 on the Karate and Polbooks networks and when M is 15 on the Karate and Blogs networks, the utility of the networks disturbed by NetNS also increases with M.
Figure 14: Impact of the number of nodes in a group M on the privacy and utility of social networks disturbed by NetNS. (a) Clustering coefficient. (b) Number of triangles. (c) Average shortest path. (d) NMI. (e) Privacy preservation level (Karate). (f) Privacy preservation level (Polbooks). (g) Privacy preservation level (Blogs).
As for the NMI shown in Fig. 14 (d), it can be seen that on the Karate network, the NMI rises sharply from below 0.65 to above 0.9 when M is between 3 and 6, and then keeps a relatively gentle increase as M increases. Similarly, on the Polbooks network, the NMI maintains a similar trend to that on the Karate network, namely increasing sharply first and then increasing steadily. As for the Blogs network, the NMI maintains a steady upward trend with the increase of M. On the whole, the utility of the social networks disturbed by NetNS increases with M. The reason for this trend is that when M increases, the number of subnetworks decreases, so the number of edges disturbed by NetNS in the whole network also decreases, which leads to higher utility.
Figs. 14 (e)-(g) give the privacy preservation level achieved by NetNS under different M, where the privacy preservation level of the three real-world social networks decreases as M increases. This is because the smaller M is, the more subnetworks the whole social network is divided into, so more edges are disturbed, given that the expected number of disturbed edges in each subnetwork is the same. The more edges of a network are disturbed, the more chaotic its structure becomes, and therefore the higher the achieved privacy preservation level.
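To make this relationship concrete, the following sketch (ours, not the authors' implementation; the function name, the example network size, and the assumption that each subnetwork contributes a fixed expected number of disturbed edges are illustrative assumptions) estimates how the number of subnetworks, and hence the total expected disturbance, shrinks as M grows, which accounts for both the rising utility and the falling privacy preservation level.

import math

def expected_disturbance(num_nodes: int, m: int, edges_per_subnetwork: float):
    """Rough estimate of how the group size M drives the total disturbance.

    Assumes the network is split into ceil(num_nodes / m) subnetworks and that
    each subnetwork contributes a fixed expected number of disturbed edges
    (an illustrative assumption, not the paper's exact model).
    """
    num_subnetworks = math.ceil(num_nodes / m)
    return num_subnetworks, num_subnetworks * edges_per_subnetwork

# Example: a network of roughly the size of Polbooks (about 105 nodes),
# assuming each subnetwork contributes about two disturbed edges on average.
for m in (3, 6, 12, 15):
    subs, disturbed = expected_disturbance(105, m, edges_per_subnetwork=2.0)
    print(f"M={m:2d}: {subs:3d} subnetworks, ~{disturbed:.0f} disturbed edges expected")

Fewer disturbed edges keep the structure closer to the original (higher utility) but also make it less chaotic (lower privacy preservation level), matching the trends in Fig. 14.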
From the above analysis, it can be found that the utility of the social networks disturbed by NetNS increases with M, whereas the privacy preservation level decreases with M. Researchers can therefore determine the parameter M according to their requirements on utility and privacy preservation level. When there is no additional requirement on utility or privacy preservation level, M is recommended to be set to 6, given that when M = 6, NetNS achieves the best privacy preservation level on the Karate and Polbooks networks, the second best privacy preservation level on the Blogs network, and competitive utility on all three real-world social networks.
4.4.2. Impact of the Parameter σ
Fig. 15 shows the utility and privacy preservation level of the networks disturbed by NetNS with the parameter σ varying from 1 to 10 in steps of 1. Here, M = 3.
In Fig. 15 (a), the utility of networks in terms of the clustering coefficient shows a decreasing trend as σ increases, apart from an irregular variation on the Karate network, which might be related to its small number of nodes. In Fig. 15 (b), the utility of networks in terms of the number of triangles
remains stable on the Karate and Polbooks networks, while on the Blogs network the utility drops noticeably when σ increases from two to three.
Figure 15: Impact of the variance value of the Gaussian distribution σ on the privacy preservation level and data utility of different networks. (a) Clustering coefficient. (b) Number of triangles. (c) Average shortest path. (d) NMI. (e) Privacy preservation level (Karate). (f) Privacy preservation level (Polbooks). (g) Privacy preservation level (Blogs).
As for the average shortest path, it can be found from Fig. 15 (c) that the similarity between the disturbed networks and the original networks decreases as σ increases, except on the Blogs network, where it first increases and then remains stable. In Fig. 15 (d), the NMI values first increase and then remain stable on the Karate network, while they show a decreasing trend on the Polbooks and Blogs networks.
Figs. 15 (e)-(g) display the changes of the privacy preservation level under different σ. In Fig. 15 (e), the privacy preservation level of NetNS on the Karate network increases sharply when σ increases from one to two and from four to six. On the Polbooks network, NetNS achieves the best privacy preservation level when σ = 3 or σ > 4. As for the Blogs network, the privacy preservation level of NetNS increases once σ exceeds 3 and then remains relatively stable.
Different from the parameter M, the utility of the social networks disturbed by NetNS first decreases with σ and then remains relatively stable, while the privacy
preservation level of NetNS first increases with σ and then remains relatively stable. The reason is that, as σ increases, the probability that NetNS selects negative subnetworks farther from the original subnetworks increases, and thus the expected number of edges disturbed by NetNS also increases. Consequently, the structure of the network is changed more significantly, which yields a higher privacy preservation level and lower utility. However, when σ is relatively large, the probabilities of NetNS selecting negative subnetworks at different distances from the positive subnetwork become close to each other. In this case, a further increase of σ changes the expected number of disturbed edges only slightly. Therefore, when σ is relatively large, the utility and privacy preservation level achieved by NetNS remain relatively stable.
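This saturation effect can be illustrated with a small numerical sketch (ours, under the assumption that candidate negative subnetworks at topology distances 1, 2, ..., D from the positive subnetwork are chosen with weights proportional to a Gaussian kernel with parameter σ; this mimics, but does not reproduce, the paper's exact selection rule):

import numpy as np

def distance_selection_probs(max_distance: int, sigma: float) -> np.ndarray:
    """Probability of selecting a negative subnetwork at each topology distance.

    Sketch only: distances 1..max_distance are weighted by the Gaussian kernel
    exp(-d^2 / (2 * sigma^2)) and normalized, an assumption made here to
    illustrate the effect of sigma rather than the paper's exact formula.
    """
    d = np.arange(1, max_distance + 1)
    weights = np.exp(-d.astype(float) ** 2 / (2.0 * sigma ** 2))
    return weights / weights.sum()

# As sigma grows, the distribution flattens, so distant (heavily disturbed)
# negative subnetworks become nearly as likely as nearby ones; beyond that
# point, increasing sigma barely changes the expected distance.
for sigma in (1, 3, 7, 10):
    probs = distance_selection_probs(max_distance=8, sigma=sigma)
    expected_distance = float(np.sum(np.arange(1, 9) * probs))
    print(f"sigma={sigma:2d}: expected distance ~ {expected_distance:.2f}")

In this toy setting, the expected distance grows quickly for small σ and changes only marginally between the larger values of σ, which is consistent with the plateau observed in Fig. 15.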
From the experimental results shown in Fig. 15, it can be found that when there are high requirements on the utility of the social network, σ is recommended to be set to one or two, since the disturbed networks then show the best overall utility in terms of all metrics. When there are high requirements on the privacy preservation level, σ is recommended to be set to seven, since in this case NetNS achieves the best privacy preservation level.
Furthermore, by comparing the sensitivity results for the parameters M and σ, it can be seen that their impacts on the privacy preservation level are close to each other, but the parameter M has a larger impact on the utility of social networks than σ. Therefore, in practical applications, the actual requirements on utility and privacy preservation level can be met by adjusting the parameter M over a wide range and the parameter σ within a small range.
5. Conclusion and Future Work
The research and applications of social networks have brought many improvements to our daily life. However, the privacy preservation of social networks also deserves more attention, given that they contain a large amount of sensitive personal information.
To this end, based on the Gaussian negative survey model, a privacy preservation method, called NetNS, has been developed to preserve the topology privacy of social networks. The developed NetNS first divides the social network into multiple subnetworks of the same size, and then, for each subnetwork, selects a negative subnetwork, whose topology differs from that of the original subnetwork, according to a Gaussian distribution. After that, NetNS replaces each subnetwork with its selected negative subnetwork. The theoretical analysis shows that NetNS is not only efficient but can also resist the friendship attack and the subgraph attack. The experimental results on three real-world social networks indicate that, under the same privacy preservation level, NetNS can obtain disturbed networks with better utility than six existing algorithms tailored for the topology privacy of social networks.
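For readers who prefer code to prose, the workflow summarized above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the block-wise partitioning, the use of a Gaussian sample to decide how many node pairs to flip, and all function names are our own stand-ins for the paper's exact procedures.

import random
import networkx as nx

def netns_sketch(graph: nx.Graph, m: int, sigma: float, seed: int = 0) -> nx.Graph:
    """Schematic sketch of the NetNS workflow summarized above.

    Nodes are grouped into consecutive blocks of size m, and each block's
    internal connections are partially flipped after sampling a disturbance
    amount from a Gaussian with parameter sigma (both simplifications).
    """
    rng = random.Random(seed)
    disturbed = graph.copy()
    nodes = list(graph.nodes())

    for start in range(0, len(nodes), m):
        block = nodes[start:start + m]
        pairs = [(u, v) for i, u in enumerate(block) for v in block[i + 1:]]
        if not pairs:
            continue

        # Sample how many node pairs inside the subnetwork to flip, mimicking
        # the Gaussian selection of a negative subnetwork at some distance.
        distance = min(len(pairs), max(1, round(abs(rng.gauss(0, sigma)))))

        # Replace the subnetwork with a "negative" one by flipping the pairs.
        for u, v in rng.sample(pairs, distance):
            if disturbed.has_edge(u, v):
                disturbed.remove_edge(u, v)
            else:
                disturbed.add_edge(u, v)
    return disturbed

# Example usage on Zachary's karate club network.
original = nx.karate_club_graph()
anonymized = netns_sketch(original, m=6, sigma=2.0)
print(original.number_of_edges(), anonymized.number_of_edges())

Replacing the simplified partitioning and the pair-flip sampling with the paper's subnetwork encoding and Gaussian negative-survey selection would recover the actual NetNS procedure.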
There is also some work left for further research. First, in NetNS, the social network is divided into multiple subnetworks of the same size, and the relationship between each pair of nodes receives the same privacy preservation level. However, in real life, different people often require different levels of privacy preservation. Therefore, it would be intriguing to extend NetNS to provide individualized levels of privacy preservation. Second, in this paper, we only verify the impact of NetNS on community detection results. However, there are many other analysis techniques for social networks, such as critical node detection and link prediction. In the future, we will use other social network analysis techniques to further evaluate the utility of the social networks disturbed by NetNS.
Acknowledgments
This study is partly supported by the National Natural Science Foundation
of China (No. 62006001), the Synergy Innovation Program of Anhui Province
(No. GXXT-2020-013), and the Natural Science Foundation of Chongqing City
(Grant No. CSTC2021JCYJ-MSXMX0002).