Figure 3: Parametric Markov chain for the activity diagram given in Figure 1, where p_uc = p_userComment, p_fc = p_friendComment and p_fl = p_friendLike


Source publication
Conference Paper
Full-text available
Some online social networks (OSNs) allow users to define friendship-groups as reusable shortcuts for sharing information with multiple contacts. Posting exclusively to a friendship-group gives some privacy control, while supporting communication with (and within) this group. However, recipients of such posts may want to reuse content for their own...

Contexts in source publication

Context 1
... have mentioned in Section 1 that one of the key aspects of our approach is the use of a parametric DTMC for modelling social network interaction between the user and a group member. Figure 3 describes the parametric DTMC that captures all the online social interactions identified in the simplified Facebook workflow given in Figure 1. Each state in this model represents a stage in the interactions between a user i and a group member j, starting with a shared piece of information being received (i.e., initial state s0). State s10 denotes that no further interactions between user i and friend j, relative to the shared information, occur. Transitions between transient states model the control flow of how the receiver, which can be group member j as well as user i (in the case of a re-share action performed by j), interacts with the shared information. For example, the transition (s0, s1) models that group member j has ignored i's message, whereas transition (s0, s2) models the feedback interaction between i and j. Transitions (s2, s4) and (s2, s6) represent i's interaction with group member j, while transitions (s2, s5) and (s2, s7) model j's interaction with user i in response to the shared information. The probabilities of the outgoing transitions between states are unknown, or may change over time. For example, transition (s0, s3_k) models group member j re-sharing user i's received item of information, with sensitivity level k, which depends on the (unknown) sharing behaviour of ...
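The parametric structure described in this context can be prototyped directly. The sketch below, in Python with sympy, encodes symbolic transition probabilities over the states s0-s10 and checks that each row becomes a proper probability distribution once the parameters are instantiated. The placement of p_uc, p_fc and p_fl on individual transitions, and the concrete values substituted, are illustrative assumptions rather than the paper's actual parameterisation.

```python
# Minimal sketch of a parametric DTMC (not the paper's exact model), using
# sympy so that transition probabilities stay symbolic in p_uc, p_fc, p_fl.
import sympy as sp

p_uc, p_fc, p_fl = sp.symbols("p_uc p_fc p_fl", positive=True)
one = sp.Integer(1)

# Transitions as {source: {target: symbolic probability}}.
# s0: item received; s1: ignored; s2: feedback interaction started;
# s3_1: re-share (one sensitivity level shown); s10: no further interaction.
P = {
    "s0":  {"s1": 1 - p_fc - p_fl, "s2": p_fc, "s3_1": p_fl},   # assumed split
    "s2":  {"s4": p_uc / 2, "s6": p_uc / 2,                     # i reacts to j
            "s5": (1 - p_uc) / 2, "s7": (1 - p_uc) / 2},        # j reacts to i
    "s1":  {"s10": one}, "s3_1": {"s10": one}, "s4": {"s10": one},
    "s5":  {"s10": one}, "s6": {"s10": one}, "s7": {"s10": one},
    "s10": {"s10": one},                                        # absorbing state
}

# Instantiating the parameters turns the parametric chain into a concrete DTMC;
# every row must then be a probability distribution.
values = {p_uc: sp.Rational(1, 2), p_fc: sp.Rational(3, 10), p_fl: sp.Rational(1, 5)}
for state, row in P.items():
    assert sp.simplify(sum(row.values()).subs(values)) == 1, state
```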
Context 2
... use the cost/reward structure of the Markov chain to assign state rewards corresponding to system operations that have the potential of triggering an interaction between the user and the group member. As shown in Figure 3, we assign state rewards to s1, s3_1, s4, s5, s6, s7 ∈ S, labelled as r1, r3_1, r4, r5, r6 and r7 respectively. State rewards are non-negative values and denote our system model ...
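A sketch of how such a reward structure can be evaluated on the instantiated chain from the previous sketch: each transient state carries a non-negative reward, and the expected reward accumulated before absorption in s10 solves (I - Q) x = r over the transient states. The reward values below are illustrative, not taken from the paper.

```python
# Expected cumulative reward before absorption for an absorbing DTMC.
import numpy as np

transient = ["s0", "s1", "s2", "s3_1", "s4", "s5", "s6", "s7"]
idx = {s: i for i, s in enumerate(transient)}

# Transition probabilities among transient states (Q), using the same
# instantiation as before (p_uc = 0.5, p_fc = 0.3, p_fl = 0.2).
Q = np.zeros((len(transient), len(transient)))
Q[idx["s0"], idx["s1"]] = 0.5
Q[idx["s0"], idx["s2"]] = 0.3
Q[idx["s0"], idx["s3_1"]] = 0.2
for target in ("s4", "s5", "s6", "s7"):
    Q[idx["s2"], idx[target]] = 0.25
# s1, s3_1 and s4..s7 move straight to the absorbing state s10,
# so their rows in Q stay zero.

# Illustrative non-negative state rewards r1, r3_1, r4..r7 (zero on s0 and s2).
r = np.array([0.0, 1.0, 0.0, 4.0, 2.0, 2.0, 3.0, 3.0])

expected = np.linalg.solve(np.eye(len(transient)) - Q, r)
print(dict(zip(transient, expected.round(3))))  # expected reward from each state
```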

Similar publications

Conference Paper
Full-text available
Online social networks (OSNs) are currently a popular platform for social interactions among people. Usually, OSN users upload various content, including personal information, on their profiles. Not all information uploaded on OSNs is public. The ability to infer users' hidden information or information that has not even been uploaded (i.e. private...

Citations

... Privacy and security scholars have developed several approaches to facilitate the definition of access-control policies in OSNs. In particular, ACPMs seek to automate the generation of ACLs through machine learning [7, 10, 28-30], formal logic [31, 32], and network analysis [14, 17, 23], among other methods. Broadly, ACPMs can be classified as community-based [14, 17, 23] or attribute-based [7, 10, 29, 33], depending on whether they leverage communities or personal attributes for the automatic generation of access-control policies. ...
Article
Full-text available
Access-Control Lists (ACLs) (a.k.a. “friend lists”) are one of the most important privacy features of Online Social Networks (OSNs) as they allow users to restrict the audience of their publications. Nevertheless, creating and maintaining custom ACLs can introduce a high cognitive burden on average OSN users since it normally requires assessing the trustworthiness of a large number of contacts. In principle, community detection algorithms can be leveraged to support the generation of ACLs by mapping a set of examples (i.e. contacts labelled as “untrusted”) to the emerging communities inside the user’s ego-network. However, unlike users’ access-control preferences, traditional community-detection algorithms do not take the homophily characteristics of such communities into account (i.e. attributes shared among members). Consequently, this strategy may lead to inaccurate ACL configurations and privacy breaches under certain homophily scenarios. This work investigates the use of community-detection algorithms for the automatic generation of ACLs in OSNs. Particularly, it analyses the performance of the aforementioned approach under different homophily conditions through a simulation model. Furthermore, since private information may reach the scope of untrusted recipients through the re-sharing affordances of OSNs, information diffusion processes are also modelled and taken explicitly into account. Altogether, the removal of gatekeeper nodes is further explored as a strategy to counteract unwanted data dissemination.
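A hedged sketch of the community-based ACL idea summarised in the abstract above: detect communities in the user's ego-network and exclude from the generated audience every community that contains a contact already labelled as untrusted. The example graph, the labels and the use of networkx's modularity-based detection are illustrative assumptions, not the cited paper's exact algorithm.

```python
# Community-based ACL generation, illustrated on a stand-in ego-network.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

ego_network = nx.karate_club_graph()     # stand-in for a real ego-network
untrusted_examples = {0, 33}             # contacts the user labelled as untrusted

acl = set()
for community in greedy_modularity_communities(ego_network):
    if untrusted_examples.isdisjoint(community):
        acl |= set(community)            # admit the whole community to the ACL

print(f"{len(acl)} of {ego_network.number_of_nodes()} contacts admitted to the ACL")
```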
... (Sources: Antón and Earp, 2004; Deng et al., 2011; Zhang et al., 2020; Calandrino et al., 2011; Jana et al., 2013; Hasan et al., 2020; Omoronyia et al., 2012; Some, 2019; Figueiredo et al., 2017; Venkatadri et al., 2018; Horbe and Hotzendorfer, 2015; De and Metayer, 2016; Fisk et al., 2015; Barman et al., 2015; Lin, 2016; Ahamed et al., 2009; Sicari et al., 2012; Yang et al., 2016; Erola et al., 2011; Bilogrevic et al., 2011; Tuan Anh et al., 2014; Siewe and Yang, 2016; Calciati et al., 2018; Drosatos et al., 2014; Zhang et al., 2005; Castiglione et al., 2010; Tschersich et al., 2011; Rafiq et al., 2017; Iyilade and Vassileva, 2014; Official Journal of the European Union, 2016; ISO/IEC, 2011; Department of Health and Human Services, 2013; United States Congress, 1999; Federal Register of Legislation, 2017; U.S. Department of Justice, 1974; The OWASP Foundation, 2015). This category covers vulnerabilities related to the use, storage, and manipulation of the collected personal data. ...
... Processing personal data at a third party is a risk, since third parties, particularly mobile and web applications, may apply lower levels of personal data protection (The OWASP Foundation, 2015). There are cases where user privacy is violated when personal data is used for unspecified purposes, or used or transferred without permission (Antón and Earp, 2004; Deng et al., 2011; Zhang et al., 2020; Jana et al., 2013; Hasan et al., 2020; Rafiq et al., 2017; Lebeck et al., 2018; Iyilade and Vassileva, 2014; De and Metayer, 2016; Fisk et al., 2015; The OWASP Foundation, 2015). In addition, although allowing unauthorised actors to modify personal data and collecting personal data without user consent/permissions are serious threats, none of the existing privacy vulnerabilities in CWE and CVE covers them. ...
Preprint
Full-text available
In this digital era, our privacy is under constant threat as our personal data and traceable online/offline activities are frequently collected, processed and transferred by many software applications. Privacy attacks are often formed by exploiting vulnerabilities found in those software applications. The Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE) systems are currently the main sources that software engineers rely on for understanding and preventing publicly disclosed software vulnerabilities. However, our study of all 922 weaknesses in the CWE and 156,537 vulnerabilities registered in the CVE to date has found very small coverage of privacy-related vulnerabilities in both systems: only 4.45% in CWE and 0.1% in CVE. These also cover only a small number of the areas of privacy threats that have been raised in existing privacy software engineering research, privacy regulations and frameworks, and industry sources. The actionable insights generated from our study led to the introduction of 11 new common privacy weaknesses to supplement the CWE system, making it a source for both security and privacy vulnerabilities.
... Privacy and security scholars have developed several approaches to facilitate the definition of access-control policies in OSNs. Particularly, ACPMs seek to automate the generation of ACLs through machine learning [44,17,37,14,52], formal logic [48,56], and network analysis [18,38,15] among other methods. On a large scale, ACPMs can be classified into community-based [18,38,15] or attribute-based [44,17,14,54], depending on whether they leverage communities or personal attributes for the automatic generation of access-control policies. ...
Preprint
Access-Control Lists (ACLs) (a.k.a. friend lists) are one of the most important privacy features of Online Social Networks (OSNs) as they allow users to restrict the audience of their publications. Nevertheless, creating and maintaining custom ACLs can introduce a high cognitive burden on average OSN users since it normally requires assessing the trustworthiness of a large number of contacts. In principle, community detection algorithms can be leveraged to support the generation of ACLs by mapping a set of examples (i.e. contacts labelled as untrusted) to the emerging communities inside the user's ego-network. However, unlike users' access-control preferences, traditional community-detection algorithms do not take the homophily characteristics of such communities into account (i.e. attributes shared among members). Consequently, this strategy may lead to inaccurate ACL configurations and privacy breaches under certain homophily scenarios. This work investigates the use of community-detection algorithms for the automatic generation of ACLs in OSNs. Particularly, it analyses the performance of the aforementioned approach under different homophily conditions through a simulation model. Furthermore, since private information may reach the scope of untrusted recipients through the re-sharing affordances of OSNs, information diffusion processes are also modelled and taken explicitly into account. Altogether, the removal of gatekeeper nodes is further explored as a strategy to counteract unwanted data dissemination.
Book
Full-text available
This book constitutes the thoroughly refereed post-conference papers of the First International Conference on Blockchain and Trustworthy Systems (BlockSys 2019), held in Guangzhou, China, in December 2019. The 50 regular papers and the 19 short papers were carefully reviewed and selected from 130 submissions. The papers focus on blockchain and trustworthy systems, which can be applied to many fields such as financial services, social management and supply chain management.
Chapter
Solid (Social Linked Data) aims to radically change the way web applications work today, giving users true data ownership and improved privacy. However, it faces two challenges: first, data in centralized repositories needs to be separated from social web applications that force users to share their information; second, decentralized authentication that guarantees who can operate on a user's data, with secure privacy protection, is another significant issue. In this paper, we address these challenges by proposing a blockchain-based decentralized data storage and authentication scheme for Solid, termed BCSolid, in which a user's data can be independent of multiple web applications and the user can switch data storage services easily without relying on a trusted third party. Meanwhile, our scheme guarantees data ownership and user privacy by leveraging the blockchain miners to perform authentication with the help of certificateless cryptography. Additionally, we present a possible instantiation to illustrate how “transactions” in BCSolid are processed. To our knowledge, this is the first work to promote the Solid project using blockchain. The evaluation results show that our scheme achieves low network latency and is a promising solution for Solid.
Chapter
Security and privacy can often be considered from two perspectives. The first perspective is that of the attacker who seeks to exploit vulnerabilities of the system to harm assets such as the software system itself or its users. The second perspective is that of the defender who seeks to protect the assets by minimising the likelihood of attacks on those assets. This chapter focuses on analysing security and privacy risks from these two perspectives considering both the software system and its uncertain environment including uncertain human behaviours. These risks are dynamically changing at runtime, making them even harder to analyse. To compute the range of these risks, we highlight how to alternate between the attacker and the defender perspectives as part of an iterative process. We then quantify the risk assessment as part of adaptive security and privacy mechanisms complementing the logic reasoning of qualitative risks in argumentation (Yu et al., J Syst Softw 106:102–116, 2015). We illustrate the proposed approach through the risk analysis of examples in security and privacy.