Figure 1 - uploaded by Athman Bouguettaya
Social Web: technologies, platforms and applications 

Contexts in source publication

Context 1
... through into mainstream newspapers and television without much scrutiny or verification, owing to the intense competition to be first to report the news. On the positive side, social media has played a major role in providing a universal and unfettered platform for expressing and sharing views to enact political change (such as the Arab Spring) [10] and in providing effective relief in natural disasters (such as the Japan tsunami and the New Zealand earthquakes) [20], thus consolidating global relief efforts. Furthermore, people are increasingly relying on the Social Web to obtain information on a wide variety of topics, such as travel, healthcare advice and government services [6]. Indeed, the Social Web has become an essential part of the communication eco-system worldwide. Therefore, it is important to address the issue of trust in the Social Web in order to leverage it for the betterment of human society. In the Social Web, there are four essential entities that are directly involved in trust: service consumers, service providers, services and content. In this special issue, we are looking for innovative technologies and solutions from diverse disciplines that address the issue of trust in the Social Web. Trust has been studied in many disciplines, including Sociology, Psychology, Economics and Computer Science. Each of these disciplines has defined and considered trust from its own perspective, and their definitions may not be directly applicable to the Social Web. In general, trust is a measure of confidence that an entity or entities will behave in an expected manner. In the Social Web, trust has to be studied from different aspects: data (or content), services (or applications), service providers (Web sites, organisations, governments or individuals) and service consumers (organisations or individuals). Work covering these different aspects has already been reported in the literature [11, 19]. Beatty et al. [4] conducted a study of consumer trust in e-commerce Web sites, whereas Grandison et al. [9] surveyed trust from the point of view of applications to identify trust needs for e-commerce applications. Similarly, Artz and Gil [3] provided an overview of trust research in the Semantic Web, whereas Malik and Bouguettaya [14] looked at the issues of trust in the Service Web. Josang et al. (2007) published an important survey for Internet applications in which they provided an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions. These review articles focus mainly on trust from a Computer Science perspective. The emergence of the Social Web has spurred new research in the study of trust, and, recently, a number of trust models for social networks have been developed with a specific focus on the social aspects of trust. Sherchan et al. presented a comprehensive review of trust in social networks covering literature from both Computer Science and the Social Sciences [19]. Trust in the Social Web faces a number of challenges that need to be addressed. We outline a few fundamental challenges below. First, the issue of trust bootstrapping is paramount, i.e., how to assign a trust value to new entities (e.g., service providers, service consumers, etc.) in the Service Web. Second, the Social Web is built on the social interactions of entities, and hence can be represented as a complex network structure. 
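The bootstrapping challenge can be made concrete with a small sketch. The rule below (a community average discounted by a pessimism factor, with a fixed default when no history exists at all) and the example values are purely illustrative assumptions on our part, not a method proposed in this editorial or in the papers of this issue.

# Toy illustration of trust bootstrapping: assigning an initial trust value
# to a new entity with no interaction history. The discounting rule and the
# example values are assumptions for illustration, not a prescribed method.

def bootstrap_trust(known_trust_values, pessimism=0.5, default=0.3):
    """Initial trust for a newcomer: the community average, discounted by a
    pessimism factor; fall back to a fixed default if no history exists."""
    if not known_trust_values:
        return default
    community_average = sum(known_trust_values) / len(known_trust_values)
    return pessimism * community_average

existing_providers = [0.9, 0.7, 0.8, 0.6]   # trust scores of known providers
print(bootstrap_trust(existing_providers))  # 0.375 for a brand-new provider
print(bootstrap_trust([]))                  # 0.3 when nothing is known
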
The propagation of trust and distrust through these networks thus becomes an important aspect of building trust in the Social Web. Third, recommendation systems are gaining popularity in the Social Web. A recent survey conducted by Forrester Research shows that 70 % of US respondents (out of 60,000) trust brand or product recommendations from friends and family. However, trust is low when it comes to consumer-written online reviews (46 %). There is thus scope to develop techniques that increase trust in consumer-written online reviews, which in turn helps to build a trusted Social Web. Fourth, the issue of deception must be addressed to build a trusted Social Web, because the Social Web is intrinsically vulnerable to ill-intentioned activities due to its anonymity and openness. Deception can occur in many ways, including the use of fake personal identities and false content or locations. Thus, there is a need to develop novel techniques for preventing, detecting and isolating deception in the Social Web. Finally, context plays an important role in building a trusted Social Web. Trust is context sensitive: for example, a person may trust his/her friends for movie recommendations but not for restaurant recommendations, as their tastes in food differ. A way to go further in this direction is to build a context-aware trust management framework for the Social Web. We believe that we need to develop a trust framework for the Social Web that can be easily integrated with different technologies, platforms and applications, as shown in Figure 1. The W3C initiated a workshop addressing this issue. Passant et al. [16] have proposed a Provenance-Trust-Privacy (PTP) model that deals with three orthogonal, interdependent dimensions: social, interaction and content. This special issue presents six papers that provide insights into some of the key issues discussed earlier. In the following, we describe the problem space and summarise the contribution of each paper. As stated earlier, the Social Web, through social networks (e.g., Facebook, LinkedIn, etc.), provides a way for people to connect with each other, foster relationships and offer informational and emotional support. Although people share information and experiences through such sites, privacy disclosure has serious consequences and hence remains a major concern. For example, Stacy Snyder, a student of education at Millersville University, was denied her teaching certificate after campus administrators discovered photos on her MySpace profile portraying her as a “drunken pirate” [17]. There are three major types of privacy disclosure in social networks: content disclosure, identity disclosure and link disclosure. Content disclosure relates to revealing private information about a person, such as age, sex, gender and sexual orientation. Identity disclosure relates to information that identifies a person, such as a name or social security number. Link disclosure relates to revealing a private link between people in a social network, such as the link between a person and his/her psychiatrist. The first two types of disclosure have been studied in the context of information systems in general, whereas the third is unique to social networks. A standard technique for preserving link privacy is link randomisation. However, such a technique distorts the structure of the network, and the resulting graph can no longer be used to analyse the social network. 
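To illustrate why naive link randomisation distorts the network structure, the toy sketch below deletes k random edges and adds k random non-edges, then compares node degrees before and after. The graph, the parameter k and the edge-swap rule are assumptions made only for illustration; this is not the neighbourhood randomisation technique presented in the paper discussed next.

# Toy illustration of link randomisation for link privacy (assumed example;
# not the neighbourhood randomisation technique discussed below).
import random

def randomise_links(edges, nodes, k, seed=42):
    """Delete k random edges and add k random non-edges (undirected)."""
    rng = random.Random(seed)
    edges = {tuple(sorted(e)) for e in edges}
    removed = set(rng.sample(sorted(edges), k))
    perturbed = edges - removed
    candidates = [(u, v) for u in nodes for v in nodes
                  if u < v and (u, v) not in edges]
    perturbed |= set(rng.sample(candidates, k))
    return perturbed

def degrees(edges, nodes):
    """Number of perturbed or original edges touching each node."""
    return {n: sum(n in e for e in edges) for n in nodes}

nodes = list("abcdef")
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f"), ("a", "f")}
noisy = randomise_links(edges, nodes, k=2)
print("original degrees:  ", degrees(edges, nodes))
print("randomised degrees:", degrees(noisy, nodes))  # structure is distorted
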
To overcome this distortion problem, Fard and Wang present a neighbourhood randomisation technique in their paper “Neighbourhood Randomization for Link Privacy in Social Network Analysis”. The proposed technique drastically reduces the distortion of the graph structure and yet preserves the privacy of sensitive links. Informational support in the Social Web can be provided in a variety of ways, such as online networks, forums and blogs. In recent times, Q&A sites have gained a lot of attention, and many people rely on the information available on the Web. However, the accuracy of this information is paramount to maintaining the reputation of the Social Web. In many cases, the information provided on the Web may be contradictory, causing ambiguity with respect to its accuracy. This is mainly attributed to contributions of content from non-experts. In order to address this problem, in this special issue, Pelechrinis et al. propose a solution in their paper “Automatic Evaluation of Information Provider Reliability and Expertise”. Their solution is based on the study of human cognitive traits, where each user’s activity is monitored by their peers and their compliance with predefined cognitive models is observed. These observations are used to obtain a reliability and expertise consensus for users in the social network. Allahbakhsh et al. also address the issue of the accuracy of information on the Web. In the paper entitled “Robust Evaluation of Products and Reviewers in Social Rating Systems”, they look specifically at the trustworthiness of reviewers and reviews in social rating systems. Social rating systems enable people to rate and provide reviews about various entities (e.g., products, movies, hotels, etc.). These are, in turn, used by Web users to make decisions (e.g., what to buy, what movie to watch, what hotel to stay in, etc.). Unfortunately, some users abuse these rating systems by posting false or unfair evaluations. In this paper, the authors present a framework and novel algorithms for the robust computation of product rating scores and reviewer trust ranks, even in the presence of unfair reviews. They define a three-pronged approach which includes: a product rating computation that aggregates community sentiment to assess the quality of a review; an analysis of reviewers’ behaviour; and a novel method to compute reviewers’ trust ranks. Still addressing the issue of the credibility of information found on the Web, the work of Seth et al. is concerned with participatory media such as blogs. In their paper, “Personalized Credibility Model for Recommending Messages in Social Participatory Media Environments”, they propose a method to determine the credibility of messages so that the messages a user will consider most credible can be recommended to them. Recommendation systems have been widely used in the Social Web, ranging from recommending news articles to items on e-commerce sites or people as partners and friends. Recommendation systems are often seen as one way to overcome ...
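The intuition behind robust rating computation can be sketched as follows: product scores are trust-weighted averages of ratings, and a reviewer's trust falls as their ratings deviate from the emerging consensus, with the two updates iterated until they stabilise. The data, the update rules and the normalisation constant below are our own illustrative assumptions, not the algorithms of Allahbakhsh et al.

# Illustrative iterative scheme coupling product scores and reviewer trust.
# Data, names and update rules are assumptions, not the published algorithm.

ratings = {  # ratings[reviewer][product] on a 1-5 scale
    "r1": {"p1": 5, "p2": 4},
    "r2": {"p1": 5, "p2": 4},
    "r3": {"p1": 1, "p2": 1},  # outlier / potentially unfair reviewer
}

trust = {r: 1.0 for r in ratings}            # start with uniform trust

for _ in range(10):
    # 1) product score = trust-weighted average of its ratings
    scores = {}
    for p in {"p1", "p2"}:
        num = sum(trust[r] * rv[p] for r, rv in ratings.items() if p in rv)
        den = sum(trust[r] for r, rv in ratings.items() if p in rv)
        scores[p] = num / den
    # 2) reviewer trust = 1 minus normalised average deviation from consensus
    for r, rv in ratings.items():
        dev = sum(abs(rv[p] - scores[p]) for p in rv) / len(rv)
        trust[r] = max(0.0, 1.0 - dev / 4.0)   # 4 = max possible deviation

print({p: round(s, 2) for p, s in scores.items()})
print({r: round(t, 2) for r, t in trust.items()})  # outlier ends with low trust
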
Context 2
... phenomenon of the Social Web (i.e., the Web of social media) has caught the attention of research communities in the last decade. Researchers from diverse disciplines, ranging from the social and behavioural sciences to computer science, have started investigating the issues and challenges in the Social Web. Within computer science, researchers from established research areas such as language technologies, machine learning, and service and cloud computing have started looking into the computational and development challenges brought about by the Social Web. With continued reports of breaches of trust and hoax news spreading from social media to mainstream news, trusting the Social Web has become one of the major challenges that need to be addressed. This special issue focuses on this challenge. In this editorial note, we provide a brief introduction to the Social Web, followed by a discussion of the major challenges in building a trusted Social Web. Finally, we present recent work on trusting the Social Web through a summary of the papers included in this special issue. Social media is increasingly becoming mainstream for a variety of purposes, ranging from online journalism (e.g., blogs), online knowledge bases (e.g., Wikipedia) and online marketing (e.g., Twitter) to keeping in touch with friends, family and professional colleagues online (e.g., Facebook, LinkedIn, etc.). These technologies have given rise to a web of social media, also known as the Social Web. The popularity of the Social Web has been overwhelming. It is reported in [8] that 67 % of online adults are connected to one or more social media sites, and that a large proportion of them check in to their favourite social media site first thing every day. This trend is likely to grow as the number of Android-based smartphone shipments alone is expected to reach 1 billion by 2013 [13]. Additionally, the value of social commerce is expected to reach 30 billion dollars within the next 5 years. Similarly, 13 % of digital news consumers follow recommendations from Twitter and Facebook. Therefore, the Social Web is here to stay for the foreseeable future and will revolutionise the way we live, from communicating with our family and friends to conducting business. In a nutshell, the Social Web provides a unique platform for digitising our behaviour. This has enormous potential to change the way we conduct ourselves in the physical world. There is no consensus about the definition of the Social Web. In what follows, we define the Social Web as the platforms, technologies and applications that enable the Web to support and foster social interactions. The most popular Social Web applications include Twitter, Facebook, LinkedIn and Instagram. Figure 1 shows the key component technologies, platforms and applications that define the Social Web. Since the introduction of the World Wide Web (WWW or the Web) by Tim Berners-Lee, the Web has gone through a number of evolutions from its initial form - a web of hypertext documents to be viewed by browsers. The first major evolution of the Web was triggered by the introduction of the Extensible Markup Language (XML) in 1998 by the WWW Consortium (W3C). By the dawn of this century, the Web had established itself as a major place for hosting a variety of applications and data from different domains, ranging from various scientific endeavours to electronic commerce and the delivery of government services. 
W3C, along with other industry partners, has played a significant role in revolutionising how data and applications are hosted and presented on the Web. The major evolution in developing applications on the Web took place with the introduction of Web Services in 2003 [2]. A Web service is an application that is accessible to other applications over the Web [15]. The vision of Web Services is to develop large-scale distributed applications over the Web, where applications communicate through a standard format, much as computers communicate with each other through TCP/IP, using autonomous, transparent software components called services. The foundational technologies for Web Services include the Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI) and the Simple Object Access Protocol (SOAP). Layers of standards have been proposed as the Web Services stack to address other application-related issues such as transactions, security and workflows [7]. In recent times, a large number of applications have been developed as services and deployed on the Web. This has changed the Web into a pool of services that can be searched, composed and executed, called the Service Web [1]. In parallel to the evolution of applications on the Web through the Service Web, there has been an evolution in the representation of data on the Web through the introduction of the Semantic Web [5]. The aim of the Semantic Web is to promote common data formats on the Web with associated semantic content, so that the content of web pages can be linked and processed intelligently. Similar to the three foundational technologies of the Service Web, the Semantic Web is based on a foundational technology, the Resource Description Framework (RDF), recommended by the W3C; a stack of standards is being developed on top of it [18]. The Service Web and the Semantic Web together provide the fundamental building blocks for Semantic Web Services. They also enabled the next evolution of the Web, Web 2.0, with social media as one of its major outcomes [12]. Social media allows the Web to move from data and applications to user interaction and experience. The web of social media that supports and fosters social interactions is called the Social Web. The Social Web provides a unique platform for sharing information and knowledge for individuals, businesses, governments and other organisations. As the Social Web is inherently open and free, it lacks a central control or authority that can assert the trustworthiness of the people and content in it. It is hard to distinguish what is rumour and what is truth. For example, many statesmen/women (e.g., Nelson Mandela, Mikhail Gorbachev, Margaret Thatcher, etc.) and celebrities (e.g., Lady Gaga, Eddie Murphy, etc.) have witnessed reports of their own deaths on the Social Web. Such fake death reports filter through into mainstream newspapers and television without much scrutiny or verification, owing to the intense competition to be first to report the news. On the positive side, social media has played a major role in providing a universal and unfettered platform for expressing and sharing views to enact political change (such as the Arab Spring) [10] and in providing effective relief in natural disasters (such as the Japan tsunami and the New Zealand earthquakes) [20], thus consolidating global relief efforts. 
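To make the Semantic Web discussion concrete, the sketch below represents RDF-style subject-predicate-object triples as plain Python tuples and answers a simple pattern query. The URIs and predicate names are invented for illustration; a real application would use an RDF library and standard vocabularies rather than this hand-rolled structure.

# Minimal sketch of RDF-style triples and a tiny pattern query.
# URIs and predicates are illustrative assumptions, not a standard vocabulary.

triples = [
    ("http://example.org/alice",   "knows",    "http://example.org/bob"),
    ("http://example.org/alice",   "authorOf", "http://example.org/post/42"),
    ("http://example.org/post/42", "topic",    "trust in the Social Web"),
]

def query(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which resources did Alice author?
print(query(triples, s="http://example.org/alice", p="authorOf"))
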
Furthermore, people are increasingly relying on the Social Web to obtain information on a wide variety of topics, such as travel, healthcare advice and government services [6]. Indeed, the Social Web has become an essential part of the communication eco-system worldwide. Therefore, it is important to address the issue of trust in the Social Web in order to leverage it for the betterment of human society. In the Social Web, there are four essential entities that are directly involved in trust: service consumers, service providers, services and content. In this special issue, we are looking for innovative technologies and solutions from diverse disciplines that address the issue of trust in the Social Web. Trust has been studied in many disciplines, including Sociology, Psychology, Economics and Computer Science. Each of these disciplines has defined and ...

Similar publications

Article
Full-text available
The paper focuses on the automatic opinion analysis of web discussions. First, the paper introduces an approach to the extraction of texts from web forum discussion contributions and their commentaries, as well as filtering out irrelevant and junk information from these texts. Second, it describes a variety of approaches to the sentiment analysis...
Article
Full-text available
Advertisers worldwide are designing advertising with an eye toward viral activity particularly within social networking sites such as Facebook. Yet, little is known about the social processes at play when ads are shared. Taking a consumer-centric approach, this study investigates the social processes central to ads going viral within the Social Web...
Article
Full-text available
Different proposals have been made in recent years to exploit Social Web tagging data to improve recommender systems. The tagging data was used, for example, to identify similar users or viewed as additional information about the recommendable items. In this work we propose to use tags as a means to express which features of an item users particular...
Article
Full-text available
This study focuses on the integration of a digital social network within a university department that teaches French as a foreign language in France and seeks to identify the conditions for the actualization of its potentialities for learning. We examine usage of the social network by 59 students and their perception of this site related to their l...

Citations

... There are some approaches to measuring trustworthiness in social media [37][38][39][40][41][42][43][44][45][46][47]. Nepal et al. [48] address the challenges of trust in web-based social media. The paper identifies the four main components involved in trust evaluation: service consumers, service providers, services and content. ...
Preprint
Full-text available
The concept of social trust has attracted the attention of information processors/data scientists and information consumers/business firms. One of the main reasons for acquiring the value of SBD is to provide frameworks and methodologies with which the credibility of online social services users can be evaluated. These approaches should be scalable to accommodate large-scale social data. Hence, there is a need for a thorough understanding of social trust in order to improve and expand the process of analysing and inferring the credibility of social big data. Given the exposed settings and relatively few restrictions of online social services, the medium allows legitimate and genuine users, as well as spammers and other less trustworthy users, to publish and spread their content. This chapter presents an overview of the notion of credibility in the context of SBD. It also lists an array of approaches to measure and evaluate the trustworthiness of users and their content. Finally, a case study is presented that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness in numerous domains over different time periods. The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques for predicting highly trustworthy domain-based users.
... There are some approaches to measuring trustworthiness in social media [37][38][39][40][41][42][43][44][45][46][47]. Nepal et al. [48] address the challenges of trust in web-based social media. The paper identifies the four main components involved in trust evaluation: service consumers, service providers, services and content. ...
Chapter
Full-text available
The concept of social trust has attracted the attention of information processors/data scientists and information consumers/business firms. One of the main reasons for acquiring the value of SBD is to provide frameworks and methodologies with which the credibility of online social services users can be evaluated. These approaches should be scalable to accommodate large-scale social data. Hence, there is a need for a thorough understanding of social trust in order to improve and expand the process of analysing and inferring the credibility of social big data. Given the exposed settings and relatively few restrictions of online social services, the medium allows legitimate and genuine users, as well as spammers and other less trustworthy users, to publish and spread their content. This chapter presents an overview of the notion of credibility in the context of SBD. It also lists an array of approaches to measure and evaluate the trustworthiness of users and their content. Finally, a case study is presented that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness in numerous domains over different time periods. The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques for predicting highly trustworthy domain-based users.
... Trust and distrust factors facilitate the attainment of more satisfactory bargaining outcomes [27,30]. Despite the importance of these factors, however, surprisingly little attention has been paid to explaining their effect on the negotiation process and its results. ...
Article
Full-text available
A group recommender system (GRS) is an increasingly popular type of recommender system (RS) that provides recommendations for a group of users rather than an individual. Most existing GRSs obtain group preferences by weighting the individual preferences equally, ignoring the relationships among group members within the group. This is not a practical scenario, because each member behaves differently. Therefore, in this article, we introduce a multi-agent negotiation mechanism in which each agent acts on behalf of one group member. The proposed negotiation protocol allows agents to accept or discard a part of an offer based on trust and distrust among users, which gives more agility to the negotiation process. Further, we use a memory for each agent in the group that records the offers previously proposed to that agent. The efficiency of trust-distrust enhanced GRSs is compared with traditional techniques, and the outcomes of computational experiments confirm the superiority of our proposed models over baseline GRS techniques.
... Numerous individuals are becoming increasingly involved in sharing their opinions on a variety of subjects on social media platforms [6][7]. It has become increasingly popular in recent years [19]. Consumers express their complaints about, or their satisfaction with, certain brands and services. ...
Preprint
This paper explores an effective machine learning approach to predict cloud market performance for cloud consumers, providers and investors based on social media. We identified a set of comprehensive subjective metrics that may affect cloud market performance via literature survey. We used a popular sentiment analysis technique to process customer reviews collected from social media. Cloud market revenue growth was selected as an indicator of cloud market performance. We considered the revenue growth of Amazon Web Services as the stakeholder of our experiments. Three machine learning models were selected: linear regression, artificial neural network, and support vector machine. These models were compared with a time series prediction model. We found that the set of subjective metrics is able to improve the prediction performance for all the models. The support vector machine showed the best prediction results compared to the other models.
... The most important factor that shaped modern Intelligence is definitely technology, with the advent of WEB 2.0, social media, and smart technologies. Web 2.0 comes with a new way of information spreading, as social media allows the Web to move from data and applications to user interaction and experience [1]. This means that the modern World Wide Web does not restrict Internet users just to read data or information but also allows them to share on different platforms their thoughts, opinions, personal data, and so on. ...
Article
Full-text available
Since the emergence of the Internet and social media, new Intelligence branches have flourished, like CYBERINT (Cyber Intelligence), OSINT (Open Source Intelligence) or SOCMINT (Social Media Intelligence), with the aim of exploiting different dimensions of the virtual world. These Intelligence-related disciplines may inquire into personal information, statements and conversations posted voluntarily on websites or social platforms in order to profile people, identify social networks and organizational structures, and uncover vulnerabilities and threats/risks that can jeopardize the security of individuals or organizations. In this respect, the Internet - as an environment - can provide valuable information from both the technical and the social side. This is why the World Wide Web is and will remain an important place to search for data and information that can be processed into Intelligence, and it is the reason why people working in sensitive domains (e.g. Intelligence) should be aware of their vulnerabilities and of the risks and threats posed by this environment. DISCLAIMER: This paper expresses the views, interpretations, and independent position of the authors. It should not be regarded as an official document, nor as expressing formal opinions or policies of NATO or the HUMINT Centre of Excellence.
... As for further work, we would like to extend the proposed model to more complex negotiation environments and will explore how these changes would affect the agreement. Another important future direction would be to incorporate trust-distrust strategies [3,4,32,44,53,54] during the negotiation process to further improve the accuracy of the proposed scheme. We would also like to explore GRSs based on the identification of social circles [1] and community detection in social networks [51] for more effective group recommendations. ...
Article
Full-text available
Recommender systems (RSs) have emerged as a solution to the information overload problem by filtering and presenting to users information, services, etc. according to their preferences. RS research has focused on algorithms for recommending items for individual users. However, in certain domains it may be desirable to recommend items for a group of persons, e.g., movies, restaurants, etc., for which some remarkable group recommender systems (GRSs) have been developed. GRSs provide recommendations to groups, i.e., they take all individual group members’ preferences into account and satisfy them optimally with a sequence of items. Taking into consideration the fact that each group member behaves differently with respect to the other members in the group, we propose a genetic algorithm (GA) based multi-agent negotiation scheme for GRS (GA-MANS-GRS), where each agent acts on behalf of one group member. GA-MANS-GRS is modelled as many one-to-one bilateral negotiation schemes with two phases. In the negotiation phase, we apply GA to obtain the maximum-utility offer for each user and generate the most appropriate ranking for each individual in the group. In the recommendation generation phase, GA is again employed to produce the list of ratings that minimizes the sum of distances among the preferences of the group members. Finally, the results of computational experiments are presented that establish the superiority of our proposed model over baseline GRS techniques.
... As for future work, we would like to extend our work to more complex consensus environments and explore how these changes would affect the agreement. A fascinating future direction would be to assimilate trust-distrust strategies [34,46] during the consensus process to further improve the proposed scheme. Further, the weights given to each expert could be learned through GA, and their effect on consensus also needs to be investigated. ...
Article
Full-text available
Recommender systems have focused on algorithms for recommendations for individuals. However, in many domains it may be desirable to recommend an item, for example movies, restaurants, etc., for a group of persons, for which some remarkable group recommender systems (GRSs) have been developed. GRSs satisfy a group of people optimally by considering an equal weighting of the individual preferences. We have proposed a multi-expert scheme (MES) for group recommendation using a genetic algorithm (GA), MES-GRS-GA, that depends on consensus techniques to further improve group recommendations. In order to deal with this problem of GRSs, we also propose a consensus scheme for GRSs in which the consensus of multiple experts is brought together to make a single recommended list of items, where each expert represents an individual in the group. The proposed GA-based consensus scheme is modelled as many consensus schemes within two phases. In the consensus phase, we apply GA to obtain the maximum-utility offer for each expert and generate the most appropriate rating for each item in the group. In the recommendation generation phase, GA is again employed to produce the resulting group profile, i.e., the list of ratings with the minimum sum of distances from the group members. Finally, the results of computational experiments that bear close resemblance to real-world scenarios are presented and compared to baseline GRS techniques, illustrating the superiority of the proposed model.
... Several approaches have been proposed for measuring credibility in social media [15][16][17][18][19][20][21][22][23]. For instance, Nepal et al. [24] address the challenges of confidence in web-based social media. That paper identifies the four main components involved in trust evaluation: service consumers, service providers, services and content. ...
Article
Full-text available
The widespread use of big social data has influenced the research community in several significant ways. In particular, the notion of social trust has attracted a great deal of attention from information processors and computer scientists as well as information consumers and formal organisations. This attention is embodied in the various shapes social trust has taken, such as its use in recommendation systems, viral marketing and expertise retrieval. Hence, it is essential to implement frameworks that are able to temporally measure a user’s credibility in all categories of big social data. To this end, this article suggests the CredSaT (Credibility incorporating Semantic analysis and Temporal factor), which is a fine-grained credibility analysis framework for use in big social data. A novel metric that includes both new and current features, as well as the temporal factor, is harnessed to establish the credibility ranking of users. Experiments on real-world datasets demonstrate the efficacy and applicability of our model in determining highly domain-based trustworthy users. Furthermore, CredSaT may also be used to identify spammers and other anomalous users.
... In fact, it is necessary for SR to be based on an adequate mechanism for analysing specific social relationships among users. Therefore, SR draws on the independent research field of Social Network Analysis (SNA) [33] and takes full advantage of the various works established in this area, such as how to detect trust in social networks [20,25,35,22,27,36], etc. ...
... Actually, social trust has become an important concept in service recommendation. It has been studied from different aspects: the data (or content), the services (or applications), the providers (Web sites, organizations, governments or individuals) and the service consumers (organizations or individuals) [36]. ...
Article
A few years ago, the Internet of (Web) Services vision emerged to offer services to all aspects of life and business. The increasing number of Web services makes service recommendation an important research direction for helping users discover services. Furthermore, the rapid development of social networks has accelerated the development of social recommendation approaches, which avoid the data sparsity and cold-start problems that are not handled well by the collaborative filtering approach. On the one hand, the pervasive use of social media provides a wealth of social information about users (e.g., personal data, social activities, relationships). Hence, the use of trust relations becomes a necessity in order to filter and select only the useful information. Several trust-aware service recommender systems have been proposed in the literature, but they do not consider time when detecting trust levels among users. On the other hand, in reality, the majority of users prefer advice not only from their trusted friends but also from friends with expertise in a specific domain. In fact, taking the user’s expertise into account in the recommendation step can resolve the user’s disorientation problem. For these reasons, we present, in this paper, a decentralized Web service discovery approach which is based on two complementary mechanisms. Trust detection is the first mechanism; it detects the social trust level among users. This level is defined in terms of the users’ interactions over a period of time and their interest similarity, which are inferred from their social profiles. Service recommendation is the second mechanism; it combines the social and collaborative approaches to recommend to the active user the appropriate services according to the expertise level of his or her most trustworthy friends. This level is extracted from the friends’ past invocation histories according to the specific domain, which is known in advance from the target user’s query. Performance evaluation shows that each proposed mechanism achieves good results. The proposed Level of social Trust (LoT) metric gives more than 50 % better precision compared with the same metric without the time factor. The proposed service recommendation mechanism, which is based on trust and domain-specific expertise, firstly gives an RMSE value lower than other trust-aware recommender systems such as TidalTrust, MoleTrust and TrustWalker. Secondly, it provides a response rate 4 % better than a recommendation mechanism based only on trust.
... Social media provides a platform for people to share their opinions and feelings on various topics. It has become increasingly popular in recent years [7]. Consumers are reaching out to social media to voice their satisfaction with, or complaints about, brands and services. ...
Conference Paper
Full-text available
We investigate the use of subjective metrics in social media to evaluate cloud service performance in the market. We first examine the subjective factors that drive cloud consumers to/from purchasing cloud services. These include the ability to achieve greater scalability, security concerns, etc. according to several industry surveys. We then analyse the correlation between the consumers' perception on those factors and the cloud market revenue growth. This paper identifies the unique subjective metrics that are indicative of cloud service performance from the market perspective. The cloud consumers' perception is sourced from several particular social media using sentiment analysis techniques. We focus on consumers' perception on a leading cloud provider that holds the majority of the cloud market share. We find that subjective metrics are empirically proved to be applicable in evaluating the performance of cloud services in the market.