Credibility and Trustworthiness of Online Sources

This article offers another perspective on how to evaluate the credibility and trustworthiness of online sources.

Abstract

This article investigates the impact of social dynamics on online credibility. Empirical studies by Pettingill (2006) and Hargittai, et al. (2010) suggest that social validation and online trustees play increasingly important roles when evaluating credibility online. This dynamic puts pressure on the dominant theory of online credibility presented by Fogg and Tseng (1999). To remedy this problem, we present a new theory, which we call aggregated trustworthiness, based on social dynamics and online navigational practices.

 

Introduction

Credibility of online information has always been a key factor in our understanding of the Web at large. With the recent rise in social media and particularly user-generated content, our need for a solid understanding of online credibility is becoming ever more important (Gillmor, 2008). Recent empirical studies suggest that we put more emphasis on social validation than traditional expert sources when assessing online information (Hargittai, et al., 2010; Pettingill, 2006). Yet, our current theories fail to explain this dynamic, leaving us with an insufficient framework to analyze and explain the workings of Web sites such as Twitter and Wikipedia.

The point of departure for this paper is a discrepancy between the leading theory of online credibility and our actual online behavior. We argue that social dynamics online deeply impact our evaluation process by incorporating other people's evaluations and the navigation process itself. The unique characteristics of online information-seeking routines and navigational practices call for a more dynamic theory of online credibility evaluation.

We call this theory aggregated trustworthiness.

 

Defining Online Credibility

Several theories have tried to pinpoint the concepts involved in and the workings of online credibility. Fogg and Tseng's (1999) paper on computer credibility is widely cited and generally offers the best theory of online credibility. However, their core theory was developed over a decade ago, and as we will argue, crucial changes in both technological reach and online practices have made their theory of online credibility inadequate for our present-day online environment.

To understand why, it is necessary to explain Fogg and Tseng's theory of computer credibility. They stress that credibility is always a perceived quality and not a property residing in any human or computer product. Rhetoricians use the term ethos to explain how the subjective quality of credibility evaluation (also known as trustworthiness) is primarily established by the receiver of the message, incorporating emotional sentiment and situational context (Bitzer, 1968; Warnick, 2004; Boyd, 2008).

In other words, we cannot design credibility itself, only design for credibility. Fogg uses a simple metaphor to illustrate credibility: like beauty, credibility is in the eye of the beholder, and "it exists only when you make an evaluation of a person, object, or piece of information".

That said, evaluating credibility is not random. Certain qualities exist to help lay the basis for a Web site to be considered credible (Fogg, et al., 2002). Fogg and Tseng (1999) call this property of credibility believability.

In its simplest form, credibility is the persuasive nature of the medium (Fogg and Tseng, 1999; Fogg, 2003). Thus, if a Web page is believable, it is also considered credible. Fogg and Tseng argue that of the dozen or more elements that contribute to credibility evaluation, there are just two key dimensions of credibility: Trustworthiness and Expertise (Fogg and Tseng, 1999; Fogg, 2003).

Figure 1: Model of the key dimensions of credibility (Fogg, 2003).


Fogg defines trustworthiness in terms of being well-intentioned and unbiased, and expertise in terms of perceived knowledge, skill, and experience (Fogg, 2003). According to Fogg and Tseng's theory, a high level of credibility incorporates both a high level of trustworthiness and a high level of expertise. Hence, according to their theory, a Web site cannot be considered credible if it does not exhibit both of these qualities. This construct is the basis of their theory of online credibility.

 

Aggregated Trustworthiness

Following this theory, online users would tend to look for both trustworthiness and expertise cues to establish the level of credibility of the information in question – effectively mimicking the traditional routine of source evaluation. In this routine, a user would look at who authored the information to assess whether the author is trustworthy and an authority on the matter; if so, strong credentials and objectivity would serve as strong credibility cues. In most offline settings, this approach is very robust, since verifying the identity and credentials of the source is much easier than evaluating the accuracy of their claims.

The problem, however, is that a great deal of information online is detached from these credential and authority cues. Particularly in user-created content platforms like wikis, review and rating Web sites, blogs, forums, Twitter, etc., we find very few, if any, direct cues of expertise. Following Fogg and Tseng, this dynamic severely lowers the perceived credibility of online information, since we are not able to easily identify authors and hence evaluate their expertise.

Yet, recent empirical studies suggest that youth do in fact perceive information without an identified author as credible (Pettingill, 2006; Hargittai, et al., 2010), especially if some sort of collective judgment of the information is available (Lankes, 2008; O'Byrne, 2009). This means that the feedback of others is crucial when assessing the credibility of online information (Weinschenk, 2009; Ljung and Wahlforss, 2008). An extension of this dynamic is the reliance on trustees (Pettingill, 2006).

Trustees often act as a form of authority and provide a baseline of trustworthiness. Trustees are not necessarily experts on the specific topic, yet they are important fixtures in the evaluation process and the overall dynamic of establishing credibility (Wang and Emurian, 2005). In addition, the navigation process itself affects the perceived credibility by highlighting search rankings, topic correlations, and brand exposure (Hargittai, et al., 2010).

The social element attached to information is thus key when users evaluate its credibility. Enabling vote-like behavior such as comments, "Likes", ratings, and even links simply provides a much broader spectrum of validation than is possible in any offline setting. Collecting multiple streams of trustworthiness cues to form an aggregate of credibility is at the root of this dynamic. We call this theory of online credibility aggregated trustworthiness. The illustration below shows the factors and dynamics of aggregated trustworthiness (see Figure 2).

Figure 2: Illustration of aggregated trustworthiness.


On the right side of the solid arrow is perceived credibility: the degree to which we believe the information presented to us.

On the left side of the solid arrow, there are three main factors.

 

  1. Social validation includes the large-scale verifications made by others (e.g., comments, Facebook Likes, shares, social bookmarks, ratings, etc.). Social validation may include profiles, but is not constrained to them. In our theory, social validation simply means that the more people acknowledge a certain piece of information, the more trustworthy it is perceived to be.

  2. Profile provides the baseline for identity online as well as adding a fixture to the evaluation (e.g., a LinkedIn profile, Twitter stream, personal Web site, or blog). Having a known identity can be critical when assessing important information.

  3. Authority & trustee includes the known brand or authority on the matter (e.g., New York Times, Stanford University, etc.), but also trustees verifying lesser known sources (e.g., social network friends, Wikipedia references, Twitter personas, etc.).

 

These three factors are dependent on each other and are thus vetted through a larger system of navigation (the dotted arrows). This dynamic includes basic search and navigational processes (e.g., search context such as history, ranking, lookups, links, etc.). Taken together, the model illustrates how social validation may provide verification of an authority, which in turn may provide verification of a specific profile, focusing our evaluation process and establishing the level of perceived credibility of the information.
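To make the relationship between the factors concrete, the toy sketch below combines the three cue streams from Figure 2 into a single estimate of perceived credibility. The factor names follow the model above, but the normalized scores, the weights, and the linear combination are our own hypothetical simplifications for illustration; the paper describes a dynamic, not a formula.

```python
# Hypothetical illustration only -- not a formula from the paper.
from dataclasses import dataclass

@dataclass
class CredibilityCues:
    social_validation: float   # e.g., normalized volume of comments, Likes, ratings (0..1)
    profile: float             # e.g., strength of an identifiable profile (0..1)
    authority_trustee: float   # e.g., known brand or vouching trustees (0..1)

def perceived_credibility(cues: CredibilityCues,
                          weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Aggregate the three cue streams into a 0..1 credibility estimate."""
    w_social, w_profile, w_authority = weights
    return (w_social * cues.social_validation
            + w_profile * cues.profile
            + w_authority * cues.authority_trustee)

# Example: a tweet with heavy social validation but no known authority behind it
tweet = CredibilityCues(social_validation=0.9, profile=0.6, authority_trustee=0.2)
print(round(perceived_credibility(tweet), 2))  # 0.6
```

Note that the dotted navigation arrows in Figure 2 are not modeled here; in practice, the weights themselves would shift with the search context and the evaluator's situation.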

However, we are not suggesting that quantitative metrics such as Facebook Likes or Google's PageRank can or should substitute for a critical analysis of online information. Yet, we do argue that the theory of aggregated trustworthiness explains a dynamic that is unique to the Web and made possible by factors not attainable in any offline setting, such as large-scale feedback systems. These elements shift the perception of credibility from necessitating a fixture of traditional expertise cues to a process which is inherently more dynamic and flexible, not hinging on any root authority.

Changing the dynamics this way effectively spreads out the risk of being misled, from relying on a few stable sources to relying on many, albeit less stable, sources. The key here is how the dynamic functions without cues of root authority, and hence without the perceived expertise of the author of the information – a dynamic in stark contrast to our current theory of online credibility formulated by Fogg and Tseng (1999). Instead of factoring in the perceived expertise of the sender of information, we leverage social feedback and collective judgment to assess its credibility. Joining many untrustworthy pieces of information to stitch together a patchwork of credibility is simply easier in an online setting, where root authority is much harder to establish (Lankes, 2008; Shirky, 2009).

We have based our theory of aggregated trustworthiness on two major studies: one of young adults' evaluation of Web content and information-seeking routines (Hargittai, et al., 2010) and one of youth's online research practices (Pettingill, 2006).

Both studies demonstrate how youth gather credibility cues from a broad spectrum of sources, not confined to expert sources. Hargittai, et al.'s (2010) study even demonstrates how users not exposed to source credentials or traditional expertise cues still manage to successfully complete their given information-seeking tasks.

Additionally, a group in one study placed great emphasis on trustees when assessing the quality of information (Pettingill, 2006). A trustee is typically a guiding person the participants know from an offline setting, such as a teacher or a parent. Yet, youth involved in social networking had expanded these trustee roles to include members of their online network, leveraging the flexibility and reach of online social networks when evaluating online credibility.

As Pettingill explains: "While subjects were ambivalent about the use of Wikipedia generally, those engaging in social networking sites daily were more likely to cite Wikipedia as a trusted source for information". The evaluations made by others are thus a key cue to determine the credibility of the information in question (O'Byrne, 2009). Aggregating a wealth of trustworthiness cues provides the most robust form of evaluation when author credentials are hard or impossible to come by.

Understanding the social dynamics online is thus far more important in credibility evaluation than the static methods of traditional source critique. Traditional theories of online credibility, such as the one proposed by Fogg and Tseng, give very little attention to this fact. Root authority and source critique are still important factors (e.g., when analyzing author intent), but they are not the preferred method of evaluating online credibility. Using our theory we can explain why a number of Web platforms from Wikipedia to Twitter are perceived as credible, despite the lack of known authorities and traditional expertise cues.

Examining other individuals' evaluations (e.g., Likes, shares, comments, etc.) or the aggregate of these activities (e.g., search ranking, Twitter trending topics, etc.) provides a baseline of judgment and helps our own evaluation (Shirky, 2009; Weinschenk, 2009). Key in this dynamic is how users rely on the built-in filtering mechanisms of the Web, such as search and social recommendations, putting greater emphasis on navigational tools and processes than on expertise (Lankes, 2008; Shirky, 2008; Hargittai, et al., 2010). The core values are the same, but the process is different. In the following, we will illustrate this notion's explanatory power by using Wikipedia as an example.

 

Explanatory Power

Aggregated trustworthiness helps explain the success of a number of online platforms like Wikipedia, eBay, Twitter, LinkedIn, and many more. Wikipedia has proven exceptionally difficult to explain by traditional theories of credibility (Warnick, 2004), and hence it offers a prime example for highlighting the explanatory power of aggregated trustworthiness.

The platform of Wikipedia has brought with it an array of obstacles in terms of evaluating credibility, verifiability, and consensus of interpretation. Focusing only on the credibility of edits, we are faced with the following scenario. Normally, in an offline setting, the author of a text is known and the time of publication is fixed; not so on Wikipedia. What makes Wikipedia possible in the first place – its networked structure and review process – is also what makes it practically impossible to evaluate critically using traditional methods of source evaluation (Standler, 2004; Stvilia, et al., 2005).

Evaluating a Wikipedia article by looking for a known author or time of publication is essentially meaningless due to its format and the dynamic nature of the medium (Warnick, 2004). Since articles are written and rewritten by multiple contributors, very few of whom may be said to be a known authority on the matter (or be visible to anyone outside the Wikipedia community), the traditional approach of evaluating author credentials and point of origin simply breaks down. The collaborative nature of Wikipedia is thus limiting for the traditional method of credibility assessment, but actually advantageous for the aggregated kind.

The vast majority of the editors and contributors on Wikipedia are anonymous. Yet, added together, the sum of their edits and re-edits seems to make up for the inherent lack of root authority and identity. Seemingly, we do not trust the individual anonymous user, but we do trust a lot of them (Lankes, 2008).

One explanation lies in the fact that it is extremely cumbersome to individually coordinate or influence the actions of a very large number of people (Shirky, 2008), especially when all their actions are transparent to one another. It is simply unlikely that a contributor on Wikipedia has bribed all the other editors; it is much more likely that they, independently of each other, found the original contribution acceptable.

According to Shirky, it is the incredible amount of social media that necessitates this post-hoc filtering approach: "The expansion of social media means the only working system is publish-then-filter".

The dynamic of Wikipedia's post-hoc filtering system is dependent on social cues, broadly defined as our reactions to certain behavior characteristics, social validation principles, and general reinforcements through other people's judgments (Resnick, et al., 2003; Weinschenk, 2009).

By consulting what (a lot of) other people think, we are presented with a form of transparency that is normally unavailable – be it for news articles, forum posts, or blog comments. The opinions of others matter to us: studies on whether online recommendations influence buying decisions showed a volume increase of 20 percent for items with recommendations over items without them (de Vries and Pruyn, 2008).

Since we react favorably to social validation, other people's collective judgment is perceived as a trustworthiness metric. We even seem to react to artificial mimicry of social cues, evaluating trustworthiness cues that come directly from computers (Mui, 2002; Nass and Reeves, 1996). This tendency toward social validation and reliance on social cues is a key factor in the dynamic of wiki edits and a founding principle of aggregated trustworthiness.

Researchers at the University of California, Santa Cruz have taken this dynamic a step further and developed an algorithm called WikiTrust. Critics of Wikipedia have argued that there is no apparent way to see which articles, and more importantly which parts of an article, are credible and which are not (Stvilia, et al., 2005). WikiTrust counters this criticism by offering a solution based on the process of multiple edits, social validation principles, and post-hoc filtering.

The algorithm works by color-coding every word in every article based on the reliability of its author and the length of time the edit has persisted on the page. It counts the number of silent edits any word has undergone and assigns a value to the author: the more unedited text an author has on Wikipedia, the higher the reputation the algorithm assigns to that author. Newly added edits from questionable sources are highlighted in bright orange; the longer information remains on the page, literally surviving multiple edits, the more it fades from lighter shades of orange to, eventually, white (see Figure 3).

Figure 3: An example from WikiTrust: the Wikipedia page "Politics of Denmark". Notice the error "Anders Fjogh Rasmussen" highlighted in bright orange. The word Fjogh is a play on the Prime Minister's real middle name, Fogh, and literally means fool in Danish.
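The sketch below is a minimal, illustrative rendering of the mechanism just described: text earns trust by surviving revisions, authors earn reputation when their text survives, and the two signals are mapped to a shade running from bright orange to white. It is not the actual WikiTrust implementation; the update rule, the constants, and the equal weighting are hypothetical simplifications.

```python
# Toy sketch of a WikiTrust-style scheme; constants and weighting are hypothetical.
from dataclasses import dataclass

@dataclass
class Author:
    name: str
    reputation: float = 0.0      # grows as the author's text survives edits

@dataclass
class Word:
    text: str
    author: Author
    revisions_survived: int = 0  # "silent edits" the word has lived through

def record_revision(surviving_words: list) -> None:
    """After a new revision, credit every surviving word and its author."""
    for w in surviving_words:
        w.revisions_survived += 1
        w.author.reputation += 1.0   # unedited text raises the author's reputation

def trust_score(word: Word, max_revisions: int = 10, max_reputation: float = 50.0) -> float:
    """Combine word age and author reputation into a 0..1 trust score."""
    age = min(word.revisions_survived / max_revisions, 1.0)
    rep = min(word.author.reputation / max_reputation, 1.0)
    return 0.5 * age + 0.5 * rep     # hypothetical equal weighting

def shade(word: Word) -> str:
    """Map trust to a background shade, from bright orange (new/untrusted) to white."""
    score = trust_score(word)
    if score < 0.25:
        return "bright orange"
    if score < 0.75:
        return "light orange"
    return "white"
```

In this toy version, the erroneous "Fjogh" edit in Figure 3 would start out bright orange and would only whiten if it survived many revisions and its author accumulated reputation, which is exactly the post-hoc filtering dynamic described above.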


Wikipedia, and particularly the WikiTrust algorithm, are clear examples of the explanatory power of aggregated trustworthiness. By embedding social verification into the edit and navigation process itself, users are able to circumvent a credibility evaluation based on expertise. WikiTrust is just one example of a service letting people assess the credibility of online information based on the collective judgments made by other people.

 

Conclusion

Aggregated trustworthiness provides a more adequate explanation of online credibility. Incorporating social validation, online trustees, and profile-based Web sites, this notion is a first step towards better explaining the processes of credibility evaluation for online information and platforms lacking traditional expert cues. Highlighting recent empirical studies, we have argued that information without explicit cues of authorship and expertise is in fact, and contrary to our present theories, perceived as credible. To illustrate the explanatory power of aggregated trustworthiness we have focused on the social dynamics of Wikipedia and how these dynamics mitigate the need for expertise to establish credibility. Aggregated trustworthiness is based on this social dynamic and points to a new direction for future research on online credibility.

 


Source: Johan Jessen and Anker Helms Jørgensen, https://firstmonday.org/ojs/index.php/fm/article/view/3731/3132
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 License.
