Copyright © 2001, Rockad.

The Future of Trust and the Trust of the Future


Nicklas Lundblad

nicklas@acm.org

Abstract.

This paper explores the notion that the focus on trust in today’s research and business, together with the technical evolution in this area, is changing the concept of trust itself. Using a collection of examples to give an overview of how “trust” is used and implemented today, we try to show that trust is moving from being transformative, a question of belief, to being informative, a question of knowledge. In consequence we find that today we trust only when we are certain. Trust, that is, where no trust is truly needed.

Introduction

Trust is a fuzzy concept that resists simple definitions. In the literature we find definitions ranging from the epigrammatic (Luhmann 1979) to the formalistic (Josang 1999, Reagle 1996). In this paper we will not attempt a definition at all but take the naïve understanding of the word as our starting point. We will then introduce a simple overview of how the concept looks and how it is used today, and from this we will try to draw conclusions about changes and evolutions.

We speak today of “creating trust” and using it in an instrumental way (Karvonen 1999). Trust has become a means to entice customers and partners in an ever-more anonymous world (Benno 1998). This is an effect of several interrelated phenomena that produce a whole new set of premises for interpersonal and business relationships. The two most important such phenomena are:

  1. The new technologies. Ranging from the Internet to peer-to-peer technologies, these connect us with a growing number of nodes in international networks.

  2. The globalised world. The technologies make global interaction possible, and we find that this is part of a greater tendency towards globalisation that also creates the need for trust.

In trying to understand how these developments affect our notions of trust we will present a number of examples in the following section. First we have to introduce a number of questions that will guide us through those examples. These questions are all different questions about trust, and they reflect different trust problems. The set of questions presented here is not exhaustive, but gives a good overview of the kinds of questions where trust comes into play.

  1. Can I trust the technology? This issue touches on the complexities of technical trust and is one of the more basic questions.
  2. Can I trust the users of the technology? This question is more fundamental than the first, and really more of a trust issue. When interacting with someone over the Internet, can I trust him or her? The technology might be all right, but is the user to be trusted?
  3. Can I trust the politics? The new technology has political implications and problems. The question of governance (‘who rules the net?’) might be important to some users in deciding whether or not they will trust the new technology. Issues of surveillance and privacy also enter into the discussion at this point.

All these questions are about trust and whether or not we should trust users, technology and politics. The questions are not new, but some of the answers are, and those answers are changing the concept of trust.

Trust as an artifact

As mentioned above, we today tend to think that trust is an artifact that can be created or implemented. This is not a self-evident truth. It might very well be argued that trust is an emergent quality that is not reducible to a set of ‘quick fixes’. The use of metaphors here is important. When we speak of “creating trust” as opposed to “fostering trust” or “growing trust”, we make certain assumptions about trust that are not easily demonstrated to be either true or false. Nevertheless we today treat the problem as an engineering problem.

We will not delve deeper into these difficulties, even though I feel they are important, but rather give a few examples of how trust engineering looks today and what tools are used to create trust. In each and every example I will try to show the conceptual shift that the engineering approach creates, to highlight the ways in which these new approaches are changing our notion of trust.

Consumer trust strategies

There are many different strategies for creating consumer trust. I will only touch lightly upon a few of these, not to develop them in extenso, but to show how they have changed our notion of trust.

eBay, the reputation-based model and SquareTrade

eBay.com, the large on-line auctioneer, is faced with a trust problem the likes of which few other on-line actors face: it has to make it plausible to those attending the auctions that the buyers and sellers are indeed trustworthy. The company utilises different approaches in resolving this problem. One approach is to offer a reputation service. This service works in a simple way: every time a transaction is concluded the parties are invited to offer commentary on how they felt the other party handled the deal. The results are then aggregated over transactions and presented on the user’s presentation page:

Fig 1. The eBay user’s reputation page

As we can see, the user is identified by a handle only, so as to ensure rudimentary privacy. As an evaluating user I am here offered information on good, bad and neutral feedback, as well as on bid retractions.

If we accept the notion that this is trust creation we seem to accept a curious idea: that if there is no reason to distrust a person, we should trust him. The only data the eBay reputation model offers is data on past behaviour. Should this data be bad, we might do better not to interact with the user in question (so the system efficiently generates distrust, this is true!), but if the user has only positive feedback, does this mean that he or she is trustworthy? Will he or she remain trustworthy? Not necessarily. We could call this idea the notion of the double negative: lack of distrust equals trust. It is true in mathematics that -(-1) = 1, but is this the case also when it comes to trust? The reputation-based model implies so.
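The aggregation step behind such a reputation page can be sketched in a few lines. This is a minimal sketch only: the feedback labels, class name and scoring are hypothetical and not eBay’s actual implementation.

```python
from collections import Counter

class ReputationLedger:
    """Minimal sketch of a reputation record in the eBay style.

    Aggregates per-transaction feedback into the counts shown on a
    user's presentation page. Labels and scoring are illustrative only.
    """

    def __init__(self):
        self.feedback = Counter()  # counts per feedback label

    def record(self, label):
        if label not in ("positive", "neutral", "negative", "retraction"):
            raise ValueError("unknown feedback label: " + label)
        self.feedback[label] += 1

    def score(self):
        # The familiar net score: positives minus negatives.
        return self.feedback["positive"] - self.feedback["negative"]

    def lacks_distrust(self):
        # The 'double negative' discussed above: no recorded reason to
        # distrust, which the model then treats as grounds for trust.
        return self.feedback["negative"] == 0 and self.feedback["retraction"] == 0

ledger = ReputationLedger()
for label in ["positive", "positive", "neutral"]:
    ledger.record(label)
print(ledger.score())           # 2
print(ledger.lacks_distrust())  # True
```

Note that `lacks_distrust` makes the conceptual problem visible in code: the method can only report the absence of negative data, yet the model reads its result as positive trust.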

eBay also works with a trusted third party that offers on-line dispute resolution: SquareTrade. The idea behind this arrangement is to “build trust” (SquareTrade’s homepage) by ensuring the identity of the user and that he or she will comply with the rules of the low-cost on-line dispute resolution process.

When SquareTrade registers the user, he or she has to sign a contract to the effect that they will comply with decisions made by SquareTrade. The user’s reputation page is then updated with a trustmark (more on these later) that shows that the user is indeed a part of the SquareTrade program.

The conceptual shift here is noteworthy. When I ask why I should trust a user, the SquareTrade answer is this: because we have eliminated the need for trust by identifying and legally obliging the user to do what he or she promises. To paint a picture: we are asked to trust a person with a gun to his or her head, by the gunman. This is, naturally, an oversimplification, but it illustrates the point.

Epinions and the web of trust

Epinions, which produces reviews of products and services, uses trust creation to solve a cost-related problem in its business model. An actor that wishes to aggregate reviews must give the user a way to sort these reviews according to personal taste and relevance. One simple, and human, way of doing this is to look for a person, a name, follow him or her through some products the user already owns, and then decide whether or not the reviewer is trustworthy. This is what practically every one of us does in trying to find a movie reviewer that fits our taste.

This process, however, is time consuming: sifting through thousands and thousands of reviewers is simply not feasible. Epinions came up with an interesting solution to this problem. What if it were possible to review not only the products but the reviewers as well? The result would seem to be pure genius: if I review a reviewer and deem him or her trustworthy, I simply add him or her to my list of trusted reviewers. Then I hope that he has done the same! If he has, I can trust his trusted reviewers by proxy.

This model also allows people to acquire a certain trust capital: the number of people that trust them. This is an example:

Fig 2. A node in the web of trust. Mkaresh trusts 24 people and is trusted by 124.

The interesting thing here is that this approach seems to reduce the cost of evaluating the trustworthiness of a network to that of evaluating but one of the nodes in that network. An important gain, and a sign that trust can be used to reduce complexity. (This has been indicated as one of the great advantages of trust in the literature (Luhmann 1979).)

The problem is, however, that this notion of transitive trust remains relatively unexplored. It seems to imply some rather problematic results. Is it, in general, true that:

(i) if a trusts b and b trusts c, then a trusts c?

This is the very premise the concept of transitive trust rests on. I can however think of examples where this might not be the case (where the trust between a and b and that between b and c are based on different parameters – love and fear, for example). And more generally: even if we accept (i) above, can the premise then be generalised to this?

(ii) if a1 trusts a2 and a2 trusts a3 … in an unbroken chain to an, then a1 trusts an.

This seems improbable (for a more formal argument on why it is problematic see Christianson and Harbison 1996). At least I would like to place an upper bound on n, and thus establish a sort of maximum trust diameter in a web of trust. The conceptual shift here is towards the mathematical model, and the assumption is that trust can be thought of as a transitive relation. This seems to represent a significant shift from older concepts of trust, which were not so formalised. (We can find examples of a simpler version of transitive trust, however, in sayings such as “any friend of yours is a friend of mine”.)
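The bounded form of transitive trust suggested above can be sketched as a graph search that refuses to follow chains longer than the maximum trust diameter. The function name, the edge representation and the default bound are all hypothetical choices for illustration:

```python
from collections import deque

def trusts_transitively(edges, a, b, max_diameter=3):
    """Does a trust b by proxy, following direct-trust edges?

    edges: dict mapping each user to the set of users they trust directly.
    max_diameter caps the chain length, implementing the 'upper bound
    on n' suggested in the text. All names are hypothetical.
    """
    queue = deque([(a, 0)])
    seen = {a}
    while queue:
        node, depth = queue.popleft()
        if node == b:
            return True
        if depth == max_diameter:
            continue  # chain too long: trust is not extended further
        for nxt in edges.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return False

web = {"a1": {"a2"}, "a2": {"a3"}, "a3": {"a4"}, "a4": {"a5"}}
print(trusts_transitively(web, "a1", "a4"))  # True: chain of length 3
print(trusts_transitively(web, "a1", "a5"))  # False: exceeds the diameter
```

The sketch makes the assumption behind (i) and (ii) explicit: trust is treated as a purely structural relation, with no room for the differing parameters (love, fear) that the text argues may break transitivity.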

Persona and the infomediary

A third model for the creation of trust, which we will only briefly touch on, is the infomediary. The business model behind the infomediary is simple: as a user I am supposed to leave my personal data with the infomediary and then rest assured that it will manage that data to the best of its ability. An example of this business model can be found in Persona:

Fig 3. Persona offers to mediate between the user and the direct marketing market

The infomediary represents an instance of another form of transitive trust. The transition here is not along a chain of trusted nodes in the network; instead it quickly branches out. The central question is a variation of the one presented above:

(iii) if a trusts b to handle c, does that mean that a trusts b to handle d?

And in more general form the question becomes:

(iv) if a1 trusts a2 to handle a3, does that then imply that a1 trusts a2 to handle any an?

The difficulties facing the notion of linear transitive trust also face this branching brand of transitive trust. Modelling trust in this way is changing the concept.
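Questions (iii) and (iv) amount to asking whether trust is scoped. A minimal sketch of scoped trust, with entirely hypothetical names and scopes, shows that trust recorded for one matter says nothing about another:

```python
# Trust modelled as scoped: a trusts b *to handle* a specific matter.
# A plain set of (trustor, trustee, scope) triples; names hypothetical.
scoped_trust = {
    ("a", "b", "shipping address"),
    ("a", "b", "email address"),
}

def trusts_for(trustor, trustee, scope):
    """True only if trust has been granted for this exact scope."""
    return (trustor, trustee, scope) in scoped_trust

# Question (iii): trust for one scope does not carry to another.
print(trusts_for("a", "b", "email address"))   # True
print(trusts_for("a", "b", "medical record"))  # False
```

The infomediary model, by contrast, implicitly answers (iv) in the affirmative: once my data is handed over, the trustee handles whatever scopes arise.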

Trustmarks

Lastly we will discuss the most engineered response to the lack of trust that users might feel when interacting over the Internet: the trustmark.

This construct is simple: by certifying certain users with a designated trustmark the trust issue is thought to be resolved. It will now, these programs say, be obvious whom to trust: the ones with the mark!

There are several different trustmarks on the market today, ranging from the relatively inexpensive TRUSTe to the more advanced accounting-firm programmes:

Fig 4. Three examples: the low-end TRUSTe and BBB-online marks, and the more expensive WebTrust mark.

The problems with these solutions reflect the change in trust they effect: the idea that we will trust an unknown mark more than an unknown web site is quite intriguing. It might be true in some cases, but it is far from a general rule.

When discussing these issues it is worthwhile to return to a basic notion of what trust is and how it grows. Trust grows, for one thing, by habit. That habit comes with the use of new technology. Assuming that the technology adoption curve looks somewhat like the one introduced by Geoffrey Moore (Moore 1991), we can guess that there is a correlation between the adoption of a technology and trust in that same technology. If we map this in a Moore curve we get the following result:

Fig 4. Technology Adoption Curve (TAC) and Trust Curve (TC)

Trust in a new technology, as mapped above, comes quite naturally. It grows by habit. In general we speak of growing to trust someone, et cetera, and the presence of technologies seems to inspire a quiet form of trust as well.

What we might hope to accomplish with the trustmark, then, would be to move the trust curve to the left, so that people begin to trust the technology more quickly than they otherwise would. There seems to be a slight problem here, however. A trustmark, simple as it is, is actually a technology itself. This means that it has to be trusted first, before it can have any effect whatsoever on the general trust curve for the other technologies it was invented to certify.
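The intended effect of a trustmark on the trust curve can be sketched with a logistic S-curve as a rough stand-in for Moore’s adoption curve. The lag and shift values are purely illustrative assumptions, not empirical figures:

```python
import math

def adoption(t, midpoint=0.0, steepness=1.0):
    """Logistic S-curve, a rough stand-in for an adoption curve."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Trust is assumed to lag adoption by some habit-forming delay; a
# trustmark, if it works, shifts the trust curve to the left. Both
# the lag and the size of the shift are hypothetical.
lag, trustmark_shift = 2.0, 1.0
t = 1.0
trust_plain = adoption(t, midpoint=lag)
trust_marked = adoption(t, midpoint=lag - trustmark_shift)
print(trust_marked > trust_plain)  # True: the mark accelerates trust
```

The sketch also hints at the circularity noted above: the shift only materialises once the trustmark itself has climbed its own trust curve.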

Trustmarks have to fulfill a number of conditions to be worthwhile, the more important of which are:

  1. A trustmark must become trusted faster than the technology it is supposed to certify. This is not merely a condition on the individual trustmark; it also involves the total number of trustmarks on the market. The more there are, the slower the trust curve for each individual trustmark.
  2. The cost of implementing the trustmark must be equal to or less than the gains expected from accelerating trust in the other technology. This cost is not only intrinsic to the trustmark program but is also affected by the existence of a wealth of alternative trustmark systems.
  3. The trustmark must balance between creating trust and deterring users by its very existence. It is actually possible that the act of marking up certain sites as secure might, to the naïve user, imply that all others are not. This goes especially for the case where the number of trustmarks feels overwhelming.
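Condition 2 above is essentially an inequality, and conditions 1 and 3 say that competing marks erode the expected gain. A rough sketch, with an entirely hypothetical dilution factor standing in for that erosion:

```python
def trustmark_worthwhile(cost, expected_gain, competing_marks, dilution=0.1):
    """Condition 2 from the list above, as a rough inequality.

    Each competing trustmark is assumed to dilute the expected gain,
    reflecting conditions 1 and 3. The dilution factor is hypothetical.
    """
    effective_gain = expected_gain * (1 - dilution) ** competing_marks
    return cost <= effective_gain

# With five competitors, a 2000-unit gain shrinks to about 1181 units.
print(trustmark_worthwhile(cost=1000, expected_gain=2000, competing_marks=5))  # True
print(trustmark_worthwhile(cost=1500, expected_gain=2000, competing_marks=5))  # False
```

The point of the sketch is only to show the shape of the trade-off: the same mark can be worthwhile or not depending on how crowded the trustmark market is.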

In conclusion we can state that the engineering approach of trustmarks seems to change the concept of trust in many different ways. The idea seems to imply a slightly more static notion of trust than the one we have had up until now.

This said, it should be noted that some recent studies imply that the use of trustmarks on individual web sites might boost confidence in those sites (Krishnamurthy 2001). This effect might however be ascribed to a number of different factors, including the fact that we trust a site that claims to deserve trust more than one that does not.

Other trust developments

Having discussed some consumer examples it is now time to turn to more complex trust environments, such as communities and games, and how these change the concept of trust. Here we will discuss only games; communities are treated in the literature (Abdul-Rahman, Hailes 2000).

Computer games and trust

We can observe relationships based on trust in many different new media, especially in computer games. Sometimes the trust arises spontaneously and users gather together in clans or teams to work cooperatively against an enemy or an enemy clan. The implicit trust in game modes like team deathmatch and capture the flag has lately become even more popular with games like Counter-Strike. These games rely on cooperation and a basic sense of trust between group members. This trust is highly compartmentalised.

Fig 5. A cooperative effort to foil the enemy – based on trust!

Another, more advanced, approach to trust is found in some of the on-line role-playing games that we observe gaining ground rapidly today. Asheron’s Call, for example, has implemented a trust model in its design. By trusting someone more advanced than myself to become my lord, I can gain levels and experience more quickly than I otherwise would. This more advanced player can give me weapons, protection and advice that enable me to become more skillful quickly. This feudal trust model is built into the game. It is called an Allegiance and constitutes a popular part of the game. The lord receives a small share of my experience points and is thus able to boost his own performance if he has several vassals.

Fig 6. Many fights in Asheron’s Call can only be won in cooperation.

It is fascinating to reflect on the fact that these games can now be used as virtual laboratories. By using and working with the interactions in virtual spaces we can learn more about the phenomenon of trust and how it works.

The concept of trust in these environments is formalised and compartmentalised: I trust people to perform a limited set of actions (run, kill, et cetera), but the trust is still crucial to the game. Trust in these games is also a survival strategy: in a very Darwinist way players learn to trust some players and distrust others.

Affective technologies

A look to the future reveals that much is happening. Traditionally we speak about trust only in relation to other humans, but in the research of the AI Lab at MIT we now see attempts to work with emotional and affective communication with machines.

These developments may force us into an investigation of the meaning of trust more profound than any prompted by earlier technological advances.

Kismet and affective robotics

One especially fascinating project is Kismet, where facial expressions are mimicked and used for communication between robot and man (Breazeal, Scassellati 1999).

The project works with several different emotional modes and the robot, Kismet, can shift between modes depending on what stimuli it is provided with.

Fig 7. A happy robot

Fig 8. …and an angry robot

Trusting a machine might be the next step in the evolution of the concept of trust. It will certainly change the concept as we know it.

Conclusions

To summarise: much of the research that deals with trust today tries to draw up a map of the field and determine the meaning and structure of trust (McKnight 1996). What I think is more important is to realise that the very concept of trust changes with the introduction of new technologies. The trust we speak about today is distinctly different from that which our ancestors discussed a hundred years ago. But what, then, are the changes? One way of illustrating this is in a diagram, where we can show the conceptual shift.

From believing to knowing

I suspect that we are moving from a society where trusting meant acting on faith in someone, to a society where we trust someone only if we have rational reasons and facts that move us to do so. This is an oversimplification, but it remains a useful one. We are moving from belief to knowledge in our concept of trust. This is illustrated by the eBay example, where offering knowledge was seen as a means to foster trust.

From transforming to informing

There is also another facet of trust that is changing. To have trust in someone is traditionally a dialogic relation: if I am trusted, that changes me as well as the trusting party. My responsibility increases. When we speak of building trust, or of transitive trust, this aspect fades away. Instead of being transformed as a trusted party, I am merely informed of the fact that someone trusts me. Trust is informatised.

The conceptual shift

With these starting points we might show the following diagram to illustrate the way the shift is proceeding:

Fig 9. The conceptual shift

The shift takes time, and the concept of trust is becoming ever more complex. I think, however, that the tendency is clear: we are moving towards a more engineering-like perspective on trust. Whether that is what serves our purposes best remains to be seen (Klang 2000).

The transparent society – the end of trust?

Another interesting trend that I have not introduced here, but which deserves our attention, is the erosion of personal privacy. With cameras becoming pervasive and databases growing every second, this erosion may soon approach the total elimination of privacy. When this happens we might end up in what David Brin has termed the Transparent Society (Brin 1998). In this society there are no secrets; nothing is hidden.

Why is this important for the concept of trust? The reason is simple: when we achieve total transparency, the element of knowing might totally obliterate the element of believing in the trust grid above. Why should we speak about trust when all is monitored and all is recorded? There will then be no need for trust anymore. A bleak future indeed. We are moving towards this position today with the engineering approach: we try to inspire trust by showing all, supplying users with remedies and information to an ever higher degree. There is a paradox waiting to happen here. Soon we will have eroded so much of the classical trust concept that we will only trust where we feel that we have recourse to other methods should our trust turn out to be misplaced. Trust, that is, where no trust is needed and where no risk is taken.

There is a risk that we will have lost something valuable in human society by then.

References

Abdul-Rahman, Alfarez and Hailes, Stephen, “Supporting Trust in Virtual Communities”, HICSS 2000

Benno, Joachim, “The ‘anonymisation’ of the transaction and its impact on legal problems – A theory as to why the use of ICT engenders legal problems”, International Journal of Communications Law and Policy, Issue 2, 1998/99

Breazeal, C. and Scassellati, B. (1999), “How to build robots that make friends and influence people”, in Proceedings of IROS99, Kyongju, Korea

Brin, David, The Transparent Society – Will Technology Force Us to Choose Between Privacy and Freedom? (Reading 1998)

Christianson, B. and Harbison, W., “Why Isn’t Trust Transitive?”, in Proceedings of the Security Protocols International Workshop, University of Cambridge, 1996

Josang, Audun, “Trust-Based Decision Making for Electronic Transactions”, in NordSec 99: Proceedings of the Fourth Nordic Workshop on Secure IT Systems – Encouraging Co-operation

Karvonen, Kristina, “Creating Trust”, in NordSec 99: Proceedings of the Fourth Nordic Workshop on Secure IT Systems – Encouraging Co-operation

Klang, Mathias, “Who do You Trust? Beyond Encryption, Secure e-business”, World Wide Law, BILETA 2000

Krishnamurthy, Sandeep, “An Empirical Study of the Causal Antecedents of Customer Confidence in E-tailers”, First Monday, vol. 6, no. 1, 2001

Luhmann, N. (1979), Trust and Power, New York: Wiley

McKnight, D. Harrison, “The Meanings of Trust”, Working paper, MISRC [http://www.misrc.umn.edu/wpaper/wp96-04.htm], 1996

Moore, Geoffrey, Crossing the Chasm (New York 1991)

Reagle, Joseph M. Jr, “Trust in Electronic Markets: The Convergence of Cryptographers and Economists”, First Monday, 1996 (http://www.firstmonday.dk)