Activity: Talk or presentation types › Invited talk › Scientific
Trust is a robust strategy for reducing complexity in social interaction. When we trust, we hold positive expectations about the actions of others. Although we cannot be sure what the future will bring or how others will behave, trust allows us to bridge this gap of uncertainty. Trust can thus be seen as a “functional fiction”: by acting ‘as if’ we know for certain what will happen in the future, social interaction is made possible.
It is not to be expected that trust in our networked societies will soon become redundant and be replaced by technology. On the contrary, in order to endure the complexity that technology inherently brings forth, trust will be in even greater demand. How trust and the shaping of trust are affected by the use of technologies, however, remains an open question.
The focus of this talk is therefore on the intersubjective character of trust and how this character may be under siege by the development of automated and pro-active others. Both public and private actors invest in algorithmic decision-making systems that crunch huge amounts of data in order to predict the behaviour of their citizens and customers. Based on these data-driven tools, interactions with citizens and customers are increasingly automated. However, citizens and customers generally have no way of knowing how they are being read by these systems; they stand in a relation of “invisible visibility”: they become visible in a way that is invisible to them.
Although trust is always blind to a certain extent, is there a point at which it becomes too blind and transforms into sheer hope? Can our pro-active, personalized and automated environment still function as a familiar world, a “lifeworld” in which shared perceptions and beliefs can be presupposed? Finally, some preliminary ideas are shared on trust as a central part of interface ethics.