Wednesday, 27 November 2024

Privacy violations undermine the trustworthiness of the Tim Hortons brand

Users don’t expect that a more convenient way to get coffee will lead to privacy violations. (Shutterstock)

Jordan Richard Schoenherr, Concordia University

The Office of the Privacy Commissioner (OPC) of Canada, along with three provincial counterparts, released a scathing report on the Tim Hortons app on June 1.

The year after the seemingly benign app was updated in May 2019, a journalist’s investigation found that it was collecting vast amounts of user location data that could be used to infer where users lived and worked, as well as their mobility patterns.
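To see why even sparse location pings are so revealing, consider a minimal sketch in Python. It assumes a hypothetical export of timestamped coordinates; the field names, values and time windows are illustrative, not drawn from the Tim Hortons app or its location software. It simply counts which coarse grid cell a device occupies overnight versus during working hours.

from collections import Counter
from datetime import datetime

# Hypothetical pings: (ISO timestamp, latitude, longitude). All values are made up.
pings = [
    ("2020-05-04T02:10:00", 45.5017, -73.5673),  # overnight
    ("2020-05-04T03:40:00", 45.5017, -73.5673),
    ("2020-05-04T10:15:00", 45.4972, -73.5790),  # working hours
    ("2020-05-04T14:30:00", 45.4972, -73.5790),
    ("2020-05-05T01:55:00", 45.5017, -73.5673),
    ("2020-05-05T11:05:00", 45.4972, -73.5790),
]

def top_cell(rows, hours):
    """Most frequent ~100 m grid cell visited during the given hours."""
    counts = Counter(
        (round(lat, 3), round(lon, 3))
        for ts, lat, lon in rows
        if datetime.fromisoformat(ts).hour in hours
    )
    return counts.most_common(1)[0][0] if counts else None

print("Likely home:", top_cell(pings, range(0, 6)))   # where the device sits overnight
print("Likely work:", top_cell(pings, range(9, 17)))  # where it spends weekdays

Even this crude heuristic recovers a plausible home and workplace from a handful of pings; a dataset sampled every few minutes over months would be far more precise.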

While the OPC’s report notes that “Tim Hortons’ actual use of the data was very limited,” it concluded that there was no “legitimate need to collect vast amounts of sensitive location information where it never used that information for its stated purpose.” This report follows on the heels of the OPC’s concerns over the government’s use of mobile phone data during the pandemic.

The joint report has been met with both overtly negative and cynical responses on social media. Many are not surprised by the data collection practices themselves. Users have likely become numb to the collection of behavioural traces to create big data sets, a kind of learned helplessness. What is jarring to many is the perceived violation of the trust that has traditionally been placed in this parbaked Canadian institution.

 

Everything, everywhere

The Tim Hortons case illustrates our growing entanglement with the artificial intelligence (AI) that forms the backbone of seemingly benign apps.

AI has permeated every domain of human experience. Domestic technologies — mobile phones, smart TVs, robot vacuums — present an acute problem because we trust these systems without much reflection. Without trust, we would need to check and recheck the input, operations and output of these systems. But, when people are converted into data, novel social and ethical issues emerge due to unqualified trust.

Technological evolution is continual and can outpace our understanding of how these systems operate. We cannot assume that users understand the implications of agreements accepted with a single click, or that companies fully understand the implications of data collection, storage and use. For many, AI is still the purview of science fiction. Popular science frequently fixates on the terrific and terrifying features of these systems.

At the cold heart of this technology are computer algorithms that vary in their simplicity and intelligibility. Complex algorithms are often described as “black boxes,” their content lacking transparency to users. When autonomy and privacy are at stake, this lack of transparency is particularly problematic. Compounding these issues, developers do not necessarily understand how or why privacy engineering is necessary, leaving users to determine their own needs.

Data that is collected or used to train these virtual machines often reflects “black data” — data sets whose content is often opaque due to proprietary or privacy issues. How the data was collected, its accuracy and biases must be clearly established. This has led to calls for explainable AI systems, whose function can be understood by users and policymakers to scrutinize the extent to which their operations support social values.

Paths to trust

Our trust is not always grounded in facts. A basic sense of trust can be induced through repeated exposure to an object or entity, rather than being hard-won through direct exchange experiences or knowledge of social norms of fairness. The trouble with apps — Tim Hortons included — stems from these issues.

Despite a brief decline in trust, the brand remains a Canadian staple. Tim Hortons stores are a fixture of Canada’s physical and consumer landscapes. Our familiarity with the brand makes its collection of products, real or digital, seem innocuous. It is therefore unreasonable to expect consumers to have suspected that their location data was being collected every few minutes throughout the day.

According to the Gustavson Brand Trust Index, Tim Hortons was voted the most trusted brand in Canada in 2015. (Shutterstock)

The dark patterns of design

In design, dark or deceptive patterns reflect the active exploitation of design features to benefit the application developer or distributor. The most prominent case to date is the Cambridge Analytica scandal, in which Facebook user data was used in an attempt to influence how people voted.

Despite the decline in trust for Facebook, users continued to use the platform with only comparatively minor changes in their behaviour.

Facebook’s initial response pointed out: “People knowingly provided their information … and no sensitive pieces of information were stolen or hacked.” However, users spend very little time reading privacy policies and, when they are not presented with them, they don’t go out of their way to read them.

Claims that anonymizing data — removing identifying personal information — can eliminate privacy issues are also overly simplistic. Merging multiple data sets provides a more complete picture of an individual: what they prefer, how they behave, what they owe, who they date. With enough information, a detailed picture of a person can be created.
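A minimal sketch of how such linkage works, using entirely fabricated records: two datasets that each look anonymous on their own are joined on shared quasi-identifiers such as postal code, birth year and gender.

purchases = [  # "anonymized" loyalty export: no names, but rich behaviour
    {"postal": "H3G 1M8", "birth_year": 1990, "gender": "F", "orders": 212},
    {"postal": "K1A 0B1", "birth_year": 1985, "gender": "M", "orders": 87},
]

public_list = [  # a public-style register that does carry names
    {"name": "A. Tremblay", "postal": "H3G 1M8", "birth_year": 1990, "gender": "F"},
    {"name": "B. Singh", "postal": "K1A 0B1", "birth_year": 1985, "gender": "M"},
]

def link(left, right, keys):
    """Join two record sets wherever all quasi-identifier fields match."""
    return [
        {**a, **b}
        for a in left
        for b in right
        if all(a[k] == b[k] for k in keys)
    ]

for row in link(purchases, public_list, keys=("postal", "birth_year", "gender")):
    print(row["name"], "placed", row["orders"], "orders")

Studies of real data sets have repeatedly found that a handful of such attributes is enough to single out most individuals, which is why removing names alone offers little protection.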

In some cases, AI can be as good as humans in predicting personality traits. In other cases, AI can predict sensitive information that is not disclosed. This could threaten our personal autonomy.

Engaging with data ethics

Given the growing capabilities of AI, coupled with a lack of transparency in how data is collected and used, the validity of users’ consent must be questioned. The OPC’s judgement speaks to this point: users would not reasonably expect the kinds or amount of detail collected about their behaviour given the nature of the app.

While Tim Hortons might not have used this information, we must consider the unintended consequences of data collection. For instance, cybercriminals can steal and sell this information, making it available to others. By simply collecting this data, institutions, organizations and businesses assume responsibility for our information and for how it is protected and used. They must be held accountable.

We don’t expect that a more convenient way to buy coffee and donuts will lead to privacy violations and the deepening of our digital footprint. The trade-off cannot be rationalized away.

There is no single solution to our privacy woes, and many users are unlikely to disconnect. Users, developers, distributors and regulators need to be brought into more direct and transparent relationships with one another. New skills and competencies need to be developed in our education system to make sense of the social consequences of technology use. And more agile public institutions need to be developed to address these issues.

Jordan Richard Schoenherr, Assistant Professor, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
