Privacy violations undermine the trustworthiness of the Tim Hortons brand

The Office of the Privacy Commissioner of Canada (OPC), along with three provincial counterparts, released a scathing report on the Tim Hortons app on June 1.

A year after the app's seemingly benign update in May 2019, a journalist's investigation found that it was collecting vast amounts of user location data that could be used to infer users' home and work locations, as well as their mobility patterns.

While the OPC’s report notes that “Tim Hortons’ actual use of the data was very limited,” it concluded that there was no “legitimate need to collect vast amounts of sensitive location information where it never used that information for its stated purpose.” This report follows on the heels of the OPC’s concerns over the government’s use of mobile phone data during the pandemic.

The joint report has been met with both overtly negative and cynical responses on social media. Many are not surprised by the data collection practices themselves: users have likely become numb to the collection of behavioural traces to create big data sets, a kind of learned helplessness. What is jarring to many is the perceived violation of the trust traditionally placed in this parbaked Canadian institution.

Everything, everywhere

The Tim Hortons case illustrates our growing entanglement with the artificial intelligence (AI) that forms the backbone of seemingly benign apps.

AI has permeated every domain of human experience. Domestic technologies — mobile phones, smart TVs, robot vacuums — present an acute problem because we trust these systems without much reflection. Without trust, we would need to check and recheck the input, operations and output of these systems. But when people are converted into data, this unqualified trust gives rise to novel social and ethical issues.

Technological evolution is continual, and it can outpace our understanding of how these systems operate. We cannot assume that users understand the implications of agreements accepted with a single click, or that companies fully understand the implications of data collection, storage and use. For many, AI is still the purview of science fiction, and popular science frequently fixates on the terrific and terrifying features of these systems.

At the cold heart of this technology are computer algorithms that vary in their simplicity and intelligibility. Complex algorithms are often described as “black boxes,” their content lacking transparency to users. When autonomy and privacy are at stake, this lack of transparency is particularly problematic. Compounding these issues, developers do not necessarily understand how or why privacy engineering is necessary, leaving users to determine their own needs.

Data that is collected or used to train these systems often reflects “black data”: data sets whose content is opaque due to proprietary or privacy issues. How the data was collected, its accuracy and its biases must be clearly established. This has led to calls for explainable AI, systems whose function can be understood well enough by users and policymakers to scrutinize the extent to which their operations support social values.
