Learning Goal: I’m working on a communications case study and need an explanation and answer to help me learn.

Part 1 – Data
Come up with random data to analyze (do not mention that it is random data; write as though it is your own). The data should cover your Google Account: Calendar, Chrome, Contacts, Drive, Photos, and Gmail. Look through Your Activity, especially your Location History, Web & App Activity, and YouTube history. Also include Facebook data: posts, activities, events, interactions, groups, etc. Write from the perspective of a young woman interested in typical young-woman things.
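
If it helps to get started on Part 1, a small script can generate plausible placeholder entries to write about. Below is a minimal Python sketch; every search, place, and video category in it is invented for illustration and can be swapped for whatever fits the persona:

```python
import random
from datetime import datetime, timedelta

# Hypothetical pools for a plausible "digital shadow": all invented for illustration.
SEARCHES = ["brunch places near me", "GRE prep tips", "skincare routine order",
            "concert ticket presale", "how to split rent with roommates"]
PLACES = ["campus library", "yoga studio", "Trader Joe's", "coffee shop downtown"]
YOUTUBE = ["makeup tutorial", "study with me vlog", "apartment tour", "pilates workout"]

def fake_activity_log(days=7, per_day=3):
    """Return a sorted list of (timestamp, kind, item) tuples for the last `days` days."""
    log = []
    now = datetime.now()
    for d in range(days):
        day = now - timedelta(days=d)
        for _ in range(per_day):
            kind, pool = random.choice(
                [("search", SEARCHES), ("location", PLACES), ("youtube", YOUTUBE)])
            stamp = day.replace(hour=random.randint(8, 22), minute=random.randint(0, 59))
            log.append((stamp.isoformat(timespec="minutes"), kind, random.choice(pool)))
    return sorted(log)

for entry in fake_activity_log():
    print(entry)
```

The printed log can then stand in for "Your Activity" and the Facebook history when writing the Part 2 analysis.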

Part 2 – Analysis
Now that you have spent some time examining your digital shadow, consider your data and how it reflects you and who you are. Were you surprised about the types of information that have been captured about you? What do these data say about you and who you are (hint: specifically look at the advertising options that are being shown to you)? Does examining these data evoke any particular emotions?
Write a report reflecting on and analyzing your own data, connecting it to the concepts we have discussed in class, especially those around the data-human assemblage. Readings are attached. The following questions may help you focus your analysis:
• How do these data reflect you as a data-human assemblage?
• Is your data a companion species? Do data-capturing devices become companion species? If so, how or why?
• Is your personal data distinct from you as a person?
• How does interacting with your data align with conceptions of subjectivity and agency?
The report should be 1,250 to 1,500 words in length (not including references), or 5 to 6 double-spaced pages. Use APA style for your in-text citations and reference list. Be sure to include a header on each page with your last name and the page number.
I tip well<3

Below are readings that can help. The first one is a class reading that you should use for references:

https://journals.sagepub.com/doi/10.1177/205395171…

If it helps with creating and analyzing the data, here are two articles my teacher just sent that may help you interpret your Google and Facebook data:

https://www.teenvogue.com/story/all-the-data-googl…

https://www.nytimes.com/2018/05/16/technology/personaltech/google-personal-data-facebook.html

This assignment is very opinion-based; it is essentially about examining your data and what it shows about you.

I have a more detailed outline if you need it, so please let me know, and of course reach out if you have any questions as well.

Thank you!!

Class reading (scanned book chapter), cited in APA style:

Rettberg, J. W. (2018). Apps as companions: How Quantified Self apps become our audience and our companions. In B. Ajana (Ed.), Self-tracking: Empirical and philosophical investigations (pp. 27–42). Palgrave Macmillan. ISBN 9783319653792.

CHAPTER 3

Apps as Companions: How Quantified Self Apps Become Our Audience and Our Companions

Jill Walker Rettberg

Abstract: Self-tracking apps gather intimate information about our daily lives. Sometimes, they take the role of a confidante, an anthropomorphised companion we can trust. Humans have long confided in non-human companions, such as diaries. The relationship between user and app is structurally similar to the relationship narratologists and literary theorists have identified between diarist and diary. Our agency is always shared with the technologies we use, whether they are simply pen and paper or a complex AI. By comparing apps to diaries, I demonstrate how these technologies act not simply as objects but also as narrators and narratees. While diaries are mostly silent listeners, self-tracking apps speak back to us in a feedback loop and thus enter a role as our companions rather than simply as our audiences.

Keywords: Self-tracking · Quantified Self · Apps · Diary · AI · Narratology

© The Author(s) 2018
B. Ajana (ed.), Self-Tracking, DOI 10.1007/978-3-319-65379-2_3

J.W. Rettberg (*)
University of Bergen, Bergen, Norway
e-mail: Jill.Walker.Rettberg@uib.no


Introduction

Self-tracking requires technology. Not necessarily digital technology, but always, technology. Tally marks pressed into clay or scratched into stone; paper charts with pens for making check marks and perhaps calculations; smartphone apps that track everything a smartphone can measure: all these are ways in which humans have used technology to create an external, quantified representation of an aspect of our lives.

As long as the technology we use is simple, like a pen and paper, we tend not to think of the technology as adding much to the process. But we could not possibly remember the events we record in anything like as exact a manner without recording them, even if the only technology we are using is paper. If we think about it, we also know that the organisation of the charts we draw affects what we measure and how we think about it.

When we use simple technologies, though, we tend to still feel as though we are using the paper. We are in no doubt as to who is the subject here: the human feels fully in charge, at least in cases of voluntary self-tracking, where the person doing the tracking is free to stop at any time or to change the chart she is using. The human is the subject with agency to act upon objects, that is, upon the pen and paper and the data that the human collects.

This chapter is an examination of self-tracking apps that emphasise the agency of the app through a conversational interface, where the app uses simple scripts or more complex artificial intelligence (AI) to speak to the user. Until recently, self-tracking apps have displayed user data in lists or graphs, but as conversational agents like Siri on the iPhone or Amazon’s Alexa have become popular, self-tracking apps are also beginning to use the technology. Examples range from text-based chatbots like Lark, Instant and Pepper, which send encouraging messages and ask simple questions of the user, to speaking workout assistants like Vi (pronounced vee), which is what Andrea L. Guzman calls a Vocal Social Agent (Guzman 2017).

Telling our secrets to a simulated confidante like Vi is structurally similar to confiding in a diary. Diarists often anthropomorphise their diaries, addressing them as ‘Dear Diary’ and confiding in them as though to a human friend. In this chapter, I outline a history of humans confiding in non-human companions, from diaries to apps, in order to show how our agency is always shared with the technologies we use, whether they are simply pen and paper or a complex AI. By comparing apps to diaries, I show how these technologies, or media, act not simply as objects but also as narratees or audiences to our human narratives. While diaries are mostly silent listeners, self-tracking apps speak back to us and thus enter a role as our companions rather than simply our audiences. We don’t see this to the same extent in social media, where we share content intended for a human audience, using technology as a medium between humans rather than as a companion or a tool for organising our data. This also occurs, to a lesser extent, in other digital media, but it is more obvious in self-tracking apps because they are designed to work without necessarily having any other human audience than the user themselves.

Trusting Our Apps

Digital devices are far less transparent to us than pens and paper or most other pre-digital technology. Most of us don’t really understand how our self-tracking apps work, and we’re not always entirely sure what they’re measuring. Interestingly enough, this often means we trust them more than we trust ourselves. José van Dijck calls this dataism: a ‘widespread belief in the objective quantification and potential tracking of all kinds of human behaviour and sociality through online media technologies’ (Dijck 2014). We may even trust our devices more than our own experiences or memories. Studying people wearing heart rate variability monitors, Minna Ruckenstein found that her informants changed their stories about their day after being shown the data:

Significantly, data visualizations were interpreted by research participants as more ‘factual’ or ‘credible’ insights into their daily lives than their subjective experiences. This intertwines with the deeply-rooted cultural notion that ‘seeing’ makes knowledge reliable and trustworthy. (Ruckenstein 2014)

This surrendering of subjectivity or agency to our machines tends to worry people. We trust the machine’s representation of our life more than our own memories. Do we really want our machines to be writing the stories of our lives?

Perhaps, though, we have never written the stories of our own lives. At least not completely alone. We write with the tools we have at hand: pen and paper, Snapchat or a typewriter. These tools also determine how we write, how we are able to see our own lives. Literary theorist Paul de Man wrote of this in the late seventies, arguing that perhaps, rather than a lived life leading to an autobiography, it is the other way around:

We assume that life produces the autobiography as an act produces its consequences, but can we not suggest, with equal justice, that the autobiographical project may itself produce and determine the life and that whatever the writer does is in fact governed by the technical demands of self-portraiture and thus determined, in all its aspects, by the resources of his medium? (Man 1979, 920)

We usually think of a diary, an autobiography or a self-tracking app as an inanimate object that may structure and mediate the way we are able to tell our stories, but that has no stories of its own. And yet there are many examples of people adjusting their actions so as to make them more suitable for mediation. For instance, a runner may postpone a run because their phone’s battery is flat and needs charging and thus cannot track their run. A Snapchatter may decide to go to a certain event because they want to show themselves at that event in their next Snapchat story. And once we see the data that our devices have collected, we may, as Ruckenstein found, slightly alter our retelling of our day to better fit the data that is displayed.

James Bridle, an artist and designer, has argued that the data a phone collects are actually the phone’s diary, not the diary of the person carrying the phone. When he learned that his iPhone had saved the coordinates of every location he (or it) had been at, he downloaded the data and used it to create an artistic project: a book of maps showing his whereabouts as recorded by the phone (Bridle 2011). The title of the book, fittingly enough, is Where the F**k Was I?, because Bridle claims to have no recollection of having been at all the places the phone had registered that he was at. Bridle’s phone, seen in this way, is hardly an inanimate object that is only acted upon and has no agency of its own. It tells its own stories, as an independent subject. What does that mean for our relationship with our machines?


Dear Diary: Diaries and Apps as Narratees

Marshall McLuhan saw media as extensions of our bodies (1964). Perhaps he would say that our ‘dear diary’ and our step counters and lifelogging apps are such extensions. I argue that these personal media (Lüders 2008) are something more. They are our audiences. These are media that we do not simply listen to or read or watch: we speak to them (Walker 2004). We are the narrators, and they are the narratees, the audience for our words or our data. These media (machines) may be the only ‘readers’ of our stories and our data, or we may share the stories and data we record in a diary or an app with others, for instance, by passing around a paper diary or by choosing to share data with our friends or posting it to Facebook.

In narratology, the actual, flesh-and-blood author and reader are seen as separate from the text. But we can usually identify an implied author and an implied reader in the text. The implied reader (or listener) of one of Trump’s speeches is, for instance, clearly not a European who appreciates universal healthcare, or a refugee from a war-torn country, but such people may well be among the actual flesh-and-blood readers or listeners. Some texts also have a narrator and a narratee, that is, an explicit speaker in the text, somebody who speaks in the first person, and an explicit listener or addressee. The term implied reader was coined by Wolfgang Iser (1978), but when we use these terms to think about the way apps address their users, it’s most useful to think about the role of the implied reader as part of a larger system, as shown in Fig. 3.1, which shows Seymour Chatman’s model of narrative communication as it works in a novel, or even a diary (1978, 151).

Fig. 3.1: Chatman’s model of the narrative communication situation (redrawn from Chatman 1978, 151). Real author → [Implied author → (Narrator) → (Narratee) → Implied reader] → Real reader; the bracketed elements belong to the text itself, while the real author and real reader stand outside it.

In his theories of the diary, Philippe Lejeune writes that a diary is always written for a reader, even if that reader may simply be the writer, at some future date (Lejeune 2008, 324). It is impossible to imagine writing for nobody. I would argue that we think of our self-tracking apps in the same way. We are collecting our data for our future selves, and perhaps for others as well: to share our accomplishments with a group of peers, perhaps. We are also usually sending our data to a corporation that combines our data with others’ to generate comparisons, and that data may be used for quite different purposes than we imagined when we slid the Fitbit onto our wrists or installed the app on our phones. For corporations, data about our exercise patterns or other daily activities have monetary value, which, Chris Till argues, transforms our leisure activities into a form of labour that can be commodified and exploited (Till 2014). One way of making that less visible to users (or labourers, in this model) might be to make the apps seem more like individual people or even a friend, rather than presenting them as technical data collectors. Such a devious plan is probably not necessary to make users anthropomorphise their devices and think of them as intimate companions rather than the agents of corporations that surveil us. Individual users rarely see the full scale of data collection. For a user, the relationship is mostly experienced as being between the user and the device.

This is not simply about the intimacy of a wearable device or a smartphone. Diary-writers have also long anthropomorphised their diaries, imagining a ‘you’, a reader that the writer is writing for. One may well argue that this ‘you’ is a requirement of language itself. Speech is founded upon conversation, or at least upon an audience. In diary-writing, we often address our words to a ‘dear diary’, imagining the diary itself to be a safe, silent listener.

Here is an example of how ‘dear diary’ is used in a serial magazine story written in 1866. Note that this is from a fictional diary, so the use of ‘dear diary’ may be slightly parodic, or at least intended to capture a certain type of personality in the fictional diary-writer:

March 2nd.–Now, my diary, let me tell you all about today. You are the only bosom-friend I have, dear diary, and you keep all my secrets, that is, you would keep them if I had any to confide in you. (Worboise 1866, 16)

Do we still imagine a ‘dear diary’ when we open our self-tracking apps on our phones? Do we imagine our machines as audiences? Or as subjects in their own rights?

‘Dear diary’ is a direct address of a narratee, giving the diary itself a human subjectivity. Based on a search of Google Books’ corpus of digitised, published books,¹ we can see that the expression ‘dear diary’ began to be used in print in the mid-eighteenth century, but became really popular in the last decades of the twentieth century. Interestingly, both the phrase ‘dear diary’ and the word ‘diary’ were used markedly less in print after the turn of the twenty-first century, which seems very likely to be connected to Internet use (see Fig. 3.2).

Fig. 3.2: Google Books Ngram Viewer chart showing the occurrence of the phrase ‘dear diary’ (with different capitalisation) in books published between 1800 and 2000 that have been digitised by Google. Chart generated 01.06.2016.
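
The chart in Fig. 3.2 can be re-created today with the Ngram Viewer. Programmatically, one route is the unofficial JSON endpoint behind the viewer; the following sketch assumes that undocumented endpoint and its current parameter names, both of which may change:

```python
import requests

# Unofficial, undocumented endpoint behind the Google Books Ngram Viewer.
# Endpoint, parameter names, and corpus codes are assumptions and may change.
URL = "https://books.google.com/ngrams/json"
params = {
    "content": "dear diary",   # the phrase charted in Fig. 3.2
    "year_start": 1800,
    "year_end": 2000,
    "corpus": "en-2019",       # an English corpus newer than the 2016 chart's
    "smoothing": 3,
}
resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
for series in resp.json():
    # Each series holds relative frequencies, one value per year.
    print(series["ngram"], series["timeseries"][:5])
```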

Perhaps we don’t need to anthropomorphise our diaries anymore now that we have the Internet, with real people as potential readers of our blog posts and Facebook updates. Although there are clearly many similarities between traditional diaries and the way people share stories of their daily lives in social media (Rettberg 2014a), there has been a transition from sites like OpenDiary.com, which very explicitly used diary conventions to structure the users’ writings, to platforms like Snapchat and Tumblr that don’t reference traditional diary conventions at all (Martinviita 2016; Rettberg 2017, forthcoming). For the purpose of this chapter, though, what I am interested in is the way that diarists have anthropomorphised their diaries, for instance, by writing to their ‘Dear Diary’.

Confessing Secrets to a Diary or App

Both diaries and self-tracking balance between the private and the public. Today, the privacy of a personal diary is often seen as its defining feature. Diaries are sold with padlocks and keys and used as confessional spaces where it is safe to pour out all one’s secrets. Historically in Western culture, the diary was sometimes quite explicitly seen as a way to confess sins directly to God (Heehs 2013, 49), but also as a tool for spiritual self-improvement. Sixteenth-century Jesuits had explicit guidelines for writing spiritual narratives about themselves (Molina 2008), and other sixteenth- and seventeenth-century guides exist that emphasise both self-abasement before God and recording mercies, grace and deliverances (Rettberg 2014b, 5–7). Some of the spiritual work in this self-narration took place when diary-writers shared and discussed their diaries with friends or with the congregation. So, although there is a strong history of private diaries, where the author would be horrified if others read her diary, there is also a strong parallel tradition of diaries that were expected to be shared with others and that were specifically intended as self-improvement tools (Humphreys et al. 2013). This latter kind of diary obviously has something in common with the Quantified Self (QS) movement’s drive towards self-improvement. There are many examples of self-improvement projects that combine self-representation with more quantifiable kinds of self-tracking. For instance, the app You (you-app.com) gives users daily tasks to complete and asks them to document each task by taking photographs and writing short comments, which can be shared with friends or kept private. Taken together, these photographs and comments become a kind of diary. Gratitude projects such as #gratitude365 are another example. Here, participants aim to share daily photographs of something they are grateful for, with a shared hashtag that creates a flexible sense of community as well as allowing individual users to organise their own contributions. Keeping a record of what you are grateful for is an old technique for self-improvement, recommended, for instance, in John Beadle’s A Journal or Diary of a Thankful Christian (Beadle 1656; Rettberg 2014b, 5–6).

Interestingly, QS has a similar tension between the private and the public as diaries do. The Show and Tell meetings that are common at QS events and on the QS blog are very explicitly about sharing, and as with many shared diaries, the purpose is self-improvement. Yet there is also a strong sense that people find over-sharing to be rude. Complaints about Facebook friends who post every map of their run or every song they hear on Spotify to their Facebook timeline are common. We also need to recognise that some of the drive to share one’s personal data is driven not by the individual users, but by the corporations that develop the services (Ajana 2017; Till 2014).

Apps as Companions and Independent Subjects

Paper diaries and many Quantified Self apps are silent listeners, existing only as receptacles for our data. Their interfaces are often designed to appear objective and serious, as shown in the screenshots in Fig. 3.3.

Fig. 3.3: From left to right: iPhone Health app, Reporter, Withings.

But some apps are programmed to appear as characters, as subjects of their own. For instance, the activity tracker Lark is designed to look like a messaging app, with a conversational agent or chatbot sending messages to the user: ‘Hey there, hope you’re having a fine morning’. Lark uses conversations instead of graphs to tell me about my activity level: ‘Awesome job. Averaging 1 hour 31 minutes of activity last week. That’s great!’

Lark doesn’t usually allow the user to write back in natural language. Instead, it usually offers a few different responses to its questions that the user can choose between. There’s only one button offered as a possible response to the comment about last week’s activity: ‘Okay’. When I click it, a new message appears. ‘Nice job walking for 23 minutes in the early afternoon last Tuesday’, Lark praises me. ‘That was a long one!’ The only option in this chat is to click the prescripted response: ‘Oh yeah!’
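
Structurally, this kind of interaction is a script with fixed choice points rather than open conversation. Lark’s implementation is proprietary, so the following is only a minimal sketch of the pattern, with the messages taken from the exchange described above:

```python
# A minimal sketch of a scripted agent with fixed choice points.
# Illustrative only: Lark's actual implementation is proprietary.
SCRIPT = [
    ("Awesome job. Averaging 1 hour 31 minutes of activity last week. That's great!",
     ["Okay"]),
    ("Nice job walking for 23 minutes in the early afternoon last Tuesday. That was a long one!",
     ["Oh yeah!"]),
]

def run(script):
    for message, options in script:
        print("app>", message)
        choice = None
        while choice not in options:      # only the prescripted replies are accepted
            choice = input(f"you {options}> ")

run(SCRIPT)
```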

Independently conscious technology is a common topic in science fiction, usually thematising the uneasy balance between the machine as a benevolent assistant and the machine as a too-powerful threat. Asimov’s laws of robotics are intended to solve this problem by programming loyalty to humans into the operating system of an artificial intelligence (AI). Of course, even a rule programmed in 1s and 0s can be interpreted in different ways, and so the system backfires when the AI realises that humans are harming themselves by destroying the environment and decides to control humanity to protect us from ourselves (Asimov 1950).

This fear of machines is far older than AI. The Luddites famously rebelled against the machines that were replacing human workers by destroying mechanical knitting machines, looms and other factory machinery. This basic fear of humans being made redundant and simply unnecessary by more perfect and efficient machines has carried through to our time, though today it is often expressed as a fear of machines taking over our world completely, as in the Matrix film trilogy. In popular fiction, machines are often shown as untrustworthy or even evil, as we see in Obi-Wan Kenobi’s warning to Luke about Darth Vader: ‘He’s more machine than man now. Twisted and evil’ (Episode VI, after Yoda’s death). Often science fiction shows a transition from benevolent to malevolent machines, which is often accompanied by a shift from the machine having a human-like consciousness to being incomprehensible to humans. We have an urge to create robots that are made in our image, but we also find robots that are too similar to humans uncanny. Andrea Guzman writes that Siri, the conversational agent built into iPhones, is deliberately scripted to seem non-threatening, so we don’t associate her with ‘the cultural image of malicious machine’ (Guzman 2017).

The idea of computers as a benevolent companion species is also deeply ingrained in computer science and the development of technology. Alan Turing’s question of how we would be able to tell the difference between a computer and a human was taken up as the Turing Test, a challenge spanning decades to create software that a human cannot recognise as non-human. The most successful chatbots in this sense of ‘seeming human’ have had very clearly defined topics or modes of conversation. Joseph Weizenbaum’s ELIZA, one of the first truly famous chatbots, was modelled as a Rogerian psychologist, who listens to the human interactant and repeats words back to her to encourage her to speak more (Weizenbaum 1966). Here is an often-cited example of a conversation with ELIZA:

Young woman: Men are all alike.
ELIZA: In what way?
Young woman: They’re always bugging us about something or other.
ELIZA: Can you think of a specific example?
Young woman: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Young woman: He says I’m depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Young woman: It’s true. I’m unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Young woman: I need some help, that much seems certain.
ELIZA: What would it mean to you if you got some help?
Young woman: Perhaps I could learn to get along with my mother.
ELIZA: Tell me more about your family.
Young woman: My mother takes care of me.
ELIZA: Who else in your family takes care of you?
Young woman: My father.
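
Weizenbaum wrote ELIZA in MAD-SLIP, but the reflection trick visible in this exchange, swapping pronouns and echoing a statement back as a question alongside simple keyword rules, can be sketched in a few lines of Python (an illustration, not a reconstruction of the original program):

```python
# A minimal sketch of ELIZA's reflection technique. Weizenbaum's original was
# written in MAD-SLIP; this illustration is not a reconstruction of it.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "i'm": "you're", "you": "I", "your": "my"}

def reflect(statement):
    """Swap pronouns so a statement can be echoed back to the speaker."""
    return " ".join(REFLECT.get(w.lower(), w) for w in statement.split())

def respond(statement):
    s = statement.strip().rstrip(".!")
    if "mother" in s.lower() or "father" in s.lower():
        return "Tell me more about your family."    # simple keyword rule
    return reflect(s).capitalize() + "?"            # echo the statement as a question

print(respond("My boyfriend made me come here."))
# -> Your boyfriend made you come here?
```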

Some apps play upon this role of companion. Lark is one example. Another is the customised music and podcast app Capsule.fm, which lets you choose between seven AI personalities as your host, and each will speak to you by name. The description of the app on the iTunes App Store emphasises the subjectivity of the app: ‘Capsule.fm is run by loving machines, mixing music, social media updates, news and podcasts into the perfect soundtrack for where you are, and what you are doing. Capsule.fm knows you better than your friends, and gets smarter the more you listen’.

Apps like Lark and Capsule emphasise technology as friendly in order to gain our confidence. Lark is not particularly self-referential, and the scripting of its conversations does not present the app as though it is aware of being a program rather than a human being. Similarly, it does not speak as though the user is aware that it is a program.

The robot voices of Capsule.fm, on the other hand, are very explicit about their robot nature and use humour to play with the idea of their having full-fledged personalities. Capsule.fm’s robot voices are loving. A sample from the website includes the following words, spoken in a soft, female, computer-generated voice:

Confession time: I have a little crush on you, Sarah. Ever since you downloaded me, I have this special feeling towards you.

The robot hosts of Capsule.fm are like radio DJs. They introduce and play music from your phone and your Spotify playlists, read news headlines and suggest podcasts other users listen to. Most of what they say is typical patter. They joke and make general observations, then read the title of the song that’s up next. Most of the hosts’ speech is pre-written by the human developers, although variables are slotted in: the user’s name, or an adaptation of her name, as when my host addressed me as Jilly Bear rather than Jill. A recurring feature of the jokes is that they comment quite explicitly on the ontological status of the hosts, either speaking in the first person and expressing feelings, as here:

Hi, Jilly Bear. I want to thank you again for listening to Capsule.fm. I really appreciate it. (Capsule.fm app, 30.05.2016)

Or playing upon the user’s full knowledge that the host is not in fact a real human, but lives in a phone:

Now, go disinfect your fingers before you touch me anymore on your iPhone. (Capsule.fm app, 30.05.2016)
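
The slotting of variables into pre-written patter is ordinary string templating. A minimal sketch of the pattern follows; the pet-name rule is invented for illustration, as Capsule.fm’s actual logic is not public:

```python
# A sketch of slotting user variables into pre-written host patter.
# The pet-name rule is invented for illustration; Capsule.fm's logic is not public.
TEMPLATES = [
    "Hi, {name}. I want to thank you again for listening to Capsule.fm.",
    "Confession time: I have a little crush on you, {name}.",
]

def pet_name(name):
    """Adapt a name the way 'Jill' became 'Jilly Bear' (a guessed rule)."""
    return (name if name.endswith("y") else name + "y") + " Bear"

for template in TEMPLATES:
    print(template.format(name=pet_name("Jill")))   # 'Jill' -> 'Jilly Bear'
```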

Positioning the device or app as a companion makes its difference from us explicit. Our devices are not human, not our selves. And yet they have agency, or at least, we imbue them with agency and subjectivity.

Vi, billed on its website as ‘the AI personal trainer who lives in biosensing earphones’, takes the anthropomorphism of a self-tracking device a step further, presenting Vi as ‘a friend’ who ‘will help you’. The product website getvi.com gushes:

‘Put Vi on and start a relationship with a friend for your fitness. Each day, Vi tracks you, gets smarter, and coaches you to real results. Vi will help you meet your weight goals and improve your training’.

Vi’s voice speaks into your ears from earphones, so nobody else can hear. Her voice is a soft voice, with an appealing, supportive sense of joy. It is not robotic: each phrase and word was recorded by a human female, and they are recombined algorithmically to fit each situation. The earphones track the user’s motion and heart rate, and the user speaks to interact with the device and to share information.

The promotional examples of interactions between Vi and users shown on the website demonstrate that Vi is designed to show empathy. In one video, showing a man running uphill on a wooded trail with the Vi earphones on, Vi uses information about the user’s heart rate and speed to suggest that he slow down. Then, she praises him for his effort:

Vi: Looks like you’re fatigued. Are your legs done?
Runner: Yeah… I’m done.
Vi: Okay, stop here. Keep walking to gradually slow your heart rate down.
Vi: Amazing effort today!
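
Exchanges like this can be driven by simple threshold rules over the sensor readings, with a pre-recorded phrase triggered when a reading crosses a limit. A minimal sketch of such a rule follows; the threshold values are invented for illustration, as Vi’s actual model is not public:

```python
# A sketch of threshold-based coaching of the kind Vi's behaviour suggests.
# Threshold values are invented for illustration; Vi's model is not public.
def coach(heart_rate_bpm, minutes_per_km):
    if heart_rate_bpm > 175:
        return "Looks like you're fatigued. Are your legs done?"
    if minutes_per_km < 4.0:
        return "Great pace! Keep it steady."
    return "Amazing effort today!"

print(coach(heart_rate_bpm=182, minutes_per_km=5.5))
# -> Looks like you're fatigued. Are your legs done?
```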

Conclusion: Speaking with Machines

Diaries have long been anthropomorphised. We address them directly when we share our secrets with them. The use of conversational agents in self-tracking apps and devices such as Lark and Vi suggests that we are moving towards a similar relationship with our devices, where we narrate our experience to the device and it speaks back to us, establishing a relationship between human and technology that emphasises a shared agency, a collaboration, rather than the traditional notion of humans using their technologies as tools they are in control of. By allowing our devices to be our coaches, they become more than mere extensions of humans; they are becoming our equals.

Ted Nelson wrote in Dream Machines, his 1974 self-published and extremely influential vision of computers: ‘the computer is a Rorschach, and you make of it some wild reflection of what you are yourself’ (Nelson 1974, DM3). ‘Identifying with machines is a crucial cultural theme in American society, an available theme for all of us,’ he wrote in another entry in Computer Lib, the
