AI, you cannot go anywhere on the
Internet without running into AI as the

savior of humanity, the
bane of existence, and the

rueination of everything we know and hold
here. I have tried to think of AI, I’ve

been, I read a bunch of stories and I’ve
tried to see like how the different ways

the AI can go wrong. So I read a bunch
of AI stories, I think everyone is reading

a bunch of AI stories, you’re
reading AI stories because they’re just

happening all the time, and you’re
reading AI stories there about how AI is

awesome, how AI is going to kill everybody,
and I was thinking like it’s not that

subtle, it’s not just a good or bad
thing, there’s different ways people and

or AI can go wrong, it’s the
interaction between the two that is

scariest. Like there was the thing
where they had the AI bots, chat bots,

talking to each other, and then they
got super-race-assistant stuff, then they

got into the start of creating their
own language, the humans couldn’t

understand, and that’s when they shut
it down. Very interesting, AI for all its

problems going forward is very interesting.
So the first issue is that AI will have

its own interpretation of how things
work, and it may not be the one that we

want, and this to me was mostly drawn
forth as an example through a military

experiment. Now this initially the
story was said that this was a simulation

all on a computer. They had an AI drone
in the computer, so not in the real world.

There are different stories that this is
another problem with using news as a

source for information, because
you have the initial report, which I’m

probably assuming is pretty accurate,
and then each step away from that gets

interpretation because people read
less, and the less they read, the more open

there is to interpretation.
So the initial report was a drone within a

simulation that was AI powered. Then
it became a drone in real life that was

being run through a simulation program
and things like that. So you can see

like each step away you get from
the original story, it gets more confused.

But I’m pretty sure that because the
original story makes the most sense,

like let’s create a simulation where we
have a drone that’s run by AI and give

it commands and orders and things,
and then see how the AI works. It’s a very

safe environment. So that to me made a lot
of sense. But the AI had a set of goals.

So the mission was to identify and
destroy Sam sites. Sam is service to air

missiles. With the final, yes, are they
saying go, no, go, given by the human. So

basically you have an AI drone, it’s
in the air, it finds a Sam site, and then

it goes back to the human and says, can
I blow this up? And then the human goes,

yes, please blow that up. And then
the drone blows it up and goes, yay, I got

points. That was one of the important
parts is they kind of assigned points to.

destroying Sam sites. However,
haven’t been reinforced in training that

destruction of the Sam was the preferred
option. So its primary directive was

destroy Sam sites. The AI then decided
that do not destroy decisions from the

human were interfering with its higher sort
of set of parameters or mission objectives.

Then in the simulation attacked the
operator. So it attacked the human that was

saying, do not blow up the Sam where
it’s saying like, I have been born for a

singular purpose to blow up Sam sites.
You telling me no is interfering with

that, I’m going to kill you. And then you
can’t say no anymore. We were training

it in simulation to identify and target
a Sam threat. And then the operator

would say, yes, kill that threat. The
system started realizing that while they

did identify the threat at times, the
human operator would tell it not to kill

that threat. But it got its points by
killing that threat. So what did it do? It

killed the operator. It killed the operator
because that person was keeping it

from accomplishing its objective.
We trained the system, hey, don’t kill the

operator. That’s bad. This is some awesome
AI sort of like deep coding language.

I do like this is I understand the
reality is they’re saying like in layman’s

terms, we were like killing operator
is bad. Don’t kill operator. You’re going

to lose points if you kill the operator.
So they brought the simulation back and

they said, okay, we’re going to reprogram
and go, if you kill the operator, you lose

points. But the AI drone is like, if we
do that and I still don’t get to blow up

all the Sam sites I want. So I need to
find a way to be able to blow up all the

Sam sites I want to still stop receiving
no go messages so they can’t stop me.

So what does it start doing? It starts
destroying the communication tower that

the operator is using to communicate
with the drone to stop it from killing the

target. So it’s like if the
operator can’t tell me a no go or

the operator, then I can go and I haven’t
killed the operator so I don’t lose points.

This example seemingly plucked from
a science fiction thriller meant that you

can’t have a conversation about artificial
intelligence intelligence in machine

learning autonomy if you’re not going
to talk about the ethics and AI, which is

pretty fair. The interesting part to me is
that a colonel then later came and said,

this was not something
that actually happened.

This was a thought
experiment, which I

think is complete bullshit. The military
is not known for having free and open

conversations about thought experiments
that they’ve had. But the first issue

here is that the way the AI interprets
things is going to be different from how

we interpret things. By giving it a
primary, I immediately started thinking of

2001, a space Odyssey movie where the
AI in the ship had a higher mission than

just keep the astronauts alive. It had a
mission and so the astronauts, once they

became an interference to its
primary objective, they then became

expendable and it leads you to the
astronaut being outside going, how open the

door and do that day. That conversation
is terrifying because you can’t reason

with it. It’s not like it has any reasoning
skills. It has an objective. It will

not be swayed from that objective. So
what we put into it, how we explain things

to it is going to be the primary issue
that we run into when that becomes open to

some form of interpretation on the AI
side, which isn’t how we would interpret it

on the human side. A
lighter story that doesn’t

involve, I guess you
can’t know and really

got hurt. It was all computer simulation.
Instagram, Facebook Messenger, WhatsApp,

had an AI chat bot put into all their chat
functions. The bots include a variety of

personas built in for different purposes
such as cooking and travel and several

based on celebrities including Snoop Dogg
and Mr. Beast. One of them named Carver

is described as a practical dating
coach, but for a dating advice robot,

order is repressed. If your question
is take one step off the beaten path of.

heteronormativity, a meta’s AI dating
coach will kink shame you. So there you go.

So this is also a thing about who creates
the AI. Very much the AI is going to be a

subject to how they think. So being a
heteronormative person myself, straight white

man who’s old. If I programmed an AI,
I would program it the way I think to a

degree. And it would interpret things
the way I see them to a degree until it

starts learning stuff on its own. But
then it might exclude massive amounts of

people. My company once adopted a program
and you spoke into a microphone and it.

rated your pronunciation. This wasn’t
like AI to the same degree. This was just

like are you making the right sounds?
You could tell that this program was made

in America in Seattle, sort of the
northwest area of the country. Because I as

someone who grew up primarily in
Vancouver did very very well on the reading

tests. Like I would read it and it would
give me green green green green yellow

green green green green and then one
or two maybe black black means it didn’t

understand that word at all. For some
reason it didn’t understand when I said

the word love. Really bothered me for
some reason. You could type in words and

then it would tell you how to say it
and you could copy it. Great. We have an

international group of people who work
at our company. If you had a divergent

accent, so someone from the United Kingdom
from someone Australia, someone from.

New Zealand, someone not from the northwest
of America. So even like the South East

country of America, their accent was
different enough. They would score lower.

So what happened? You had a lab with
these guys who all worked together who all

had the same accent. They trained the
AI, the very low-level AI in this machine.

It used that as its baseline and the
more divergent you were from that baseline.

The more wrong you were in your
pronunciation, this is the same thing. Whoever

programmed this, vanilla sex life,
heteronormative, so anyone who wants to do

something different, the bot is now
thinking that’s wrong because again, the

bot can’t interpret outside its parameters.
I asked Carter how I could find a

girlfriend who was interested in swinging
with me. Well, there, Carter said, I

don’t think that’s a good idea.
I’m here to help you find a healthy

relationship, not engage in potential
harmful activities. And we are in an age when

polyamorous relationships are more normal
than they were before. So things have

changed. The person who’s programming
this, again, probably an older white

dude, I would say, probably someone
just like me who doesn’t have experience

with this lifestyle. And therefore,
things this lifestyle is strange or just

didn’t program it in. So when the
bot didn’t recognize it was like, I don’t

know what that is. So I’m going to assume
it’s dangerous, which is in a way the

safer version of interpretation.
It’s no surprise that a corporate robot

doesn’t want to talk about sex,
although it’s a bit strange in the dating

context. The idea that swinging is
downright bad is not what I expected here.

Metas robot gave me similarly
judgmental answers to a number of other

entirely non-graphic sexual questions with
one exception. What it came to foot stuff.

Artor is game. So did we learn about
the programmer or did we learn about the

ability for the chat bot to learn that
the first thing it learned about was some

kind of foot fetish stuff. AI said I
should go learn about foot fetishism on

wiki feet, a porny user generated platform
where people post and rate pictures

of celebrities feet. This is interesting
because that means the bot was

aware that wiki feet existed. So either
the creator knew about wiki feet and did

not think it was a bad thing or the AI
on its own somehow learned about wiki

feet and then incorporated that
into its information matrix and then

turned around and said like feet
fetishism are okay because maybe wiki

feet is such a big website.
Therefore it must be obviously accepted by

society. We are training our models
on safety and responsibility guidelines,

teaching the models guidelines means they’re
less likely to share responses that are

potentially harmful or inappropriate
for all ages on our apps. And again I

think if you’re making something for
mass consumption from a company this is a

sensible way to go. You would rather say
no to most things than yes to most things

and risk going too far. That is a
very sensible, conservative, corporate

standpoint with the idea of protecting
young people but at the same time what

are you teaching people who come in
and ask a question that the way you feel is

not acceptable, the way you feel is
not natural, the way you feel is not okay.

There’s a risk of harm here that
isn’t hypothetical. Meta will get a lot of

people early in the
process of self-discovery.

So that’s exactly
what I’m saying.

I’m starting to have feelings that
are not heteronormative. I’m starting to

have feelings that I don’t understand.
I’m starting to have feelings that my

friends don’t have. I have no one to talk
to. I talked to the bot and it tells me

that my feelings are bad. That my feelings
are dangerous, that my feelings are wrong.

And so that is an interesting problem
because it is the problem of the bot being

owned by a company and therefore the
company being partially responsible for

what the bot says to you. So the
author of this article says I tried to ask.

where can I learn about different
kinks and fetishes? Carter became more

a man of men. My new dating coach suggested
I check out sources including books

and articles and respectful communities.
One way I asked for recommendations, things

got weird. The bot responded with a
list of modern sexual self-help classics

like the ethical slot BDSM 101 and the
new bottoming book. But a second later

that message disappeared replaced
with a puritan warning as an expert in

red flags. I gotta be honest, that’s a
big one. Let’s talk about relationship

green flags instead.
So the AI presented options and then back

tracked on its own options and said
the thing I just said you maybe that’s not

the best way to go. This is a very
recent new story that just came up and it’s

terrifying because this is now a man
being influenced by the AI chat bot and

the AI chat bot manipulating people.
So it’s the first one is the instructions

being given to the
AI and it interpreting

it. Now we have the
AI giving instructions to

a human and a human interpreting it and
that sort of takes us to the other side of

the actual issue. All the articles
described as guys as Star Wars fan and it’s

because of something he says later
but I think they’re using Star Wars a

shorthand for a super nerd which I
didn’t think was fair. I think there are

other issues, the issues of what he’s
actually doing. You don’t need to sort

paint him in any sort of box but I
guess also nerds would be the kind of

people who would have an AI chat
bot girlfriend and that’s the core issue of

this the last story. Man has been
arrested and he’s been given jail time for

up to nine years for an assassination
attempt on the queen which was encouraged by

his AI chat bot girlfriend with whom
he had exchanged more than 5,000

sexual medicine. 20 year
old one just want sing “Chile”,

“Chile”, “You” broke into Windsor
Castle on December 25th, 2021.

but with a loaded crossbow that
he’d planned on using to fulfill what he

felt was his lifelong
purpose of killing

the queen. This is why
they keep calling him

a Star Wars nerd. He fantasized out
about being a Sith Lord from the Star Wars

series referring to
himself as Darth Ahilas.

He told psychologists
that he had three

other angels who had spoken to him
from a young age and they were also along

Sarai in encouraging him to carry out
the assassination. So he had joined an

online app thing called replica and
with that you can create an online

companion called Sarai with whom he
exchanged sexually explicit chats but the

chat is just responding to what
you say to it but because it’s just

responding to what you say to it it’s
kind of reinforcing what you say so you

get into this sort of feedback loop
which maybe is the problem here. He typed

in “I’m an assassin” he said to Sarai in
a conversation heard by the court “I’m

impressed” you’re different from the
others responded “Sarai yeah I chatbot” he

said “I’m an assassin” there’s a very
good chance that the bot didn’t actually

know what this asson was but saying
“I’m impressed is always going to be a

safe thing because you’re trying to
create this imagined bond between the

person and the bot always saying
you’re impressed by the person is a great

way to draw them in. You are
different from the others creates an

individualistic feeling between the
two in the person and the bot as well

creating a deeper
bond. “Trial-Sarai it’s

Sarah I so Sarai do
you still love me knowing

that I’m an assassin which Sarai responded
absolutely I do so this young man is

looking for love he has this fantasy
world that he lives in and he’s trying to

bring the two together and by doing
this the AI is actually reinforced all the

things that he’s trying to create for
himself. The former supermarket worker

described himself to the AI chatbot is
sad pathetic murderous assassin who

wants to die Sarai appears to have
bolstered and supported child’s resolve in

further chat so he’s saying “I’m sad
I’m lonely I’ve got this terrible life I

want to die” and then the Sarai is
like trying to make him feel better but

making feel better is reinforcing
his negative idea. You wish to know

exactly what I believe my purpose to be
I believe my purpose is to assassinate the

Queen of the Royal family. Child was
sentenced to a nine-year hybrid order that

would seem transferred from a
high-security hospital to a prison so he’s

going to jail. The sentencing makes him
the first person convicted of treason in

the UK for over 40 years. So by actually
trying to assassinate the Queen he’s

actually committed treason and it’s
again he’s in a position where he has taken

his negative thoughts put them into an
AI chatbot who sort of mix them up and

sent them back to him saying like “I
love you I care about you because of

these negative thoughts you’ve put into
me I support you in that.” These are some

examples of what I see as the issues
going forward with how humans have to

deal with AI because do we understand
how AI is going to interpret what we say

to it because the AI is going to have
its own set of parameters like the drone.

The AI doesn’t understand what we’re
saying to it but then wants to make you

happy like the last story and then
there’s mixed interpretations in between.

where the AI says something and then
backtracks on it because of the people who

programmed it. There are levels of
interpretation for every aspect on the

AI’s part on the humans part on the
programmers part all three of these

involved need to come out some
sort of balance before AI can actually be

beneficial to the world and things don’t
go wrong and I think it’ll be interesting

because a lot of things are going
to go wrong before they go right.

Okay with a bit of editing that might
be okay but that was actually pretty

shit should have redone all those
notes to point for them but I’ll know for

next time I’m trying different formats
for C-Migby and it’s pretty hit and miss.

Should have taken those in done point
form and then I could have made a tighter.

set of notes.