
PHIL 2037G – Bi-Weekly Reflection Instructions
Format
• Length: 350–500 words
o Include the word count at the end of the document
o Quoted text and citation information do not count towards the word count
• Submit using ".docx" or ".pdf" format
o Corrupted documents do not count as submitted
• Cite any idea that is not your own, even if you are not quoting an author. Do so using parenthetical citations with a page number and year if available, e.g., (Nguyen 2020, 320)
o If you are only using texts from class, you do not need a bibliography page. However, if you are using external sources, then those must be listed in a bibliography.
• Use 12-point Garamond font, 1.5 spacing, and justified alignment (this means the lines are all the same length / the paragraphs are square to the page).
• Put your name and the name of the assignment at the top.
Specification Checklist
• All instructions, including formatting instructions, have been followed.
• The reflection is written in response to one of the readings from the previous two weeks and engages significantly with its content.
• The reflection has a "they say/I say" structure: it first introduces an idea, view, or claim from a reading and then develops a response to or reflection on that idea, view, or claim.
• The reflection develops a substantive response to the reading.
• The reflection is written in a way that is generally clear and easy to follow.
• The characterization of the views in the readings is generally accurate.
Instructions
+Formatting instructions have been followed

In order for the assignment to count as complete, you must follow all of the formatting instructions. This means (1) satisfying the length constraints (over 350 words and under 500 words) and including a word count, (2) using Garamond font, and (3) using justified alignment. The most common errors on this assignment are forgetting to include a word count and not using justified alignment. Here is a link that explains how to justify text in Word:
https://support.microsoft.com/en-us/office/align-or-justify-text-b9096ed4-7323-4ff3-921a1ba7ba31faf1
Here is a link that explains how to do it in Google Docs:

Justification in Google Drive / Docs Tutorial

Please note that your word count should not include quotes.
+The reflection is written in response to one of the readings
Your reflection should focus on one of the readings, podcasts, or video clips made available for the past 2 weeks. (Note: some readings aren't appropriate subjects for these reflections, like short news items, and these will be clearly marked on OWL.)
+The reflection has a "They say/I say" structure
Your reflection should be written in a "They say/I say" structure. (Note: This will be explained in class.) This means that you must first introduce and explain an idea or argument from the reading and then respond to this idea or argument with your own thoughts. This means that your reflection should have at least 2 paragraphs:
1. The first paragraph should introduce and explain an idea or argument from the reading. This paragraph should be about 3-5 sentences and engage with the details of the readings. You want to make sure you explain the idea adequately, as if you were explaining it to a friend.
2. The second (or third, or fourth) paragraph should respond to this idea or argument. Your response should not only present what you think about the topic but also explain why you think this.
Please note that you should not take this to mean you should only use 2 paragraphs. You should use as many paragraphs as necessary. What it does mean is that you need to use at least 2 paragraphs. Also, this is a starting point. Once you get the hang of this format you can add your own voice and framing.
+The reflection develops a substantive response to the reading.
Your reflection must present and elaborate your own view. And this view and elaboration must be
substantive. It should not just be an expression of mere agreement or disagreement, or mere
approval or disapproval.
Possible types of responses:
1. Exploring an assumption or belief that a reading led you to reevaluate;
2. Exploring a realization that you had as a result of the reading;
3. Discussing a change in practice that you were motivated to undertake (e.g., using social media less) as a result of one of the readings;
4. Identifying a feature of your life or experience that one of the readings helped you make sense of;
5. Responding to one of the readings in light of your own experience and understanding of the world;
6. Responding directly to a point or argument made by the author.
NOTE: this list is not exhaustive.
+The reflection is written clearly and is easy to follow
Your reflection should be clearly written and easy to follow. This means the sentences are well constructed and follow naturally from each other. Do check for grammar and typos, as these can often make your meaning unclear (it's not just about breaking "rules" but about communicating with your reader). The most common issue I've encountered in student writing is that students need to break up their paragraphs. In general, each new idea or point should have its own paragraph. Typically, if a paragraph is over 5 or 6 sentences long, you should consider breaking it up.
+The reflection is generally accurate
Your reflection should be accurate to the reading. This means your interpretation should be grounded in the text and reflect a charitable reading of the author's view.
BEFORE SUBMITTING, PLEASE CHECK THAT YOUR ASSIGNMENT MEETS ALL
OF THE STANDARDS SET OUT IN THE “SPECIFICATION CHECKLIST”
A Quick History of AI
PHIL 2037G: Philosophy and AI
Agenda
1. A rough overview of the historical developments that led to modern AI
   1. WW2 and code-breaking
   2. How this led to ideas of a "thinking machine"
   3. The basis of automating math & logic
   4. What this meant for thinking about intelligence
WWII
During WWII, computation became a major focus
of research for two reasons:
1. Trajectory calculation
2. Code breaking
Trajectory calculation
ENIAC: used for trajectory calculations
An electronic way to automate rote mathematical calculations
Code breaking
Enigma: used by German armed forces to encode messages
Uses a "substitution cipher" with an evil twist
Code breaking
Each letter becomes a new letter; but the substitution cipher changes for each letter
10^23 – 10^114 possible combinations
Code breaking
Could take a computer doing a billion calculations/second a million years (on the low end) to break the code by trying every combination.
But, if one could make some guesses about the substitutions, it could reduce the possibilities.
And the structure of the Enigma machine itself imposed some restrictions on the possibilities.
Code breaking
You can think of it like solving a giant
Sudoku puzzle
Once you know what some of the
squares are, the structure of the grid
and the rules of the puzzle let you
figure out more and more
Code breaking
In practice, by making a guess as to the locations of common phrases ("Heil Hitler", or the day and date), the possibilities could be reduced to around 35,000.
Calculating at 120 operations/minute (too fast for a human, but within reach for the machines of the day), it took an average of 3-4 hours to break the code.
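To make the idea of "guessing a common phrase" concrete, here is a minimal sketch (not real Enigma cryptanalysis) of how a guessed phrase, or "crib", plus one structural fact about the machine (an Enigma never encrypts a letter to itself) rules out most of the places the phrase could sit in a message. The intercepted text below is invented purely for illustration.

```python
# Illustrative sketch only: how a crib plus a structural constraint prunes possibilities.

def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Return offsets where the crib could align with the ciphertext.

    Any offset where some crib letter lines up with an identical ciphertext
    letter is impossible, because Enigma never maps a letter to itself.
    """
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(offset)
    return positions

# Hypothetical intercepted message (made-up letters, for illustration only)
intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"
crib = "WETTER"  # German for "weather", a commonly guessed word

print(possible_crib_positions(intercept, crib))
```

Each offset that survives this filter is a candidate worth testing further; everything else is discarded without any expensive search.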
Code breaking
That gave the impetus to build such machines.
And to fund their development (and the current
scientific funding structure in the US is a legacy
of this time).
Code breaking
The interesting psychological point, however,
was that these machines were automating
mathematics and (more importantly) LOGIC.
This led straight to the idea that it might be possible to build thinking machines.
AI and logic: a slightly longer view
This idea had been a long time developing.
AI and logic: a slightly longer view
Back in the 1840s, Ada Lovelace saw that Charles Babbage?s
?Analytical Engine? ? which could, in theory, solve
arithmetical and algebraic equations, could be applied to
many more domains of human life
AI and logic: a slightly longer view
Alan Turing's later efforts to formalize the relationship between machines and computation made it clear that general computers were possible not just in theory, but in fact.
Turing and Shannon
Alan Turing, who worked on the Enigma code breaking,
had proven that it was possible in principle to build a
machine that could compute anything computable
(1936).
And Claude Shannon (founder of information theory)
had shown that formal logic was computable (1937).
Math, circuits, and computation
0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10 (aka 2)
Using binary numbers, a small number of circuit designs allowed
one to add, subtract, multiply, divide any two numbers, and do a
lot more.
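A minimal sketch of that point: with a handful of logic "circuits" (here, Python's XOR, AND, and OR operators standing in for gates), you can add any two binary numbers. This is a standard full-adder construction, written out for illustration.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in                         # XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))   # AND/OR gates
    return s, carry_out

def add_binary(x: list[int], y: list[int]) -> list[int]:
    """Add two equal-length bit lists (least significant bit first)."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 1 + 1 = 10 in binary, just as on the slide
print(add_binary([1], [1]))  # [0, 1] -> read back-to-front as binary "10"
```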
Logic and computation
But 1 and 0 aren't just numbers, they can also represent true and false.
Logic and computation
Just as in arithmetic, there are a limited number of logical transformations one can perform.
All one needs is (as sketched below):
1. the input registers,
2. the set of circuits,
3. a controller to choose which circuit to connect the registers to,
4. the output register, and
5. a way to copy the output register to the input register (for repeated applications).
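Here is a toy sketch of those five ingredients in software: a set of circuits, a controller that picks one, and an output that gets copied back to the input for the next step. It is purely illustrative; real hardware does this with wires and gates rather than Python dictionaries.

```python
# Toy register-and-circuits machine (illustrative only).
CIRCUITS = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def run(program: list[tuple[str, int]], input_register: int) -> int:
    """Each program step names a circuit and a second operand."""
    for circuit_name, operand in program:       # the "controller"
        circuit = CIRCUITS[circuit_name]        # choose a circuit
        output_register = circuit(input_register, operand)
        input_register = output_register        # copy output back to input
    return input_register

# Apply XOR then AND to the starting value 0b1010
print(bin(run([("XOR", 0b0110), ("AND", 0b1100)], 0b1010)))  # 0b1100
```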
Logic and reasoning
With this, it is possible to automate any logical derivation.
So, if human reasoning = logic, human reasoning is itself computable.
Or, even if it isn't logic, it might be based on some other computable system.
Logic and reasoning
This realization completely changed the terms of the debate in
psychology over what thinking was, and how to study it.
No longer: what are the laws that relate stimulus and behavior
(Behaviorism)
Now: which aspects of intelligence are theoretically computable,
and how is that computation accomplished?
Logic and reasoning
This explicitly drew attention to such things as:
• The design of inner circuits
• Inner processes like encoding, storage, transformation, and retrieval
Speaking of inner circuits…
But what about the brain?
In fact, in parallel with these developments came a seminal paper:
"A logical calculus of the ideas immanent in nervous activity" by McCulloch & Pitts
Speaking of inner circuits…
McCulloch & Pitts explicitly united an idea from neuroscience (neurons are individual units joined by synapses) with a version of propositional logic developed by Bertrand Russell.
Speaking of inner circuits…
McCulloch & Pitts' model of the neuron
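A simplified sketch in the spirit of their model: a unit takes binary inputs, sums them (with weights), and "fires" (outputs 1) when the sum reaches a threshold. With the right weights and thresholds, such units behave like logical operators, which is exactly the bridge McCulloch & Pitts drew between neurons and logic.

```python
def mp_neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire (return 1) iff the weighted sum of inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# An AND unit: both inputs must be on (threshold 2)
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", mp_neuron([x, y], weights=[1, 1], threshold=2))

# An OR unit: either input suffices (threshold 1)
print(mp_neuron([0, 1], weights=[1, 1], threshold=1))  # 1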
Speaking of inner circuits…
But technology had to catch up.
As with so much in the history of AI, it depended not just on good ideas (formalizations), but also on good technology.
In the 1950s, much better computers started to be built, and people got a bit (over-)excited.
Mind as computer
John McCarthy, 1955, "The Dartmouth Conference":
"We propose that a 2 month, 10 [person] study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
Mind as computer
Herbert Simon: "Over Christmas, Al Newell and I invented a thinking machine" (1956)
(with Allen Newell)
A Quick History of AI
Part 2
PHIL 2037G: Philosophy and AI
Agenda
• "We've invented a thinking machine!" – or not
  • What were these early researchers after?
  • What strategies were they deploying?
  • What assumptions did they make?
  • ...and what were the limits of those strategies/assumptions?
• Neural Networks as an alternative model
  • An old idea that has seen a big resurgence
  • Expert Systems vs. "learning" machines
  • A "black box" that is invisibly around us
Mind as computer
Herbert Simon: "Over Christmas, Al Newell and I invented a thinking machine" (1956)
(with Allen Newell)
Logic Theorist
• That program was the Logic Theorist
• It introduced the notion that reasoning is a search of a set of possible states
• In particular, it used a tree structure to represent possible logical moves
• It then searched the space of possible moves until it reached the goal state
Logic Theorist
Same idea powers computer chess "AI"
• These "expert systems" can be thought of as a big flow chart
• A complicated series of "if, then" statements arranged in a "decision tree"
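A toy sketch of that search idea (not the actual Logic Theorist, whose states were logical formulas): states form a tree, each legal "move" generates new states, and the program searches until it reaches the goal. The little puzzle here (reach 11 from 1 using the moves "+3" and "*2") is invented purely for illustration.

```python
from collections import deque

def successors(state: int) -> list[tuple[str, int]]:
    """The legal moves from a state, as (move name, new state) pairs."""
    return [("+3", state + 3), ("*2", state * 2)]

def search(start: int, goal: int) -> list[str]:
    """Breadth-first search of the state space; returns the moves to the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move, nxt in successors(state):
            if nxt not in seen and nxt <= goal * 2:   # prune runaway branches
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return []

print(search(1, 11))  # ['+3', '*2', '+3'] : 1 -> 4 -> 8 -> 11
```

Chess programs and expert systems work on the same schema, just with vastly larger state spaces and cleverer pruning.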
Logic Theorist: early hype
Herb Simon: "[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind."
Shakey (1966)
Shakey models his environment, and performs calculations on the model to decide how to act in the world.
http://www.youtube.com/watch?v=qXdn6ynwpiI
Shakey (1966)
• Shakey was designed to push blocks around in a highly simplified environment. He could, indeed, accomplish tasks like stacking the red box on the blue box, or blocking doors with boxes. That is to say, Shakey works: if you told him to think of his house, and how to get there, he would know how to do it.
• And the way he does it is he keeps a model of the environment, makes plans using the model, and then acts in the world.
"Expert systems" and Intelligence
• What assumptions does this make about intelligence?
• Intelligence is a matter of logic and calculation, often relying on "expert knowledge", and making the best decisions given a set goal.
General AI and Narrow AI
• If this is what you take "intelligence" to be, then it can be displayed (or simulated) by code or machines.
• The hard part is putting all our existing "expert" knowledge into code.
General AI and Narrow AI
• Setbacks, both technical and theoretical, led to what's been called the "AI Winter", where research stalled, funding dried up, and progress ground to a halt.
General AI and Narrow AI
• But AI systems, after some contingent historical developments, have popped up all around us.
• Not found in robots, but on the web, and on your phone.
Neural Networks
Neural Networks: an alternate approach
• Rosenblatt's Perceptron (1957): "a perceiving and recognizing automaton"
Neural Networks
• Have generally been applied to classification problems.
• For instance: does this YouTube video contain cats or not?
• Are not programmed; they are trained by changing the weights.
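A minimal sketch of what "trained, not programmed" means: a single Rosenblatt-style perceptron nudges its weights whenever it misclassifies an example, and the classification rule emerges from those adjustments rather than from hand-written if/then statements. The tiny "is the point above the line?" dataset below is invented purely for illustration.

```python
def predict(weights: list[float], bias: float, x: list[float]) -> int:
    """Fire (1) if the weighted sum plus bias is non-negative, else 0."""
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total >= 0 else 0

def train(data, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: adjust weights only on mistakes."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            error = y - predict(weights, bias, x)    # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Label a point 1 if its second coordinate exceeds its first, else 0
data   = [[0.0, 1.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.5]]
labels = [1, 1, 0, 0]
weights, bias = train(data, labels)
print([predict(weights, bias, x) for x in data])  # [1, 1, 0, 0]
```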
Neural Networks
• People knew early on that networks of perceptrons would be much more powerful
• But for a long time nobody could figure out how to train a network
Neural Networks
• One big development: a technique called "backpropagation" was popularized by Rumelhart, Hinton & Williams in 1986
• Just as important were contingent historical changes in data storage, processing power, and much else
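A minimal sketch of the backpropagation idea (not the 1986 paper's exact setup): run a tiny two-layer network forward, push the error backwards through the chain rule to get a gradient for every weight, and take a small gradient-descent step. The input, target, and starting weights below are arbitrary numbers chosen only to show the mechanics.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])          # one input example
target = 1.0
W1 = rng.normal(size=(2, 2))       # input -> hidden weights
W2 = rng.normal(size=(2,))         # hidden -> output weights

for step in range(5):
    # forward pass
    h = sigmoid(W1 @ x)            # hidden activations
    y = sigmoid(W2 @ h)            # network output
    loss = 0.5 * (y - target) ** 2

    # backward pass (chain rule, layer by layer)
    dy = (y - target) * y * (1 - y)      # error at the output's pre-activation
    dW2 = dy * h                         # gradient for output weights
    dh = dy * W2 * h * (1 - h)           # error pushed back to the hidden layer
    dW1 = np.outer(dh, x)                # gradient for hidden weights

    # one small gradient-descent step
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1
    print(f"step {step}: loss = {loss:.4f}")   # loss should shrink step by step
```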
Neural Networks
?
By 1989, a neural net was applied to the
problem of human handwriting
recognition
Neural Networks
• Today, successors to that simple system (after a number of technical advances that will get us too far into the weeds) are behind:
  • Facebook facial recognition
  • Automatic video classification
  • Google Translate
• In the form of so-called "deep learning"
Invisible AI all around us
• AI systems are everywhere.
  • In your phone
  • On your social media feeds
  • In your search results
  • Transcribing voice to text
  • Recognizing faces (and other images)
  • Detecting fraud
  • Etc. etc. etc.
A Quick History of AI
Part 3
PHIL 2037G: Philosophy and AI
Agenda
• The success of Neural Networks as a type of Narrow AI
  • Intelligence located in algorithms that aren't programmed but "learn" from data, "invisibly" around us in many services
• But what are Algorithms?
  • Quick answer: a set of instructions.
  • So how did we get from a recipe to a "black box"?
• And where are we going?
Neural Networks
• Today, successors to that simple system (after a number of technical advances that will get us too far into the weeds) are behind:
  • Facebook facial recognition
  • Automatic video classification
  • Google Translate
• In the form of so-called "deep learning"
Invisible AI all around us
• AI systems are everywhere.
  • In your phone
  • On your social media feeds
  • In your search results
  • Transcribing voice to text
  • Recognizing faces (and other images)
  • Detecting fraud
  • Etc. etc. etc.
AI is not (just) abstract intelligence, but also hardware.
It is embodied or relies on physical infrastructure and, as we'll discuss in future weeks, human beings too.
Algorithms
• Algorithm: a finite, definite, determinate, effective mapping of input to output
• Finite: it must end after a finite number of steps
• Definite: each step must be precisely defined
• Determinate: each step must provide the same output with the same inputs
• Effective: it must lead to the correct solution
Algorithm
• "To summarize … we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in a finite time." (Stone 1972)
A bit more…
• Substrate Neutral: depends only on its logical structure (amenable to functional abstraction)
• Mindless: sufficiently simple to be performed by a mechanism
• Guaranteed: always gives the same results on the same data/input
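A small concrete illustration of those properties, using Euclid's algorithm for the greatest common divisor (a standard example, not one from the readings): each step is precisely defined (definite), the same inputs always give the same answer (determinate/guaranteed), it ends after finitely many steps (finite), it yields the right answer (effective), and it runs unchanged on any machine (substrate neutral).

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor by repeated remainders (Euclid's algorithm)."""
    while b != 0:               # the remainder shrinks, so this must terminate
        a, b = b, a % b         # one precisely defined, mindless step
    return a

print(gcd(48, 36))   # 12
print(gcd(48, 36))   # 12 again: same input, same output
```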
Computation
• A (classical) computer, then, is simply this:
  • A mechanism (automaton) that represents data (inputs) and is capable of performing an algorithm on it
  • Represents: symbols with content expressed in some vehicle
  • Vehicle can be:
    • The status of valves
    • The voltages on wires
    • Code
  • The same algorithm can be multiply realized on different sorts of physical structures
Computation
0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10 (aka 2)
A small number of elementary operators can be combined to compute any computable function.
Computers
• A mechanism (automaton) that represents data (inputs) and is capable of performing an algorithm on it:
  • Algorithm capable: able to implement elementary operations and pass through a series of appropriate states as dictated by the rules and the data
  • The elementary operations can be mathematical, logical, or really any function that can be implemented
• The simplest sort of mechanism for following algorithms is called a finite state machine
Finite state machine
• Most basic definition: a mechanism with a finite number of states that will move between those states depending on input
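A minimal sketch of that definition: a fixed set of states plus a transition table saying which state to move to for each input. The turnstile example below (locked vs. unlocked) is a standard textbook illustration, not something from the readings.

```python
# Transition table: (current state, input) -> next state
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",   # paying unlocks it
    ("locked",   "push"): "locked",     # pushing a locked turnstile does nothing
    ("unlocked", "push"): "locked",     # passing through locks it again
    ("unlocked", "coin"): "unlocked",   # extra coins change nothing
}

def run_fsm(inputs: list[str], state: str = "locked") -> str:
    """Feed a sequence of inputs through the machine; return the final state."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run_fsm(["coin", "push", "push"]))   # 'locked'
```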
Finite state machines and computation
• Another such machine is the so-called Turing Machine, thought up by Alan Turing, whom we met earlier.
• A Turing Machine consists of a finite state machine plus a tape that can provide symbolic inputs and receive symbolic outputs.
  • (Getting into the details would take us a bit afield.)
Turing machines
Also, it may not seem like it, but in fact your laptop computer (any digital computer) just IS a (general purpose) Turing machine.
It just has a different architecture (called a von Neumann architecture) and a very large number of possible states.
And, of course, it is programmable.
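A minimal sketch of the idea: a finite state machine plus a tape it can read, write, and move along. This toy machine (invented here for illustration; real Turing machines are defined more carefully) simply flips every bit on its tape and then halts.

```python
def run_turing_machine(tape: list[str]) -> list[str]:
    # Rules: (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
    }
    tape = tape + ["_"]        # a blank marks the end of the input
    state, head = "flip", 0
    while state != "halt":
        symbol, move, state = rules[(state, tape[head])]
        tape[head] = symbol
        head += move
    return tape

print(run_turing_machine(list("1011")))   # ['0', '1', '0', '0', '_']
```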
So, computation is simply:
• the manipulation of symbols according to formal rules in the form of an algorithm
But…
• With the advent of newer, bigger neural networks, and further developments like machine learning, algorithms have become a bit opaque.
  • That is, these systems are unlike a decision tree, and the nodes don't all function like logical operators.
• This is often referred to as a "black box," in that we don't know what happens in the middle layers of the network, only input and output.
Machine Learning
GPT-3
