observe that the two specs just given differ in that the first
maps quoted entities onto other quoted entities. the second has no quotes.
the first function maps symbols onto symbols; the second function maps the
numbers referred to by the arguments of the first function onto the numbers
referred to by the values of the first function. (a function maps arguments
onto values.) the first function is a kind of linguistic "reflection"
of the second. the key idea behind the adder is that of an isomorphism between these
two functions. the designer has found a machine which has physical aspects
that can be interpreted symbolically, and under that symbolic interpretation,
there are symbolic regularities: some symbols in inputs result in other
symbols in outputs. these symbolic regularities are isomorphic to rational
relations among the semantic values of the symbols of a sort that are useful
to us, in this case the relation of addition. it is the isomorphism between
these two functions that explains how it is that a device that manipulates
symbols manages to add numbers. now the idea of the brain as a syntactic engine driving a semantic engine
is just a generalization of this picture to a wider class of symbolic activities,
namely the symbolic activities of human thought. the idea is that we have
symbolic structures in our brains, and that nature (evolution and learning)
has seen to it that there are correlations between causal interactions among
these structures and rational relations among the meanings of the symbolic
structures. a crude example: the way we avoid swimming in shark-infested
water is that the brain symbol structure `shark' causes the brain symbol structure
`danger'. (what makes `danger' mean danger will be discussed below.) the primitive mechanical processors "know" only the "syntactic"
forms of the symbols they process (e.g., what strings of zeroes and ones
they see), and not what the symbols mean. nonetheless, these meaning-blind
primitive processors control processes that "make sense"--processes
of decision, problem solving, and the like. in short, there is a correlation
between the meanings of our internal representations and their forms. and
this explains how it is that our syntactic engine can drive our semantic
engine.3 the last paragraph mentioned a correlation between causal interactions
among symbolic structures in our brains and rational relations among the
meanings of the symbol structures. this way of speaking can be misleading
if it encourages the picture of the neuroscientist opening the brain, just
seeing the symbols, and then figuring out what they mean. such a
picture inverts the order of discovery, and gives the wrong impression of
what makes something a symbol. the way to discover symbols in the brain is first of all to map out rational
relations among states of mind, and then identify aspects of these states
that can be thought of as symbolic in virtue of their functions. function
is what gives a symbol its identity, even the symbols in english orthography,
though this can be hard to appreciate because these functions have been
rigidified by habit and convention. in reading unfamiliar handwriting, we
may detect an unorthodox symbol, someone's weird way of writing a letter
of the alphabet. how do we know which letter of the alphabet it is? by its
function! th% function of a symbol is som%thing on% can appr%ciat% by s%%ing
how it app%ars in s%nt%nc%s containing familiar words whos% m%anings w%
can gu%ss. you will have little trouble figuring out, on this basis, what
letter in the last sentence was replaced by `%'.

2.2 is a wall a computer?

john searle (1990) argues against the computationalist thesis that the
brain is a computer. he does not say that the thesis is false, but rather
that it is trivial, because, he suggests, everything is a computer; indeed,
everything is every computer. in particular, his wall is a computer
computing wordstar. (see also putnam, 1988, for a different argument for
a similar conclusion.) the points in the last section allow easy understanding
of the motivation for this claim and what is wrong with it. in the last
section we saw that the key to computation is an isomorphism. we arrange
things so that, if certain physical states of a machine are understood as
symbols, then causal relations among those symbol-states mirror useful rational
relations among the meanings of those symbols. the mirroring is an isomorphism.
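to make the isomorphism concrete, here is a minimal sketch in python (the voltage values and the two-level device are illustrative assumptions of mine, not anything described above) of a gadget whose physical input-output behavior, once two of its states are read as `0' and `1', mirrors addition modulo 2:

    # a toy "device": physically, just a mapping from pairs of voltage levels
    # to a voltage level. the particular numbers are made up for illustration.
    LOW, HIGH = 4.0, 7.0   # volts (hypothetical)

    def device(v1, v2):
        # the physical regularity: the output is HIGH exactly when the two
        # input voltages differ.
        return HIGH if (v1 == HIGH) != (v2 == HIGH) else LOW

    # the symbolic interpretation: LOW is read as `0', HIGH as `1'.
    interpret = {LOW: 0, HIGH: 1}
    realize = {0: LOW, 1: HIGH}

    # the isomorphism: for every pair of inputs, interpreting the device's
    # behavior symbolically gives the sum (mod 2) of the interpreted inputs.
    for a in (0, 1):
        for b in (0, 1):
            assert interpret[device(realize[a], realize[b])] == (a + b) % 2
    print("the device's causal transitions mirror addition modulo 2")

the loop finds no counterexamples: under the interpretation, every causal transition of the device corresponds to an arithmetical truth, and that correspondence is the mirroring in question.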
searle's claim is that this sort of isomorphism is cheap. we can regard
two aspects of the wall at time t as the symbols `0' and `1',
and then we can regard an aspect of the wall at time t + 1 as `1',
and so the wall just computed 0+1=1. thus, searle suggests, everything (or
rather everything that is big or complex enough to have enough states) is
every computer, and the claim that the brain is a computer has no bite. the problem with this reasoning is that the isomorphism that makes a
syntactic engine drive a semantic engine is more full-bodied than searle
acknowledges. in particular, the isomorphism has to include not just a particular
computation that the machine does perform, but all the computations
the machine could have performed. the point can be made clearer
by a look at figure 6, a type of x-or gate. (see o'rourke and shattuck,
forthcoming.)
figure 6: the numerals at the beginnings of arrows represent inputs.

the computation of 1 + 0 = 1 is represented by the path a-->c-->e. the computation
of 0+1 = 1 is represented by the path a-->b-->e, and so on. now here
is the point. in order for the wall to be this computer, it isn't enough
for it to have states that correspond to `0' and `1' followed
by a state that corresponds to `1'. it must also be such that had
the `1' input been replaced by a `0' input, the `1'
output would have been replaced by the `0' output. in other
words, it has to have symbolic states that satisfy not only the actual
computation, but also the possible computations that the computer
could have performed. and this is non-trivial.searle (1992, p. 209) acknowledges this point, but insists nonetheless
that there is no fact of the matter about whether the brain is a specific computer.
whether something is a computer, he argues, depends on whether we decide
to interpret its states in a certain way, and that is up to us. "we
can't, on the one hand, say that anything is a digital computer if we can
assign a syntax to it, and then suppose there is a factual question intrinsic
to its physical operation whether or not a natural system such as the brain
is a digital computer." searle is right that whether something is a
computer and what computer it is is in part up to us. but what the example
just offered shows is that it is not totally up to us. a rock, for
example, is not an x-or gate. we have a great deal of freedom as to how
to interpret a device, but there are also very important restrictions on
this freedom, and that is what makes it a substantive claim that the brain
is a computer of a certain sort.

3 functionalism and the language of thought

thus far, we have (1) considered functional analysis, the computer model
of the mind's approach to intelligence, (2) distinguished intelligence from
intentionality, and (3) considered the idea of the brain as a syntactic
engine. the idea of the brain as a syntactic engine explains how it is that
symbol-crunching operations can result in a machine "making sense".
but so far, we have encountered nothing that could be considered the computer
model's account of intentionality. it is time to admit that although the
computer model of the mind has a natural and straightforward account of
intelligence, there is no account of intentionality that comes along for
free.we will not survey the field here. instead, let us examine a view which
represents a kind of orthodoxy, not in the sense that most researchers believe
it, but in the sense that the other views define themselves in large part
by their response to it.the basic tenet of this orthodoxy is that our intentional contents are
simply meanings of our internal representations. as noted earlier, there is
something to be said for regarding the content of thought and language as
a single phenomenon, and this is a quite direct way of so doing. there is
no commitment in this orthodoxy on the issue of whether our internal language,
the language in which we think, is the same or different from the language
with which we speak. further, there is no commitment as to a direction of
reduction, i.e., as to which is more basic, mental content or meanings of
internal symbols.for concreteness, let us talk in terms of fodor's (1975) doctrine that
the meaning of external language derives from the content of thought, and
the content of thought derives from the meaning of elements in the language
of thought. (see also harman, 1973.) according to fodor, believing or hoping
that grass grows is a state of being in one or another computational relation
to an internal representation that means that grass grows. this can be summed
up in a set of slogans: believing that grass grows is having `grass grows.'
in the belief box, desiring that grass grows is having this sentence (or
one that means the same) in the desire box, etc.now if all content and meaning derives from meaning of the elements of
the language of thought, we immediately want to know how the mental symbols
get their meaning.4 this is a question
that gets wildly different answers from different philosophers, all equally
committed to the cognitive science point of view. we will briefly look at
two of them. the first point of view, mentioned earlier, takes as a kind
of paradigm those cases in which a symbol in the head might be said to covary
with states in the world in the way that the number of rings in a tree trunk
correlates with the age of the tree. (see dretske, 1981, stampe, 1977, stalnaker,
1984, and fodor, 1987, 1990.) on this view, the meaning of mental symbols
is a matter of the correlations between these symbols and the world. one version of this view (fodor, 1990) says that t is the truth condition
of a mental sentence m if and only if: m is in the belief box if and only
if t, in ideal conditions. that is, what it is for `grass is green' to have
the truth condition that grass be green is for `grass is green' to appear
in the belief box just in case grass really is green (and conditions are
ideal). the idea behind this theory is that there are cognitive mechanisms
that are designed to put sentences in the belief box when and only when
they are true, and if those cognitive mechanisms are working properly and
the environment cooperates (no mirages, no cartesian evil demons), these
sentences will appear in the belief box when and only when they are true.one problem with this idea is that even if this theory works for "observation
sentences" such as `this is yellow', it is hard to see how it could
work for "theoretical sentences." a person's cognitive mechanisms
could be working fine, and the environment could contain no misleading evidence,
and still, one might not believe that space is riemannian or that some quarks
have charm or that one is in the presence of a magnetic field. for theoretical
ideas, it is not enough to have one's nose rubbed in the evidence: you also
have to have the right theoretical idea. and if the analysis of ideal conditions
includes "has the right theoretical idea", that would make the
analysis circular because having the right theoretical idea amounts to "comes
up with the true theory". and appealing to truth in an analysis of
`truth' is to move in a very small circle. (see block, 1986, pp. 657-660.) the next approach is known as functionalism (actually, "functional
role semantics" in discussions of meaning) in philosophy, and as procedural
semantics in cognitive psychology and computer science. functionalism says
that what gives internal symbols (and external symbols too) their meanings
is how they operate. to maximize the contrast with the view described in
the last two paragraphs, it is useful to think of the functionalist approach
with respect to a symbol that doesn't (on the face of it) have any
sort of correlation with states of the world, say the symbol `and'. part
of what makes `and' mean what it does is that if we are sure of `grass is
green and grass grows', we find the inference to `grass is green' and also
`grass grows' compelling. and we find it compelling "in itself",
not because of any other principle. (see peacocke, 1993) or if we are sure
that one of the conjuncts is false, we find compelling the inference that
the conjunction is false too. what it is to mean and by `and' is
to find such inferences compelling in this way, and so we can think of the
meaning of `and' as a matter of its behavior in these and other inferences.
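as a crude sketch of my own (in python, and not anything drawn from the functionalist literature itself), part of the inferential role of `and' might be written down like this:

    # a toy rendering of part of the functional role of `and': what the
    # symbol "means" is captured, in part, by which inferences involving
    # it the system finds compelling. sentences are represented in a
    # deliberately simple-minded way, purely for illustration.

    def eliminate_and(conjunction):
        # from `p and q', infer p and infer q
        left, right = conjunction
        return [left, right]

    def refute_and(conjunction, known_false):
        # if either conjunct is known to be false, infer that the
        # conjunction is false too
        left, right = conjunction
        return left in known_false or right in known_false

    print(eliminate_and(("grass is green", "grass grows")))
    print(refute_and(("grass is green", "pigs fly"), known_false={"pigs fly"}))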
the functionalist view of meaning applies this idea to all words. the picture
is that the internal representations in our heads have a function in our
deciding, deliberating, problem solving--indeed in our thought in general--and
that is what their meanings consist in.this picture can be bolstered by a consideration of what happens when
one initially learns newtonian mechanics. in my own case, i heard a large number
of unfamiliar terms more or less all at once: `mass', `force', `energy',
and the like. i never was told definitions of these terms in terms i already
knew. (no one has ever come up with definitions of such "theoretical
terms" in observation language.) what i did learn was how to use
these terms in solving homework problems, making observations, explaining
the behavior of a pendulum, and the like. in learning how to use the terms
in thought and action (and perception as well, though its role there is
less obvious), i learned their meanings, and this fits with the functionalist
idea that the meaning of a term just is its function in perception,
thought and action. a theory of what meaning is can be expected to jibe
with a theory of what it is to acquire meanings, and so considerations about
acquisition can be relevant to semantics. an apparent problem arises for such a theory in its application to the
meanings of numerals. after all, it is a mathematical fact that truths in
the familiar numeral system `1',`2',`3'... are preserved,
even if certain non-standard interpretations of the numerals are adopted
(so long as non-standard versions of the operations are adopted too). for
example, `1' might be mapped onto 2, `2' onto 4, `3'
onto 6, and so on. that is, the numerals, both "odd" and "even",
might be mapped onto the even numbers. since `1' and `2' can have
the same functional role in different number systems and still designate
the very numbers they usually designate in normal arithmetic, how can the
functional role of `1' determine whether `1' means 1 or 2?
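to see how the truths come out preserved, here is a small check of my own (a sketch in python; the particular remapping is the one just mentioned, with the numeral `n' reinterpreted as standing for 2n):

    # non-standard interpretation: the numeral n denotes the number 2*n.
    def denote(numeral):
        return 2 * numeral

    # addition can be left alone, but multiplication has to be
    # reinterpreted along with the numerals (all denotations are even).
    def add(x, y):
        return x + y

    def mul(x, y):
        return (x * y) // 2

    # exactly the same equations come out true under the standard and the
    # non-standard readings.
    for a in range(10):
        for b in range(10):
            assert add(denote(a), denote(b)) == denote(a + b)
            assert mul(denote(a), denote(b)) == denote(a * b)
    print("the non-standard interpretation preserves the arithmetical truths")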
it would seem that all functional role could do is "cut down"
the number of possible interpretations, and if there are still an infinity
left after the cutting down, functional role has gained nothing.a natural functionalist response would be to emphasize the input
and output ends of the functional roles. we say "two cats"
when confronted with a pair of cats, not when confronted with one or five
cats, and our thoughts involving the symbol `3' affect our actions
towards triples in an obvious way in which these thoughts do not affect
our actions towards octuples. the functionalist can avoid non-standard interpretations
of internal functional roles by including in the semantically relevant
functional roles external relations involving perception and action (harman,
1973). in this way, the functionalist can incorporate the insight of the
view mentioned earlier that meaning has something to do with covariation
between symbols and the world.the emerging picture of how cognitive science can handle intentionality
should be becoming clear. transducers at the periphery and internal primitive
processors produce and operate on symbols so as to give them their functional
roles. in virtue of their functional roles (both internal and external),
these symbols have meanings. the functional role perspective explains the
mysterious correlation between the symbols and their meanings. it is the
activities of the symbols that give them their meanings, so it is no mystery
that a syntax-based system should have rational relations among the meanings
of the system's symbols. intentional states have their relations in virtue
of these symbolic activities, and the contents of the intentional states
of the system, thinking, wanting, etc., are inherited from the meanings of
the symbols. this is the orthodox account of intentionality for the computer
model of the mind. it combines functionalism with a commitment to a language
of thought. both views are controversial, the latter both in regard to its
truth and its relevance to intentionality even if true. note, incidentally,
that on this account of intentionality, the source of intentionality is
computational structure, independently of whether the computational structure
is produced by software or hardware. thus the title of this chapter, in
indicating that the mind is the software of the brain, has the potential
to mislead. if we think of the computational structure of a computer as
coming entirely from a program put into a structureless general purpose
machine, we are very far from the facts about the human brain--which is
not such a general purpose machine. at the end of this chapter, we will discuss searle's famous chinese room
argument, which is a direct attack on this theory. the next two sections
will be devoted to arguments for and against the language of thought.

3.1 objections to the language of thought theory

many objections have been raised to the language of thought picture.
let us briefly look at three objections made by dennett (1975). the first objection is that we all have an infinity of beliefs (or at
any rate a very large number of them). for example, we believe that
trees do not light up like fire-flies, and that this book is probably closer
to your eyes than the president's left shoe is to the ceiling in the museum
of modern art gift shop. but how can it be that so many beliefs are all
stored in the rather small belief box in your head? one line of response
to this objection involves making a distinction between the ordinary
concept of belief and a scientific concept of belief towards which
one hopes cognitive science is progressing. for scientific purposes, we
home in on cases in which our beliefs cause us to do something,
say throw a ball or change our mind, and cases in which beliefs are caused
by something, as when perception of a rhinoceros causes us to believe that
there is a rhinoceros in the vicinity. science is concerned with causation
and causal explanation, so the proto-scientific concept of belief is the
concept of a causally active belief. it is only for these beliefs
that the language of thought theory is committed to sentences in the head.
this idea yields a very simple answer to the infinity objection, namely
that on the proto-scientific concept of belief, most people did not have
the belief that trees do not light up like fire-flies until they read this
paragraph.beliefs in the proto-scientific sense are explicit, that is, recorded
in storage in the brain. for example, you no doubt were once told the
sun is 93 million miles away from the earth. if so, perhaps you have this
fact explicitly recorded in your head, available for causal action, even
though until reading this paragraph, this belief hadn't been conscious for
years. such explicit beliefs have the potential for causal interaction,
and thus must be distinguished from cases of belief in the ordinary sense
(if they are beliefs at all) such as the belief that all normal people have
that trees do not light up like fireflies.being explicit is to be distinguished from other properties of mental
states, such as being conscious. theories in cognitive science tell us of
mental representations about which no one knows from introspection, such
as mental representations of aspects of grammar. if this is right, there
is much in the way of mental representation that is explicit but not conscious,
and thus the door is opened to the possibility of belief that is explicit
but not conscious.it is important to note that the language of thought theory is not meant
to be a theory of all possible believers, but rather only of us.
the language of thought theory allows creatures who can believe without
any explicit representation at all, but the claim of the language of thought
theory is that they aren't us. a digital computer consists of a central
processing unit (cpu) that reads and writes explicit strings of zeroes and
ones in storage registers. one can think of this memory as in principle
unlimited, but of course any actual machine has a finite memory. now any
computer with a finite amount of explicit storage can be simulated by a
machine with a much larger cpu and no explicit storage, that is no
registers and no tape. the way the simulation works is by using the extra
states as a form of implicit memory. so, in principle, we could be simulated
by a machine with no explicit memory at all.consider, for example, the finite automaton diagrammed in figure 7. the
table shows it as having three states. the states, `s1', `s2',
and `s3', are listed across the top. the inputs are listed on
the left side. each box is in a column and a row that specifies what the
machine does when it is in the state named at the top of the column, and
when the input is the one listed at the side of the row. the top part of
the box names the output, and the bottom part of the box names the next
state. this is what the table says: when the machine is in s1,
and it sees a 1, it says "1", and goes to s2. when
it is in s2, if it sees a `1' it says "2" and
goes into the next state, s3. in that state, if it sees a `1'
it says "3" and goes back to s1. when it sees nothing,
it says nothing and stays in the same state. this automaton counts "modulo"
three, that is, you can tell from what it says how many ones it has seen
since the last multiple of three. but what the machine table makes clear
is that this machine need have no memory of the sort that involves writing
anything down. it can "remember" solely by changing state. some
theories based on neural network models (volume iv, ch 3) assume that we
are such machines.
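for concreteness, the machine table just described can be written out as follows (a sketch in python; only the table itself comes from the description above, the encoding is mine):

    # the three-state "modulo three" counter: (state, input) -> (output, next state).
    # a blank input ('') stands for the machine seeing nothing.
    TABLE = {
        ("s1", "1"): ("1", "s2"),
        ("s2", "1"): ("2", "s3"),
        ("s3", "1"): ("3", "s1"),
        ("s1", ""): ("", "s1"),
        ("s2", ""): ("", "s2"),
        ("s3", ""): ("", "s3"),
    }

    def run(inputs, state="s1"):
        outputs = []
        for symbol in inputs:
            output, state = TABLE[(state, symbol)]
            outputs.append(output)
        return outputs, state

    # the machine "remembers" how many ones it has seen since the last
    # multiple of three solely by which state it is in; nothing is written down.
    print(run(["1", "1", "", "1", "1"]))   # (['1', '2', '', '3', '1'], 's2')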
figure 7: finite automaton that counts "modulo" three.

suppose, then, that we are digital computers with explicit representations.
we could be simulated by finite automata which have many more states and
no explicit representations. the simulators will have just the same beliefs
as we do, but no explicit representations (unless the simulators are just
juke boxes of the type of the aunt bubbles machine described in 1.1). the
machine in which remembered items are recorded explicitly has an advantage
over a computationally equivalent machine that "remembers" by
changing state, namely that the explicit representations can be part of
a combinatorial system. this point will be explained in the next section.time to sum up. the objection was that an infinity of beliefs cannot
be written down in the head. my response was to distinguish between a loose
and ordinary sense of `belief' in which it may be true that we have an infinity
of beliefs, and a proto-scientific sense of `belief' in which the concept
of belief is the concept of a causally active belief. in the latter sense,
i claimed, we do not have an infinity of beliefs.even if you agree with this response to the infinity objection, you may
still feel dissatisfied with the idea that, because the topic has never
crossed their minds, most people don't believe that zebras don't wear underwear
in the wild. perhaps it will help to say something about the relation between
the proto-scientific concept of belief and the ordinary concept. it is natural
to want some sort of reconstruction of the ordinary concept in scientific
terms, a reconstruction of the sort we have when we define the ordinary
concept of the weight of a person as the force exerted on the person by
the earth at the earth's surface. to scratch this itch, we can give a first
approximation to a definition of a belief in the ordinary sense as anything
that is either (1) a belief in the proto-scientific sense, or (2) naturally
and easily deduced from a proto-scientific belief. a second objection to the language of thought theory is provided by dennett's
example of a chess-playing program that "thinks" it should get
its queen out early, even though there is no explicitly represented rule
that says anything like "get your queen out early". the fact that
it gets its queen out early is an "emergent" consequence of an
interaction of a large number of rules that govern the details of play.
but now consider a human analog of the chess playing machine. shouldn't
we say that she believes she should get her queen out early despite her
lack of any such explicit representation?the reply to this challenge to the language of thought theory is that
in the proto-scientific sense of belief, the chess player simply does not
believe that she should get her queen out early. if this seems difficult
to accept, note that there is no additional predictive or explanatory force
to the hypothesis that she believes she should get her queen out early beyond
the predictive or explanatory force of the explicitly represented strategies
from which getting the queen out early emerges. (though there is no additional
predictive force, there may be some additional predictive utility, just
as there is utility in navigation to supposing that the sun goes around
the earth.) indeed, the idea that she should get her queen out early can
actually conflict with her deeply held chess principles, despite being an
emergent property of her usual tactics. we could suppose that if you point
out to her that her strategies have the consequence of getting her queen
out early, she says "oh no, i'd better revise my usual strategies."
so postulating that she believes that she should get her queen out early
could lead to mistaken predictions of her behavior. in sum, the proto-scientific
concept of a causally active belief can be restricted to the strategies
that really are explicitly represented. perhaps there is a quasi-behaviorist ordinary sense of belief in which
it is correct to ascribe the belief that the queen should come out early
simply on the basis of the fact that she behaves as if she believes it.
even if we agree to recognize such a belief, it is not one that ever causally
affects any other mental states or any behavior, so it is of little import
from a scientific standpoint.a third objection to the language of thought theory is provided by the
"opposite" from the "queen out early" case, dennett's
sister in cleveland case. suppose that a neurosurgeon operates on a someone's
belief box, inserting the sentence "i have a sister in cleveland".
when the patient wakes up, the doctor says "do you have a sister?"
"yes", the patient says,microsoft office 2007 Enterprise upgrade key, "in cleveland." doctor: "what's
her name?" patient: "gosh, i can't think of it." doctor:
"older or younger?" patient: "i don't know, and by golly
i'm an only child. i don't know why i'm saying that i have a sister at all."
finally, the patient concludes that she never really believed she had a
sister in cleveland, but rather was a victim of some sort of compulsion
to speak as if she did. the upshot is supposed to be that the language of
thought theory is false because you can't produce a belief just by inserting
a sentence in the belief box. the objection reveals a misleading aspect of the "belief box"
slogan, not a problem with the doctrine that the slogan characterizes. according
to the language of thought theory, believing that one has a sister in cleveland
is a computational relation to a sentence, but this computational relation
shouldn't be thought of as simply storage. rather, the computational
relation must include some specification of relations to other sentences
to which one also has the same computational relation, and in that sense
the computational relation must be holistic. this point holds both for the
ordinary notion of belief and the proto-scientific notion. it holds for
the ordinary notion of belief because we don't count someone as believing
just because she mouths words the way our neurosurgery victim mouthed the
words "i have a sister in cleveland." and it holds for the proto-scientific
notion of belief because the unit of explanation and prediction is much
more likely to be groups of coherently related sentences in the brain than
single sentences all by themselves. if one is going to retain the "belief
box" way of talking, one should say that for a sentence in the belief
box to count as a belief, it should cohere sufficiently with other sentences
so as not to be totally unstable, disappearing on exposure to the light.

3.2 arguments for the language of thought

so it seems that the language of thought hypothesis can be defended from
these a priori objections. but is there any positive reason to believe it?
one such reason is that it is part of a reasonably successful research program.
but there are challengers (mainly, some versions of the connectionist program
mentioned earlier), so a stronger case will be called for if the challengers'
research programs also end up being successful.5 a major rationale for accepting the language of thought has been one
or another form of productivity argument, stemming from chomsky's
work (see chomsky, 1975.) the idea is that people are capable of thinking
vast numbers of thoughts that they have not thought before--and indeed that
no one may have ever thought before. consider, for example, the thought
mentioned earlier that this book is closer to you than the president's shoe
is to the museum gift shop. the most obvious explanation of how we can think
such new thoughts is the same as the explanation of how we can frame the
sentences that express them: namely, via a combinatorial system that we
think in. indeed, abstracting away from limitations on memory, motivation,
and length of life, there may be no upper bound on the number of thinkable
thoughts. the number of sentences in the english language is certainly infinite.
but what does it mean to say that sentences containing millions of words
are "in principle" thinkable?those who favor productivity arguments say this: the explanation for
the fact that we cannot actually think sentences containing millions of
words would have to appeal to such facts as that were we to try to think
sufficiently long or complicated thoughts, our attention would flag, or
our memory would fail us, or we would die. they think that we can idealize
away from these limitations, since the mechanisms of thought themselves
are unlimited. but this claim that if we abstract away from memory, mortality,
motivation, and the like, our thought mechanisms are unlimited, is a doctrine
for which there is no direct evidence. the perspective from which
this doctrine springs has been fertile, but it is an open question what
aspect of the doctrine is responsible for its success. after all, we might be finite beings, essentially. not all idealizations
are equally correct, and contrary to widespread assumption in cognitive
science, the idealization to the unboundedness of thought may be a bad one.
consider a finite automaton naturally described by the table in figure 7. its only form of memory is change of
state. if you want to get this machine to count to 4 instead of just to
3, you can't just add more memory, you have to give it another state by
changing the way the machine is built. perhaps we are like this machine. an extension of the productivity argument to deal with this sort of problem
has recently been proposed by fodor (1987), and fodor and pylyshyn (1988).
fodor and pylyshyn point out that it is a fact about humans that if someone
can think the thought that mary loves john, then she can also think the
thought that john loves mary. and likewise for a vast variety of pairs of
thoughts that involve the same conceptual constituents, but are put together
differently. there is a systematicity relation among many thoughts
that begs for an explanation in terms of a combinatorial system. the conclusion
is that human thought operates in a medium of "movable type".however, the most obvious candidate for the elements of such a combinatorial
system in many areas are the external symbol systems themselves.
perhaps the most obvious case is arithmetical thoughts. if someone is capable
of thinking the thought that 7 + 16 is not 20, then, presumably she is capable
of thinking the thought that 17 + 6 is not 20. indeed, someone who has mastered
the ten numerals plus other basic symbols of arabic notation and their rules
of combination can think any arithmetical thought that is expressible in
a representation that he can read. (note that false propositions can be
thinkable--one can think the thought that 2+2 = 5, if only to think that
it is false.)one line of a common printed page contains eighty symbols. there are
a great many different arithmetical propositions that can be written on
such a line--about as many as there are elementary particles in the universe.
though almost all of them are false, all of them are arguably thinkable
with some work. starting a bit smaller, try to entertain the thought that
695,302,222,387,987 + 695,302,222,387,986 = 2. how is it that we have so
many possible arithmetical thoughts? the obvious explanation for this is
that we can string together--either in our heads or on paper--the symbols
(numerals, pluses, etc.) themselves, and simply read the thought off the
string of symbols. of course, this does not show that the systematicity
argument is wrong. far from it, since it shows why it is right.
but this point does threaten the value of the systematicity argument
considerably. for it highlights the possibility that the systematicity argument
may apply only to conscious thought, and not to the rest of the iceberg
of unconscious thought processes that cognitive science is mainly about.
so fodor and pylyshyn are right that the systematicity argument shows that
there is a language of thought. and they are right that if connectionism
is incompatible with a language of thought, so much the worse for connectionism.
but where they are wrong is with respect to an unstated assumption: that
the systematicity argument shows that language-like representations pervade
cognition. to see this point, note that much of the success in cognitive science
has been in our understanding of perceptual and motor modules. the operation
of these modules is neither introspectible--accessible to conscious thought--nor
directly influencible by conscious thought. these modules are "informationally
encapsulated". (see pylyshyn (1984), and fodor (1983).) the productivity
in conscious thought that is exploited by the systematicity argument certainly
does not demonstrate productivity in the processing inside such modules.
true, if someone can think that john loves mary, then he can think that
mary loves john. but we don't have easy access to such facts about pairs
of representations of the form involved in unconscious processes. distinguish
between the conclusion of an argument and the argument itself. the conclusion
of the systematicity argument may well be right about unconscious representations.
that is, systematicity itself may well obtain in these systems. my
point is that the systematicity argument shows little about encapsulated
modules and other unconscious systems.the weakness of the systematicity argument is that, resting as it does
on facts that are so readily available to conscious thought, its application
to unconscious processes is more tenuous. nonetheless, as the reader can
easily see by looking at any cognitive science textbook, the symbol manipulation
model has been quite successful in explaining aspects of perception, thought,
and motor control. so although the systematicity argument is limited in
its application to unconscious processes, the model it supports for conscious
processes appears to have considerable application to unconscious processes
nonetheless. to avoid misunderstanding, i should add that the point just made does
not challenge all of the thrust of the fodor and pylyshyn critique of connectionism.
any neural network model of the mind will have to accommodate the fact of
our use of a systematic combinatorial symbol system in conscious thought.
it is hard to see how a neural network model could do this without being
in part an implementation of a standard symbol-crunching model. in effect, fodor and pylyshyn (1988, p. 44) counter the idea that the
systematicity argument depends entirely on conscious symbol manipulating
by saying that the systematicity argument applies to animals. for example,
they argue that the conditioning literature contains no cases of animals
that can be trained to pick the red thing rather than the green one,
but cannot be trained to pick the green thing rather than the red
one.this reply has some force, but it is uncomfortably anecdotal. the data
a scientist collects depend on his theory. we cannot rely on data collected
in animal conditioning experiments run by behaviorists--who after all, were
notoriously opposed to theorizing about internal states.another objection to the systematicity argument derives from the distinction
between linguistic and pictorial representation that plays a role in the
controversies over mental imagery. many researchers think that we have two
different representational systems, a language-like system--thinking in
words--and a pictorial system--thinking in pictures. if an animal that can
be trained to pick red instead of green can also be trained to pick green
instead of red, that may reflect the properties of an imagery system shared
by humans and animals, not a properly language-like system. suppose fodor
and pylyshyn are right about the systematicity of thought in animals. that
may reflect only a combinatorial pictorial system. if so, it would suggest
(though it wouldn't show) that humans have a combinatorial pictorial system
too. but the question would still be open whether humans have a language-like
combinatorial system that is used in unconscious thought. in sum, the systematicity
argument certainly applies to conscious thought, and it is part of a perspective
on unconscious thought that has been fertile, but there are difficulties
in its application to unconscious thought.

3.3 explanatory levels and the syntactic theory of the mind

in this section, let us assume that the language of thought hypothesis
is correct in order to ask another question: should cognitive science explanations
appeal only to the syntactic elements in the language of thought (the `0's
and `1's and the like), or should they also appeal to the contents
of these symbols? stich (1983) has argued for the "syntactic theory
of mind", a version in the computer model in which the language of
thought is construed in terms of uninterpreted symbols, symbols that may
have contents, but whose contents are irrelevant for the purposes
of cognitive science. i shall put the issue in terms of a critique of a
simplified version of the argument of stich (1983). let us begin with stich's case of mrs. t, a senile old lady who answers
"what happened to mckinley?" with "mckinley was assassinated,"
but cannot answer questions like "where is mckinley now?", "is
he alive or dead?" and the like. mrs. t's logical facilities are fine,
but she has lost most of her memories, and virtually all the concepts that
are normally connected to the concept of assassination, such as the concept
of death. stich sketches the case so as to persuade us that though mrs.
t may know that something happened to mckinley, she doesn't have any real
grasp of the concept of assassination, and thus cannot be said to believe
that mckinley was assassinated.the argument that i will critique concludes that purely syntactic explanations
undermine content explanations because a syntactic account is superior to
a content account. there are two respects of superiority of the syntactic
approach: first, the syntactic account can handle mrs. t, who has little
in the way of intentional content, but plenty of internal representations
whose interactions can be used to explain and predict what she does, just
as the interactions of symbol structures in a computer can be used to explain
and predict what it does. and the same holds for very young children, people
with weird psychiatric disorders, and denizens of exotic cultures. in all
these cases, cognitive science can (at least potentially) assign internal
syntactic descriptions and use them to predict and explain, but there are
problems with content ascriptions (though, in the last case at least, the
problem is not that these people have no contents, but just that their contents
are so different from ours that we cannot assign contents to them in our
terms). in sum, the first type of superiority of the syntactic perspective
over the content perspective is that it allows for the psychology of the
senile, the very young, the disordered, and the exotic, and thus, it is
alleged, the syntactic perspective is far more general than the content
perspective. the second respect of superiority of the syntactic perspective is that
it allows more fine-grained predictions and explanations than the
content perspective. to take a humdrum example, the content perspective
allows us to predict that if someone believes that all men are mortal, and
that he is a man, he can conclude that he is mortal. but suppose that the
way this person represents the generalization that all men are mortal to
himself is via a syntactic form of the type `all non-mortals are non-men';
then the inference will be harder to draw than if he had represented it
without the negations. in general, what inferences are hard rather than
easy, and what sorts of mistakes are likely will be better predictable from
the syntactic perspective than from the content perspective, in which all
the different ways of representing one belief are lumped together.the upshot of this argument is supposed to be that since the syntactic
approach is more general and more fine-grained than the content approach,
content explanations are therefore undermined and shown to be defective.
so cognitive science would do well to scrap attempts to explain and predict
in terms of content in favor of appeals to syntactic form alone. but there is a fatal flaw in this argument, one that applies to many
reductionist arguments. the fact that syntactic explanations are better
than content explanations in some respects says nothing about whether content
explanations are not also better than syntactic explanations in some
respects. a dramatic way of revealing this fact is to note that if the argument
against the content level were correct, it would undermine the syntactic
approach itself. this point is so simple, fundamental, and widely applicable,
that it deserves a name; let's call it the reductionist cruncher. just as
the syntactic objects on paper can be described in molecular terms, for
example as structures of carbon molecules, so the syntactic objects in our
heads can be described from the viewpoint of chemistry and physics.
but a physico-chemical account of the syntactic objects in our head will
be more general than the syntactic account in just the same way that the
syntactic account is more general than the content account. there are possible
beings, such as mrs. t, who are similar to us syntactically but not in intentional
contents. similarly, there are possible beings who are similar to us in
physico-chemical respects, but not syntactically. for example, creatures
could be like us in physico-chemical respects without having physico-chemical
parts that function as syntactic objects--just as mrs. t's syntactic objects
don't operate so as to confer content upon them. if neural network models
of the sort that anti-language of thought theorists favor could be bio-engineered,
they would fit this description. the bio-engineered models would be like
us and like mrs. t in physico-chemical respects, but unlike us and unlike
mrs. t in syntactic respects. further, the physico-chemical account will
be more fine-grained than the syntactic account, just as the syntactic account
is more fine-grained than the content account. syntactic generalizations
will fail under some physico-chemically specifiable circumstances, just
as content generalizations fail under some syntactically specifiable circumstances.
i mentioned that content generalizations might be compromised if the syntactic
realizations include too many syntactic negations. the present point is
that syntactic generalizations might fail when syntactic objects interact
on the basis of certain physico-chemical properties. to take a slightly
silly example, if a token of s and a token of s-->t are
both positively charged so that they repel each other, that could prevent
logic processors from putting them together to yield a token of t. in sum, if we could refute the content approach by showing that the
syntactic approach is more general and fine grained than the content approach,
then we could also refute the syntactic approach by exhibiting the same
deficiency in it relative to a still deeper theory. the reductionist cruncher
applies even within physics itself. for example, anyone who rejects the
explanations of thermodynamics in favor of the explanations of statistical
mechanics will be frustrated by the fact that the explanations of statistical
mechanics can themselves be "undermined" in just the same way
by quantum mechanics. the same points can be made in terms of the explanation of how a computer
works. compare two explanations of the behavior of the computer on my desk,
one in terms of the programming language, and the other in terms of what
is happening in the computer's circuits. the latter level is certainly more
general in that it applies not only to programmed computers, but also to
non-programmable computers that are electronically similar to mine, for
example, certain calculators. thus the greater generality of the circuit
level is like the greater generality of the syntactic perspective. further,
the circuit level is more fine grained in that it allows us to predict and
explain computer failures that have nothing to do with program glitches.
circuits will fail under certain circumstances (for example, overload, excessive
heat or humidity) that are not characterizable in the vocabulary of the
program level. thus the greater predictive and explanatory power of the
circuit level is like the greater power of the syntactic level to distinguish
cases of the same content represented in different syntactic forms that
make a difference in processing.however, the computer analogy reveals a flaw in the argument that the
"upper" level (the program level in this example) explanations
are defective and should be scrapped. the fact that a "lower"
level like the circuit level is superior in some respects does not show
that "higher" levels such as the program levels are not themselves
superior in other respects. thus the upper levels are not shown to be dispensable.
the program level has its own type of greater generality, namely
it applies to computers that use the same programming language, but are
built in different ways, even computers that don't have circuits at all
(but say work via gears and pulleys). indeed, there are many predictions
and explanations that are simple at the program level, but would be absurdly
complicated at the circuit level. further (and here is the reductionist
cruncher again), if the program level could be shown to be defective by
the circuit level, then the circuit level could itself be shown to be defective
by a deeper theory, for example, the quantum field theory of circuits. the point here is not that the program level is a convenient fiction.
on the contrary, the program level is just as real and explanatory
as the circuit level.perhaps it will be useful to see the matter in terms of an example from
putnam (1975). consider a rigid round peg 1 inch in diameter and a square
hole in a rigid board with a 1 inch diagonal. the peg won't fit through
the hole for reasons that are easy to understand via a little geometry.
(the side of the hole is 1 divided by the square root of 2, which is a number
substantially less than 1.) now if we went to the level of description of
this apparatus in terms of the molecular structure that makes up a specific
solid board, we could explain the rigidity of the materials, and we would
have a more fine-grained understanding, including the ability to predict
the incredible case where the alignment and motion of the molecules is such
as to allow the peg to actually go through the board. but the "upper"
level account in terms of rigidity and geometry nonetheless provides correct
explanations and predictions, and applies more generally to any rigid
peg and board, even one with quite a different sort of molecular constitution,
say one made of glass--a supercooled liquid--rather than a solid.it is tempting to say that the account in terms of rigidity and geometry
is only an approximation, the molecular account being the really correct
one. (see smolensky, 1988, for a dramatic case of yielding to this sort
of temptation.) but the cure for this temptation is the reductionist cruncher:
the reductionist will also have to say that an elementary particle account
shows the molecular account to be only an approximation. and the elementary
particle account itself will be undermined by a still deeper theory. the
point of a scientific account is to cut nature at its joints, and nature
has real joints at many different levels, each of which requires
its own form of idealization.further, what are counted as elementary particles today may be found
to be composed of still more elementary particles tomorrow, and so on, ad
infinitum. indeed, contemporary physics allows this possibility of an infinite
series of particles within particles. (see dehmelt, 1989.) if such an infinite
series obtains, the reductionist would be committed to saying that there
are no genuine explanations because for any explanation at any given level,
there is always a deeper explanation that is more general and more fine-grained
that undermines it. but the existence of genuine explanations surely does
not depend on this recondite issue in particle physics!i have been talking as if there is just one content level,microsoft office Standard 2007 product key, but actually
there are many. marr distinguished among three different levels: the computational
level, the level of representation and algorithm, and the level of implementation.
at the computational or formal level, the multiplier discussed earlier is
to be understood as a function from pairs of numbers to their products,
for example, from 7,9 to 63. the most abstract characterization at the
level of representation and algorithm is simply the algorithm of the multiplier,
namely: multiply n by m by adding m to zero n times. a less abstract characterization
at this middle level is the program described earlier, a sequence of operations
including subtracting 1 from the register that initially represents n until
it is reduced to zero, adding m to the answer register each time. (see figure
2.) each of these levels is a content level rather than a syntactic level.
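for concreteness, here is a sketch of my own (not figure 2 itself) of the same multiplier at two of these levels: the bare algorithm, and a more register-like rendering of the sort of program described earlier:

    # the algorithm level: multiply n by m by adding m to zero n times.
    def multiply_algorithm(n, m):
        total = 0
        for _ in range(n):
            total = total + m
        return total

    # a more concrete, register-style rendering of the same algorithm:
    # keep subtracting 1 from the register that initially represents n
    # until it reaches zero, adding m to the answer register each time.
    def multiply_program(n, m):
        register_n = n   # register initially representing n
        answer = 0       # answer register
        while register_n != 0:
            register_n = register_n - 1
            answer = answer + m
        return answer

    assert multiply_algorithm(7, 9) == multiply_program(7, 9) == 63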
there are many types of multipliers whose behavior can be explained (albeit
at a somewhat superficial level) simply by reference to the fact that they
are multipliers. the algorithm mentioned gives a deeper explanation, and
the program--one of many programs that can realize that algorithm--gives
still a deeper explanation. however, when we break the multiplier down into
parts such as the adder of figures 3a and 3b, we explain its internal operation
in terms of gates that operate on syntax, that is in terms of operations
on numerals. now it is crucially important to realize that the mere possibility
of a description of a system in a certain vocabulary does not by
itself demonstrate the existence of a genuine explanatory level. we are
concerned here with cutting nature at its joints, and talking as
if there is a joint does not make it so. the fact that it is good methodology
to look first for the function, then for the algorithm, then for the implementation,
does not by itself show that these inquiries are inquiries at different
levels, as opposed to different ways of approaching the same level. the
crucial issue is whether the different vocabularies correspond to genuinely
distinct laws and explanations, and in any given case, this question will
only be answerable empirically. however, we already have good empirical
evidence for the reality of the content levels just mentioned--as well as
the syntactic level. the evidence is to be found in this very book, where
we see genuine and distinct explanations at the level of function, algorithm
and syntax. a further point about explanatory levels is that it is legitimate to
use different and even incompatible idealizations at different levels.
(see putnam (1975).) it has been argued that since the brain is analog, the
digital computer must be incorrect as a model of the mind. but even digital
computers are analog at one level of description. for example, gates of
the sort described earlier in which 4 volts realizes `1' and 7 volts
realizes `0' are understood from the digital perspective as always
representing either `0' or `1'. but an examination at the
electronic level shows that values intermediate between 4 and 7 volts appear
momentarily when a register switches between them. we abstract from these
intermediate values for the purposes of one level of description, but not
another.

4. searle's chinese room argument

as we have seen, the idea that a certain type of symbol processing can
be what makes something an intentional system is fundamental to the
computer model of the mind. let us now turn to a flamboyant frontal attack
on this idea by john searle (1980, 1990b, churchland and churchland, 1990;
the basic idea of this argument stems from block, 1978). searle's strategy
is one of avoiding quibbles about specific programs by imagining that cognitive
science of the distant future can come up with the program of an actual
person who speaks and understands chinese, and that this program can be
implemented in a machine. unlike many critics of the computer model, searle
is willing to grant that perhaps this can be done so as to focus on his
claim that even if this can be done, the machine will not have intentional
states. the argument is based on a thought experiment. imagine yourself given
a job in which you work in a room (the chinese room). you understand only
english. slips of paper with chinese writing on them are put under the input
door, and your job is to write sensible chinese replies on other slips,
and push them out under the output door. how do you do it? you act as the
cpu (central processing unit) of a computer, following the computer program
mentioned above that describes the symbol processing in an actual chinese
speaker's head. the program is printed in english in a library in the room.
this is how you follow the program. suppose the latest input has certain
unintelligible (to you) chinese squiggles on it. there is a blackboard on
a wall of the room with a "state" number written on it; it says
`17'. (the cpu of a computer is really a device with a finite number of
states whose activity is determined solely by its current state and input,
and since you are acting as the cpu, your output will be determined by your
input and your "state". the `17' is on the blackboard to tell
you what your "state" is.) you take book 17 out of the library,
and look up these particular squiggles in it. book 17 tells you to look
at what is written on your scratch pad (the computer's internal memory),
and given both the input squiggles and the scratch pad marks, you are directed
to change what is on the scratch pad in a certain way, write certain other
squiggles on your output pad, push the paper under the output door, and
finally, change the number on the state board to `193'. as a result
of this activity, speakers of chinese find that the pieces of paper you
slip under the output door are sensible replies to the inputs.
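the bookkeeping you perform in the room is, in effect, a table-driven state machine. the sketch below is only a toy: the transition table, squiggle names, and states are invented here for illustration, whereas the thought experiment assumes the enormously larger program of an actual chinese speaker.

    # a minimal sketch of the room's bookkeeping as a table-driven machine.
    # key:   (state, input squiggles, scratch-pad contents)
    # value: (new scratch-pad contents, output squiggles, new state)
    PROGRAM = {
        (17, "squiggle-A", ""): ("mark-1", "squoggle-B", 193),
        (193, "squiggle-C", "mark-1"): ("", "squoggle-D", 17),
    }

    def step(state, input_slip, scratch_pad):
        # the "cpu" consults the book for this state, looks up the squiggles,
        # rewrites the pad, produces an output slip, and updates the blackboard
        scratch_pad, output_slip, state = PROGRAM[(state, input_slip, scratch_pad)]
        return state, scratch_pad, output_slip

    state, pad = 17, ""
    state, pad, reply = step(state, "squiggle-A", pad)   # reply goes under the output door
    # nothing in this loop represents what the squiggles mean; only their shapes are consulted.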
but you know nothing of what is being said in chinese; you are just following
instructions (in english) to look in certain books and write certain marks.
according to searle, since you don't understand any chinese, the system
of which you are the cpu is really a mere chinese simulator, not a real chinese
understander. of course, searle (rightly) rejects the turing test for understanding
chinese. his argument, then, is that since the program of a real chinese
understander is not sufficient for understanding chinese, no symbol-manipulation
theory of chinese understanding (or any other intentional state) is correct
about what makes something a chinese understander. thus the conclusion
of searle's argument is that the fundamental idea of thought as symbol processing
is wrong even if it allows us to build a machine that can duplicate the
symbol processing of a person and thereby duplicate a person's behavior.

the best criticisms of the chinese room argument have focused on what
searle--anticipating the challenge--calls the systems reply. (see the responses
following searle (1980), and the comment on searle in hofstadter and dennett
(1981).) the systems reply has a positive and a negative component. the
negative component is that we cannot reason from "bill has never sold
uranium to north korea" to "bill's company has never sold uranium
to north korea". similarly, we cannot reason from "bill does not
understand chinese" to "the system of which bill can be described as part does
not understand chinese. (see copeland, 1993b.) there can be a gap in searle's
argument. the positive component goes further, saying the whole system--man
+ program + board + paper + input and output doors--does understand chinese,
even though the man who is acting as the cpu does not. if you open up your
own computer, looking for the cpu, you will find that it is just one of
the many chips and other components on the main circuit-board. the systems
reply reminds us that the cpus of the thinking computers we hope to have
someday will not themselves think--rather, they will be parts
of thinking systems.

searle's clever reply is to imagine the paraphernalia of the "system"
internalized as follows. first, instead of having you consult a library,
we are to imagine you memorizing the whole library. second, instead
of writing notes on scratch pads, you are to memorize what you would have
written on the pads, and you are to memorize what the state blackboard would
say. finally, instead of looking at notes put under one door and passing
notes under another door, you just use your own body to listen to
chinese utterances and produce replies. (this version of the chinese room
has the additional advantage of generalizability so as to involve the complete
behavior of a chinese-speaking system instead of just a chinese note exchanger.)
but as searle would emphasize, when you seem to chinese speakers to be conducting
a learned discourse with them in chinese, all you are aware of doing is
thinking about what noises the program tells you to make next, given the
noises you hear and what you've written on your mental scratch pad.

i argued above that the cpu is just one of many components. if the whole
system understands chinese, that should not lead us to expect the cpu to
understand chinese. the effect of searle's internalization move--the "new"
chinese room--is to attempt to destroy the analogy between looking inside
the computer and looking inside the chinese room. if one looks inside the
computer, one sees many chips in addition to the cpu. but if one looks inside
the "new" chinese room, all one sees is you, since you
have memorized the library and internalized the functions of the scratchpad
and the blackboard. but the point to keep in mind is that although the non-cpu
components are no longer easy to see, they are not gone. rather, they are
internalized. if the program requires the contents of one register to be
placed in another register, and if you would have done this in the original
chinese room by copying from one piece of scratch paper to another, in the
new chinese room you must copy from one of your mental analogs of a piece
of scratch paper to another. you are implementing the system by doing what
the cpu would do and you are simultaneously simulating the non-cpu components.
so if the positive side of the systems reply is correct, the total system
that you are implementing does understand chinese.
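the point just made, that the non-cpu components are internalized rather than eliminated, can be pictured with a small sketch; the component names below are invented for illustration and are not part of searle's or the systems reply's own vocabulary.

    # the same program step -- "copy register r1 into r2" -- carried out either on
    # the original room's scratch papers or on their memorized, "mental" analogs.
    original_room = {"scratch_paper_r1": "squiggle-A", "scratch_paper_r2": ""}
    new_room      = {"memorized_r1": "squiggle-A", "memorized_r2": ""}   # internalized analogs

    def copy_register(components, src, dst):
        # the non-cpu components are still there; they are just realized differently
        components[dst] = components[src]
        return components

    copy_register(original_room, "scratch_paper_r1", "scratch_paper_r2")
    copy_register(new_room, "memorized_r1", "memorized_r2")
    # either way, the system as a whole carries out the same program step.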
"but how can it be," searle would object, "that you implement
a system that understands chinese even though you don't understand
chinese?" the systems reply rejoinder is that you implement a chinese
understanding system without yourself understanding chinese or necessarily
even being aware of what you are doing under that description. the systems
reply sees the chinese room (new and old) as an english system implementing
a chinese system. what you are aware of are the thoughts in the english
system, for example your following instructions and consulting your internal
library. but in virtue of doing this herculean task, you are also implementing
a real intelligent chinese-speaking system, and so your body houses two
genuinely distinct intelligent systems. the chinese system also thinks,
but though you implement this thought, you are not aware of it.

the systems reply can be backed up with an addition to the thought experiment
that highlights the division of labor. imagine that you take on the chinese
simulating as a 9-5 job. you come in monday morning after a weekend of relaxation,
and you are paid to follow the program until 5 pm. when you are working,
you concentrate hard at working, and so instead of trying to figure out
the meaning of what is said to you, you focus your energies on working out
what the program tells you to do in response to each input. as a result,
during working hours, you respond to everything just as the program dictates,
except for occasional glances at your watch. (the glances at your watch
fall under the same category as the noises and heat given off by computers:
aspects of their behavior that are not part of the machine description but
are due rather to features of the implementation.) if someone speaks to
you in english, you say what the program (which, you recall, describes a
real chinese speaker) dictates. so if during working hours someone speaks
to you in english, you respond with a request in chinese to speak chinese,
or even an inexpertly pronounced "no speak english," that was
once memorized from the chinese speaker being simulated, and which you, the
english-speaking system, may even fail to recognize as english. then, come
5 pm, you stop working, and react to chinese talk the way any monolingual
english speaker would.

why is it that the english system implements the chinese system rather
than, say, the other way around? because you (the english system whom i
am now addressing) are following the instructions of a program in english
to make chinese noises and not the other way around. if you decide to quit
your job to become a magician, the chinese system disappears. however, if
the chinese system decides to become a magician, he will make plans that
he would express in chinese, but then when 5 p.m. rolls around, you quit
for the day, and the chinese system's plans are on the shelf until you come
back to work. and of course you have no commitment to doing whatever
the program dictates. if the program dictates that you make a series of
movements that leads you to a flight to china, you can drop out of the simulating
mode, saying "i quit!" the chinese speaker's existence and the
fulfillment of his plans depends on your work schedule and your plans, not
the other way around.

thus, you and the chinese system cohabit one body. in effect, searle
uses the fact that you are not aware of the chinese system's thoughts as
an argument that it has no thoughts. but this is an invalid argument. real
cases of multiple personalities are often cases in which one personality
is unaware of the others.

it is instructive to compare searle's thought experiment with the string-searching
aunt bubbles machine described at the outset of this paper. this machine
was used against a behaviorist proposal of a behavioral concept of
intelligence. but the symbol manipulation view of the mind is not a proposal
about our everyday concept. to the extent that we think of the english system
as implementing a chinese system, that will be because we find the symbol-manipulation
theory of the mind plausible as an empirical theory.

there is one aspect of searle's case with which i am sympathetic. i have
my doubts as to whether there is anything it is like to be the chinese system,
that is, whether the chinese system can be a phenomenally conscious system.
my doubts arise from the idea that perhaps consciousness is more a matter
of implementation of symbol processing than of symbol processing itself.
though surprisingly searle does not mention this idea in connection with
the chinese room, it can be seen as the argumentative heart of his position.
searle has argued independently of the chinese room (searle, 1992, ch 7)
that intentionality requires consciousness. (see the replies to searle in
behavioral and brain sciences 13, 1990.) but this doctrine, if correct,
can shore up the chinese room argument. for if the chinese system is not
conscious, then, according to searle's doctrine, it is not an intentional
system either.

even if i am right about the failure of searle's argument, it does succeed
in sharpening our understanding of the nature of intentionality and its
relation to computation and representation.7