Office hours for CASA: UCF301, Mon-Fri, 12-6
Cogsci class representative: Jaylen Stark (on Facebook)
No textbook; recommended readings only, and they will not be on the exam or any tests.
September 22 is the first tutorial.
TA office hours:
Test 1: 10 short answers (choose 6)
* Explain it
* Connect it to other terms
* Explain how it's important

Test 2: 9 short answers (choose 4) and 3 long questions (choose 1)
Connect the term to related terms, explain how it's connected, and give its definition. Critically engage it.
Papers:
* Need a thesis
* Back it up with evidence
* Cannot be trivial; it must have an argument.

HARD COPIES ARE REQUIRED.

September 10th, 2018
The mind: What do we mean?
"Brain is hardware, mind is software." Is this true?
"If we can make something, we probably know how it works." This is how AI works: if we can make a mind, we can understand roughly how it works.
The Flash Crash: what happened?
Understanding behaviour: different levels of analysis.
* Intelligence
* Language: communication.
* Literacy: not predisposed.

* Allows the networking of brains.
* Notes allow you to network with yourself!

Cognitive Science: How can we describe it? In terms of a vision?
The interdisciplinary study of the mind; three visions for it.

Generic Nominalism: a very large umbrella. Look at multiple disciplines; if your work is related to the mind, then you are a cognitive scientist.
Metaphor: multiple groups are at the same party, but not meeting new people.

Interdisciplinary Eclecticism: like an inter-faith conference, putting two different religions in the same room and seeing what happens. Rather than a melding, it would be an insightful transfer of information.
Metaphor: groups talk to each other, but they leave with the same people they came with and don't talk to each other outside of that.

Synoptic Integration: eventually, the disciplines start to integrate into something together. A deliberate "language" allows the transfer of information between disciplines, such that they can become more integrated.
Metaphor: they change each other, and make a new group with a new identity.

Synoptic Integration is the best representation of the multiple layers of Cognitive Science. As we integrate, we create insight and power.
Science is not just about knowledge production; it's also about how to intervene more effectively on the world.
Example: F = ma. It is so simple, yet it has so many applications; it has general application.
Metaphor is a way to transfer information. Example: "He is a pig." Pigs are dirty, sloppy, etc. Instead of taking the identity literally, we learn more about the person through the metaphor.
Multi-apt: lets you branch out and touch many different things. Elegant: one subject branching out to apply to lots of different things.
Convergence: many things converging on one thing. Makes things seem real, like 3D movies.
Convergence also relates to trustworthiness: if multiple people tell you they saw something strange, you're more likely to believe it.
The spider of plausibility: many lines of evidence converge on one thing, which then branches out to explain many things. Example: the theory of natural selection. There's lots of evidence for it, and it can explain a lot.
Plausible does NOT mean true.
Lots of converging evidence but only one application: trivial. Many things result in one narrow conclusion.
One thin stream of input but lots of output: far-fetched. Example: a CONSPIRACY theory. One thing explains a lot, but rests on a thin stream of evidence.
The fundamental question of cognitive science: What is consciousness? How do we experience consciousness?
Three scientific revolutions: next class.
In an essay, the conclusion covers why we should care: the implications or applications.

September 17th, 2018
Humans don't understand humankind well; a cognitional hole in our world.
The Naturalistic Imperative: to naturalize a phenomenon is to explain it. To analyze the phenomenon, to formalize the phenomenon, and to mechanize the phenomenon.
Analyze: Thales (a person). Taking a phenomenon and breaking it down into understandable parts.
Three fragments:
1. All is the moist.
2. The lodestone has psyche.
3. Everything is filled with gods.

All is the moist: everything is made of water. It's wrong, but it's not dumb. Often, the facts of biology flip on their head; the methods do not. Thales tries to explain the world in terms of what it's made of: water is a fundamental substance.
The lodestone has psyche: a lodestone is a naturally occurring magnet. Thales observed that magnets are weird: they move on their own. Bring a magnet close to metal, and the metal moves. What else moves on its own? Animals; people. In some sense, the lodestone shares a common feature with animals: they move on their own.
Everything is filled with gods: everything is interesting and worth looking at. It is complex, worth investigating and worshipping.
Common sense explains unfamiliar things in terms of familiar things.
Science explains familiar things in terms of unfamiliar things.
The assumption that things are simple is just wrong.
F = ma is wrong, but it isn't stupid. Just because something has been proven wrong doesn't mean it was dumb thinking.
Formalize: Descartes. "I think, therefore I am." Figuring out a way to translate something qualitative into something formalized, like an equation.
Cartesian graphing: assigning discrete numbers to space. He took a phenomenon of the world and ended up with numbers. Ending up with numbers gives you a high amount of convergence; often, the numbers don't lie.
How do we talk to other disciplines?
Homuncular fallacy: explaining a phenomenon by using the same phenomenon. A philosophical problem.
Mechanize: once you have the logic and the math, you can make a computer/program that does it for you.
Alan Turing: if you can make a machine that does something reliably, it brings you closer to an explanation.
The three scientific revolutions have been scary, but good. Our ability to annihilate ourselves has come straight out of formalization.
Starting the 4th revolution soon! With mechanization, we are closing in on higher automation and higher unemployment.
The final barrier of humanity is the mind.
Often, scientific explanations defy our intuition. Example: planes coming back shot up; investigators mapped where they were hit. The right thing to do is to put armour where the returning planes were NOT shot, because the planes hit there didn't come back.
Categorization: making categories, in a psychological sense, is grouping things that seem to belong together.
* Makes it easier to navigate the world, reducing cognitive load.
* Allows us to "code our experience": when we encounter things, they're not all 100% separate; categories let us infer things.
* Can improve communication: saying "fire" is much easier than describing it.

Example: chalk. When you see chalk, you can expect it to behave in a certain manner.
How do you teach a machine that things are obvious?
What does similarity mean? Partial identity. What is identity? Sharing properties. The more properties two things share, the more similar they are.
Any two objects are similar and dissimilar in an indefinite number of ways. Solution? Pay attention to the relevant similarities. However, this begs the question: how do we define what is relevant? Many theories presume relevance; they do not explain it.
Machines should not only be able to do what humans do, but make the mistakes that humans make.
The mind can create categories on the fly (ad-hoc categories) and then define them; like "soup and dinosaurs."
Barsalou created the concept of the ad-hoc category. Resemblance theory: Lance Rips and Smith.
If thing A is close to thing B, thing B is close to thing A: a reciprocal relationship. However, many people will not judge that way. Taiwan is like China, but China is not like Taiwan.
Similarity judgement is like Spiderman's spidey sense: what defines "dangerous"?
Just because someone has put something into an equation does not make it a formalization.

Scholarly Sources
Split the essay functionally.
10 sources from the last 10-15 years, minimum. A "good" essay will cite 20, and have read 30-40.
Use Google Scholar; if you find something you like, then use OneSearch.
Source = scholarly article/book. APA format.
In tutorials: how to make, defend, and counter arguments.
Interesting documentary: Three Identical Strangers.
Fourth scientific revolution: once we can figure out what goes on in our mind, we can fit ourselves back into the world.
Categorization allows you to:
* Describe things collectively.
* Project predictions: chairs have common features just like chalk does, but each can differ slightly.

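The "partial identity" idea from earlier (the more properties two things share, the more similar they are) can be sketched as a feature-overlap score. This is a minimal illustration with made-up feature sets, not a model from the course:

```python
def feature_overlap(a: set, b: set) -> float:
    """Jaccard similarity: shared features divided by all features."""
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical feature lists -- which features we choose is itself the
# unanswered "relevance" question the notes keep raising.
chalk = {"white", "solid", "writes", "brittle"}
chair = {"solid", "has-legs", "sit-on"}
stool = {"solid", "has-legs", "sit-on", "no-back"}

print(feature_overlap(chair, stool))  # high: 3 shared of 4 total -> 0.75
print(feature_overlap(chalk, chair))  # low: 1 shared of 6 total
```

Note that the score depends entirely on which features you happened to list: any two objects are similar and dissimilar in indefinitely many ways, so the sketch presumes relevance rather than explaining it.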
Categories are things that seem to belong together.
Any two things are similar in some ways and dissimilar in others, in an indefinite number of ways. We must "intelligently ignore" a lot of things to attend only to the relevant similarities.
Resemblance theory: Lance Rips criticizes it.
1. If similarity judgement drives categorization, we have a hard time explaining cases where categorization affects similarity judgement. Example: waves. Sound waves, ocean waves, etc. Saying they are similar is true, but they are not perceptually similar.
2. Ad-hoc categories: categories created on the fly. We are NOT "born" with categories; a lot of the time, they are made up on the fly. So how are we making them?

Putting things into a formula does not mean it is formalized. Example: Tversky's formula: the similarity of I and J is a function of the features common to I and J, minus what I has that J doesn't, minus what J has that I doesn't.
Salience function: the function repeats itself. There are constants (weights) in front of all of these terms. How do we decide the weighting? It begs the question.
Salience can be defined with a contrast class: things pop out against a background. What is flammable? Wood is the one that jumps out. What else is flammable? Humans. Paper. We make things out of a variety of substances, but wood burns easily; that's why wood jumps out more. How things stand out is contextual. For example, a knife used to cut a cake could instantly turn into a defensive weapon against an intruder.
Intercategorical: asbestos can be spun into clothing, but is a rock.
Rips's criticism: both reason-based and similarity-based categorization exist.
Experimental design is meant to distinguish causation from correlation. For example: there is a noticeable increase in global warming, and as greenhouse gases have gone up, Caribbean piracy has gone down. Will funding more Caribbean piracy lower global warming? Absolutely not. There's no causation; it is a spurious correlation. Experimental design is designed to separate correlation from causation.
Example: if I change x and y does not change, that is a mark against causation. If I change y and x does not change, we can probably infer that they're not dependent on each other.
Double dissociation: you can change people's assessment of how similar things are, and it doesn't change whether or not the things are part of a category.
Smith responds: there are features, and there are knowledge/beliefs; you can preserve the features. The issues: how do you experience features? Don't you have knowledge and beliefs about those features?
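The Tversky formula described above can be written out directly: sim(I, J) = θ·f(I ∩ J) − α·f(I − J) − β·f(J − I). A minimal sketch, with hypothetical feature sets, and with the weights θ, α, β left as free parameters (deciding their values is exactly the question-begging the notes point out):

```python
def tversky_similarity(i: set, j: set, theta=1.0, alpha=0.5, beta=0.5) -> float:
    """Contrast model: weight the common features, subtract the distinctive
    features of each side. f is taken here as simple set size."""
    return theta * len(i & j) - alpha * len(i - j) - beta * len(j - i)

# Invented feature sets for illustration only.
taiwan = {"asian", "island", "chinese-speaking"}
china = {"asian", "chinese-speaking", "large", "un-member", "nuclear-power"}

# With alpha != beta the score is no longer symmetric -- the
# Taiwan/China asymmetry from the notes:
print(tversky_similarity(taiwan, china, alpha=0.8, beta=0.2))  # positive
print(tversky_similarity(china, taiwan, alpha=0.8, beta=0.2))  # negative
```

The asymmetry falls out of the unequal weights, which is why the model can describe the "Taiwan is like China, but not vice versa" judgement without explaining where the weights come from.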
How are they distinct?
Common sense doesn't work well:
We confuse cause and effect.
We confuse correlation and causation.
We confuse describing something formally with formalization.
Maybe we're using concepts to categorize?
Aristotle:
* Often, his observations of the world are quite baseless.
* A universal genius; he defined the "classical theory of concepts."

* Hinges on the idea that "a concept is a mental definition." Not like a dictionary definition, but a list of features forming the set of necessary and sufficient conditions. Example: fire. Oxygen is a necessary condition, but it is not sufficient: it is required, yet not enough by itself. A flamethrower is a sufficient condition for a fire (with it, you can make a fire), but it is not necessary (you don't need a flamethrower to make a fire).
* The essence of a category is the complete set of necessary and sufficient conditions.
* A lot of things don't have an essence.
* The meaning of a concept can be captured by:

* A conjunctive list of features. "Conjunctive": joined by a word like "and." A triangle is ___ and ___ and ___. A triangle is 3 sides and has ___.
* The list must bottom out into iconic features. When you break a concept down, it has to break down into nothing further; it has to be built from fundamental blocks. Like Lego!
* Each feature is individually necessary and all are jointly sufficient. If you hit all the features, you are a member of the category. If you don't, you're not.
* What is and isn't a member is clearly defined.
* All members of the category are equally representative: all equally good examples.
* When you organize concepts into a taxonomic hierarchy, the subordinate definitions include the definition of the superordinate category. Superordinate: Animal. Subordinate to Animal: Mammal. Subordinate to Mammal: Feline. Repeat. If an animal is alive, then everything below it is alive. It's top-down, and it answers the question "What is a relevant feature?": the necessary features. As with the triangle, 3 sides is necessary and therefore relevant.

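The classical theory's membership rule above (each feature individually necessary, all jointly sufficient) can be sketched as a strict checklist. The triangle definition here is filled in purely for illustration:

```python
def is_member(candidate_features: set, definition: set) -> bool:
    """Classical theory: a member iff ALL defining features are present."""
    return definition <= candidate_features  # subset test

# Assumed illustrative definition, not from the course.
TRIANGLE = {"closed-figure", "three-sides", "straight-sides"}

print(is_member({"closed-figure", "three-sides", "straight-sides", "red"}, TRIANGLE))  # True
print(is_member({"closed-figure", "three-sides"}, TRIANGLE))  # False: one necessary feature missing
```

Notice the rule is all-or-nothing: boundaries are sharp and every member is an equally good example, which are exactly the predictions that the game/family-resemblance and prototype discussions below call into question.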
We can understand primitive concepts, and then use those concepts to understand complex sentences that can be entirely unique.
What defines a game? Fun? What defines fun? There are lots of things that are fun that aren't games, and there are non-fun games. Games have family resemblance. Members of a family resemble each other, but are not identical; there's a certain amount of resemblance from an overlapping set of traits.
Eleanor Rosch: Prototype Theory.
* You have a prototype of something in your mind, and whether or not something is in a category is based on that prototype. Things are compared to the prototype.
* There is no set of necessary and sufficient conditions for membership in a category.
* Category boundaries are unclear.
* Instances are judged by typicality.
* Membership in a category is determined by similarity to the prototype.

This explains why:
* People can't define things very well. Example: arguments on abortion; one group argues about the soul, the other argues about tissue.
* Members aren't equal; they share a different number and mix of features compared to the prototype.

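Prototype-based categorization as described above can be sketched as graded similarity to a stored prototype with a membership threshold. The features, typicality weights, and threshold are all invented for illustration; how they would actually be set is precisely the open problem the notes raise:

```python
# Hypothetical prototype for "bird": feature -> typicality weight.
BIRD_PROTOTYPE = {"flies": 0.9, "feathers": 1.0, "sings": 0.6, "small": 0.5}

def prototype_score(features: set, prototype: dict) -> float:
    """Sum the weights of the prototype features the instance has."""
    return sum(w for f, w in prototype.items() if f in features)

def in_category(features: set, prototype: dict, threshold: float = 1.5) -> bool:
    """Member iff the weighted similarity crosses the threshold."""
    return prototype_score(features, prototype) >= threshold

robin = {"flies", "feathers", "sings", "small"}
penguin = {"feathers", "swims"}

print(prototype_score(robin, BIRD_PROTOTYPE))    # ~3.0: a typical bird
print(prototype_score(penguin, BIRD_PROTOTYPE))  # ~1.0: atypical, below threshold
```

Unlike the classical checklist, this produces a typicality gradient (robins score higher than penguins), but membership now hangs on an arbitrary threshold and arbitrary weights, which is the homuncular worry.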
Problems with prototype theory:
How do we generate prototypes? How do we determine what's similar and what isn't? Homuncular.
There is no limit in prototype theory.
Similarity: if it isn't running on necessary and sufficient conditions, what is it running on? Prototypes? It begs the question.
Typicality gradient: what counts as "typical" changes based on context. On a break? A worker on a break? A professor in a lecture? The number of prototypes becomes indefinitely large. If there are so many different prototypes, what defines the "typical"?
Typicality gradients are also quite unstable. An odd number? Some numbers feel more odd than others, like how 15 seems more odd than 5.
Be skeptical of feature lists.
Prototypes are lists of features, weighted by their typicality. Multiply each feature by its typicality rating, then add it up; if the total crosses a threshold, the thing is a member of the category.
We put theories on a background and compare them to each other.
Sidenote: we may be supplanted by super-intelligent AI. Unlikely? Maybe. Even minor AI that invalidates a human job should make us rethink, e.g. lawyers: lawyers usually don't sit in courtrooms, they just read. Most people already rely on technology heavily. It is unlikely that we will be overrun by robots, because we will understand ourselves better.

Monday, October 1
Categorization is based on similarity judgements. But there are problems with it!
Double dissociation: Lance Rips's critique of the original argument.
Smith says similarity drives categorization. However, things can be similar and not in the same category; things can also be in the same category but not similar.
Gestalt (German): the structural and functional organization of something that ensures its efficacy. Getting a bunch of bird parts and throwing them into the air doesn't make a bird; the gestalt is the way the parts stay together. You can't just list off the parts of something.
You can disrupt the gestalt. For example, a dead rabbit: when it's whole in front of you, it's a dead rabbit; after cutting it into multiple pieces, you just see food, not a rabbit, anymore.
The theory theory (or: the microtheory of concepts): we're not dealing with definitions; categories are more like our theories. What do we mean? We have theories to make sense of data. Example: grades are low. Explanations: people are dumb that year, the professor was bad, etc. They probably won't say something like "bad lightbulbs." There can be an unlimited number of theories. Where your theory applies depends on where you test it.
Mixing two concepts can make an entirely different concept. Example: "expert repair" and "engine repair." Adding "expert" adds expertise to the repair: the repair has been made expert. But with "engine repair," it's not that the repair is engine-like; it's that the engine is being repaired. "Pet" and "fish" are two different things, but a "pet fish" is different again. Like a bird: the "birdiness of a bird" is hard to describe, so we fall back on a list.
Treat a concept like a theory (Gestalt psychology). A theory tries to explain and predict things; theories are meant to be predictive. A microtheory is a predictive concept.
Now, why is the theory theory wrong? Because "theory" is homuncular. Theories do not present themselves; they are things we generate to explain data. When you look at which theories we choose, how do we select them?
Intercategorical: in-between categories. Would you rather be hit by a bus or eaten by an animal? Most would choose the bus; humans are not considered to be food, and being eaten violates that.
Categorization is not a good candidate for the basis of cognition.
Certainty is tricky. Example: the switch from "the Earth is at the centre" to "the Sun is at the centre." People used to think the former, until the math changed that entirely.
There are certain things that open your eyes.

Tutorial, October 5
Types of Logic
Recognize structure and content!
Logic: reliable patterns in good arguments.
Deductive logic: truth-preserving; any truth introduced stays true the whole way through. Example: If it is raining, the ground is wet. It is raining; therefore the ground is wet.
Inductive logic: pattern recognition; truth-generating. It's how you produce a new truth, but it is not necessarily truth-preserving.
Abductive reasoning: inference to the best explanation. When you have a candidate explanation, try to disprove/wreck it. If it stays up, then you might have something to work with.
Microtheory of concepts, part 2!
The theory theory: why? Because most of the theories of concepts we come across are homuncular. The problem the theory theory was designed to solve is that mixing two concepts makes a whole new thing. To solve this, microtheory treats concepts as predictions about the world; it lets you adjust your model of the world on the fly. Why is it wrong? Because it's homuncular.
Don't be right, be convincing.
Occam's Razor: avoid the unnecessary multiplication of entities.

October 15th
Memory. Looking for a more basic concept to explain things: a bottom-up explanation.
Information storage and retrieval. Remember: if the theory we're proposing contradicts the experimental evidence, it's wrong; we need to come up with a different one.
Different forms of memory:
Sensory memory: a quick flash. Not what we're focused on.
Short-term memory: like holding a phone number in mind.
Working memory: the workspace in which we manipulate ideas. If long-term memory is like a fridge/cupboard, working memory is like the counter-top.
Long-term memory: can last until we die? It can persist for a long time, but it does degrade a little. As you get older, it tends to compress a little bit; it will mush together.
If someone has a memory issue in old age, they may remember earlier things more prominently. Memory has a strange degradation curve: a very shallow slope with age, then a sudden drop-off.
Common sense about long-term memory: the spatial metaphor of memory. What is memory? Memory is like a library: a big space, full of shelves, where stable objects are kept in stable locations. Retrieving a memory is like searching through a space to find it. Supporting evidence: a common ancient method of training memory, which relies on this model, called the Method of Loci. Imagine a space you know well. Imagine a path through that space. Imagine objects at regular intervals along the path, where the objects symbolize the things you want to memorize. The more exotic the object, the better you remember it. By treating memory in this spatial way, we can memorize things better.
Language of training vs. language of explanation: how the training says it works is not necessarily how it actually works. Example of training: the instructions for riding a bike. Language of explanation: what actually happens when you implement it.
Example: What rhymes with blue? Shoe. What's like blue? Cyan. But "cyan" and "shoe" don't call each other to mind: the context in which you're asked to remember something changes how you remember it.
Experiment: show somebody a figure and give a priming word. The word affects the way they reproduce the figure. Memory is not reproductive; it is reconstructive. The features that are pulled out are based on meaning. You don't remember things accurately; you reconstruct them on the fly. Evolution helps you make adaptive reconstructions: it doesn't make your memory accurate, it helps you be intelligent. A contextual structure.
Stroop effect: meditators can reliably "defy" the effect by selectively controlling their attention. Other defiers: hypnotized subjects, who are told they can't read and so only see the colour.
Car crash with headlight vs. without: reproduced with hypnotized people. Hypnotized people are more suggestible; you can create false memories.
a) Is TABLE in capital letters? Yes, and you can answer whether or not you understood the word.
b) Does "market" rhyme with "weight"? No; you need to think about the sound of it.
c) Would "friend" fit in the sentence "He met a ___ on the street"? Yes, but you'd need a conception of what a friend is.
Shifting context changes the way someone reconstructs something. Example: in a house, asking questions from the perspective of a home owner versus a burglar will produce/recall different things.
Encoding specificity: the context in which we learn something changes the way we remember/retrieve it.
Homuncular vs. infinite regress?

Tutorial, Friday October 19th
Mark breakdown:
* 2 marks for identifying the concept and why we care.
* 4 marks for relevance: what is it doing?
* 2 marks for criticism.
* 2 marks for novel connections. Do something with the info that isn't taught; show that you understand it. Deconstruct a theory another way? Surprise the grader. Don't attach multiple novel connections to a single concept; things are only new once.

Explanation vs. description: tell them why. Use an example from the course and show its significance to the course. Why is it in the story, and why is it in the course?
Theories are descriptions of the world.
We can only track behaviour by predicting behaviour.
What is a concept? Concept = theory and theory = prediction, therefore concept = prediction. What is a prediction? Example: candy. Our ability to predict candy is determined by past things we've seen; therefore we have a category for it. What does microtheory try to do? Create categories from theories.
Concepts under prototype theory don't combine well: if you try to combine "pet" and "fish," the two don't add up.
Microtheory fails because it reduces everything down too far.
Hair on our head: ok. Hair on the table: not ok. Violates purity.
Bayesian brain theory?
Why resemblance theory? An area of study for cognitive science.
Nelson Goodman: things can be infinitely similar or dissimilar.
Smith tries to formalize the idea of being similar and dissimilar. How do we come up with a theory that finds the relevant similarities?
Rips's first criticism: there are instances in which similarity is not reciprocal. For example, Taiwan is similar to China, but China is not really that similar to Taiwan. Certain members of a category have more pull than others; therefore similarity is not as reciprocal as you think.
Smith responds: Tversky's formula. The main takeaway is that in determining how similar two things are, for all the features you must consider weight (is this more important than another?) and salience (how much does this stand out?).
Rips: double dissociation. If you have X and Y, and you manipulate X and nothing happens to Y, they're not causally related but merely correlated. Correlation isn't really that helpful.
However, the point isn't the dissociation between two things that could be related (like China and Taiwan). If we can move something out of a category while people still judge the things similar, and we can get people to see two things as similar without putting them in the same category, then similarity does not create categorization. Cats and dogs: both are mammals. They're similar (fur, eyes, ears, four legs, tail, etc.), yet they're in separate categories. Are cats and dogs dissimilar? Yes! Cats are moody, scratchy, prissy, etc.; dogs are obedient, go for walks, lick you, etc. We can list dissimilarities, but they're both mammals. We categorize things based on more than just similarity.
Smith: there's a difference between features (things in the world) and knowledge/beliefs (things in your mind, which can be manipulated). You can impose beliefs on those features and change the similarity judgement. Why? Because it's reliant on judgement. Categorization is top-down, but Smith can't explain how it works; if similarity drove category judgement, it would be a bottom-up system. Smith failed! It's homuncular: How does the brain work? The brain categorizes things. How do we categorize things? By using judgement. How does judgement work? By categorization. Circular.
Difference between the homuncular fallacy and infinite regress: the homuncular fallacy is a circular argument that can lead to infinite regress. An argument is circular the moment you ask the same question twice. Infinite regress is when you go deeper and deeper, asking smaller and smaller questions to which there is no end.
Central executive: takes signals and turns them into sound. This doesn't explain how the conversion happens; it just says "this is what it does." How does the central executive do it? We've turned the question from "how do WE do it?" into "how does the CENTRAL EXECUTIVE do it?" The answer is still inside the first problem.

October 29, 2018
Memory is spatial: supported by the method of loci.
Things against the spatial metaphor:
* You can recall things like "Have I been to Bangkok?" and instantly think no.
* Your brain isn't like a video camera.

Memory is about processing information; the depth of processing determines how well you encode it.
The context in which you learn something is important!
What constitutes levels/depth of processing?
Transfer-appropriate processing: when your brain remembers something, it wants to encode it so that you can deploy/apply it to as many situations as possible.
Working memory: we have all this brain power, yet we forget a phone number quickly.
Memory needs relevance.

Essay: APA, 2k words, double spaced. MUST HAVE A TITLE. Needs an abstract. Anecdotes and personal pronouns are ok, but have evidence. Should have a psychology focus. Add implications and applications. Don't write about working memory. Do something that interests me. 10 REPUTABLE scholarly sources, 10 or fewer years old.
GPS: General Problem Solver.

TUTORIAL
If they can manage to find a sentence that can be creatively misinterpreted, they will.
Thesis development.
Always do background research. It's okay to read more; 20 or so is more typical. Cover your bases: if something came up in psychology, check philosophy/neuroscience/etc. It's more of a philosophical paper: given all this data, what can we think from it?
Unpaywall. SciHub.
APA! Get the publication manual at the bookstore.
Abstract and sources are not included in the word count.
Write a vague abstract first, then flesh it out after.
Thesis: what you're trying to convince them of. Classical-rhetorical format. Don't arrange TOPICALLY, arrange FUNCTIONALLY. 1: provide arguments. 2: counter-arguments (predict what critics would say and refute it in advance).
Logos, Ethos, Pathos: we ONLY get Logos. ONLY evidence. They don't care where the idea is from so long as it's well supported.
Challenging thesis: it should be reasonable for someone to say "that's ridiculous." Don't say something trivial; expect to do convincing. Run it by a TA, a member of CASA, and someone who has nothing to do with cogsci.
Argumentative thesis: compare two sides, and state your position.
Don't phrase things as opinion; not "I argue," not "I believe."
Be able to defend from philosophy, neuroscience, psychology, etc. IF NOT, SPECIFY THE SCOPE YOU'RE ARGUING AT. It's okay to say "I'm not getting into this because it's too complicated," and then do so. Have you covered your bases? Don't say "prove"; you strongly suggest something is true or false. You might want to focus a bit more on philosophical holes. Also, make your thesis as specific as possible. If someone can say "Yes, and? What's your point?", don't do that.
Problem solving: grief? Not an emotional paper, but thinking "how do we deal with ___?" from a problem-solving standpoint.
The replication crisis: has good examples of Ethos and Pathos.
Write within the box!
Jerry Fodor on cognitive science, for good examples of cogsci.

Monday, November 12
General Problem Solver (GPS): most machines are specialized to do one thing; a GPS aims to solve many kinds of problems. At that time, AI was just starting.
Solving a problem: break it into four parts!
* Initial state: where you're at.
* Goal state: where you want to end up.
* Operators: what you can do to affect the state.
* Path constraints: things you can't do if you want to preserve general problem-solving ability.

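The four components above can be sketched as a generic search over a toy problem. This is an illustrative breadth-first search, not the original GPS program, and the toy arithmetic problem is invented:

```python
from collections import deque

def solve(initial, goal, operators, constraint=lambda s: True):
    """Breadth-first search: apply operators until the goal state is
    reached, skipping states that violate the path constraint."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path  # sequence of operator names ("moves")
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen and constraint(nxt):
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy problem: get from 2 to 11 using the operators +3 and *2,
# with a path constraint forbidding states above 20.
ops = [("add3", lambda s: s + 3), ("double", lambda s: s * 2)]
print(solve(2, 11, ops, constraint=lambda s: s <= 20))  # -> ['add3', 'add3', 'add3']
```

Even in this tiny space the frontier grows by a factor of (up to) 2 per step; with branching factor F and depth D the space is F^D, which is why brute force collapses on anything chess-sized.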
Operators are like "moves" you can make to solve a problem; like a tree diagram going from one state to another, restricted by the path constraints. The issue is that search spaces are usually huge, even for small problems. How do we calculate a search space? F^D: the branching factor F raised to the depth D. An average chess game is about 60 moves with roughly 30 options per move, so 30^60 is the number of possible games: about 10^88. We can't search it exhaustively.
Important: heuristics and algorithms.
An algorithm is a method guaranteed to produce a solution (usually brute force). A heuristic improves the chances of finding a solution. If you have a large library, counting each book one by one (to get the total number of books) is an algorithm. A heuristic is taking the average of one section, then counting how many sections you have. It's not exact, but it's close. Heuristics let you intelligently ignore most of the problem and still get close to the answer.
This search model is combinatorially explosive: you can't stumble through a huge space, because it takes time and power to check so many possibilities.
Your brain has about 10^11 neurons. It's a big number, but it doesn't hold a candle to 30^60.
Chess is really simple, actually: square board, 2 colours, a limited number of pieces, and a set number of moves.
How do humans solve problems? We certainly don't do it through brute force. We don't have some kind of cosmic power of intelligence, but we still navigate problems well.
Distinction between well-formed problems and ill-formed problems.
For example, 54 times 97: you might not be able to do it instantly, but you know the answer is numerical, and that it'll be larger than both numbers.
"How to take good notes" isn't a well-formed problem. Initial state: I don't have good notes. End state: I want better notes. But what does it mean to have good notes?
How do you have a good first date?
It's not a well-formed problem, because you don't want to follow a script.

When a heuristic is misapplied, it's called a bias. The principle is called TANSTAAFL ("There Ain't No Such Thing As A Free Lunch"), otherwise known as the no free lunch theorem: for all the conditions in which a heuristic helps, there are also conditions in which it hurts. It's easier to think of a word that starts with ___ than a word that has ___ as the 3rd letter.

Availability heuristic: judging the likelihood of something based on how easy it is to think of an example. For example, driving someone to the airport. Salience = seems more probable.

Going back to the GPS... they didn't think about algorithms, they focused on heuristics. Problem formulation is easy; the problem is the solution. How does problem formulation work?

Example: the 9-dot problem. Most of the time, people formulate it as a different kind of problem, not a "connect the dots" problem. Nothing said you couldn't go outside the box, but that constraint was assumed when they formulated the problem. (One thick line = a one-line solution; 3 lines = go through the tops and bottoms of the dots.)

Another example: the mutilated chessboard. 32 white squares, 30 black squares. After learning this, it suddenly becomes obvious that covering it with dominoes is impossible.

Good papers to write: come up with a different way of looking at a problem. Reformulate it, assume different things, consider it differently. Problems that seem insolvable become possibly solvable.

Do we solve by trial and error? No, we narrow the field first, then apply trial and error. But how do we narrow it? It's more a question of which logic you're applying. Emotions often act heuristically; they let us narrow the search space of possible solutions.
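The mutilated chessboard reformulation boils down to a parity count: every domino covers one white and one black square, so unequal colour counts make a tiling impossible. A minimal sketch (the colour convention is arbitrary):

```python
# Mutilated chessboard: remove two same-coloured corner squares,
# then check by parity whether a domino tiling is even possible.
def colour_counts(removed):
    """Count white/black squares on an 8x8 board minus `removed` cells."""
    white = black = 0
    for r in range(8):
        for c in range(8):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                white += 1
            else:
                black += 1
    return white, black

# Remove two opposite corners -- both the same colour.
white, black = colour_counts({(0, 0), (7, 7)})
print(white, black)                          # 30 32 -> unequal

# Each domino covers one white and one black square:
print("tiling possible:", white == black)    # False
```

Once the problem is reformulated in terms of colour parity, the exhaustive search over placements disappears entirely; that is the insight move.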
Some people can operate with pure logic, but asking them something like "Do you want a red or blue pen?" will make them stop, because they start listing the advantages of blue versus red.

Useful heuristics:

Trial and error: there are so many options that the likelihood of getting the right answer is small if the search space is big.

Hill-climbing heuristic: warmer/colder. For example, dosing medication for depression. Here's 100 ml; come back in 2 weeks. How do you feel? They say good: up the dosage. They say too good: lower the dosage. What's the issue? The problem of local maxima. Like a hill: you can hit a local maximum easily, but it's still useful.

Identify the initial state, identify the end state, identify the salient differences. How do you determine those salient differences? Example: I want a sandwich. No bread? Get bread. No store? Find one. Problem: salience. How many differences/similarities are there? Infinite! What's relevant? The relevant ones. That's the issue! How do you solve this? Do this. But what if I can't do this? Do this instead, etc. However, it's homuncular.

Tutorial: October 16th

Don't order the essay topically. Classical rhetoric: the essay is organized functionally.
83 * Introduction!
84
85 * Hook people into the essay
86 * Convince people it's worth reading
87 * Not quite a hook, "you care about what I'm going to say." Convince logically.
88 * Don't start with platitudes.
89 * Thesis at end of first paragraph
90 * Maybe "Throw out" the first few sentences.
91 * Make people interested.
92 * Statement of Case!
93
94 * "I'm taking this side."
95 * Have a critical stance.
96 * "Here's my thesis, here's what I think about it." "Do we do this, do we do this, etc."
97 * Evidence!
98
99 * Provide evidence to support your case.
100 * Present data/evidence, gather evidence and stuff, then generalize a pattern.
101 * Abductive might be a good idea.
102 * Counter-evidence!
103
104 * The important part!
105 * Try to predict objections to your thesis beforehand and counter them.
106 * Worth around a full grade level.
107 * Make the opponents side look as good as possible, and then try and counter their counter. (Steel-man)
109 * Conclusion.
110
111 * ELI5.
112
113Quick notes: If the experimental design is important, explain it. If it isn't, just explain the finding. Furthermore, if multiple people say the same thing, you can explain it once and cite multiple people at the end.

Human intelligence is the product of a lot of dumber things. Decompose it far enough that it becomes nearly mechanical. Formulating the problem is the issue.

Friday, November 30th - Tutorial

What is an abstract?

Formal logical fallacy: "You can't make this claim."
Informal logical fallacy: failures of relevance; makes the reader ask "Why do I care?"

Straw man fallacy: misrepresenting another argument in a way that makes it weaker than the original statement. Way around it: make a steel man, dismantle the steel man, then argue against that.
Begging the question: when the conclusion is the premise; assuming the thing that's being questioned is true. Use "this may conclude," and DO NOT use the word proof/prove. Have multiple lines of evidence.
Homuncular fallacy: explaining the phenomenon with the phenomenon; a very specific way to beg the question. Do research to counter this. Try different ways of wording your assumptions. Make sure it runs backwards.
Unwarranted generalization: generalizing a phenomenon from insufficient evidence. Fix: have enough evidence.
Equivocation: bouncing between two definitions of the same word. State the definition clearly; put square brackets around the word and replace it.
Questionable cause: assuming causation without theoretical or experimental support. Don't compare things you can't back up.
False dichotomy: "with us or against us"; assuming there are only two options in a situation. Fix: literature search.
False analogy: assuming that because a line of reasoning works in one domain, it should work in another. Fix: literature search.

Abstract: a 250-word summary of what you're doing in the paper. It should look more like your conclusion: a review of the thesis and how you got there.
Conclusion: we did __ for ___ reasons, and here's what happened. One or two lines to describe each section. Abstract: explain it like you would in an elevator. DON'T make it an introduction. Make it: here's what I did; if you want more, it's in the paper.

December 3rd, 2018

Weisberg says there are insight problems. Insight problems are not a feature of the world; the world doesn't have problems, we have problems.

The left hemisphere focuses on the sequential and featural; the right hemisphere focuses on the Gestalt -> the whole.
114 * You want to switch between the two at the right time, the timing matters.
115
116Hemispheric Slosh; If the brain can switch at the right time, it's like the konami code of understanding things.The spotlight metaphor for attention is misleading; You can't just "pay attention." Pay attention to what?
117 * Maybe attention is more layered?
118
119Michael Polanyi: focal and subsidiary awareness.
120 * When you're tapping something, you're not aware of the sensation OF the fingers, you're aware of the sensation THROUGH them. You're getting the same information either way, but when you pay attention to B rather than A, the character of the experience changes.
121
122 * Metzinger and Apter - the transparency-opacity shift. Instead of looking through something, you look at it. Similarly: the Stroop effect!
123 * Repeating the same word over and over; it stops sounding like a word, you just hear a sound without meaning. Transparency: looking through something to see something else. Opacity: looking at the thing itself.
124 * Gestalt to featural ; Opacity to transparency. Imagine like a graph; GTFO going clockwise. 9-dot problem, you jump to the gestalt too fast.
125 * Schizophrenia: look at it as salience dysregulation. If you're walking down the street, everything means too much. Some drugs increase dopamine; rather than thinking of dopamine as reward, think of it as "pay attention!" Everything seems important.
126 * People who are practiced in mindfulness are better at insight problem solving.
127
128 * What's mindfulness? Still your mind, still your thoughts, pay attention to the sensations of your breath; pulls you down to the featural sensory level of your experience; Scaling down.
129 * Vervaeke and Ferraro; scaling up; if you want to "Give the world compassion", you try and encompass the whole world transparently.
130 * Deyoung, Flanders, Peterson (2008): Frame breaking. The anomalous card task: Hold up a card, and ask them to say the SUIT. Ace of diamonds = diamonds. However, add an "Anomalous" card; an impossible suit: A black jack of hearts. How quickly do they identify this? How quickly do they "Break frame"
131 * Perceptual understanding -> Predictive of insight problem solving.
132 * Draw an H out of small S's: if someone has damage on the left, they draw the H (the overarching shape). If someone has damage on the right, they might draw a ton of small s's but not the H.
133 * Knoblich (1995) : Matchstick problem. Chunk Decomposition and Constraint Relaxation.
134
135 * Chunk Decomposition: Relies on Chunking. Chunking can also block how we learn things.
136 * There's a candle and two rings. How do you connect the rings? Take the string (the wick) from the candle and tie them together. Break the candle into components.
137 * What are we talking about? Break it down. Can we break it down?
138 * Constraint relaxation: automatized assumptions you bring into a problem. De-automatize the assumptions; but how? Bring them into our awareness. What assumptions are we making?
139 * Phase function fit: Administering a drug outside of a normal time can cause you to overdose.
140 * Stephen and Dixon (2009): gear problems! Participants are shown a diagram of clock gears. They start force tracing: this one goes this way, this one goes that way, etc. But then they notice it alternates odd/even. The insight move: going from force tracing to recognizing the odd/even rule. Phase space reconstruction: they measured overall levels of entropy in processing. Right before participants recognized the switch, entropy would spike. Noise in the image might even speed up the solution.
141 * Per Bak did work with sand: self-organizing criticality. Like an hourglass: as sand falls, it builds an increasingly steep cone, until it can't hold together and it avalanches onto the base. A long period of order, where you can predict where the sand will land, UNTIL it breaks down into chaos. This also happens in the brain! Self-organizing criticality.
142
143Network theory: imagine a network of knowns and unknowns. 1) The system needs to be efficient. 2) The system must be resilient: if something is damaged, it still works. For resilience, build a regular network: links between every node and its neighbours. For efficiency, build a random network: a random pattern of links; the mean distance between nodes is low. A regular network is resilient but super inefficient; a random network is super efficient but fragile. The alternative: a small-world network! Mostly regular structure with a small number of random links; roughly 70% as resilient as a regular network and 70% as efficient as a random one. You can find small-world networks in the brain. When a regular network becomes a small-world network, that is the experience of insight. 1) People good at insight have more small-world networks. 2) Self-organizing criticality occurs when you have small-world networks.

January 7th, 2019

Office hours: Monday 1-3, Thursday 4:30-5:30. Must sign up! Email: john.vervaeke@utoronto.ca

This term is about artificial intelligence! Through AI, computational models of cognition can create synoptic integration.

The pre-history of AI. Scientific Revolution: the first proposal that we can try to build minds. It was a revolution against something. 3rd-generation cogsci claims that maybe we lost too much?

A world view is made up of two theories: a theory of the structure of the world, and a theory of how we know it. A theory of knowing and a theory of being.

Aristotelian view: a view derived from Aristotle. (This may not be historically exact! It's plausible.) What is it to know something?
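The regular-vs-small-world comparison above can be sketched in pure Python; the sizes here (100 nodes, 2 neighbours per side, 10 shortcuts) are invented illustration values, not figures from the lecture:

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total = pairs = 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Regular network: each node linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, n_extra, rng):
    """Add a few random long-range shortcuts -> small-world structure."""
    n = len(adj)
    for _ in range(n_extra):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

regular = ring_lattice(100, 2)
print("regular:", avg_path_length(regular))        # long mean path

small_world = rewire(ring_lattice(100, 2), 10, random.Random(0))
print("small world:", avg_path_length(small_world))  # much shorter
```

A handful of random shortcuts sharply drops the mean path length while leaving the mostly-regular (resilient) local structure intact; that is the efficiency/resilience trade-off the notes describe.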
144 * We think of knowledge as a true description of something.
145 * Explanation word is be-cause: When we can cause it to be.
146 * As Aristotle says:
147
148 * Who knows a chair better: someone who can make a chair, or someone who can only describe it?
149 * Make! Because they can cause it to be. They understand the cause to its existence.
150 * What does the maker have that the other doesn't?
151 * Feature lists lack Gestalt. The structural organization that makes something.
152
153 * As a result, that carpenter HAS Gestalt. (Eidos is a similar greek word. Related to: idea.)
154 * Aristotle said: I can make a chair because I have the Gestalt.
155 * If it acts like a chair (has the functional structural organization), it is ACTUALLY a chair.
156 * This theory makes lots of sense! Lots of things converge to it, and it explains a lot.
157 * Con-FORMIT-y theory.
158
159 * The Gestalt in that the mind and an object conform to one thing.
160 * How do you know if something is real? Whether the conformity is true?
161 * First, we make sure that the relevant organs are functioning properly. Are you drunk/stoned/etc.?
162 * Second, no distortion in the environment. Too loud? Hard to hear? etc.
163 * Is there inter-subjective agreement?
164
165 * Evaluate ALL of these, rationally!
166 * If it all checks, its likely that what happened is real.
167 * Put these two together! Get the two theories to mutually support each other.
168 * Geo-centric world view: The Earth is at the center, and everything moves around it.
169
170 * Aristotelian science relied heavily on our experience, NOT on math.
171 * Living things can move on their own, because they have internal drive.
172 * If I drop a chair, why does it move? Does it have internal drive?
173 * Earth, Air, Fire, Water; The four elements, with the Earth in the middle. Natural Motion! Like fire: Ashes fall, Water condenses, Fire goes up, Smoke goes even higher.
174
175 * The theories support each other.
176 * It lasted 1800 years, how did it fall apart?
177
178 * Trade becomes important and picks up.
179
180 * People can get wealthy, without relying on the church or religion.
181
182 * They started making corporations.
183 * Ships can sink though!
184 * Navigation is important. People looked at the stars.
185 * In the Hindu world, accounting was picking up.
186
187 * Information processing is better, so they're trying to get better information of the heavens.
188
189 * Calculation is better, computation is better. Processing more data.
190 * Everything is an illusion! The MATH makes it work better.
191 * As this happens, instead of the mind and the world working with each other, experience becomes a barrier.
192
193 * However, the MATH is what helps you get around this.
194 * When your mind knows, it's running math.
195 * Now, Galileo is impressed, and is influenced by Plato.
196
197 * He says math is the language of the universe.
198 * If you can apply math to the universe, you can get knowledge.
199
200 * For now, Math was geometry.
201 * Galileo used geometry to represent things other than shapes and space.
202 * Inertia: inert, no internal structure, etc.
203 * Things in the world are not like us.
204
205 * Things act accidentally, NOT with purpose. Inert. (Time mark: one hour or so.)
206
207 * Scientific Revolution: Matter is inert.
208 * What are we? We act on purpose.
209 * Is my acting on purpose, just an illusion?
210 * Descartes is a big deal in cogsci.
211
212 * Cartesian graphing: Everything can be represented by an equation.
213 * Graphing is not natural.
214 * We believe that reality is captured by equations.
215 * Information is represented in an abstract and symbolic manner.
216
217 * Cognition is computation.
218 * Hobbes
219 * We can do more than theorize about minds; we can MAKE minds.
220
221 * Descartes disagrees with this!
222 * Galileo realized not everything can be represented with math.
223
224 * How sweet is honey? Such qualities can be illusory.
225 * He proposed PRIMARY and SECONDARY qualities.
226
227 * Primary qualities are OBJECTIVE; can be measured mathematically.
228 * Secondary qualities are NOT mathematical.
229
230 * Called qualia.
231 * Scientific revolution said matter is not made of qualia.
232
233 * If our minds are matter, they cannot have qualia.
234 * You can't make a mind by making a material machine.
235 * The meaning is in the IDEA that is attached.
236 * If something is purely mechanical, it does not operate with meaning.
237
238 * Remember the three criteria that must be met! Those three create meaning.
239 * How do you think rationally, have consciousness, and be mechanical?
240
241AI scientifically: weak and strong AI. Strong AI = AGI, Artificial General Intelligence. (The AGI revolution.) Strong AI tries to recreate the mind, an instance of mind. If it succeeds, it would be immoral to turn the machine off.

Reverse-engineer the test questions! Good arguments, good explanation, good exposition.

The ROLE of artificial intelligence in cognitive science: weak AIs do what we tell them. Strong AI is the idea that we can make material minds; if cognition is computation, we can make an instance of a mind. Descartes argued that matter is purposeless; no meaning.
242 * Humans and living things act with purpose.
243
244The core of who and what you are is your consciousness and your rationality. These things cannot be captured by a material mechanism. A well-respected minority believes that strong AI will fail; that Hobbes was wrong.
245 * Information is encoded in propositions that are abstract and symbolic in nature.
246
247 * Proposition; can be evaluated as true or false.
248 * If a machine can get good enough at encoding information and propositions, we have made cognition.
249 * Therefore, WHAT is a computer in the abstract?
250
251 * What kind of machine/process can do what Hobbes and Descartes mean?
252 * GOFAI: Good Old-Fashioned Artificial Intelligence! Going back to Descartes and Hobbes. Also known as First Generation Cognitive Science.
253
254 * A computer is an interpreted, automatic, formal system.
255
256 * What is a formal System?
257
258 * You have rules for manipulating tokens.
259 * Monopoly! You have matter in Monopoly. You have tokens which can be moved around; manipulated.
260 * You can alter and replace tokens; move a house, put up a hotel.
261 * You can add new tokens, take them away.
262
263 * There are rules which regulate how you manipulate those tokens.
264 * There is a configuration of tokens, and I can manipulate them according to the rules. But, those rules should still apply no matter what.
265 * A formal system is self-contained. I don't need to pay attention to anything other than the configuration of tokens, and the rules.
266 * Meaning is irrelevant. You can play Monopoly however you want.
267 * Look at Chess; the Knight used to be a war elephant, for example. However, it still serves the same purpose.
268
269 * Formal means it only focuses on a relationship of the token's configuration; the form, formal.
270 * As long as you can identify the types of tokens, manipulate them according to the rules, and repeat, it is formal. It runs on its own dynamics.
271 * We don't need to know what a token stands for; just its relational configuration to the rest of the tokens.
272 * Meaning matters to explaining behaviour.
273 * There is a difference between type and tokens!
274
275 * It doesn't matter what the token is, as long as the token represents the same thing.
276
277 * If you break a chess piece, you can use a new one so long as it adheres to the same rules.
278 * The formal system is medium independent.
279
280 * In this context, medium = singular of media.
281 * So long as the system can reliably distinguish and manipulate the tokens, it doesn't matter what they're made of.
282 * To say that it can be made of anything, does not mean it can be immaterial. Chess does not have to be in any one media.
283 * Formal Equivalence; example of playing chess across multiple boards and methods. (Multiple Realizability)
284
285 * For each distinct position in one system, there is an equivalent in another.
286 * Whenever a move would be legal in one system, it would be legal in another.
287 * AI needs multiple realizability, achieved through formal systems. Is there anything else, though?
288 * The mind is software; the formal system.
289 * The body is hardware; the physical medium.
290 * However, neuroscience and AI make for a strange pairing here: for chess, you don't need to know the physical medium.
291
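The token-and-rules picture above can be made concrete with a toy formal system. This is an invented example, not one from the lecture: tokens are letters, and rules rewrite configurations purely by their form, never by what the tokens "mean" (and swapping 'A'/'B' for any other distinguishable tokens leaves the system formally equivalent, i.e. medium independent):

```python
# A toy formal system: tokens are characters, rules are rewrite pairs.
# The system is self-contained -- it consults only the configuration
# of tokens and the rules, never what the tokens stand for.
RULES = [
    ("AB", "BA"),   # swap rule
    ("AA", "A"),    # contraction rule
]

def apply_rule(config, old, new):
    """Apply one rewrite rule at the first place it matches, if any."""
    i = config.find(old)
    if i == -1:
        return None
    return config[:i] + new + config[i + len(old):]

def step(config):
    """Try each rule in order; return the first legal successor."""
    for old, new in RULES:
        nxt = apply_rule(config, old, new)
        if nxt is not None:
            return nxt
    return config  # no rule applies: the system halts

config = "AAB"
for _ in range(5):
    config = step(config)
print(config)   # -> BA
```

Making the system *automatic* just means building a machine that runs the write-read cycle itself; making it *interpreted* (assigning the tokens meaning) is the further, harder step the notes turn to next.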
292Test questions:
293 1. AI has a long history in which ideas developed which made possible its central idea of cognition as computation. Explain the historical developement carefully, and explain why artificial intelligence was such a brilliant hypothesis. Also explain the problems facing the project of creating strong artificial intelligence.(DO NOT TALK ABOUT WEAK AI. ADDRESS THE DISTINCTION.)
294 2. Explain the nature of insight and what relevance it has to our understanding of cognition.
295 3. Why is it difficult to make memory the basic process in which we naturalize cognition?
296 4. Carefully explain the nature of formal systems and the promise of thesis that cognition is computation as well s the difficulties facing the proposal.
297
298You can refer to your short-answer questions in the long-answer questions. That is the ONLY internal reference allowed; you cannot refer to one short answer within another. A long answer should be 2-3 times the length of a short answer. Demonstrate UNDERSTANDING.

Descartes' arguments are against "cognition is computation." Strong AI tries to show that HOBBES is right; that cognition IS computation.
299 * What is Computation?
300 * A computer is an interpreted automatic formal system.
301
302 * Recall formal systems; tokens, rules, rules operate the configuration of the tokens, and the type of token says how it can follow the rules.
303 * Formal systems are self-contained; you don't need to pay attention to anything other than the configuration and the rules.
304 * They are also medium independent; meaning as long as the system can determine the token and manipulate the token to the rules, what it is made out of is irrelevant.
305 * Two formal systems can also be equivalent (Formal equivalence); instances of the same formal system.
306
307 * When this happens, there's multiple realizability, meaning the same formal system can be represented in multiple physical mediums.
308 * What is needed to achieve formality?
309
310 * The rule of being digital: The relationship of digitality to computation processing.
311
312 * To explain this, lets explain some other things first.
313 * A positive technique is a technique that can completely succeed according to a standard we agree upon. Example: Guessing the number of people in the room.
314 * A reliable technique is a technique that succeeds very often. Example: Counting the number of people in a room.
315 * A formal system is "writing" when it manipulates tokens (when it follows the rules in manipulating them).
316 * A formal system "reading" is when it can identify what token it is, after it has been manipulated.
317 * Write-read cycle: manipulate, identify, manipulate, etc.
318 * Now, back to digital. A digital system is a set of positive and reliable techniques for producing and re-identifying tokens, or configurations of tokens, from some pre-specified set of types.
319
320 * Better version: A system is digital if it has reliable and positive techniques for write-read cycles.
321 * If a system doesn't have reliable and positive techniques, the self-containedness of the read-write will break down.
322
323 * A Shakespearean sonnet is digital; it can be reproduced and the copy counts as the original.
324 * A painting is not; the historical properties are crucial to identifying the importance of a work of art.
325 *
326 * Formal systems must follow rules.
327 * The rest of the world doesn't follow rules.
328 * How can a machine follow rules without engaging in a homoncular fallacy?
329 * You can use rules to describe a chair; like f=ma.
330 * But the chair does not follow rules.
331
332 * For example, you can use calculus to explain how the solar system works.
333 * But the solar system does not "do" calculus.
334 * We need to show how a formal system can follow rules without equivocating rule following and rule describing, and without a homuncular fallacy.
335 * The assumption is that you can break complex rule following into simple rule following, and that simple rule following can be explained mindlessly.
336
337 * You can decompose rule following into primitive algorithms.
338 * The idea is that any complex rule following behaviour can be broken down into primitive operations that follow primitive rules.
339 * Doorknobs are structured to work a special way, because the rules are built into its functionality.
340 * Darwin showed us that things that work a way do not need to be structured beforehand.
341 * Proposal: We are running a formal system. We can analyze our mind into primitive algorithms (a primitive operation following a primitive rule), then formalize it (Show how it works in a formal system), then you can mechanize (make a machine that can do it.)
342
343 * It satisfies the naturalistic imperative. How to analyze cognition, formalize cognition, then mechanize cognition.
344 * We can decompose intelligence into an army of idiots.
345
346 * Intelligence-as-computation can be explained with primitive algorithms. This does not equivocate rule following and rule describing, and it does not explain the mind with a mind.
347 * Is there something important about being "living" that is missing from the machines? Is that important?
348 * Maybe formal systems lack central features that biological things have? This might be relevant to how the mind works though...
349 * To make it automatic, put all components of something into one machine. The machine has tokens, a token manipulator, and a referee.
350
351 * In each of those, it has tokens, a token manipulator, and a referee.
352
353 * In each of THOSE, it has tokens, repeat!
354 * The machines are made up of submachines which are made up of submachines which are...
355 * It's just the next set of things.
356 * You can decompose complex computation into primitive algorithms. Just because it's made out of algorithms doesn't mean it's working in algorithmic fashion, however. There's a difference between being built out of algorithms and operating algorithmically.
357
358 * A chess program uses algorithms, but plays heuristically. If it tries to play algorithmically, it'll explode.
359 * You can also decompose a complex machine into simple machines. (Binary, 1 or 0.)
360 * However, you need an interpreted automatic formal system.
361 * Needs meaning! A computer does not have meaning; you interpret it to have meaning.
362 * 1985 proposal: This fails. FIND THIS IN THE LAST LECTURE.
363
364 * how do you turn tokens into symbols?
365
366 * When does a model become an instance?
367 * Tokens are syntactive, but symbols are semantic.
368
369 * Tokens represent things, while symbols mean things.
370
371 * Symbol grounding problem: the system can't ground the meaning of its own tokens.
372 * Logic is just about the syntactic relations between components.
373 * If you have an argument that is in words under reason but under symbols in logic, then they are the same. But if you can't distinguish it, they are the same. (2 hours into lecture)
374 * All men are mortal; Socrates is a man; therefore Socrates is mortal. In symbols: ∀x(Man(x) → Mortal(x)), Man(s) ⊢ Mortal(s).
375
376 * One has meaning, one represents it.
377 * Pure syntax cannot contain contradictions.
378
379 * Can be multiple meanings, the STRUCTURE is the same.
380 * How do you get tokens to mean something?
381 * After going over the differences, it's clear there's a difference between simulating a mind and having a mind.
382
383 * how do you know when a simulation has become an explanation?
384
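The earlier idea of decomposing complex computation into primitive operations ("an army of idiots") can be sketched by building addition out of nothing but NAND gates. This is an illustrative example, not one from the lecture; every gate is a mindless primitive rule-follower:

```python
# Build arithmetic from an "army of idiots": every operation below
# reduces to NAND, a primitive rule-follower with no understanding.
def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """One-bit addition, decomposed entirely into NAND gates."""
    return xor(a, b), and_(a, b)   # (sum, carry)

def full_adder(a, b, carry):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry)
    return s2, or_(c1, c2)

def add_bits(x, y, width=8):
    """Ripple-carry addition of two ints, gate by gate."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_bits(54, 97))   # -> 151: complex behaviour from mindless parts
```

No gate "knows" arithmetic; the complex rule following is analyzed into primitive rule following, formalized, and mechanized, which is exactly the naturalistic-imperative recipe in the notes.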
385Topic: AI. Purpose (what do I want to argue): AI is impossible. Method (how do I want to argue it): by looking at ___ and codes, AI is impossible.

Same mark scheme: short answers are out of 20 (ish) and the long answer is out of 30. How well can you explain something, like a teacher would? Relisten to the lecture!

First attempt to show that Hobbes is right: the General Problem Solver! Is relevance a formal property? Can it be captured semantically or syntactically? If you can't formalize relevance, you can't make cognition computation.
386 * The failure of the GPS showed the centrality of this issue.
387
388IS RELEVANCE FORMALIZABLE IN FORMAL SYSTEMS? After the GPS failed, the idea of strong AI moved on.
389 * Idealization; trying to remove irrelevant or competing factors, to get a precise analysis.
390 * Many sciences use the scientific method; controlling for confounding (unaccounted for) variables.
391 * Always in a trade-off situation in an experiment; the more you idealize, the more the experiment loses the similarity to the real world.
392 * No such thing as "The data speaks for itself."
393
394 * Balance between controlling for variables, and not losing ecological validity (The applicability of the experiment to the real world.)
395 * GPS failed, because solving problems is ridiculous; too complex.
396
397 * Creating microworlds; Idealize that microworld, then scale it up.
398 * Terry Winograd created something called SHRDLU (an acronym that doesn't stand for anything).
399 * SHRDLU was a microworld, where there was very limited features of the environment. SHRDLU could move things around.
400
401 * It would figure out to move __ out of the way, then move ___, then get the specified object.
402 * SHRDLU failed, because you cant scale up that microworld.
403 * SHRDLU does not initiate anything; it is reactive.
404 * It does not need to perceive its world or causally interact with it; it is disembodied AI.
405 * SHRDLU does not solve problems, it avoids them.
406 * Because its environment is so simple, it can keep track of everything. It avoids combinatorial explosion.
407 * SHRDLU is incapable of qualitative development; therefore it cannot be scaled up.
408 * SHRDLU is a simulation that will not be an instance of cognition.
409 * Will it give me an explanation of things that I already know?
410 * SHRDLU lacked common sense.
411 * We need to make something analogous to general intelligence.
412
413 * Knowledge access problem. We need to structure it the right way so that it has easy access to relevant information.
414 * Context sensitivity and knowledge interconnectedness: how does a system bring relevant information to bear? Knowledge access.
415 * Trying to get a way of how fast you can get relevant information.
416
417 * At the time, intelligence was how fast you could figure out something.
418 * That's wrong though; speed is not a factor of intelligence.
419 * EFFICIENCY is more important.
420 * Stereotypes: You dont want to represent everything that is known about a topic, because that would slow down efficiency.
421
422 * Want FAMILIAR and FREQUENT, but those are slippery terms. Then what's a "typical" dog? PROTOTYPE THEORY.
423 * Is stereotype theory just an instance of prototype theory? Does it inherit all the problems?
424 * No! It has cross-referencing; Redirection of one term to another text which defines that term.
425 * This is how the system handles the problem. The problem is that it's not clear that it is a solution.
426 * Staying on topic is an issue. Relevance then can become homuncular.
427 * Too fluid in connectivity, but too rigid in how information is represented.
428 * For example: Take does not have a singular meaning. The meanings are not completely disconnected from each other.
429 * It can be too vague if too broad, but can raise the issue of "What to use" if too narrow.
430 * The number of stereotypes is shown by how well-defined the problem is.
431 * Where is information stored? We draw information from many things and integrate it, as needed, when needed.
432 * Stereotypes suffer from tunnel vision: Too rigid in what they represent
433 * Stereotypes also suffer from Blurred vision: Cannot stay on topic.
434 * Climbing a tree to reach the moon: It starts looking nice, but fails because it's not the right way to do things.
435 * The Frame Problem: Zero'ing in on relevant information. Shanahen: Important guy who made this!

* How do you represent change in a problem?
* Theseus' Ship.
* When you solve representing change in the frame problem, what remains is the philosophical problem of relevance: how does the system bring relevant information to bear?
* Cognitive agent: an agent that can determine the consequences of its own behaviour and change its behaviour accordingly.

* Make a machine that's a cognitive agent!
* What about side effects?
* Make the machine capable of paying attention to unintentional side effects!
* COMBINATORIAL EXPLOSION!
* No definition of relevance.
* The relevance problem: organize memory by breaking it up into compartments. Those compartments are categories, and then you have a relevance issue again.

* How many boxes?
* Determining how something changes depends on how it's already represented and on the things it is defined by and interacting with.
* Compartments depend on your ability to categorize things based on relevance.
* Let sleeping dogs lie (don't disturb things you shouldn't): you've represented the relevant properties of the phenomenon in question.
* If relevance cannot be formalized, Descartes has beaten Hobbes.
* Cognition is LIKE a language.

* We can communicate our cognition, like a language!
* When we reason, it's the same as when we make an argument.
* But we break this down immediately. DISCREDIT THIS.
* If you don't model cognition as a language, what do you model it on?

* Model cognition on how brains work?
* An earlier version of connectionism!
* Perceptrons: they can do a lot of things! A perceptron says whether an input is in a class or not, works in parallel, and is all linked together. The basic form of a neural network.

* There are lots of things that perceptrons can't solve; a perceptron can solve one specific instance but will fail on others. Behaviourism!
* Adding hidden layers gets networks to do things that perceptrons can't.
* Is cognition computational and language-like, or not?

* Binary? Is language binary?
* If you don't accept it, then neural network theory is not an alternative.

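The perceptron limitation and the hidden-layer fix can be made concrete with the classic XOR case. A minimal sketch (my illustration, not from the lecture): a single threshold unit cannot compute XOR, but a network with one hidden layer of hand-set weights can.

```python
# Sketch (my illustration): a single threshold unit cannot compute XOR,
# but one hidden layer with hand-set weights can.

def step(x):
    return 1 if x > 0 else 0

def unit(inputs, weights, bias):
    # A perceptron-style threshold unit.
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    h1 = unit([a, b], [1, 1], -0.5)       # hidden unit: fires on OR
    h2 = unit([a, b], [1, 1], -1.5)       # hidden unit: fires on AND
    return unit([h1, h2], [1, -1], -0.5)  # output: OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # -> 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

No single line can separate XOR's true cases from its false cases, which is why the hidden units are needed.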
GPS failed. Micro-worlds: too complex, can't scale up, because they cannot give us principled answers; they avoided confronting the problems. Stereotypes: trying to understand how this relevance process is solved by talking about how MEMORY is organized.
* Shares issues with prototype theory, because it's close to prototype theory!
* A homuncular proposal!

People started to challenge GOFAI: neural networks! Fodor, the biggest defender of GOFAI, said relevance could not be formalized.
* GOFAI was fundamentally flawed; it was trying to understand cognition as language.
* Once logic gets unjammed, we have an issue.
* Perceptrons: it was proved that they could not do basic logic functions. Immediately destroyed as a proposal for cognition.

* Behaviourism (from psychology) had been formally disproved.
* Perceptrons are a different kind of machine! People argued this after the approach went underground.

Connectionism: don't model your proposed simulations on language; model them on the brain.
* If we model it on language, then multiple realizability is put under strain.
* Neural networks are understood and modelled as circles called nodes, which are connected to other nodes.
* Neural networks usually have only hundreds of nodes; our brain has vastly more. Do the nodes represent groups of neurons? No, because nodes work differently from our neurons.
* The networks also lack functional elements of our brain.

How neural networks are inspired by brains rather than language:
* In brains, one neuron can excite or inhibit another.
* Neurons have a causal relationship with each other.
* Non-linguistic variables we can change:

* Manipulate this causal valence: excite or inhibit, + or -. (Not quite a good explanation.)
* Signal strength.
* Some connections are more important: the weight of a connection.
* Attachment is evolution-based (you're born prematurely because of evolution).

* Based on the emergency system (the amygdala).
* Normally, the neofrontal cortex has a huge effect on the limbic system.
* During grief, the limbic system has a huge effect on the neofrontal cortex.
* The WEIGHT OF THE CONNECTIONS matters.
* Do not confuse connection weight and signal strength! Connection weight is what kind of biological magnifier is attached to the signal.

* Signal strength is how much water is coming down the pipe; connection weight is how large the pipe is.
* Imagine nodes B and C both connected to node A. Node B can be an excitatory node, node C an inhibitory node.

* B has a positive valence, a medium connection weight (+3), and a medium signal strength (x4).
* C has a negative valence, a weak connection weight (-1), and a strong signal strength (x6).
* Sum = connection weight times signal strength, summed over connections: (+3 x 4) + (-1 x 6) = 12 - 6 = +6, positive.
* Positive = increase firing, negative = reduce firing.
Competence is not what you do; it's what you're capable of doing.
A problem the network faces: submarines use echolocation. Is it a rock in front of you, or a mine? Whichever output node is more active determines the answer. Structure: input, hidden, output. Digitize the sound wave first! The input nodes differ based on what goes into them. Then you get output values (for example, 0.2 for rock and 0.6 for mine). Target value - performance (actual) value = error value. What is the error value?
* It's hard to determine how much of an impact a specific connection had.
* Something goes through and does an analysis that checks the probability that a connection caused the error.
* Proportionally adjust each connection weight based on its blame.
* Backpropagation of error! Repeat this over and over until the network gets as good as (or better than!) a human at the task.

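The error-driven adjustment just described can be sketched in miniature (a one-layer toy with invented numbers, not the lecture's sonar network): compute error = target - actual, then nudge each weight in proportion to its share of the blame.

```python
# Minimal sketch (invented example): error = target - actual, and each
# connection weight is adjusted in proportion to its blame, here taken
# as its input's contribution to the output.

def train_step(weights, inputs, target, rate=0.1):
    actual = sum(w * x for w, x in zip(weights, inputs))
    error = target - actual                    # target value - actual value
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
inputs = [1.0, 0.5]
for _ in range(50):
    weights = train_step(weights, inputs, target=0.6)

actual = sum(w * x for w, x in zip(weights, inputs))
print(round(actual, 2))  # -> 0.6, the output has converged on the target
```

Full backpropagation does the same blame assignment through hidden layers; this shows only the core target-minus-actual idea.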
Plasticity: altering the architecture. Training and tinkering form a cycle for modifying a neural network. Training is learning; tinkering is plasticity.
Advantages of neural networks: they solve problems that standard computational models can't! Many environmental things happen at once, and networks can pick up on them.
* Parallel distributed processing!

* The environment runs in a parallel, distributed fashion: lots of stuff is happening at once.
* The invariance problem: "Aaah" can be said quietly or loudly, but it means the same thing even though the volume differs.

* Your brain uses interacting variables. It picks up a complex pattern of multiple variable constraints.
* Can give a viable account of procedural knowledge.

Graceful degradation: as light dims, your ability to read goes down; performance degrades gracefully. Machines fail catastrophically. Graceful degradation also applies to plasticity: you mess around with your architecture!
But learning here is based on something else having already learned. Remember target value - performance value? The target value comes from humans. Learning is based on prior successful learning. We need a theory of unsupervised learning and unsupervised plasticity, a theory that connects the two, and an explanation of the relationship between neural networks and the brain. We have the first two, in an incomplete but somewhat advanced state.
Self-organization is important to the theory of unsupervised learning. When a system is self-organizing, there is no distinction between its functioning and its development; they are entirely intertwined. How do you get something to learn without giving it a target value to correct itself? There are different ways of writing an "a."
* Imagine a network with two different kinds of connections.
* Include things that belong; exclude ones that don't.
* Scatter plot; line of best fit (it might touch no dots!).

* Interpolation and extrapolation!
* Compression loses data; you want to keep what's invariant and leave out what's variant.
* If you have a model that interpolates and extrapolates well, it is a more accurate representation of the environment.
* Step 1:

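The line-of-best-fit idea can be made concrete with ordinary least squares (my sketch; the data points are invented). The fitted line compresses the scatter, possibly touching no points, and then interpolates and extrapolates to inputs it never saw.

```python
# Sketch with invented data: a least-squares line of best fit compresses
# the scatter (it may pass through none of the points) and then
# interpolates and extrapolates.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]    # noisy points near y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope * 1.5 + intercept)    # interpolation between observed points
print(slope * 10 + intercept)     # extrapolation beyond the data
```

A model that interpolates and extrapolates well has captured the invariant pattern rather than the noise.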
GO TO OFFICE HOURS: ASK ABOUT THE VARIANCE/INVARIANCE EXAMPLE.
Use try/except stuff! Process/event history in application.py is important! Understand process event history.
Tutorial: March 1st
"How to limit a scope": make sure "Can a sufficiently overpowered experiment answer this question?" is false. Assuming that the laws of the universe are coherent, is this always true? Don't use "prove" without the proper philosophical meaning! Check your presuppositions: what are you assuming about the world? Make sure something is true first. Use consistent words, especially for heavy-lifting words. But don't write an essay to write the essay; don't spend too much time explaining. You should be done defining by around page 2. Is the mind binary? DO NOT GO NEAR NEUROSCIENCE. Cognitive processes, mechanisms, etc.: DO NOT MENTION THE BRAIN. Make sure to define the mind.
Last time: connectionism! Networks as an alternative to GOFAI. Why? To understand cognition. Neural network theory is based on non-language-like variables, modelled on how the brain operates, to create a system for strong AI.
* Variables: strength, valence, and weight.

* All non-language-based!
* It is not clear what nodes in a neural network are in a mind!
* Neural networks try to explain the mind using these variables, which are plausibly similar to those in a neuron.
* Adding intermediary nodes changes the competence of the network: not how much it can learn, but what KIND of things it can learn.
* Parallel processing: solves "chicken and egg" problems.

Backpropagation cannot be learning, though! It fails because it requires a target value that has been independently learned and provided.
* A theory of learning that requires prior learning.

Unsupervised learning: the wake-sleep algorithm.
* A way to create a self-organizing process, such that the network trains itself.
* The network goes through a stage where it takes the input and does "data compression."
* It abstracts invariant patterns, then takes those patterns and generates variations.

* The generation sensitizes the recognition weights, so that next time the network takes in more complex variations and makes more complex variations.
* The neural network feeds back on itself.
* What seems to be happening is a process where the system is developing and functioning.
* The development and the functioning are happening at the same time.

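A loose toy analogy of the compress-then-generate cycle just described (the function names are mine; this is not Hinton's actual wake-sleep algorithm): recognition abstracts an invariant pattern from examples, and generation produces new variations from it, with no external target value.

```python
# Loose toy analogy (function names mine, not the real wake-sleep
# algorithm): "recognition" compresses examples to an invariant pattern,
# "generation" produces a new variation that can be fed back in.

import random

def recognize(examples):
    # Compress: abstract the invariant pattern (here, just the average).
    n = len(examples)
    return [sum(col) / n for col in zip(*examples)]

def generate(pattern, noise=0.1):
    # Generate: a variation on the abstracted pattern.
    return [v + random.uniform(-noise, noise) for v in pattern]

# Three noisy examples of "the same letter written differently".
examples = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [1.0, 0.0, 0.7]]
prototype = recognize(examples)   # invariant pattern
variant = generate(prototype)     # new variation, no target value needed
print(prototype)
```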
We thought the computational model was right: the mind is a formal system.
* We just needed to know the structure of the formal system.
* The way in which cognition is developmental is ignored by GOFAI. Completely neglected.

* In GOFAI, history and function are separate.
* Neuroscience is kind of irrelevant for GOFAI.
* In this new system, the two are meshed together. Cognition is inherently developmental.

Old model: software is its own formal system; hardware is its own formal system.
New model: software and hardware are self-organizing and linked together.
New point: how do we do unsupervised plasticity?
* The work of Mareschal and Shultz, 1996.
* Cascade-correlation models: a criticism of GOFAI's rejection of development.
* They criticize Jerry Fodor.

* There are two types of development in developmental psychology: quantitative and qualitative.
* Quantitative development is the ability to learn more, but does not require any new functions. (A change in content!)

* If you want to learn numbers, you need a new function. Numbers are not words!
* Qualitative development is the ability to learn functions you didn't have. (A change in competence: WHAT YOU ARE CAPABLE OF DOING.)

* Example: a kid learning the golden rule.
* Qualitative development needs a change in the formal system.
* Objection: you can't derive stronger logic from weaker logic.
* So everything we think we're learning is actually innate: WE'RE NOT DEVELOPING.

* However, if there are cognitive processes that are not computational, there is a possible objection.
* GOFAI implies no qualitative development.
* Neural network theory can give us an account of qualitative development.
* Alignment phase: the network grows random new nodes, which can be affected by the network but cannot yet affect it.
* Find the node whose activity level correlates most with the error value.

* Plausibly observable in the human brain (neurogenesis).
* Minimization stage: the node is not only affected, but can now start to affect the network.

* Why is it called minimization? If, when the node is connected, it reduces the error, it stays. If it doesn't, it's killed off and you start all over again.
* A CASCADE of correlations.
* Kids acquire functions in stages:

* Identity: they assign something to a variable.
* Addition.
* Multiplication/division.
* The cascade-correlation model went through the same stages.

* This is a reasonably plausible proposal for how cognitive development works.
* However, it is not COMPLETELY unsupervised.

* It needs a measure of how much error there is.
* Something like this is going on!
* Unsupervised plasticity has to work with unsupervised learning.
* Neural networks use non-computational processes to solve a problem!
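The recruitment step described above can be sketched in miniature (hypothetical code with invented numbers, not Mareschal and Shultz's model): among candidate nodes, pick the one whose activity history correlates most strongly with the network's error.

```python
# Hypothetical sketch of cascade-correlation recruitment: candidate
# nodes watch the network, and the one whose activity best tracks the
# error value is the one that gets wired in.

def correlation(xs, ys):
    # Pearson correlation of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def recruit(errors, candidates):
    """candidates maps node name -> activity history over the same trials."""
    return max(candidates, key=lambda c: abs(correlation(candidates[c], errors)))

errors = [0.9, 0.1, 0.8, 0.2]
candidates = {
    "node_a": [0.88, 0.12, 0.81, 0.19],  # tracks the error closely
    "node_b": [0.5, 0.5, 0.4, 0.6],      # mostly unrelated
}
print(recruit(errors, candidates))  # -> node_a
```

In the full model the recruited node is then trained and kept only if it actually reduces the error; otherwise it is discarded and the cycle repeats.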
Goal: can we have a real alternative to GOFAI?
First-generation cogsci: GOFAI. Second-generation cogsci: neural networks. Third-generation cogsci: developmental/parallel models based on neural networks.
Debate: what is the relationship between cognition and language-like processing? Fodor gave a critique of neural network theory. Noam Chomsky: language use requires structural relations, not just association. Association means two things are co-active. "A loves B": all three nodes are active. There is a syntactic structure which causes a difference in meaning. Fodor says that neural networks do not have the syntactic relations which allow for predication.
* All they have are patterns of co-activation, which is a modern version of association.
* Associations cannot distinguish between different thoughts.
* The only way to get the network to function is to impose a formal system on it.

* It relies on language-like predication.

Logic is an instance of a formal system: if cognition is truth-preserving, it must follow logic, and so a computer can do it (because it follows a formal system). Truth preservation and predication come together to show that you have systematicity. Neural networks don't need to be systematic, in principle. A neural network can have a punctate mind: it can form one association and not form another. Our minds are systematic, but connectionism doesn't help to explain that. Fodor is winning the argument by stacking the deck: he's talking about adult human cognition. The point of cognitive science is to explain how we do science, but this doesn't cover all the other important aspects of our cognition that are not propositional: the procedural knowledge and intelligence we have. Fodor is right about something important, but in the big picture he is not correct. People integrate a computational process with a neural network. Deep learning and neural networks show correlations, but not explanations. Can association capture causation?
Plausibility: looking for converging lines of evidence. In psychology, dual-processing models are on the rise. A way of processing that seems like a neural network is called S1. Part of our brain is like a neural network, and part is like a formal system.
How do we know when we have what we want for cogsci? The simulation problem: when does simulation become instantiation? When do we have strong AI? What do we use as a standard for saying we've achieved strong AI? Simulation problem: when does the explanation transfer? An identity relationship means the explanation should transfer. Leibniz: the identity of indiscernibles. Two things are identical if they are indiscernible. Numerically identical: one and the same thing. Categorical identity: the same kind of thing, only separated by space and time. The Turing test: if someone cannot discern whether they are talking to a machine or a human, they are identical, and the explanation can transfer between them.
* This is a property of the test, not the machine. Why is it set up this way? Visual characteristics are irrelevant to whether or not something is a cognitive agent.
* What are the relevant factors? Sound familiar?
* The Turing test depends on our having an explicit or implicit theory of the relevant factors of comparison.
* We can run on intuition, folk psychology, or explicit psychology.
* The machine didn't pass "the" test; it passed "your" test, based on your own intuition.

* Heavily biased towards computational processing.
* The Turing test is not static! It changes as we do cognitive science.

New functionality means stronger logic, and you cannot get stronger logic from weaker logic!
* So qualitative development is impossible, and all of what Piaget talks about must be an illusion.

Response: only if you accept that cognition is computation. If you don't, networks can go through the stages that kids go through. If we move from formal systems to dynamic systems, then self-organization becomes important. When a system starts self-organizing, there is no distinction between development and functioning; they are deeply intertwined. Cascade-correlation models use an error function, and so are not unsupervised. We need an account of UNSUPERVISED learning to avoid a homuncular situation.
How do we know when we've gone from a simulation to an instantiation, from weak AI to strong AI? Fodor: if a human and a machine are identical, you can transfer explanations between them. But how do we determine that there is identity between the machine and the human? The Turing test: if we can't discern the difference between them, then it is justified to transfer an explanation from the machine to the organism. The Turing test is wrong, though! It's not just a test of the machine, but a test of the interpretative framework around the machine. (What does that mean? The Turing test does away with the "irrelevant" factors. It is not a comparison of everything; it compares only the relevant factors, not irrelevant ones like appearance. But how do we justify the factors we compare? We have two sources: our intuition, or explicit psychology. Our intuition is often wrong and misleading, as Fodor says (and as we have seen!). Instead, we should tie the Turing test to explicit psychology, which means the Turing test is not static! It is always changing and evolving. The standard Turing test is just conversation; now it's more "Can the machine play tag? Navigate a room?" etc.)
New stuff! If Fodor is right, then we need a strong discourse between people doing explicit psychology and strong AI. It doesn't matter what the machine can do; what matters is the justification of the transfer between the machine and the human.
Now we need to be clearer about how we would evaluate whether we have sufficient identity between the simulation and the human/organism. Until now, we've been talking about the KIND of identity (not complete identity); now it's HOW MUCH identity. Consider a professor and a recorded lecture: they say the same things, and you can infer the same things from what they say. However, the recording is not an instance of cognition. Why? The video cannot respond to questions and cannot interact; it does not understand new material, while a human can.
We are looking for identity not just in performance, but in competence. Performance is what you have done; competence is what you can do. The number of sentences we can produce and understand exceeds the number of sentences we ACTUALLY will produce and understand. We need to be able to explain what has been done and what is capable of being done: patterns that can explain any and all of what is possible for something or someone. (Slight digression: this is not just linguistics/psychology; this is the endeavour of science! Science attempts to offer predictive explanations; it tries to explain what can ever happen.)
Fodor says the machine and the person must have the same behavioural repertoire: for example, in the Turing test, you should be sure you're having an interaction that tests its competence, not just its performance. Many machines that pass the Turing test match performance, but do not necessarily get tested for linguistic competence. We should also test navigational competence, procedural competence, consciousness, what is salient, etc. We have this test, now what? We only have weak equivalence right now, and that's not what we need for strong AI: we need strong equivalence. Why is it weak? Like SHRDLU! It could do things by keeping track of every possible thing in its environment, but it would hit a combinatorial explosion.
Weak equivalence does not guarantee that the competence is being implemented in processes we could also use. SHRDLU uses a strategy that is not used by actual organisms. Theoretically, a machine the size of Africa searching through massive databases doesn't do it the way we do. It has identity of PRODUCT, but not of PROCESS. A machine can win Jeopardy with the internet, but it has the internet, so it's not like us. We need not just equivalence of product, but strong equivalence of product and process. It must use the same operations! (The Turing test is getting pretty tough now, isn't it?)
What now? We need good evidence for identity of operation. Fodor says we can't use a material test; functions are multiply realizable! Wood or metal can carry out the same operations to fly; if we look only at what something is made of, we can't know what it's capable of, because operations are multiply realizable. We can't identify identity of operation by identifying what it's operating on materially. So now what? How do we get identical operations? Strong equivalence means we need identity of operations, but the multiple realizability of operations blocks a material test. Fodor says: we use formal equivalence. THAT is what we are looking for. (It sneaks in the assumption that the mind is a formal system, though: the central assumption of GOFAI!!!) If we give this up, then we need to replace formal equivalence with something else. What do we offer in place of formal equivalence, to get the strong equivalence needed to run the Turing test? Until we get that, we're running the Turing test on our intuition. If we want justification, and we admit that some of our competence is not a formal system, we need an alternative to formal equivalence. (Not going to be answered yet!) Now, time passes. In 1968, Jerry Fodor (young Fodor!) says: the theory of mind is computational functionalism, and the way to pursue it is GOFAI.
In 1991, Jerry Fodor says: GOFAI is ill-founded, though not because it has the wrong picture of the mind. Physics is not trying to make a machine that's indistinguishable for the length of a conversation; Disneyland is not a major scientific achievement. By 1991, neural networks are contesting GOFAI. Now we turn to someone else to understand this argument. We don't consider Disneyland an achievement. Green says: we don't really question "Oh, is this a river?" We may be perceptually fooled, but why are we not scientifically fooled by other things? You see a river, you trace it back to the source, and there's a huge faucet. You don't then think "Is there a faucet at every river source?" It's ridiculous, but why? We can't transfer an explanation from the source of the river in Disneyland to the Earth. Disneyland does not have the ontology of the physical world: Disneyland was designed on purpose, while the Earth was not.
Our ability to transfer depends on whether there's a consensus about the criteria of the physical; we can distinguish fraud from the real because we have accepted criteria of the physical. The problem we have now is that there are no criteria of the cognitive. There is no consensus on the mark of the mental; all we have is our unchallenged intuition about whether something is cognitive or not. What we have are many different competing accounts of the criteria of the cognitive: some say cognition is inherently computational, some say it's representational, and people disagree. We don't know how we should answer the question of whether something is a cognitive agent, because we have no consensus around the fundamental ontology. (Ontology: the study of the structure of being.) GOFAI can't work, because the machine's identity cannot be settled. We don't have agreed-upon criteria of the cognitive. No matter what is proposed, there is an intellectual debate for the rejection of the proposal.
If you cannot justify the transfer of the explanation, you only have WEAK AI, not strong AI. Why don't we have criteria of the cognitive? Why is it so hard? See Chiappe and Kukla, "AI and Scientific Understanding." They begin with Plato: you do science by "carving nature at its joints." Analyze, formalize (a non-homuncular account of how it works), mechanize. Doing this for cognition is hard! In science, we look for generalizations. They talk about J.S. Mill: how do we categorize things such that we can do science? Systematic import! We can form a category of white things. It is a category we can make judgements about, but what else do its members share? Nothing! We can't do science on that categorization, because it does not support powerful generalization. Systematic import is when a category supports science by affording powerful generalization.
Now compare white things to horses: the category of horses has systematic import. A categorization has systematic import if it is homogeneous: members of the category are essentially the same. One way of understanding science is that science discovers categories that have essences. If a category does not have an essence, then there cannot be a science of it, just as there is no science of white things. You cannot sample a white thing and infer things about the rest of the category. Systematic import must also be stable across time, or else science cannot be done on it! The properties must also be intrinsic; they must belong to the object itself. For example, money is not intrinsic, because we give it its meaning: money is attributed. What is preventing us from analyzing, formalizing, and mechanizing cognition? The frame problem. It dissolves into two problems, but it really is the relevance problem. Relevance does not have systematic import; there is no essence of relevance.
There can be no science of relevance, so there can be no analyze/formalize/mechanize for cognition, so there can be no criteria of the cognitive, and so there can be no strong AI. Solution: maybe we don't need a theory of relevance! Maybe we need a theory of relevance realization. Organisms have a wide set of properties that let them survive. These are not stable! Creatures are fitted to their environment. If we understood the fittedness, we could understand what "God" was thinking (because religion). Darwin didn't come up with an essence for fittedness; he came up with a dynamical-systems account in which fittedness constantly changes in a self-organizing manner, where fittedness is based on previous instances of fittedness.
To establish criteria, you make a scale of judgement: you look for what's too strong, what's too weak, and then home in on the middle. The first point on the scale is T1: microworlds, like SHRDLU! Too weak. T2 is the Turing test: it just fails. Straight up fails. (Why? The symbol grounding problem! What is this? The problem of original meaning: the problem of getting meaning into the system in the first place. Imagine someone in a room: ideograms come in, and he's instructed to send out specific strings of ideograms. (The Chinese room argument!) This is like the Turing test! The symbols are not grounded in meaning: they float free.) The systems reply: the little man doesn't understand Chinese, but the room does! So have him memorize the book; now the system is just him, and he still has no idea what you're talking about. But you're missing the behaviour, the appropriate behaviour. However, just have him press buttons and do things to make a robot act. It keeps passing the Turing test, but there's no guarantee that you're not in a Chinese room situation. Passing the Turing test gives no guarantee that we've solved the problem of original meaning, the symbol grounding problem. The virtual machine reply! (GO OVER THIS.) AI presupposes multiple realizability. If cognition is multiply realizable, you cannot identify it chemically, physically, etc.
Science discovers things that have essences. Any AI is attributed, not intrinsic. T3: the Total Turing Test. Treating cognition as a formal system led us to formal equivalence. How do we turn simulation into explanation? Fodor slides into advocacy for "cognition is computation" (assuming the mind is a formal system!). Fodor proposes (later on) that GOFAI was not the correct method; he spoke about how we don't do science by mapping gross observable variants (Disneyland isn't a scientific achievement). We have no way to map the cognitive; we must "carve nature at its joints." Chiappe and Kukla argue that we run into the frame problem. The core problem is the relevance problem; our analyses keep failing because they turn into circular explanations, because we have homuncular presuppositions of relevance. Any attempt fails!
Maybe we can scale criteria appropriately: a scale of judgements. T1 is like SHRDLU; T2 is the Turing test, which fails because of the symbol grounding problem, the problem of original meaning: the Chinese room argument! Set up a system that is a formal system and passes the Turing test, but has no grasp of the meaning of the conversation taking place. Simply passing the Turing test does not mean that we have solved the problem of original meaning. The virtual machine reply: you condition the man to produce the behaviour; although the man does not understand Chinese, he is implementing a virtual machine which does! A virtual machine is multiply realizable (that's what makes it virtual). There cannot be a science of relevance. Multiply realizable systems cannot intrinsically exist; minds must be intrinsic! They cannot be attributed. Why? Because minds do the attributing. However, things like erosion are multiply realizable and intrinsically exist; they are self-organizing. Evolution is a process that changes itself: its function and development are inseparably intertwined. Let's take relevance realization as a criterion of the cognitive. T1: no good!
T2: no good! T3, the Total Turing Test: just right! The Total Turing Test: a machine has to be able to pass the Turing test across all domains in which humans are deemed cognitive agents, and must be able to do it for the lifetime of a human being, because development matters to function. Neural networks degrade gracefully. Anything that passes this will be a cognitive agent. He's right, but it's irrelevant that he is: if we had a device that passed the Total Turing Test, we would agree that we have a cognitive agent. "Buy low, sell high": but how do we know what's low? How do we know what's high? We need to know what cognition is and how it works, so that we can explain how we are general problem solvers. It's right, but it's useless.
T4: the neuromimetic level, targeting John Searle. (Searle advocates biological naturalism: the principles at work in biology are theoretically similar to what is at work in cognition.) You must have something that mimics the brain in order for it to have cognition. Neural networks? The similarity between neural networks and the brain is vague: there is no explanation of what in the brain the nodes stand for. Searle needs something stronger: only organic brains similar to ones we know can possess cognition; only things that are biochemically and organically similar can have cognition. Harnad argues against that: it's too strong! The brain has many causal properties: it works at a specific temperature and takes up a specific amount of space, and it's hard to see why those causal factors are relevant to being a cognitive entity. There are many causal features of the brain! Evolution can go many different ways, and there can be organisms with brains that work at different temperatures but are just as cognitive. T4 demands too much identity: T1 and T2 are too weak, T4 is too strong.
T5: quantum identity. Is consciousness a quantum effect? Humans are not quantum computers.
Quantum computation would be an instance of weak AI, because it would fail Fodor's criteria for strong AI. "Consciousness is strange, quantumness is strange, so they're related." A very bad argument! Other than both being weird and related in certain experiments (the double-slit experiment), there is no claim or evidence that our consciousness is comprehensively quantum. We need to come up with a scientific explanation of how we are general problem solvers. What now? We've hit the fundamental issues of consciousness, etc., so we go back to the philosophical questions proposed by Descartes: we go to pure philosophy. The role of philosophy of mind in Cognitive Science: how do we answer this? Go back to Descartes. Philosophers don't solve problems the normal way we do, and people think they're useless! Descartes argues between mind and matter: the mind has meaning, purpose, qualia, normativity, and truth. Matter has none of that, but has inertia and extension. These are opposite sets of properties: because they have opposite sets of properties, mind and matter are different things! (Cartesian dualism.) Mind stuff and matter stuff; two kinds of stuff. Now, why is this attractive? Advocating this means you support substance dualism: reality is divided into mind and matter. It gives insight into life and death. The part we want to go on is mental: our consciousness, etc. If our body is gone, it doesn't matter. (Go over Cartesian dualism.) Descartes believes that we are our soul, not our body. When you dream, you preserve your identity but you lose your body. Attractive because of bias! Is it a good argument? It is not deductively valid! We would have to list all properties of mind and matter; what if the next 50 properties are shared? The argument would only be good if the lists were exhaustively complete, and history has shown that they are not. However, it can be saved! 
What is not shared is respectively an essential property of mind and of matter; they are essentially different. However, we don't have an agreed-upon idea of the essence of mind. Inference to the best explanation: abduction. Most scientific explanations are inferences to the best explanation. "Intelligent design is the best explanation for life": for example, you see a watch on the beach. It was intentionally designed; it did not just "happen." Inference to the best explanation is not deductively valid; we just pick the best available explanation for the phenomenon. Darwin comes along and shows that everything can be explained with natural selection: it explains why there's apparent intelligent design, and also bad design. Inference to the best explanation depends on where you are in history: intelligent design is the best explanation until natural selection comes along. The argument is only as good as scientific history so far. (If you're before 1859, intelligent design is rational.) Descartes says mind/matter is what we want to explain. The Aristotelian world model doesn't work, because of the scientific revolution he's in. Hobbes' explanation doesn't work, because it wasn't good. Neoplatonism is scary, so it's rejected. Cartesian dualism wins; it beats everything out. How well does this theory go forward? It doesn't. What's holding it back? How do things in my mind cause changes in my body? How does thirst make me look for liquid? How do mind and body causally interact? Descartes proposes: the nervous system is hydraulic in nature, like pumps. Your mind and soul impact the system because the pineal gland affects the animal spirits. Animal spirits are a very rarefied kind of matter; this matter is what mediates between mind and brain. Objection: what does this do? It's worthless! It does nothing, because matter cannot interact with mind. Until the theory solves the problem of how mind causes bodily change and vice versa, it will die off. Malebranche tries to save it: occasionalism is what he proposes. 
Mind and body cannot interact, and the intermediary does not work. Malebranche proposes: A has a mental state, God sees the mental state and causes the physical change. What's the problem? God doesn't show up in experiments. He's using a hypothesis that is more controversial than the conclusion. The French give up, and Leibniz comes in. The identity of indiscernibles: Leibniz says having God in everything won't work, so he uses the point that we cannot distinguish between causation and correlation. We cannot tell whether two things are causally related or merely correlated. Pre-established harmony: it looks like mind and body are interacting, but there is only a powerful, bidirectional correlation. Mental and brain events are tightly correlated. Property dualism rises: the idea that properties of the whole are not properties of the parts. It is allied with, and justified in terms of, emergence, which comes in two forms: strong and weak. The argument (an argument by analogy): chlorine is deadly, sodium is deadly; put them together and you get salt! Emergence must be the case: the whole is different from the parts. Now, look at the brain. None of the brain bits have mental properties, but the whole has mental properties. The physical bits interact, and the mental properties emerge. The brain is organized in such a way that it has emergent properties. The mind cannot survive without the brain. A property dualist says that physical things can produce mental properties. Weak emergence says that only the physical can produce the physical. A property dualist thinks there are parts of reality which fall outside of physics. Elemental property dualism: metaphysical elements! Initially it was the four elements: earth, air, water, fire. These things are not made of anything else; air is basic! Problem with elemental property dualism: it implies panpsychism. What do we think about Descartes' understanding of cognition? First argument: "The properties of mind and matter are opposite to each other, and they belong to two different kinds of stuff." 
Cartesian/substance dualism. Not deductively valid, because it needs an exhaustive list of properties, which is impossible because the list changes over history. Descartes' conclusion of two separate kinds of stuff was the inference to the best explanation at the time. The mind-body problem throws a wrench in this, though: we have too much evidence of the brain causing mental events and vice versa. Descartes tried to solve this with the proposal of animal spirits (very weak!), the idea that there's a special kind of "transparent" matter. Didn't answer it! Malebranche looked to God, but God is too controversial (the premise is more controversial than the conclusion). Then Leibniz states that we cannot distinguish between causation and correlation: pre-established harmony. However, God comes into this too, so it doesn't work, and Cartesian dualism ultimately fails. Enter property dualism: we don't have a separate/separable substance called mind/soul that exists independently of the brain; we have mental properties that are not reducible to physical properties. Emergence: mental properties are emergent from the physical properties of the brain, the same way that the "wetness" of water emerges from hydrogen and oxygen. The mind is to the brain as the wetness of water is to hydrogen and oxygen. The wetness of water cannot exist independently of hydrogen and oxygen; it is an emergent property. But this relies on equivocation between weak and strong emergence! The ontology of physics goes well beyond mere "matter," which is why it's physicalism rather than materialism. Wetness is scientifically tractable because there is a physical explanation of it; this is what gives property dualism its apparent legitimacy. This then gets extended to the mind, but that is where it breaks down! The property dualist invokes strong emergence, on which there is no explanation of how the properties emerge, so the analogy does not hold. It only works by equivocation. 
One way out: propose elemental property dualism. Elemental properties are just as fundamental as physical ones, so there is nothing that could explain the relationship between mental and basic physical properties; don't worry about explaining the lowest-level mental properties. However, problems! Should information be considered? It implies panpsychism: you need mind-independent mental properties for the table; not the ones I experience, but the ones it would have if humans did not exist.
We need a new explanation of how the non-conscious, non-cognition-generating properties of the table are different from the non-conscious, non-cognitive properties of the brain. (Other arguments of Descartes, about original meaning etc., we'll come back to!) The "stupid" argument for dualism: I am aware of my mind; I am not aware of my brain; therefore the mind is not the brain. Why is this bad? Take this example: I am aware of Superman; I am not aware of Clark Kent; therefore Clark Kent is not Superman. The intentionalist fallacy (also the introspective fallacy) is invoked here. Dualism is done for now (but not Descartes!). Now: physicalism and mentalism! Reject dualism, pick up physicalism. It is loosely (and incorrectly) called materialism! Physicalism states that what really exists is the ontology given to us by physics. First, we need to look at a different position, which argues that the mind-brain problem should not be solved but rather dissolved: the task of the philosophy of mind is not to find a solution that explains the physical and the mental, but to see that the question is malformed. Wittgenstein started this: there is no essence to a "game," etc. He says many of our philosophical problems should not be solved, because they arise from violating the principles of how language is properly used. There are questions that shouldn't be answered but dissolved, because they break the rules of how we form questions. For example: what time is it on the sun? 
It doesn't make sense, because what time it is is relative to where you are on the Earth. When asking about the mind and the brain, we are making this kind of mistake. A "category mistake": using terms in the wrong way! When I'm moving, it's relative to the Earth; the Earth itself cannot be "moving" in that sense. We need to change the meaning of motion so it can apply to things like the Earth. The problem is that the status of the question is not clear. On one side, we should dissolve the question "What time is it on the sun?"; pseudo-questions like this make category mistakes. On the other side, there are cases where we instead change the meaning of terms. For example, the old meaning of "computer" was just a person who computes; we changed what is meant by a computer! Is the mind-brain question a pseudo-question, or do the definitions need to be changed? Ryle thinks the Cartesian position is generated by making a series of category mistakes. We have a verb: believe. We take it to refer to an action! The problem is, we don't see any action when people are believing. You don't seem to be doing anything, so it must be a "secret" action, in a "secret" place: a special place called the mind, where special unobservable actions are occurring. Ryle says we are not paying attention to how the word is used, and makes a distinction between dispositional (trait) and occurrent (state) terms. Occurrent processes are ones where you can tell me when/where they started; it makes sense to say they were interrupted, resumed, ended. Dispositions don't work like that! For example, salt is soluble in water. Salt is not dissolving in water all the time; solubility describes a conditional relationship: if you put salt in water, it will dissolve. Another example, with nouns: here's a substance, and it is poison. The poison on my shelf is not always poisoning; to say it is poison is to describe a condition: if someone ingests it, they will suffer harm/be poisoned. If ___, then ___. 
Conditional relationships! When we make the move from believe -> action -> secret action, we treat belief as occurrent; an occurrent process. However, belief is dispositional! You are not stating an occurrent process, you are stating a disposition; a multi-track disposition. Ryle says the whole idea that mental terms refer to secret processes comes from treating mental terms as if they are occurrent, when they are dispositional. All mental terms are just ways of talking about behaviour! Mental terms talk about dispositions. This is known as logical or philosophical behaviourism (the idea that there is no need to explain a relationship between mind and brain), because there is no set of properties called "mental"; that is a category mistake. All mental terms are ways of talking about our dispositions to behave. There is no mind-brain problem; Descartes just misled us! This argument presupposes that the language is unproblematic. Now, let's look at how this move is problematic. Belief itself is a problem; compare it to imagining! Problem 1: some of our cognitive processes are occurrent, so it's not wrong to look for what causes them. Problem 2: if I believe it's raining, what would I do? What are the if/thens for believing it's raining? They're unlimited! UNCONSTRAINED! Problem 3: when you take a mental term, you should be able to replace it/transform it into dispositional statements with no mental terms in them, a process called discharging. I believe it is raining, so I close the window. Why did I close the window? Because I do not want to get wet. Is "want" a physical or a mental term? Mental! I only close the window if I don't want to get wet AND I believe it is raining. To discharge "believe" you need "want"; to discharge "want" you need "believe." It doesn't get rid of mental terms! The super-Spartan / super-actor problem: a super-Spartan shows no change when in pain. A super-actor is the most convincing actor of all time; they can act as if they are in pain, or not, and can generate all the behaviour without having pain. 
The super-Spartan has the mental state of pain and generates no behaviour. The super-actor generates all the behaviour and has none of the state. You cannot talk about how your "brain" is behaving, because brains are not behavers; people are behavers. Double dissociation: you can have the state without the behaviour, and the behaviour without the state. When this occurs, it is good evidence that they are different! If x varies independently of y and y varies independently of x, there is strong evidence that x and y are different. So you cannot translate mental terms into behavioural terms. Another problem: perceptrons cannot invoke stimulus! Behaviourism has been disproved. Most people say that Ryle is partially right: right about belief, but the claim that the mind-brain problem is a pseudo-problem collapses. Philosophical behaviourism has been rejected. Next, central state materialism arose. People use a different term nowadays, though (but it's misleading!): the identity theory. Why is this problematic? Any position that identifies the two is some version of an identity theory; when philosophers use the term, they mean a specific proposal: mental properties are physical properties, we just don't realize it. Saying that lightning is a kind of electricity does not mean that all electrical events are lightning. What happened? We had a folk theory of lightning: a bright flash, causes thunder, hot, etc. This theory goes through theoretical reduction: the theory of electricity can explain everything the folk theory of lightning covered. Lightning can be completely explained as a kind of electricity. Likewise, we can think of heat as molecular motion: we can reduce heat and pressure, and explain the relationship between them, with the theory of molecular motion. The reducing theory explains a lot more besides. Theoretical reduction to the theory of electricity explains lightning, and a bunch of other stuff! Lightning is just a kind of electricity. 
Heat/pressure is just a kind of molecular motion. What am I not saying? That there's no such thing as lightning; that would be ridiculous! Non-physical folk theories can be completely reduced to physical theories. We also have a folk theory of mind: beliefs, memories, desires, all these terms! The claim is that this will be theoretically reduced to neuroscience: beliefs/memories/desires are just kinds of neurological processes. This does not mean that "love is just a neurochemical process, so love does not exist"; that's like saying lightning is just electricity, therefore lightning does not exist. An identity argument is an identity argument: if the thing you're identifying love with is real, then so is love, because love is identical with something that is real. Argument by analogy: a promissory theory. It promises that the folk theory of mind will be reduced to neuroscience, the way the folk theory of lightning was reduced to the theory of electricity. Evidence for it: we have mental states, but we evolved from organisms that didn't. Evolution is a physical process; a physical process acting on physical things should only result in a physical thing. Babies: they start non-mental, go through a physical process, and now are here. (Was a soul inserted? No! We rejected dualism.) A physical process acting on a physical thing should result in a physical thing. Next: the increasing success of neuroscience, explaining more and more through brain states. If you reject dualism and embrace physicalism, then you will think that in some sense the mind is a physical thing. Pain = C-fibres firing: equating a physical state and a mental state. What's the problem with this? These are theories of our subjective experience. Lightning is a subjective experience of electricity. Similarly, the mind is a subjective experience of the brain. The issue: the mind is supposed to be what is having the subjective experience of the brain. So who's having the subjective experience? Another mind! Mind+. What's going on? (Go over this! Around 2 hours in.) 
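The double-dissociation logic used against behaviourism above can be made concrete. This is a toy illustration (the function name and the data are invented for this sketch, not from the lecture): given observations of whether a mental state and its associated behaviour are present, a double dissociation holds when each occurs without the other.

```python
def double_dissociation(observations):
    """Given (has_state, has_behaviour) pairs, check for a double
    dissociation: the state occurs without the behaviour in some case,
    and the behaviour occurs without the state in another."""
    state_without_behaviour = any(s and not b for s, b in observations)
    behaviour_without_state = any(b and not s for s, b in observations)
    return state_without_behaviour and behaviour_without_state

# Super-Spartan: pain state, no pain behaviour -> (True, False)
# Super-actor: pain behaviour, no pain state   -> (False, True)
cases = [(True, False), (False, True)]
print(double_dissociation(cases))  # True: the two vary independently
```

When the check succeeds, the state and the behaviour vary independently, which is the evidence that mental terms cannot be translated into behavioural terms.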
Most people reject the identity theory, because the analogies are hard to make work, and because of the success of artificial intelligence. It's not true to say that love is just a biochemical event! Computational functionalism replaced it: mental states are computational states. Now, back to Descartes' arguments against Hobbes! The argument around qualia: secondary qualities exist just in the mind and not in the world. Arguments that computational functionalism cannot explain qualia: first, the inverted spectrum argument. Functionalism says a mental state is identical to the computational functions implementing it. Qualia? The subjective experience. Let's choose colour qualia: red and green are not frequencies of electromagnetism; the frequencies existed before redness and greenness existed. Imagine Fred and Susan: wherever Fred is experiencing greenness, Susan is experiencing redness, and vice versa. They think it's the same, though (her "green" is his red, her "red" is his green). They are functionally identical, but their qualia are exact opposites. Another: the absent qualia problem. In the middle of the night, government people do things to someone. They experience big pain, and it gets mapped out. Then they get all the people of the country and give them cards, 1 and 0: holding up a 1 means a neuron is firing, 0 means not firing, and how frequently you hold up a card is how frequently it fires. They perfectly replicate what is going on in the brain. Is there pain spread out across the country? No! But why? One system has qualia, the other doesn't. If two states are computationally equivalent but one has qualia and one doesn't, computational functionalism cannot explain qualia. The philosophical zombie is the problem. There is a phenomenon called blindsight: people have the subjective experience of blindness, with no visual qualia. You bring a bottle into the room; they say they don't know where it is, yet they retain a functional ability to respond. Function without experience. Deaf hearing is similar. Numb touch is similar. 
A philosophical zombie can do something without having conscious experience of it. Nagel: what is it like to be a bat? Bats echolocate. We can explain everything functionally, but we don't have the qualia of being a bat. (People have been trained to echolocate, and they have qualia: there's a sense of space in front of them.) Next problem: Mary has monochromatic vision (black and white). She studies colour vision until she becomes the world's authority on it and can explain it better than anyone else. Then, due to a medical intervention, she can suddenly see red! Mary now knows something she didn't know before: the qualia of red, distinct from knowing the functional explanation of red. These arguments open an explanatory gap, which makes the problem of qualia a hard problem: once we give the best functional explanation, we will not have explained qualia at all. Qualia make no functional difference; it's not just that we can't explain how they interact, it's that they can't possibly interact. If the qualia make no difference to Fred and Susan's behaviour, they have no causal functional role. Epiphenomenalism: the idea that qualia are always effects, but never causes. Like the noise a car makes: the noise is an effect of the motor, but has no causal or functional role; you cannot fix the car by working on the noise. An epiphenomenon is an effect that has no causal functional role. Qualia do nothing; they do not affect life, just as the noise of a car plays no role in how it operates. There is a tension between advocating the non-physicality of qualia and the consequence of epiphenomenalism: according to this argument, you shouldn't care if all of your qualia were gone, because they make no difference. Qualia are just hanging around, doing nothing. All these arguments require that someone has qualia, and that this is true. Problematic? People claim to know that they have qualia. But how do you know something that has no causal power? How can something exist in actuality (be in act) if it has no causal power? 
How can you know it if it has no causal power? It makes no sense! If to know something is to have qualia, do you know your qualia? Do you have qualia of your qualia? Dualism is bad, and physicalism isn't doing well either. Logical behaviourism is rough! The identity theory has been undermined by the progress of AI, and computational functionalism has the problems we've seen. When we put it all together, we get something as problematic as the problems themselves. Relevance realization is the criterion of the cognitive. We need an account of relevance realization: a dynamical-systems account, one that capitalizes on the biology of evolution. Dynamical systems: Juarrero's book, Dynamics in Action. Kant's model of causation: A causes B causes C. The independent variable causes change in the dependent, not vice versa. What causes A? God? But God cannot cause causation, because that makes no sense! He puts it aside, saying that's the main issue. This is good, because it prevents circular explanation: it prevents you from using the thing you want to explain in the explanation of it. Kant asks: what makes the leaves on the tree? The tree! Where does the tree get its energy from? From the leaves! The tree causes the leaves, which cause the tree... trees cause trees. He says living things are self-organizing: they demonstrate complex feedback loops, where A causes B causes C... causes A. Circular causation! Trying to trace that gives circular explanation, which is BAD. So Kant says there can never be a science of living things. That sounds wrong; biology is a science of living things! Look back at "A causes B causes C": that model is incomplete! (Also: being self-organizing is not sufficient for being a living thing.) What dynamical systems theory allows is tracing circular causation without falling into circular explanation. Someone kicks a ball. Why did it move? Because I kicked it? Yeah, but... it's also round, it's also on a flat surface, nothing is stopping it. 
The framework makes you zone in on one mindset. Causes cause events, which cause changes in actuality. But there are also constraints, which condition things and cause changes in possibility. Now, look at "actuality" and "reality": we treat them as synonymous! The opposite of actuality is... potential. Is potential real? On that mindset you must say no, and that is wrong: you must reject the mindset. An explanation that uses constraints is just as real as one that uses causes. For example, trees! Why do trees branch out horizontally? It increases the probability of catching sunlight. Events cause a structural-functional organization, which constrains events: it shapes the chances of future events happening. Complex pathways cause a structural-functional organization, which shifts the probability of things happening. Markedness: if someone is tall, you ask "how tall are they?"; even if someone is short, you still ask how tall they are, so one term covers the whole scale. Same deal with "constrained": it is the unmarked term. Darwin's theory of natural selection (evolution is not a new concept; evolution by natural selection is!). Reproduction is a feedback system: humans make more humans make more humans... If there is no reproduction, there is no evolution. There are two sets of constraints. Scarcity: scarcity kills off many options! Because of scarcity, the environment kills off forms of life. Also, variation: variation enables new options for the kinds of morphology creatures can have. If there is no variation, then things cannot evolve/change. What do we have? A feedback loop! Variation adds designs, selection kills them. "Evolve" is related to "revolve": the system is constantly changing. If you ask someone ___, then they will tell the truth: they are honest! Honesty is a virtue; virtue = disposition = natural. You need a virtual governor and a virtual generator. You don't need an intelligent designer, you just need a good dynamical systems theory. 
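The variation-selection feedback loop above ("variation adds designs, selection kills them") can be sketched in a few lines of code. This is a toy model, not anything from the lecture: each "design" is just a number, the `optimum` stands in for a hypothetical environmental demand, and the fixed population size plays the role of scarcity.

```python
import random

def evolve(generations=300, pop_size=50, optimum=50.0, seed=1):
    """Toy variation-selection loop. Variation adds new designs,
    scarcity (a fixed population size) culls them, and the survivors
    feed back into the next round."""
    random.seed(seed)
    pop = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: every design reproduces with a small random mutation.
        offspring = [x + random.gauss(0, 1.0) for x in pop]
        # Scarcity/selection: only the designs closest to the optimum survive.
        pool = pop + offspring
        pool.sort(key=lambda x: abs(x - optimum))
        pop = pool[:pop_size]
    return pop
```

No design exists anywhere in advance; the loop of adding options and culling them is what moves the population toward the optimum. That is the point about not needing an intelligent designer, only a feedback system.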
No theory of relevance: relevance has no systematic import. (You can cut things up into arbitrary categories: "white things," etc.) Relevance is not intrinsic, so you cannot have a theory of relevance. Compare Darwin: there is no theory of fittedness (of how creatures "fit"), because there is no essence/stability to fittedness. The representational level: Searle says representations are aspectual (you only ever see an aspect of a water bottle). An aspect depends on having a particular point of view, which implicates consciousness. Aspectuality relies on relevance realization! Relevance realization is below the level at which you work with representational meaning. Selective constraints: compression. Enabling constraints: particularization. How do you balance generalization and discrimination? Neurons firing together: data compression. Not firing together: openness to change. Self-organizing criticality: like a sandpile, avalanching down and then building back up, etc. This can be seen in the brain! Semantic information is syntactic information that a physical system has about its environment, which is causally necessary for the system to maintain itself. Import-ance: the system imports what it needs for itself. You need a self-organizing system that seeks out the conditions needed to maintain its own existence. Working memory: chunking! Working memory is a filter for how much has been processed for relevance; the more relevant something is, the more it's in working memory. General intelligence correlates with how well your working memory filters! Link between consciousness and intelligence: if things get more intelligent, will they be conscious? Integrated information theory: tree. You know you're conscious just by being conscious. Need demonstrative reference: it makes something salient!
Final: same format as 2nd exam!
10 short answer, choose 4,
4 long answer, choose 1.
The final can test material already covered on earlier tests (but the specific terms and specific questions from those tests will not recur).
Slight emphasis on new material, but not that much!
Add detail.
We have the overall argument; we need to ask why we think something. Then get the different sides of the argument.
Clearly state your thesis.
Be blunt! Communicate effectively. Be accessible. Always define big/ambiguous words, and only use them when necessary.
What was the point of this course?