 
AAAI-2020
February 17, 2020, New York, USA
Fireside Chat with Daniel Kahneman | AAAI 2019

About the talk


00:20 Daniel Kahneman intro

01:00 Theory and experiments

03:20 Pre-linguistic system

05:30 Mathematical computations

07:35 Linguistic abilities of the system

10:20 Turing test

11:40 Psychologist methods

13:00 Common sense

18:30 Different tasks

24:20 Logical base

27:30 Machine learning

About the speaker

Daniel Kahneman
Professor of Psychology and Public Affairs Emeritus at Princeton School of Public and International Affairs

Daniel Kahneman is Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem. Dr. Kahneman held positions as professor of psychology at the Hebrew University in Jerusalem (1970-1978), the University of British Columbia (1978-1986), and the University of California, Berkeley (1986-1994). Dr. Kahneman is a member of the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences, and a fellow of the American Psychological Association, the American Psychological Society, the Society of Experimental Psychologists, and the Econometric Society. He has received many awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association (1982) and the Grawemeyer Prize (2002), both jointly with Amos Tversky, the Warren Medal of the Society of Experimental Psychologists (1995), the Hilgard Award for Career Contributions to General Psychology (1995), the Nobel Prize in Economic Sciences (2002), the Lifetime Contribution Award of the American Psychological Association (2007), and the Presidential Medal of Freedom (2013). Dr. Kahneman holds honorary degrees from numerous universities.


This session is about the relationship between machine and human decision-making, and it is a discussion with Professor Daniel Kahneman. He hardly needs an introduction, but I'll do a little bit anyway. Danny is a psychologist, very well known for his work on the psychology of human judgment and decision-making, and he was awarded the 2002 Nobel Prize in Economic Sciences for his work on human errors arising from heuristics and biases. His

2011 book Thinking, Fast and Slow, which you see here, describes and summarizes his theory and many of the experiments that support it. He is very interested in AI and in machine decision-making as well, not just human decision-making — so much so that in one of our discussions he told me that, if he were a young student again, he would go and study AI. He is really an amazing and great inspiration, as you may also have seen from the talks of the Turing Award winners that we had yesterday.

We need his insights to build advanced AI machines that can be human-like. The aim of this informal discussion is to hear his opinions and his thoughts about AI and machine decision-making, together with Yoshua, Geoff and — possibly, here he comes — Yann. We will try to get as much information as possible from Danny to advance AI. So I was just asking Danny to briefly summarize the main ingredients of his theory.

Well, the first thing I should say about System 1 and System 2 is that they just happen to be the first hundred pages of the book, which is about the only part most people paid attention to. A few comments about the way the terms have been used: last night was the first time that I heard them used in the context of AI, and there was one remark — I think it was Yoshua who made it — that System 1 is pre-linguistic. That surprised me, because as far as I'm concerned System 1 certainly knows and understands language.

The most interesting characteristic of System 1, as far as I'm concerned, is that it has a representation of the world, and that representation generates expectations — not a specific prediction of what is going to happen next. You really have no idea what I'm going to say next, but what is happening is that when I say something, usually you are not going to be surprised. It is this reconciliation of what happens with an existing model, an existing representation, which is something miraculous that System 1 does all the time, continuously — when we're surprised and when we're not surprised. For me that is the difference between the two.

There was something else that I noted yesterday. For me the main difference would be implicit versus explicit — and implicit, it's obvious what it is — but the distinction that was suggested, something like the manipulation of symbols, sounded to me, you know, a bit naive in terms of introspection. What System 2 is specifically set up to do, and what I think System 1 doesn't do, is serial, sequential operations: operations where you have to move through steps. You can move fairly fast, but there are distinct steps that you've got to move through — in logic and reasoning, in mathematical computations of some kind. So those would be the differences for a psychologist, and it wasn't exactly the way the two systems were being used here. I am a psychologist, but there's another psychologist who has thought about this more than I have,

and I'm influenced by him. What I know is that over the last couple of years there has been a kind of back-and-forth between psychology and AI, as the recent debate between Yoshua and Gary Marcus clearly indicated. And Gary proposes something that seems very natural in terms of what I have said, which is a sort of hybrid architecture — which I know is not popular here; this is not what people here are trying to do.

But if you're looking introspectively at the way the mind works — in the same way that by introspection you say some things are automatic and some are not, System 1 versus System 2 — you would get to that distinction between explicit and implicit. And explicit, then, looks like symbols.

Thank you. So, Yoshua, how would you respond — would you agree with this definition, implicit rather than conscious? And what about the language point?

Yesterday I probably didn't have time to express my thoughts on this as much as I would have liked. I totally agree that System 1 is a very powerful and important part of our linguistic abilities, but I'd say you need both of them together for the kind of linguistic abilities that humans have. And I think Geoff talked yesterday about some of the mistakes that current state-of-the-art Transformers make in natural language processing, mistakes which give the impression that the current systems understand the world in a very incomplete way. One aspect is that they're not grounded in perception, but there may also be something else, which I tend to associate with System 2: when we use language in a way that we can reason with, and are able to use it in completely novel situations, I have the impression that this belongs more to the System 2 part.

Like everybody else, I'm terribly impressed by things like Google Translate, and I've been looking at some of the outputs of Transformers. What strikes me about these systems is that they don't know what they're talking about. That is a really significant requirement for me. As far as I'm concerned, they sound like language — this is something that somebody could say — but what are they saying? What do they know about the world? It seems to me that all the progress on language processing is at the surface of things.

For a system to know what it's talking about — that, I think, is a profoundly important question. It would clearly have to have common sense; it would know something about the world. I think it's an interesting question for everybody to ask themselves, as a sort of new version of the Turing test: what would it take for a computer to convince you that it knows what it's talking about? Right now we are training language models using only text, and there's a lot of knowledge about the world which we got by being in the world — interacting and perceiving — and in a sense language is just, you know, the labels.

It used to be the case that if you asked how we could tell whether it knows what it's talking about, we'd say: if it can say the same thing in a different language. Then systems did that, and people said, oh, that doesn't count anymore. So then we said, well, maybe it knows what it's talking about if you can show it a picture and it can describe what's in the picture. But that doesn't seem to count anymore either. So there is a slight worry that whenever you come up with something, someone else says, well, it doesn't really know — it's just doing behavioral things; it passed your test, but it doesn't really experience it.

A good example of this: before the Transformer revolution, the best performance of computer systems on Winograd schemas was about 60%, where chance is 50%; now it's about 90%. That doesn't mean these systems have a human level of common sense, but there is enough data in the language to lift ambiguities that, you know, people would have thought required common sense. That said, a lot of people are also working on grounded language learning — there are efforts in several places to create virtual environments, so you can have agents operate in them and perhaps use language to describe what they're doing, or listen to what others are doing. Facebook has such a project, called Habitat, and the hope is to ground language into some sort of reality.
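
As an aside for readers, here is a hedged illustration — a toy example of my own, not one used in the talk — of what a Winograd schema item looks like and how the accuracy figures above are computed: two near-identical sentences whose pronoun resolves to a different noun, scored against two candidate referents.

```python
# Hypothetical Winograd-schema item (illustrative only; not from the talk).
schema = {
    "sentence_1": "The trophy doesn't fit in the suitcase because it is too big.",
    "answer_1": "the trophy",
    "sentence_2": "The trophy doesn't fit in the suitcase because it is too small.",
    "answer_2": "the suitcase",
    "candidates": ["the trophy", "the suitcase"],
}

def accuracy(predictions, answers):
    """Fraction of pronouns resolved to the correct candidate.
    With two candidates, guessing gives about 50%."""
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

# A system that always picks "the trophy" gets exactly chance on this pair.
print(accuracy(["the trophy", "the trophy"],
               [schema["answer_1"], schema["answer_2"]]))  # -> 0.5
```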

But the bandwidth of the information you get through all the other perceptual modalities is much, much higher than what you get from text. So how do you learn, only through language, all the intuitive physics — that if I put this thing on the table and let it go, it's going to fall? Compare that with the situation where I have a sort of physics simulation engine in my head that I learned by observation. That, I think, is the crux of common sense.

I was wondering about that, asking myself the same question: how would I know whether it knows what it's talking about? It's hard for me to imagine a device without a sensorium, without some way of connecting to the world of reality. So if it's only words, it's going to be difficult — and there I agree with Geoff — to distinguish. But even if it is only words, what you would really want — and that's common sense — are the inferences that people normally can make about the world, and that they can make because they know the world. You would want those inferences to be made, and — I don't know what test we would have to construct — does it answer that way because it knows the world, or just because it has a rote answer? It's very much like rote learning versus learning with understanding: we know that there is a difference.

I want to come back on the word "understanding", because it might be a bit misleading. Understanding is not black and white. The current models do understand a lot of things about our world, and as we build better models they are going to have better common sense, because they will have a better model of the world. Children understand some things and don't understand other things, so we know there are grade levels. And when people say "oh, your system doesn't understand", what they really mean is that it doesn't understand this particular piece of knowledge about the world. So we're making progress, and to me it's going to be gradual.

Let me introduce another aspect that I noticed. As far as I understand, in your theory you showed that in our decision-making there are some tasks that we start addressing and solving with our System 2 capabilities, and then, after a while, we are familiar enough with them that we pass them to System 1. What do you think about this dynamic aspect — I don't know if you discussed it in the book?

Everything is explicit in driving at first, and then with experience the explicit becomes implicit, and that frees resources so that you can drive and talk at the same time, which you cannot do early on. Whether that is an essential aspect of cognition that should be reproduced in human-level AI, or whether it is simply that it takes people time to learn something that a machine would learn much faster, I don't know. I'm not sure that skill acquisition of that kind, with the move from explicit to implicit, is really essential.

I think there is a fundamental reason for it, and in my words it would be that it is much easier to create new combinations of words — and also of concepts — in the System 2 world, and then, once we have put this in place, we can compile it into something automatic in System 1. For System 1 to have jumped to this new behavior without the System 2 part would have taken a lot more practice. That, I think, is the reason. Perhaps the reason we need to devote our attention to what we are doing when we learn to drive, and cannot do anything else at the same time, is that we have only one System 2 engine, and we have to devote it to whatever task is at hand. Then, once the skill is compiled into System 1, it becomes part of the subconscious and we can talk while doing it. The talk yesterday was pretty much of that type: you have a world simulator — the way the lights and the cars around you behave — and from it you can predict the consequences of different actions. You can take a System 2-like approach, where you find an optimal path by optimization, but then, in a sense, you use that to train a reactive policy that knows what it has to do. That's more like System 1.
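
The plan-then-compile idea just described can be sketched in code. The following is a hedged toy illustration of my own construction — not the system discussed in the talk — in which a slow "System 2" planner searches action sequences through a known world model, and a cheap reactive "System 1" policy is then fit to imitate the planner's decisions.

```python
# Toy sketch: distil a model-based planner into a reactive policy.
# All names (world_model, plan, reactive_policy) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    """Toy known dynamics: a 1-D position nudged by a bounded action."""
    return state + 0.1 * np.clip(action, -1.0, 1.0)

def plan(state, target, horizon=5, candidates=200):
    """'System 2': sample candidate action sequences, roll them through the
    world model, and return the first action of the best sequence."""
    seqs = rng.uniform(-1.0, 1.0, size=(candidates, horizon))
    final = np.empty(candidates)
    for i, seq in enumerate(seqs):
        s = state
        for a in seq:
            s = world_model(s, a)
        final[i] = s
    return seqs[np.argmin((final - target) ** 2), 0]

# Use the slow planner to label states, then fit a cheap imitation of it.
target = 1.0
states = rng.uniform(-2.0, 2.0, size=200)
planner_actions = np.array([plan(s, target) for s in states])
w, b = np.polyfit(states, planner_actions, 1)  # linear "compiled" policy

def reactive_policy(state):
    """'System 1': a fast, compiled approximation of the planner."""
    return float(np.clip(w * state + b, -1.0, 1.0))

print("planner action at s=-1.5: ", plan(-1.5, target))
print("reactive action at s=-1.5:", reactive_policy(-1.5))
```

Once fit, the reactive policy answers in one cheap evaluation what the planner answers only by re-running many rollouts, which is the "compiled into System 1" intuition.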

Can I make a comment about System 1 and System 2? I have a very different picture, which is this. If you have a big parallel network like the brain, there are things you can do in one operation — in a few hundred milliseconds — and that one operation is like a huge instruction; it is not a primitive operation, it is incredibly powerful. But in the end you get to tasks that are so difficult that even this great big network cannot do them in one operation like that, so you break the task up into several operations. And typically what that involves is re-mapping bits of the task onto the same neural hardware. Take how we think about the universe: there is the universe, then there are clusters of galaxies, then those galaxies, and it goes all the way down to protons and electrons and quarks. When we think about that, what we have to do is use the same hardware to think about structures at all these levels; we just map reality onto the hardware in different ways so that we can cope with it. So what I think is happening is that as soon as you get something complicated, you have to attack it in several different chunks, and you use the same hardware. We have these very powerful operations — you can see things intuitively in one step — but when things get more complicated you have to use them several times, and then there's the problem of how you decide to use them several times. That is the kind of problem System 1 can solve: seeing a plan, for example — seeing a plan for how to do something — is itself a primitive operation. So for me System 1 and System 2 are quite closely related to what you can do in parallel and what you have to do sequentially.

And about symbols: the idea that because there are symbols on the input and symbols on the output, there have to be symbols in between, is a bit like the idea that because we know what pixels are and we know how to make a picture out of pixels, what is inside must be like pixels. People who do symbol manipulation on paper know what symbols are, and they can see the operations on the symbols when they do them on paper, so they conclude that is what these huge internal operations must be. Now, Danny actually had a very good counter-argument, which is the limited buffer, and I don't know what to do about that.

What you describe is completely compatible, I think, with my view of System 2. The question is what happens at those junctures when you're passing something — when you're turning something from parallel to serial, because serial is part of the definition: System 1 is massively parallel, and System 2 has to operate serially. So those descriptions are not all that far apart.

Subjectively, I would think it does feel, when you are operating in steps — although you may be highly skilled and not constantly referring back to the program — that what you are operating on feels like a symbol: a representation of what you did in the previous computation. I would think so, too.

This is related to what Geoff said: there are a lot of things that on the surface look like symbol manipulation but internally may not be, and that was one of the points I was making yesterday as well. Is it possible to replace the advantages we get from symbols and logic by continuous functions, so that we can make it compatible with the gradient-based learning we are running with? I think that's really the big question, and in the end it may not look at all like good old-fashioned symbol manipulation — which is why some people, who insist it has to be classical symbolic computation, won't be happy if it looks like something else. So, maybe to push back a little on my friends here: one of my students has been very successful, and has proposed the idea that discretization in the middle of the System 2 sequence might be useful, because it would facilitate communication between the different parts that have to exchange information, by reducing the number of choices — essentially, the things that can be communicated. It makes those pieces of computation, those modules, more plug-and-play, more interchangeable. It's going to be harder for one module to have some weird co-adaptation with a module downstream if it has to go through a bottleneck that is more discrete in nature than if it can send arbitrary vectors. At least that's the argument, and I find it interesting.
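
As a hedged sketch of the discretization-bottleneck idea just described — an illustrative toy of my own, not the student's actual model — the snippet below snaps a module's continuous output to the nearest entry of a small codebook before passing it downstream, so only a limited vocabulary of messages can flow between modules.

```python
# Toy discrete bottleneck between two modules (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 allowed "messages", each a 4-d vector

def discretize(h):
    """Replace a continuous activation h with its nearest codebook entry,
    so the downstream module only ever sees one of 8 possible messages."""
    distances = np.linalg.norm(codebook - h, axis=1)
    return codebook[np.argmin(distances)]

upstream_output = rng.normal(size=4)      # arbitrary continuous activation
message = discretize(upstream_output)     # bottlenecked, discrete message
print("continuous: ", np.round(upstream_output, 2))
print("discretized:", np.round(message, 2))
```

In a trainable system one would also need some way to pass gradients through the rounding step (for example a straight-through estimator); the sketch deliberately omits that.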

Yes. Another thing is that a lot of symbolic reasoning is logic-based, and logic can also give you, say, a hundred percent correctness on some tasks — on solving some tasks. Is that what you think symbols can give you?

I mean, many problems are much too complex for that — you cannot describe them exactly with symbols and symbolic manipulation. Then some people would say: for the problems where you can, why would you want to do it in a different way that may require many more resources and more computation? If you need more flexibility, yes — and I think that's why part of this community has decided to look at the problem differently, where you are not going to find the symbolic manipulation that you put in showing up at the output.

So would you say it is something that can be approached instead with methods that are not symbolic?

It might be. That kind of formal reasoning with symbols is one of the very last things we learn to do. I think you get to be a teenager before you can do formal reasoning with symbols — or maybe you have to start being a teenager. The big factor in human reasoning isn't symbols; it's the checking of millions of soft constraints, and that's a lot of what common sense is. This is all work by Phil Johnson-Laird: give me something where you have to do a little bit of reasoning in the abstract, and I can't do it, or I make mistakes; make it concrete, and it's no problem at all. And the essence of a symbol is that it has no content — it only has identity, or non-identity, with another symbol — whereas the content is all of these things going on at once, and I think that's a large fraction of what common sense is.

Our System 1, if I understand well, can deal well with causality — from a few pieces of information it can build a whole story, right? — whereas machine learning and deep learning do not deal well with causality yet. So that is a different situation from the machine-learning-as-System-1, symbols-as-System-2 picture.

Causality is clearly something that babies are born with, and in development it becomes explicit in the first few months. They are born with two kinds of causality: physical causality, and intention.

Intention, it would seem, is really a fundamental part of human intelligence. So before you achieve artificial human-level intelligence, you certainly would expect that problem to be cracked, and it's interesting that it seems to be a primitive — it seems to be a given that you start from at the very beginning of the development of intelligence.

So you have a model of the world and you're trying to figure out how you can affect the world to obtain a particular result — this is what Pearl and others have worked on. And then there is the problem of discovering causal relationships from data, or from interventions — though sometimes interventions are not possible, and in certain cases that is where machine learning can help; you need a learning engine of some kind to establish such a relationship. There's a lot of work in this area; among people in the machine learning community who work on this, Bernhard Schölkopf and some of his students in particular, and others, have been trying to understand what's going on.
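
To make the data-versus-intervention distinction above concrete, here is a hedged toy simulation — my own illustration, not an example given in the talk — in which a hidden confounder makes X and Y strongly correlated in observational data even though X has no causal effect on Y, and an intervention that sets X at random reveals this.

```python
# Observation vs. intervention on a confounded pair (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: Z -> X and Z -> Y, but no X -> Y effect.
z = rng.normal(size=n)
x_obs = z + 0.1 * rng.normal(size=n)
y_obs = z + 0.1 * rng.normal(size=n)

# Interventional world: do(X = random); Y is unchanged by X.
x_do = rng.normal(size=n)
y_do = z + 0.1 * rng.normal(size=n)

print("observational corr(X, Y): ", round(np.corrcoef(x_obs, y_obs)[0, 1], 2))  # ~0.99
print("interventional corr(X, Y):", round(np.corrcoef(x_do, y_do)[0, 1], 2))    # ~0.0
```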

One thing humans are not particularly good at, and machines could be, is discovering causes from observation, or even from intervention. If we were so good at it, people would not attribute causes to things that don't really have any cause — we do that for natural events; in fact, probably all of religion is pretty much that way, right? So we're not that good at it. So the question is: do we need an explicit mechanism for discovering causality, or is it just a natural consequence of having a causal inference engine?

I just want to quickly mention some work that we're doing in my group — some of it is published — where we use meta-learning and neural nets to do causal discovery: to figure out what the causes are, and which causes what. I think this is just the beginning of being able to use gradient-based methods — System 1, if you like — in order to discover causal structure through learning.
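
As a hedged, highly simplified sketch of the intuition behind that line of work — not the group's actual method — the snippet below shows why "which factorization has less to re-learn after a distribution shift" can signal causal direction: when the ground truth is X → Y and only the cause's marginal p(X) shifts, the conditional p(Y|X) stays stable while p(X|Y) changes substantially.

```python
# Toy illustration of a causal-direction signal under distribution shift.
import numpy as np

rng = np.random.default_rng(0)

def sample(p_x1, n=200_000):
    """Ground-truth mechanism X -> Y: X ~ Bernoulli(p_x1), Y copies X with prob 0.8."""
    x = (rng.random(n) < p_x1).astype(int)
    y = np.where(rng.random(n) < 0.8, x, 1 - x)
    return x, y

def conditionals(x, y):
    """Estimate p(Y=1|X=1) (causal direction) and p(X=1|Y=1) (anti-causal)."""
    return y[x == 1].mean(), x[y == 1].mean()

before = conditionals(*sample(p_x1=0.9))  # training distribution
after = conditionals(*sample(p_x1=0.1))   # shifted distribution: only p(X) changed

print("p(Y=1|X=1) before/after:", round(before[0], 2), round(after[0], 2))  # stable (~0.8)
print("p(X=1|Y=1) before/after:", round(before[1], 2), round(after[1], 2))  # changes a lot
```

A model factored the causal way only has to re-estimate the small shifted piece p(X), which is the sense in which gradient-based adaptation speed can point toward the causal structure.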

I think I have one question, if you can clarify: when you're using System 2 sequentially, is each individual step of System 2 a call to System 1?

I would think so — they are not completely separate. Clearly the meaning is in System 1, but there is a shortcut that you're manipulating. That sounds very compatible.

Okay. I think we could continue for hours here, exchanging ideas and getting more inspiration about how to build advanced AI systems, but we will stop here, as we have another talk. I really would like to thank the Turing Award winners for coming back this morning to discuss with Danny, and Danny for being here with us. Thank you very much.
