Google AI Director Jeff Dean Talks About AutoML, Bias, and Autonomous Weapons


This article is produced by Netease Smart Studio (public account: smartman163), focused on AI and reading into the next big era!

Netease Smart News, May 12. About a month ago, Google search and artificial intelligence executive John Giannandrea stepped down, and part of his AI research responsibilities were taken over by Jeff Dean, head of the Google Brain research group. Google Research also recently renamed itself Google AI, at a time when AI continues to spread across Google's product lines and services.

On Tuesday, as Google introduced dozens of new features and updates at its I/O conference, Jeff Dean shared his vision for the future. The new features and updates include ML Kit for mobile application developers, third-generation Tensor Processing Unit chips, and a Google Assistant feature that can place phone calls on users' behalf.

Against this backdrop, Dean explained the opportunities that artificial intelligence brings: creating new products and finding solutions to problems people had never considered before. He also sees challenges accompanying AutoML, a technique in which an AI model creates other AI models. And he does not believe Google should be in the business of building autonomous weapons.

The rise of artificial general intelligence (AGI)

Today, most artificial intelligence in the world supports only a single, narrow use case, such as translating one sentence at a time. But Dean said he hopes Google can create AI models that perform multiple tasks and achieve "common-sense reasoning about the world."

He said: "I think that in the future, you will see us shifting more toward models that can do many things, and that then use those experiences so that when we want to train a model to do something else, it can build on the skills and expertise it already has."

For example, if a robot is asked to pick something up, it should understand how a hand works, how gravity works, and other things about how the world operates. He said: "I think this will be an important trend that you will see in the next few years."

AutoML bias and opacity challenges

AutoML is an AI system that can create other AI systems. Is that exciting or frightening? The answer depends on whom you ask. Google Cloud chief scientist Fei-Fei Li has said that AutoML lowers the barrier to entry by making customized artificial intelligence models available to everyone, from high-end developers to a ramen shop owner in Tokyo.

Dean thinks this is exciting because it helps Google "automatically solve problems," but applying AutoML also brings its own unique problems.
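To make the idea of AutoML concrete, here is a heavily simplified sketch, not Google's actual system, which uses reinforcement-learning-based neural architecture search. The sketch stands in for that idea with a random search: a controller samples candidate model configurations from a search space and keeps the one that scores best on a validation objective. The search space and `evaluate` function below are hypothetical placeholders.

```python
import random

# Toy stand-in for AutoML-style search: instead of hand-picking a
# model configuration, a controller samples candidates and keeps
# the one that scores best on a validation objective.
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "dropout": [0.0, 0.2, 0.5],
}

def sample_config(rng):
    # Draw one candidate configuration from the search space.
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def evaluate(config):
    # Hypothetical objective: in a real system this step would train
    # the candidate model and return its validation accuracy, which
    # is what makes AutoML so computationally expensive.
    return (config["units"] / 128
            - config["dropout"] * 0.1
            + config["layers"] * 0.01)

def automl_search(trials=20, seed=0):
    rng = random.Random(seed)
    # Keep the best-scoring candidate seen across all trials.
    return max((sample_config(rng) for _ in range(trials)), key=evaluate)

best = automl_search()
print(best)
```

Real systems replace the random sampler with a learned controller and the toy objective with full training runs, but the loop structure, propose a model, score it, keep the best, is the same.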

He said: "Because we are now using learned systems rather than traditionally hand-coded software, I think that brings us many challenges. One of them is that if you learn from biased data, the machine learning models trained on it will perpetuate those biases. So a lot of the work we are doing, along with other work in the machine learning field, is about figuring out how to train unbiased machine learning models."
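Dean's point about biased data can be made measurable. A minimal sketch (not Google's actual fairness tooling) of one common check, the demographic parity gap, which compares a model's positive-prediction rate across groups; a large gap can indicate that the model has absorbed bias from its training data:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups. predictions are 0/1 model outputs;
    groups are the corresponding group labels."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: a model approving group "a" at a 75% rate
# but group "b" at only 25%, a gap of 0.5.
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # prints 0.5
```

Demographic parity is only one of several competing fairness criteria (others compare error rates rather than prediction rates), which is part of why training "unbiased" models remains an open research problem.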

Another challenge is how to properly use AutoML to design safety-critical systems, so that artificial intelligence can be applied in industries such as healthcare. Decades of computer science practice have established how to build and verify hand-coded systems; machine learning now needs equivalent practices.

Dean said that misclassifying a dog breed is one thing; making a mistake in a safety-critical system is something else entirely. He said: "I think this is a very interesting and important direction for us, especially as we start to apply machine learning in more safety-critical systems, including systems that make decisions about your healthcare, or self-driving cars."

Safety-critical AI requires more transparency

On Tuesday, alongside news of new Google Assistant features and the Android P beta release, Google CEO Sundar Pichai talked about how Google will use artificial intelligence in healthcare, for example predicting patient readmission based on information from electronic health records. In addition, Google researchers published an article in the journal Nature Digital Medicine explaining why its AI makes certain decisions about patients, so that doctors can see the reasoning behind referrals in medical records.

In the future, Dean hopes that a developer or doctor who wants to know why an AI made a specific decision will be able to get an answer simply by asking the model.

Dean said that today, using artificial intelligence in Google products requires an internal review process, and Google is currently developing a set of guidelines for assessing whether AI models contain bias. He said: "What you basically want is that, just as new product features get a security review or a privacy review, you want an ML fairness review as part of integrating machine learning into our products."

Dean said that when developers implement artificial intelligence through tools such as ML Kit or TensorFlow, which has been downloaded more than 13 million times, humans should remain part of the decision-making process.

Drawing a line on artificial intelligence weapons

In answering a question, Dean stated that he does not believe Google should be in the business of building autonomous weapons. In March of this year, it was reported that Google is working with the U.S. Department of Defense to improve the analysis of drone-collected video. He said: "As society as a whole starts to develop more powerful technologies, I think machine learning and artificial intelligence will raise many interesting ethical questions."

"I personally signed a letter (an open letter, roughly six to nine months ago; the exact date is unclear) expressing my opposition to the use of machine learning in autonomous weapons," Dean said. "Obviously, there is a continuum of decisions we want to make as individuals and as a company. I think most people are uneasy about autonomous weapon systems." According to a New York Times report, thousands of Google employees have signed a letter asking Google not to participate in creating "warfare technology," because it could do irreparable damage to Google's brand and to the trust between the company and the public. Dean did not specify whether he signed the letter mentioned in the New York Times report.

Artificial Intelligence Drives New Projects and Products

In addition to AI for predicting patient readmission and Morse code support in Gboard, Pichai also highlighted previously published AI research that detects diabetic retinopathy as accurately as a highly trained ophthalmologist and can predict the emergence of problems. AI models have begun to go beyond merely imitating human activity, and they are helping Google discover new products and services.

Dean said: "By training these models on large amounts of data, we can actually do things we didn't know we could do. That is a very fundamental advance. We are now creating entirely new experiences and products, not just using artificial intelligence to assist existing systems."

(Selected from: VentureBeat. Compiled by: NetEase Smart. Contributor: nariii)

