

              The Insecurity of Machine Learning: Problems and Solutions

              Time: Oct 23, 2019


              Title:

              The Insecurity of Machine Learning: Problems and Solutions

              Lecturer:

              Adi Shamir

              Time:

              2019-10-23 14:20:00

              Venue:

              Room 210, Office Building, South Campus

              Lecturer Profile

              Professor Adi Shamir is a world-renowned cryptographer, a professor at the Weizmann Institute of Science in Israel, a foreign member of the U.S. National Academy of Sciences, and one of the founders of modern cryptography. In 2002 he shared the 37th Turing Award with R.L. Rivest and L.M. Adleman. Professor Shamir has made outstanding contributions to cryptography: together with R.L. Rivest and L.M. Adleman he designed the famous RSA public-key cryptosystem; he was the first to propose identity-based cryptosystems and threshold signature schemes; and he was the first to break the Merkle-Hellman knapsack cryptosystem, alongside pioneering security analyses of RSA.

              In addition, he has produced a number of original results on side-channel attacks, the cryptanalysis of multivariate public-key systems, and symmetric cryptanalysis. Professor Shamir has won the Israel Prize (Israel's national award), the Paris Kanellakis Theory and Practice Award, the Erdős Prize, the IEEE W.R.G. Baker Award, the UAP Scientific Prize, the Pius XI Gold Medal, and the IEEE Koji Kobayashi Computers and Communications Award.
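
              (The threshold idea credited to Professor Shamir above is easiest to see in his secret-sharing scheme. Below is a minimal Python sketch over a toy prime field, with made-up parameters, in which any t of n shares reconstruct the secret; it is an illustration of the concept, not production code.)

              # Minimal sketch of Shamir's (t, n) threshold secret sharing over a
              # toy prime field. Illustration only: real systems use large primes
              # and cryptographically secure randomness, not the random module.
              import random

              P = 2**13 - 1  # small Mersenne prime defining the field GF(P)

              def make_shares(secret, t, n):
                  """Split `secret` into n shares; any t of them reconstruct it."""
                  # Random polynomial of degree t-1 whose constant term is the secret.
                  coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
                  return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                          for x in range(1, n + 1)]

              def reconstruct(shares):
                  """Lagrange interpolation at x = 0 recovers the constant term."""
                  secret = 0
                  for j, (xj, yj) in enumerate(shares):
                      num = den = 1
                      for m, (xm, _) in enumerate(shares):
                          if m != j:
                              num = num * (-xm) % P
                              den = den * (xj - xm) % P
                      secret = (secret + yj * num * pow(den, P - 2, P)) % P
                  return secret

              shares = make_shares(1234, t=3, n=5)
              print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 1234

              Any two shares, by contrast, reveal nothing about the secret; that gap between t-1 and t shares is the point of a threshold scheme.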

              Lecture Abstract

              The development of deep neural networks in the last decade has revolutionized machine learning and led to major improvements in the precision with which we can perform many computational tasks. However, the discovery five years ago of adversarial examples, in which tiny changes in the input can fool well-trained neural networks, makes it difficult to trust such results when the input can be manipulated by an adversary. This problem has many applications and implications in object recognition, autonomous driving, cyber security, etc., but it is still far from being understood. In particular, there have been no convincing explanations why such adversarial examples exist, and which parameters determine the number of input coordinates one has to change in order to mislead the network. In this talk I will describe a simple mathematical framework which enables us to think about this problem from a fresh perspective, turning the existence of adversarial examples in deep neural networks from a baffling phenomenon into an unavoidable consequence of the geometry of R^n under the Hamming distance, which can be quantitatively analyzed.
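
              (To make the phenomenon concrete, here is a standard toy illustration in Python, not the framework from the talk: for a linear classifier in d dimensions, a perturbation of only eps per coordinate shifts the score by eps × d, so in high dimension a visually negligible change flips the decision. The model, dimension, and budget below are invented for the example.)

              # Toy illustration of adversarial examples on a linear classifier,
              # using an FGSM-style sign perturbation; not the talk's framework.
              import numpy as np

              rng = np.random.default_rng(0)
              d = 1000                          # high input dimension, as in images
              w = rng.choice([-1.0, 1.0], d)    # weights of a "trained" linear model
              x = rng.normal(0.0, 1.0, d)       # an input the model classifies

              def score(v):
                  return w @ v                  # sign(score) is the predicted class

              eps = 0.1                         # tiny per-coordinate budget
              # Push every coordinate by eps against the current decision.
              x_adv = x - eps * np.sign(w) * np.sign(score(x))

              # |score(x)| is about sqrt(d) ~ 31, but the shift is eps * d = 100,
              # so the sign (the predicted class) flips.
              print(score(x), score(x_adv))

              The same arithmetic hints at the question the abstract raises: how many coordinates must change, and by how much, is governed by the geometry of the input space.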
