
Does Federated Learning Preserve Data Privacy?

May 15, 2024, 9:00 – 10:00
Online event (活动行 Live)


Professor Baochun Li: An Empirical Look at Data Privacy Protection in Federated Learning

[Introduction]

Over the past five years, federated learning has swept through the research community, and one of its defining features is that it can protect data privacy. Some studies, however, have questioned this privacy-preserving capability, proposing gradient leakage attacks that reconstruct the training data. Yet experiments on the Plato platform, applying federated learning to image classification and natural language processing tasks, show that the claims behind gradient leakage attacks do not hold, and that data privacy in federated learning is in fact well protected.

For the fourteenth session of the IEEE TNSE Distinguished Lecture Series, we are honored to invite Professor Baochun Li to discuss the data privacy protection capabilities of federated learning and to share his research results and interesting findings in this area.

     

Executive Chair

Jianwei Huang
Presidential Chair Professor and Associate Vice President, The Chinese University of Hong Kong, Shenzhen
Vice President of AIRS and Director of the Center for Crowd Intelligence
Editor-in-Chief of IEEE TNSE
IEEE Fellow
AAIA Fellow

     



Speaker

Baochun Li
Professor, University of Toronto
Fellow of the Canadian Academy of Engineering
Fellow of the Engineering Institute of Canada
IEEE Fellow

     


     

      


     


    Title: Does Federated Learning Preserve Data Privacy?

     

As one of the practical paradigms that preserve data privacy when training a shared machine learning model in a decentralized fashion, federated learning has been studied extensively over the past five years. However, a substantial body of existing work in the literature has questioned its core claim of preserving data privacy, proposing gradient leakage attacks that reconstruct the raw data used for training. In this day and age of fine-tuning large language models, whether data privacy can be preserved is more important than ever.

     

In this talk, I will show that despite the conventional wisdom that federated learning poses privacy risks, data privacy may, in fact, be quite well protected. Claims in the existing literature on gradient leakage attacks did not hold in our experiments, for both image classification and natural language processing tasks. Our extensive array of experiments was based on Plato, an open-source framework that I developed from scratch for reproducible benchmarking comparisons in federated learning.
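For readers unfamiliar with the attack being evaluated, the following is a minimal, hypothetical sketch of a DLG-style gradient leakage attack: the attacker optimizes dummy inputs and labels so that their gradients match the gradients a client would share with the server. The toy model, data, and hyperparameters are illustrative assumptions only, and do not reflect the talk's experimental setup or Plato's implementation.

```python
# Minimal DLG-style gradient leakage sketch (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy client model and one "private" training example the attacker never sees.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 4))
loss_fn = nn.CrossEntropyLoss()
x_true = torch.randn(1, 4, 4)
y_true = torch.tensor([2])

# Gradients the client would upload to the server in federated learning.
true_grads = [g.detach() for g in
              torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())]

# Attacker's dummy input and soft labels, optimized to reproduce those gradients.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    pred = model(x_dummy)
    # Cross-entropy with soft (learnable) labels, kept differentiable.
    dummy_loss = torch.sum(torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Match dummy gradients to the shared gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

# If the attack succeeds, x_dummy approaches x_true; the talk examines
# whether such reconstruction actually holds up in realistic settings.
print("Reconstruction error:", (x_dummy - x_true).norm().item())
```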

     

Bio: Baochun Li received his B.Engr. degree from the Department of Computer Science and Technology, Tsinghua University, China, in 1995, and his M.S. and Ph.D. degrees from the Department of Computer Science, University of Illinois at Urbana-Champaign, in 1997 and 2000, respectively. Since 2000, he has been with the Department of Electrical and Computer Engineering at the University of Toronto, where he is currently a Professor. He has held the Bell Canada Endowed Chair in Computer Engineering since August 2005. His current research interests include cloud computing, security and privacy, distributed machine learning, federated learning, and networking.

     

Dr. Li has co-authored more than 470 research papers, with a total of over 25,000 citations, an H-index of 88, and an i10-index of 338, according to Google Scholar Citations. He was the recipient of the IEEE Communications Society Leonard G. Abraham Award in the Field of Communications Systems in 2000, the Multimedia Communications Best Paper Award from the IEEE Communications Society in 2009, the University of Toronto McLean Award in 2009, the Best Paper Award from IEEE INFOCOM in 2023, and the IEEE INFOCOM Achievement Award in 2024. He is a Fellow of the Canadian Academy of Engineering, a Fellow of the Engineering Institute of Canada, and a Fellow of the IEEE.

     


