International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with IJCAI 2020 (FL-IJCAI'20)


Submission Due: May 10, 2020
Notification Due: June 15, 2020

Workshop Date: Friday, January 08, 2021 (08:00 - 13:00 UTC)
Venue (New): Blue Wing-North 4 (VirtualChair Gathertown)

Workshop Program

Time (UTC) Activity
07:45 – 08:00 Presenters to connect and test the system
08:00 – 08:05 Opening Remarks
08:05 – 08:35 Keynote Session 1: Incentives for Federated Learning (Video Recording), by Boi Faltings (Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland)
08:35 – 10:20 Technical Talks Session 1 (7 talks, 15 mins each including Q&A)
  1. Lingjuan Lyu, Han Yu and Qiang Yang. Threats to Federated Learning: A Survey
  2. Zhaoxiong Yang, Shuihai Hu and Kai Chen. FPGA-Based Hardware Accelerator of Homomorphic Encryption for Efficient Federated Learning
  3. Ce Ju, Ruihui Zhao, Jichao Sun, Xiguang Wei, Bo Zhao, Yang Liu, Hongshan Li, Tianjian Chen, Xinwei Zhang, Dashan Gao, Ben Tan, Han Yu and Yuan Jin. Privacy-Preserving Technology to Help Millions of People: Federated Prediction Model for Stroke Prevention
  4. Yan Kang, Liu Yang and Tianjian Chen. FedMVT: Semi-supervised Vertical Federated Learning with Multi-View Training
  5. Lingjuan Lyu, Xinyi Xu and Qian Wang. Collaborative Fairness in Federated Learning
  6. Yang Liu, Xiong Zhang and Libin Wang. Asymmetrical Vertical Federated Learning
  7. Anna Bogdanova, Akie Nakai, Yukihiko Okada, Akira Imakura and Tetsuya Sakurai. Federated Learning System without Model Sharing through Integration of Dimensional Reduced Data Representations
10:20 – 10:30 Break (Presenters should connect and test the system)
10:30 – 12:15 Technical Talks Session 2 (7 talks, 15 mins each including Q&A)
  1. Depeng Xu, Shuhan Yuan and Xintao Wu. Achieving Differential Privacy in Vertically Partitioned Multiparty Learning
  2. Dipankar Sarkar, Ankur Narang and Sumit Rai. Fed-Focal Loss for Imbalanced Data Classification in Federated Learning
  3. Xu Guo, Pengwei Xing, Siwei Feng, Boyang Li and Chunyan Miao. Federated Learning with Diversified Preference for Humor Recognition
  4. Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen and Zhiqi Shen. FOCUS: Dealing with Label Quality Disparity in Federated Learning
  5. Dashan Gao, Ben Tan, Ce Ju, Vincent Zheng and Qiang Yang. Privacy Threats Against Federated Matrix Factorization
  6. Lixuan Yang, Cedric Beliard and Dario Rossi. Heterogeneous Data-Aware Federated Learning
  7. Shubham Bhatia and Durga Toshniwal. TF-SProD: Time Fading based Sensitive Pattern Hiding in Progressive Data
12:15 – 12:45 Keynote Session 2: Scalable and Heterogeneity-Aware Federated Learning, by Yiran Chen (Duke University, USA)
12:45 – 13:00 Closing and Award Ceremony

Keynote Session 1: Incentives for Federated Learning, by Boi Faltings (EPFL, Switzerland)

Abstract: When participation in federated learning is voluntary, data is likely to be biased because participants self-select to satisfy ulterior motives. This phenomenon, well known from social media, reviews and other data collections, can be avoided by providing incentives that replace these ulterior motives. The challenge is that such incentives should reward only truthful and accurate data. I will survey techniques for truthful information elicitation and show the challenges in applying them to the federated learning setting. I will show how incentives based on influence are suitable for federated learning and conclude with open issues for further research.

Biography: Boi Faltings is a full professor of computer science at the Ecole Polytechnique Fédérale de Lausanne (EPFL), where he heads the Artificial Intelligence Laboratory, and has held visiting positions at the NEC Research Institute, Stanford University and the Hong Kong University of Science and Technology. He has co-founded 6 companies using AI for e-commerce and computer security and has acted as an advisor to several other companies. Prof. Faltings has published over 300 refereed papers and graduated over 38 Ph.D. students, several of whom have won national and international awards. He is a fellow of the European Coordinating Committee for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI).

Keynote Session 2: Scalable and Heterogeneity-Aware Federated Learning, by Yiran Chen (Duke University, USA)

Abstract: Federated learning has become a popular choice for deploying on-device deep learning applications. However, the data residing across devices is intrinsically statistically heterogeneous (i.e., it follows a non-IID distribution), and mobile devices usually have limited communication bandwidth for transferring local updates. This statistical heterogeneity and limited communication efficiency are two major bottlenecks that hinder applying federated learning in practice. I will survey prior art, present our proposed techniques for addressing these challenges, and discuss open issues for further research.

Biography: Yiran Chen received his B.S. and M.S. degrees from Tsinghua University and his Ph.D. from Purdue University in 2005. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor and was promoted to Associate Professor with tenure in 2014, holding the Bicentennial Alumni Faculty Fellowship. He is now a Professor in the Department of Electrical and Computer Engineering at Duke University, where he serves as the director of the NSF Industry–University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC) and the co-director of the Duke Center for Computational Evolutionary Intelligence (CEI), focusing on research into new memory and storage systems, machine learning and neuromorphic computing, and embedded and mobile computing systems. Dr. Chen has published one book and more than 400 technical publications and has been granted 96 US patents. He serves or has served as an associate editor of several IEEE and ACM transactions/journals and has served on the technical and organization committees of more than 60 international conferences. He is now serving as the Editor-in-Chief of the IEEE Circuits and Systems Magazine. He has received 7 best paper awards, 1 best poster award, and 14 best paper nominations from international conferences and workshops. He is a recipient of the NSF CAREER Award, the ACM SIGDA Outstanding New Faculty Award, the Humboldt Research Fellowship for Experienced Researchers, and the IEEE SYSC/CEDA TCCPS Mid-Career Award. He is a Fellow of the IEEE, a Distinguished Member of the ACM, and a Distinguished Lecturer of IEEE CEDA.

    Awards
    • Best Paper: Lingjuan Lyu, Xinyi Xu and Qian Wang. Collaborative Fairness in Federated Learning
    • Best Student Paper: Zhaoxiong Yang, Shuihai Hu and Kai Chen. FPGA-Based Hardware Accelerator of Homomorphic Encryption for Efficient Federated Learning
    • Best Application Paper: Xu Guo, Pengwei Xing, Siwei Feng, Boyang Li and Chunyan Miao. Federated Learning with Diversified Preference for Humor Recognition

    Accepted Papers

    1. Lingjuan Lyu, Han Yu and Qiang Yang. Threats to Federated Learning: A Survey
    2. Zhaoxiong Yang, Shuihai Hu and Kai Chen. FPGA-Based Hardware Accelerator of Homomorphic Encryption for Efficient Federated Learning
    3. Ce Ju, Ruihui Zhao, Jichao Sun, Xiguang Wei, Bo Zhao, Yang Liu, Hongshan Li, Tianjian Chen, Xinwei Zhang, Dashan Gao, Ben Tan, Han Yu and Yuan Jin. Privacy-Preserving Technology to Help Millions of People: Federated Prediction Model for Stroke Prevention
    4. Yan Kang, Liu Yang and Tianjian Chen. FedMVT: Semi-supervised Vertical Federated Learning with Multi-View Training
    5. Lingjuan Lyu, Xinyi Xu and Qian Wang. Collaborative Fairness in Federated Learning
    6. Depeng Xu, Shuhan Yuan and Xintao Wu. Achieving Differential Privacy in Vertically Partitioned Multiparty Learning
    7. Dipankar Sarkar, Ankur Narang and Sumit Rai. Fed-Focal Loss for Imbalanced Data Classification in Federated Learning
    8. Xu Guo, Pengwei Xing, Siwei Feng, Boyang Li and Chunyan Miao. Federated Learning with Diversified Preference for Humor Recognition
    9. Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen and Zhiqi Shen. FOCUS: Dealing with Label Quality Disparity in Federated Learning
    10. Dashan Gao, Ben Tan, Ce Ju, Vincent Zheng and Qiang Yang. Privacy Threats Against Federated Matrix Factorization
    11. Lixuan Yang, Cedric Beliard and Dario Rossi. Heterogeneous Data-Aware Federated Learning
    12. Yang Liu, Xiong Zhang and Libin Wang. Asymmetrical Vertical Federated Learning
    13. Anna Bogdanova, Akie Nakai, Yukihiko Okada, Akira Imakura and Tetsuya Sakurai. Federated Learning System without Model Sharing through Integration of Dimensional Reduced Data Representations
    14. Shubham Bhatia and Durga Toshniwal. TF-SProD: Time Fading based Sensitive Pattern Hiding in Progressive Data

    Call for Papers

    Privacy and security are becoming key concerns in our digital age. Companies and organizations collect a wealth of data on a daily basis. Data owners have to be very cautious when exploiting the value in their data, since the data most useful for machine learning often tend to be confidential. Increasingly strict data privacy regulations, such as the European Union's General Data Protection Regulation (GDPR), bring new legislative challenges to the big data and artificial intelligence (AI) community. Many operations in the big data domain, such as merging user data from various sources to build an AI model, will be considered illegal under the new regulatory framework if they are performed without explicit user authorization. More resources about federated learning can be found here.

    In order to explore how the AI research community can adapt to this new regulatory reality, we are organizing this one-day workshop in conjunction with the 29th International Joint Conference on Artificial Intelligence (IJCAI-20). The workshop will focus on machine learning systems that adhere to privacy-preserving and security principles. Technical issues include, but are not limited to, data collection, integration, training and modelling, in both the centralized and the distributed setting. The workshop intends to provide a forum to discuss open problems and to share the most recent and ground-breaking work on the study and application of secure and privacy-preserving machine learning. Both theoretical and application-based contributions are welcome. The FL series of workshops seeks to explore new ideas, with a particular focus on addressing the following challenges:

    • Security and Regulation Compliance: How can security and compliance requirements be met? Does the solution ensure data privacy and model security?
    • Collaboration and Expansion: Does the solution connect business partners from different parties and industries? Does it exploit and extend the value of data while observing user privacy and data security?
    • Promotion and Empowerment: Is the solution sustainable and intelligent? Does it include incentive mechanisms that encourage parties to participate on a continuous basis? Does it promote a stable, win-win business ecosystem?

    We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the workshop. At least one author of each accepted paper is expected to present it at the workshop. Topics include, but are not limited to:

    Techniques

    1. Adversarial learning, data poisoning, adversarial examples, adversarial robustness, black box attacks
    2. Architecture and privacy-preserving learning protocols
    3. Federated learning and distributed privacy-preserving algorithms
    4. Human-in-the-loop for privacy-aware machine learning
    5. Incentive mechanism and game theory
    6. Privacy aware knowledge driven federated learning
    7. Privacy-preserving techniques (secure multi-party computation, homomorphic encryption, secret sharing techniques, differential privacy) for machine learning
    8. Responsible, explainable and interpretable AI
    9. Security for privacy
    10. Trade-off between privacy and efficiency

    Applications

    1. Approaches to make AI GDPR-compliant
    2. Crowd intelligence
    3. Data value and economics of data federation
    4. Open-source frameworks for distributed learning
    5. Safety and security assessment of AI solutions
    6. Solutions to data security and small-data challenges in industries
    7. Standards of data privacy and security

    Position, perspective, and vision papers are also welcome.

    Special Benchmarking Track
    In addition, the workshop will also encourage researchers to demonstrate and test their ideas based on a set of benchmark datasets (https://dataset.fedai.org/#/). To this end, the special benchmarking track calls for submissions that evaluate the proposed methods using the benchmark datasets. If your submission uses the aforementioned datasets for experimental evaluation, please select option (B) or (C) from the "Submission Details" dropdown list.

    For enquiries, please email flijcai20@easychair.org.

    Submission Instructions

    Submissions should be between 4 and 7 pages long, following the IJCAI-20 template. Formatting guidelines, including LaTeX styles and a Word template, can be found at: https://www.ijcai.org/authors_kit. We do not accept submissions of work currently under review. Submissions should include author details, as we do not carry out blind review.

    Submission link: https://easychair.org/conferences/?conf=flijcai20

    Join the IEEE P3652.1 Federated Machine Learning Working Group

    Federated learning defines a machine learning framework that allows a collective model to be constructed from data that is distributed across data owners. This guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including 1) description and definition of federated learning, 2) the types of federated learning and the application scenarios to which each type applies, 3) performance evaluation of federated learning and 4) associated regulatory requirements. More information can be found here.
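    The collective-model idea described above can be illustrated with a minimal federated averaging sketch (FedAvg, in the spirit of McMahan et al., 2017). The toy one-parameter model, learning rate, and function names below are illustrative assumptions for this page, not part of the IEEE P3652.1 guide: each data owner refines the global model on its private data, and the server averages the resulting models weighted by local dataset size, so raw data never leaves its owner.

    ```python
    # Minimal federated averaging sketch: fit y = w * x collaboratively.
    # Only model weights travel between owners and the server, never data.

    def local_update(w, data, lr=0.1, epochs=5):
        """A data owner refines the global weight on its private (x, y) pairs."""
        for _ in range(epochs):
            for x, y in data:
                grad = 2 * (w * x - y) * x  # gradient of squared error (w*x - y)^2
                w -= lr * grad
        return w

    def fedavg_round(w_global, datasets):
        """Server averages local weights, weighted by local dataset size."""
        updates = [(local_update(w_global, d), len(d)) for d in datasets]
        total = sum(n for _, n in updates)
        return sum(w * n for w, n in updates) / total

    # Two owners whose private data follows the same underlying rule y = 3x.
    owners = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
    w = 0.0
    for _ in range(20):
        w = fedavg_round(w, owners)
    # After a few rounds the shared weight w approaches 3.0.
    ```

    Real deployments replace the single weight with a model parameter vector and add the privacy-preserving machinery (secure aggregation, differential privacy, homomorphic encryption) that the workshop papers above study, but the communication pattern is the same.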

    If you are interested in joining this working group, please contact Ms Ya-Ching Lu at angelica.lv@clustar.ai.

    Organizing Committee

    • Steering Committee Chair:
      • Qiang Yang (Hong Kong University of Science and Technology / WeBank, China)
    • General Co-Chairs:
    • Program Co-Chairs:
      • Han Yu (Nanyang Technological University, Singapore)
      • Yiran Chen (Duke University, USA)
    • Local Arrangements Co-Chairs:
      • Kilho Shin (Gakushuin University, Japan)
      • Takayuki Ito (Nagoya Institute of Technology, Japan)
      • Tianyu Zhang (WeBank, China)
    • Special Track Co-Chairs:
      • Bingsheng He (National University of Singapore, Singapore)
      • Di Jiang (WeBank, China)
      • Yang Liu (WeBank, China)
    • Publicity Co-Chairs:
      • Boyang Li (Nanyang Technological University, Singapore)
      • Lingjuan Lyu (National University of Singapore, Singapore)
    • Web Chair:
      • Jun Lin (Joint SDU-NTU Centre for AI Research (C-FAIR))

    Program Committee

    • Aleksei Triastcyn (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
    • Anit Kumar Sahu (Bosch Center for Artificial Intelligence, Germany)
    • Aurélien Bellet (Inria, France)
    • Bao Wang (University of California, USA)
    • Boi Faltings (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
    • Chaoyang He (University of Southern California, USA)
    • Daniel Peterson (Oracle Labs, USA)
    • Dimitrios Papadopoulos (The Hong Kong University of Science and Technology, Hong Kong)
    • Fabio Casati (Servicenow, USA)
    • Guodong Long (University of Technology Sydney, Australia)
    • Jalaj Upadhyay (Apple, USA)
    • Jianshu Weng (AI Singapore, Singapore)
    • Jianyu Wang (Carnegie Mellon University, USA)
    • Jun Zhao (Nanyang Technological University, Singapore)
    • Konstantin Mishchenko (King Abdullah University of Science and Technology, Saudi Arabia)
    • Leye Wang (Peking University, China)
    • Lifeng Sun (Tsinghua University, China)
    • Mingshu Cong (The University of Hong Kong, Hong Kong)
    • Nguyen Tran (The University of Sydney, Australia)
    • Pallika Kanani (Oracle Labs, USA)
    • Paul Pu Liang (Carnegie Mellon University, USA)
    • Pengwei Xing (Nanyang Technological University, Singapore)
    • Peter Richtarik (King Abdullah University of Science and Technology, Saudi Arabia / University of Edinburgh, UK)
    • Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
    • Rui-Xiao Zhang (Tsinghua University, China)
    • Seong Joon Oh (Clova AI Research, LINE Plus Corp., South Korea)
    • Sewoong Oh (University of Illinois at Urbana-Champaign, USA)
    • Shiqiang Wang (IBM, USA)
    • Tianchi Huang (Tsinghua University, China)
    • Tribhuvanesh Orekondy (Max Planck Institute for Informatics, Germany)
    • Virendra Marathe (Oracle Labs, USA)
    • Xi Weng (Peking University, China)
    • Xin Yao (Tsinghua University, China)
    • Xu Guo (Nanyang Technological University, Singapore)
    • Yan Kang (WeBank, China)
    • Yang Zhang (CISPA Helmholtz Center for Information Security, Germany)
    • Yihan Jiang (University of Washington, USA)
    • Yiqiang Chen (Institute of Computing Technology, Chinese Academy of Sciences, China)
    • Yongxin Tong (Beihang University, China)
    • Zelei Liu (Nanyang Technological University, Singapore)
    • Zheng Xu (University of Maryland, USA)
    • Zhicong Liang (The Hong Kong University of Science and Technology, Hong Kong)
    • Zichen Chen (Nanyang Technological University, Singapore)
    • Ziyin Liu (The University of Tokyo, Japan)
