International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with IJCAI 2020 (FL-IJCAI'20)


Submission Due: May 10, 2020 (23:59 UTC-12) (extended from April 26, 2020)
Notification Due: June 15, 2020 (23:59 UTC-12) (extended from May 24, 2020)

Workshop Date: January 5-10, 2021 (tentative)
Venue: Kyoto, Japan
(with online meeting contingency plan)

Accepted Papers

  1. Lingjuan Lyu, Han Yu and Qiang Yang. Threats to Federated Learning: A Survey
  2. Zhaoxiong Yang, Shuihai Hu and Kai Chen. FPGA-Based Hardware Accelerator of Homomorphic Encryption for Efficient Federated Learning
  3. Ce Ju, Ruihui Zhao, Jichao Sun, Xiguang Wei, Bo Zhao, Yang Liu, Hongshan Li, Tianjian Chen, Xinwei Zhang, Dashan Gao, Ben Tan, Han Yu and Yuan Jin. Privacy-Preserving Technology to Help Millions of People: Federated Prediction Model for Stroke Prevention
  4. Yan Kang, Liu Yang and Tianjian Chen. FedMVT: Semi-supervised Vertical Federated Learning with Multi-View Training
  5. Lingjuan Lyu, Xinyi Xu and Qian Wang. Collaborative Fairness in Federated Learning
  6. Depeng Xu, Shuhan Yuan and Xintao Wu. Achieving Differential Privacy in Vertically Partitioned Multiparty Learning
  7. Dipankar Sarkar, Ankur Narang and Sumit Rai. Fed-Focal Loss for imbalanced data classification in Federated Learning
  8. Xu Guo, Pengwei Xing, Siwei Feng, Boyang Li and Chunyan Miao. Federated Learning with Diversified Preference for Humor Recognition
  9. Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen and Zhiqi Shen. FOCUS: Dealing with Label Quality Disparity in Federated Learning
  10. Dashan Gao, Ben Tan, Ce Ju, Vincent Zheng and Qiang Yang. Privacy Threats Against Federated Matrix Factorization
  11. Lixuan Yang, Cedric Beliard and Dario Rossi. Heterogeneous Data-Aware Federated Learning
  12. Yang Liu, Xiong Zhang and Libin Wang. Asymmetrical Vertical Federated Learning
  13. Anna Bogdanova, Akie Nakai, Yukihiko Okada, Akira Imakura and Tetsuya Sakurai. Federated Learning System without Model Sharing through Integration of Dimensional Reduced Data Representations
  14. Shubham Bhatia and Durga Toshniwal. TF-SProD: Time Fading based Sensitive Pattern Hiding in Progressive Data

Call for Papers

Privacy and security have become key concerns in our digital age. Companies and organizations collect a wealth of data on a daily basis. Data owners must be cautious when exploiting the value in their data, since the data most useful for machine learning often tends to be confidential. Increasingly strict data privacy regulations, such as the European Union's General Data Protection Regulation (GDPR), bring new legislative challenges to the big data and artificial intelligence (AI) community. Many operations common in the big data domain, such as merging user data from various sources to build an AI model, are considered illegal under the new regulatory framework if they are performed without explicit user authorization.

To explore how the AI research community can adapt to this new regulatory reality, we are organizing this one-day workshop in conjunction with the 29th International Joint Conference on Artificial Intelligence (IJCAI-20). The workshop will focus on machine learning systems that adhere to privacy-preserving and security principles. Technical issues include, but are not limited to, data collection, integration, training, and modelling, in both the centralized and distributed settings. The workshop intends to provide a forum to discuss open problems and to share the most recent and ground-breaking work on the study and application of secure, privacy-preserving machine learning. Both theoretical and application-based contributions are welcome. The FL series of workshops seeks to explore new ideas, with a particular focus on the following challenges:

  • Security and Regulation Compliance: How can security and compliance requirements be met? Does the solution ensure data privacy and model security?
  • Collaboration and Expansion Solution: Does the solution connect different business partners from various parties and industries? Does the solution exploit and extend the value of data while observing user privacy and data security?
  • Promotion & Empowerment: Is the solution sustainable and intelligent? Does it include incentive mechanisms to encourage parties to participate on a continuous basis? Does it promote a stable and win-win business ecosystem?

We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the workshop; at least one author of each accepted paper is expected to present it at the workshop. Topics include, but are not limited to:

Techniques

  1. Adversarial learning, data poisoning, adversarial examples, adversarial robustness, black box attacks
  2. Architecture and privacy-preserving learning protocols
  3. Federated learning and distributed privacy-preserving algorithms
  4. Human-in-the-loop for privacy-aware machine learning
  5. Incentive mechanism and game theory
  6. Privacy aware knowledge driven federated learning
  7. Privacy-preserving techniques (secure multi-party computation, homomorphic encryption, secret sharing techniques, differential privacy) for machine learning
  8. Responsible, explainable and interpretable AI
  9. Security for privacy
  10. Trade-off between privacy and efficiency

Applications

  1. Approaches to make AI GDPR-compliant
  2. Crowd intelligence
  3. Data value and economics of data federation
  4. Open-source frameworks for distributed learning
  5. Safety and security assessment of AI solutions
  6. Solutions to data security and small-data challenges in industries
  7. Standards of data privacy and security

Position, perspective, and vision papers are also welcome.

Special Benchmarking Track
In addition, the workshop will also encourage researchers to demonstrate and test their ideas based on a set of benchmark datasets (https://dataset.fedai.org/#/). To this end, the special benchmarking track calls for submissions that evaluate the proposed methods using the benchmark datasets. If your submission uses the aforementioned datasets for experimental evaluation, please select option (B) or (C) from the "Submission Details" dropdown list.

For enquiries, please email flijcai20@easychair.org.

Submission Instructions

Submissions should be between 4 and 7 pages and follow the IJCAI-20 template. Formatting guidelines, including LaTeX styles and a Word template, can be found at: https://www.ijcai.org/authors_kit. We do not accept submissions of work currently under review. Submissions should include author details, as we do not carry out blind review.

Submission link: https://easychair.org/conferences/?conf=flijcai20

Join the IEEE P3652.1 Federated Machine Learning Working Group

Federated learning defines a machine learning framework that allows a collective model to be constructed from data distributed across data owners. The IEEE P3652.1 guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security, and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including: 1) a description and definition of federated learning; 2) the types of federated learning and the application scenarios to which each type applies; 3) performance evaluation of federated learning; and 4) associated regulatory requirements.
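To make the framework concrete, the sketch below illustrates federated averaging (FedAvg), the canonical aggregation scheme behind the definition above: each data owner trains locally and only model weights, never raw records, are sent to the server for averaging. All names and the toy linear model are illustrative assumptions, not part of the IEEE P3652.1 standard text.

```python
# Minimal FedAvg sketch for a 1-D linear model y = w * x.
# Illustrative only; function names and data are hypothetical.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on squared error, computed locally
    so the owner's raw (x, y) pairs never leave their premises."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets, rounds=20):
    """Each round: clients train locally, then the server averages
    the returned weights, weighted by each client's sample count."""
    for _ in range(rounds):
        updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
        total = sum(n for _, n in updates)
        global_w = sum(w * n for w, n in updates) / total
    return global_w

# Two data owners whose points all lie on y = 3x; only weights
# are exchanged, so the collective model recovers w = 3 without
# either party revealing its data.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = fed_avg(0.0, clients)
```

Weighting by sample count means owners with more data pull the global model proportionally harder, which is the standard FedAvg design choice.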

If you are interested in joining this working group, please contact Ms Ya-Ching Lu at angelica.lv@clustar.ai.

Organizing Committee

  • Steering Committee Chair:
    • Qiang Yang (WeBank, China/Hong Kong University of Science and Technology, Hong Kong)
  • General Co-Chairs:
  • Program Co-Chairs:
    • Han Yu (Nanyang Technological University, Singapore)
    • Yiran Chen (Duke University, USA)
  • Local Arrangements Co-Chairs:
    • Kilho Shin (Gakushuin University, Japan)
    • Takayuki Ito (Nagoya Institute of Technology, Japan)
    • Tianyu Zhang (WeBank, China)
  • Special Track Co-Chairs:
    • Bingsheng He (National University of Singapore, Singapore)
    • Di Jiang (WeBank, China)
    • Yang Liu (WeBank, China)
  • Publicity Co-Chairs:
    • Boyang Li (Nanyang Technological University, Singapore)
    • Lingjuan Lyu (National University of Singapore, Singapore)
  • Web Chair:
    • Jun Lin (Nanyang Technological University, Singapore)

Program Committee

  • Aleksei Triastcyn (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Anit Kumar Sahu (Bosch Center for Artificial Intelligence, Germany)
  • Aurélien Bellet (Inria, France)
  • Bao Wang (University of California, USA)
  • Boi Faltings (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Chaoyang He (University of Southern California, USA)
  • Daniel Peterson (Oracle Labs, USA)
  • Dimitrios Papadopoulos (The Hong Kong University of Science and Technology, Hong Kong)
  • Fabio Casati (Servicenow, USA)
  • Guodong Long (University of Technology Sydney, Australia)
  • Jalaj Upadhyay (Apple, USA)
  • Jianshu Weng (AI Singapore, Singapore)
  • Jianyu Wang (Carnegie Mellon University, USA)
  • Jun Zhao (Nanyang Technological University, Singapore)
  • Konstantin Mishchenko (King Abdullah University of Science and Technology, Saudi Arabia)
  • Leye Wang (Peking University, China)
  • Lifeng Sun (Tsinghua University, China)
  • Mingshu Cong (The University of Hong Kong, Hong Kong)
  • Nguyen Tran (The University of Sydney, Australia)
  • Pallika Kanani (Oracle Labs, USA)
  • Paul Pu Liang (Carnegie Mellon University, USA)
  • Pengwei Xing (Nanyang Technological University, Singapore)
  • Peter Richtarik (King Abdullah University of Science and Technology, Saudi Arabia / University of Edinburgh, UK)
  • Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
  • Rui-Xiao Zhang (Tsinghua University, China)
  • Seong Joon Oh (Clova AI Research, LINE Plus Corp., South Korea)
  • Sewoong Oh (University of Illinois at Urbana-Champaign, USA)
  • Shiqiang Wang (IBM, USA)
  • Tianchi Huang (Tsinghua University, China)
  • Tribhuvanesh Orekondy (Max Planck Institute for Informatics, Germany)
  • Virendra Marathe (Oracle Labs, USA)
  • Xi Weng (Peking University, China)
  • Xin Yao (Tsinghua University, China)
  • Xu Guo (Nanyang Technological University, Singapore)
  • Yan Kang (Webank, China)
  • Yang Zhang (CISPA Helmholtz Center for Information Security, Germany)
  • Yihan Jiang (University of Washington, USA)
  • Yiqiang Chen (Institute of Computing Technology, Chinese Academy of Sciences, China)
  • Yongxin Tong (Beihang University, China)
  • Zelei Liu (Nanyang Technological University, Singapore)
  • Zheng Xu (University of Maryland, USA)
  • Zhicong Liang (The Hong Kong University of Science and Technology, Hong Kong)
  • Zichen Chen (Nanyang Technological University, Singapore)
  • Ziyin Liu (The University of Tokyo, Japan)

Organized by


In Collaboration with