International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with IJCAI 2020 (FL-IJCAI'20)
Workshop Date: Friday, January 08, 2021 (08:00 - 13:00 UTC)
Venue (New): Blue Wing-North 4 (VirtualChair Gathertown)
Workshop Program
Time (UTC) | Activity |
---|---|
07:45 – 08:00 | Presenters to connect and test the system |
08:00 – 08:05 | Opening Remarks |
08:05 – 08:35 | Keynote Session 1: Incentives for Federated Learning (Video Recording), by Boi Faltings (Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland) |
08:35 – 10:20 | Technical Talks Session 1 (7 talks, 15 mins each including Q&A) |
10:20 – 10:30 | Break (Presenters should connect and test the system) |
10:30 – 12:15 | Technical Talks Session 2 (7 talks, 15 mins each including Q&A) |
12:15 – 12:45 | Keynote Session 2: Scalable and Heterogeneity-Aware Federated Learning, by Yiran Chen (Duke University, USA) |
12:45 – 13:00 | Closing and Award Ceremony |
Abstract: When participation in federated learning is voluntary, the data is likely to be biased because participants self-select to satisfy ulterior motives. This phenomenon, well known from social media, reviews and other data collections, can be avoided by providing incentives that replace those ulterior motives. The challenge is that such incentives should only reward truthful and accurate data. I will survey techniques for truthful information elicitation and show the challenges in applying them to the federated learning setting. I will then show how incentives based on influence are suitable for federated learning, and conclude with open issues for further research.
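To make the influence-based idea concrete, below is a minimal sketch of one way such a reward could be computed: each client is paid in proportion to how much including its update improves the aggregated model on a held-out validation set. The function names (`fed_avg`, `influence_rewards`, `val_loss`) and the leave-one-out construction are illustrative assumptions for this page, not the scheme presented in the talk.

```python
# Illustrative leave-one-out influence reward for federated learning clients.
# NOTE: this is a sketch under assumed interfaces, not the speaker's actual scheme.
import numpy as np

def fed_avg(updates, weights):
    """Weighted average of client model updates (a list of flat numpy vectors)."""
    weights = np.asarray(weights, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=weights)

def influence_rewards(updates, weights, val_loss, global_model, scale=1.0):
    """Reward each client by the leave-one-out influence of its update.

    `val_loss(model) -> float` evaluates a candidate model on held-out data.
    `updates` and `weights` are Python lists, one entry per client.
    """
    full_model = global_model + fed_avg(updates, weights)
    full_loss = val_loss(full_model)
    rewards = []
    for i in range(len(updates)):
        rest_updates = updates[:i] + updates[i + 1:]
        rest_weights = weights[:i] + weights[i + 1:]
        loo_model = global_model + fed_avg(rest_updates, rest_weights)
        # Positive influence = the validation loss is lower with client i than without it.
        rewards.append(scale * max(0.0, val_loss(loo_model) - full_loss))
    return rewards
```

Such leave-one-out payments are one simple instance of influence-based rewards; they only pay for updates that measurably improve the shared model, which is the property the abstract highlights.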
Biography: Boi Faltings is a full professor of computer science at the Ecole Polytechnique Fédérale de Lausanne (EPFL), where he heads the Artificial Intelligence Laboratory, and has held visiting positions at the NEC Research Institute, Stanford University and the Hong Kong University of Science and Technology. He has co-founded 6 companies using AI for e-commerce and computer security and has acted as an advisor to several other companies. Prof. Faltings has published over 300 refereed papers and graduated over 38 Ph.D. students, several of whom have won national and international awards. He is a fellow of the European Coordinating Committee for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI).
Abstract: Federated learning has become a popular choice for deploying on-device deep learning applications. However, the data residing across devices is intrinsically statistically heterogeneous (i.e., has a non-IID distribution), and mobile devices usually have limited communication bandwidth for transferring local updates. Such statistical heterogeneity and limited communication efficiency are two major bottlenecks that hinder applying federated learning in practice. I will survey prior art, present our proposed techniques for addressing these challenges, and discuss open issues for further research.
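As a concrete illustration of the two bottlenecks mentioned above, the sketch below partitions labels across clients with a Dirichlet prior (so each client sees a skewed, non-IID label mix) and runs a single FedAvg-style communication round in which every participating client must ship its update to the server. The helper names, the Dirichlet split, and the learning rate are illustrative assumptions, not the techniques proposed in the talk.

```python
# Minimal sketch: simulate a non-IID client split and one FedAvg communication round.
# Everything here is an illustrative assumption, not the speaker's method.
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_noniid_split(labels, num_clients, alpha=0.1):
    """Partition sample indices so each client's label mix follows a Dirichlet prior."""
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

def fedavg_round(global_w, client_grads, client_sizes, lr=0.1):
    """One communication round: clients send local gradients, server averages them."""
    sizes = np.asarray(client_sizes, dtype=float)
    avg_grad = np.average(np.stack(client_grads), axis=0, weights=sizes)
    return global_w - lr * avg_grad
```

Smaller values of `alpha` yield more skewed per-client label distributions, which is a common way to simulate statistical heterogeneity in federated learning experiments; the per-round exchange of full-size updates in `fedavg_round` is what the communication-efficiency line of work tries to reduce.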
Biography: Yiran Chen received his B.S. and M.S. degrees from Tsinghua University and his Ph.D. from Purdue University in 2005. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor, was promoted to Associate Professor with tenure in 2014, and held the Bicentennial Alumni Faculty Fellowship. He is now a Professor in the Department of Electrical and Computer Engineering at Duke University, serving as the director of the NSF Industry–University Cooperative Research Center (IUCRC) for Alternative Sustainable and Intelligent Computing (ASIC) and the co-director of the Duke Center for Computational Evolutionary Intelligence (CEI), focusing on research in new memory and storage systems, machine learning and neuromorphic computing, and embedded and mobile computing systems. Dr. Chen has published one book and more than 400 technical publications and has been granted 96 US patents. He serves or has served as an associate editor of several IEEE and ACM transactions/journals and has served on the technical and organizing committees of more than 60 international conferences. He is now serving as the Editor-in-Chief of IEEE Circuits and Systems Magazine. He has received 7 best paper awards, 1 best poster award, and 14 best paper nominations from international conferences and workshops. He is a recipient of the NSF CAREER Award, the ACM SIGDA Outstanding New Faculty Award, the Humboldt Research Fellowship for Experienced Researchers, and the IEEE SYSC/CEDA TCCPS Mid-Career Award. He is a Fellow of the IEEE, a Distinguished Member of the ACM, and a Distinguished Lecturer of IEEE CEDA.
Accepted Papers
Call for Papers
Privacy and security are becoming a key concern in our digital age. Companies and organizations collect a wealth of data on a daily basis. Data owners have to be very cautious when exploiting the value in their data, since the data most useful for machine learning often tends to be confidential. Increasingly strict data privacy regulations, such as the European Union’s General Data Protection Regulation (GDPR), bring new legislative challenges to the big data and artificial intelligence (AI) community. Many operations in the big data domain, such as merging user data from various sources to build an AI model, will be considered illegal under the new regulatory framework if they are performed without explicit user authorization. More resources about federated learning can be found here.
To explore how the AI research community can adapt to this new regulatory reality, we are organizing this one-day workshop in conjunction with the 29th International Joint Conference on Artificial Intelligence (IJCAI-20). The workshop will focus on machine learning systems that adhere to privacy-preserving and security principles. Technical issues include, but are not limited to, data collection, integration, training and modelling, in both centralized and distributed settings. The workshop intends to provide a forum to discuss open problems and to share the most recent and ground-breaking work on the study and application of secure and privacy-preserving machine learning. Both theoretical and application-based contributions are welcome. The FL series of workshops seeks to explore new ideas, with a particular focus on addressing the following challenges:
We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the workshop. At least one author of each accepted paper is expected to present it at the workshop. Topics include but are not limited to:
Techniques
Applications
Position, perspective, and vision papers are also welcome.
Special Benchmarking Track
In addition, the workshop encourages researchers to demonstrate and test their ideas on a set of benchmark datasets (https://dataset.fedai.org/#/). To this end, the special benchmarking track calls for submissions that evaluate the proposed methods using these benchmark datasets. If your submission uses the aforementioned datasets for experimental evaluation, please select option (B) or (C) from the "Submission Details" dropdown list.
For enquiries, please email flijcai20@easychair.org.
Submission Instructions
Submissions should be between 4 and 7 pages, following the IJCAI-20 template. Formatting guidelines, including LaTeX styles and a Word template, can be found at: https://www.ijcai.org/authors_kit. We do not accept submissions of work currently under review. Submissions should include author details, as we do not carry out blind review.
Submission link: https://easychair.org/conferences/?conf=flijcai20
Join the IEEE P3652.1 Federated Machine Learning Working Group
Federated learning defines a machine learning framework that allows a collective model to be constructed from data distributed across data owners. This guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including 1) the description and definition of federated learning, 2) the types of federated learning and the application scenarios to which each type applies, 3) performance evaluation of federated learning, and 4) associated regulatory requirements. More information can be found here.
If you are interested in joining this working group, please contact Ms Ya-Ching Lu at angelica.lv@clustar.ai.
Organizing Committee
Program Committee
Organized by
In Collaboration with