International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with IJCAI 2020 (FL-IJCAI'20)
Workshop Date: January 5-10, 2021 (tentative)
Venue: Kyoto, Japan
(with online meeting contingency plan)
Call for Papers
Privacy and security are becoming key concerns in our digital age. Companies and organizations are collecting a wealth of data on a daily basis. Data owners have to be very cautious when exploiting the value in their data, since the data most useful for machine learning often tends to be confidential. Increasingly strict data privacy regulations, such as the European Union’s General Data Protection Regulation (GDPR), bring new legislative challenges to the big data and artificial intelligence (AI) community. Many operations in the big data domain, such as merging user data from various sources to build an AI model, will be considered illegal under the new regulatory framework if they are performed without explicit user authorization.
In order to explore how the AI research community can adapt to this new regulatory reality, we are organizing this one-day workshop in conjunction with the 29th International Joint Conference on Artificial Intelligence (IJCAI-20). The workshop will focus on machine learning systems that adhere to privacy-preserving and security principles. Technical issues include, but are not limited to, data collection, integration, training, and modelling, in both centralized and distributed settings. The workshop intends to provide a forum to discuss open problems and to share the most recent and ground-breaking work on the study and application of secure and privacy-preserving machine learning. Both theoretical and application-based contributions are welcome. The FL series of workshops seeks to explore new ideas, with a particular focus on addressing the following challenges:
We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the workshop. At least one author of each accepted paper is expected to present it at the workshop. Topics include, but are not limited to:
Position, perspective, and vision papers are also welcome.
Special Benchmarking Track
In addition, the workshop encourages researchers to demonstrate and test their ideas on a set of benchmark datasets (https://dataset.fedai.org/#/). To this end, the special benchmarking track calls for submissions that evaluate the proposed methods using these benchmark datasets. If your submission uses the aforementioned datasets for experimental evaluation, please select option (B) or (C) from the "Submission Details" dropdown list.
For enquiries, please email firstname.lastname@example.org.
Submissions should be between 4 and 7 pages, following the IJCAI-20 template. Formatting guidelines, including LaTeX styles and a Word template, can be found at: https://www.ijcai.org/authors_kit. We do not accept submissions of work currently under review. Submissions should include author details, as we do not carry out blind review.
Submission link: https://easychair.org/conferences/?conf=flijcai20
Join the IEEE P3652.1 Federated Machine Learning Working Group
Federated learning defines a machine learning framework that allows a collective model to be constructed from data distributed across data owners. This guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security, and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including 1) the description and definition of federated learning, 2) the types of federated learning and the application scenarios to which each type applies, 3) performance evaluation of federated learning, and 4) associated regulatory requirements.
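The core idea described above, building a collective model without pooling the raw data, can be illustrated with a minimal sketch of federated averaging (weighted parameter aggregation). This is an illustrative example, not part of the IEEE P3652.1 guide itself; the function and variable names are our own, and a real deployment would add secure aggregation and multiple communication rounds.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client model parameters into one collective model by
    averaging them, weighted by each client's local dataset size.
    Only parameters are exchanged; raw data stays with each owner."""
    stacked = np.stack(client_weights)           # shape: (num_clients, num_params)
    coeffs = np.array(client_sizes) / sum(client_sizes)  # each owner's share of the data
    return coeffs @ stacked                      # size-weighted average of parameters

# Three hypothetical data owners with differently sized local datasets;
# each contributes locally trained parameters, never the data itself.
local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
local_sizes = [10, 30, 60]
global_model = fed_avg(local_models, local_sizes)
print(global_model)  # -> [4. 5.]
```

Owners with more data contribute proportionally more to the collective model, which is why the aggregation weights are dataset sizes rather than a plain mean.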
If you are interested in joining this working group, please contact Ms Ya-Ching Lu at email@example.com.
In Collaboration with