FARI AI for the Common Good
Description
Our mission is to study, develop, and foster the adoption and governance of AI, Data and Robotics technologies in an inclusive, ethical and sustainable way.
We bring together world-leading researchers and experts in the field of AI (Explainable and Trustworthy), Data (Open) and Robotics (human-centric) to meet challenges at local level.
We bring ethical and civic steering committees on board to participate actively and to guarantee the applicability, acceptability and relevance of our projects. The initiative is driven by AI and data ethics. For every potential project, we ensure alignment with legal requirements such as fairness, transparency and accountability, while respecting academic freedom and integrating ethical and societal concerns. We ensure that all FARI projects consider the EU’s seven key requirements for responsible AI, both in the context of research and in the context of use. All FARI projects will respect the ground rules of responsible research and innovation.
Core missions
Manifesto: a Research Institute on AI for the Common Good
Summary
The aim of the research institute is to enable, promote and perform excellent cross-disciplinary research on Artificial Intelligence in Brussels, inspired by the humanistic values of freedom, equality and solidarity that lie at the foundation of both the Vrije Universiteit Brussel (VUB) and the Université libre de Bruxelles (ULB): research that is internationally acclaimed and has local impact.
The research will concern both theoretical work and concrete applications, focusing on the advancement of AI, while engaging with local initiatives, addressing European challenges, and collaborating with a number of similar research communities at a global scale (e.g. the CLAIRE initiative).
Based on ethical and legal expertise and insights, the institute should, on the one hand, push the AI envelope, exploring the key aspects of intelligence using cutting-edge engineering and computer science; on the other hand, it should study in depth the disruptions caused by real-world deployment of AI systems: automating human decision-making processes will impact our daily social and economic activities, generating questions related to inequality, sustainability, ethnic bias and respect for the rule of law, as well as other disruptions to our social fabric.
The Institute thus ambitiously aims to integrate both sides of the AI coin by working in cross-disciplinary research teams from the start, drawing on philosophical, technical, medical, social-science and humanities expertise, while also engaging with civil society about the algorithms and intelligent systems that regulate our daily activities.
Artificial Intelligence excellence in Brussels: Brussels is a strategic location for leading AI research efforts not only due to the presence of the core EU institutions; the VUB and ULB have also played, and continue to play, a pivotal role in the history of this field. For example, five ERC grants have recently been awarded to Brussels-based researchers active in AI research, and several cutting-edge international research projects are carried out by research groups affiliated with the VUB and the ULB.
Both universities offer the capacity, skills, and knowledge required for advanced research in this domain. As AI will remain in the EU spotlight for the next decade, Brussels provides the ideal gateway. The research institute will focus on radical AI excellence, powered by the shared values of our research community, such as respect for the environment, fundamental rights and the rule of law, fairness, transparency, accountability and most notably academic freedom. At the heart of Europe, the institute will, from the start, focus on setting up a network of international excellence with those initiatives inside and outside Europe that pursue similar objectives.
Driven by the core values of the ULB and VUB – freedom, equality and solidarity – the institute will underline the necessity of considering the implications of AI developments on our society and individual advancements and rights. Value-driven research aims at producing scientific knowledge for AI that is fair, transparent, and accountable. By highlighting these values, the institute commits to transparency at three levels: regarding its funding, regarding its research design and regarding the societal impact it may generate.
The institute will bring together scientists, scholars, policy makers and business enterprises interested in AI and the consequent developments related to infrastructure, services and devices that contribute to individual empowerment without disrupting the common good, thus leading the way towards a more concrete elaboration of a European vision of AI. This entails a commitment to implement the UN Sustainable Development Goals (SDGs) and a long-term vision on the political economy of data-driven ecosystems, nourishing sustained reflection on the OECD rethinking of economic growth and societal well-being.
This feeds into the choice of the grand challenges to be addressed by the Institute, based on a recurring dialogue about the foundations and the technological articulations of the social contract we depend on.
Legal protection by design: the institute will investigate how fundamental rights protection and respect for the rule of law can be articulated in the architecture of AI applications. This entails the imperative of translating legal rights and obligations such as privacy, freedom of expression, the presumption of innocence, due process and non-discrimination into AI infrastructure. It also includes a focus on citizen participation in the deployment of AI in public administrations, as well as on the impact of AI on the separation of powers inside the state. It builds on long-standing expertise in this domain, as demonstrated by the ERC Advanced Grant on this topic in relation to legal technologies and by the Chair of Excellence of the French government on algorithmic regulations.
Promoting cross-disciplinary education & lifelong learning: the institute already fosters a pool of technical, social, medical and legal knowledge and is able to support the training of a future generation of scientists, decision-makers and practitioners leading the way on sustainable and responsible AI development. It will also probe and test novel ways to reach out to the wider public, in the understanding that the sharing of knowledge is crucial for individual empowerment and societal progress, and fits well with the OECD Secretary-General’s Advisory Group on a New Growth Narrative.
As a starting reference point, the institute will follow the European Commission’s working definition of Artificial Intelligence: “systems that display intelligent behavior by analyzing their environment and taking action — with some degree of autonomy — to achieve specific goals”. AI is hence used as an umbrella term for algorithmic decision-making, automated reasoning, machine learning, machine translation, natural language processing, computer vision, artificial agents, robotics and more.
Which values drive research on AI for the Common Good?
We focus on the Enlightenment values of the universities VUB and ULB, as well as the city of Brussels: freedom, equality and solidarity. We frame them as ‘enlightened values’ to highlight the need for transparency and respect for the human, as core to responsible AI development.
This transparency entails a commitment to working with open-source code where possible, promoting Open Science and Open Data principles, and promoting transparency in our collaborations with stakeholders.
As to “freedom”: we should not assume that AI applications will generate individual freedom and autonomy as a matter of course. We aim to continue long-standing research into the impact of AI applications on different types of freedom, e.g. freedom from interference (such as the classical interpretation of the right to privacy) and freedom to develop one’s identity (relating to the right to non-discrimination and freedom of expression). Other types of freedom are to be discussed.
As to equality: we can understand this as “equal status” and the recognition of diversity. We should not assume that the risks and benefits of AI will be equally distributed as a matter of course. This will require active intervention (1) at the level of design strategies (e.g. fair, accountable and transparent computing), (2) at the level of a diverse community of researchers, and (3) at the level of engaging those whose lives will be impacted by AI infrastructure or applications in diverse ways.
As to solidarity: this means our commitment to the major challenges facing society and our concern for the respectful treatment of our fellow human beings and the world as a whole. We cannot assume that AI solutions will be conducive to the public good: we will continue long-standing research into how surveillance systems and attempts to modify user behavior threaten individual autonomy as well as the social contract itself. This is related to the so-called tragedies of the commons caused by the maximization of individual interest, as with recommender systems or navigation systems. Introducing solidarity as part of a research agenda means committing to the major challenges facing society, such as climate change, the ageing population, rising health costs, and explosive inequality on the job market. Though AI can be part of the answer, we need to integrate both the ethical and the safety issues raised by increased dependence on algorithmic decision-making, addressing them at the level of our design strategies.
We acknowledge that tensions and contradictions exist between these values, because AI has to operate in complex worlds. Therefore, we acknowledge our responsibility as researchers to investigate how to handle and explain these conflicts to each member of this society, and this from a cross-disciplinary perspective.
Why a cross-disciplinary approach?
AI is developed by human beings, and we need to make sure that this is done in a thoughtful, ethical and democratic way. Therefore, technical disciplines will cooperate with researchers from law, the social sciences and the humanities (including medical science), and involve people whose lives will be impacted as part of the development process right from the outset, making sure that technological solutions solve real problems rather than rolling out whatever supports a short-term business model. This will enhance democratic legitimacy, while also fostering the kind of transparency that is inherent in our commitment to enlightened AI.
Think tank: a European vision on AI
The cross-disciplinary approach will be core to the think tank that will be part of the Centre. As we acknowledge that technology in itself is neither good nor bad but never neutral (Kranzberg’s Law), we need to develop a structured way to assess AI applications as they are designed and built. As indicated above, we believe it is crucial to involve those who will live with the consequences of AI infrastructure into this assessment (participatory technology assessment), while also involving them into the design process (constructive technology assessment). The think tank will focus on constructive and participatory technology assessment, resulting in hands-on reporting about the way citizens understand, evaluate and recommend specified AI systems.
Grand Challenges for Society
The United States and China are world leaders in the field of AI across the entire ecosystem, from research to take-up by business. The EU seems to lag behind in the uptake of smart technologies. This may be connected with the fact that SMEs represent 99% of all businesses in the EU, as SMEs may be more prudent about integrating digital technologies: only one in five SMEs in the EU is highly digitized. This may in fact be a feature of the European economy, and not a bug.
Our Institute will target solutions that focus on maintaining Europe’s strengths, without imitating the American and Asian models and without portraying the introduction of AI as an arms race. An important differentiator from the USA and China is the EU’s emphasis on sustainable technologies, e.g. by embracing change based on responsible innovation. In the same vein, the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on AI present seven key requirements that highlight the need for human-centered AI that takes privacy, fairness and security seriously. The key requirements are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability. These requirements are similar to those for collaborative robots and may pave the way for an ambitious international compliance system in the field of AI.
They are not ‘ethical’ niceties but will enhance Europe’s competitive advantage over both under- and overregulated markets.
The central grand challenges we foresee are:
- At the social and political level, how should we design AI systems that neither manipulate nor create undesirable dependence, but instead foster fundamental rights, the rule of law and democracy? How can we counter attempts to lure people into technology acceptance and instead engage publics in the reconstruction of their environment?
- At the economic level, how can we ensure fair wealth distribution in an AI-driven world, in line with the values of equality and solidarity? How should we divide tasks between humans and AI decision-making systems at the level of both infrastructure and applications, while taking into account the prevention of safety hazards and unfair bias?
- At the environmental level, how can we align the digital transition, which increases our dependence on energy, with the goal of the environmental transition? How can AI systems contribute to smart energy grids and large-scale use of renewable energy?
- At the legal level, how do we rethink the legal framework that shapes markets and potential business models, notably competition law, product liability for AI systems, and certification requirements such as the EU CE label? What standardization should be developed, and what legal framework should be in place, to prevent AI systems from coming to the market that are based on unreliable and non-transparent research design with potential effects on health and security?
- At the scientific level, how can we distinguish between exploratory and confirmatory research, thus addressing the reproducibility and credibility crises that are emerging in data-driven science?
From research to valorization
Finally, to best serve society and also nourish the economy, we believe that the research and innovation pipeline must start with excellence at the foundational level of mathematics, computer science and e.g. legal theory and philosophy of technology. One of our incisive innovations will be the cross-disciplinary interactions both in the theoretical investigations and the resulting applications. This will also allow us to situate the limitations of technical innovation, thus preventing society from investing in hyped business models that cannot serve the common good.
The collaboration between the different disciplines will have an impact on existing education programs and lead to new modules in both traditional and new lifelong learning initiatives, educating the employees of the future and enhancing the skill sets of the existing workforce (unemployed and employed), technology-savvy or not.
Artificial intelligence and robotics represent an immense new market. This will, in turn, give rise to new services building on new ecosystems. The size of these markets means that these technologies are in a position to disrupt and transform existing economies and societies. If we want the values of freedom, equality and solidarity incorporated into AI products and services, we need to stimulate the kind of entrepreneurship that brings these research and innovation initiatives to the market.
Therefore, the institute will not only focus on creating and stimulating excellence through cross-disciplinary research activities, but will also support concrete realizations in domains where AI for the Common Good can create both short- and long-term impact. Our priority is to survey and support these economic initiatives in order to verify that they respect basic values and ethical principles (such as privacy).
To support governmental initiatives, the institute could, on the other hand, collaborate on real solutions related, for example, to intermodal mobility, school assignment and assistance to unemployed persons, as well as on societal challenges related to diversity (multilingual learning programs), health (prevention and care) and ecology (circular economy).
All of the above will be core to the Centre, taking into account our commitment to the UN Sustainable Development Goals and the OECD’s call for a new growth narrative, a commitment that will also inform how we collaborate with the private sector.
LIST OF THE CORE RESEARCH GROUPS AND REPRESENTATIVES INVOLVED (alph.order)
AI-lab (VUB): prof. Ann Nowé, prof. Geraint Wiggins, prof. Bernard Manderick, prof. Bart de Boer, …
Brubotics (VUB): Prof. Bram Vanderborght, prof. Dirk Lefeber
SaaS (ULB): Prof. Emanuel Garone
ETRO (VUB): Prof. Adrian Munteanu, prof. Bart Jansen, prof. Jef Vandemeulebroucke
IRIDIA (ULB): Prof. Hugues Bersini, Marco Dorigo, Mauro Birattari & Thomas Stützle
LSTS RESEARCH GROUP (VUB): Prof. Mireille Hildebrandt
Machine Learning Group (ULB): Prof. Tom Lenaerts and Prof. Gianluca Bontempi
Centre Perelman (ULB): Prof. Gregory Lewkowicz & Benoît Frydman
Head office
FARI AI for the Common Good
+32488823853
Cantersteen 16
1000 Bruxelles
Monday: 08:30 - 17:00
Tuesday: 08:30 - 17:00
Wednesday: 08:30 - 17:00
Thursday: 08:30 - 17:00
Friday: 08:30 - 17:00