TEAS 2024

17 June, 2024 Toronto, Canada
The first Toronto Ethics in AI Symposium.
Held at the Bahen Centre at the University of Toronto. Stay tuned for an announcement of TEAS 2025.
70+
Attendees
2
Keynotes
8
Talks

About TEAS

The Toronto Ethics in AI Symposium (TEAS) explores the ethical concerns and priorities of ML systems, tackles questions of algorithmic power and accountability, shares knowledge of ethical and trustworthy ML practices, and helps develop a unified, coherent, transdisciplinary scientific community that aims to build responsible and ethical AI systems.

Who should attend?

Students in Toronto and beyond! Our goal is to build a community around ethics in AI among students and to encourage research in ethical and trustworthy AI. A remote participation option is available at registration.

Past Recordings

For talk and keynote recordings, see our YouTube channel or the playlist below.

Speakers

Lambert Hogenhout
Chief Data, Analytics and Emerging Technologies
United Nations
Read more →
Émile P. Torres
Postdoc, Inamori International Center for Ethics and Excellence
Case Western Reserve University
Read more →
Nicolas Papernot
Assistant Professor of Electrical and Computer Engineering and Computer Science
University of Toronto
Read more →
Sneha Talwalkar
PhD Student
Read more →
Mario Gibney
Co-Founder
AIGS Canada
Read more →
Rayanna De Faria Calaza Rocha
PhD Student, Department of Philosophy
York University
Read more →
Ramaravind Kommiya Mothilal
PhD student, Faculty of Information
University of Toronto
Read more →
Noah Khan
PhD Student in Social Justice Education
Ontario Institute for Studies in Education, University of Toronto
Read more →
Mishall Ahmed-Lallani
PhD Student in Political Science
York University
Read more →
Yashwinie Shivanand
Lead Writer (Liability)
UN Internet Governance Forum's Multistakeholder Expert Group
Read more →

Schedule

9:00 - 9:15

Registration Opens

Bahen Atrium
Registration opens at 9 am and runs for the full length of the symposium; please collect your name card before entering.

9:15 - 9:30

Land Acknowledgement and Opening Remarks

Bahen 1160
Conference organizers Yoyo and Sierra begin the symposium.

9:30 - 10:15

Opening Keynote: Ethics in AI

Bahen 1160
Lambert Hogenhout, Chief of Data, Analytics and Emerging Technologies at the United Nations, gives an overview of the field of AI ethics.

10:15 - 10:30

Break

Bahen Atrium

10:30 - 12:00

Session A

Bahen 1160

12:00 - 1:00

Lunch Break

Bahen Atrium
We'll provide pizza and sodas for lunch in the Bahen Atrium.

1:00 - 3:00

Session B

Bahen 1160

3:00 - 3:15

Break

Bahen Atrium

3:45 - 4:45

Closing Keynote: TESCREALism, the Tech Right, and the Reckless Race to Build Artificial General Intelligence

Bahen 1160
Postdoctoral scholar Dr. Émile P. Torres gives the closing keynote online.

4:45 - 5:00

Closing Remarks

Bahen 1160
Conference organizers Yoyo and Sierra close the symposium.

10:30 - 10:50

Characterizing Machine Unlearning through Definitions and Implementations

Bahen 1160
The talk presents open problems in the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning or copyright claims. The first part of the talk discusses approaches that provide exact unlearning: these approaches output the same distribution of models as would have been obtained by training without the subset of data to be unlearned in the first place. While such approaches can be computationally expensive, we discuss why it is difficult to relax the guarantee they provide to pave the way for more efficient approaches. The second part of the talk asks if we can verify unlearning. Here we show how an entity can claim plausible deniability when challenged about an unlearning request that was claimed to be processed, and conclude that, at the level of model weights, being unlearned is not always a well-defined property. Instead, unlearning is an algorithmic property.

10:50 - 11:10

Information Security vs. AI Risk Management

Bahen 1160
This talk compares information security risk management with AI risk management using two real-world scenarios: a common web application security risk and a failed AI experiment due to model mislearning. Through these examples, we shall explore the crucial role of explainability and auditability in mitigating AI risks, discuss security controls outlined in recent AI standards and their potential to prevent these issues, ultimately prompting the question: can we design better defenses against manipulation in the evolving world of AI?

11:10 - 11:30

The Case for Taking Catastrophic Risk from AI Seriously

Bahen 1160
This talk gives an overview of catastrophic risks posed by AI. The talk is given by Wyatt Tessari L'Allié on behalf of Mario Gibney.

What About Them? Consciousness as an Ethical Constraint to Consciousness Verification Procedures

Bahen 1160
I will identify a tension between promising tests devised to verify the presence of consciousness in AI and the risk of imposing novel ethical harms against conscious AI as potential members of our moral community. That is, some of these tests overlook the moral considerations that would apply to the AI were it conscious. As a case study, I will focus on Susan Schneider's AI Consciousness Test (ACT), which intentionally provides the AI with an incomplete training database, free of all consciousness-related material. However, I will object to this strategy because of its potential to harm test subjects. My argument is that the ACT creates a context analogous to that of epistemic injustice in human contexts. My main conclusion is that to avoid ethical pitfalls, consciousness tests should always be made ethically suitable for conscious test subjects. In this sense, suitability should be assessed with a context-based approach. In the context of the ACT, a conscious subject would presumably have the capacity to be a knower; as in, at a minimum, the subject would know the fact of its consciousness. Thus, to be ethically suitable, the ACT should not harm test subjects in that capacity.

Responsible AI in Practice

Bahen 1160
The talk highlights how practitioners use machine learning responsibly (or ethically) in practice. Though ML practitioners increasingly employ various Responsible ML (RML) strategies, their methodological approach in practice is still unclear. In particular, the constraints, assumptions, and choices of practitioners with technical duties, such as developers, engineers, and data scientists, are often implicit, subtle, and under-scrutinized. This talk is based on a study with 22 technically oriented ML practitioners across seven domains to understand the characteristics of their methodological approaches to RML through the lens of ideal and non-ideal theorizing of fairness. The talk will discuss how practitioners' methodological approaches fall along a spectrum of idealization. While they structured their approaches through ideal theorizing, they did not systematically document, nor pay deliberate attention to, their non-ideal approaches. The final part of the talk will discuss a new methodological approach, inspired by elements of non-ideal theory, to structure technical practitioners' RML process and facilitate collaboration with other stakeholders.

1:50 - 2:10

The Ethics of Data Annotation: Karya

Bahen 1160
This presentation explores the ethical dimensions of data annotation within the context of Karya, a non-profit organization that leverages a socially responsible model. Karya's innovative approach addresses the ethical challenges inherent in AI data practices, emphasizing fairness, transparency, and the socio-economic upliftment of marginalized communities. By providing dignified work opportunities and fair wages, Karya ensures that the benefits of the AI revolution are equitably distributed. This talk delves into Karya's operational framework, highlighting how it contrasts with traditional data annotation practices that often exploit vulnerable labour. I will examine the organization's impact on local communities, the ethical principles guiding its model, and the potential for scalability. Through this lens, the presentation aims to underscore the importance of ethical considerations in AI development and the role of socially conscious initiatives in shaping a more just and inclusive technological future. Karya's model not only sets a benchmark for ethical data annotation but also paves the way for a broader conversation on the intersection of technology, labour, and ethics.

2:10 - 2:30

Human Rights for AI - A question of method, design, or application(s)?

Bahen 1160
Recent trends in AI and society research highlight critical areas of concern pertaining to human rights and AI in its various forms and applications. Issues range from algorithmic bias and a lack of transparency in automated decision-making to a lack of accountability for harms caused by AI and its impact on labour, art, and a variety of other public and private sectors. This talk seeks to address some of the growing concerns relating to the role of human rights in mitigating harm and aiding the responsible use and application of AI, and whether incorporating human rights frameworks for AI is a question of method, design, or application(s).

2:30 - 2:50

The Policy Network on Artificial Intelligence

Bahen 1160
An overview of a United Nations branch working on cooperative global AI governance.

Register

Free!

Register before June 14th, 2024.

Sponsors & Partners

Want to become a sponsor? Get in touch:
Become A Sponsor

Organizers and Volunteers

Thank you to our co-organizers and volunteers!
Yoyo Liu
Co-organizer
Computer Science at the University of Toronto
Read more →
Sierra Wyllie
Co-organizer
Engineering Science at the University of Toronto
Read more →
Prajna Pendharkar
Volunteer
Undergraduate student at the University of Toronto