Artificial Intelligence continues to be one of the most exciting technological developments of our time. Canada has a significant AI talent base, and the market value of AI is expected to exceed $120 billion by 2025. The impacts of the science are powerful – AI has the potential to change what businesses are capable of, deliver broad and meaningful impact to communities and shape the future.
However, AI is not without its implications – with one of the most meaningful being bias.
It has been widely recognized that data sets, machines, and the humans who create them are subject to bias. When biased AI models are deployed at scale, they can introduce risks to society. These are the stories that make headlines – hiring algorithms that screen out female applicants, facial recognition systems that exhibit racial bias, or targeted advertising that puts individual privacy at risk. It is a multi-faceted problem, and the solution will be complex and challenging. That said, the integrity of the industry and the power of the science depend on it.
While there are ways to make machines fairer and more responsible – including fairness testing, monitoring, and developing controls to detect potential issues – the impact of community representation cannot be overlooked. Simply put, the makeup of project teams, who they consult with, and what their values are can be the difference between a model that is inclusive and respectful, and one that isn’t.
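To make the idea of fairness testing concrete, here is a minimal, hypothetical sketch of one common check – the demographic parity difference, which compares a model's positive-prediction rate across groups. The function names, sample data, and the 0.1 review threshold are illustrative assumptions, not a prescribed method.

```python
# Hypothetical illustration of a simple fairness test: demographic parity.
# A large gap in positive-prediction rates across groups flags the model
# for further review. All names, data, and thresholds here are assumptions.

def positive_rate(predictions):
    """Fraction of binary predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rates across all groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Example: binary hiring-screen predictions for two demographic groups.
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% predicted positive
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 25.0% predicted positive
}

gap = demographic_parity_difference(preds)
print(f"demographic parity difference: {gap:.3f}")  # prints 0.375
if gap > 0.1:  # threshold is an assumption; set it per context
    print("potential disparity - review model and training data")
```

In practice, checks like this are run continuously as part of monitoring, alongside qualitative review by diverse teams, since a single metric can never capture every dimension of fairness.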
For one of the most forward-thinking, innovative fields, AI lags behind in diversity. Element AI’s 2020 Global AI Talent Report noted that women accounted for only 15 percent of researchers publishing in the field. In 2019–2020, 19.9 percent of Computer Science doctoral degree recipients were female, and 21.7 percent of all doctoral computing degree recipients were female. The share of new Black computer science PhDs sits at an average of just 3.1 percent, and Indigenous representation is strikingly low.
There is also a lack of mentorship and early learning programs, and the impacts of this can be severe. In order to solve the problem of bias in AI, organizations must focus as much on people as they do on science, ensuring that powerful models are being built by diverse teams.
In 2020, RBC and Borealis AI launched RESPECT AI, an online hub that brings open-source research code, tutorials, academic research and lectures to the AI community, helping to make ethical AI available to all. Since launching the hub, we have released several open-source tools to help advance responsible AI adoption, published research, and released a series of tutorials on bias, privacy, and other challenges in the field. We have also launched a RESPECT AI industry survey to better understand barriers to responsible AI adoption, and will be sharing the survey results later this year.
Our new Let’s Solve It mentorship initiative – now one of CIFAR’s National AI Training Programs – is focused on providing undergraduate students from diverse backgrounds with mentorship, training, and guidance to solve real problems using AI. We have already had several teams (including two all-women teams!) go through this program. These undergraduate students had no prior knowledge of ML, and some have now landed internships and jobs in AI. It’s a small step, but we are extremely proud of this program, and we look forward to growing it across the country with additional support from CIFAR.
We also work closely with organizations like CIFAR, the Vector Institute, and AI4Good to support diverse talent and strengthen representation in AI.
As an industry, we cannot push for innovation without considering representation. A team with diverse perspectives is better at challenging assumptions, identifying gaps in data and systems, and creating models that can have a positive impact on as many populations as possible. Organizations must consider criteria such as gender, race, socioeconomic background, work experience, age, ability, privilege, and experience with discrimination in order to develop strong AI.
Learn more about Tech @ RBC or check out our latest job opportunities here.