
Who Will Change AI? A Reflection on Olga Russakovsky’s Talk on Fairness in AI

As a woman from Ethiopia in the field of AI, I’ve often found myself navigating a world that wasn’t built with me in mind. From algorithms that struggle to recognize my face to datasets that don’t reflect my reality, the biases embedded in technology are not just abstract concepts to me; they are part of my daily life. So, when I had the opportunity to listen to Olga Russakovsky’s talk on fairness in AI, it felt like a breath of fresh air. Here was a leading researcher in computer vision, not just acknowledging the problems I’ve seen, but dedicating her work to solving them.


The Unseen Biases in Our Everyday Tech


Olga’s talk started with a powerful premise: AI models are everywhere, and they are learning from a world that is far from equal. She shared some unsettling examples that I think everyone should know about. Have you ever used a “hotness filter” on a photo app? Olga pointed out that one such app was found to lighten users’ skin to increase “attractiveness”. Or consider the groundbreaking Gender Shades study by MIT’s Joy Buolamwini and Dr. Timnit Gebru, which Olga highlighted. Their work exposed how commercial facial recognition systems were significantly less accurate for Black women compared to white men.
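The core method behind Gender Shades is simple to state: evaluate accuracy separately for each demographic group instead of reporting one aggregate number. Here is a minimal sketch of that disaggregated evaluation (my own illustration with made-up labels, not the study’s actual code):

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    # Accuracy computed separately per demographic group; a single
    # aggregate score can hide large gaps between groups.
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy, made-up labels: 75% overall accuracy masks a total failure
# on group "b".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "a", "a"]
print(disaggregated_accuracy(y_true, y_pred, groups))
# -> {'a': 1.0, 'b': 0.0}
```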


These biases extend beyond just our faces; they encode cultural stereotypes. Olga showed how one major dataset’s idea of a “groom” is almost exclusively a white man in a tuxedo next to a woman in a puffy white dress. This isn’t a global truth; it’s a narrow, Western-centric view frozen into code and perpetuated by algorithms.


These aren’t just isolated incidents. They are symptoms of a deeper problem: the data we use to train our AI models is often skewed. As Olga explained, many of these datasets are overwhelmingly composed of images of people with lighter skin. This lack of diversity in the data leads to a lack of fairness in the technology built from it.


A World Beyond the West: The Geographic Bias in AI


What really hit home for me was Olga’s discussion of geographic bias. She showed a world map in which countries were shaded by how many images they contributed to a popular computer vision dataset. The United States and Western Europe were dark, while the entire continent of Africa was almost invisible. This isn’t just about representation; it has real-world consequences. Olga gave a brilliant example of a commercial computer vision system that failed to recognize a bar of soap because it was trained primarily on images of liquid soap dispensers from higher-income US households.


Growing up in Ethiopia, I can tell you that a bar of soap is a far more common sight than a fancy liquid dispenser. This example might seem trivial, but it speaks to a much larger issue. If AI is trained on a narrow slice of the world, how can we expect it to serve the needs of a global population? It’s a question that keeps me up at night, and it’s why I’m so passionate about bringing my own perspective and experiences into this field.


So, What Can We Do About It?

Olga didn’t just leave us with the problems; she also talked about the solutions. And it’s not as simple as just collecting more data. While more representative datasets are a crucial first step, we also need to be creative with our algorithmic interventions. This means developing new techniques to train AI models to be fair, even when the data is not. It’s a complex challenge that requires a combination of technical skill, ethical consideration, and a deep understanding of the societal context in which these technologies are deployed.
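To make “algorithmic intervention” slightly more concrete: one common family of techniques reweights training examples so that over- and underrepresented groups contribute equal mass to the loss, rather than letting the majority group dominate. A minimal sketch of the general idea (my own illustration, not a method from the talk):

```python
import numpy as np

def group_balanced_weights(groups):
    # Weight each sample inversely to its group's frequency so that
    # every group contributes equally to a weighted training loss.
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(labels, counts / len(groups)))
    n = len(labels)
    return np.array([1.0 / (n * freq[g]) for g in groups])

# Hypothetical skewed dataset: group "B" is badly underrepresented.
groups = ["A"] * 900 + ["B"] * 100
w = group_balanced_weights(groups)
print(w[0], w[-1])  # A samples ~0.56, B samples 5.0 (9x heavier)
```

Weights like these can be passed to most training APIs (e.g. a `sample_weight` argument) so the optimizer no longer treats errors on the majority group as nine times more important.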


But perhaps the most important solution, and the one that resonated with me the most, is the need for more diversity in the field of AI itself. Olga shared a personal story about how a robot she was working on during her PhD could understand everyone’s speech except for hers. It was her own lived experience of being an “outlier” in the data that sparked her interest in this research. This is why organizations like AI4ALL, which Olga co-founded, are so vital. Their mission is to educate and support a diverse next generation of AI leaders, because they know that the people who build AI will ultimately shape its future.


I loved how Olga closed this part of her talk with a dose of humility, reminding us that this work is a marathon, not a sprint. She said, “No dataset is perfect. No algorithm is perfect. Let’s give each other grace. Let’s keep moving forward.” It’s a call for persistent, collective effort.


The Road Ahead


Olga’s talk was a powerful reminder that building fair and equitable AI is not just a technical problem; it’s a human one. It’s about who gets to be in the room, whose voices are heard, and whose experiences are valued. As I continue my journey in AI, I carry this with me. My background as a woman from Ethiopia is not a limitation; it is a strength. It gives me a unique perspective that is desperately needed in this field.
The question Olga left us with is one I want to leave with you: AI will change the world, but who will change AI? I hope that you, like me, will be inspired to be a part of the answer.


Week 2: Guest Lecture

On Wednesday, July 9th, SGI fellows were treated to a one-hour presentation by guest lecturer Aaron Hertzmann, Principal Scientist at Adobe Research in San Francisco, CA. Aaron was introduced by SGI fellow Amber Bajaj, who, among other accomplishments, noted that Aaron was recently recognized by the Association for Computing Machinery (ACM)’s SIGGRAPH professional society “for outstanding achievement in computer graphics and interactive techniques” with its Computer Graphics Achievement Award. The talk, titled “Toward a Theory of Perspective: Perception in Pictures”, began on a personal note, with Aaron recalling how often he was critical of his own drawings for differing from the photographs he took of the scenes he was illustrating.

From that anecdotal example, Aaron expanded his talk to cover human perception, vision, the theory of perspective, and much more, weaving it all together to paint a compelling picture of what leads us, as humans, to perceive a two-dimensional image as an accurate representation of our three-dimensional reality. He made a persuasive case that, while single-point perspective is how cameras typically capture scenes, and single-point linear perspective is a common tenet of formal art classes, multi-point perspective more faithfully represents how we remember our experiences. In a world of electronics, digital imagery, and automation, it was striking to hear that artists can still convey an image more faithful to lived experience than digital cameras and rendered three-dimensional imagery can capture.
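For readers outside graphics: “single-point linear perspective” is essentially the pinhole projection a camera performs, where image size falls off linearly with depth. A minimal sketch of that projection (my own illustration, not code from the talk):

```python
import numpy as np

def pinhole_project(points, focal_length=1.0):
    # Linear (single-viewpoint) perspective: project 3-D camera-space
    # points onto the image plane by dividing x and y by depth z.
    points = np.asarray(points, dtype=float)
    return focal_length * points[:, :2] / points[:, 2:3]

# Two identically sized objects at depths 2 and 4: the farther one
# projects at exactly half the size -- the signature of linear
# perspective that few classical paintings strictly obey.
print(pinhole_project([[1.0, 1.0, 2.0],
                       [1.0, 1.0, 4.0]]))
# -> [[0.5  0.5 ], [0.25 0.25]]
```

Multi-perspective pictures, by contrast, let the effective viewpoint vary across the image rather than fixing a single center of projection.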

Key points from Aaron’s talk:

  • Only 3% of classical paintings strictly follow linear perspective
  • A multi-perspective theory more faithfully captures our experience
  • MaDCoW (Zhang et al., CVPR 2025) is a warping algorithm that works for a variety of input scenes
  • 99.9% of vision is peripheral, which leads to inattentional blindness (failing to notice objects that fall outside the fovea’s focus)
  • We don’t store a single, consistent 3-D representation in our heads; it is fragmentary across fixations
  • There are systematic differences between drawings, photographs, and experiments

Finally, the lecture came full circle with Aaron returning to the art piece he presented at the start and noting seven trends he has identified in his own work that merit further research: good agreement with linear perspective in many cases; distant object size; nearby object size; fit to canvas shape; reduced or eliminated foreshortening; removed objects and simplified shapes; and multi-perspective for complex compositions. Overall, the lecture was thought-provoking and motivational for the fellows currently engrossed in the 2025 Summer Geometry Initiative.