Imagine a world where artificial intelligence, the cutting-edge tool shaping our future, can't even recognize and replicate something as fundamental as your own body—because you're different. That's the startling reality Jessica Smith faced, and it's a wake-up call for how far we still have to go in making tech truly inclusive. But here's where it gets fascinating: things are changing, and fast. Stick around to discover how one woman's experiment exposed a major flaw in AI—and sparked a conversation about equality in the digital age.
Just this summer, Jessica Smith, a former Australian Paralympic swimmer, decided to play around with an AI image generator. She wasn't aiming for anything groundbreaking; she simply wanted to jazz up her headshot. So she uploaded a full-body photo of herself and gave the AI very precise instructions: depict her as missing her left arm from below the elbow. If you're new to this, AI image generators are tools such as DALL-E or Midjourney: programs that create pictures from text descriptions, trained on massive datasets of images scraped from the internet.
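If you're curious what "text in, picture out" looks like under the hood, here's a minimal sketch of such a request. It assumes OpenAI's Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative, not what Smith actually used.

```python
# Minimal sketch of a text-to-image request, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
# The prompt below is illustrative, not Smith's actual wording.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",  # any text-to-image model available to your account
    prompt=(
        "Professional headshot of a woman whose left arm ends "
        "below the elbow, natural lighting, neutral background"
    ),
    n=1,
    size="1024x1024",
)

# The API returns a temporary link to the generated image.
print(response.data[0].url)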
But here's the part most people miss: the AI struggled immensely. Despite her detailed prompts, the results kept coming back as a woman with two arms, or with a metallic prosthetic device in place of her limb difference. Intrigued, Smith asked the AI why it was so difficult, and the response was eye-opening: it didn't have sufficient data to work with. As she puts it, 'That was an important realisation for me that of course AI is a reflection of the world we live in today and the level of inequality and discrimination that exists.' In simpler terms, AI learns from what's already out there online, and if diverse representations are scarce, the technology simply can't imagine them accurately.
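To see why scarce data translates into "can't imagine it", here's a deliberately toy sketch. It is not a real image model, just a "generator" that can only repeat what it has already seen.

```python
# Toy illustration, not a real generator: a model that can only sample from
# its training data. When the limb-difference case is a tiny fraction of that
# data, it almost never appears in the outputs.
import random

random.seed(0)
training_data = ["two arms"] * 9_999 + ["limb difference"]

outputs = [random.choice(training_data) for _ in range(1_000)]
print(outputs.count("limb difference"), "of 1,000 outputs")  # typically 0
```

Real generative models are vastly more sophisticated, but the core dependence on what the training data contains is the same.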
Fast-forward a bit, and Smith tried again on ChatGPT. To her astonishment, it now produced a spot-on image of a woman with just one arm, exactly like her. 'Oh my goodness, it worked, it's amazing it's finally been updated,' she shared with the BBC. 'This is a great step forward.' This might seem like a small victory, but for millions living with disabilities, it's monumental. As Smith explains, 'Representation in technology means being seen not as an afterthought, but as part of the world that's being built.' It goes beyond tech; it's about human dignity.
OpenAI, the creators of ChatGPT, confirmed they've implemented 'meaningful improvements' to their image generation model. They note, 'We know challenges remain, particularly around fair representation, and we're actively working to improve this—including refining our post-training methods and adding more diverse examples to help reduce bias over time.' This is progress, but as we'll see, it's just the beginning.
While Jessica's experience marks a win, others are still grappling with similar issues. Take Naomi Bowman, who has vision in only one eye. She tried to use ChatGPT to blur the background of a photo, but instead it completely altered her face and made her eyes symmetrical. Even when she explicitly mentioned her eye condition and asked it to leave her face untouched, the AI still changed it. Naomi found it amusing at first, but the experience has since left her saddened. 'It now makes me sad as it shows the inherent bias within AI,' she says. She's advocating for AI models to be 'trained and tested in rigorous ways to reduce AI bias and to ensure the data sets are broad enough so that everyone is represented and treated fairly.' For context, bias in AI often stems from skewed training data: if most images online show 'ideal' features, the system reinforces those norms and sidelines anything that departs from them.
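As a rough illustration of the kind of rigorous checking Bowman describes, the sketch below audits a labelled image dataset before training, counting how often each attribute appears so representation gaps are visible up front. The records, labels, and threshold are invented for the example.

```python
# Minimal sketch of a pre-training representation audit on a hypothetical
# labelled dataset: count each attribute so under-represented groups are
# flagged before a model ever sees the data.
from collections import Counter

dataset = [
    {"id": 1, "labels": {"two_arms", "symmetrical_eyes"}},
    {"id": 2, "labels": {"two_arms", "symmetrical_eyes"}},
    {"id": 3, "labels": {"two_arms", "symmetrical_eyes"}},
    {"id": 4, "labels": {"two_arms", "one_eye_vision"}},
    {"id": 5, "labels": {"limb_difference"}},
]

counts = Counter(label for item in dataset for label in item["labels"])
total = len(dataset)

for label, n in counts.most_common():
    share = n / total
    flag = "  <-- under-represented" if share < 0.25 else ""
    print(f"{label}: {n}/{total} ({share:.0%}){flag}")
```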
And here's the bigger picture: AI's biases aren't just technical glitches; they mirror society's prejudices. Disability is just one area; experts point out that AI often overlooks other underrepresented groups too. Gina Neff, a professor at Queen Mary University of London, has also criticized ChatGPT's environmental footprint, noting that its data centers consume energy equivalent to what 117 countries use annually. Is the quest for perfect images worth the planet's cost? That's a controversial angle: some argue innovation justifies it, while others say we need sustainable AI solutions.
Abran Maldonado, CEO of Create Labs, a US company developing culturally sensitive AI, emphasizes that diversity starts with the people building the systems. 'It's about who's in the room when the data is being built,' he explains. 'You need cultural representation at the creation stage.' Without input from those with real-life experiences, AI misses crucial details. A classic example is a 2019 US government study revealing that facial recognition tech was far less accurate for African-American and Asian faces than for Caucasian ones. This isn't just unfair; it can lead to real-world harm, like misidentification in security or law enforcement.
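To make "far less accurate for some groups" concrete, here's a small sketch of how accuracy can be broken down per demographic group instead of reported as a single overall number. The records are invented for illustration and are not figures from the 2019 study.

```python
# Sketch of an outcome audit: break recognition accuracy down by demographic
# group rather than reporting one overall score. All records are made up.
from collections import defaultdict

results = [
    {"group": "group_a", "correct": True},
    {"group": "group_a", "correct": True},
    {"group": "group_a", "correct": True},
    {"group": "group_b", "correct": True},
    {"group": "group_b", "correct": False},
    {"group": "group_b", "correct": False},
]

totals = defaultdict(int)
hits = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    hits[r["group"]] += r["correct"]  # True counts as 1, False as 0

for group in totals:
    print(f"{group}: {hits[group] / totals[group]:.0%} accurate")
```

A single headline accuracy figure can hide exactly the disparities the study found; per-group breakdowns are what surface them.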
Jessica herself doesn't identify as disabled—she sees the challenges as societal barriers. 'If I use a public toilet and the tap has to be held down, that impacts my ability, not because I can't do it, but because the designer hasn't thought about me.' She warns that the same exclusion risks happening in AI, where systems are designed without everyone in mind. And here's where it gets controversial: Is AI perpetuating discrimination, or can we fix it before it's too late? Some might say companies like OpenAI are doing enough with updates, but critics argue deeper systemic changes are needed, including ethical oversight and diverse teams.
Jessica once shared her story on LinkedIn, and someone offered to try generating the image on their own AI app. When it failed in exactly the same way, she pointed it out, but the person ghosted her. 'That's typical of conversations around disability,' she notes. 'The conversation is too awkward and uncomfortable so people back away.' Awkward conversations are exactly what we need more of: open dialogues about bias, inclusion, and the human cost of tech.
What do you think? Is AI's evolution toward inclusivity a genuine leap forward, or just a band-aid on a bigger societal problem? Do you believe the environmental toll of AI is a price worth paying for better representation? Share your thoughts in the comments—do you agree with Jessica's optimism, or do you see darker implications for how AI shapes our world?