
San Francisco hosted another tech conference about fixing problems the industry created. Women Impact Tech West 2025 took place on July 25-26, featuring the usual suspects discussing how artificial intelligence excludes women and other underrepresented groups. Novel concept, except researchers have been screaming about this for years while companies ignored them completely.

Around 1,000 people showed up to hear what anyone following AI development already knows – the algorithms running everything from hiring to healthcare reflect the biases of whoever built them. Mostly white guys from Stanford and MIT, in case that wasn’t obvious.

Opening Day of Women Impact Tech

Dr. Maya Thompson from OpenAI kicked things off with her talk “Bias Isn’t a Bug — It’s a Design Flaw.” Catchy title, though the content rehashed arguments academics made in 2019. Thompson’s big revelation? Training data reflects societal inequalities. Groundbreaking stuff right there.

Her quote about building “AI that recognizes all of us” sounded inspiring until you remember OpenAI’s actual hiring statistics. Company talks about inclusive AI while their engineering teams look like a Stanford computer science reunion. Actions speak louder than conference presentations.

Sonia Martinez from the Latinx AI Alliance brought actual data instead of feel-good platitudes. She showed specific examples where AI systems failed Latino communities because nobody bothered including diverse perspectives during development. Refreshing honesty compared to most corporate speakers.

Jennifer Chou from Google DeepMind discussed AI ethics, which felt awkward considering Google’s recent ethics team layoffs. Hard to take corporate ethics seriously when the same companies fire researchers who raise uncomfortable questions about their products.

The opening session energy felt forced. Lots of applause, but you could sense people thinking “we’ve heard this before.” Conference fatigue is real when the same problems get discussed repeatedly without meaningful solutions.

Panel Reality Check

The “Building AI for Everyone” panel included reps from Microsoft, Meta, and smaller startups. Usually these things stay surface-level, but this one got uncomfortable fast. Good uncomfortable, not corporate uncomfortable.

Lena Abdurrahman from Anthropic dropped the session’s best moment. She’d flagged racial bias in their NLP model months earlier, got ignored, then watched executives scramble when public backlash forced changes. “We shouldn’t have to go viral to be heard” – that line hit hard.

Microsoft’s representative tried deflecting with corporate speak about “ongoing initiatives” and “commitment to improvement.” Translation: they’re aware of problems but haven’t actually fixed anything. Meta’s person did slightly better, admitting their algorithms have screwed over marginalized communities repeatedly.

Audience questions got spicy. People shared stories about being dismissed when raising bias concerns, having their research buried, or getting labeled “difficult” for pushing back on problematic designs. These unscripted moments revealed more truth than prepared presentations.

The panel ran over schedule because conversations kept getting real. Attendees seemed hungry for honest discussion instead of sanitized corporate messaging. Rare for tech conferences to break through the bullshit barrier.

Startup Pitch Fest

The showcase featured women-led companies claiming to solve AI bias through more technology. Because obviously the answer to problematic technology is additional technology. Logical.

InclusiveMind pitched AI assistants trained to understand neurodivergent speech patterns. Interesting idea, terrible demo. Their prototype barely worked under ideal conditions, let alone real-world scenarios. Classic startup move – promise everything, deliver minimum viable product six months late.

SheCanCodeAI presented an open-source framework for detecting hiring bias. The concept has merit, but open-source diversity tools face uphill battles. Companies prefer cheap solutions that let them claim progress without real change. Building sustainable communities around bias detection tools requires resources most startups lack.
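SheCanCodeAI didn't walk through its code on stage, but the core of any hiring-bias detector is a measurable comparison of selection rates across groups. A minimal sketch of that idea (hypothetical data and function names, not the startup's actual framework) using the EEOC four-fifths rule:

```python
from collections import Counter

def selection_rates(records):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    (the four-fifths rule) of the best-performing group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Hypothetical screening outcomes: (group, was_selected)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80
print(disparate_impact(outcomes))  # group B selected at 0.5 of group A's rate
```

The check itself is twenty lines; the hard part, as the panelists kept saying, is getting anyone to act on the numbers it produces.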

Several presentations followed identical patterns: identify bias problem, propose technical solution, show limited proof-of-concept, ask for money. Whether these solutions actually work or just create new problems remained unclear from ten-minute pitches.

Some startups seemed genuinely committed to inclusive technology. Others appeared to be diversity-washing their way toward venture funding. Distinguishing between sincere efforts and opportunistic marketing required reading between polished presentation lines.

Workshop Chaos at Women Impact Tech West 2025

Bias mitigation workshops promised practical techniques for identifying algorithmic problems. Some delivered useful frameworks, others felt like academic exercises designed by people who’ve never shipped production AI systems.

The best sessions came from practitioners who’d actually fought bias battles within large organizations. They shared specific tools, code examples, and political strategies for getting bias fixes prioritized. War stories beat theoretical frameworks every time.
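None of the workshop code was published, but one check practitioners commonly described is comparing error rates, not just outcomes, across groups. A sketch of an equal-opportunity gap (largest spread in true positive rate), with made-up data for illustration:

```python
def tpr_by_group(samples):
    """True positive rate per group from (group, y_true, y_pred) triples."""
    positives, true_positives = {}, {}
    for group, y_true, y_pred in samples:
        if y_true:  # only actual positives count toward TPR
            positives[group] = positives.get(group, 0) + 1
            if y_pred:
                true_positives[group] = true_positives.get(group, 0) + 1
    return {g: true_positives.get(g, 0) / n for g, n in positives.items()}

def equal_opportunity_gap(samples):
    """Largest difference in true positive rate between any two groups."""
    rates = tpr_by_group(samples)
    return max(rates.values()) - min(rates.values())

# Hypothetical model predictions on qualified candidates
samples = (
    [("A", 1, 1)] * 9 + [("A", 1, 0)] * 1   # group A: TPR 0.9
  + [("B", 1, 1)] * 6 + [("B", 1, 0)] * 4   # group B: TPR 0.6
)
print(round(equal_opportunity_gap(samples), 2))  # prints 0.3
```

A model can pass a raw selection-rate check while still failing qualified candidates from one group far more often, which is exactly the kind of gap the better workshop facilitators said gets missed.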

AI ethics hackathons generated excitement but little substance. Forty-eight hours isn’t enough time to solve problems that took years to create. Hackathon solutions tend toward technical band-aids for systemic issues requiring organizational transformation.

Workshop quality varied wildly depending on facilitator experience. Sessions led by researchers without industry experience felt disconnected from real-world constraints. Practitioners who’d navigated corporate politics provided actionable insights.

The most valuable learning happened during coffee breaks and hallway conversations. Attendees swapped stories about bias they’d encountered and strategies that actually worked within their organizations. Informal knowledge sharing beat formal presentations consistently.

Reality Check: What Actually Changes After Women Impact Tech West 2025?

Conference discussions mean nothing without implementation afterward. Attendees need to translate insights into hiring decisions, product changes, and organizational policies. Otherwise even the best conversations remain academic exercises.

Some companies will use participation to polish their diversity credentials without making internal changes. Others might implement real reforms based on conference insights. The difference depends on individual commitment and organizational culture.

Measuring conference impact requires tracking participant behavior over months and years, not just feedback surveys and social media engagement. Most diversity initiatives fail because they lack sustained follow-through after initial enthusiasm fades.

The people who attended already cared about inclusive AI. Converting skeptics and changing resistant organizations requires different strategies than conference education and networking.
