Who are the Bias Buccaneers?
The members of our rag-tag group of misfits and renegades share a common belief about the future of technological innovation as it pertains to the individual right to privacy, security, and the ethical use of AI technologies. Bias Buccaneers creates a forum for ongoing Algorithmic Bias Bounties™ (aka “Mutinies”). Pioneered by Twitter and continued by Stanford HAI and Anthropic (among others), bias bounties offer a democratized, crowdsourced method of identifying and mitigating algorithmic biases, using the wisdom of many to address a near-infinite array of problems.
We have joined together as the Bias Buccaneers to expand the knowledge and technical skills that help practitioners respond to a constantly changing technological landscape and its many far-reaching impacts on the people who use the latest technologies.
Mission I: Have Fun
Create engaging and transparent methods of evaluating and addressing algorithmic bias. We will operationalize these methods through bug bounties focused on specific datasets, algorithms, and applications. These bounties will offer real incentives for the Crew to get creative in finding high-impact bias risks to share with the community.
Mission II: Grow Experts
Create standards that are crowd-tested and approved, and therefore useful. We aim to make it easy and fun for AI engineers and data scientists to adopt standards for algorithmic bias, fairness, and explainability. Our standards are developed by the experts behind the open-source effort AVID and vetted by a team of AI risk and security experts.
Mission III: Earn Booty
We aim to host three paid Mutinies per year. By participating, the Crew can earn real money and prizes, along with a place on our leaderboard, where Buccaneers can build their reputations as experienced professionals in the field of applied AI ethics and security.