
The Mutineer's Pledge

Ahoy! Read the Code. Join the Crew

We truly appreciate the opportunity to collaborate with you on our maiden voyage. It is our view that incoming Crew share common beliefs about the future of technological innovation, especially as it pertains to the individual right to privacy, security, and the ethical use of AI.


We believe that working with Bias Buccaneers will not only help with the fundamental and mechanical aspects of your profession, but also ensure that, in your future work, you can respond to a constantly changing technological landscape and its many far-reaching impacts on those using the latest technologies and platforms.

 

As an organization devoted to upholding truth, deconstructing falsehoods, and empowering individuals through the democratized, transparent, and fair use of data, we ask our Crew to stand with us and confirm our mutual alignment on a basic set of ethical ideals. These ideals pertain not only to ethical conduct as a participant in Bias Buccaneer Mutinies, or as a facilitator of data in your future undertakings, but also to the ethical use of artificial intelligence in business, consumer, defense, and academic capacities (among other pertinent fields).

 

We implore you to review the commitments below and openly acknowledge your belief in them before joining our Crew. Though it is not legally binding, nor required in order to follow Bias Buccaneers or participate in Mutinies, this letter is a record of our mutual agreement on the subjects below and an acknowledgement of our shared intention to create a safe, honest experience for all who wish to sail the high seas of information.

Set clear AI and anti-bias standards

As of this writing, there are no standardized approaches for developing and evaluating Artificial Intelligence (AI) systems. Though organizations laying groundwork for open standards in AI do exist (OpenAI, for example), their guidelines are not widely adopted and do not carry the force of law. This allows companies and governments to develop AI detached from any ethical or moral obligation to a charter or open standard.


Given the prevalence of AI bias (whether by error of omission or commission), large groups of people, often minorities and members of marginalized communities, are given a lesser, unequal, and disadvantaged experience on technology platforms when compared to others. Even with awareness of these discrepancies on social media platforms, search platforms, and virtually every other sector of the technology world, unequal treatment of these groups continues to grow at an alarming rate. This prevents platforms and services from supporting all users equally and causes irreparable harm in the process.

 

We ask our Crew to institute a charter and policy on how privacy and security will be handled internally and externally. This includes, but is not limited to:

 

  • Banning the use of biased data with the intention to deceive

  • Regularly consulting an internal or external diversity, equity, and inclusion officer and/or team trained to look at bias in algorithms

  • Regularly reexamining the models and data on which AI and related algorithms are trained and evaluated

  • Removing from a platform (or platforms) any programs and/or algorithms intended to deliberately mislead or defraud users for the express purpose of spreading misinformation

  • Rapidly responding to any future developments in the duplicitous use of technology by creating applicable and appropriate policies for those new technologies

Avoid the creation or usage of any falsified or biased data intended to deceive

Just as a company or entity would not want falsified or biased data to appear on its platform, or have its platform become synonymous with intentionally duplicitous material, we ask that signing companies, entities, and individuals agree not to create such material. This includes deepfakes, psychologically manipulative user experiences (particularly those detrimental to users' mental or emotional health), and any data that has been altered to mislead or misguide an audience.

Build privacy- and user-forward products

In our modern world, both free and paid services often make users the product, selling and exploiting their data with little to no forethought about how to safeguard it. These practices are often opaque, buried under pages of legalese, or simply hidden from the user. Opting out of the sharing of user data between a company and third-party entities is often unwieldy or nonexistent.

 

We ask that our Crew institute a clear and plainspoken privacy policy that is user-forward, letting users know first and foremost what data you require of them, how you will use it, and how you will not use it. We also ask that you give users every opportunity to opt out of some or all sharing of their data with others, even if opting out means a sub-par experience for the user (or users) who request it.

Take data security seriously and detail your processes

The illicit market for illegally obtained user data is valued in the high billions, as credit card credentials, Social Security numbers, personal interests, and identifying information (address, age, etc.) can be used for a near-infinite number of fraudulent purposes. Once this data is in the hands of those seeking to use it for nefarious purposes, it cannot be recovered.

 

We request that Crew treat all data (user-related, internal, and beyond) as sacred. This means using the latest encryption technology, storing the most valuable data in segmented systems, and taking any data-related security and anti-fraud measures available to safeguard your most precious assets. We ask you to detail your processes for safeguarding user data and share them publicly. This is all to assure users that you are taking every possible step to ensure their trust is not unwarranted, and that your commitment to security is on par with your commitment to privacy, truth, and the removal of bias.

We thank you again for your interest in collaborating with Bias Buccaneers, as well as for your pledge to uphold standards in AI.


We believe our shared commitment to stopping the dissemination of biased data and unethical AI, on and off public platforms, is a sizable step toward creating a more honest and open world.

Sincerely, the Bias Buccaneers
