research

I am looking for a PhD program with faculty who are also studying the visual intersection of art and technology, specifically generative (AI) art, as it relates to the ethics of racial and gender bias.

Implicit biases are inevitably embedded in our code, language, and imagery. It is our ethical responsibility to raise awareness of, and demand accountability for, the data we provide to both humans and machines moving forward.

My goal is to expose the disadvantages faced by perpetually marginalized groups through quantitative data, and then, through qualitative analysis of that data, offer insight into how we as a society can promote actionable equity.

the problem: AI discrimination

If not handled properly, computer vision systems will learn and amplify the ingrained, automatic human biases present in any data they reference.

As with human psychology, defining image classification bias in mathematical terms (let alone for neural networks or large language models) is not a trivial task, owing to the sheer amount of information present in an image.

Put concisely, identifying sensitive, unfair, or biased attributes for machine learning forces scientists and practitioners to choose fairness measures based on their own personal beliefs, a methodology that is ultimately inefficient and fundamentally limited because it depends on subjective human input.
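To make "choosing a measure" concrete, here is a minimal sketch of one commonly used fairness metric, demographic parity difference: the gap in positive-prediction rates between two groups. The function name, group labels, and data below are synthetic illustrations, not a reference to any particular library.

```python
# A minimal sketch of one fairness measure: demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# All names and data here are illustrative, not from a real system.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Difference in positive-prediction rates between group_a and group_b."""
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return positive_rate(group_a) - positive_rate(group_b)

# Synthetic example: a classifier approves (1) or rejects (0) applicants.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups, "a", "b")
print(gap)  # 0.75 - 0.25 = 0.5
```

Even this tiny example shows the subjectivity the paragraph above describes: someone must decide which attribute defines the groups, and whether a rate gap (rather than, say, equalized error rates) is the right notion of harm.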

We, as scientists and ethicists, have a responsibility to inform algorithmic ecosystems about the nuances of marginalized groups, including but not limited to Black people, women, and Latinx people.

Recently, while visiting MoMA, I stumbled upon this diagram by Kate Crawford and Vladan Joler.

Granted, this map does not depict a neural network used for generative imagery, but it offers a glimpse into the layers of algorithmic systems operating behind an AI.