research

i am looking for a phd program with faculty who are also studying the visual intersection of art and technology, specifically generative (AI) art, as it relates to the ethics of racial and gender biases.

implicit biases are inevitably embedded in our code, our language, and our imagery. it’s our ethical responsibility to raise awareness and demand accountability for the data we provide to both humans and machines moving forward.

my goal is to expose, through quantitative data, the disadvantages faced by perpetually marginalized groups, and then, through qualitative analysis of that data, to offer insight into how we as a society can promote actionable equity.

the problem: AI discrimination

If not handled properly, computer vision systems will learn and amplify the ingrained, automatic human biases present in any data they reference.

As with human psychology, defining image classification bias in mathematical terms -- let alone within neural networks or large language models -- is not a trivial task, due in part to the large amount of information present in an image.

Concisely put, identifying sensitive, unfair, or biased attributes so that machine learning techniques can be applied forces scientists and practitioners to choose measures based on their own personal beliefs; that methodology is ultimately inefficient and retains fundamental limitations, because it depends on sensitive, subjective human inputs.
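
As one concrete illustration of how that choice of measure enters in: a metric like statistical parity difference is easy to compute, but selecting it (rather than, say, equalized odds) is itself a value judgment. Below is a minimal sketch with hypothetical labels and groups, not tied to any particular system:

```python
# Hypothetical example: statistical parity difference for a binary classifier.
# "groups" and "predictions" stand in for a sensitive attribute and a model output;
# choosing this metric (rather than another fairness measure) is itself a subjective decision.

def statistical_parity_difference(predictions, groups, group_a, group_b):
    """P(predicted positive | group_a) - P(predicted positive | group_b)."""
    def positive_rate(group):
        members = [p for p, g in zip(predictions, groups) if g == group]
        return sum(members) / len(members) if members else 0.0
    return positive_rate(group_a) - positive_rate(group_b)

# Toy data: 1 = image classified as "positive" by some hypothetical classifier.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(statistical_parity_difference(predictions, groups, "a", "b"))  # 0.5
```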

We, as scientists and ethicists, have a responsibility to inform algorithmic ecosystems about the nuances of marginalized groups – including, but not limited to, Black people, women, and Latinx people.


  • Thus far, through my generative visual research and an extreme fascination with search engine prioritization, I’ve found both a concerning amount of distorted, stereotyped treatment of sensitive aspects of Black culture (tattoos, hair texture, beauty standards) and the occasional disregard or absence of weighted/prioritized physiological descriptors.

    Some computer scientists, like Schrasing Tong (in the paper “Detecting Bias in Image Classification using Model Explanations”), suggest that “testing with concept activation vectors (TCAV) seeks to approximate the complex internal states of a deep neural network with human-level concepts rather than lower level features… TCAV explains the model by computing similarities between classes and a set of pre-determined high-level concepts,” or utilizing “GRAD-CAM [to] explain the model by highlighting regions that contributed heavily to the class decision” (a minimal illustrative Grad-CAM sketch appears after this list).

    On a more human level, I want to state that, in generative imagery, I’ve found tattoos that were never referenced in the prompt on women of color, but not on Caucasian women. I’ve also found a greater depiction of Caucasian features on people of color simply by prioritizing words like ‘beautiful’, which leads me to believe the system’s data sampling associates the word ‘beautiful’ with proximity to lighter skin color.

  • I am in the process of developing an abstract for an efficient, multiple-attribute-friendly method that will make the causes, degrees, and explanations of LLM (large language model) bias understandable to humans without a computer science background.

    1. Identify and quantify the most frequently repeated instances of visually referenced bias regarding women and people of color.

    2. Work backwards to provide a controlled AI with an appropriate amount of data to counter any discriminatory inferences.

    3. Assess the generated imagery before and after the addition of more inclusive data to measure the degree of bias reduction (see the hypothetical measurement sketch after this list).

  • This section will be updated upon completion of the testing portion of my research.
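
To make the Grad-CAM technique mentioned above concrete: the sketch below, assuming PyTorch and a pretrained torchvision ResNet-50, shows how a class-activation heatmap can be computed. It is illustrative only, not the implementation from Tong’s paper.

```python
# Minimal Grad-CAM sketch (assumes PyTorch + torchvision; not the cited paper's code).
# Grad-CAM highlights the image regions that contributed most to a chosen class score
# by weighting the last convolutional feature maps with their gradients.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output            # feature maps from the last conv stage

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]      # gradients w.r.t. those feature maps

layer = model.layer4                          # last convolutional stage of ResNet-50
layer.register_forward_hook(forward_hook)
layer.register_full_backward_hook(backward_hook)

def grad_cam(image_batch, class_index=None):
    """Return a heatmap (h x w) of regions that drove the class decision."""
    logits = model(image_batch)
    if class_index is None:
        class_index = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_index].backward()

    acts = activations["value"][0]            # (C, h, w)
    grads = gradients["value"][0]             # (C, h, w)
    weights = grads.mean(dim=(1, 2))          # global-average-pool the gradients
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    return (cam / (cam.max() + 1e-8)).detach()  # normalize to [0, 1]

# Usage: a (1, 3, 224, 224) tensor, normalized as the model expects (random here).
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```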
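
For step 3 above, one crude proxy measurement could compare the average perceived lightness of two sets of generated images, for example outputs for paired prompts (with and without the word ‘beautiful’) or the same prompt before and after retraining on more inclusive data. The sketch below is hypothetical: the folder names and prompt pairing are placeholders, and pixel lightness is only a rough stand-in for depicted skin tone.

```python
# Hypothetical before/after check (illustrative only): compare the average perceived
# lightness of two folders of generated images. Uses Pillow; paths are placeholders.

from pathlib import Path
from PIL import Image

def mean_lightness(folder):
    """Average grayscale value (0-255) over all images in a folder, as a rough proxy."""
    values = []
    for path in Path(folder).glob("*.png"):
        gray = Image.open(path).convert("L")       # convert to grayscale
        pixels = list(gray.getdata())
        values.append(sum(pixels) / len(pixels))
    return sum(values) / len(values) if values else float("nan")

baseline = mean_lightness("outputs/woman")           # placeholder folder names
modified = mean_lightness("outputs/beautiful_woman")

print(f"baseline: {baseline:.1f}, modified: {modified:.1f}, shift: {modified - baseline:+.1f}")
```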

recently, while visiting MoMA, i stumbled upon Anatomy of an AI System, a diagram by Kate Crawford and Vladan Joler.

granted, this map does not depict a neural network related to generative imagery, but it offers a glimpse into the layers of algorithmic systems at work behind an AI.