Amendment to Item 55 (AI) on 4/24/25 Agenda
Posted: Tue Apr 22, 2025 7:14 pm
Colleagues,
I will be proposing an amendment to the AI ethics resolution, Item 55, on Thursday’s agenda.
The resolution is excellent and I commend MPT Fuentes for submitting it. After consulting with industry liaisons and AI ethics professionals, I believe that some small changes can help convey the intent of the resolution while avoiding some potential difficulties they highlighted.
First, I would like to add a definition of “discriminatory bias” to the resolution and replace most instances of the word “bias” with it. For our purposes, discriminatory bias occurs when AI systems systematically produce results that unfairly favor or disadvantage individuals, entities, or groups based on protected characteristics, including race, color, sex, or disability. I believe this change better conveys the sorts of biases we aim to prevent with this resolution. Leaving “bias” unqualified could cast too wide a net, subjecting the city’s AI efforts to more procedural hurdles than we would prefer.
Second, I closed the circle between some defined terms (e.g., Social Scoring) and their use throughout the resolution. I hope that these changes help convey the original intent.
Lastly, I’ve added references to standards and regulatory frameworks for the City Manager to consult when developing AI literacy training, public communication, and city policy. These include the EU AI Regulation, Texas law, and industry AI guidance. Direct links are below.
I invite my colleagues’ discussion of this important issue.
Redline: https://drive.google.com/file/d/1NqJcDQ ... sp=sharing
Clean: https://drive.google.com/file/d/1XxhlpU ... sp=sharing
Thank you all!
-Marc