16 Comments
Jun 23, 2023 · edited Jun 23, 2023 · Liked by Michael Woudenberg

The final sentence in this piece doesn't follow. Consider the use of an algorithm to make a decision such as interview/don't interview, hire/don't hire, or accept/reject a loan application; that decision is the output. Suppose that the training set is based on historical outcomes rather than on samples of an idealized latent set or other synthetic data. If there is actual ethical bias in the training data, then the *better* the data accuracy, the more consistently the ethical bias will be exhibited in the output.
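
To make the point concrete, here is a minimal sketch in Python (all data, variable names, and the size of the penalty are hypothetical, invented purely for illustration): a logistic regression fit to synthetic "historical" hiring outcomes that penalized one group. The more accurately the model fits that history, the more faithfully it reproduces the penalty.

```python
# Hypothetical illustration: an accurate fit to biased historical
# outcomes reproduces the bias rather than removing it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)              # legitimate signal
group = rng.integers(0, 2, size=n)      # protected attribute, coded 0/1

# "Historical" decisions: skill mattered, but group 1 was penalized.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The group coefficient comes out strongly negative: the historical
# penalty is encoded in the "accurate" model, not corrected by it.
print("learned weights [skill, group]:", model.coef_[0])
```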

The piece also rests on a too-literal interpretation of what people usually mean when they say things like "The algorithm is biased against [societal group]." The term 'algorithm' in the latter statement is usually a synecdoche for the entire process of using AI/ML to make economically and socially significant decisions. Whether the problem is in the training data or in the algorithmic programming per se isn't material in most non-expert colloquial contexts.

Finally, the suggestion that "the term bias needs to be divorced from ethical considerations and fully focused on accuracy" isn't going to be realized: at least in the US, the term's ethical meaning is deeply embedded in law and legal culture.

Bias is the human condition, and the conscious identification and acknowledgment of bias everywhere lets you get a better, more accurate picture.

Nov 11, 2023 · Liked by Michael Woudenberg

Excellent article with quality comments! Yes, the bias is three feet deep, some of it not easy to fathom from within the matrix we are in.

Jun 27, 2023 · Liked by Michael Woudenberg

Applying bias to bias is simply the mathematical formulation of affirmative action. That is the reason quotas were created in universities and elsewhere. I enjoyed the way you formulated the problems and possible solutions.
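
If it helps, the "bias applied to bias" arithmetic fits in a few lines. This is only a sketch: the offset value and the 0/1 group coding are hypothetical, chosen to mirror the penalty in the earlier hiring sketch, not anything the article specifies.

```python
# Hypothetical counter-bias: add an offsetting adjustment to scores
# produced by a process known to penalize group 1.
def adjusted_score(raw_score: float, group: int, offset: float = 1.0) -> float:
    """Boost the group that the raw scoring process penalizes."""
    return raw_score + (offset if group == 1 else 0.0)
```

The quota mechanisms mentioned above are the discrete version of the same idea: a correction term sized to offset a measured disadvantage.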

The real problem elucidated here is not technological; it is human.

An Algorithm is innocuous technology that humanity has used for as long as humans have been 'computing' data. An Abacus uses Algorithms. Sumerian Cuneiform uses Algorithms.

Moreover, neither Algorithms nor Machine Learning defines AI.

Fortunately, Language operates upon immutable axioms of Propositional/Predicate Logic, regardless of opinion. Opinion is most often the problem where Logic is concerned, and this is certainly the case with so-called 'Artificial Intelligence'. After all, the term Artificial Intelligence = 'fake smart'.

'Opinion is not Logic' and 'Logic is not opinion' are axioms of Logic.

An Algorithm is a tool. Tools aren't intelligent. The user of the tool provides the 'intelligence' upon which the tool operates. Coding is no different.

Just as there is no way to eliminate bias from human cognition, there is no way to "eliminate bias in AI/ML". There are only methods to mitigate bias, but those aren't new either. I recall plenty of classes about mitigating bias from college - that was before the Internet was publicly accessible.
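
Agreed that mitigation rather than elimination is the realistic goal. One textbook mitigation is reweighting training samples so each group carries equal total weight; the sketch below assumes exactly two groups coded 0/1, both present in the data, and is illustrative only.

```python
# Minimal sketch of bias mitigation via inverse-frequency reweighting.
import numpy as np

def balanced_sample_weights(group: np.ndarray) -> np.ndarray:
    """Weights such that each group's total weight is half the dataset."""
    weights = np.empty(group.shape, dtype=float)
    for g in (0, 1):                       # assumes both groups appear
        mask = group == g
        weights[mask] = 0.5 / mask.mean()  # inverse group frequency
    return weights

# Usage with the earlier hiring sketch (hypothetical):
#   model.fit(X, hired, sample_weight=balanced_sample_weights(group))
```

Note that reweighting balances each group's influence on the fit; it does not remove a penalty already encoded in the labels themselves, which is the point made above about mitigation versus elimination.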

Moreover, coding is not confined to just Algorithms. That may be the narrow focus of this discourse, but that focus is demonstrably biased. In all candor, this discourse demonstrates bias, pedantry, and Logical fallacy (non sequitur).

It's the preconceived notion that bias is inherently bad or an undesired outcome that underpins the premise. However, bias is not inherently 'bad'. Is a bias for Logical certainty bad? Is a bias for joy and happiness characteristically bad? Do comedians not operate with a bias to make people laugh?

Oct 1, 2023 · Liked by Michael Woudenberg

So the question remains: do you think we can reduce or eliminate bias in ML/AI?
