Why Not Let Users Adjust How Bridgertonish They Want Their AI Pictures?
02/24/2024

Earlier, by Steve Sailer: Google’s Gemini AI: So Far, Artificial Stupidity Is Beating Artificial Intelligence

Data scientist David Rozado presents his complete collection of Google Gemini's 17th-century physicists.

He thinks the one in the lower right corner might be reminiscent of a European physicist like Galileo, who looked like this. But the other 48 definitely are not.

Rozado writes:

… when having to choose between historical accuracy and diversity/inclusion, Google’s Gemini prioritizes the latter, at least for the particular case above.

Jonathan Haidt has spoken previously about the tension between truth and DEI in situations where they inexorably conflict (as the example above illustrates). A critical societal issue arising from this tension is who gets to decide how the knobs of present and future AI systems are adjusted with regard to this trade-off.

I think most people would agree that a self-anointed group of elites making those decisions is probably sub-optimal. A more promising alternative might be personalized AI systems where individual users can adjust the knobs themselves and decide, for instance, how much accuracy/truth they are willing to sacrifice in favor of AI outputs conforming to certain normative values.

[Comment at Unz.com]
