6 Comments

Thank you very much for this clear explanation. In recent days we've seen the list of words that apparently US researchers are not allowed to use anymore, including words like women, LGBT+, equity, bias, etc. Reading this, I realised that this ban, plus the training of these new AI models, will erase information and bias these new models in terrifying new ways.

Yes absolutely, the bias embedded in algorithmic source data has been a major concern for years, as all the data we collect has human biases in it - so if we remove the capacity to mitigate those biases, the models will simply amplify them.

What I didn't explore here - but maybe I should have - is that the very concept of artificial general intelligence has its roots in eugenics - it's really about stratification of humans - so AI models based on removing "woke bias" will in fact be systematically embedding human stratifications in their outputs.

And as DOGE has every intention of deploying AI across all policy agendas - such as deportation - the impact could be horrific.

And thanks for reading - I really appreciate the feedback!


Thank you so much for taking the time to write this! I really enjoyed the read. I'll need to take time to digest it and form my own thoughts about it, but I'd love to ask more about why you think independently trained, small-scale AI models (those created by artists, for example), also contribute to technofascism?


Thank you, Vienna - that means a lot! And you make a really good point - I actually realised quite late when editing that I needed to make a distinction between large LLMs, so-called frontier models, and other forms of AI, like your example, which are far less problematic - I tried to retrofit the distinction into the piece, but probably didn't go far enough.

Basically, yes, it's possible to train a small focused model - one that is not dependent on these huge infrastructures - and I would also say there is nothing inherently technofascistic about those, depending on the data used and the applications (hopefully the artists you have in mind are generally non- and maybe even anti-fascistic in their practice - but you never know...).

Incidentally, a lot of the most important breakthroughs with AI are using far smaller models.

I guess I would justify myself by saying that, in the current moment, people training their own independent models are a tiny part of all "AI" - most people's experience of, and exposure to, AI is through massive LLMs and derivatives of them.

I would also note that when training your own AI, for most people that would still mean essentially fine-tuning and/or re-weighting an existing LLM:

So, for example, many feel they are in some way independent by using one of Meta's Llama models, which are (questionably) described as open source. My argument would be that those models are still very much part of these infrastructures - so they also bring us into contact with the technofascistic apparatus...

I hope that at least goes some way towards answering your question...


Yes, very clear - thank you for your thoughtful response! I agree with you - many artists using AI in their practices are likely using derivatives of Big Tech LLMs that they may not be aware contradict their (anti-fascist) values. I know a small group of artists building and training their own models. A tiny percentage, of course, given that it requires great technical prowess.
