This is a longform blog post version of a thread I made on a whim after reflecting on the recent blunders of the KeePassXC PR attempts on both Mastodon and Bluesky.
Context for this post
This Bluesky post (requires login) warns people about a KeePassXC maintainer using Microsoft Copilot to experiment with the code. It was followed by some arguably questionable PR attempts which further eroded trust in the software and the maintainer for several people, including myself.
The way these people handled the situation was horrible, on top of their already shaky stance on allowing AI-assisted contributions to the code: they say they don't encourage it while doing it themselves. I read this as implicitly encouraging it ("if they're doing it, we could too!"), and even in the wake of being wildly unpopular they decided to double down.
My original thread, cleaned up
What one should do is not be staunchly anti-"AI" (I'll just say ML going forward), but rather know your enemy and act accordingly, staying informed.
If you cosplay as a modern Luddite (something I've increasingly noticed in some people, especially art-centric ones) and put up mindless friction against anything ML, you're going to look uninformed at best and like a hypocrite at worst.
That is mainly because, besides the generative content we all know and hate, there are uses out there that some of you might not even realize rely on ML.
Using Voicemod? Yep, their voice changer is ML.
Using any online translator? ML too; we're way past the days of word-by-word translation.
OCR? Yep, also ML. Computer vision!
CSAM filters? ML. It's NOT good to make humans moderate that kind of thing entirely manually.
Text-to-speech? ML. Microsoft Sam, as funny as it was, is now obsolete.
Weather prediction? Also ML!
Hell, have you seen protein folds being predicted? Good lord. Interpolation, best-route planning, VOICE RECOGNITION?? All of that is now powered by some form of machine learning.
There are arguments that can be made against the whole movement, and many are excellent, like concerns about plagiarism and the insane energy cost of the enormous data centers they want to build for their centralized entry points. But some of those arguments don't hold water in the big picture. Instead of denouncing the entire technology, you should take it for yourself, bake it with love and EAT it, rather than have Microsoft or Google or some other big company munch the pie and feed it to you, then pull the spoon further away the moment they notice you REALLY like it.
Not only that: one must also educate themselves on the topic, at least to some degree, to know what to denounce and what to chill about, and not give in to companies that have ulterior motives when hyping up ML models.
By and large, they're salesmen, not devs. I don't think the researchers behind breakthroughs like attention and the entire Transformer architecture wanted something like this to happen.
Bit of an aside, but... did you know you can run some small LLMs on your own hardware?
That's running on a GeForce GTX 1650 (my laptop and main device).
Locally.
Of course, some uses are straight-up horrible, and those are pretty obvious: plagiarizing an art style without permission and things like that. Mainly creative stuff, really. (Why are you willingly stripping yourself of the joy of creATion?)
I also have an issue with writing large chunks of code (mostly its logic; templating and banal tedium are, I believe, somewhat fine) using LLMs, because who's writing the Damn Code, then?
This also applies if your stance on allowing it is shaky, which is the whole gripe with the KeePassXC situation.
This also ALSO applies to placeholder assets in games and things of the like, and I could go on... but the biggest thing you should be against is the huge companies trying to sell you the product so you depend on them, or actively poisoning the landscape (see Sora 2).
Not to mention the fascist overtones. I've been an observer this entire time, as I do not live in the United States, but the current government there has been heavily associated with the displacement of ethics and the forging of fake news through its use of image and video generation, to the point that image generation has been equated with a vehicle for fascism. And, y'know, they kind of have a point.
ML is not "the future" the way people hype or doom about, but it is a part of the future, and you should be PROPERLY aware of it so you can hold a steeled posture on the topic without resorting to weak arguments.
experiment!
learn!
know what thy enemy is and what could be helpful, at best.
Chatbots are not sentient. For this reason, do not rely on them psychologically.
Diffusion models do not draw. Music models do not compose. Passing them as doing such to offset creative talent is malicious.
...but are you going to kill somebody for using waifu2x (a deep-learning-based image upscaler), or for running Mistral on their own machine for fun or personal, harmless assistance?
...Well...