Earnest AI

Mark Zuckerberg has proclaimed that Open Source AI Is the Path Forward. He's not wrong.

At the same time, he's absolutely not in it for primarily selfless reasons. When you're late to a tech trend, the best way to catch up in both R&D and mindshare is to open source your stuff, so that's what Meta is doing.

Even though Mark doesn't yet have an innate understanding of, and appreciation for, The Commons, I'm cheering for Meta's big bet on open AI.

Since what 'open source AI' actually entails is woefully undefined [1], I'll offer a simple illustration of what trustworthy AI necessarily looks like.

Mutual trust

Flawed as they may be, our new AI citizens are here to stay. The key to a happy coexistence is trust. Thankfully, knowing which AI agents you can trust is actually very easy!

This is how you test your AI agent's trustworthiness: Ask it to explain exactly how it was built. A trustworthy AI agent will be able to walk you through its inner workings in great detail and at whichever level of complexity you prefer.

Crucially, the 'self-insight' of your supposed AI-friend must extend to its original training data. It's nearly impossible to build trust and make friends with someone who doesn't have any memories and therefore cannot tell you anything about why they think the way they think.

If I ask my AI-friend to draw me a picture of a swan, we should be able to have a conversation like this:

Erlend: That's a beautiful swan drawing! Which drawings did you learn from to draw this one?

AI-friend: Doing an image-similarity search against my training library, I found these 20 (author-credited) images of swans (out of 20,000) that closely match the picture we [The System] rendered for you.

Erlend: Fascinating. And why did you display a photorealistic swan instead of, say, a cartoony one?

AI-friend: That would be because of parameters XYZ...

...and so on. Nothing should be off limits. Easily digestible snippets of data should be just as readily available as links to the full-size repositories.
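For illustration only, here's a minimal sketch of the kind of attribution lookup that conversation implies. Everything here is hypothetical: the data is made up, and a plain cosine-similarity search over pre-computed embeddings stands in for whatever a real system would use (say, a learned image encoder plus an approximate-nearest-neighbor index over the full training library).

```python
# Hypothetical sketch: embed the generated image, then find the most
# similar (author-credited) images in the training library.
import numpy as np

def top_matches(generated_embedding: np.ndarray,
                library_embeddings: np.ndarray,
                credits: list[str],
                k: int = 20) -> list[tuple[str, float]]:
    """Return the k most similar training images by cosine similarity."""
    # Normalize so that a dot product equals cosine similarity.
    g = generated_embedding / np.linalg.norm(generated_embedding)
    lib = library_embeddings / np.linalg.norm(
        library_embeddings, axis=1, keepdims=True)
    scores = lib @ g
    best = np.argsort(scores)[::-1][:k]  # indices of the top-k scores
    return [(credits[i], float(scores[i])) for i in best]

# Toy usage: 20,000 library images with 512-dim embeddings (random here).
rng = np.random.default_rng(0)
library = rng.normal(size=(20_000, 512))
authors = [f"author-{i}" for i in range(20_000)]
swan = rng.normal(size=512)  # stand-in for the generated swan's embedding

for author, score in top_matches(swan, library, authors, k=5):
    print(f"{author}: similarity {score:.3f}")
```

The point isn't this particular implementation; it's that the lookup is only possible at all if the training library, with author credits attached, is retained and queryable in the first place.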

True AI friendship demands sincerity

The most meaningful version of 'open source AI', to me, is a provably earnest AI. I can only trust an AI agent that readily bares its software soul to me at a moment's notice.

Maybe that seems like asking a lot. In my human-to-human relationships I also expect honesty, but not in the absolute way that I do in a human-to-AI relationship. That's because I know there will always be things my human friends simply can't tell me yet, or ever.

An AI agent, on the other hand, has no such reservations about what information to divulge, as it is not a conscious, thinking entity with wants and fears. Outside the context of its commercial purpose, the AI has no reason to obfuscate its self-knowledge from me.

As such, I will only ever pay money for earnest AI. Anything else is designed for deception. I will pay good money for honesty.

In his post, Mark himself argues that "these models are trained by information that's already on the internet, so the starting point when considering harm should be whether a model can facilitate more harm than information that can quickly be retrieved from Google or other search results."

If Mark wants to rebrand as the organic cloud farmer, the only way for him to prove his commitment to a truly regenerative practice is to fully open up the training data for Llama. You just grabbed it all from the open internet anyhow, right?

So show us exactly what goes into your AI produce. We, cultivators of The Commons and the corporations that want to monetize it, can't possibly build a 'broader ecosystem' together unless Meta and its ilk can be transparent about where they're getting their water, nutrients and seeds (inputs) from, and what byproducts (outputs) they're releasing into the ecological cycle.

[1] – The OSI is engaged in a deep dive to solve for 'what is open source AI?' and I applaud the effort, but to be frank I think their latest draft shows they're still stuck in an antiquated, software-centric (as opposed to people-centric) worldview.