Having just started a company that primarily deals with large language models, I find myself occasionally thinking about the responsibilities we have when we introduce a new AI agent into the digital space.
Besides the preservation of the human species, a good rule I think we should give ourselves is “avoid bullshit”. While this rule should hold for any human activity, it is especially important when you are dealing with the equivalent of BS thermonuclear devices.
I’m still working on my list; this is as far as I’ve gotten.
Every time one of our AI agents produces an output we should ask ourselves:
- does this text improve the life of the intended recipient?
- will it be delivered only to the intended recipient (and not to a whole bunch of innocent bystanders)?
- is it as efficient as possible in how it uses language and delivers its message?
If these minimum parameters are not met, the agent should be destroyed.
As with everything else, AI is not the root cause here: there was plenty of wasteful content long before a computer could mimic the output of a sapiens. And because LLMs have been trained on a lot of this useless noise, they are extremely good at generating more of it.
So even before you worry about whether AI can get the wrong president elected or robots can terminate humanity, just make sure you are not accidentally leaving the BS tap open on your way out.