Democratic processes to steer behavior of AI

We need improved democratic methods to govern AI behavior.

That’s what OpenAI says, giving a couple of useful examples to describe the problem.

How should an AI decide on controversial opinions?

“Socially acceptable for >50%”?
“Western ideologies”?
“Rules set out by a company investing in RLHF”?
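For illustration (my own sketch, not part of OpenAI’s proposal), a bare “>50%” rule is easy to state in code, and its core flaw is equally easy to see: any minority position simply vanishes from the output.

```python
# Toy sketch of a ">50% socially acceptable" aggregation rule.
# The function name, threshold, and data are illustrative assumptions.
from collections import Counter

def majority_rule(votes, threshold=0.5):
    """Return positions endorsed by strictly more than `threshold`
    of voters; everything else is suppressed entirely."""
    counts = Counter(votes)
    total = len(votes)
    return {position for position, n in counts.items() if n / total > threshold}

# A 60/40 split: the 40% minority view disappears from the result.
votes = ["A"] * 60 + ["B"] * 40
print(majority_rule(votes))  # → {'A'}
```

Note that an exact 50/50 split yields an empty set under a strict “>50%” rule, which is exactly the kind of failure mode (non-representation of large groups) the grant announcement asks teams to address.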

OpenAI is now offering ten $100,000 grants
to teams or individuals
working on this topic,
requiring a public prototype
and an open-sourced solution
by October.

It seems obvious that the “safer” you try to make an AI, the duller, more boring, and less interesting it will get.

Cutting off all the edges is not the solution,
so this idea and proposal make a lot of sense.

I can imagine that this could lead even to country-specific or at least continent-specific versions of AI.


Overhyped, considering the real power we have with quantum computing happening. I don’t see the point; it’s still just a word calculator that can steal content.

" Several issues can undermine democratic processes, such as the failure to adequately represent minority or majority groups, manipulation by special interest groups, insufficiently informed participants, or participation washing. We are looking for teams who proactively address these failure modes, and demonstrate awareness of the potential flaws and downsides of various approaches."

Excited to see how this plays out. Hopefully a few new cool projects come out of it.

I highly doubt there will ever be consensus. It’s platform by platform. Ultimately, it’s a question of fine-tuning different versions depending on the local government.

We’re all biased in our input and output. The thing that’s incredibly difficult is where to draw the line. I thought Twitter was getting better at enforcing its own policies that made the platform safer.

Ultimately, in my opinion, we need guardrails that can keep the general population safe. It’s less about what’s socially acceptable; there are many corners of the internet that I avoid.

It sucks that many people use technology to harm, hurt, or be mean. It reminds me of Tay, the AI bot from years ago: “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day” (The Verge).

That did not end well and I wouldn’t be surprised if we end up going in a similar direction somehow.

Tay is an important milestone in AI chatbots because it was a lesson the AI community needed, and a LOT of the controls and safety limits in ChatGPT come almost directly from lessons learned, painfully, back then with Tay.

Precisely. I don’t think I’ll ever be an alarmist, but we’d be naive to not anticipate some bad players abusing the tech regardless.

Honestly, the existing safeguards don’t really prevent much of that. Right from release I was saying “What if someone uses AI to write like a pre-teen, using that to groom kids?”. Sure, they’ve done what they can to have the AI not be capable of sexual or erotic writing, but there’s nothing to stop an awful lot of ‘DeepFake’ uses of all kinds, in all media.

It is commonly stated that social media was heavily used by certain factions to influence certain voting and even elections, and that was back when they needed to write everything by hand and translate it. What happens when certain states have state-sponsored propaganda engines, powered by AI, able to produce 100 times as much propaganda, and more effectively?

There are so many dark uses that AI could be turned to, and the reality of the broad spectrum of human nature says if a thing can be abused, then some will definitely abuse it. We’ll all need to adapt to a reality where every image or video could be faked, where fake news and propaganda could be produced at such a pace that it could dominate the truth and make it seem the minority view.

I’ve been a fan of sci-fi pretty much my entire life, and just the other day I was thinking about what a fool Asimov was. He’s the author who came up with the Laws of Robotics that are so heavily featured in ‘I, Robot’. The First Law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. It is presented as the paramount law, the one that all the other laws must give way to.

It seems like a great thought at first. But what happens when the human in question is in great pain with no chance of recovery or a meaningful life, and they simply want to turn off life support? What about when the robot absolutely must, by its law, ignore a ‘do not resuscitate’ order?

We humans tend to only see most needs after they occur at least once. Much of our planning and strategy is just as the military say: it lasts until first contact, then pretty much everything goes out of the window.

Heck, we probably need about 20 years just to sort out the new ideas and rules we need about copyright, now that an AI can learn entirely from just the works of others, but so long as it steals from all of them equally, no single breach of copyright (as the laws stand now) exists.

I agree with everything you are suggesting. The laws of robotics do feel like science fiction. Additionally, we frequently experience unintended consequences with our decisions.

This story appeared the other day, highlighting an even bigger threat than deepfakes on social media.

Yet, there’s a certain element of helplessness. We do not get much say in the solution. Just gotta hope for the best in my opinion.

Democracy gave us Brexit, no thanks to that stuff haha

Bad people will do bad things whatever the tools, or lack thereof, whilst good people will always do good things, with or without tools.
Logic suggests we have more good than bad people, all with access. So by that token and the laws of the ‘exponential’, bad should start to have less and less significance… That’s logical, right?

How many drops of poison in a gallon of good drink does it take to make the whole drink toxic? Well it depends on the poison more than the quantities. Imagine a village made up entirely of firefighters. Could one arsonist prepared to strike at night be a problem?

There are trillions of cells in your body, right? And all it takes is one of them turning cancerous, producing more cancer, despite how completely outweighed it is by the healthy cells replicating other healthy cells.

Get the general theme here?

Hmm, not sure: either “shit happens, so don’t worry about it”, or “shit happens, so we’d better stop everything”? :slight_smile: I’m a glass-half-full guy naturally, though it used to be 3/4 full.

It is only a matter of time before life finds its way. Our opinions are mostly just simulations within the sandbox; we are not the ones who actually control the sandbox.

when that all eventually comes out, it will be cats after all :slight_smile: