
If ChatGPT told you to jump off a bridge…


Who is responsible for the misdeeds of artificial intelligence?

I recently played around with the new OpenAI ChatGPT platform. It is a really cool tool that certainly has professional uses, much like other AI solutions. But who is responsible for it?

Who is accountable for AI?

One evening, a few years ago, I sat by a fire pit with a bunch of people from an Ivy League school who, in case you were wondering, are way smarter than me.

While the conversation veered toward some fairly pretentious topics and opinions, one comment from an individual stuck in my mind, and it hasn’t left since:

“AI is ‘laundered accountability.'”

Who is responsible if ChatGPT tells a person to jump off a bridge?

Is it the responsibility of the person to not take the bad advice?

Probably. But what if an AI murdered someone?

Sure, the AI did it, but it runs on algorithms built by data engineers and scientists.

But maybe a different person asked someone else to build and refine the models in a way that biased them toward telling people to jump off a bridge.

Maybe all of these people work for a company that sanctioned the use of this AI…

…but what if the people built the AI as an open source tool for fair use?

Like many other debates about technology and computing, it will take a while for the law to catch up, but it will be interesting to see where it goes!
