
ELON MUSK’S BILLION-DOLLAR CRUSADE TO STOP THE A.I. APOCALYPSE

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov's "Three Laws of Robotics"
 
I read that book in maybe 5th grade and loved it.
 
I agree that AI is dangerous.

What we should really worry about is superintelligence. If any AI becomes smarter than the smartest human, we will really have a problem.

How do you control something smarter than the smartest of your species?
 
I can't even count the number of times I've seen a new article about an advance in A.I. or watched a video of new robots running and jumping over obstacles, and I just keep wondering, "have any of you ****s never seen a ****ing sci-fi movie???"
 
I just heard a podcast about little bots that design and initiate smarter bots. They just do certain types of data integration right now, but the designer was talking about all the novel and unexpected ways they design other bots.

Machine intelligence is something we will have to figure out soon. I have a tough time imagining how morals/ethics/limitations could be enforced in machine intelligences once they really take off.
 