Billy Crone: The A.I. Apocalypse
#1
This is from a Protestant source, but the information, interspersed with the Protestant slant, is vital to see, hear, and realize is a real threat TODAY. They make some 'Biblical' comparisons that are quite possibly valid, but it is the breakdown of the progression, and the mass indoctrination of people to accept this technology, that is most shocking. Check in at about 19 minutes. 'Watson'... the next "Terminator/Skynet" technology, indeed.

An interesting video with good points made, so I thought it worth the few short minutes to watch and be educated about this seemingly unrealized threat. The meat starts about 2 minutes in...



One should have an open mind; open enough that things get in, but not so open that everything falls out.
Art Bell
  
I don't need a good memory, because I always tell the truth.
Jesse Ventura

It's no wonder truth is stranger than fiction.
Fiction has to make sense.
Mark Twain

If history doesn't repeat itself, it sure does rhyme.
Mark Twain

You don't have a soul. You are a soul. You have a body.
C.S. Lewis

Political Correctness is Fascism pretending to be manners.
George Carlin
#2
Thanks for sharing that.

Lord have mercy.

One day at a time I guess.
#3
(03-22-2018, 12:32 PM)Sacred Heart lover Wrote: Thanks for sharing that.

Lord have mercy.

One day at a time I guess.

Ya know, I don't remember the thread title, but I had a post making the point that robotics and AI would be a serious threat to us, because the need for a basic control to keep them from killing people was being ignored. Simple logic will lead one to the conclusion that humanity is a threat to everything on earth (even humanity itself!) and so must be eliminated. AI, it is said, eventually comes to that conclusion... shockingly commonly.

What really worried me, personally, about that thread was how nonchalant and lackadaisical most posters' attitudes were toward the issue. Shocking, yet expected. Sci-fi movies and books on a subject often become future reality, and with AI they've been woefully accurate way too often.
#4
(03-22-2018, 12:48 PM)Zedta Wrote: Ya know, I don't remember the thread title, but I had a post making the point that robotics and AI would be a serious threat to us, because the need for a basic control to keep them from killing people was being ignored. Simple logic will lead one to the conclusion that humanity is a threat to everything on earth (even humanity itself!) and so must be eliminated. AI, it is said, eventually comes to that conclusion... shockingly commonly.

What really worried me, personally, about that thread was how nonchalant and lackadaisical most posters' attitudes were toward the issue. Shocking, yet expected. Sci-fi movies and books on a subject often become future reality, and with AI they've been woefully accurate way too often.

It's upsetting, but on this particular issue what, if anything, can we do?
#5
(03-22-2018, 12:54 PM)Sacred Heart lover Wrote: It's upsetting, but on this particular issue what, if anything, can we do?

In reality? Virtually nothing. The profit motive has very quickly destroyed any possibility of reasonable regulation of this technology. It has devolved into a who-gets-what-first mentality, and nothing good ever comes from that when dealing with murderous technology. Look at all the zeal and effort that went into nuclear technology. That was such a good idea! :dodgy:  Right...

If only the robotics industry had listened to a rather savvy futurist, Isaac Asimov, and adopted his Three Laws:

Quote:Source


The Three Laws of Robotics
(often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[1]
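The Three Laws above form a strict priority ordering: each law yields to the ones before it. As a toy sketch only (nothing like this appears in Asimov's fiction or in any real robotics system; every name here is made up for illustration), that precedence can be written as a chain of checks evaluated in order:

```python
# Toy illustration of the Three Laws as a priority-ordered rule check.
# A candidate action is described by hypothetical boolean flags; an
# action is permitted only if no law, taken in order, forbids it.

def permitted(action):
    """Return True if the action satisfies the Three Laws in priority order."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action["harms_human"] or action["allows_human_harm"]:
        return False
    # Second Law: must obey human orders, unless the order conflicts
    # with the First Law (in which case disobeying is allowed).
    if action["disobeys_order"] and not action["order_conflicts_first_law"]:
        return False
    # Third Law: must protect its own existence, unless self-protection
    # would conflict with the First or Second Laws.
    if action["endangers_self"] and not action["self_protection_conflicts"]:
        return False
    return True

# Example: entering a dangerous area to save a human. The Third Law
# (self-preservation) yields to the First, so the action is permitted.
rescue = {
    "harms_human": False, "allows_human_harm": False,
    "disobeys_order": False, "order_conflicts_first_law": False,
    "endangers_self": True, "self_protection_conflicts": True,
}
print(permitted(rescue))  # True
```

The point of the ordering is exactly what the stories exploit: the interesting cases are the ones where the flags conflict and a lower law must give way.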


[and also, from the same article, these salient excerpts relating to how robotics may progress in his books; it is very interesting how they seem quite a projection of what we are on the threshold of dealing with today]



By Asimov

Asimov's stories test his Three Laws in a wide variety of circumstances leading to proposals and rejection of modifications. Science fiction scholar James Gunn writes in 1982,
"The Asimov robot stories as a whole may respond best to an analysis on this basis: the ambiguity in the Three Laws and the ways in which Asimov played twenty-nine variations upon a theme".[14] While the original set of Laws provided inspirations for many stories, Asimov introduced modified versions from time to time.
First Law modified

In "Little Lost Robot" several NS-2, or "Nestor", robots are created with only part of the First Law.[1] It reads:
Quote:1. A robot may not harm a human being.

This modification is motivated by a practical difficulty as robots have to work alongside human beings who are exposed to low doses of radiation. Because their positronic brains are highly sensitive to gamma rays the robots are rendered inoperable by doses reasonably safe for humans. The robots are being destroyed attempting to rescue the humans who are in no actual danger but "might forget to leave" the irradiated area within the exposure time limit. Removing the First Law's "inaction" clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm and then decide not to do so.[1]

Gaia
is a planet with collective intelligence in the Foundation which adopts a law similar to the First Law, and the Zeroth Law, as its philosophy:
Quote:Gaia may not harm life or allow life to come to harm.


Zeroth Law added

Asimov once added a "Zeroth Law"—so named to continue the pattern where lower-numbered laws supersede the higher-numbered laws—stating that a robot must not harm humanity. The robotic character R. Daneel Olivaw was the first to give the Zeroth Law a name in the novel Robots and Empire;[15] however, the character Susan Calvin articulates the concept in the short story "The Evitable Conflict".

In the final scenes of the novel Robots and Empire, R. Giskard Reventlov is the first robot to act according to the Zeroth Law. Giskard is telepathic, like the robot Herbie in the short story "Liar!", and tries to apply the Zeroth Law through his understanding of a more subtle concept of "harm" than most robots can grasp.[16] However, unlike Herbie, Giskard grasps the philosophical concept of the Zeroth Law allowing him to harm individual human beings if he can do so in service to the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain but instead is a rule he attempts to comprehend through pure metacognition. Though he fails – it ultimately destroys his positronic brain as he is not certain whether his choice will turn out to be for the ultimate good of humanity or not – he gives his successor R. Daneel Olivaw his telepathic abilities. Over the course of many thousands of years Daneel adapts himself to be able to fully obey the Zeroth Law. As Daneel formulates it, in the novels Foundation and Earth and Prelude to Foundation, the Zeroth Law reads:
Quote:A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice.

Quote:Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?"

"Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."
— Foundation and Earth

A translator incorporated the concept of the Zeroth Law into one of Asimov's novels before Asimov himself made the law explicit.[17] Near the climax of The Caves of Steel, Elijah Baley makes a bitter comment to himself thinking that the First Law forbids a robot from harming a human being. He determines that it must be so unless the robot is clever enough to comprehend that its actions are for humankind's long-term good. In Jacques Brécard's 1956 French translation entitled Les Cavernes d'acier Baley's thoughts emerge in a slightly different way:
Quote:"A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!"[17]

Removal of the Three Laws

Three times during his writing career, Asimov portrayed robots that disregard the Three Laws entirely. The first case was a short-short story entitled "First Law" and is often considered an insignificant "tall tale"[18] or even apocryphal.[19] On the other hand, the short story "Cal" (from the collection Gold), told by a first-person robot narrator, features a robot who disregards the Three Laws because he has found something far more important—he wants to be a writer. Humorous, partly autobiographical and unusually experimental in style, "Cal" has been regarded as one of Gold's strongest stories.[20] The third is a short story entitled "Sally" in which cars fitted with positronic brains are apparently able to harm and kill humans in disregard of the First Law. However, aside from the positronic brain concept, this story does not refer to other robot stories and may not be set in the same continuity.

The title story of the Robot Dreams collection portrays LVX-1, or "Elvex", a robot who enters a state of unconsciousness and dreams thanks to the unusual fractal construction of his positronic brain. In his dream the first two Laws are absent and the Third Law reads "A robot must protect its own existence".[21]

Asimov took varying positions on whether the Laws were optional: although in his first writings they were simply carefully engineered safeguards, in later stories Asimov stated that they were an inalienable part of the mathematical foundation underlying the positronic brain. Without the basic theory of the Three Laws the fictional scientists of Asimov's universe would be unable to design a workable brain unit. This is historically consistent: the occasions where roboticists modify the Laws generally occur early within the stories' chronology and at a time when there is less existing work to be re-done. In "Little Lost Robot" Susan Calvin considers modifying the Laws to be a terrible idea, although possible,[22] while centuries later Dr. Gerrigel in The Caves of Steel believes it to be impossible.

The character Dr. Gerrigel uses the term "Asenion" to describe robots programmed with the Three Laws. The robots in Asimov's stories, being Asenion robots, are incapable of knowingly violating the Three Laws but, in principle, a robot in science fiction or in the real world could be non-Asenion. "Asenion" is a misspelling of the name Asimov which was made by an editor of the magazine Planet Stories.[23] Asimov used this obscure variation to insert himself into The Caves of Steel just like he referred to himself as "Azimuth or, possibly, Asymptote" in Thiotimoline to the Stars, in much the same way that Vladimir Nabokov appeared in Lolita anagrammatically disguised as "Vivian Darkbloom".

Characters within the stories often point out that the Three Laws, as they exist in a robot's mind, are not the written versions usually quoted by humans but abstract mathematical concepts upon which a robot's entire developing consciousness is based. This concept is largely fuzzy and unclear in earlier stories depicting very rudimentary robots who are only programmed to comprehend basic physical tasks, where the Three Laws act as an overarching safeguard, but by the era of The Caves of Steel featuring robots with human or beyond-human intelligence the Three Laws have become the underlying basic ethical worldview that determines the actions of all robots.
 
Things are too far out of control to stop the possibility that this outcome will be realized in our time.
#6
I've wondered at times if the Antichrist would be some sort of satanically possessed AI. Consider how powerful an AI could really become and the "wonders" it could perform in order to deceive men.
Blood of Christ, relief of the burdened, save us.

“It is my design to die in the brew house; let ale be placed in my mouth when I am expiring, that when the choirs of angels come, they may say, “Be God propitious to this drinker.” – St. Columbanus, A.D. 612
#7
(03-22-2018, 02:44 PM)GangGreen Wrote: I've wondered at times if the Antichrist would be some sort of satanically possessed AI. Consider how powerful an AI could really become and the "wonders" it could perform in order to deceive men.

Objects can indeed be inhabited by demonic spirits... even dolls. It's not such a long stretch to include devices and even computer systems, I suppose.
#8
My understanding of the Antichrist was that he would be a man, like all of us.
#9
(03-22-2018, 02:55 PM)Justin Alphonsus Wrote: My understanding of the Antichrist was that he would be a man, like all of us.

I don't think he'll be 'like all of us'. He won't be redeemable.

Chapter 13 of the Apocalypse does talk about a 'second beast', however, which could be a form of technology, something used to subjugate humanity into submission. The use of the identifier 'beast' may indicate that John had no vocabulary to better describe it.
#10
I don't get it.

Most everyone sees that artificial intelligence and robots are going to be extremely bad news for humanity, whether it be job replacement or something like Terminator, where by 2050 we get a Rise of the Machines dystopia.

And yet it's like everyone is stuck in thinking it is inevitable or something.

Just. Stop. Developing. This. Stuff....

But it's like the developers cannot help themselves. It can't end well.