
"Synthetic Intelligence and the Transmutation of Humankind
--A Roadmap to the Singularity and Beyond"


by Wes Penre, 2016

[http://wespenre.com]
 




 

 

Chapter 8:
"Opposition" in Academia and Science

 

Voices against Jade Helm ’15

Transhumanists see nature as an obstacle that man must overcome. We could say that Transhumanism embraces the “physical realm,” while those who think along my lines, and those of most readers, embrace the “spiritual realm.” According to Transhumanism, nature has flaws, but most importantly, nature is slow to evolve and is the world where everything inevitably dies, in contrast to Transhumanism itself, which claims to offer a world of eternal life, where everything always lives. Does Transhumanism remind you of Satanism, but perhaps also of alchemy?

Death must be overcome: death is scary, death is a mystery, and death is loss of information. The followers of the Transhumanist movement do not understand that knowledge is automatically stored in the mass consciousness and the so-called Akashic Records. The Overlords know this, of course, but they need to keep letting death be a mystery to people: something to fear. Transhumanism is based on fear of death, and this fear button is what they push to recruit their followers. Thus, the Controllers will never tell the masses about the afterlife. We are given hints about it, but by the same token, we are also given hints that there is no afterlife. The Controllers need to keep the yin and yang concept going in order to divide and conquer.

Apparently, certain people in academia and science have begun to oppose the AI agenda. They can see the ultimate danger of Transhumanism and AI. Around the time the infamous Jade Helm 15 happened (more about that later), thousands of scientists protested against Transhumanism and Artificial Intelligence. Two of the most famous were Stephen Hawking and Noam Chomsky, who signed an open letter calling for a ban on killer robots.[1] The scientists who protested were of the opinion that if we let the AI movement continue on its current course, robots will eventually take over and eradicate mankind. They said this is more or less inevitable because of the superior intelligence robots will have; robots will consider humans obsolete and terminate us. These scientists appeared extremely worried.

Some might say that these scientists are only playing a part in the AI game, presenting the opposite side of the Movement in order to keep yin and yang in balance. This, I believe, is probably true to a large extent. Perhaps Hawking is on the right track, now partly realizing what is actually going on behind the scenes, or perhaps he is just playing his role in the agenda.

Professor Hawking and a number of other protesting scientists and academics have been part of creating this future nightmare, and at one point, perhaps, they realized that what they had been involved in was destructive. They became concerned that, if not stopped, the AI movement might lead to the end of humankind. They might have believed that they needed to make up for the damage they had unwittingly done; hence, they started protesting against the Movement. Or, again, it’s all just make-believe.

Stephen Hawking sits on the Board of Directors of the Future of Life Institute, a group consisting of some of the greatest minds of our time, whose aim is to mitigate “existential risks facing humanity.” They now warn us of the danger of starting a “military AI arms race.” Other members of the Board are Elon Musk and Steve Wozniak, Apple’s co-founder.

There was recently a TV series called Colony, about a totalitarian society in which humans are fenced into “colonies,” with huge brick walls surrounding each colony. A malevolent alien race had fairly recently landed and segregated humans by creating these colonies, and depending on where you were located at the moment the aliens took over, that’s where you were stuck from then on. This meant that you could be separated from friends and family, who might have been designated to live in another colony, with no communication between the colonies. No one was allowed outside the Walls, or they would be killed or sent to a mysterious place called The Factory. The aliens kept themselves hidden but were in contact with some key members of mankind, and they had man surveil man in a super-strict military fashion. In fact, they used drones that flew around everywhere to surveil people; the drones kept close track of everybody and killed them with deadly light-beam weapons when programmed to do so.

Colony was very popular, and new episodes are being made as I’m writing this. This series tells us things about our future that the Elite want us to know; the drones are one part of it. With these drones in mind, the media outlet The Independent writes the following with regard to the protesting scientists and their concerns:

These robotic weapons may include armed drones that can search for and kill certain people based on their programming, the next step from the current generation of drones, which are flown by humans who are often thousands of miles away from the warzone.

The letter says: "AI technology has reached a point where the deployment of such systems is - practically if not legally - feasible within years, not decades."

It adds that autonomous weapons "have been described as the third revolution in warfare, after gunpowder and nuclear arms".

It says that the Institute sees the "great potential [of AI] to benefit humanity in many ways", but believes the development of robotic weapons, which it said would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing, is not.

Such weapons do not yet truly exist, but the technology that would allow them to be used is not far away. Opponents, like the signatories to the letter, believe that by eliminating the risk of human deaths, robotic weapons (the technology for which will become cheap and ubiquitous in coming years) would lower the threshold for going to war – potentially making wars more common.[2]

I believe this to be true, but AI is not only a danger when it comes to warfare; it is a danger to humankind as a whole. Barbara Marciniak’s Pleiadians say that the human race is on the brink of extinction at this very moment and that a new cycle will begin, in which a new type of human will emerge from the extinct Homo sapiens sapiens; this new human will be AI.[3] They say that we, who have the knowledge, need to create new timelines in which we don’t have to participate in this insanity, and that this is our only way out. If we don’t, we too will be sucked in, because it’s so easily done and very cleverly set up.

In 2015, the UK opposed a ban on killer robots at a UN conference, claiming it saw no need for such a prohibition because the UK was not producing such weapons.[4]

This immediately contradicts what is actually going on in the UK. The UN conference took place in 2015, but already in 2014, another Independent article reported that drones were filling the British skies (my emphasis):

The number of drones operating in British airspace has soared, with defence contractors, surveillance specialists, police forces and infrastructure firms among more than 300 companies and public bodies with permission to operate the controversial unmanned aircraft.

[…]

Other organisations able to operate drones in UK airspace include the Defence Science and Technology Laboratory, a research arm of the Ministry of Defence, and Marlborough Communications, which supplies UAVs [unmanned aerial vehicles] and other equipment to the British military. The Home Office and Defra have used drones, as have 11 other state bodies.[5]

One of the biggest lies, as I see it, is that because Earth is a “genetic library,” anything goes: beings from other worlds or dimensions have the right to come here and experiment, and we are told that there is nothing we can do about it, according to the Pleiadians. This begs the question: do we even have the right to intervene in our own future?

This idea is shoveled into our consciousness by beings with ulterior motives. It is true that Earth is a Living Library, but this Living Library was nearly completed, perhaps billions of years ago, when the Queen of the Stars and Her Helpers set up their Experiment here. This didn’t mean that the genetic library couldn’t be adjusted if needed; however, that was only allowed to be done by the Ancient Races, i.e., the Orion Queen and Her genetic engineers. Never was there an intention to let a band of outlaws come to Earth, kill and chase away the Original Creators, and take over the Library as they pleased, only to transform an already evolved race into slave labor. Never was it intended for these imposters to create an entirely new species, inferior to the existing one, for the creators’ own selfish purposes, thus nullifying the entire purpose of the original Living Library. Moreover, it was never intended that this conquering gang of criminal star races should set up a frequency band, or quarantine, around Earth and our solar system in order to decide who was allowed here and who was not.

Stephen Hawking has said,

"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."[6]

As one who has contributed to AI and the Singularity, is he now getting cold feet, while still wanting to hang on to the idea that AI can be beneficial if handled with care? Hawking may be a genius, but he’s still a scientist at heart. Also, when someone who does not have a criminal mind realizes that they have participated in something very destructive, they need to justify their wrongdoings in order to make themselves feel better, unless they want to take full responsibility. Taking responsibility is sometimes very difficult because we need to examine ourselves, but it’s nonetheless necessary for all of us. Hopefully, Hawking is on his way, but I’m not so sure (we will scrutinize Hawking more in the last section of this chapter). Understanding human behavior means that we can have more compassion for those who try to become better and less naïve.

Other Concerned Voices from Academia

It is certainly not only Prof. Hawking who is speaking out against AI. Virtually everything that has to do with AI is extremely dangerous to our species, and it will wipe out humanity as we know it if it is allowed to progress. This is not a science-fiction scenario; it is inevitable. We are talking about Overlord technology, and all their technology benefits their purposes, not ours.


Multimedia 8-1: Scientists speak to the UN about
dangers of Artificial Super Intelligence (ASI).

In 2015, there was a UN meeting attended by anti-Singularitists, including MIT physicist Max Tegmark and the founder of Oxford’s Future of Humanity Institute, Nick Bostrom. They talked in depth about the possible dangers of Artificial Super Intelligence.[7] They postulated that in the beginning, mankind could benefit from these new technologies, but that in the long term, AI would become an uncontrollable machine whose actions could not be anticipated by anyone on this planet.

Although prominent voices are being raised against AI and the Singularity, they still have little to no bearing on the final decision regarding whether the AI project should continue. The ball is rolling fast, and it can’t be stopped unless enough people refuse to cooperate by not buying any of the smart products on the market, whatever these smart products might be in the near future. In addition, most people on this planet have nanobots in their bloodstream because of chemtrails, vaccines, medications, and other sources, and these can be activated at any time. In order to resist this, we must have the knowledge, inner strength, and high consciousness necessary to not let these nanobots activate. It can be done, but it does require a focused person with high integrity and awareness.

Another outspoken critic of AI is Apple co-founder Steve Wozniak, who says:

"Computers are going to take over from humans, no question," he told the outlet. Recent technological advancements have convinced him that writer Raymond Kurzweil – who believes machine intelligence will surpass human intelligence within the next few decades – is onto something.

"Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people," he said. "If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

[…]

"Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that …"[8]

It is interesting to note that Apple’s virtual assistant for the iPhone, Siri, uses Artificial Intelligence technology to anticipate users’ needs.[9] It seems as if Wozniak is speaking with a forked tongue. Apart from Hawking, Steve Wozniak is another person I would investigate. He is a Freemason, and his wife is a member of the female division of Freemasonry, the Order of the Eastern Star.[10] Elon Musk would also be on my radar.

Elon Musk, the CEO of Tesla, and (as we already know) Bill Gates of Microsoft have also raised their voices against AI. Although Gates is supposedly still on the fence on this issue, Musk is perhaps the more outspoken antagonist of AI, but his motives might be questioned. He has called AI the “biggest existential threat”[11] to mankind, and it’s hard to disagree with that. Although he is an AI antagonist, he is still an investor in DeepMind and Vicarious, two AI ventures. Why? He claims that,

“…it’s not from the standpoint of actually trying to make any investment return. I like to just keep an eye on what’s going on…nobody expects the Spanish Inquisition, but you have to be careful.”[12]

In a Reddit Ask Me Anything, Bill Gates agrees with Musk:

"I agree with Elon Musk and some others on this and don't understand why some people are not concerned," he wrote.[13]


Fig. 8-1: Apple’s Steve Wozniak.

As I’ve mentioned before, and as Dr. Kurzweil also mentions in his books, lectures, and interviews, the Controllers want to hear both positive and negative voices on AI and the Singularity, and even though not many protesting voices are being raised by the public, there are many in academia and in science who speak out against it. Much of it is just a dog-and-pony show, but it still has some value; people who are interested in finding out more about this can do so and at least take an individual standpoint. Remember that every individual’s standpoint on this is very important; the more people who make up their minds, the greater chance we have to stop this on a global scale.

Remember, as always, to scrutinize everybody in a higher societal position, even those who seem to be speaking our language. This also includes Prof. Stephen Hawking.

Stephen Hawking

Stephen Hawking is perhaps the world’s most well-known popular scientist today. Not only is he brilliant, but people also admire him for having accomplished so much despite his severely disabled body. People often read his statements when new discoveries are made in the fields of physics, astrophysics, and astronomy. The question is how interested people are in listening to his warnings when it comes to AI. There are people who might be interested, but I believe very few have a distinct opinion about it, because they think they know too little, they have other things to attend to, or they count on the Government to take care of it, believing the Government only works in the best interest of the people. Whether or not I am correct about Hawking when I suggest that it is guilt that motivates him to come forward in a big way, he has made his position clear on many occasions. In the same article in which Wozniak, Musk, and Gates are mentioned (above), Hawking lines up with them:

… physicist Stephen Hawking has warned that AI could eventually "take off on its own." It's a scenario that doesn't bode well for our future as a species: "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," he said.[14]

In an Ask Me Anything session on Reddit, Prof. Hawking replied to a question about robots becoming violent toward humans:

“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.

“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”[15]

What bothers me about Hawking’s comments is that he is, for some strange reason, not taking the Super Brain Computer (SBC) into consideration. It is well known that the SBC is a real project and that it is being built to connect humanity to the new virtual reality. Obviously, Prof. Hawking must know about this.

Moreover, Prof. Hawking warns that if AI becomes more intelligent than humans, it will also be able to enhance its own intelligence as it pleases, and the difference in intelligence between an AI and a human will then be greater than that between a human and a snail.[16]

Hawking still believes that we can create benevolent AI if we are careful about what our goals are. He is afraid of the undirected AI that is under development today; instead, he wants us to create only beneficial AI. He wants us to start doing that today rather than tomorrow, before it’s too late and computers have become too clever for us humans to handle. What Hawking fails to address is that even if we set a goal to create only beneficial AI, power-hungry psychopaths could quickly turn it into something much more malevolent, and they wouldn’t have any problems infiltrating the entire project and taking it over. Again, Hawking is a smart guy; he must know Human Power Hunger 101. He may not know the rest of the story about the ET connection, but what he’s suggesting is quite naïve for someone with such an academic background.

Instead, it seems as if the famous scientist is playing his role in the agenda, wittingly or unwittingly, by on the one hand warning people of the downsides of AI and on the other hand promoting the idea that we need to focus on getting into space as fast as science permits.[17] In doing so, he puts people’s focus on the notion that we must colonize space as fast as we can in order to save and expand our species, which is exactly what Dr. Kurzweil promotes. And this is a key aspect of the AIF’s agenda.


[2] Ibid.

[3] Sources: an array of Pleiadian lectures, spanning 2014-2016.

[4] The Independent, April 14, 2015, “UK government backs international development of ‘killer robots’ at UN.”

[7] L J Vanier, Oct. 27, 2015, “Dangers of Artificial Super Intelligence.”

[12] Ibid.

[13] Ibid.

[14] Ibid.

[16] Ibid.

[17] BBC News, Jan. 19, 2016, “Hawking: Humans at risk of lethal ‘own goal’.”

 
