Cyborg Soldiers, Artificial Intelligence, and Robotic Mass Surveillance May Be Here Sooner Than You Think

By Carolanne Wright

Contributing writer for Wake Up World

Straight out of the science fiction film The Terminator, a 72-page Pentagon document lays out the department’s plan for the future of combat and war, one that will utilize artificial intelligence (AI), robotics and information technology, as well as biotechnology.

Proponents of advanced technology, such as robot soldiers and artificial intelligence, argue that both can be made ethically superior to humans, drastically reducing, if not eliminating, atrocities such as rape, pillage and the razing of towns in fits of rage. Many in the science community are casting a wary eye toward this technology, however, warning that it could easily surpass human control, leading to unpredictable, even catastrophic, consequences.

Defense Innovation Initiative — The Future of War

The Department of Defense (DoD) has announced that the United States will enter a brave new world of automated combat in a little over a decade, in which wars will be fought entirely with advanced weaponized robotic systems. We’ve already had a glimpse of what’s to come with the use of drones. But, according to the DoD, we haven’t seen anything yet.

In a quest to establish “military-technological superiority”, the Pentagon ultimately has its sights set on monopolizing “transformational advances” in robotics, artificial intelligence and information technology — otherwise known as the Defense Innovation Initiative, a plan to identify and develop pioneering technological breakthroughs for use in the military.

Disturbingly, a new study from the National Defense University (a higher education institution funded by the Pentagon) has urged the DoD to take drastic action to avoid the downfall of US military might, even as the report warns that accelerating technological advances will “flatten the world economically, socially, politically, and militarily” and “could also increase wealth inequality and social stress.”

The NDU report explores several areas where technological advances could benefit the military. One is the mass collection of data from social media platforms, analyzed by artificial intelligence rather than by humans. Another is “embedded systems [in] automobiles, factories, infrastructure, appliances and homes, pets, and potentially, inside human beings, [where] the line between conventional robotics and intelligent everyday devices will become increasingly blurred.” Such systems would help the government monitor both individuals and entire populations, and “will provide detection and predictive analytics.”

Armies of “Kill Bots that can autonomously wage war” are also a real possibility as unmanned robotic systems are becoming increasingly intelligent and less expensive to manufacture. These robots could be placed in civilian life as well, to execute “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

To counteract public outcry over autonomous robots with the capacity to kill on their own, the authors recommend that the Pentagon be “highly proactive” in ensuring “it is not perceived as creating weapons systems without a ‘human in the loop.’”

Strong AI, which simulates human cognition — including self-awareness, sentience and consciousness — is just on the horizon, some say as early as the 2020s.

But not everyone is over the moon about these advances, especially where AI is concerned. Technology leaders, journalists and inventors alike are sounding the alarm about the potentially devastating consequences of AI that is allowed to flourish unchecked.

AI Technology — What Could Possibly Go Wrong?

As the DoD charges ahead with its plan to dominate the military and surveillance sphere with unbridled advances in technology, many are questioning the serious ramifications of such a path.

Journalist R. Michael Warren writes:

“I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.

Once they can do that, we face two serious issues.

First, how do we teach these creatures to tell right from wrong — in our own self defense?

Second, robots will self-improve faster than we slow evolving humans. That means outstripping us intellectually with unpredictable outcomes.” [source]

During a conference of AI experts in 1999, attendees were polled on when they thought a computer would pass the Turing test (convincing human judges, through conversation, that it too is human). The general view was that it was about 100 years away, and many believed it could never be achieved. Today, Kurzweil thinks we are already on the brink of intellectually superior computers.

British theoretical physicist and Cambridge University professor Stephen Hawking doesn’t mince words about the dangers of artificial intelligence:

“I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” He adds, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

At the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, Tesla founder Elon Musk issued a stark warning about the unregulated development of AI:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Furthermore, in a 2014 tweet, Musk warned: “We need to be super careful with AI. Potentially more dangerous than nukes.” The same year, he said on CNBC that he believes a Terminator-like scenario could actually come to pass.

Likewise, British inventor Clive Sinclair believes artificial intelligence will be the downfall of mankind:

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Microsoft billionaire Bill Gates agrees.

“I am in the camp that is concerned about super intelligence,” he says. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

That said, Gates’ Microsoft Research has devoted “over a quarter of all attention and resources” to artificial intelligence development, while Musk has invested in AI companies in order to “keep an eye on where the technology is headed”.

Related reading: AI Building AI – Is Humanity Losing Control Over Artificial Intelligence?

About the author:

Carolanne enthusiastically believes that if we want to see change in the world, we need to be the change. As a nutritionist, natural foods chef and wellness coach, Carolanne has encouraged others to embrace a healthy lifestyle of organic living, gratefulness and joyful orientation for over 13 years.

Through her website Thrive-Living.net she looks forward to connecting with other like-minded people from around the world who share a similar vision. Follow Carolanne on Facebook, Twitter and Pinterest.

AI is Being Used to Create Fake Celebrity Porn, Because of Course it Is

By Tom Pritchard

AI is one of the big new things in the tech industry, with an immense amount of hype surrounding the concept of machine learning and utilising it for the greater good of mankind. And then there are the people using it to make more convincing fake celebrity porn, stitching celebrity faces into porn videos with greater accuracy than ever before.

The tool was developed by Reddit user deepfakes, using publicly accessible tools and a face swap algorithm that he developed himself. While the final result isn’t perfect, the tool can pull images from YouTube clips and Google and use AI to create a somewhat convincing video of a particular celebrity.

It doesn’t have to be a celebrity either. It could be a random person you saw on social media, or someone you know. As with the ridiculously accurate lip-syncing technology we saw earlier this year, the prospect of how this might be used in future is horrifying. Particularly since The Next Web points out that deepfakes’ algorithm is mostly a combination of readily available tools like TensorFlow and Keras, and it wouldn’t take a genius to figure out how to stick them together.
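For the curious, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design commonly attributed to these face-swap tools, written with Keras since that library is named above. Every layer size, name and the working resolution is an illustrative assumption, not the actual deepfakes code:

```python
from tensorflow.keras import layers, Model

IMG = 64  # assumed working resolution for aligned face crops

def build_encoder():
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    z = layers.Dense(256)(layers.Flatten()(x))  # shared latent "face code"
    return Model(inp, z, name="encoder")

def build_decoder(name):
    z = layers.Input((256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")  # learns to reconstruct person A
decoder_b = build_decoder("decoder_b")  # learns to reconstruct person B

# Two autoencoders share one encoder. Training each on its own identity
# forces a common latent space, so at inference time a frame of A can be
# encoded and decoded with decoder_b to produce the "swapped" face.
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")
```

Aligning the face crops beforehand and blending the generated face back into the original video frame are separate steps, which is part of why the published clips still show visible glitches.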

So far deepfakes has produced fake GIFs featuring Gal Gadot, Emma Watson, Aubrey Plaza and Maisie Williams. They’re obvious fakes, with glitches occurring throughout the clips, but they look advanced enough to make this a pretty scary prospect for the future. Particularly since AI researcher Alex Champandard told Motherboard that these clips could take a consumer-level graphics card only a few hours to produce. Apparently a CPU could also do the job, but it would take days to complete.

Obviously an algorithm would need a plentiful supply of images to stitch something basic together, which might not be as feasible for non-public figures. But given all the high-profile hacks over the past few years, coupled with the abundance of pictures some people love to post on social media, the source material might not be as difficult to find as you’d hope. [Motherboard via The Next Web]

New Robots Imagine Future Actions To Figure Out How To Manipulate Objects They’ve Never Encountered Before

Credit: UC Berkeley

By Alton Parrish | Ineffable Island

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

The robot that knows its future

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team will perform a demonstration of the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
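The actual DNA model predicts how pixels move rather than regressing raw frames, and the published architecture is considerably more involved. Purely as a hedged sketch of the general idea of action-conditioned convolutional recurrent video prediction, the following toy Keras model appends each action to the image as extra channels and predicts the next frame; every shape and size here is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

H = W = 64        # assumed camera resolution
ACTION_DIM = 4    # e.g. planar gripper motion; an assumption

frames = layers.Input((None, H, W, 3))       # sequence of camera images
actions = layers.Input((None, ACTION_DIM))   # action applied at each step

def tile_actions(args):
    # Broadcast each action over the image plane and append it as extra
    # channels, so the recurrent model can condition on what the arm did.
    f, a = args
    a = tf.tile(a[:, :, None, None, :], [1, 1, H, W, 1])
    return tf.concat([f, a], axis=-1)

x = layers.Lambda(tile_actions)([frames, actions])
x = layers.ConvLSTM2D(32, 5, padding="same", return_sequences=True)(x)
# One predicted RGB frame per input step, i.e. the next observation.
next_frames = layers.TimeDistributed(
    layers.Conv2D(3, 3, padding="same", activation="sigmoid"))(x)

predictor = Model([frames, actions], next_frames)
predictor.compile(optimizer="adam", loss="mse")  # trained on robot play data
```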

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

Credit: UC Berkeley video by Roxanne Makasdjian and Stephen McNally

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.
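The article does not spell out how those motions are chosen. Work in this area commonly pairs the learned predictor with sampling-based model-predictive control, such as the cross-entropy method; the sketch below assumes a hypothetical predict_fn wrapping the learned video model and a cost_fn scoring predicted outcomes against the goal:

```python
import numpy as np

def plan_actions(predict_fn, cost_fn, horizon=5, action_dim=4,
                 n_samples=200, n_elite=20, n_iters=3):
    """Cross-entropy method over action sequences, a common visual-MPC loop."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences around the current distribution.
        samples = mean + std * np.random.randn(n_samples, horizon, action_dim)
        # "Imagine" the outcome of each sequence with the learned model,
        # then score it, e.g. by how close a tracked object pixel ends up
        # to its goal position in the predicted frames.
        costs = np.array([cost_fn(predict_fn(seq)) for seq in samples])
        # Refit the sampling distribution to the lowest-cost (elite) sequences.
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # execute the first action, observe, then replan
```

In the Berkeley setup the cost is typically defined over the predicted positions of user-designated pixels; that detail is abstracted into cost_fn here.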

“Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime. We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills,” said Frederik Ebert, a graduate student in Levine’s lab who worked on the project.

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. In contrast to conventional computer vision methods, which require humans to manually label thousands or even millions of images, building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously. Indeed, video prediction models have also been applied to datasets that represent everything from human activities to driving, with compelling results.

“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”

The Berkeley scientists are continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, as well as developing more sophisticated methods by which robots can collect more focused video data for complex tasks such as picking and placing objects, manipulating soft and deformable objects such as cloth or rope, and assembly.

Read more great articles at Ineffable Island.

Facebook’s New Suicide Detection A.I. Could Put Innocent People Behind Bars

(Activist Post) Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.

Imagine a judge forcing you to be jailed (sorry, I meant hospitalized) because a computer program found your comment(s) ‘troubling’.

You can stop imagining: this is really happening.

A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) will use pattern recognition to scan posts and live videos, contacting first responders if it deems a person’s comments to express troubling suicidal thoughts.

Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info. (Source)
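Facebook has not published how its detection model works, so any specifics are guesswork. Purely to illustrate what text “pattern recognition” can mean in practice, here is a toy bag-of-words risk classifier; the example posts, labels and the idea of a routing threshold are all hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = previously flagged by reviewers).
posts = ["I can't take this anymore", "great game last night",
         "nobody would miss me", "selling my old couch"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A score above some threshold would route the post to human moderators.
# The article's objection is precisely that scores like this are noisy.
print(model.predict_proba(["I'm done with everything"])[0][1])
```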

A private corporation deciding who goes to jail? What could possibly go wrong?

Facebook’s A.I. automatically contacts law enforcement

Facebook is using pattern recognition and moderators to contact law enforcement.

Facebook is ‘using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.’

Dedicating more reviewers from our Community Operations team to review reports of suicide or self harm. (Source)

Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.

Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community. (Source)

Why are police conducting wellness checks for Facebook? Are private corporations running police departments?

Not only do social media users have to worry about a spying A.I., but now they also have to worry about thousands of spying Facebook ‘Community Operations’ people who are all too willing to call the police.

Our Community Operations team includes thousands of people around the world who review reports about content on Facebook…our team reviews reported posts, videos and live streams. This ensures we can get the right resources to people in distress and, where appropriate, we can more quickly alert first responders. (Source)

Should we trust pattern recognition to determine who gets hospitalized or arrested?

Pattern recognition is junk science

A 2010 CBS News article warns that applying pattern recognition to human behavior is junk science. The article shows how companies use nine rules to convince law enforcement that pattern recognition is accurate.

A 2016 Forbes article used words like ‘nonsense, far-fetched, contrived and smoke and mirrors’ to describe applying pattern recognition to human behavior.

Cookie-cutter ratios, even if scientifically derived, do more harm than good. Every person is different. Engagement is an individual and unique phenomenon. We are not widgets, nor do we conform to widget formulas. (Source)

Who cares if pattern recognition is junk science, right? At least Facebook is trying to save lives.

Wrong.

Using an A.I. to determine who might need to be hospitalized or incarcerated can and will be abused.