Cyborg Soldiers, Artificial Intelligence, and Robotic Mass Surveillance May Be Here Sooner Than You Think

By Carolanne Wright

Contributing writer for Wake Up World

Straight out of the science fiction film The Terminator, a 72-page Pentagon document lays out the military's plan for the future of combat and war, which will utilize artificial intelligence (AI), robotics and information technology, as well as biotechnology.

Proponents of advanced technology — such as robot soldiers and artificial intelligence — argue both can be made ethically superior to humans, so that atrocities such as rape, pillaging or the destruction of towns in fits of rage would be drastically reduced, if not eliminated. Many in the science community are casting a wary eye toward this technology, however, warning that it could easily slip beyond human control, leading to unpredictable — and even catastrophic — consequences.

Defense Innovation Initiative — The Future of War

The Department of Defense (DoD) has announced the United States will be entering a brave new world of automated combat in a little over a decade, where wars will be fought entirely with advanced weaponized robotic systems. We’ve already had a glimpse of what’s to come with the use of drones. But, according to the DoD, we haven’t seen anything yet.

In a quest to establish “military-technological superiority”, the Pentagon ultimately has its sights set on monopolizing “transformational advances” in robotics, artificial intelligence and information technology — otherwise known as the Defense Innovation Initiative, a plan to identify and develop pioneering technological breakthroughs for use in the military.

Disturbingly, a new study from the National Defense University — a higher education institution funded by the Pentagon — has urged the DoD to take drastic action in order to avoid the downfall of US military might, even though the report also warns that while accelerating technological advances will “flatten the world economically, socially, politically, and militarily,” they “could also increase wealth inequality and social stress.”

The NDU report explores several areas where technological advances could benefit the military — one of which is the mass collection of data from social media platforms, analyzed by artificial intelligence instead of humans. Another is “embedded systems [in] automobiles, factories, infrastructure, appliances and homes, pets, and potentially, inside human beings, [where] the line between conventional robotics and intelligent everyday devices will become increasingly blurred.” These systems will help the government monitor both individuals and whole populations, and “will provide detection and predictive analytics.”

Armies of “Kill Bots that can autonomously wage war” are also a real possibility as unmanned robotic systems are becoming increasingly intelligent and less expensive to manufacture. These robots could be placed in civilian life as well, to execute “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

To counteract public outcry about autonomous robots having the capacity to kill on their own, the authors recommend the Pentagon be “highly proactive” in establishing that “it is not perceived as creating weapons systems without a ‘human in the loop.’”

Strong AI, which simulates human cognition — including self-awareness, sentience and consciousness — is just on the horizon, some say as early as the 2020s.

But not everyone is over the moon about these advances, especially where AI is concerned. Leaders in the field of technology, journalists and inventors are all sounding the alarm about the devastating consequences of AI technology that’s allowed to flourish unchecked.

AI Technology — What Could Possibly Go Wrong?

As the DoD charges ahead with its plan to dominate the military and surveillance sphere with unbridled advances in technology, many are questioning the serious ramifications of such a path.

Journalist R. Michael Warren writes:

“I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.

Once they can do that, we face two serious issues.

First, how do we teach these creatures to tell right from wrong — in our own self defense?

Second, robots will self-improve faster than we slow evolving humans. That means outstripping us intellectually with unpredictable outcomes.” [source]

During a conference of AI experts in 1999, attendees were polled on when they thought a computer would pass the Turing test (convincingly passing for a human in conversation). The general consensus was about 100 years away, and many believed it could never be achieved. Today, Kurzweil thinks we are already on the brink of intellectually superior computers.


British theoretical physicist and Cambridge University professor Stephen Hawking doesn’t mince words about the dangers of artificial intelligence:

“I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” He adds, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

At the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, Tesla founder Elon Musk issued a stark warning about unregulated development of AI:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Furthermore, in a 2014 tweet, Musk wrote: “We need to be super careful with AI. Potentially more dangerous than nukes.” The same year, he said on CNBC that he believes a Terminator-like scenario could actually come to pass.

Likewise, British inventor Clive Sinclair believes artificial intelligence will be the downfall of mankind:

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Microsoft billionaire Bill Gates agrees.

“I am in the camp that is concerned about super intelligence,” he says. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

That said, Gates’ Microsoft Research has devoted “over a quarter of all attention and resources” to artificial intelligence development, whereas Musk has invested in AI companies in order to “keep an eye on where the technology is headed”.

Related reading: AI Building AI – Is Humanity Losing Control Over Artificial Intelligence?

About the author:

Carolanne enthusiastically believes if we want to see change in the world, we need to be the change. As a nutritionist, natural foods chef and wellness coach, Carolanne has encouraged others to embrace a healthy lifestyle of organic living, gratefulness and joyful orientation for over 13 years.

Through her website Thrive-Living.net she looks forward to connecting with other like-minded people from around the world who share a similar vision. Follow Carolanne on Facebook, Twitter and Pinterest.


Mining With Energy From Humans – Is it Really Possible?

Is it truly possible to harness energy from the human body to power cryptocurrency mining rigs?

A Netherlands-based technology company, Speculative.Capital, has pioneered a project that explores the possibility of harnessing energy from idle human subjects.

To do so, the company created body suits that convert the wearer’s body heat into electricity, which is used to power computers that are mining cryptocurrency.

According to their website, 37 people were involved in the project. The concept is pretty simple: a subject lies down for a few hours while the body suit harvests energy from their body heat.

The technology is pretty nifty: small thermoelectric generators harvest the temperature differential between the subject’s body and the ambient temperature of the room. The electricity generated is then used to power mining rigs, which mined recently created cryptocurrencies expected to grow in value.
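For a sense of scale, here is a rough sketch of the arithmetic behind a body-heat thermoelectric generator. Every figure in it (Seebeck coefficient, module count, internal resistance, temperature differential, modules per suit) is a generic, assumed value for illustration, not a number taken from Speculative.Capital's suits.

```python
# Rough, illustrative estimate of the power a body-heat thermoelectric
# generator (TEG) might produce. Every number below is an assumption for
# this sketch, not a figure from the Speculative.Capital project.

SEEBECK_PER_COUPLE = 0.0002   # volts per kelvin per thermocouple (typical bismuth telluride)
COUPLES_PER_MODULE = 127      # thermocouples in a common small TEG module
MODULE_RESISTANCE = 3.0       # ohms, internal resistance of one module

def module_power_watts(delta_t_kelvin: float) -> float:
    """Matched-load output of one TEG module for a given temperature difference."""
    open_circuit_volts = SEEBECK_PER_COUPLE * COUPLES_PER_MODULE * delta_t_kelvin
    return open_circuit_volts ** 2 / (4 * MODULE_RESISTANCE)

# Skin at roughly 33 C against a 21 C room gives about a 12 K differential.
delta_t = 12.0
modules_per_suit = 100        # assumed number of modules sewn into one suit

per_module = module_power_watts(delta_t)
per_suit = per_module * modules_per_suit
print(f"One module:   {per_module * 1000:.1f} mW")
print(f"One suit:     {per_suit * 1000:.0f} mW")
print(f"37 subjects:  {per_suit * 37:.1f} W")
```

Even with generous assumptions the output lands in the range of watts per person, not hundreds of watts, which is consistent with the modest totals reported below.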

In total, the subjects provided enough power for the computers to mine for 212 hours – nearly nine days – and they claim to have unlocked 16,594 coins during that time.

The chosen cryptocurrencies were Vertcoin, StartCOIN, Dash, Lisk, Litecoin and Ethereum. Vertcoin and StartCOIN accounted for the majority of coins unlocked, while comparative crypto heavyweights Litecoin and Ethereum were the least mined coins – given how much harder they have become to mine.

During that 212-hour period, the 37 subjects produced a combined 127,210 milliwatts (about 127 watts) of power.

Let’s make some assumptions here

If you have an Nvidia GTX 1060 6GB graphics card in your computer, you can expect to get a hashrate of 19 MH/s at 80 watts when mining Ethereum – going by data from 1stminingrig.com.

Going by CryptoCompare’s calculations at the time, you would only be able to mine 0.002487 ETH a day – and that is using all 127,210 milliwatts (about 127 watts) of the power harnessed by the body suits.
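As a quick sanity check, the comparison above can be reproduced with a few lines of arithmetic. The GPU figures are the ones quoted in the text; the yield constant is simply back-derived from the 0.002487 ETH/day figure and shifts constantly with network difficulty, so treat all of it as illustrative only.

```python
# Back-of-the-envelope version of the comparison in the text: how much Ethereum
# hashrate could 127 W of body-heat power drive? GPU figures come from the text
# (19 MH/s at 80 W for a GTX 1060 6GB); the daily yield is the article's
# CryptoCompare estimate and will not match today's network difficulty.

GPU_HASHRATE_MH = 19.0          # MH/s per GTX 1060 6GB
GPU_POWER_W = 80.0              # watts drawn by that card while mining
BODY_HEAT_W = 127.21            # total power from the 37 body suits

cards_powered = BODY_HEAT_W / GPU_POWER_W          # about 1.6 cards' worth of power
hashrate_mh = cards_powered * GPU_HASHRATE_MH      # about 30 MH/s

ETH_PER_DAY = 0.002487                             # article's estimate for that power budget
eth_per_mh_per_day = ETH_PER_DAY / hashrate_mh

print(f"Cards powered: {cards_powered:.2f}")
print(f"Hashrate:      {hashrate_mh:.1f} MH/s")
print(f"Implied yield: {eth_per_mh_per_day:.2e} ETH per MH/s per day")
```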

While the idea is admirable, quirky and exciting, it seems like far too much effort for too little reward.

However, projects such as these push the boundaries of technology and expand the limits of what the human body is capable of, and how we view and explore the way we harness and produce energy in the future.

Whenever projects like these are undertaken and results are published, people are quick to debunk and belittle the work that has been done. While the project may not have produced nearly enough energy to mine extraordinary amounts of cryptocurrency, it is an alternative, green way of powering the miners needed to maintain a blockchain.

Why not use the sun?

Speculative.Capital’s project does make one wonder what other alternatives there are for powering mining rigs – especially for hobby miners at home.

The easiest, and probably most accessible option is solar power – if you live somewhere sunny.

Solar panels are easy to obtain and set up, although you will need an inverter and batteries to store their power. But given a steady supply of sunlight, you could easily produce enough energy to power a home-built mining rig.

Going by these calculations on solarpowerrocks.com, an average solar panel is rated at around 250 watts – roughly double the 127 watts our friends in the Netherlands managed. With four hours of full sun, a single panel will generate about 1,000 watt-hours of energy per day.
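Here is the same comparison in consistent units – a small sketch using the panel rating and sun hours assumed above; real output varies with location, weather and panel orientation.

```python
# Solar panel vs body suits, in consistent units. The 250 W rating and four
# full-sun hours are the assumptions used in the text, not measured values.

PANEL_RATED_W = 250.0       # rated output of one average residential panel
FULL_SUN_HOURS = 4.0        # assumed hours of full-strength sun per day
SUITS_OUTPUT_W = 127.21     # combined output of the 37 body suits

panel_energy_wh = PANEL_RATED_W * FULL_SUN_HOURS   # about 1,000 Wh (1 kWh) per day
suits_energy_wh = SUITS_OUTPUT_W * 24              # if the suits ran around the clock

print(f"Panel power in full sun:      {PANEL_RATED_W:.0f} W  (suits: {SUITS_OUTPUT_W:.0f} W)")
print(f"Panel energy per day:         {panel_energy_wh:.0f} Wh")
print(f"Suits' energy per day (24 h): {suits_energy_wh:.0f} Wh")
```

A single panel roughly doubles the suits' instantaneous power whenever the sun is out, and scaling up means adding panels rather than volunteers.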

At the end of the day, the efficiency of their chosen method matters not. What is imperative is that we look for cheaper and cleaner energy sources to power the power-hungry mining industry that continually verifies the Blockchain of cryptocurrencies.

As it stands, mining operations worldwide cumulatively consume more power than a number of individual African countries.

Blockchain and cryptocurrencies promise decentralized and anonymous transactional services to the common man – but we need to be conscious of the effect they have on power grids worldwide. If we can find better solutions – we should be using them.

The likes of Speculative.Capital and other technology companies are blazing a new trail for the cryptocurrency space, and it will be a massive triumph if more miners look to alternative power sources in the future.

At the time of publishing, Cointelegraph had not received a reply to its interview request from Speculative.Capital.

ICO to Build Next Generation AI Raises $36 Million in 60 Seconds

SingularityNET raised $36 mln in one minute, completely selling out of its native AGI tokens. While this is an enormous amount of money to raise in an incredibly short period of time, it’s somewhat unsurprising considering demand. The company asserts that the issue was massively oversubscribed, with 20,000 people registered to participate, seeking to buy $361 mln worth of tokens.

The company reduced the number to a more manageable level, according to its press release, by:

“[Screening] all applicants using layers of algorithms, in addition to manual review, to comply with global KYC/AML regulations. This reduced the pool of contributors to 5,000, but also set a new standard for fundraising via Blockchain with respect to global legislation.”

Artificial general intelligence

SingularityNET aims to create a decentralized marketplace of AIs, where each AI can interact with one another (and pay one another) as needed to solve customers’ problems. Founder Ben Goertzel gave an example:

“If you need a document summarized, as a user you can put a request into SingularityNet…

You may get bids from twenty different document summary nodes…and you may choose one with the right balance of reputation and price.

But now that document summary node if it hits something in the document it can’t deal with, it can outsource that…if the document summary node that you’re paying…hits an embedded video it can outsource that to a video summarizing node and it can then pay it some fraction of the money it was paid. Or, if it sees a quote in Russian…it can outsource that …to a Russian to English translation node that can do that translation, then send it back to the document summary node.”
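To make that flow concrete, here is a purely hypothetical sketch of the bid-choose-outsource-pay pattern Goertzel describes. None of the class or method names below come from SingularityNET's actual SDK or smart contracts, and the fee split and reputation scores are made-up illustrations.

```python
# Hypothetical sketch of the outsourcing pattern described above. These classes,
# method names and fee fractions are illustrative only – they are NOT part of
# SingularityNET's actual API.

from dataclasses import dataclass, field

@dataclass
class ServiceNode:
    name: str
    price_agi: float            # fee quoted in AGI tokens
    reputation: float           # 0..1 reputation score
    marketplace: "Marketplace" = None

    def handle(self, payload: str, budget_agi: float) -> str:
        raise NotImplementedError

class TranslationNode(ServiceNode):
    def handle(self, payload: str, budget_agi: float) -> str:
        return payload.replace("[RU]", "[translated from Russian]")

class SummaryNode(ServiceNode):
    def handle(self, payload: str, budget_agi: float) -> str:
        if "[RU]" in payload:   # hit a fragment we cannot handle ourselves
            # outsource it, paying the subcontractor a fraction of our own fee
            payload = self.marketplace.request("translate_ru_en", payload,
                                               budget_agi * 0.2)
        return f"Summary of: {payload[:60]}..."

@dataclass
class Marketplace:
    nodes: dict = field(default_factory=dict)   # task name -> registered nodes

    def register(self, task: str, node: ServiceNode):
        node.marketplace = self
        self.nodes.setdefault(task, []).append(node)

    def request(self, task: str, payload: str, budget_agi: float) -> str:
        # collect the bids for this task and pick the best reputation-per-token offer
        bids = [n for n in self.nodes.get(task, []) if n.price_agi <= budget_agi]
        best = max(bids, key=lambda n: n.reputation / n.price_agi)
        return best.handle(payload, budget_agi)

market = Marketplace()
market.register("translate_ru_en", TranslationNode("ru-en", price_agi=0.1, reputation=0.8))
market.register("summarize", SummaryNode("summarizer", price_agi=1.0, reputation=0.9))
print(market.request("summarize", "Quarterly report text ... [RU] ...", budget_agi=1.0))
```

The real network settles these payments in AGI tokens on-chain and layers on discovery, reputation and escrow, but the basic choreography – bid, select, delegate, split the fee – is what the sketch tries to capture.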

Popular field

Artificial intelligence and machine learning are hot trends in computing these days, but are largely controlled by massive corporations. These corporate titans develop their own proprietary systems and software and keep it in-house. SingularityNET intends to decentralize this heavily centralized field, allowing developers of AI tools to monetize them and non-corporate users to benefit from them.

As with any new venture, it remains to be seen whether this is even possible, or whether behemoths like Google will forever dominate the field of AI. One thing is certain – there is plenty of interest in decentralized AI systems. SingularityNET’s token sale could not make that any more clear. Just like the Nicolas Cage movie, these tokens were “gone in sixty seconds.”

AI is Being Used to Create Fake Celebrity Porn, Because of Course it Is

By Tom Pritchard

AI is one of the new big things in the tech industry, with an immense amount of hype surrounding the concept of machine learning and utilising it for the greater good of mankind. And then there are the people using AI to make more convincing fake celebrity porn, stitching celebrity faces into porn videos with greater accuracy than ever before.

The tool was developed by Reddit user deepfakes, using publicly accessible tools and a face swap algorithm that he developed himself. While the final result isn’t perfect, the tool can pull images from YouTube clips and Google and use AI to create a somewhat convincing video of a particular celebrity.

It doesn’t have to be a celebrity either. It could be a random person you saw on social media or someone you know. Like the ridiculously accurate lip-syncing technology we saw earlier this year, the prospects of how this might be used in future are horrifying. Particularly since The Next Web points out that deepfakes’ algorithm is mostly a combination of readily-available tools like TensorFlow and Keras, and it wouldn’t take a genius to figure out how to stick them together.

So far deepfakes has produced fake GIFs featuring Gal Gadot, Emma Watson, Aubrey Plaza and Maisie Williams. They’re obvious fakes, with glitches occurring throughout the clips, but they look advanced enough to make this a pretty scary prospect for the future. Particularly since AI researcher Alex Champandard told Motherboard that these clips could take a consumer-level graphics card a few hours to produce. Apparently a CPU could also do the job, but it would take days to complete.

Obviously an algorithm would need a plentiful supply of images to stitch something basic together, which might not be as feasible for non-public figures. But given all the high-profile hacks over the past few years, coupled with the abundance of pictures some people love to post on social media, the source material might not be as difficult to find as you’d hope. [Motherboard via The Next Web]