Newly Released CIA Documents Talk of Psychic Experiments and Telepathy

By EV | We Are Anonymous

The CIA recently released to the internet a trove of documents covering a broad range of bizarre topics, from UFO sightings to the MK-Ultra program. In total, over 930,000 documents were released, and among those catching alternative news sources' attention are documents related to the Stargate Project. This is partially due to a recent cryptic retweet from Edward Snowden.

In a tweet from October of last year, NYT Minus Context stated, “Remember that people don’t have access to your secret thoughts and feelings,” to which Snowden responded in a retweet, “Well, most people.” Regardless of what exactly Snowden meant by that tweet, it has intrigued many considering the documents the CIA has now released on the internet.

According to the Federation of American Scientists (FAS), the Stargate Project was one of a variety of “remote viewing programs” conducted by the US government, many of which used code names such as Sun Streak, Grill Flame, and Center Lane. “These efforts were initiated to assess foreign programs in the field; contract for basic research into the phenomenon; and to evaluate controlled remote viewing as an intelligence tool.”


In layman’s terms, the CIA (with the help of the NSA) was trying to spy on other nations and obtain information through the use of astral projection – or out-of-body experiences. The program reportedly lasted from 1972 to 1995, when the CIA finally concluded that the project “has not been shown to have value in intelligence operations.” Some might find it interesting that it took over 20 years to conclude an experimental project had no value, but we digress.

The Russian state-controlled news agency Sputnik reports that many are unconvinced that all of the documents released by the CIA are factual – and, true enough, the CIA is known for being “a bunch of fucking liars.” As Noam Chomsky has explained, the CIA basically acts as a scapegoat for the executive branch of the US government, keeping its reputation clean.

In a statement from Dmitry Efimov, security expert and member of the Moscow Council’s Advisory Committee on Security:

“I think this was published on the personal orders of CIA Director Brennan, a famous neocon who is leaving along with Obama and who is probably using this opportunity to create a new stream of misinformation. Particularly since there is no such thing as the whole truth, there is the truth which is present in the CIA’s real documents, which of course exist, but I think that a lot of work has been done to falsify a huge number of documents in this batch and change the relationship to the Vietnam War, for example.”

While the documents have been available to the public since 1995, they were only accessible on four computers in the back room of the National Archives in Maryland. The CIA apparently planned to release the documents on the internet at the end of 2017, but finished the work ahead of schedule, and instead released them just a few days before Trump’s inauguration. No matter what, we can be sure the CIA is not going to release anything that will put them in too negative a light, and it’s reasonable to assume the rogue department would take the opportunity to use the documents to create their own version of the truth.

While the documents released by the CIA are sure to provide some interesting reading, we suggest readers remember the department’s reputation and take the information with a grain of salt. We live in an era in which the more reliable information often comes from whistleblowers and leaked or hacked documents, so with that in mind, it will be interesting to see whether Snowden elaborates further on his previous thoughts or on the CIA’s newly released archive.



This article (Newly Released CIA Documents Talk of Psychic Experiments and Telepathy) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to the author and AnonHQ.com.


AI is Being Used to Create Fake Celebrity Porn, Because of Course it Is

By Tom Pritchard

AI is one of the new big things in the tech industry, with an immense amount of hype surrounding the concept of machine learning and utilising it for the greater good of mankind. And then there are the people using it to make more convincing fake celebrity porn, stitching celebrity faces into porn videos with greater accuracy than ever before.

The tool was built by Reddit user deepfakes, using publicly accessible tools and a face-swap algorithm he developed himself. While the final result isn’t perfect, the tool can pull images from YouTube clips and Google and use AI to create a somewhat convincing video of a particular celebrity.

It doesn’t have to be a celebrity either. It could be a random person you saw on social media or someone you know. As with the ridiculously accurate lip-syncing technology we saw earlier this year, the prospects for how this might be used in future are horrifying. Particularly since The Next Web points out that deepfakes’ algorithm is mostly a combination of readily available tools like TensorFlow and Keras, and it wouldn’t take a genius to figure out how to stick them together.
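For readers wondering what “sticking those tools together” might look like, here is a minimal Keras sketch of the shared-encoder, twin-decoder autoencoder idea generally associated with these face-swap tools: one encoder learns a common face representation, each decoder learns to reconstruct one person, and the “swap” comes from decoding person A’s frames with person B’s decoder. The layer sizes, image dimensions and placeholder training data are illustrative assumptions, not deepfakes’ actual code.

```python
# Sketch of a shared-encoder / twin-decoder face-swap autoencoder.
# All shapes and hyperparameters below are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, Model

def build_encoder(img_shape=(64, 64, 3)):
    inp = layers.Input(shape=img_shape)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)        # shared latent face code
    return Model(inp, z, name="shared_encoder")

def build_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")    # reconstructs person A
decoder_b = build_decoder("decoder_b")    # reconstructs person B

# Two autoencoders that share the same encoder.
inp = layers.Input(shape=(64, 64, 3))
auto_a = Model(inp, decoder_a(encoder(inp)))
auto_b = Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# faces_a / faces_b stand in for aligned face crops of each person.
faces_a = np.random.rand(32, 64, 64, 3).astype("float32")
faces_b = np.random.rand(32, 64, 64, 3).astype("float32")
auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

# The "swap": encode a frame of person A, decode it with person B's decoder.
swapped = decoder_b.predict(encoder.predict(faces_a[:1]))
```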

So far deepfakes has produced fake GIFs featuring Gal Gadot, Emma Watson, Aubrey Plaza, and Maisie Williams. They’re obvious fakes, with glitches occurring throughout the clips, but they look advanced enough to make this a pretty scary prospect for the future. Particularly since AI researcher Alex Champandard told Motherboard that these clips could take a consumer-level graphics card only a few hours to produce. Apparently a CPU could also do the job, but it would take days to complete.

Obviously an algorithm would need a plentiful supply of images to stitch something basic together, which might not be as feasible for non-public figures. But given all the high-profile hacks over the past few years, coupled with the abundance of pictures some people love to post on social media, the source material might not be as difficult to find as you’d hope. [Motherboard via The Next Web]

Former Facebook Exec: ‘You Don’t Realise It But You Are Being Programmed’

By Jennifer Ouellette

This is the year everyone—including founding executives—began publicly questioning the impact of social media on our lives.

Last month, Facebook’s first president Sean Parker opened up about his regrets over helping create social media as we know it today. “I don’t know if I really understood the consequences of what I was saying, because of the unintended consequences of a network when it grows to a billion or 2 billion people and it literally changes your relationship with society, with each other,” Parker said. “God only knows what it’s doing to our children’s brains.”

Chamath Palihapitiya, former vice president of user growth, also recently expressed his concerns. During a recent public discussion at the Stanford Graduate School of Business, Palihapitiya—who worked at Facebook from 2005 to 2011—told the audience, “I think we have created tools that are ripping apart the social fabric of how society works.”

Some of his comments seem to echo Parker’s concern. Parker has said that social media creates “a social-validation feedback loop” by giving people “a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever.”

Just days after Parker made those comments, Palihapitiya told the Stanford audience: “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works. No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem—this is not about Russian ads. This is a global problem.”

It’s as if Parker and Palihapitiya got together at a bar that week to work out their inner demons. When the host asked Palihapitiya if he was doing any soul searching in regards to his role in building Facebook, he responded: “I feel tremendous guilt. I think we all knew in the back of our minds—even though we feigned this whole line of, like, there probably aren’t any bad unintended consequences. I think in the back, deep, deep recesses of our minds, we kind of knew something bad could happen. But I think the way we defined it was not like this.”

He went on to explain what “this” is:

So we are in a really bad state of affairs right now, in my opinion. It is eroding the core foundation of how people behave by and between each other. And I don’t have a good solution. My solution is I just don’t use these tools anymore. I haven’t for years.

Speaking more broadly on the subject, Palihapitiya said he doesn’t use social media because he “innately didn’t want to get programmed.” As for his kids: “They’re not allowed to use this shit.”

Then he got even more fired up: “Your behaviours—you don’t realise it but you are being programmed. It was unintentional, but now you gotta decide how much you are willing to give up, how much of your intellectual independence,” he told the students in the crowd. “And don’t think, ‘Oh yeah, not me, I’m fucking genius, I’m at Stanford.’ You’re probably the most likely to fucking fall for it. ‘Cause you are fucking check-boxing your whole Goddamn life.”

New Robots Imagine Future Actions To Figure Out How To Manipulate Objects They’ve Never Encountered Before

Credit: UC Berkeley

By Alton Parrish | Ineffable Island

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate events on the road and could lead to more intelligent robotic assistants in the home, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

The robot that knows its future

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team will perform a demonstration of the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
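As a rough illustration of that idea, the toy Keras model below predicts the next camera frame by deciding, for every pixel, how the current frame should move given the commanded action, then blending shifted copies of the frame accordingly. The tiny five-motion set, layer sizes and action dimension are illustrative assumptions; the published DNA-style models predict much richer per-pixel transformation kernels.

```python
# Toy sketch of action-conditioned pixel-motion prediction:
# the network outputs per-pixel weights over a small set of candidate shifts,
# and the next frame is a weighted blend of shifted copies of the current frame.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = (64, 64, 3)
ACTION_DIM = 4                 # e.g. a commanded gripper displacement (assumed size)
SHIFTS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]   # candidate one-pixel motions

frame = layers.Input(shape=IMG, name="frame_t")
action = layers.Input(shape=(ACTION_DIM,), name="action_t")

# Tile the action vector across the image so every pixel "sees" the commanded motion.
x = layers.Conv2D(32, 5, padding="same", activation="relu")(frame)
a = layers.Dense(32)(action)
a = layers.Reshape((1, 1, 32))(a)
a = layers.UpSampling2D(size=(IMG[0], IMG[1]))(a)
x = layers.Concatenate()([x, a])
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

# Per-pixel softmax over the candidate motions.
motion_weights = layers.Softmax(axis=-1)(layers.Conv2D(len(SHIFTS), 1)(x))

def compose_next_frame(inputs):
    """Shift the current frame by each candidate motion and blend the copies
    using the predicted per-pixel weights."""
    frm, w = inputs
    shifted = tf.stack(
        [tf.roll(frm, shift=list(s), axis=[1, 2]) for s in SHIFTS], axis=-1)
    return tf.reduce_sum(shifted * tf.expand_dims(w, axis=3), axis=-1)

next_frame = layers.Lambda(compose_next_frame)([frame, motion_weights])

model = Model([frame, action], next_frame)
model.compile(optimizer="adam", loss="mse")   # trained on (frame_t, action_t) -> frame_{t+1}
```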

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

Credit: UC Berkeley video by Roxanne Makasdjian and Stephen McNally

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Working from raw camera observations alone, the robot uses the learned model to teach itself how to avoid obstacles and push objects around obstructions.
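Conceptually, the planning step can be as simple as the random-shooting sketch below: sample candidate action sequences, let the learned model imagine the resulting frames, score each imagined future by how close the tracked object pixel ends up to the goal, and execute the best first action before replanning. The `predict_rollout` and `track_pixel` callables are hypothetical placeholders standing in for the learned video-prediction model and the pixel tracking used in the real system, which also relies on more sophisticated sampling than plain random shooting.

```python
# Minimal random-shooting planner over a learned video-prediction model (sketch).
import numpy as np

def plan_action(predict_rollout, track_pixel, current_frame, goal_xy,
                horizon=5, n_candidates=200, action_dim=4, rng=None):
    """predict_rollout(frame, actions) -> list of imagined future frames.
    track_pixel(frame) -> (x, y) location of the object of interest."""
    rng = rng or np.random.default_rng()
    best_cost, best_actions = np.inf, None
    for _ in range(n_candidates):
        # Sample a candidate sequence of pushing actions.
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        # Imagine the future under those actions with the learned model.
        frames = predict_rollout(current_frame, actions)
        # Score by how close the tracked object pixel ends up to the goal.
        final_xy = np.asarray(track_pixel(frames[-1]), dtype=float)
        cost = np.linalg.norm(final_xy - np.asarray(goal_xy, dtype=float))
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions[0]   # execute only the first action, then replan
```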

“Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime. We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills,” said Frederik Ebert, a graduate student in Levine’s lab who worked on the project.

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. In contrast to conventional computer vision methods, which require humans to manually label thousands or even millions of images, building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously. Indeed, video prediction models have also been applied to datasets that represent everything from human activities to driving, with compelling results.

“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”

The Berkeley scientists are continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, as well as on developing more sophisticated methods by which robots can collect more focused video data for complex tasks such as picking and placing objects, manipulating soft and deformable objects such as cloth or rope, and assembly.


Researchers demo AI that can change the weather and time of day in photos

NVIDIA Research is showing off a new project that uses artificial intelligence to change the time of day and weather in an image. The technology is called “unsupervised image-to-image translation,” and it involves a newly-created framework capable of producing high-quality image translations, such as turning a day photo into a night photo, or a summer photo into a winter photo.

It is, to use the technical term: bananas.

Unsupervised, in this case, refers to a type of AI training that doesn’t rely on paired examples – matched before-and-after versions of the same scene – for the model to learn from. This is due to the variability inherent in taking one type of image, such as one showing a summer day, and translating it into a winter scene. Discussing this, the researchers explain in the paper’s abstract:

We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation.
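To give a flavour of how training can work without paired before-and-after photos, the sketch below uses a cycle-consistency objective – a closely related approach, not NVIDIA’s exact shared-latent-space formulation: a translated image must fool a discriminator for the target domain, and translating it back must recover the original photo. The generator and discriminator arguments are placeholders the caller would supply; all names are illustrative assumptions.

```python
# Sketch of an unpaired image-translation objective (cycle-consistency style).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
mae = tf.keras.losses.MeanAbsoluteError()

def translation_losses(g_ab, g_ba, d_b, real_a, cycle_weight=10.0):
    """g_ab / g_ba: generators mapping domain A->B and B->A (e.g. summer<->winter).
    d_b: discriminator for domain B. Returns the generator loss for A->B."""
    fake_b = g_ab(real_a)        # translate e.g. summer -> winter
    cycled_a = g_ba(fake_b)      # translate back winter -> summer

    # Adversarial term: the translated image should fool the domain-B discriminator.
    adv = bce(tf.ones_like(d_b(fake_b)), d_b(fake_b))

    # Cycle term: translating there and back should recover the original image,
    # which is what substitutes for having paired before/after examples.
    cycle = mae(real_a, cycled_a)
    return adv + cycle_weight * cycle
```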

The team provides several before-and-after examples of their AI’s work, demonstrating instances of a sunny day with blue sky being transformed into an overcast day, and a snow-covered winter environment being transformed into a sunny green environment.

The video below shows a scene transformed from winter to summer:

NVIDIA also shared a video of a day scene transformed into a night scene, though the change is far more obvious in this example:

Finally, the technology can also be used to transform one species into another, such as turning a house cat into a cheetah:

The team has shared a Google Photos album containing before-and-after images created with the AI, so if you want to see more photo editing madness, you can find it here.

Of course, the transitions are FAR from perfect at this stage, but some of the swaps are so extreme that even the imperfect creations feel well beyond what we thought a computer could do by itself. Adobe’s “Deep Fill” and “Project Cloak” are starting to look like a very small taste of the coming AI photo and video editing revolution.