Researchers demo AI that can change the weather and time of day in photos

NVIDIA Research is showing off a new project that uses artificial intelligence to change the time of day and weather in an image. The technology is called “unsupervised image-to-image translation,” and it involves a newly created framework capable of producing high-quality image translations, such as turning a day photo into a night photo, or a summer photo into a winter photo.

It is, to use the technical term: bananas.

Unsupervised, in this case, refers to a type of AI training that doesn’t rely on paired examples: the model never sees the same scene captured in, say, both summer and winter. This is due to the variability inherent in taking one type of image, such as one showing a summer day, and translating it into a winter scene. As the researchers explain in the paper’s abstract:

We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation.
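One common training signal in this family of unpaired methods is a cycle-consistency constraint: translating an image to the other domain and back should recover the original, so no paired examples are needed. A minimal NumPy sketch of that idea, with invented linear “translators” standing in for the real neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x, w=1.1, b=0.2):
    """Hypothetical 'summer -> winter' translator (toy linear map)."""
    return w * x + b

def F(y, w=1 / 1.1, b=-0.2 / 1.1):
    """Hypothetical 'winter -> summer' translator, a rough inverse of G."""
    return w * y + b

def cycle_consistency_loss(x):
    """Unsupervised signal: translating there and back should return x."""
    return float(np.mean(np.abs(F(G(x)) - x)))

x = rng.random((4, 8, 8))  # a batch of fake 'summer' images
loss = cycle_consistency_loss(x)
print(round(loss, 6))  # near 0, because F inverts G in this toy setup
```

In a real system G and F are deep networks trained jointly, and this loss is only one term among several; the point is simply that the objective compares an image with its own round trip, not with a hand-labeled target.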

The team provides several before-and-after examples of their AI’s work, demonstrating instances of a sunny day with blue sky being transformed into an overcast day, and a snow-covered winter environment being transformed into a sunny green environment.

The video below shows a scene transformed from winter to summer:

NVIDIA also shared a video of a day scene transformed into a night scene, and in this example the change is far more obvious:

Finally, the technology can also be used to transform one species into another, such as turning a house cat into a cheetah:

The team has shared a Google Photos album containing before-and-after images created with the AI, so if you want to see more photo editing madness, you can find it here.

Of course, the transitions are FAR from perfect at this stage, but some of the swaps are so extreme that even the imperfect creations still feel way beyond a computer’s capacity to do by itself. Adobe’s “Deep Fill” and “Project Cloak” are starting to look like a very small taste of the coming AI photo and video editing revolution.

Facebook’s New Suicide Detection A.I. Could Put Innocent People Behind Bars

(Activist Post) Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.

Imagine a judge forcing you to be jailed (sorry, I meant hospitalized) because a computer program found your comments ‘troubling’.

You can stop imagining; this is really happening.

A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) will use pattern recognition to scan posts and contact first responders when it deems a person’s comments to contain troubling suicidal thoughts.

Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info. (Source)
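Facebook hasn’t published how its detection model actually works, but “pattern recognition” over text, in its crudest form, can be as simple as phrase matching. A hypothetical sketch, where the phrases, scoring, and threshold are all invented for illustration:

```python
# Illustration only: Facebook's real model is not public.
# A crude scorer that flags posts containing phrases
# associated with self-harm risk.
RISK_PATTERNS = ["want to end it", "no reason to live", "goodbye everyone"]

def risk_score(post: str) -> int:
    """Count how many risk patterns appear in a post (case-insensitive)."""
    text = post.lower()
    return sum(1 for p in RISK_PATTERNS if p in text)

def flag_for_review(post: str, threshold: int = 1) -> bool:
    """Escalate to a human reviewer when the score crosses a threshold."""
    return risk_score(post) >= threshold

print(flag_for_review("Had a great day at the beach"))         # False
print(flag_for_review("Goodbye everyone, no reason to live"))  # True
```

Even this toy version hints at the false-positive problem: a song lyric or a dark joke containing one of these phrases would be flagged exactly the same way as a genuine cry for help.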

A private corporation deciding who goes to jail? What could possibly go wrong?

Facebook’s A.I. automatically contacts law enforcement

Facebook is using pattern recognition and moderators to contact law enforcement.

Facebook is ‘using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.’

Dedicating more reviewers from our Community Operations team to review reports of suicide or self harm. (Source)

Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.

Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community. (Source)

Why are police conducting wellness checks for Facebook? Are private corporations running police departments?

Not only do social media users have to worry about a spying A.I., but now they also have to worry about thousands of spying Facebook ‘Community Operations’ reviewers who are all too willing to call the police.

Our Community Operations team includes thousands of people around the world who review reports about content on Facebook…our team reviews reported posts, videos and live streams. This ensures we can get the right resources to people in distress and, where appropriate, we can more quickly alert first responders. (Source)

Should we trust pattern recognition to determine who gets hospitalized or arrested?

Pattern recognition is junk science

A 2010 CBS News article warns that applying pattern recognition to human behavior is junk science. The article shows how companies use nine rules to convince law enforcement that pattern recognition is accurate.

A 2016 Forbes article used words like ‘nonsense,’ ‘far-fetched,’ ‘contrived,’ and ‘smoke and mirrors’ to describe applying pattern recognition to human behavior.

Cookie-cutter ratios, even if scientifically derived, do more harm than good. Every person is different. Engagement is an individual and unique phenomenon. We are not widgets, nor do we conform to widget formulas. (Source)

Who cares if pattern recognition is junk science, right? At least Facebook is trying to save lives.

Wrong.

Using an A.I. to determine who might need to be hospitalized or incarcerated can and will be abused.

Coinbase ordered to report 14,355 users to the IRS

Anyone moving more than $20,000 on the platform is subject to the new order

Today, Coinbase suffered a major defeat at the hands of the Internal Revenue Service, nearly a year after the case was initially filed. A California federal court has ordered Coinbase to turn over identifying records for all users who have bought, sold, sent, or received more than $20,000 through their accounts in a single year between 2013 and 2015. Coinbase estimates that 14,355 users meet the government’s requirements.
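The summons criterion is a simple per-user, per-year aggregation: did any single year from 2013 to 2015 see more than $20,000 of activity? A hypothetical sketch, with field names and sample data invented for illustration:

```python
# Illustration only: record fields and sample data are invented.
from collections import defaultdict

transactions = [
    {"user": "alice", "year": 2014, "usd": 15000.0},
    {"user": "alice", "year": 2014, "usd": 6000.0},   # $21k in 2014 -> flagged
    {"user": "bob",   "year": 2013, "usd": 19000.0},
    {"user": "bob",   "year": 2015, "usd": 19000.0},  # never >$20k in one year
]

def flagged_users(txns, threshold=20_000, years=(2013, 2014, 2015)):
    """Return users whose activity exceeds the threshold in any single year."""
    totals = defaultdict(float)
    for t in txns:
        if t["year"] in years:
            totals[(t["user"], t["year"])] += t["usd"]
    return sorted({user for (user, _), total in totals.items() if total > threshold})

print(flagged_users(transactions))  # ['alice']
```

Note that the test is per year, not cumulative: a user like “bob” who moved $38,000 across two years stays below the line, while “alice” crosses it in a single year.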

For each account, the company has been asked to provide the IRS with the user’s name, birth date, address, and taxpayer ID, along with records of all account activity and any associated account statements. The result is both a definitive link to the user’s identity and a comprehensive record of everything they’ve done with their Coinbase account, including other accounts to which they’ve sent money.

The order is significantly narrower than the IRS’s initial request, which asked for records on every single Coinbase account over the same period. That request would also have required all communications between Coinbase and the user, a measure the judge found unnecessarily comprehensive.

The government made no claim of suspicion against individual users, but instead argued that the order was justified based on the discrepancy between Coinbase users and US citizens reporting Bitcoin gains to the IRS. Coinbase boasts nearly 6 million customers, but according to a government filing, fewer than 1,000 US citizens have reported cryptocurrency holdings on their taxes.

The ruling has already proven controversial in the Bitcoin world. “We remain deeply unsatisfied with the lack of justification provided by the IRS,” Coin Center’s Peter Van Valkenburgh told The Verge. “Without better rationale for why these specific transactions were suspect, a similarly sweeping request could be made for customer data from any financial institution. It sets a bad precedent for financial privacy.”

Coinbase had vigorously opposed the order on similar grounds, but cast the final ruling as a partial victory. “Although we are disappointed not to be able to entirely defeat the summons,” Coinbase’s David Farmer wrote in a post after the ruling, “we are proud to fight for our customers and in the result we were able to achieve as a small company against a large government agency.”

The company is currently reviewing the order, and intends to notify any affected users before any documents are produced.

Update 11/30, 1:33PM ET: Updated with Coinbase statement.