Youtube Pushing Truth, Conspiracy Videos to Bottom of Search Results Following Las Vegas Massacre

Youtube gets to decide what is the truth and what you’re allowed to watch.

Global News Canada

YouTube is modifying its search algorithms to prevent conspiracy theories and fake news videos from making it to the top of its search results, following outrage over the high visibility of videos spreading misinformation about the Las Vegas mass shooting, the Wall Street Journal reports.

On Wednesday, the very first result in a YouTube search for “Las Vegas shooting” was a video that claimed the gunshots came from the fourth floor of the Mandalay Bay Resort and Casino, rather than the 32nd. The video showed images of a flashing light coming from a room on the lower floor, using it to advance a theory — ruled “false” by Snopes — that multiple shooters were involved in the massacre.

Other high-ranking videos suggested the attack was a false flag orchestrated to advance the gun control agenda, or that the attack may not have happened at all and that the victims were “actors.”

But as of Friday, the first page of search results for “Las Vegas shooting” yielded only results from reputable news organizations such as the BBC, CNN, the New York Times and the Guardian.

That’s because YouTube on Wednesday night began actively promoting more established sources in search results pertaining to the shooting as well as other major news events, the Journal reported, citing an unnamed source close to the video-sharing website.

Some of the implicated videos have since been taken down from YouTube, but others remain on the site.

According to The Guardian, YouTube had been preparing the changes for months but decided to accelerate the roll-out this week.

I doubt there was “outrage” over youtube’s search results coming back with truth videos ranked at the top. This is a made-up justification by the (((Deep State))) to justify its effort to keep the sheeple in the dark.

This rancid, toxic censorship by youtube opens the door for other video services to thrive by not censoring search results.

“Jew Haters” Trending on Twitter as Facebook Exposed for Running Ads Aimed at People Who Hate Jews

Fools or trolls are actually calling Jew Mark Zuckerberg a Nazi for wanting to make a few sheckels by selling ads to people who have some distaste for our Jewish friends.

When I put this post together, about 8 P.M. Central time (U.S.), “Jew Haters” was trending on Twitter, and Mark Zuckerberg was getting hammered.

Huffington Post

Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.

Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.

“There are times where content is surfaced on our platform that violates our standards,” said Rob Leathern, product management director at Facebook. “In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically, based both on what users explicitly share with Facebook and on what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the Internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.

But Facebook apparently did not intensify its scrutiny of its ad buying platform. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories.

Artificial intelligence hasn’t been taught political correctness yet. It’s antisemitic. But it doesn’t know it’s antisemitic.

Twitter is overreacting to this minor incident which harmed no one.

New Artificial Intelligence Computer Algorithm Can Tell Whether You’re a Sodomite from a Photograph

Today, we don’t need our human built-in gaydar or artificial intelligence to tell us who’s a sodomite and who isn’t.

They get in your face and do everything possible to make it obvious, including telling you unwanted stories of anal sex.

The Guardian

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.

Wait for it. Here comes the political agenda in 3 … 2 … 1 …

The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

The rest of the article goes on to bring up the ethical issue involved with exposing queers who may not want to be exposed by AI.

For decades there was an uneasy truce. Don’t ask, don’t tell was basically how society operated. Most of us are willing to let fags do whatever they wish in private, but they now insist on making it public.

This new AI needs to be weaponized in order to be able to exclude the genetic dead ends that call themselves “gay” from our future.

THEY’RE SODOMITES? I’M SHOCKED!

A beauty contest was judged by AI and the robots didn’t like dark skin

ARTIFICIAL INTELLIGENCE JUDGED THESE WOMEN THE MOST BEAUTIFUL. NONWHITES ARE MISSING, UPSETTING SJWS.
https://www.flickr.com/photos/146970485@N04/29546849561/in/dateposted-public/

Haha, even robots are racist.

The Guardian

The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.

But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.

The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.

When Microsoft released the “millennial” chatbot named Tay in March, it quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated human editors who had curated “trending” news stories last month, the algorithm immediately promoted fake and vulgar stories on news feeds, including one article about a man masturbating with a chicken sandwich.

While the seemingly racist beauty pageant has prompted jokes and mockery, computer science experts and social justice advocates say that in other industries and arenas, the growing use of prejudiced AI systems is no laughing matter. In some cases, it can have devastating consequences for people of color.

Beauty.AI – which was created by a “deep learning” group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. While there are a number of reasons why the algorithm favored white people, the main problem was that the data the project used to establish standards of attractiveness did not include enough minorities, said Alex Zhavoronkov, Beauty.AI’s chief science officer.

Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.

SWEDISH BEAUTY. GOING EXTINCT, I’M AFRAID.
