The fatal flaw at the heart of social media: Compromised information
Investment banks, hedge funds and other for-profit speculators in the world’s markets are “scooping up computer scientists, not economists and investment bankers with MBAs,” because “artificial intelligence” is now the Holy Grail of investment strategy, not old-fashioned gurus like Warren Buffett, who are increasingly viewed as “redundant” because their minds are not “super-fast.”
As reported by the Financial Times, the so-called “quantitative investment world” of Goldman Sachs, Bridgewater, et al. is “play[ing] down the prospect of machines supplanting human[s]”—at least for now. But since “the human mind has not become any better than it was 100 years ago,” while investments have grown immeasurably more complex and unpredictable due to phenomena like algorithmic trading and a worldwide marketplace that includes China, “Eventually the time will come that no human investment manager will be able to beat the computer.”
Enter artificial intelligence. “A machine-learning algorithm will autonomously evolve and search for new patterns,” in the same way a human mind does, but thousands, if not millions, of times faster, making the human mind irrelevant. Buffett-style “intuitive trading strategies” will look clumsy in comparison—like 1950s NBA players competing against the likes of Kobe Bryant and Steph Curry.
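To make that concrete, here is a minimal sketch of what “autonomously searching for patterns” means in practice, using synthetic price data and off-the-shelf machine learning. Everything here, from the features to the data, is illustrative; it is emphatically not any bank’s actual strategy.

```python
# Minimal sketch of a machine "searching for patterns" in prices.
# All data here is synthetic; real quant systems are vastly larger.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))  # fake price series

# Features: the last 5 daily returns. Label: did the next return rise?
returns = np.diff(prices) / prices[:-1]
window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = (returns[window:] > 0).astype(int)

# No human tells the model which patterns matter; it finds its own.
split = int(0.8 * len(X))
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])
print("out-of-sample accuracy:", model.score(X[split:], y[split:]))
```

On purely random data the model finds nothing (accuracy near a coin flip), which is the honest baseline; the quants’ bet is that real markets are not random, and that a machine sifting millions of such features will spot what Buffett-style intuition cannot.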
Well, perhaps. But consider that the notion of pure, real-time, objectively neutral analytic devices powered by artificial intelligence, crunching only numbers and indifferent to any external agenda, is a fiction. That’s what we thought about computers: that they would bring about “a million fold increase in the speed of calculations, a thousand fold decrease in cost, all this while scientists were ‘just beginning to explore these possibilities,’” as an idealistic 1962 prediction of the computer’s future had it. But other, more worried voices were slowly emerging: this vast accumulation of data, an IBM analyst warned in the 1960s, “could be pooled, drawn on and used in ways for which they were not intended.”
Which brings us to viruses, bots, malware and the entire netherworld of awful stuff that crawls through and infects the world’s networks at the speed of light, seeking any and every unprotected nook and cranny. Last Thursday, DARPA (the Defense Advanced Research Projects Agency, the branch of the U.S. military that has worked on everything from satellite technology to the Internet to driverless cars) published a paper, “The DARPA Twitter Bot Challenge.” Impressed and alarmed by the rapid spread of “influence bots,” which the paper defines as “realistic, automated identities that illicitly shape discussion on sites like Twitter and Facebook,” the Challenge seeks to up the scientific community’s game at detecting and combating such bots. The relationship between “influence bots” and artificial intelligence was anticipated by British mathematician Alan Turing (subject of the movie “The Imitation Game”), whose “Turing test” postulated a “machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”
But the Turing test apparently did not anticipate a regime of outright deception and fraud on the computer side—a computer pretending to be a human, controlled by a human pretending to be a computer. As this BBC article makes clear, bots, including influence bots, are already engaged in “automated deceit” that “can even trick the web-savvy.” The DARPA Twitter Bot Challenge was created because influence bots “pose a clear danger to freedom of expression”: If we don’t know whether the results our computers spit out are pure and objective and thus “real,” as opposed to malicious, agenda-driven and thus “unreal,” then we’re clearly capable of being led down a disastrous garden path.
(The DARPA paper cites examples of malicious influence bots, such as Russia engaging in a campaign of disinformation about its seizure of Ukrainian territory, and ISIS spreading radicalism.)
The bankers and investment managers who are relying on artificial intelligence to replace “merely human” analysts mean well, but there is no guarantee that their findings won’t be contaminated by bots and other forms of malware that purposefully distort conditions. Can they know, for example, that Chinese intelligence is not interfering in the analysis of oil prices over the next six months? Or that Russian mafia intelligence is not creating the impression that Chinese intelligence is the culprit? And on and on, through the looking glass. As the DARPA paper points out (and this is precisely the kind of stuff DARPA worries about), “Over the next few years, we can expect a proliferation of social media influence bots as advertisers, criminals, politicians, nation states, terrorists, and others try to influence populations.” The only protection against this menace, DARPA says, is “to significantly enhance the analytic tools that help analysts detect influence bots.” Unfortunately, the bad guys are in the race, too, busily developing software that thwarts bot-detection tools.
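What might those analytic tools look like at their crudest? Purely as an illustration (the features and thresholds below are my own assumptions, not anything from the DARPA paper), a first-pass filter might score accounts on how robotically regular their posting is and how often they repeat themselves:

```python
# Toy first-pass influence-bot score; the features and thresholds are
# illustrative assumptions, not the DARPA challenge's actual pipeline.
import statistics

def bot_score(post_times: list[float], posts: list[str]) -> float:
    """Score an account from 0 to 1; higher means more bot-like."""
    score = 0.0
    # 1. Suspiciously regular posting intervals (humans are bursty).
    if len(post_times) > 2:
        gaps = [b - a for a, b in zip(post_times, post_times[1:])]
        if statistics.pstdev(gaps) < 0.1 * statistics.mean(gaps):
            score += 0.5
    # 2. Heavy duplication of content (copy-paste amplification).
    if posts:
        dup_ratio = 1 - len(set(posts)) / len(posts)
        score += 0.5 * dup_ratio
    return score

# Example: an account posting the same message every 60 seconds.
times = [i * 60.0 for i in range(10)]
msgs = ["Buy oil now!"] * 10
print(bot_score(times, msgs))  # ~0.95, strongly bot-like
```

Real detection systems layer dozens of such signals under machine learning; the point is simply that bot behavior leaves statistical fingerprints, and the arms race is over who reads those fingerprints faster.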
Which brings me to my headline. Re-read it. Influence bots mean that malicious coders may well influence the masses. Social media always has been over-hyped, but this news further undermines its early promise as the great leveler and democratizer of mankind. It turns out it may be anything but. How the world will deal with online information, including social media, that may be hopelessly compromised will keep “the good guys” busy for a long time, and could make an already anxious public more suspicious than ever of social media.
Another argument for the importance of professional critics.
It ain’t no thing to get 1,000 reviewers (bots, or humans at $3/hr in Cebu) to make sure your CellarTracker scores are 96 points, you have 4.8 stars on Vivino and Twitter is filled with glowing reviews.
Having said this, misinformation doesn’t spell doom for social media – it simply puts more friction into the system.
Michael: Agree about professional critics. Been saying that for, like, forever. Question: How much “friction” can the system withstand before it burns up?
Misinformation and disinformation have been with us since the first person-to-person negotiation, before recorded history.
“Caveat emptor” invoked.
As for Warren Buffett and his Berkshire Hathaway colleagues, they conduct lots of Benjamin Graham-instructed analysis — not gut-feel intuition — before every deal.
http://www.investopedia.com/articles/07/ben_graham.asp
And wisdom (acting dispassionately and knowing when to walk away from a deal) trumps quicksilver “super-fast” quant minds.
Steve, there’s a (mis)quote that goes “there’s no problem in computer science that can’t be solved by another level of abstraction.” That’s why things never blow up. When it becomes too hard to deal with, just put some intelligence in a layer above it and let software deal with the complexity. This is what, say, Facebook does to your feed and Google does to the web.
OTOH, this is not what Twitter does… and maybe there’s your answer. Twitter may very well be burning up because it’s just a naked stream of content without meaningful abstractions for its users. That makes it a great vector for misinformation and manipulation.
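A toy sketch of what I mean (the scoring rule here is a made-up placeholder, not Facebook’s or anyone’s real algorithm): instead of showing the raw stream in arrival order, the layer ranks it, so low-trust amplification buys less reach.

```python
# Toy "intelligence layer" over a raw stream: rank posts by a
# trust-weighted score instead of arrival order. The scoring rule
# is a made-up placeholder, not any platform's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_trust: float   # 0..1, e.g. from a bot score like the one above
    engagement: int

def ranked_feed(stream: list[Post], limit: int = 3) -> list[Post]:
    # Downweight low-trust authors so bot amplification buys less reach.
    return sorted(stream, key=lambda p: p.author_trust * p.engagement,
                  reverse=True)[:limit]

stream = [
    Post("Oil to $200, guaranteed!!!", author_trust=0.1, engagement=5000),
    Post("OPEC meeting notes", author_trust=0.9, engagement=800),
    Post("Quarterly refinery data", author_trust=0.8, engagement=300),
]
for post in ranked_feed(stream):
    print(post.text)
```

The spam post has six times the raw engagement but still loses the top slot, which is the whole point of the abstraction: the naked stream rewards volume, the layer above it can reward trust.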
From The Wall Street Journal “Business & Tech.” Section
(January 25, 2016, Page B6):
“Bogus Internet Traffic Still Pains Ad Business”
Link: http://www.wsj.com/articles/cmo-today-1453682010
By Suzanne Vranica
“CMO Today” Column
“Despite numerous warnings that the online advertising business is rife with fraud, marketers continue to waste billions — an estimated $7 billion, at least, this year — on buying online ads that people don’t see, according to the Association of National Advertisers.
“The trade group and ad-fraud-detection firm White Ops conducted a study last year that tracked online ad buys of 49 brands from August through September and found that fraud levels are ‘relatively unchanged’ from a similar study the two parties conducted in 2014.
“The problem of fake Web traffic generated by so-called BOTS, computer programs that mimic the mouse movements and clicks humans make to give the impression that a person is visiting a website, thereby luring in advertisers, has gotten a significant amount of attention over the past few years.
“But so far there has been little change, according to the most recent study. Moreover, for some advertisers that participated in the 2015 study, things got worse and more fraud traffic was detected. The ANA said that, in the 2015 study, advertisers found that 3% to 37% of their ad impressions were created by BOTS compared with the prior study, where the bot traffic ranged from 2% to 22%.
“Companies could lose more than $7 billion globally this year to ad fraud, the ANA and White Ops estimate.”
[Aside: And don’t forget this Wall Street Journal (March 23, 2014) article:
“‘Crisis’ in Online Ads: One-Third of Traffic Is Bogus”
Link: http://www.wsj.com/articles/SB10001424052702304026304579453253860786362
By Suzanne Vranica
Staff Reporter
“Billions of dollars are flowing into online advertising. But marketers also are confronting an uncomfortable reality: rampant fraud.
“About 36% of all Web traffic is considered fake, the product of computers hijacked by viruses and programmed to visit sites, according to estimates cited recently by the Interactive Advertising Bureau trade group.
“So-called bot traffic cheats advertisers because marketers typically pay for ads whenever they are loaded in response to users visiting Web pages—regardless of whether the users are actual people.
. . .
“Spending on digital advertising — which includes SOCIAL MEDIA and mobile devices — is expected to rise nearly 17% to $50 billion in the U.S. this year. That would be about 28% of total U.S. ad spending. Just five years ago, digital accounted for 16%.
. . .
“‘When you bundle BOTS, click fraud, viewability and the lack of transparency [in automated ad buying], the total digital-media value equation is being questioned and totally challenged,’ says Bob Liodice, chief executive of the Association of National Advertisers trade group. Advertisers are beginning to question if they should increase their digital ad budgets, he says.”]
I, for one, welcome our new robot overlords.
Bob, google “negative seo” … for $5, you can hire someone to generate a million spammy backlinks to your competitor, sending them to page 100 on google and costing them weeks / thousands of dollars to fix. $5.
Point is that it’s not always about DARPA-class problems. Sometimes it’s a 13-year-old kid in Bangladesh who will run a script for you.