Datametrex AI (DM.V) discovers Twitter bots influencing opinions on the Don Cherry debacle

Another Remembrance Day has come and gone up here in Canada.

For some that means a visit to the cenotaph to participate in the annual ritualized recognition of our armed forces, and for the rest of us it’s a paid day off from work while we wear a poppy as a genuflection to social norms.

The custom has started to fade in recent years as Hollywood feeds the younger generations watered-down cinematic versions of the wars, and the reality of the horrors is lost. Fewer people stop at the little old lady at the desk in the mall to give her $0.75 for a poppy, which they proudly wear for a few hours before losing it on the bus and never getting another one.

And Don Cherry is right pissed about it. Pissed enough, in fact, to go on a racist tirade and get his sorry loudmouth ass fired once and for all. If you don’t know who Don Cherry is, you’re probably either A: American, B: not a hockey fan, or C: both. He’s a former NHL coach turned hockey pundit known for his flamboyant suits, controversial opinions, and inappropriate injection of nationalistic fervour into what is, essentially, a children’s game.

But this story isn’t actually about Don Cherry. Instead, it’s about social engineering.

Datametrex AI (DM.V), a tech company focused on artificial intelligence, blockchain and machine learning, decided, somewhat arbitrarily, to point their fake-news filter at the Don Cherry firing scandal to see whether known propaganda accounts and bots were wading into the conversation to further divide public opinion in Canada.

Long story short: they were.

The company analyzed 50,000 accounts in one day and discovered 30 suspicious accounts using the issue to promote a divisive agenda. A number of the identified accounts had also been involved in the #ELXN43 hashtag buzz on social media. Before that, DM had been contracted by Defence Research and Development Canada (DRDC) to keep abreast of social media discourse during the recent federal election.

The tool is called NexaNarrative, and it’s used for narrative tracking, disinformation detection and publisher classification. It lets analysts track the spread of disinformation online and engage using the BEND doctrine of information warfare presently in use by Canada and most other NATO countries.
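Datametrex hasn’t published NexaNarrative’s internals, so take the following as a purely illustrative sketch of how a crude bot-flagging heuristic might work. The signals and thresholds below are assumptions invented for the example, not the company’s method:

```python
from dataclasses import dataclass

# Hypothetical bot-scoring heuristic for illustration only.
# Signals and thresholds are invented; this is NOT NexaNarrative.

@dataclass
class Account:
    handle: str
    tweets_per_day: float         # average posting rate
    account_age_days: int         # how old the account is
    hashtag_repeat_ratio: float   # share of tweets reusing the same hashtags (0-1)
    default_profile: bool         # still using the default avatar/bio?

def bot_score(acct: Account) -> float:
    """Return a crude 0-1 'suspiciousness' score from simple behavioural signals."""
    score = 0.0
    if acct.tweets_per_day > 50:          # hyperactive posting
        score += 0.35
    if acct.account_age_days < 90:        # very new account
        score += 0.25
    if acct.hashtag_repeat_ratio > 0.6:   # hammering the same hashtags (e.g. #ELXN43)
        score += 0.25
    if acct.default_profile:              # no effort put into the profile
        score += 0.15
    return min(score, 1.0)

def flag_suspicious(accounts: list[Account], threshold: float = 0.7) -> list[str]:
    """Return handles whose score crosses the (arbitrary) threshold."""
    return [a.handle for a in accounts if bot_score(a) >= threshold]

if __name__ == "__main__":
    sample = [
        Account("@hockey_mom_62", 4, 3200, 0.1, False),
        Account("@maplepatriot1987", 140, 21, 0.8, True),
    ]
    print(flag_suspicious(sample))  # -> ['@maplepatriot1987']
```

Real systems layer network analysis and content classification on top of behavioural signals like these, which is also part of why independent verification matters: a threshold on its own is just a guess.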

Tampering with the free market of ideas

Civics 101 time. A free democracy requires an educated public engaged in a steady, free exchange of ideas. The general idea is that the best ideas will rise to the top and be pushed into action. This ideological point is a few hundred years old and has more holes than Don Cherry’s logic, but it’s remained in pride of place mostly because nobody else has any better ideas.

One of the biggest problems is that it was conceived hundreds of years before the internet gave us the troll, and before hackers automated trolling for their own political purposes. Now we have bots on social media poking their noses into our discourse, injecting foreign biases wherever they can to suit a narrative that isn’t our own.

Some say that’s how the United States got stuck with this guy:

[Image: “The other white idiot” | Source: Vanity Fair]

Others don’t buy that narrative, and full disclosure, I’m one of them. It’s probably a mix of elements that came together in a perfect storm of populist authoritarian suck and gave us 4-8 years of schadenfreude entertainment. Oh, and the serious potential for nuclear doom. Can’t forget about that.

The question, if we’re being thorough in our due diligence on a company like this, is how much of a problem this actually is, and how much of it is hype and fear. Well, apparently it’s a bit of an issue. Researchers have discovered that as many as 15% of Twitter accounts are bots, and that they drive two-thirds of the links shared on the site. But like all things technological, the existence of bots and their effect on discourse has everything to do with how they’re deployed. There are bots devoted to beautifying the internet, making it kinder and more useful. And there are others that seek to weaponize it for their own purposes.

“In the run-up to the 2018 midterms, bots were used to disenfranchise voters, harass activists, and attack journalists. But at a fundamental level, Facebook and Twitter are dis-incentivized from doing anything about it,” said Sam Woolley, director of the Digital Intelligence Lab at the Institute for the Future.

Maybe that needs to change. Here’s what Twitter CEO Jack Dorsey had to say:

[Embedded tweet from Jack Dorsey]

The big if…

There’s definitely a market for the kind of technology DM is putting out. But the big if attached to any of these claims is whether or not it actually works. The specifics aren’t in, and catching 30 suspected bots out of a field of 50,000 accounts, without any independent verification that these are in fact bots or trolls, doesn’t exactly prove anything.
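To put a rough number on that concern, here’s a back-of-the-envelope illustration. The false-positive rate below is an assumption made up for the sake of the arithmetic, not a figure from DM: even a tool that wrongly flags just 0.1% of genuine users would be expected to produce about 50 false flags in a sample of 50,000 accounts, which is already more than the 30 accounts flagged here.

```python
# Back-of-the-envelope check on why 30 flags out of 50,000 accounts isn't
# self-verifying. The false-positive rate is an assumption, not DM's figure.

accounts_scanned = 50_000
false_positive_rate = 0.001   # assume the tool wrongly flags 0.1% of real users

expected_false_flags = accounts_scanned * false_positive_rate
print(expected_false_flags)   # 50.0 -- more than the 30 accounts actually flagged
```

Without ground truth on even a handful of those 30 accounts, there’s no way to tell signal from noise.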

“This project further demonstrates how important it is to have tools to identify fake news sources so government, corporations and high-profile individuals can control the narrative around their issues and brands,” said Marshall Gunter, chief executive officer of the company.

Still, the company released the data from their DRDC contract today, and now that the tool has been built and delivered, they’re looking to commercialize the solution and offer it to government and enterprise clients alike.

Either way, my curiosity is piqued. How about yours?

—Joseph Morton
