For years it’s been obvious that online social services of all kinds are beset by a bot problem. I work on a cybersecurity product that can sit inline and watch bot traffic directly, and too often we’ve had to break the news to a young company that the vast majority of its traffic is bots logging into fake accounts.
Unsurprisingly, there is a lot that people don’t understand about how the bot ecosystem works. Some bots perform valuable services, like scraping open reservation times for your favorite restaurant. Some perform a legitimate task but do so in a rogue manner that creates a security risk. And many, many more are simply up to no good, and they’re easier to use than you might think.
There is one service where this problem has been most obvious due to the company’s fame and massive global reach—the social media platform now known as X, which most people still refer to as Twitter. Since I’ve been living and breathing this problem for years, I decided to take a closer look under the proverbial hood.
The black market for bot influence
Historically, Twitter has been a social economy in which the currency is users and their corresponding views, clicks, and likes. A person or organization with a large flow of those interactions is what we generally call an influencer.
On Twitter, the pressure to accumulate that currency is intense. Celebrities, pundits, corporate communications departments, and political campaigns all need more followers to accomplish their goals, and the success of social media teams is gauged by exactly these metrics.
Bots on Twitter prey on this obsession with followers and follower counts. So it shouldn’t be surprising that swarms of bots have arisen over time, from a variety of sources and with a variety of motivations. There are entire online marketplaces for buying fake followers, and bad actors exploit this ecosystem.
This of course creates many flavors of adverse consequences. One is inadvertent advertising fraud, with companies often paying millions for ads that are not being seen by humans.
Another is the nature of influence itself. Elections can be won or lost by a few thousand votes. Programmatic control of social media can be, and has been, used to sway public opinion: to bury truths, amplify lies, and attack opponents.
And of course, the number of users directly correlates to the valuation of the service itself. During the years when Twitter was publicly held, the SEC required it to report on its user base, and those user numbers affected the company’s stock price.
This created a perverse incentive to quietly ignore the bot problem: nobody wanted to be the engineer at Twitter who proved that accounts were fake.
An exercise in profile spelunking
Fascinated by this dynamic, I decided to see for myself how hard it was to build a base of followers and gain influence on Twitter. I created an account called Ted and invested in some amplification. After spending less than $1,000 in a little over a month, I gained more than 100,000 followers.
These followers were all purchased. I used a service called SMMstores.in, which is just one of dozens of marketplaces for fake followers. The followers it delivered were clearly fake, using the same photos and similar usernames. For a small premium, the marketplace also offered so-called “sticky followers”: more sophisticated fake accounts that are able to evade basic defenses. I added some of those to the mix as well.
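How obvious are those fakes? Here is a minimal Python sketch of the two giveaways I saw: shared profile photos and templated usernames. The follower records, field names, and similarity threshold are all hypothetical, and real detection pipelines rely on far richer signals than this.

```python
# Hypothetical follower records; real data would come from the platform's API.
import hashlib
from difflib import SequenceMatcher

followers = [
    {"username": "jane_doe8841", "photo_bytes": b"<stock photo A>"},
    {"username": "jane_doe8842", "photo_bytes": b"<stock photo A>"},
    {"username": "jane_doe8843", "photo_bytes": b"<stock photo A>"},
    {"username": "real_person",  "photo_bytes": b"<unique photo>"},
]

# Signal 1: identical profile photos. Hashing raw bytes catches exact duplicates;
# production systems use perceptual hashes to catch near-duplicates as well.
photo_counts = {}
for f in followers:
    digest = hashlib.sha256(f["photo_bytes"]).hexdigest()
    photo_counts.setdefault(digest, []).append(f["username"])
duplicates = {h: names for h, names in photo_counts.items() if len(names) > 1}
print("accounts sharing a photo:", duplicates)

# Signal 2: templated usernames that differ by only a character or two.
def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, a, b).ratio() >= threshold

names = [f["username"] for f in followers]
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:] if similar(a, b)]
print("near-identical username pairs:", pairs)
```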
Given that Twitter allows its human users only two accounts, one for work and one for personal use, it may seem strange that these marketplaces can create so many accounts. But as it turns out, automating account creation is not that difficult.
With a couple hundred lines of code, a savvy programmer can build a user bot and an account-creation script in a few days, one that rotates IP addresses often enough to overcome basic defenses. And with a bit more effort, they can overcome more sophisticated defenses, too.
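To make that concrete, here is a minimal sketch of the proxy-rotation pattern such scripts lean on, which is exactly why per-IP rate limiting alone is a weak defense. The proxy addresses (drawn from a documentation IP range) and the target URL are placeholders, not working endpoints.

```python
# A minimal sketch of proxy rotation; addresses and URL are placeholders.
import itertools
import requests

# A rotating pool of proxies. Commercial "residential proxy" services sell
# access to thousands of these, so each request appears to come from a
# different home IP address.
proxy_pool = itertools.cycle([
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
])

def fetch_via_next_proxy(url: str) -> int:
    proxy = next(proxy_pool)
    # Each call exits through a different IP, so a per-IP counter on the
    # server never sees enough traffic from one address to trip a limit.
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    return resp.status_code

for _ in range(3):
    print(fetch_via_next_proxy("https://example.com/signup"))
```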
The X era and ‘Not a Bot’
Since Elon Musk purchased Twitter (and later renamed it X), the company has been privately held, so there is much less public disclosure of how the bot problem is being handled. Musk has publicly acknowledged the challenge, however, and the new regime appears to have had some success: bot activity on X has indeed declined.
As of now, my Ted account has dropped from the 100,000 followers it had two years ago to 11,800 since Musk took over. Ted’s remaining followers look more real; they have distinct usernames and distinct pictures. Interestingly, while most of the obviously fake followers were cleaned out, the more sophisticated sticky followers remained.
More recently, Musk announced a new step to curb spam and bot activity on the platform. The “Not A Bot” program is actually pretty simple: X will charge users a nominal fee of $1 per year to use the site’s key features.
This is designed to eliminate the profit motive for services that may be creating millions of bot accounts per year, and in theory it should be effective against less sophisticated bots; the back-of-the-envelope arithmetic below shows why. For more sophisticated bots, visibility into web traffic is the only real solution.
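Using the numbers from my Ted experiment, the economics look roughly like this. Marketplace pricing varies widely and a single bot account can be resold to many buyers, so treat this strictly as an illustration.

```python
# Back-of-the-envelope economics of the $1 fee, using the Ted numbers above.
followers_bought = 100_000
amount_spent = 1_000  # dollars, spent over a little more than a month

revenue_per_follower = amount_spent / followers_bought
print(f"marketplace revenue per fake follower: ${revenue_per_follower:.2f}")  # $0.01

annual_fee = 1.00  # dollars per account per year under Not A Bot
sales_needed = annual_fee / revenue_per_follower
print(f"resales needed per account per year just to break even: {sales_needed:.0f}")  # 100
```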
While X is better at attacking the bot problem than Twitter was, it still has work to do. The measures taken so far show that mass-produced, low-quality bots are relatively easy to detect and remove, but higher-quality bots from well-funded, more sophisticated sources will be more difficult.
While would-be influencers like Ted may find their influence diminished by measures like Not a Bot, cybercrime is a lucrative business, and today we know that nation-states are also trafficking bots for influence. A $1 annual fee is unlikely to deter these more sophisticated actors from paying for that blue check.
What could help? More rigor around identity, possibly as stringent as ID verification for blue-check accounts, could be one effective tool. But that is painful to execute across millions of users, and it could also have the undesirable effect of eliminating the bots that perform valuable functions, like scraping the site for content and search results. In many ways the site would lose its value if the good work done by those good bots were to disappear.
Solving the bot problem with minimal friction for customers is really a matter of visibility into client-side signals that reveal the nature of the site’s traffic. With the right tools, even the most sophisticated bots stand out; the sketch below gives a flavor of how such signals can be scored. Ultimately, X will need to invest in a more modern approach to bot mitigation if it is to truly fix the problem.
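As a rough illustration of what “client-side signals” means in practice, here is a minimal Python sketch that scores a visitor using a handful of well-known browser-automation tells. The signal names, weights, and threshold are hypothetical; production systems collect hundreds of signals (canvas fingerprints, timing jitter, input events) and use trained models rather than a hand-tuned score.

```python
# Hypothetical server-side scoring of signals collected from the client.
from typing import Any, Dict

def bot_score(signals: Dict[str, Any]) -> int:
    score = 0
    if signals.get("navigator_webdriver"):        # flag set by Selenium/Playwright drivers
        score += 50
    if "HeadlessChrome" in signals.get("user_agent", ""):
        score += 30
    if signals.get("plugins_length", 0) == 0:     # headless browsers often report no plugins
        score += 10
    if signals.get("mouse_events", 0) == 0:       # no human pointer movement observed
        score += 10
    return score

visitor = {
    "navigator_webdriver": True,
    "user_agent": "Mozilla/5.0 ... HeadlessChrome/118.0",
    "plugins_length": 0,
    "mouse_events": 0,
}
# Above some threshold (say, 70), the visitor gets a challenge or a block.
print("score:", bot_score(visitor))
```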