Even though the level of malicious – or ‘bad’ – bots as a proportion of internet traffic seems to have decreased slightly over the last year (2018-2019), the harmful impacts they have on business operations continue apace.
Bad bots are also diversifying with respect to the commercial sectors they target. According to bot mitigation specialist Distil Networks, just about every business and industry vertical now has its own bad bot problem and bot operator syndicates.
Bad bots are software programs created or used by cyber threat actors to automate their various attack plans. Business operations are particularly exposed to this threat because bad bots have a deleterious effect across a range of commercial activities. They interact with applications in the same way a legitimate user would. They enable cyber attackers, competitors and fraudsters to perform an array of malicious activities. And because it is largely an automated phenomenon, there’s no let-up in the problems they cause – which places additional pressure on already stretched cyber defences.
If a 2018 report from the Ponemon Institute and Radware is correct, more than 52% of all web traffic now emanates from automated sources such as bots – of both good and bad variety. For some businesses, it can constitute as much as 75% of the traffic visiting their websites.
Much of this automated traffic is categorically benign – ‘good’ bots. It provides critical customer services and represents standard engagement models, such as search engine crawlers, chatbots, virtual assistants, and suchlike. Bad bots, however, can be used for a range of nefarious purposes, such as stealing (or ‘scraping’) website information, committing fraud, or skewing performance metrics. And in the course of doing all this, they disrupt a business’s ‘good’ customer traffic.
Bad bots affect all types of applications, including web, mobile, and APIs. Although IT security leaders are on the front line of deploying ways to deal with bad bots, business leaders should also be concerned about their impact, for several key reasons.
First, bad bots compromise the security of enterprise applications. Malicious online attackers can use bots to gain access to applications, gain proprietary knowledge about a business, and then misappropriate commercially-valuable data. Second, bad bots degrade internet availability and performance. Botnets made up of thousands of bots make it easy to mount distributed denial of service (DDoS) attacks. These attacks cause critical applications to experience lowered performance and availability. They can even bring down critical support systems.
The mere presence of bad bot web traffic mixed in with legitimate traffic can cause performance issues for online customers. In one case, a company that eliminated bad bot traffic saw web traffic decrease by 66%, while their website page speed and performance doubled (as Business Computing World reported in 2017). Bad bots also distort the information senior managers use to make business decisions. In the digitalised marketplace, enterprises make many decisions about how best to serve customers by using data about who they are, when they buy, and what they buy. Such calls are often made ‘on the fly’ and informed by data streams analysed in real time.
Marketing teams allocate bigger advertising budgets to the last site a customer visited before purchasing a firm’s products or services; and customer experience (or ‘CX’) specialists use data about customer behaviour to drive engagement. Bad bots that interact with a company’s applications alongside their customers skew this data, causing these decisions to be misinformed or plain wrong, and analytical insights and opportunities to be missed.
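To illustrate how bot traffic can skew attribution, consider a minimal sketch (with hypothetical session data and a hypothetical `is_suspected_bot` flag) in which last-touch attribution is computed with and without suspected bot sessions:

```python
from collections import Counter

# Hypothetical clickstream: (session_id, referrer, purchased, is_suspected_bot)
sessions = [
    ("s1", "search",      True, False),
    ("s2", "affiliate-x", True, True),   # bot traffic inflating one affiliate
    ("s3", "affiliate-x", True, True),
    ("s4", "email",       True, False),
]

def last_touch_attribution(sessions, include_bots):
    """Count purchases per last-touch referrer, optionally excluding suspected bots."""
    counts = Counter()
    for _, referrer, purchased, is_bot in sessions:
        if purchased and (include_bots or not is_bot):
            counts[referrer] += 1
    return counts

print(last_touch_attribution(sessions, include_bots=True))
print(last_touch_attribution(sessions, include_bots=False))
```

With bot sessions included, ‘affiliate-x’ looks like the top-performing channel and would attract more budget; filtered, it disappears entirely. Real analytics pipelines infer the bot flag from signals such as user agent, request rate, and behaviour, rather than receiving it ready-made.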
How the bot plague bites business
As Globaldots’ 2019 Bad Bot Report wryly points out, for a business whose websites, mobile apps, or APIs are the target of malicious bots, the adverse impacts pile up one against another. Not only do such targeted enterprises have to deal with the competitive pricing pressures that result from bad bot actions like data scraping – more about that later – but they also must maintain infrastructure uptime and redundancy so that real customers aren’t inconvenienced by the invasive traffic. In addition, they also suffer from skewed decision-making metrics, because their web traffic has been ‘polluted’ by bad bots, as Globaldots puts it.
Meanwhile, as Distil Networks’ recent bad bots study points out, mitigation of bad bot intrusion is just as fiddly and resource-draining as the bots themselves. For instance, one method would be to positively ID every single website visitor – human and/or bot. Sales executives will know that a priori requests for identity validation can inhibit or deter customer engagement, so this method has limited appeal, even if proven effective.
Malicious attackers make use of bad bots to actively compromise customer touchpoints to ruin customer experience and commit fraud. Altogether, these bots can create a negative brand perception for your company. Therefore, organisations of all types and sizes should be prepared to defend against bot attacks in all their forms. Airlines, financial services, and healthcare are among the sectors that malicious bots target the most, analysts say. Between 2016 and 2017, bad bots also caused an estimated €5.83bn ($6.5bn) in corporate losses from digital advertisement fraud, reports a study by the Association of National Advertisers (Bot Baseline: Fraud in Digital Advertising).
Many senior executives will be aware of the threat posed by bots, but will be less familiar with the full gamut of bot types and bot attack vectors – i.e., the paths or means by which attacks can be channelled. When reviewing the following checklist, bear in mind that it’s quite feasible for a company to be attacked by bots across several vectors simultaneously.
Price scraping
Price scraping means ‘scraping’ (copying) price information from an e-tailer’s webstore. It is most common in sectors where product lines are easy to compare and purchase decisions are usually price-sensitive. Armed with real-time pricing data provided by the bots, a price-scraping perpetrator gains an advantage by dynamically adjusting its own product prices to match or undercut its competitors.
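One common detection heuristic is request velocity: a scraper visits far more distinct product pages per minute than a human shopper ever would. The following is a minimal sketch over a hypothetical access log, with an arbitrary threshold:

```python
from collections import defaultdict

# Hypothetical access log entries: (client_ip, product_url, unix_timestamp)
log = [("10.0.0.5", f"/product/{i}", 1000 + i) for i in range(120)]  # 120 pages in ~2 minutes
log += [
    ("192.168.1.9", "/product/7", 1000),   # a human revisiting one page
    ("192.168.1.9", "/product/7", 1400),
]

def flag_scrapers(log, max_pages_per_minute=30):
    """Flag clients whose distinct-page request rate looks automated."""
    pages, times = defaultdict(set), defaultdict(list)
    for ip, url, ts in log:
        pages[ip].add(url)
        times[ip].append(ts)
    flagged = []
    for ip in pages:
        span_minutes = max(1, (max(times[ip]) - min(times[ip])) / 60)
        if len(pages[ip]) / span_minutes > max_pages_per_minute:
            flagged.append(ip)
    return flagged

print(flag_scrapers(log))  # ['10.0.0.5']
```

Real bot-mitigation products combine many such signals (headless-browser fingerprints, mouse movement, IP reputation), since sophisticated scrapers deliberately throttle themselves below obvious rate thresholds.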
Content scraping
Content scraping is the use of bots to duplicate proprietary, copyrighted, or trademarked online content, such as directories or reference guides, and then reuse it for illegitimate purposes. It can be characterised as intellectual property theft or plagiarism. The practice can be damaging to websites that invest resources in the aggregation and monetisation of big databases – online local business listings or online product catalogues, for example. If the scraped content is made freely available in the public domain, the original data owner’s business model is undermined; and if the scraped content is used for spam or email fraud, their market reputation is damaged.
Denial of service attacks
According to Neustar’s Global DDoS Attacks Insights Report (2017), a DDoS attack at peak times can cost a targeted enterprise at least $100,000 per hour in lost revenue. The cost of undermined customer and advertiser relationships is harder to quantify, but likely causes just as much damage.
Bot-infected devices exhaust resources with DDoS attacks. Ransom DDoS attacks – where companies are extorted for protection money – are also on the rise, Neustar says. Forrester’s Stop Bad Bots From Killing Customer Experience report notes that bot-infected devices can strain IT security resources with DDoS attacks, and weaken their ability to guard against other forms of cyber assault.
The proliferation of IoT devices and ‘bot-for-hire’ services (bad and good) has made DDoS an attractive attack method for cyber-attackers. They launch DDoS attacks by infecting connected devices with bots. They then direct them to disrupt routine customer traffic and applications. The Mirai botnet targeted domain name service provider Dyn, in a DDoS attack that made the websites of many Dyn customers inaccessible. Dyn lost up to 8% of its customer base as a result, some reports suggested.
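A standard first line of defence against volumetric floods is per-client rate limiting. As a minimal sketch (not a production implementation), a token bucket lets each client burst up to a capacity, then throttles it to a steady rate; timestamps here are passed in explicitly for clarity, where a real limiter would read a monotonic clock:

```python
class TokenBucket:
    """Per-client token bucket: allow `rate` requests/second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # bucket starts full
        self.last = now

    def allow(self, now):
        # Refill tokens for the time elapsed since the last request, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10, now=100.0)
# A burst of 25 requests arriving at the same instant: only the first 10 pass.
results = [bucket.allow(now=100.0) for _ in range(25)]
print(sum(results))  # 10
```

A real DDoS defence operates far upstream of application code (at CDNs, scrubbing centres, and network edges), but the same allow/deny logic underlies most of those layers.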
Denial of inventory
Denial of inventory (known also as ‘inventory hoarding’) causes product items to be automatically held in online shopping trolleys without intention to purchase. With legitimate buyers prevented from purchasing the apparently ‘out of stock’ items, the targeted retailer loses revenues from sales to actual customers — with bots often picking the retailer’s most popular products. As well as ongoing loss of sales, if these attacks happen often enough the seemingly perpetual absence of inventory can undermine the website’s credibility and quash repeat custom.
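A common mitigation is to expire cart reservations after a time-to-live, so hoarded stock returns to the shelf. A minimal sketch, with hypothetical SKUs and a hypothetical 10-minute TTL:

```python
class CartHold:
    """Release unpurchased cart reservations after a TTL so bots cannot
    hoard inventory indefinitely (illustrative sketch only)."""

    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self.holds = {}  # sku -> (quantity_held, time_of_hold)

    def reserve(self, sku, qty, now):
        held, _ = self.holds.get(sku, (0, now))
        self.holds[sku] = (held + qty, now)

    def available(self, sku, stock, now):
        held, when = self.holds.get(sku, (0, now))
        if now - when > self.ttl:          # hold expired: release it
            held = 0
            self.holds.pop(sku, None)
        return stock - held

cart = CartHold(ttl_seconds=600)
cart.reserve("sneaker-ltd", 50, now=0)                   # bot grabs all 50 units
print(cart.available("sneaker-ltd", stock=50, now=60))   # 0 — looks sold out
print(cart.available("sneaker-ltd", stock=50, now=700))  # 50 — hold released after TTL
```

The trade-off is choosing a TTL short enough to blunt hoarding but long enough not to frustrate genuine shoppers who pause mid-checkout.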
Card testing fraud
In this form of bad bot attack, cyber criminals first test stolen credit card details by making small online purchases on smaller, more vulnerable ecommerce sites. Checking the validity of the card details in this way allows fraudsters to go largely unnoticed by fraud detection solutions.
Once they confirm the credit card is valid, they proceed to make higher-value purchases with larger online retailers. The fraudster now appears to be a recognised customer, so there’s a chance the order will not be flagged as suspicious to the legitimate card holder.
Typically, criminals use bots to first test the card information, then target merchant sites whose automated responses provide decline details. With this information, payment protection specialist Verifi explains, fraudsters can adjust the credit card details to increase their chances of success. For instance, when a merchant website indicates that a card’s expiration date is incorrect, a fraudster can use the Dark Web and other tactics to determine the correct expiration date. These bot-driven transactions cause losses to retailers through chargebacks, logistics costs – and lost shipped goods.
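The tell-tale signature of card testing is a cluster of very small, declined transactions across many distinct card numbers from one source. A minimal sketch of that velocity check, over hypothetical transaction data and arbitrary thresholds:

```python
from collections import defaultdict

def flag_card_testing(transactions, small_amount=5.0, min_distinct_cards=3):
    """Flag source IPs with small declined transactions across several
    different card numbers — the classic card-testing signature."""
    small_declines = defaultdict(set)   # ip -> distinct declined card numbers
    for ip, card, amount, approved in transactions:
        if amount <= small_amount and not approved:
            small_declines[ip].add(card)
    return [ip for ip, cards in small_declines.items()
            if len(cards) >= min_distinct_cards]

# Hypothetical transactions: (source_ip, card_number, amount, approved)
txns = [
    ("203.0.113.7",  "4111-...-0001", 1.00, False),
    ("203.0.113.7",  "4111-...-0002", 1.00, False),
    ("203.0.113.7",  "4111-...-0003", 1.00, False),
    ("198.51.100.2", "4111-...-0009", 1.00, False),  # a single decline: normal
]
print(flag_card_testing(txns))  # ['203.0.113.7']
```

Production fraud engines add device fingerprints, BIN-range analysis, and cross-merchant intelligence, since testers rotate IPs to evade per-IP counts.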
Credential stuffing
Credential stuffing uses bad bots to make repeated account access attempts by rapidly ‘stuffing’ stolen credentials – username and password combinations – into login fields. When the logins succeed, attackers take over the accounts and use them for nefarious purposes. Because so many account owners reuse the same credentials across accounts, the success rate and pay-off for attackers can be high, while the bots do all the grunt work. Many organisations do not realise, says Martin McKeay, Senior Security Advocate at Akamai, that credential abuse and account checkers “may actually outnumber legitimate login attempts by a factor of greater than four-to-one”.
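One simple server-side signal for credential stuffing is the number of distinct usernames attempted from a single source: a human mistyping a password retries one account, whereas a stuffing bot cycles through thousands. A minimal sketch with hypothetical login data:

```python
from collections import defaultdict

def flag_credential_stuffing(login_attempts, max_distinct_users=5):
    """Flag source IPs that attempt logins against many distinct accounts —
    a single human rarely tries more than a handful of usernames."""
    users_per_ip = defaultdict(set)
    for ip, username, success in login_attempts:
        users_per_ip[ip].add(username)
    return [ip for ip, users in users_per_ip.items()
            if len(users) > max_distinct_users]

# Hypothetical log: one bot cycling 50 stolen credentials, one normal login.
attempts = [("203.0.113.9", f"user{i}", False) for i in range(50)]
attempts += [("198.51.100.4", "alice", True)]
print(flag_credential_stuffing(attempts))  # ['203.0.113.9']
```

Attackers counter this by distributing attempts across botnets, which is why layered defences – breached-password screening, multi-factor authentication, and per-account lockouts – matter as much as per-IP checks.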
Spam relay
F5 Labs defines ‘spam relay’ as malicious actions that involve any type of unwanted ‘spammy’ behaviour. It includes filling inboxes with unwanted email containing malicious links, writing and posting bogus product reviews, creating fake social media accounts to post false or biased content, racking up page views (for example, on a YouTube video) or followers (such as on Twitter or Instagram), writing provocative comments on forums or social media sites to stir up controversy, vote rigging (of which more below), and so on.
Click fraud
Click fraud typically involves a form of advertising fraud – the fraud being that a bad bot, not a human, is clicking on an advert, and therefore has no intention of purchasing the advertised product or service. In reality, the goal is to boost revenue for a website owner (or other fraudster) who gets paid based on the number of adverts clicked. Such bots skew the data reported to advertisers, warns F5 Labs in its article ‘Good Bots, Bad Bots, and What You Can Do About Both’. They also cost companies money, because those companies end up paying for non-human clicks – and, of course, derive no revenue from the fake ‘shoppers’. Click fraud can also be used by companies to drive up the advertising costs of their competitors. Click fraud accounted for wasted spending estimated at more than €1.24bn in 2016, and is set to grow, reckons Forrester’s Stop Bad Bots From Killing Customer Experience report. Forrester further states that manipulation of video traffic constitutes the largest form of click fraud: in 2016, a security company exposed a bot that ‘watched’ video advertising to ‘earn’ around €2.7m-€4.5m per day.
Intelligence harvesting
Again according to F5 Labs’ ‘Good Bots, Bad Bots’ article, intelligence harvesting involves scanning web pages, internet forums, social media sites, and other content to find legitimate email addresses and other information that attackers can later use for spam email, fraudulent advertising campaigns, or even phishing attacks. Many organisations instruct their employees not to include personalised email addresses on webpages or in presentations posted online, if avoidable.
Checkout abuse
Some bots may actually purchase products and services, but as they do, they disrupt legitimate customer engagement. In October 2017, checkout-abuse bots purchased some 30,000 tickets to the musical Hamilton from ticket sales and distribution company Ticketmaster by spoofing unique customer identities (some bots can create fake new accounts). Similarly, so-called ‘sneakerbots’ are specialised bots used to buy up limited-edition sneakers (or trainers). The highly prized footwear is then resold on auction sites at inflated prices. This might sound small fry, but (as Forrester points out) it’s driven by a high-value resale market that was estimated to be worth more than $1bn in 2016 – or so the Financial Times has claimed (22/11/18). Sneakerbots are available for prices as low as €8-€9 for browser extensions, and up to €400-€500 for standalone software programs.