The personal information that was accessed by an unauthorized third party varied by individual and may have included first and last name, phone number, email address and physical address, the company said in a Thursday (Nov. 13) post on its website.
The incident did not involve Social Security numbers or other government-issued identification numbers, driver’s license information, or bank or payment card information, according to the post. It did not affect users of the company’s Wolt or Deliveroo platforms.
DoorDash said in the post: “We have no indication that affected personal information has been misused for fraud or identity theft at this time.”
The social engineering scam that enabled the data breach targeted one of DoorDash’s employees, the company said.
“The response team identified the incident, shut down the unauthorized party’s access, started an investigation and referred the matter to law enforcement,” the company said in the post.
DoorDash said that following the incident, it deployed new enhancements to its security systems, implemented additional training and awareness for its employees around this type of scam, and brought in an external firm to assist in its investigation.
The PYMNTS Intelligence report “Vendors and Vulnerabilities: The Cyberattack Squeeze on Mid-Market Firms” found that social engineering attacks are a growing problem for middle-market companies.
The report found that 87% of mid-market firms were at least somewhat concerned about social engineering attacks targeting payments.
Another PYMNTS Intelligence report, “The State of Fraud and Financial Crime in the U.S. 2024: What FIs Need to Know,” found that social engineering fraud had increased 56% in the previous year.
The report found that social engineering scams exploit human psychology rather than technological loopholes, with fraudsters relying on “customer-centric” tactics and leveraging trust to bypass the robust security systems financial institutions have built around digital payments.
PYMNTS reported in October that artificial intelligence (AI) is making social engineering scams faster, cheaper and more convincing. For example, AI-generated voices that are now indistinguishable from genuine ones are enabling more persuasive scams.