We thus consider the following fraud lifecycle.

1. Design flaw: A vulnerability may be introduced into a system during the design process, as with the vulnerabilities in the EMV payment card protocols.

2. Implementation flaw: A vulnerability may alternatively be introduced by careless implementation, as when programmers fail to check the length of input strings, leading to buffer-overflow exposures.

3. Vulnerability discovery: An exploitable flaw is discovered. The discoverer may be a responsible researcher who reports it to the vendor, or an attacker who uses it directly (a zero-day exploit).

4. Patching: The vendor patches the vulnerability. In the case of an online service such as Google, a software change on the server can be made at once; in the case of an operating system it typically means shipping a monthly product update.

5. Post-patch exploit: The majority of exploits involve flaws for which patches are available, but on machines whose owners haven't patched them. Many users don't patch quickly (or at all), and many attackers reverse-engineer patches to discover the flaws they were designed to fix.
6. Botnet recruitment: Many exploited machines are recruited to botnets, networks of machines under the control of criminals that are used for criminal purposes (sending spam, hosting phishing websites, doing denial-of-service attacks, etc.).

7. Bot discovery and disinfection: Infected machines are identified (because they are sending spam, hosting illegal websites, etc.) and the ISP (if following best practice) then takes them offline.

8. Asset tracing and recovery: Where criminals have succeeded in taking over a citizen's bank account and start to transfer money out, typically to 'mules' who launder it, the banks' fraud-detection systems notice this and freeze the account.

A proper policy analysis of cyber-crime needs to consider all these steps. System vendors make socially suboptimal protection decisions because of wrong incentives: security isn't free, and they will provide less of it than they should if privacy laws aren't enforced properly, or if the costs of fraud fall on others. Ensuring that an adequate amount of security research gets done, and that most vulnerabilities are reported responsibly to vendors rather than sold to criminals, is also a matter of (sometimes complex) incentives.

Patching introduces further tensions: an operating-system vendor might like to patch frequently, but as patches can break application software, this would impose excessive costs on other stakeholders (including customers who write their own application software). It would be ideal if users who don't maintain their own software patched quickly, but often security fixes are bundled with upgrades that many customers don't want.

Botnet recruitment would be much harder if popular applications such as browsers had more usable security; yet many of the existing mechanisms appear designed by techies for techies, which raises a number of liability and even discrimination issues. Many machines