High-availability computing, though also designed to maximize application and system availability, helps firms recover quickly from a crash. Fault tolerance promises continuous availability and the elimination of recovery time altogether. High-availability computing environments are a minimum requirement for firms with heavy electronic-commerce processing requirements or for firms that depend on digital networks for their internal operations.

Disaster recovery planning devises plans for the restoration of computing and communications services after they have been disrupted by an event such as an earthquake, flood, or terrorist attack. Disaster recovery plans focus primarily on the technical issues involved in keeping systems up and running, such as which files to back up and the maintenance of backup computer systems or disaster recovery services. Business continuity planning focuses on how the company can restore business operations after a disaster strikes. The business continuity plan identifies critical business processes and determines action plans for handling mission-critical functions if systems go down.

Identify and describe the security problems posed by cloud computing.

Accountability and responsibility for protection of sensitive data reside with the company owning that data, even though it is stored offsite. The company needs to make sure its data are protected at a level that meets corporate requirements. The company should stipulate to the cloud provider how its data are to be stored and processed in specific jurisdictions, according to the privacy rules of those jurisdictions. The company needs to verify with the cloud provider how its corporate data are segregated from data belonging to other companies and ask for proof that encryption mechanisms are sound. The company needs to verify how the cloud provider will respond if a disaster strikes: Will the cloud provider be able to completely restore the company's data, and how long will that take? Will the cloud provider submit to external audits and security certifications?

Describe measures for improving software quality and reliability.

Using software metrics and rigorous software testing are two measures for improving software quality and reliability. Software metrics are objective assessments of the system in the form of quantified measurements.
Metrics allow an information systems department and end users to jointly measure the performance of a system and identify problems as they occur. Metrics must be carefully designed, formal, objective, and used consistently. Examples of software metrics include:

A. The number of transactions that can be processed in a specified unit of time.
B. Online response time.
C. The number of known bugs per hundred lines of program code.

Early, regular, and thorough testing will contribute significantly to system quality. Testing can demonstrate the correctness of work and uncover the errors that always exist in software. Testing can be accomplished through the use of:

A. Walkthroughs: a review of a specification or design document by a small group of people.
B. Coding walkthroughs: once developers start writing software, these can be used to review program code.
C. Debugging: when errors are discovered, the source is found and eliminated.
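Two of the metrics listed above reduce to simple arithmetic. The sketch below (function names are illustrative, not from the text) shows how throughput and defect density might be computed:

```python
def transactions_per_second(transaction_count, elapsed_seconds):
    """Metric A: number of transactions processed per unit of time."""
    return transaction_count / elapsed_seconds


def bugs_per_hundred_lines(known_bugs, lines_of_code):
    """Metric C: number of known bugs per hundred lines of program code."""
    return known_bugs * 100 / lines_of_code


if __name__ == "__main__":
    # 12,000 transactions processed in 60 seconds -> 200.0 per second
    print(transactions_per_second(12_000, 60))
    # 18 known bugs in 4,500 lines of code -> 0.4 bugs per 100 lines
    print(bugs_per_hundred_lines(18, 4_500))
```

Because both metrics are quantified and objective, the IS department and end users can track them consistently from one release to the next.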
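Early, regular testing is easiest to sustain when the tests are automated. A minimal sketch using Python's standard `unittest` module, with a hypothetical `discount()` function standing in for code under test:

```python
import unittest


def discount(price, rate):
    """Apply a percentage discount; rejects rates outside 0..1."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)


class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(100.0, 0.25), 75.0)

    def test_no_discount(self):
        self.assertEqual(discount(19.99, 0.0), 19.99)

    def test_invalid_rate_is_rejected(self):
        # Debugging starts here: a failing case pinpoints the error's source.
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)
```

Running the suite (for example with `python -m unittest`) after every change gives the early, regular, and thorough testing the text calls for.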
