Chapter 3: Cybersecurity Legal Standards and Baseline Controls


Chetan Phull · December 12, 2019

Chapter 3 is provided below. See also our service offerings related to this chapter:
“Data Governance, Breach Planning, and Third Party Risk”; and
“Data Privacy and Ransomware Litigation”.

Special thanks to Idan Levy for his valuable legal research and editorial work in the preparation of this book.


 
 


 
 

 
 



I. Overview of Obligations

Private organizations are responsible for personal information under their control. They are obligated to implement security safeguards appropriate to the sensitivity of such information. These obligations apply to data controllers and processors alike. Moreover, the principal data controller remains responsible for the data handling practices of any third-party processor it contracts with.

[See PIPEDA, Sch. 1, Principles 4.1, 4.7; PIPEDA Report of Findings #2014-004 (Apr 23, 2014): “the Organization’s status as a third-party processor does not prevent it from being subject to the Act. The Act applies to all organizations that have personal information in their possession or custody, so long as the information was collected, used or disclosed in the course of a commercial activity that has a real and substantial connection to Canada.”; PIPEDA Report of Findings #2019-003 (Oct 16, 2019) at para. 38; OPC, “Guidelines for processing personal data across borders” (Jan 2009): “The law requires you to protect personal information while it is in the hands of a third party processor: failure to comply could result in complaints and legal action.”]

A data breach sustained by either the principal controller or a third-party processor can trigger regulatory liability, post-breach obligations, and numerous civil claims, including class actions. The relevant Canadian law is discussed in Chapter 4. The present chapter focuses on establishing the case for a legally minded breach plan, and identifying the most urgent steps toward compliance and mitigation of breach liability.

II. The Legal Cost of a Breach

The average total cost of a data breach in Canada between July 2018 and April 2019 was $4.44 million CAD. Converted to U.S. dollars, this amount falls below the global average of $3.92 million USD and well below the U.S. average of $8.19 million USD. Breach costs in general are trending upward, however, with most breaches caused by malicious actors and lost business representing the largest cost component.

[See IBM Security Intelligence and Ponemon Institute LLC, “Cost of a Data Breach Report 2019” (Jul 23, 2019) at 3, 13, 16, 21, 24, 29-36, 68, 74.]

Legal support is its own category of breach cost. In addition to dealing with regulators, a breached company may also be forced to react immediately to numerous individual legal proceedings, or class action certifications, spread over a variety of jurisdictions.

[See Douez v. Facebook, Inc., 2017 SCC 33 at paras. 4, 38; Gill v. Yahoo, 2018 BCSC 290 at paras. 8-12, 37.]

Moreover, disputing legal issues on their merits may deplete the company’s resources and negate years of brand building. Consider the simple case of an e-mail hack resulting in funds being misdirected due to fraudulent payment instructions. In such a case, determining liability may require factual discovery into whether there was consent to act on payment instructions provided by e-mail.

[Compare Du v. Jameson Bank, 2017 ONSC 2422 at paras. 64 and 78 with the Lanark Leeds case, 2019 CanLII 69697 (ON SCSM) at paras. 56-65.]

This is just the beginning. Issues on the merits can become enormously complex, tying a company up in litigation for years. With respect to regulatory proceedings, the process is exacerbated by the OPC’s occasional failure to produce findings and recommendations within the prescribed one-year period.

[See PIPEDA, s.13(1); Haikola v. The Personal Ins. Co., 2019 ONSC 5982 at paras. 12-28, where the privacy dispute spanned from Nov 2012 to Oct 2019. The OPC investigation took three years. It was followed by commencement of a class action which was complicated by a jurisdictional issue. The class action settled in principle at mediation, with settlement terms negotiated over the following six months. Thereafter, the court’s approval of the settlement was required. See also the survey of cases on privacy torts provided at subsection IV of Chapter 4.]

During the years of litigation, there also remains the risk of damages unexpectedly skyrocketing due to advancements in technology. Consider that a breach can facilitate the harvesting of encrypted data today, in order to crack such encryption in the near future with quantum technology. Such risks have been acknowledged in Canadian and U.S. cases.

[See Steve Jurvetson, “How a quantum computer could break 2048-bit RSA encryption in 8 hours” (MIT Tech. Rev., May 30, 2019), referencing Craig Gidney and Martin Ekerå, “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits” (Cornell U, arXiv:1905.09749 [quant-ph], May 23, 2019); OPC’s Equifax decision (PIPEDA Report of Findings #2019-001, Apr 9, 2019) at para. 148 re risk of future unauthorized use of personal data that has been exposed; Remijas v. Neiman Marcus Group, 794 F.3d 688 (7th Cir. 2015) at 694 re harvesting of data for future nefarious uses; In re Adobe Sys., Inc. Privacy Litig., 66 F. Supp. 3d 1197, 1214 (N.D. Cal. 2014) re “substantial risk of future harm”.]

Insurance coverage may also become an issue. Consider a breach resulting from human error, which itself stemmed from malware or a phishing attack. In such cases, it will be necessary to determine if the fraud merely involved trickery of a human agent (probably not covered), as opposed to fraud involving malicious code to facilitate a spoofing attack (potentially covered). This kind of matter can become even more complicated when multiple insurance policies are involved. Expensive factual discovery will certainly be required.

[See Apache Corp. v. GAIC, 662 Fed. Appx. 252 (5th Cir. Oct. 18, 2016); Brick Warehouse LP v. Chubb, 2017 ABQB 413 at paras. 19-25; Medidata Solutions Inc. v. FIA, 729 Fed.Appx. 117 (2d Cir. 2018); Dentons Canada LLP v. TGIC, 2018 ONSC 7311 at para. 39 onward.]

Coverage disputes can also arise in egregious ransomware cases. Consider the ongoing Mondelez v. Zurich case in the U.S., in which the insurer has taken an “off coverage” position by attributing the breach to an act of war, an automatic exclusion of coverage under the policy.

[See Brian Corcoran, “What Mondelez v. Zurich May Reveal About Cyber Insurance in the Age of Digital Conflict” (LAWFARE, Mar 8, 2019); Riley Griffin, Katherine Chiglinsky and David Voreacos, “Was It an Act of War? That’s Merck Cyber Attack’s $1.3 Billion Insurance Question” (Insurance Journal and Bloomberg, Dec 3, 2019).]

Suffice it to say, the legal costs of a breach can be significant. To make matters worse, costs are virtually impossible to assess before a breach occurs, and remain unpredictable in the midst of litigation. Depending on the nature of the data entrusted to your company, and your company’s data governance practices, your company may well be one breach away from complete disaster. While this may seem hyperbolic, the rising trend of breaches and breach costs discussed above suggests otherwise.

The good news is that the cost of a breach decreases significantly with prior contingency planning.

[See IBM Security Intelligence and Ponemon Institute LLC, “Cost of a Data Breach Report 2019” at 50-51, 68.]

III. Navigating Due Diligence Measures and Breach Liability

Although the big data space is becoming more attuned to risk management from a technical perspective, most companies still “do not know what they do not know” with respect to legal issues. Without the assistance of legal counsel, even the most cautious organization—and its board—can fall short of its duty of care with respect to cybersecurity, baseline controls, and post-breach obligations. Items to prioritize for legal due diligence include:

  • data flow audits to discover legal risks and liabilities;
  • establishment of solicitor-client privilege over aspects of a future “breach file”;
  • contracts competently negotiated with third-party data processors (e.g. cloud vendors);
  • documented audits of data practices regarding on-site system infrastructure, including third-party legacy and “black box” systems;
  • insurance coverage issues;
  • liability mitigation steps with insiders, including employee training and contractual indemnification;
  • protocol for post-breach regulatory compliance and forensic investigation;
  • pre-planned public relations messages.

After a breach occurs, having legal counsel becomes even more important. The specific legal questions affecting liability are not always intuitive. A company’s liability may arise from, or be exacerbated by, factors that are not obvious and lie beyond its immediate access and control. For example, a data controller’s post-breach legal position can be affected by:

  • an activity log in the exclusive possession of a third-party processor;
  • any haphazard steps taken immediately post-breach to rehabilitate public image, including public addresses communicating prior and ongoing steps to mitigate damage;
  • the foreseeability of the particular breach in question;
  • the reasonability of anti-breach protocols;
  • the reasonability of actual measures taken post-breach;
  • whether the facts support a theory that the third-party processor acted outside the scope of its agency, or otherwise breached its contractual obligations;
  • whether the hack was committed against a third party not bound by contract, or indemnified by contract;
  • whether a user contributed to or caused the damage sustained, and that user’s ostensible intentions;
  • whether the hack was committed by a persistent and well-funded actor (e.g. state-sponsored hacker).

Moreover, breach reporting obligations must be fulfilled in all concerned jurisdictions. Canada, the U.S., Europe, Australia, and other jurisdictions have similar but distinct breach reporting requirements. Lawyers should be on retainer to fulfill such breach reporting requirements and to deal with regulators as appropriate.

Clearly, a proactive approach to avoiding breach liability—before it accrues—is the best approach. A Chief Data Officer should be appointed. Internal cybersecurity standards should be set and strictly adhered to. Appropriate standards should be entrenched in contracts with third-party data processors. Legacy and “black box” systems should be limited. Appropriate risk transfer mechanisms should be implemented. Cyber insurance policies should be examined and negotiated.

The legal rationale for a breach plan, and the various legal issues that may arise in breach-based litigation, are discussed in Chapter 4. Before considering a larger strategy based on that discussion, certain baseline cybersecurity controls should be implemented first—quickly.

IV. Baseline Cybersecurity Controls as a Primary Legal Defence

The regulatory framework for privacy and cybersecurity was discussed broadly in subsection I of Chapter 2. Various OPC decisions have helped to clarify the legal standard for cybersecurity controls, particularly in the private sector. For example, the OPC’s decision in TJX/Winners clarified the duty to “take reasonable security arrangements for such risks as unauthorized access, collection, use, disclosure, copying, modification, disposal or destruction.”

[See OPC’s TJX/Winners decision (PIPEDA Report of Findings #2007-389, Sep 25, 2007) at paras. 68-70.]

As stated in the decision, what is “reasonable” depends on the sensitivity of the personal information, in addition to “whether the security risk was foreseeable, the likelihood of damage occurring, the seriousness of the harm, the cost of preventative measures, and relevant standards of practice.”

[See OPC’s TJX/Winners decision (PIPEDA Report of Findings #2007-389, Sep 25, 2007) at para. 71.]

From this case, it stands to reason that the threshold for “relevant standards of practice” must be high. It was held that WEP encryption was inadequate and its use was unreasonable—even though most other retailers were exposed to the same encryption vulnerability.

[See OPC’s TJX/Winners decision (PIPEDA Report of Findings #2007-389, Sep 25, 2007) at paras. 71, 75-76, 80-81, 93, 98.]

The body of OPC cases discloses a broad and evolving spectrum of inadequate cybersecurity measures, specifically with respect to:

  • documented security policies and practices;
  • password administration;
  • key and password management;
  • multi-factor authentication;
  • encryption measures;
  • security monitoring and logging;
  • audit trails;
  • use of virtual private networks;
  • network segmentation including with firewalls;
  • network activity logging;
  • virus protection;
  • timely upgrading and patch implementation;
  • accountability of third-party data processors;
  • consistency of security measures applied across pools of redundant data;
  • backup system testing.

We expect that SIM-hacks will soon gain the attention of Canadian regulators, given the increase in such hacks over the last two years. This type of vulnerability was featured in a recent U.S. case regarding cryptocurrency theft by SIM-swap attack. Moreover, as of the date of publication, we make the factual observation that many Canadian banks still permit two-factor authentication by SMS.

[Michael Terpin v. AT&T Inc. et al, No. 2:2018cv06975, Doc 29 (C.D. Cal. 2019) re $24 million crypto theft through SIM-swap attack; Sean Coonce, “The Most Expensive Lesson Of My Life: Details of SIM port hack” (Medium, May 20, 2019).]

In response to the pervasiveness of inadequate safeguards and security vulnerabilities in industry, the Canadian Centre for Cyber Security (“CCCS”) recently issued baseline controls for organizations with 499 or fewer employees. The baseline controls include an incident response plan (or “breach plan”), strong user authentication measures (discussed further in Chapter 5), employee training, and perimeter defences.

[See CCCS, “Annex A Summary of the Baseline Controls” (last updated Nov 20, 2019).]

Many of the CCCS baseline controls are generally stated, which leaves room to interpret the precise standard for adequate implementation. The body of OPC cases also points out security flaws, often without specifically stating what measures would be considered “adequate safeguards”. On this basis, we offer the following list of specific baseline controls, to provide a clearer sense of what the minimum legal standard for cybersecurity should look like:

  • implement firewall, anti-virus, and anti-malware systems;
  • implement an intrusion detection system (“IDS”) to monitor inbound and outbound network activity;
  • implement an appropriate level of IDS rigour with respect to: misuse versus anomaly detection, packet analysis at the network versus computer level, passive logging versus reactive blocking, etc.;
  • constantly monitor for artifacts an attacker may leave behind. For example, log entries for unauthorized software execution attempts, failed logins, unauthorized attempted use of administrative privileges, file and directory access;
  • log all user logins, session IDs, and whether login was achieved with two-factor-authentication, attempts to escalate credential privileges, etc.;
  • implement data loss prevention (“DLP”) and data execution prevention (“DEP”) systems and policies;
  • create regular data images, securely off-site and on an ongoing basis, of data relating to:
    • system integrity, for the purposes of an emergency quick restore;
    • network activity, for the purposes of ensuring prior log entries are not being altered;
  • monitor data caches created for passive off-site streaming;
  • monitor and log all inbound DNS queries (recursive and non-recursive);
  • encrypt all sensitive data, including credentials;
  • use only encrypted channels with users including the “https” protocol for web traffic;
  • perform regular “white hat” penetration tests and backdoor searches;
  • implement blue-team versus red-team exercises with trusted third-party hackers;
  • create a master kill switch for critical systems and data stores, as a final resort against denial of service (“DOS”) attacks, distributed denial of service (“DDOS”) attacks, and various other attacks designed to overwhelm system resources;
  • require complex user passwords with regular password changes;
  • implement random delays between failed login attempts, to limit brute force login penetration;
  • do not use default administrative credentials (for example, prevent the use of admin/admin as the username and password);
  • do not permit anonymous or guest logins, or at the very least limit the read/write access of these classes without permitting remote read/write access escalation;
  • do not enable any code execution via URL;
  • do not store or reproduce login credentials on repositories;
  • require two- or three-factor-authentication for all users, but do not use SMS-based two-factor-authentication (to avoid penetration by SIM hacks);
  • when possible, restrict access to “white lists” of users that regularly comply with KYC practices, exclusively trusted IP addresses, and/or exclusively trusted client-side MAC addresses;
  • for businesses with significant on-site traffic, implement a physical chassis intrusion detector on all critical systems;
  • continually monitor the technical integrations between industrial control systems and legacy enterprise systems.
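To make two of the above controls concrete (failed-login logging, and random delays between failed attempts to limit brute-force penetration), the following Python sketch illustrates a minimal login check. The credential store, usernames, hashing scheme, and delay interval are illustrative assumptions only, not a production design:

```python
import hashlib
import hmac
import logging
import random
import time

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("auth")

# Hypothetical in-memory credential store. A production system would use a
# salted, deliberately slow hash (e.g. bcrypt or argon2) and persist its
# audit log off-site so prior entries cannot be altered by an attacker.
_USERS = {"alice": hashlib.sha256(b"correct horse battery staple").hexdigest()}


def check_login(username: str, password: str) -> bool:
    """Verify credentials; on failure, log the attempt and sleep for a
    random interval to slow brute-force guessing."""
    supplied = hashlib.sha256(password.encode()).hexdigest()
    expected = _USERS.get(username, "")
    # Constant-time comparison avoids leaking information via timing.
    ok = bool(expected) and hmac.compare_digest(supplied, expected)
    if not ok:
        # Failed logins are exactly the kind of attacker artifact that
        # audit trails should capture.
        log.warning("failed login attempt for user %r", username)
        time.sleep(random.uniform(0.5, 1.5))  # random delay limits brute force
    return ok
```

A real deployment would also lock accounts after repeated failures and ship the resulting log entries to tamper-evident off-site storage, consistent with the off-site imaging controls listed above.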

There are numerous additional measures that should also be implemented, from both legal and technical perspectives, to minimize breach risks and associated liability. Technical and legal assessments, effective contract negotiations with service vendors, and establishing early positions with third parties in writing are all part of the process.

V. Conclusion

Big data counsel should be hired early to assess your company’s legal baseline for cybersecurity, proactively minimize liability, and mitigate losses post-breach. The legal rationale for breach preparation, and the specific legal issues that may arise post-breach, are discussed in Chapter 4.


The copyright and disclaimer, as contained in the publication page of Big Data Law in Canada, applies to the content of this webpage.