
Ethics in Research (Part II)

In a previous post, I wrote about an example of academic misconduct from the Office of Research Integrity within the U.S. Department of Health & Human Services. In this post, I want to look a little closer to home — ethics in my home field, computer engineering.

Codes of Ethics

Computer engineering, like most engineering disciplines, has multiple professional organizations that publish codes of ethics for practitioners, among them the ACM and the IEEE.

While the specifics of each code differ, they share a number of common principles, including responsibilities to:

  • Consistently act in the public interest
  • Avoid harm, both to society and to employers
  • Commit to honesty and integrity
  • Maintain a high degree of professionalism in all technical work
  • Ensure fairness and reject discrimination in all of its forms
  • Continue learning and improving one’s skills

Interestingly, the codes occasionally differ on certain points: the ACM's Code of Ethics, for example, explicitly stresses the need to respect privacy, while the IEEE's code makes direct reference to rejecting bribery. These idiosyncrasies perhaps stem from the slightly different memberships of each organization: ACM members primarily work with software, where privacy concerns abound, while IEEE members work across a wider variety of fields, including power and industrial engineering, where concerns about graft can loom larger than violations of privacy.

Ethics in Computer Security

While many areas of computer engineering research carry weighty ethical concerns, I want to focus on one that I'm a bit more familiar with: computer security research. In recent years, a number of vulnerabilities have come to light that affect millions of computers across the globe. Some have exposed vast troves of consumer data to malicious actors; others remain more theoretical in nature and present opportunities for the community to learn and grow. A central ethical concern across the board, however, is disclosure: namely, what responsibility do computer security researchers have to disclose the flaws they discover?

In the paragraphs that follow, I’ll briefly discuss two recent vulnerabilities, how they were disclosed to the public, and the ethical considerations associated with both.

Meltdown logo, free to use under CC0 (see https://meltdownattack.com/ for details).

Case 1: Spectre/Meltdown

In July 2017, a researcher at Google discovered two major hardware flaws that affect millions of computers around the globe. In a nutshell, these two flaws, dubbed Spectre and Meltdown, exploit a processor feature called speculative execution to break the normal isolation barriers between programs, allowing one program to observe and steal information (e.g., passwords and other sensitive data) from another. The fact that both flaws reside in hardware significantly increases their severity: completely mitigating them requires replacing every affected computer, an obviously costly and impractical endeavor. Instead, researchers devised several software techniques that partially mitigate both vulnerabilities; these fixes are imperfect, but far cheaper and more feasible.
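To make the mechanism a bit more concrete, below is a minimal, illustrative C sketch of the cache-timing covert channel that Spectre- and Meltdown-style attacks use to smuggle data out: the attacker never reads the secret directly, but infers it from how quickly memory loads complete. This is only a demonstration of the timing primitive, not an exploit and not the researchers' actual proof-of-concept code; the array sizes, the probe and secret names, and the use of x86 intrinsics are my own assumptions for the sake of illustration.

    /* Illustrative sketch (not the published proof-of-concept): recovering a
     * byte value through cache timing alone. Requires an x86-64 compiler for
     * the _mm_clflush and __rdtscp intrinsics. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    static uint8_t probe[256 * 4096];    /* one cache line per possible byte value */

    /* Time a single memory load with the timestamp counter. */
    static uint64_t time_access(volatile uint8_t *addr)
    {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                     /* the load being timed */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void)
    {
        int secret = 42;                 /* stands in for a byte the attacker cannot read directly */

        /* Flush every probe line so all of them start out uncached (slow). */
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * 4096]);

        /* "Transmit": touching one line pulls it into the cache. In a real
         * attack this access happens speculatively, on data the program was
         * never architecturally allowed to read. */
        (void)*(volatile uint8_t *)&probe[secret * 4096];

        /* "Receive": the line that now loads fastest reveals the value.
         * (A real attack probes in a scrambled order to defeat the prefetcher.) */
        uint64_t best_time = UINT64_MAX;
        int best_guess = -1;
        for (int i = 0; i < 256; i++) {
            uint64_t t = time_access(&probe[i * 4096]);
            if (t < best_time) {
                best_time = t;
                best_guess = i;
            }
        }
        printf("recovered value: %d (expected %d)\n", best_guess, secret);
        return 0;
    }

The sketch only shows the covert channel; the hard part of the real attacks, and the target of the software mitigations, lies in coaxing the processor into speculatively touching data it should never have been able to read.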

Spectre logo, free to use under CC0 (see https://meltdownattack.com/ for details).

Sounds good, right? Some quick software updates buy time for companies around the world to begin transitioning to new hardware: no real harm done. Except, of course, these vulnerabilities weren't officially acknowledged until January 3, 2018. For over five months, information about the vulnerabilities remained under embargo, preventing all but a select number of privileged organizations (who were informed under the terms of an NDA) from acting.

Ethically, I find this behavior very problematic. I can certainly understand the rationale for delaying public disclosure for a limited period of time: Intel, the company most affected by these vulnerabilities, needed time to verify that the problems existed, assess the potential for harm by malicious actors, and develop fixes. But waiting five months to go public? I feel that both the company and the computer security researcher behind the discovery failed to properly consider the ethical implications, specifically the potential for abuse by nefarious individuals. Even partial disclosure would have allowed affected individuals to assess their exposure and develop appropriate countermeasures and precautions.

Heartbleed Logo, free to use under CC0 (see http://heartbleed.com/ for details).

Case 2: Heartbleed

Contrast this with another major security flaw, this time in software, dubbed Heartbleed. This flaw in the popular OpenSSL cryptography library, discovered by two independent researchers on April 1 and 3, 2014, allowed attackers to coax millions of servers around the world into leaking chunks of their working memory (up to 64 KB per request), potentially including passwords and private keys, simply by sending malformed heartbeat requests. Subsequent investigation revealed that the flaw originated in December 2011 as a simple programming error (a missing bounds check), easily patched in code. Not so easy, however, was updating the millions of affected machines: operators needed to install the patched library on each one, a non-trivial undertaking.
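Because the bug itself is so simple, a tiny sketch helps show why the fix was so easy. The following C program is a simplified, hypothetical illustration of the pattern behind Heartbleed, not the actual OpenSSL code: a request carries both a payload and a client-supplied length, and the buggy handler echoes back however many bytes the client claims. The names heartbeat_buggy, heartbeat_fixed, and server_memory are placeholders of my own invention.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Simulated server memory: the client's payload sits right next to data
     * that was never meant to leave the server. */
    static uint8_t server_memory[64] = "hi\0SECRET-PRIVATE-KEY-MATERIAL";

    /* Buggy echo: trusts the client-supplied claimed_len instead of the
     * number of bytes actually received. */
    static void heartbeat_buggy(const uint8_t *payload, uint16_t claimed_len,
                                uint8_t *response)
    {
        memcpy(response, payload, claimed_len);  /* may copy far past the payload */
    }

    /* Patched echo: discard any request whose claimed length exceeds what was
     * actually received, which is the essence of the one-line fix. */
    static int heartbeat_fixed(const uint8_t *payload, size_t actual_len,
                               uint16_t claimed_len, uint8_t *response)
    {
        if (claimed_len > actual_len)
            return -1;                           /* silently drop the request */
        memcpy(response, payload, claimed_len);
        return 0;
    }

    int main(void)
    {
        uint8_t response[64] = {0};

        /* The client really sent 2 bytes ("hi") but claims 40; the buggy
         * server echoes 40 bytes, leaking the adjacent "secret". */
        heartbeat_buggy(server_memory, 40, response);
        printf("buggy server leaked: %s\n", (const char *)(response + 3));

        /* The fixed server refuses the same malformed request. */
        memset(response, 0, sizeof(response));
        if (heartbeat_fixed(server_memory, 2, 40, response) != 0)
            printf("fixed server dropped the malformed request\n");
        return 0;
    }

In the real library the same mistake lived in the TLS heartbeat handler, and the official patch added essentially this kind of bounds check before echoing the payload back.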

Unlike with Spectre and Meltdown, knowledge of the Heartbleed bug was kept secret for only about a week, during which a patch was developed and readied for release. On April 7, 2014, the details of the vulnerability were revealed to the greater computing community, and efforts began almost immediately to update affected systems.

I personally feel that these researchers and developers took a much more ethically sound approach to their disclosure than the folks involved with Spectre/Meltdown did. They too delayed public disclosure to allow time for verification, assessment of potential harm, and development of a fix — and arguably, Heartbleed’s status as a software flaw rather than a hardware flaw reduces the time necessary for this process. Still, I can’t help but feel that the researchers responsible for uncovering Heartbleed better understood the ethical problems of delaying disclosure for longer than necessary.

Takeaways: Ethics are Hard

Never having discovered a major security flaw myself, or had to decide how to disclose one publicly, I admit I'm not in much of a position to critique the aforementioned researchers for their decisions. Nevertheless, I feel that computer engineering researchers need to understand the vast scope of their research. We live in an era of arguably unprecedented technological growth, due in no small part to the amazing power of silicon-based computing. More precisely, what enables this growth isn't computers themselves; it's their ability to digest, process, and even create information. One thousand years ago, an idea would take decades to travel even a few hundred miles; one hundred years ago, it might take a few days; today, that same idea can reach nearly every living human being in a matter of minutes, if not sooner.

This unprecedented ability to exchange information means that public disclosure of a vulnerability reaches both those who are exposed and those who would exploit it almost instantaneously. We must strive to balance our responsibilities to our clients and to the general public, above all seeking to do as little harm to both groups as possible. Sometimes this means delaying public disclosure. And sometimes it means forcing that disclosure, if for no other reason than to alert the public to the danger.

