How to be a “good” hacker – the white hat burden in practice

Cyber security + Cybercrime · December 12, 2023


Ethical hacking is now a profession in its own right, with specific rules and practical difficulties that outsiders are largely unaware of.

Thirty years ago, when I first got into hacking, there were hackers and there were crackers. Everyone in the scene knew that the hacker was “good” and the cracker was “bad”. True, there was very little legislation defining what was and wasn’t allowed. However, the Hacker Ethic (https://en.wikipedia.org/wiki/Hacker_ethic) clearly laid out the principles that kept young (and not-so-young) hackers on the bright side. In fact, going over to the dark side was hardly worth it: the Central and Eastern European countries of the early 1990s were still very far from being computerised, and the joy of discovery mattered far more than any material gain.

Of course, as the world changed, so did the hackers. In the 2000s the term cracker disappeared, replaced by black, grey and white hats, and in 2001 the Budapest Convention on Cybercrime (https://www.coe.int/en/web/cybercrime/the-budapest-convention) was adopted, foreshadowing that certain activities would be punished even if they were really fun to do. In Hungary, for example, the Hacktivity conference set the stage for the hacker scene, together with the police and the first ethical hacking companies, to work out how to find bugs in software and systems legally. Of course, that did not stop anyone from constantly worrying during contracted penetration tests that, if something went wrong, the customer would report them to the police.

As time went on, the rules naturally took firmer shape, and penetration testing became very fashionable. As one colleague put it in an interview,

“if I throw a brick, I’m sure I’ll hit an ethical hacker”.

But precise rules do not mean that practice is free of friction. Just a few examples! If we find a previously unknown vulnerability, what do we do with it? Do we report it to the vendor, who may lack the capacity to professionally triage and fix what we report? Do we write to the public authority, which may have no legal obligation to pass the report on to the developer? Do we tell the press, which is very effective but has little place in ethical behaviour? Do we sell it for good money to a grey-zone company that buys up vulnerabilities and passes them on to a state secret service? Or, God forbid, sell it on the darknet? Whichever route we choose, it is far from certain that the bug we found will be fixed in the short term.
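To make these routes concrete, here is a minimal, purely illustrative Python sketch of how a researcher might pick among the legal disclosure options discussed above. Every name in it (Finding, DisclosureRoute, choose_route) is hypothetical, and the ordering of the checks is one defensible policy, not a standard.

```python
# Illustrative sketch only: the legal disclosure routes from the paragraph
# above, modelled as a tiny decision helper. None of these types exist in
# any real CVD tooling; they are hypothetical names for this example.
from dataclasses import dataclass
from enum import Enum, auto


class DisclosureRoute(Enum):
    BUG_BOUNTY = auto()      # an official bug bounty programme or platform
    VENDOR = auto()          # coordinated disclosure straight to the vendor
    NATIONAL_CSIRT = auto()  # coordination via a national CSIRT (cf. NIS2 Art. 12)
    # Deliberately no route for grey-zone brokers, the press, or the darknet.


@dataclass
class Finding:
    product: str
    vendor_has_security_contact: bool  # does the vendor publish a security contact?
    in_bug_bounty_scope: bool          # is the product covered by a bounty programme?


def choose_route(finding: Finding) -> DisclosureRoute:
    """Pick the most direct *legal* disclosure route for a finding."""
    if finding.in_bug_bounty_scope:
        return DisclosureRoute.BUG_BOUNTY
    if finding.vendor_has_security_contact:
        return DisclosureRoute.VENDOR
    # Fall back to the national CSIRT, which coordinates on the researcher's behalf.
    return DisclosureRoute.NATIONAL_CSIRT


print(choose_route(Finding("ExampleCMS", vendor_has_security_contact=False,
                           in_bug_bounty_scope=False)))
# -> DisclosureRoute.NATIONAL_CSIRT
```

The point of the sketch is what is missing: the decision tree only branches between coordinated, legal channels, which is exactly the discipline the paragraph above describes.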

So, despite good solutions such as bug bounty programmes run by individual companies or by the platforms that aggregate them, and the growing number of direct contracts between companies and ethical hacking firms, there are simply far more bugs in systems than purely legal frameworks can cover. It is no coincidence that the legal and ethical management of vulnerabilities is now addressed by a number of cybersecurity initiatives and pieces of legislation.

For example, the Paris Call for Trust and Security in Cyberspace, a French government initiative to create a peaceful cyberspace, sets out this objective in principle 5 (Non-proliferation): “Develop ways to prevent the proliferation of malicious software and practices intended to cause harm” (https://pariscall.international/en/principles). In its statement of principle, it clearly suggests that detected vulnerabilities should be handled within a legal framework, involving trusted state actors. This is reinforced by recitals (60) to (63) and Article 12 of the NIS2 Directive, which clearly assign the coordinated disclosure of detected vulnerabilities to the national CSIRTs (https://eur-lex.europa.eu/eli/dir/2022/2555).

Of course, this is only Europe, and it is by no means certain that the public authority’s mission set out in NIS2 will be fulfilled in the short term. This is why industry initiatives such as the Good Faith Cybersecurity Researchers Coalition (GFCRC) (https://gfcrc.org) are important. The coalition has four goals:

  • Encourage, support, and educate good faith cybersecurity researchers;
  • Advocate for consistent, reasonable, principles-based public policy on vulnerability treatment;
  • Mobilise and coordinate stakeholders to work together towards policy reform;
  • Develop, collect, and share good practices and resources for better, safer coordinated vulnerability disclosure (CVD).

The coalition, founded in France, counts White Hat IT Security’s CEO, Sándor Fehér, among its board members. It will support the practical implementation of European legislation along these lines and back similar efforts in non-EU member states. The coalition is open to public, private and academic stakeholders, and private individuals are also welcome to join. Voluntary contributions can be made at https://gfcrc.org/membership/.

Written by: Csaba Krasznay
