There are good people out there on the internet—people who actually know what they are doing, understand how vulnerable some software and application packages can be, and would rather help us than steal our data. In a world of malware, spyware and 12-year-old hackers, this idea is some comfort. But if the so-called “White Hats” are going to put a dent in cyber crime, the software companies need to listen to them. Do these companies actually care about finding exploits in their software, or is it just lip service?
I ask, because the people who make our software packages or install and manage applications have some very strange practices.
There was a recent episode, reported in the mainstream media, where a young boy found a major exploit in a government department’s website. The security hole was so severe that he was able to extract a database of usernames and passwords. He documented it, and then reported it to the organisation.
He did not contact them anonymously; he used his real name and address. Shortly after making the report, he had the federal police on his doorstep, and he was arrested.
What is wrong with this scenario? Ninety-nine percent of cyber professionals are honest, law-abiding citizens of the country where they live; it is the dishonest one percent that has tarred the whole industry. Come to that, what is wrong with this government department? The person who found the problem and did not exploit it should be praised; the person who set up the system incorrectly in the first place should lose their job, or even be the one charged.
This “shoot the messenger” mentality is typical of large organisations and government departments. No one is willing to take responsibility, and that leads to the release of substandard apps and operating systems. The lure of making lots of money with an app, combined with the incredibly unrealistic deadlines applied to software development, is the primary reason why we need these “White Hats.”
I have seen it happen first-hand: a company builds a high-end system, and someone forgets to follow best practice. Often someone else finds the problem, either by going through the proper procedures (that’s good), or by accident (that’s bad). Either way, the double- and triple-checks built into the process are what allow us to put our bank account numbers online and still sleep soundly at night.
There’s an alternative approach to this problem, one that doesn’t involve having the people who report problems arrested. Some software companies have begun offering bug bounties: modest sums of money for finding exploitable code in their applications. Google recently paid out over $30,000 for one such exploit. Just $30K, for an exploit that could ruin the company’s reputation, destroy users’ trust and perhaps endanger its users and customers. That’s not very much. What would have happened if that Google exploit had been discovered by a less honest person who, instead of reporting it to Google, simply posted it in a criminal chatroom? I think Google would have lost millions.
Maybe large organisations should look at these bug-hunting bounties as an insurance policy. Create an environment where the “White Hats” can report problems—with proper documentation and the steps needed to reproduce the exploit—and get paid accordingly. That could make it a lot easier, and more profitable, for the good guys to help us.