Why the sharing of cyber security information will not work!

We have all heard it by now: the USA wants to create and enforce participation in a cyber-security and cybercrime information centre.  But the nature of business will make this very difficult to accomplish.  By forcing businesses to divulge information about crimes committed against them, such a program would clash with the principles they adhere to in order to appear strong and a safe place to invest money.  No business wants to be put in such a vulnerable position.

Anything we do to make business, government and people safer on the internet is a plus.  Unfortunately, having a bureaucratic reporting system is not one of those things.  Mandatory reporting after the fact won’t strike at the root of the problem—what will really protect us is education.  Most users of technology and the internet have thrown their caution and cynicism out the window.  Everything is taken at face value.

The rise of boutique malware and surgically targeted spam through spear phishing has made the education of all users a crucial component of business and personal security.

To see why the bureaucratic solution won’t work, let’s take a hypothetical.  Suppose I run a mining company, and I report an incident to the bureaucratic group: what was attacked, what got in, how it got in, what we did about the attack, and how we mitigated the effects.  Even if the information were anonymised, others would be able to deduce the target and, using other information from the internet, confirm who was attacked and when.

In addition to your peers and colleagues knowing that you were the target of an attack, the bad guys also have a place to get vital intel.  They will be able to see what worked, what didn’t work and how effective the attack was, and then deduce that they can use the same method to target someone else in the same industry.

And that’s not all.  Consider the logistics of implementing this idea.  Not only would some level of management be needed within the government organisation, but reporting and implementation would be a nightmare.

Large businesses and organisations would need to budget considerable amounts of time for reporting and documenting each incident.  To achieve this, they would need to reshuffle their staff.  In large organisations there are hundreds of security breaches.  Depending on the level of detail required in the reports, this could cost millions of dollars.  Is this meddling really worth it?

An alternative solution would be a self-regulated reporting system.  Within this structure would also be a compendium of best practices and preventative ideas that can be used to protect business infrastructure.  This system could be a repository of information on software, hardware, infrastructure components, even the internet of things.

The main features of this system would be that it is transparent and unbiased, almost like a cybercrime and cyber-security wiki.  If bad code is written or can be exploited, then it goes on this site.  If Android apps are being exploited, then that should be on this site too.  A flow-on effect is that software writers would be shamed into fixing their products and making sure they are secure.  That isn’t happening yet, but there is always a chance.

How do we get everyone involved?  This is the crucial part, and ultimately the hardest part.  Getting buy-in from a huge number of people, from government and business management, to software authors and programmers, to ICT Departments and to the general public—that would be an endeavour in itself.

It would also be a time-consuming and fruitless process unless we can persuade all the parties involved that there’s something in it for them.

This solution won’t work if only a few businesses or programmers take part: it has to be one in, all in—or nothing.  And the only way to get full buy-in is to sell the benefits—tangible benefits that everyone agrees will also protect them.  “What’s in it for me?” might seem like a cynical question, but it’s one that always has to be answered eventually.

One final word—such a reporting system has to be voluntary, not compulsory.  Make it compulsory, and the information will be slow, the management bureaucratic, and the results non-existent.  Whereas if it is voluntary, everyone who participates will understand why it’s needed.  They’ll put in the effort to make it work.

Roger Smith is an educator, teaching students at ADFA (UNSW) and showing them how vulnerable they are to cybercrime.

He is also CEO at R & I ICT Consulting Services Pty Ltd, an Amazon #1 author on cybercrime and founder of the SME Security Framework.  He is a consultant who specialises in inexpensive and highly effective security strategies for small and medium businesses and not-for-profit organisations.

He has developed and authored the SME Security Framework and the Security Policy Training Course, which are considered to be the definitive guides to helping SMEs protect their organisations using the principles of Technology, Management, Adaptability and Compliance.