Guess what? That’s right, the only copy was on the flash drive.
Guess what else? Yep, the coursework was due for submission two days later.
Cue much panic and a frantic search of every conceivable place the flash drive could have gone, and, once that was exhausted, two very late nights for my boy to recreate the work he’d lost.
And guess what else? The day after the coursework was due, we found the flash drive in the rubber seal of the washing machine drum – and incredibly it still worked! Despite this lucky escape, we’re now doubly conscious of the risks of using USB flash drives and are definitely changing the way we use them.
This whole 'data loss' episode reminded me of the extensive press coverage over the last 18 months or so of incidents just like this, most of them far more serious than some lost exam coursework. That coverage has given businesses a widespread sense that they must act to improve their information security policies and systems.
Once businesses decide to act, the vast majority look at beefing up their encryption and data transfer technologies. Often this is the right response, but it can be pretty expensive, and some businesses decide the risk of inaction is worth taking: leave the information security processes as they are and wait and see what happens.
What if you could help mitigate your information risks another way? That is, without paying a sizable ransom to the various security vendors out there?
One of the biggest areas of information security risk lies in developing data handling systems. I think it’s fair to say that most businesses, despite what their formal policy says, get hold of live production data – including real customer details – and copy it into their test or development environments.
Why do they do this? Simple: real data helps developers produce working code, because they can be confident that the code they write handles real data – and likewise for testers in a test environment.
So if your developers and testers are using real customer data, you need production-strength security protocols around those systems. This is where the information security vendors start to rub their hands with glee. On top of this, your dev and test platforms become much more rigid and inflexible, making changes more difficult, more expensive and slower to deliver.
The conventional alternative to using real live data is to generate data to match the ‘planned for’ test conditions. The problem with this approach is that data in production systems is alive: it really does change constantly. Based purely on an analyst’s expectations of the data, it’s nearly impossible to predict the conditions the code will see when released into live service. This often leads to a high incidence of production faults.
Sounds like you’re caught between a rock and a hard place: using real data means building in expensive and restrictive security protocols, but creating your own data leads to more production faults.
There is a third way. It is possible to create realistic test data that truly mimics live conditions and maintains the utility of live customer data, but contains no personally identifiable information. That includes preserving the system’s data quality errors, the demographic profile of an area, transactional profiles and so on.
How?
By taking an extract of real data, analysing it securely, then randomising it within agreed parameters using locally defined reference material, it is possible to preserve all of these characteristics. If your business is smart enough, it will be doing this now.
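To make that concrete, here’s a minimal sketch of the idea in Python. Everything in it – the field names, the file names, the ±10% band and the little reference lists – is an assumption for illustration; a real implementation would use your own agreed parameters and locally defined reference material.

```python
import csv
import random

# Locally defined reference material. These short lists are purely
# illustrative -- in practice you would use your own reference data.
FIRST_NAMES = ["Alice", "Ben", "Chloe", "Dev", "Emma", "Farid"]
LAST_NAMES = ["Smith", "Patel", "Jones", "O'Brien", "Khan", "Taylor"]

def mask_record(row, rng):
    """Randomise identifying fields within agreed parameters while
    keeping the record's statistical shape."""
    masked = dict(row)

    # Deliberately preserve data quality errors: a blank field in
    # production stays blank in test, so the code under test meets
    # the same messy conditions it will face in live service.
    if row["first_name"].strip():
        masked["first_name"] = rng.choice(FIRST_NAMES)
    if row["last_name"].strip():
        masked["last_name"] = rng.choice(LAST_NAMES)

    # Keep only the outward postcode (e.g. "SW1A" from "SW1A 1AA"),
    # maintaining the demographic profile of an area without
    # pinpointing a household.
    if row["postcode"].strip():
        masked["postcode"] = row["postcode"].split()[0]

    # Perturb the balance within an agreed +/-10% band so the
    # transactional profile survives but no real figure leaks.
    if row["balance"].strip():
        try:
            balance = float(row["balance"])
            masked["balance"] = f"{balance * rng.uniform(0.9, 1.1):.2f}"
        except ValueError:
            pass  # another data quality error worth keeping as-is

    return masked

def mask_extract(in_path, out_path, seed=42):
    # A seeded generator makes the masking repeatable, so the same
    # extract always yields the same test data set.
    rng = random.Random(seed)
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow(mask_record(row, rng))

if __name__ == "__main__":
    mask_extract("production_extract.csv", "test_data.csv")
```

Note the design choice: blanks and malformed values pass through untouched, so the test data stays as realistically messy as production, while every name, address and figure that comes out the other side is fake.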
My question is: if your business is not doing this now, what really is your attitude to risk?